The Lens - A Journal of TIGS Science


Volume 1 2021



Cover image by Claire Spicknall Year 8 - Top 10 finalist in ANSTO’s Incredible Insects Competition 2021. This image is of the scales on a moth’s wing and was taken using a scanning electron microscope. Claire painstakingly digitally coloured the image to create this stunning example of what can happen when art meets science. Cover design by Saskia Belanszky Year 9


We acknowledge the Dharawal and Wodie Wodie people who are the traditional custodians of the land on which The Illawarra Grammar School stands. We recognise them and all First Nations people as the first scientists on this land, who stood, like us, and made observations about the world around them. We hope to learn from their lessons of the land, sea and sky, to help care for country, as they have done for over 65,000 years. We pay respect to the Elders past, present and emerging of Dharawal and Wodie Wodie land and extend that respect to all First Nations people.



Mission Statement
The achievement of Academic excellence in a Caring environment that is founded on Christian belief and behaviour, so that students are equipped to act with wisdom, compassion and justice as faithful stewards of our world. “De virtute in virtutem” – From Strength to Strength From Psalm 84:7



It is with great pleasure that we present this first volume of ‘The Lens’, a selection of works from our Science students. Staff and students work collaboratively every year to produce meaningful Science, and we are proud that we can communicate that work in this way. In 2021, we have also been fortunate enough to engage with experts in the wider community, namely Mr James Hegarty (Australian Steel Mill Services), Dr Vipul Agarwal (UNSW) and Dr Emanuela Brusadelli (UoW). We acknowledge and thank them for the time and knowledge they shared with our students. In Science at The Illawarra Grammar School, we aim to provide the foundational knowledge and skills for those who will become the biologists, chemists, physicists, ecologists, engineers, and technicians of the future. Society faces many challenges today, and given the quality of work demonstrated here, we are confident that our students will play a key part in tackling these challenges. We are sure you will enjoy this collection of works as much as we have enjoyed producing it! Regards from all of us in the Science Faculty, Kerri Baird Brenden Parsons Dianne Paton Fiona Neal John Gollan Jane Golding


Contents
THE EFFECT OF SALT ON THE TIME IT TAKES ICE TO MELT ........ 7
CRISPR ........ 9
SuperHero Element Poster ........ 13
SuperHero Element Poster ........ 14
SuperHero Element Poster ........ 15
NEST SUCCESS – DOES BUILDING HIGHER REDUCE PREDATION RATE? ........ 16
DOES BUILDING YOUR NEST ON A TREE BRANCH IMPROVE BREEDING SUCCESS? ........ 19
MOZART AND MEMORY - The Effect of Mozart on Short-Term Memory ........ 25
BREATHE IN, BREATHE OUT! ........ 28
“YOU’VE HAD A MAN LOOK!” Is this a real thing? ........ 33
THEORETICAL YIELD vs EXPERIMENTAL YIELD IN A COMBUSTION REACTION ........ 35
FACTORS AFFECTING BACTERIA GROWTH - TEMPERATURE ........ 38
ELECTRICAL CONDUCTIVITY THROUGH NEUTRALISATION OF A BASIC SOLUTION ........ 41
THE EFFECT OF FONT STYLE ON MEMORY ........ 47
THE EFFECT OF BORAX ON THE ELASTICITY OF SLIME ........ 49
EFFECT OF ANTIBIOTICS ON THE GROWTH OF Staphylococcus epidermidis ........ 54
THEORETICAL YIELD vs EXPERIMENTAL YIELD ........ 58
THE EFFECT OF AUDITORY AND VISUAL STIMULI ON REACTION TIME ........ 62
VARYING THE ANGLE OF ATTACK OF A VEHICLE’S SPOILER AND ITS EFFECT ON AERODYNAMIC EFFICIENCY ........ 65



THE EFFECT OF SALT ON THE TIME IT TAKES ICE TO MELT Isabella Carswell (Year 7)

Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Results
[Figure: plot of the amount of salt added versus the average time for the ice to melt.]

Discussion
This investigation, which aimed to determine the effect of different amounts of salt (sodium chloride) on ice, found that the more salt placed on an ice cube, the faster it melted; with less salt, the ice melted more slowly. A plot of the two variables (amount of salt and average melting time) showed a clear negative relationship, and a line of best fit suggested a linear trend: as the amount of salt placed on the ice increased, the time taken to melt decreased in a roughly straight line. The results therefore supported the original hypothesis that the ice cube with the most salt (sodium chloride) added would melt fastest.

People who live in snowy conditions, such as in parts of the Northern Hemisphere, use salt to melt ice on roads so that the roads are safer to drive on, but why does this work? Salt (sodium chloride) lowers the freezing point of water, so water with salt in it can no longer freeze at 0 degrees Celsius. This is because the salt particles interfere with the water (H2O) molecules and obstruct them from bonding together to form ice. When table salt (sodium chloride) dissolves in water, the sodium and chloride ions separate into small particles that move in between the water molecules. The reason that adding more salt (sodium chloride) to an ice cube changed the time it took to melt in this experiment is that the more salt in or on the water/ice, the lower the freezing point becomes; the cube with 1 tsp of salt melted fastest because its freezing point was furthest below zero degrees, the usual freezing point of water.
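As a rough guide to the size of this effect, the sketch below estimates the freezing-point depression using the standard relation ΔT = i × Kf × m. It is not part of the original experiment: the mass of a teaspoon of salt (about 6 g) and the amount of melt water it dissolves into are illustrative assumptions, not measurements.

```python
# A minimal sketch (not from the original report) estimating freezing-point
# depression for table salt dissolved in melt water, using dT = i * Kf * m.
# The teaspoon mass and the amount of melt water are illustrative guesses.

KF_WATER = 1.86          # cryoscopic constant of water, degC.kg/mol
VANT_HOFF_NACL = 2       # NaCl dissociates into two ions (Na+ and Cl-)
MOLAR_MASS_NACL = 58.44  # g/mol

def freezing_point_depression(salt_grams: float, water_kg: float) -> float:
    """Return the drop in freezing point (degC) for salt dissolved in water."""
    moles_salt = salt_grams / MOLAR_MASS_NACL
    molality = moles_salt / water_kg           # mol of solute per kg of solvent
    return VANT_HOFF_NACL * KF_WATER * molality

if __name__ == "__main__":
    # Assume ~6 g of salt per teaspoon dissolving into ~0.1 kg of melt water.
    for teaspoons in (0.5, 1.0, 2.0):
        drop = freezing_point_depression(6.0 * teaspoons, 0.1)
        print(f"{teaspoons} tsp salt: freezing point lowered by ~{drop:.1f} degC")
```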


The results in this experiment are considered reliable because they were consistent when the test was repeated under the same conditions (e.g., 46 min, 46 min and 47 min for 1 tsp of salt on top) and are consistent with other results collected under the same conditions (Mrs Baird’s results). Even so, the melting could have been monitored more closely and timed to the exact second, or even millisecond, rather than being rounded to the nearest minute. The results were not considered valid because some variables were not controlled, such as where the salt landed in the test container (some of the salt may have fallen off the ice cube onto the container surface). The size of the ice cubes was also not controlled; they were of a similar size but were not measured for consistency before testing. When the salt (sodium chloride) was poured, some of it bounced off the ice into the surrounding tray, making the measured amount slightly off. One of the ice cubes for the 1/2 tsp of salt was broken, which affected the time it took to melt as it was smaller than the others. As mentioned, the ice cubes were not exactly the same size, only roughly similar, and this size difference would affect the time they took to melt. The accuracy of this experiment could be improved by making sure the moment the ice had completely melted was identified correctly; it was difficult to tell whether a little ice remained or whether it had melted entirely, as bubbles in the water were deceiving, so observation errors were difficult to avoid. Several things could be done to improve the method. One would be to watch the ice cubes continuously until they melted, or to record a video of the ice melting so the exact moment of melting could be identified; this would improve the accuracy of the experiment. Another improvement would be to carefully measure the water in the ice cube tray so that the ice cubes were a consistent size before freezing; this would increase the validity of the experiment because the ice cubes would all be the same size. These improvements would create a far better designed experiment.



CRISPR Claire Spicknall (Year 8)
Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500
Introduction
Clustered regularly interspaced short palindromic repeats (CRISPR) were first described in 1987, and CRISPR technology became widely used from 2012. CRISPR allows researchers to easily alter DNA sequences and modify gene function. The CRISPR-associated protein Cas9 is an enzyme that “acts like a pair of molecular scissors, capable of cutting strands of DNA” (Live Science, 2018). Essentially, this means that CRISPR technology allows scientists to edit and modify the DNA inside human cells. This report will focus on how CRISPR has been used to edit cells within the body to better locate and kill cancerous cells, and on the advantages and disadvantages of using CRISPR technology.
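To make the “molecular scissors” idea concrete, the sketch below shows, in simplified form, how a Cas9 target site can be located in a DNA sequence: the guide sequence must match about 20 bases immediately followed by an “NGG” PAM, and Cas9 cuts roughly 3 bases upstream of the PAM. This is an illustration only, not part of the study described in this report, and the DNA and guide sequences are made up.

```python
# An illustrative sketch of locating a CRISPR-Cas9 target site in a DNA string.
# The guide must match ~20 bases immediately followed by an "NGG" PAM, and
# Cas9 cuts roughly 3 bases upstream of the PAM. Sequences are hypothetical.

def find_cas9_cut_sites(dna: str, guide: str) -> list[int]:
    """Return indices (0-based) where Cas9 would cut dna for this guide."""
    dna, guide = dna.upper(), guide.upper()
    cuts = []
    for i in range(len(dna) - len(guide) - 2):
        target = dna[i:i + len(guide)]
        pam = dna[i + len(guide):i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":   # "NGG" PAM: any base then GG
            cuts.append(i + len(guide) - 3)        # cut ~3 bp upstream of PAM
    return cuts

if __name__ == "__main__":
    dna = "TTACGGATCTAGCTAGCTAGGACTTGTGGCAT"   # hypothetical sequence
    guide = "ATCTAGCTAGCTAGGACTTG"             # hypothetical 20-nt guide
    print(find_cas9_cut_sites(dna, guide))      # -> [23]
```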

In a healthy body, cells grow, die, and are replaced in a controlled way through the process of cell division. The genetic material of cells can be damaged by environmental or internal factors, which can cause mutated cells to grow uncontrollably; this results in a mass of cancer cells, or a tumour, as shown in Figure 1. If this is not managed it can lead to metastatic cancer, which occurs when cancer cells separate from the original tumour, penetrate the circulatory or lymphatic system, and spread through the bloodstream or lymph vessels. The cancer cells then settle in a new part of the body and form a new tumour or cancer cluster, typically in nearby organs or lymph nodes.

Metastatic Cancer
CRISPR is a technology used to edit genes involved in disease; this report will focus on CRISPR being used to treat metastatic cancer (stage IV cancer).

Figure 1 shows the abnormal growth of cancer cells and how tumours form (Cancer Council, N.D).



Affected parts of the body
Cancer can be present and spread almost anywhere in the body, but is most frequently found in the lungs, liver, or bones. When cancer cells spread, they do so by penetrating the circulatory and lymphatic systems and flowing through the bloodstream or lymph vessels, where they then collect in a new part of the body and can form another mass of cancer, or a tumour; this process is shown in Figure 2.

Figure 2 shows the spread of cancer cells and how they penetrate the blood and lymphatic systems (Cancer Council, N.D).

An example of how CRISPR can be used to treat cancer
Although CRISPR is still a new medical technology, it has already been used in numerous experiments to modify DNA within cells. Most experiments in fighting cancer have involved modifying cells within the bloodstream. An example is the study done by the University of Pennsylvania in 2019, in which CRISPR was used to modify T cells to better locate and kill cancer cells carrying the NY-ESO-1 molecule found in most cancer cells. The experiment started by extracting blood from the patients to reach the T cells, which are white blood cells that have the potential to kill cancer as well as protect the body from infection. The scientists then used CRISPR to add a receptor protein that acts like a claw and searches for the NY-ESO-1 molecules found in cancer cells; once the receptor finds these molecules, it binds to the cell and kills it. CRISPR was also used to remove three genes that limit or obstruct the T cells’ ability to kill the cells carrying the targeted NY-ESO-1 molecules. Once CRISPR had fully modified the T cells, they were multiplied in the lab and infused back into the patients. The experiment is summarised in Figure 3. The results of this experiment supported CRISPR’s promise, as there was no evidence to suggest it is unsafe to use on patients, and for two thirds of patients the T cells slowed or stopped tumour growth. With more experiments in the future, scientists believe that CRISPR could become a successful and commonly used treatment to fight cancer.



Figure 3 shows how CRISPR is used to edit the T cells and how these T cells kill cancer cells (National Cancer Institute, 2020).

Social advantages
An important advantage that CRISPR has over other cancer treatments, like chemotherapy, is that it does not have any known side effects, which decreases the physical and mental toll that cancer treatment takes on the patient. This social perspective is important because it means that patients do not have to endure the numerous side effects (see Figure 4) that chemotherapy can inflict on their body, or the psychological damage these can cause. Chemotherapy causes numerous side effects because the drugs not only kill the cancer cells but may also damage or kill healthy body cells. This is where CRISPR has the advantage: it does not harm body cells but instead enhances them to detect and kill cancer more efficiently and effectively, which means no side effects and a better experience of cancer treatment for the patient.

Ethical disadvantages of using CRISPR
A significant ethical disadvantage of using CRISPR to edit genes within a cell is that it may in future promote ableism (discrimination in favour of able-bodied people). As CRISPR technology advances, so does the human desire to perfect genes and eliminate genetic diseases like cancer, or disabilities. This raises ethical considerations, such as whether this change should even be made by humans in the first place and whether scientists have the right to rewrite the future and change the eggs, sperm, or embryos of the next generation. Many say that the whole idea of perfecting human genes can be considered ableist, as it promotes the idea that people with supposedly “bad genes” hold a lesser place in society and should be corrected.


Conclusion
In conclusion, CRISPR has shown great potential to become a common treatment for cancer, given more experiments and a better understanding of cancer cells and of the cells in the body that are able to fight them. There are great social benefits to using CRISPR technology to treat cancer compared to other treatments like chemotherapy, but as with any advancing technology there will always be weaknesses, including the possibility that it may be used in ableist ways to correct “bad genes”.

Figure 4 shows the side effects of chemotherapy (Centre for Clinical Haematology, N.D)

References
How CRISPR Is Changing Cancer Research and Treatment 2020, National Cancer Institute, viewed 31 August 2021, <https://www.cancer.gov/news-events/cancer-currents-blog/2020/crispr-cancer-research-treatment>.
Side Effects of Chemotherapy 2020, Centre for Clinical Haematology, viewed 2 September 2021, <https://cfch.com.sg/chemotherapy-side-effects/>.
Side Effects of Chemotherapy 2021, Cancer.net, viewed 1 September 2021.
What is cancer n.d., Cancer Council, viewed 29 August 2021, <https://www.cancer.org.au/cancer-information/what-is-cancer>.
What is CRISPR? 2018, Live Science, viewed 27 August 2021.



SuperHero Element Poster Annie Sheargold (Year 8) Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500



SuperHero Element Poster Brooke Baird (Year 8) Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500



SuperHero Element Poster Ashley Brewer (Year 8) Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500



NEST SUCCESS – DOES BUILDING HIGHER REDUCE PREDATION RATE? Mary Ledger, Leyla Yusuf and Joel Turner (Year 9)
Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500
Abstract
In this study, artificial birds’ nests and eggs were created using coir and clay and were placed at different heights to test the hypothesis that nests placed in elevated positions would experience similar levels of predation to those at lower levels. While improvements could be made to increase the reliability of the experiment, it was found that nests placed in higher, elevated positions experienced a higher level of predation.

Introduction
Predation on nests has been identified as the primary cause of breeding failure in open- and cavity-nesting birds (Matessi & Bogliani 1999). Several studies have used artificial bird nests to investigate the effect of nest site elevation on predation. Although the effect of nest site elevation has been contested, in 1993 T.E. Martin evaluated the findings of several studies and stated that when nest predation on both types of nests is assessed in the same plot, ground nests are less preyed upon than elevated nests. In this investigation, artificial nests were made and placed in elevated and ground-level sites to monitor predation activity. It was hypothesised that the artificial nests placed in elevated positions would experience similar levels of predation to those at ground level.

Method

Using half a tennis ball, each student shaped a bird’s nest out of coir (coconut fibre) and glued it to the hemispherical shape. Two artificial eggs were made for each nest from small balls of clay and placed inside the nests, and the nests were taken outside into the natural environment. Half of the nests were placed on the ground, and the other half were placed in trees, to determine whether this affected whether or not the nests would be attacked. Every couple of days, each nest was checked for signs of predation; if there were any, the nest was removed and taken back to the classroom for further inspection. The number of attacked nests for each group was counted and recorded, and the predators responsible were identified by analysing the scratches and marks left on the eggs.

Results
The number of nests that were attacked was greater than the number that were not attacked. Of the nests that were attacked, the majority had been placed at the greater height. Of the nests that were not attacked, each height had an equal number of undisturbed nests. From this, it appears that the higher a nest is built, the greater its chance of being attacked by a predator.


Figure 1: The number of nests attacked and not attacked at different heights.

Discussion
In this study, it was hypothesised that the artificial nests placed in elevated positions would experience similar levels of predation to those at ground level; however, this was not the case. Approximately 71% of the elevated nests were attacked, which was 21 percentage points higher than the ground-level nests. This correlates with research completed by T.E. Martin in his paper ‘Nest Predation Among Vegetation Layers and Habitat Types: Revising the Dogmas’, which concluded that when nest predation on both types of nests is assessed in the same plot, ground nests are less preyed upon than elevated nests. This investigation utilised artificial bird nests since that was the simplest approach to apply; nonetheless, the methodology has certain limitations. The lack of a female sitting on the eggs is the most significant limitation of artificial bird nests (Angelstam 1986). Furthermore, "predation rate is often higher on artificial nests than on natural nests and some important mammalian predators of natural nests are under-represented at artificial nests, compared with avian predators” (Major, Gowing & Kendal 1996:407). Both of these limitations must be considered in this investigation. Future studies in this area might help to increase nest placement specificity in terms of ecological location and height. A comparison of predation rates and predators on manufactured and natural nests in elevated and ground-level sites would be valuable if natural bird nests could also be observed concurrently.

References
Matessi, G & Bogliani, G 1999, ‘Effects of nest features and surrounding landscape on predation rates of artificial nests’, Bird Study, vol. 46, no. 2, pp. 184–194.
Martin, TE 1993, ‘Nest Predation Among Vegetation Layers and Habitat Types: Revising the Dogmas’, The American Naturalist, vol. 141, no. 6, pp. 897–913, viewed 5 December 2021.
Major, RE, Gowing, G & Kendal, CE 1996, ‘Nest predation in Australian urban environments and the role of the pied currawong, Strepera graculina’, Austral Ecology, vol. 21, no. 4, pp. 399–409, viewed 5 December 2021.
Atkin, N 2021, ‘The effects of forest edge and nest height on nest predation in a U.K. deciduous forest fragment’, Authorea, viewed 5 December 2021.
Piper, SD & Catterall, CP 2004, ‘Effects of edge type and nest height on predation of artificial nests within subtropical Australian eucalypt forests’, Forest Ecology and Management, vol. 203, no. 1–3, pp. 361–372, viewed 5 December 2021.
Student Scientific Report n.d., ‘Nest Predation in Open and Vegetated Areas’, vol. 1, no. 1.



DOES BUILDING YOUR NEST ON A TREE BRANCH IMPROVE BREEDING SUCCESS? Tania Kalsi, Linda Lyu and Alexa Stutchbury (Year 9)
Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500
Abstract
Nest predation by other animals is a biotic factor that can have a major impact on bird populations, and the study of nest predation can help scientists find out more about bird species as well as the predators of nests. Two clay eggs were placed in each nest made of a halved tennis ball, coconut fibre and sphagnum moss, and the nests were then placed either in trees or on the ground to test which location is at greater risk of being preyed upon. The nests were observed and checked for disturbances over the following nine days. 71% of the artificial nests located in trees were disturbed by a possible predator, compared with 50% of the nests located on the ground. The reason may be that nest predators have adapted to birds’ behaviour and tend to search for eggs in trees rather than on the ground. This data would be useful to scientists in noting where birds commonly build their nests when breeding, as well as how their predators have adapted for survival.
Introduction
Nest predation by other animals is a biotic factor that can have a major impact on bird populations. Nest predators include many different species of animals, such as other birds, rats, possums, cats and lizards, as eggs are a vulnerable and nutritious target. To protect their eggs from predation and increase breeding success, different birds have evolved different strategies, such as hiding the eggs in dense shrubs, building the nests on tall, open branches, or changing the colour of their eggs to avoid detection by predators, called “egg crypsis”. Nest predation is a key selective pressure on birds that has attracted increasing attention from ornithologists. The study of nest predation can help scientists understand a bird species’ behaviour, population ecology, evolution and conservation biology.

In this practical activity, we experimented to test whether the second strategy (building nests on tree branches) increases breeding success. We built bird nests, added plasticine eggs, positioned the nests outside, and checked the nests every lesson for signs of predation. Experiments like this have been conducted since the 1980s to find out more about bird species as well as the predators of nests. Angelstam (1986) discovered in his studies that other birds were the most common predators of nests. Later, in 1998, it was found that nest predation in forests affects nests built above the ground more than nests on the ground, suggesting that nests built in trees are at more risk than nests on the ground.

Hypothesis
Despite the research discussed in the introduction, which suggests that nests built in trees are more likely to be disturbed, the original hypothesis is that nests positioned on the ground will be at more risk than nests positioned in trees; that is, more nests on the ground will be disturbed. The reason is


that higher positions are harder for predators to reach, as dogs and some lizards cannot climb trees.

Method
Artificial birds’ nests were made out of half a tennis ball, with coconut fibres and/or sphagnum moss glued onto the ball and a hole drilled in the middle to stop them flooding with water. Two air-dry clay eggs were shaped and placed into each nest. The nests were placed in foliage, some in trees and others on the ground, high and low. The nests were observed for just over a week, with signs of disturbance checked every science lesson. When initially placed out, photos were taken of the position and condition of the eggs. If there were signs of disturbance, the affected nest was brought back to the classroom. At the end of the experiment, each student in the class returned their nest to the classroom, and the results and locations of the nests were entered in an Excel worksheet for further analysis.

Results
Total number of nests: 22
High position disturbances: 10/14 (71%)
Low position disturbances: 4/8 (50%)
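Since the per-nest results and locations were collected in an Excel worksheet, a summary like the one above could be produced directly with pandas. The sketch below is not the class’s actual analysis: the file name and the column names ("position", "disturbed") are hypothetical placeholders for whatever the worksheet actually contains.

```python
# A minimal sketch (not the class's actual analysis) of summarising per-nest
# records from the Excel worksheet into a disturbance table and percentages.
# The file name and column names are hypothetical.
import pandas as pd

# Expected columns: "position" ("high" or "low") and "disturbed" (True/False)
nests = pd.read_excel("nest_results.xlsx")

summary = pd.crosstab(nests["position"], nests["disturbed"], margins=True)
rates = nests.groupby("position")["disturbed"].mean() * 100  # % disturbed

print(summary)
print(rates.round(0))
```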

Discussion
Predators in the neighbourhood leave marks on the eggs they attack that reflect the shape of the animal’s mouth. Birds leave triangular marks on the eggs; the larger the bird, the larger the triangle. Rats, mice and possums leave tooth marks, two at the top and two at the bottom, and frequently chew eggs until they crumble. Lizards leave many small dots on the eggs arranged in a circular shape. Birds in the location of the experiment include lyrebirds, cockatoos, kookaburras and tawny frogmouths. Lyrebirds’ eggs are black with small dark dots to camouflage them against dirt and darkness and so protect them from predation. The cockatoos’, kookaburras’ and tawny frogmouths’ eggs, however, are pale because these birds have instead evolved to camouflage the eggs with their large feathers. To further increase breeding success, these birds form nests to ensure the safe development of their young. They often locate their nests in dense, hidden shrubs to prevent predation, but some larger birds build them in open areas as they are more capable of protecting their eggs. Some birds do not form nests at all and instead simply lay their eggs on the ground, such as Emperor Penguins. During breeding, the parents usually take turns incubating their eggs (sitting on the eggs to sustain their warmth before hatching) to maintain an ideal temperature for the normal development of their young. For kookaburras, however, the presence of additional male helpers during incubation has a neutral impact on nesting success, while additional female helpers have a negative impact. Kookaburras typically have a longer incubation period, as the larger the bird, the longer the incubation. As the results convey, 71% of the artificial nests located in trees were disturbed by a possible predator, compared with the 50% located on the ground. Trees thus appear to be the usual site for birds to build their nests, as the height isolates their young from common ground-level predators such as possums and dogs. High-level predators, such as magpies and ravens, have therefore adapted to their environment and are aware that other birds often lay their eggs in trees, so they hunt up high. Some predators, such as snakes and rats, have also adapted to birds’ behaviour and are accustomed to preying on eggs in trees. Having said that, a considerable number of eggs were predated on the ground, as nests can generally be found almost anywhere and there are numerous ground-level predators. This data would be useful to scientists in noting the breeding behaviour of birds in relation to where they commonly build their nests, as well as where predators often search for nests, based on the egg disturbance. However, this experiment only used pale eggs, which are the easiest colour for predators to spot, so the results would be an overestimate for other egg colours. The placement of each egg (i.e. whether eggs were placed in open, well-camouflaged or dense areas of the bush) was also not recorded, which greatly affects the likelihood of predation as explained above, and thus significantly impacts the acquired data.

References
Birds In and Around Wollongong | Destination Wollongong 2020, Destination Wollongong,

viewed 7 December 2021, <https://www.visitwollongong.com.au/birds-in-and-around-wollongong/>.
Brakefield, PM 2009, ‘Crypsis’, Encyclopedia of Insects, pp. 236–239, viewed 7 December 2021, <https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/crypsis>.
Celis, P, Graves, JA & Gil, D 2021, ‘Reproductive Strategies Change With Time in a Newly Founded Colony of Spotless Starlings (Sturnus unicolor)’, Frontiers in Ecology and Evolution, vol. 9, viewed 7 December 2021, <https://www.frontiersin.org/articles/10.3389/fevo.2021.658729/full>.
Clarke, M 2019, Birds, nests and tree hollows, The Maitland Mercury, viewed 7 December 2021, <https://www.maitlandmercury.com.au/story/6328111/birds-nests-and-tree-hollows/>.
Ibáñez-Álamo, JD, Magrath, RD, Oteyza, JC, Chalfoun, AD, Haff, TM, Schmidt, KA, Thomson, RL & Martin, TE 2015, ‘Nest predation research: recent findings and future perspectives’, Journal of Ornithology, vol. 156, no. S1, pp. 247–262, viewed 7 December 2021, <https://link.springer.com/article/10.1007/s10336-015-1207-4>.
LibGuides: Laughing Kookaburra (Dacelo novaeguineae) Fact Sheet: Reproduction & Development 2021, Libguides.com, viewed 7 December 2021, <https://ielc.libguides.com/sdzg/factsheets/laughingkookaburra/reproduction>.
NestWatch n.d., General Bird & Nest Info, Cornell University, New York, viewed 8 December 2021, <https://nestwatch.org/learn/general-bird-nest-info/nesting-cycle/>.



THE EFFECT OF NEST HEIGHT ON BREEDING SUCCESS Chi Hin Elvis Suen (Year 9)
Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500
Abstract
Bird nests are easy targets and are preyed upon by many different predators, for instance birds, lizards and rats. In this experiment, artificial birds’ nests were made from half a tennis ball covered with coconut fibre shaped around the nest, and two counterfeit eggs made of clay were placed into each nest. Twenty-three nests were made; half were placed on the ground and the other half were placed in trees around 1.6 m from the ground. This tested the hypothesis that there is a difference between putting the nest on the ground and in a tree. According to the data collected, the study suggests that the higher a nest is located, the harder it is to breed successfully.
Introduction

A few studies have been conducted to explore the relationship between the height of a nest and breeding success. The study ‘The effect of nest height on the seasonal pattern of breeding success in blackbirds’ found that height is critical for breeding: nests over 3 m had the highest success percentage compared with the two types of nests placed lower. In contrast, the study ‘Seasonal increase of nest height of the Silver-throated Tit (Aegithalos glaucogularis): can it reduce predation risk?’ suggests that placing the nest lower will increase the rate of breeding success. The blackbird study, completed by Ludvig et al., indicates the opposite result: that breeding success increases when the nest is placed higher.

In this study, nest predation is examined according to the height of the nest, and predators are identified. This investigation hypothesises that the higher the nest is, the higher the breeding success will be.

Method
Each student made a nest from half a tennis ball and coconut fibre, to make the nest as natural as possible, and two eggs were made from clay. During each science lesson an observation was made and recorded. Depending on the observation, different actions were taken: when an animal had attacked a nest, the nest was taken back to the lab for a closer look; if no animal had attacked the nest, it was not moved or touched. The results were collected after a week and a results table was constructed.

Results
In this experiment, the data showed that the number of nests that had been disturbed was significantly greater than the number that had not been disturbed. The disturbed nests were concentrated in high places: around 71% of disturbed nests were located in trees and only around 29% on the ground. According to the data, there was no effect on breeding success in low positions; the counts of disturbed and undisturbed low nests were equal. These numbers suggest that putting the nest in a high place increases the chance of being disturbed and


diminishes breeding success.

Position of nest    High    Low    Total
Disturbed           10      4      14
Not disturbed       4       4      8
Total               14      8      22
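A quick check of how strong this pattern is can be made with Fisher’s exact test, which suits small counts like those in the table above. This sketch is not part of the original report and assumes scipy is available.

```python
# A small sketch (not part of the original report) testing whether the
# difference in disturbance between high and low nests in the table above
# could plausibly be due to chance, using Fisher's exact test. Requires scipy.
from scipy.stats import fisher_exact

#                 disturbed, not disturbed
table = [[10, 4],   # high nests
         [4, 4]]    # low nests

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.2f}")
# With counts this small the p-value is large, so the apparent difference
# (71% vs 50% disturbed) is not statistically conclusive on its own.
```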

Discussion
The result was the opposite of the hypothesis and of the study ‘The effect of nest height on the seasonal pattern of breeding success in blackbirds’, but agrees with the study ‘Seasonal increase of nest height of the Silver-throated Tit (Aegithalos glaucogularis): can it reduce predation risk?’. In the experiment environment, predators such as birds, rats and lizards live nearby, since the area is likely to be their habitat. The data showed that more nests located in high places were disturbed; this is most likely because many birds live around the area, which increases the chance of a nest being disturbed. On the other hand, the result might also indicate that birds discover nests located in high places more easily than nests in low places or on the ground; however, this bird behaviour was not investigated in the experiment. Even though the result suggests that nests placed in trees have a higher chance of being disturbed, the birds that choose to locate their nests in trees do so in relation to their size. The website ‘Growing Illawarra Natives’ lists 13 types of bird that build their nests on the ground, and most of them are large. This suggests that large birds tend to place their nests on the ground. In Wollongong, most of the birds are small, and trees are able to take their weight. Therefore, the research shows that there might be a relationship involving the height of the nests, but further research and experiments should be completed before making a judgement. If this idea is correct, it might explain why more nests were disturbed in trees: since more of the local birds are small, more birds will tend to look for nests in trees. However, this is only a conjecture and should only be accepted after further investigation. This result might be helpful for future experiments of this type because it provides an example that could improve the understanding of predation, and it suggests that more investigation should be completed before the behaviour of birds is clearly understood. A disadvantage of this experiment is that no female bird was on the nest to guard and protect the eggs; further investigations could place a model bird on the nest to protect the eggs. In addition, although predators leave marks that allow identification, setting up a camera to recognise specific predators would provide more detail about each predator.

References
Guan, H, Wen, Y, Wang, P et al. 2018, ‘Seasonal increase of nest height of the Silver-throated Tit (Aegithalos glaucogularis): can it reduce predation risk?’, Avian Research, vol. 9, no. 42, <https://doi.org/10.1186/s40657-018-0135-4>.
Ludvig, E 2021, Avibirds.com, viewed 7 December 2021, <https://avibirds.com/wp-content/uploads/pdf/merel3.pdf>.
Bird Feeder Hub 2021, 13 Examples of Ground Nesting Birds (With Pictures), viewed 7 December 2021, <https://birdfeederhub.com/ground-nesting-birds/>.
Growing Illawarra Natives 2021, Illawarra birds, viewed 7 December 2021, <http://blog.growingillawarranatives.org/p/local-native-birds.html>.



MOZART AND MEMORY - The Effect of Mozart on Short-Term Memory Rhiannon Evans (Year 10)
Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500
Abstract
“Mozart makes you smarter.” But does it help a person remember? Listening to the complex music of Mozart can temporarily improve a person’s intelligence quotient (IQ) score. However, some music, particularly unfamiliar music, can inhibit short-term memory performance. In this experiment, females aged 15-16 completed the University of Washington short-term memory test either in silence or with Mozart’s Piano Sonata No. 11 in A Major playing, to test whether listening to Mozart affects short-term memory in people who do not study music. Those who were familiar with Mozart or studied music completed the test in silence, while those who did not meet these criteria completed the test with the music playing. The study found that listening to Mozart negatively affects short-term memory.

Introduction
‘The Mozart Effect’ refers to the theory that listening to Mozart’s music can temporarily improve IQ scores (Swartz, n.d.) and has been shown to hold for spatial-temporal tasks. Test participants who listened to Mozart while completing spatial-temporal tasks showed significantly higher results than those who did not (Kliewer, 1999). However, there have been conflicting outcomes and results regarding the effect of music on memory (Musliu et al. 2017). Listening to both familiar and unfamiliar music triggers the part of the brain associated with memory, but different genres have varying effects on performance that have not yet been determined. For people who study music, listening to familiar music negatively affects concentration and memory because the part of the brain engaged to analyse the music is the same part used to complete the tasks (Mori et al., n.d.), and it can be assumed that unfamiliar music would have a similar effect. In contrast, people who do not study music do not experience such effects, with music not noticeably affecting their performance.

This experiment aimed to investigate the effect of Mozart on the short-term memory of people who do not study music, an area not commonly addressed. It was predicted that Mozart would have no effect on short-term memory performance.

Method
Six participants were questioned on their familiarity with Mozart’s music and assigned a condition according to their response. Those who were familiar with Mozart’s music completed the test in silence, and those unfamiliar with Mozart’s music completed the test with Mozart’s Piano Sonata No. 11 in A Major playing. Every test participant wore headphones for the duration of the test. Participants completed the University of Washington short-term memory test, which asked them to memorise the letters that appeared on the screen for three seconds and write down as many as they could remember when the letters disappeared, a process that was repeated six times. The results were converted to a percentage and the averages calculated to


compare the two conditions.

Results
The participants who completed the test while listening to Mozart performed worse than the participants who completed the test in silence. The average result for those completing the test while listening to Mozart was 75%, as opposed to 85% for the participants who completed the test in silence (Table 1), a difference of 10 percentage points in the average result. The range of results was 27% for the participants listening to Mozart and 24% for the participants in silence (Figure 1).

Condition   Average Result (%)   Range (%)
Mozart      75                   27
Silence     85                   24
Table 1: The average result and range of each group of participants.

Figure 1: The effect of Mozart vs silence on University of Washington short-term memory test results.
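The group summaries in Table 1 are straightforward to reproduce once each participant’s score is expressed as a percentage. The sketch below is illustrative only: the score lists are placeholders, not the study’s data.

```python
# A minimal sketch (scores below are placeholders, not the study's data)
# showing how each participant's percentage score could be reduced to the
# group average and range reported in Table 1.

def summarise(scores):
    """Return (average, range) of a list of percentage scores."""
    return sum(scores) / len(scores), max(scores) - min(scores)

mozart_scores = [70.0, 80.0, 75.0]    # hypothetical percentages
silence_scores = [85.0, 90.0, 80.0]   # hypothetical percentages

for name, scores in [("Mozart", mozart_scores), ("Silence", silence_scores)]:
    avg, rng = summarise(scores)
    print(f"{name}: average = {avg:.0f}%, range = {rng:.0f}%")
```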

Discussion
It was hypothesised that listening to Mozart would not affect the short-term memory of a person who does not study music. However, the 10 percentage point lower average of those who listened to Mozart while completing the test, compared with those who completed the test in silence, indicates a negative effect of Mozart on short-term memory. Additionally, the similar ranges of results for the two groups mean that the effect of differing memory abilities between participants was largely negated: all results were shifted down by about 10% rather than spread over a wider range. An experiment conducted by Cambridge Brain Sciences (n.d.) regarding the effect of music on memory showed that listening to music either had no effect or greatly hindered short-term memory. It was determined that the part of the brain used to process the music was also used for memory, thereby dedicating a smaller proportion of brain resources to the memory task. However, a similar experiment regarding the effect of music on concentration concluded that music did not affect the performance of participants who did not study music (Musliu et al. 2017). The negative effect of music on the short-term memory of a person who does not study music could be caused by using similar regions of the brain to complete the memory test and to process the music, thereby reducing the brain resources dedicated to the task. Regardless of whether a person studies music, listening to unfamiliar music can be expected to cause them to attempt to process it and commit it to memory using the same brain regions involved in completing the test.



This study was completed under fair conditions, though a larger sample would have been ideal to gain a clearer understanding of the difference between the two conditions. Moreover, studies have shown that emotions can influence the effect of music on a person’s memory and concentration (Musliu et al. 2017). Such influences were not accounted for, though the impact was predicted to be negligible and was therefore not factored into the method. Further investigation into the effect of Mozart on people who do study music could develop the understanding of the effect of Mozart on memory for most people, rather than just the portion who do not study music. It would also highlight any differences between the two groups, should they be evident. In addition, investigating the effect of familiar music on both people who study music and those who do not could provide further understanding of the effect of music on a person’s short-term memory. A study completed by Dr Fabiny with Harvard Medical School (2015) concluded that music boosted long-term memory, most notably in elderly people and people who could not speak. Research into the effects of music on long-term memory could provide a greater understanding of the long-term influences of music on memory and further develop the treatment and management of conditions such

as Alzheimer’s disease, dementia, and conditions that eliminate the ability to speak. In conclusion, this experiment has revealed that Mozart negatively impacts short-term memory in people who do not study music; in other words, listening to Mozart does not support short-term memory.

References
Cambridge Brain Sciences n.d., Can Listening to Music Actually Help You Concentrate?, viewed 9 November 2021, <https://www.cambridgebrainsciences.com/more/articles/can-listening-to-music-actually-help-you-concentrate>.
Fabiny, A 2015, Music can Boost Memory and Mood, viewed 9 November 2021, <https://www.health.harvard.edu/mind-and-mood/music-can-boost-memory-and-mood>.
Mori, F, Naghsh, F & Tezuka, T n.d., The Effect of Music on the Level of Mental Concentration and its Temporal Change, PDF, viewed 9 November 2021, <https://files.ifi.uzh.ch/stiller/CLOSER%202014/CSEDU/CSEDU/Information%20Technologies%20Supporting%20Learning/Full%20Papers/CSEDU_2014_40_CR.pdf>.
Swartz, L n.d., The "Mozart Effect": Does Mozart Make You Smarter?, PDF, viewed 11 November 2021, <http://xenon.stanford.edu/~lswartz/mozarteffect.pdf>.
Musliu, A, Berisha, B, Musaj, A, Latifi, D & Peci, D 2017, The Impact of Music on Memory, PDF, viewed 11 November 2021, <https://www.researchgate.net/publication/318539845_The_Impact_of_Music_on_Memory>.



BREATHE IN, BREATHE OUT! Sarvani Thapaliya (Year 10)
Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500
Abstract
This investigation explored the effect of prolonged physical activity on heart rate, using pulse rate as the measure. The results showed a positive relationship between the amount of exercise done and the pulse rate. The reasons for this relationship involve several different processes and mechanisms of the human body. One of these is muscle sympathetic nerve activity (MSNA), which studies have shown contributes to the increase in heart rate during exercise. Another, more prominent, process is the aerobic energy system, which uses oxygen to produce ATP. The cardiovascular system delivers oxygen to the mitochondria, and as demand for ATP increases, so does demand for oxygen; therefore the heart rate increases.

Introduction
The experiment focuses on the effect of the intensity of physical activity on heart rate, measured using a pulse monitor. There has been some questioning, however, of the accuracy of using pulse rate as a measure of heart rate. The standard practice for measuring heart rate is an ECG (electrocardiogram); however, some scientists have used pulse cycle intervals determined from a pulse wave signal. A study by A. Schäfer and J. Vagedes investigated the accuracy of PRV (pulse rate variability) as an estimate of HRV (heart rate variability). The results pointed to sufficient accuracy when subjects were at rest; however, short-term HRV was overestimated by PRV (Schäfer & Vagedes 2013). Several studies have previously shown that physical exercise is associated with increased sympathetic activity, resulting in an increase in heart rate (Javorka et al. 2002). This is because cardiovascular adjustment is required to meet the metabolic demands of skeletal muscle during exercise, and the sympathetic nervous system plays a role in regulating blood pressure and blood flow. Research by K. Katayama investigated

MSNA (muscle sympathetic nerve activity) regulation during exercise in humans. The results showed an increase in MSNA during static upper- and lower-limb exercise; in lower-limb exercise at moderate intensity, MSNA rose gradually in proportion to workload (energy output). The human body also uses ATP for activity, and there are three energy systems which replenish ATP during exercise. For this investigation, the most relevant energy system is the aerobic system. This long-term system produces energy using oxygen. The production of ATP commences in the mitochondria of muscle fibres. Mitochondria possess enzymes which aid in breaking down glucose through its interaction with oxygen (the by-products of this reaction are water and carbon dioxide) (Khan Academy 2021). Getting oxygen from the atmosphere to the mitochondria involves a series of processes within the respiratory system. The first step is the inhalation of air, with the oxygen reaching the alveoli in the lungs. The oxygen is then dissolved in blood (through the capillaries surrounding the alveoli) and binds to haemoglobin. The oxygen later dissociates from haemoglobin and dissolves in the peripheral bloodstream again; through diffusion, it moves into the cell and the mitochondria. The cardiovascular system is also used at this step to move oxygenated blood to the rest of the body. Therefore, as the demand for ATP increases, so does the demand for oxygen, and the heart rate must increase to continuously supply it (Intensive Care NSW 2021). As the by-products of producing ATP include carbon dioxide, more deoxygenated blood also needs to be carried back to the lungs (through the pulmonary artery), and the process repeats.
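For reference, the aerobic breakdown of glucose described in this paragraph is commonly summarised by the standard textbook equation below (the energy released is captured as ATP; the exact ATP yield per glucose molecule varies and is not shown):

C6H12O6 (glucose) + 6 O2 → 6 CO2 + 6 H2O + energy (captured as ATP)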


Aim
The objective of this experiment is to investigate the effect of physical activity on heart rate, through monitoring pulse rate.

Hypothesis
The heart rate will increase as subjects run an increasing distance; therefore the pulse rate will also increase with prolonged exercise, in the same proportion. This is due to the supply of blood flow needing to meet the energy demand of the skeletal muscles. Muscle sympathetic nerve activity increases during various forms of exercise, and an increase in the activity of the sympathetic nervous system is seen during exercise, often resulting in an increased heart rate.

Method
A distance of 20 m was measured using a trundle wheel. The resting pulse rate of each subject was then recorded prior to the experiment using a pulse rate monitor: the monitor was clipped on the subject’s finger, and the rate which appeared most often over a 30-second interval (timed with a stopwatch) was recorded. Subjects first ran two laps, and their pulse rate was recorded in the same manner. This was repeated with the subjects running four laps, then five laps.

Results
[Figures: pulse rate (bpm) plotted against distance ran (m) for Subjects 1 to 6.]
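The trend summarised in the figures above can be quantified by fitting a straight line to each subject’s readings. The sketch below is not the class’s analysis: the pulse readings are placeholders, and only the lap distances (2, 4 and 5 laps of 20 m) follow the method.

```python
# A minimal sketch (readings are placeholders, not measured data) of checking
# a subject's pulse rate for a linear trend against distance run. Requires numpy.
import numpy as np

distance_m = np.array([0, 40, 80, 100])       # rest, then 2, 4 and 5 laps of 20 m
pulse_bpm = np.array([72, 105, 128, 142])     # hypothetical readings

slope, intercept = np.polyfit(distance_m, pulse_bpm, 1)   # least-squares line
print(f"pulse rate rises by ~{slope:.2f} bpm per metre run "
      f"(resting estimate ~{intercept:.0f} bpm)")
```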



Discussion

The results of this experiment show a clear positive trend between pulse rate and prolonged exercise. Five of the six subjects had data supporting this claim; however, an outlier was produced in the case of Subject 6, who has been diagnosed with chronic tachycardia for several years, which may explain their pulse rate going down from 148 bpm to 147 bpm. The causes of the general trend shown in this investigation are many-fold, and from this experiment it is not possible to conclude definitively which mechanism was the main cause. “High intensity exercise can result in up to a 1000-fold increase in the rate of ATP demand compared to that at rest” (Baker, McCormick & Robergs 2010). As such, ATP must be replenished at a rate similar to ATP demand, and there are three energy systems which assist in ATP regeneration: the phosphagen, glycolytic, and mitochondrial respiration systems. During intense exercise, all three systems contribute to different degrees to replenishing ATP, in proportion to the contribution of the skeletal muscle motor units. However, this experiment did not account for which energy system contributed the most in these exercises. Furthermore, an increase in muscle sympathetic nerve activity (MSNA) is observed during intense exercise, which also brings a proportionate increase in heart rate and pulse rate. While the exercises in this experiment probably stayed within the parameters of aerobic respiration (they were not of high intensity), it is impossible to rule out MSNA as a factor in the trendline, as the intensity of the exercise was not controlled.



Measuring the pulse rate of the subjects after each stage of their exercise gathered sufficient samples of data to show this trendline clearly. The control variables implemented in this experiment supported a causal interpretation of the relationship between pulse rate and exercise, and there were enough subjects contributing data to notice a trend: five of the six subjects showed an increase in pulse rate as the exercise became prolonged, though not definitively the proportionate increase the hypothesis predicted. Even so, the sample size was not large enough to conclude with confidence that there is a positive trend between pulse rate and exercise. As human beings vary between individuals, an acceptable sample size for human research is 10% of a population, if the population is not over 1000 (Piroska Bisits Bullen 2013). The investigation could also have been subject to fewer outliers in the data if a medical history had been taken. Using the pulse rate monitor as a measurement of heart rate posed some inaccuracies, since studies have shown that short-term variability of the heart rate is overestimated by pulse rate.

Conclusion
This investigation provided sufficient data to achieve the aim of the experiment, which was to investigate the effect of prolonged exercise on heart rate using pulse rate as the measure. The hypothesis was correct in stating that the pulse rate would increase as the amount of exercise increased; however, the data did not support the prediction that the increase would be proportional. This experiment could not, however, provide a definitive explanation for the trendline, as there are several mechanisms in the human body which could cause the trend (such as the energy systems or the sympathetic nervous system).

References
Schäfer, A & Vagedes, J 2013, ‘How accurate is pulse rate variability as an estimate of heart rate variability?’, International Journal of Cardiology, vol. 166, no. 1, pp. 15–29, viewed 26 November 2021, <https://www.sciencedirect.com/science/article/abs/pii/S0167527312003269>.
Javorka, M, Zila, I, Balhárek, T & Javorka, K 2002, ‘Heart rate recovery after exercise: relations to heart rate variability and complexity’, Brazilian Journal of Medical and Biological Research, vol. 35, no. 8, pp. 991–1000, viewed 26 November 2021, <https://www.scielo.br/j/bjmbr/a/7mmq5FLwxcxKSJJx6Ns55cx/?lang=en>.
Katayama, K & Saito, M 2019, ‘Muscle sympathetic nerve activity during exercise’, The Journal of Physiological Sciences, vol. 69, no. 4, pp. 589–598, viewed 26 November 2021, <https://jps.biomedcentral.com/articles/10.1007/s12576-019-00669-6>.
Baker, JS, McCormick, MC & Robergs, RA 2010, ‘Interaction among Skeletal Muscle Metabolic Energy Systems during Intense Exercise’, Journal of Nutrition and Metabolism, vol. 2010, pp. 1–13, viewed 29 November 2021, <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3005844/>.
Burdyga, T & Paul, RJ 2012, ‘Calcium Homeostasis and Signaling in Smooth Muscle’, Muscle, pp. 1155–1171, viewed 29 November 2021, <https://www.sciencedirect.com/topics/biochemistry-genetics-and-molecular-biology/phosphagen>.
Piroska Bisits Bullen 2013, How to choose a sample size (for the statistically challenged), tools4dev, viewed 29 November 2021, <https://tools4dev.org/resources/how-to-choose-a-sample-size>.
Cellular respiration review (article) | Khan Academy 2021, Khan Academy, viewed 29 November 2021, <https://www.khanacademy.org/science/in-in-class-11-biology-india/x9d1157914247c627:respiration-in-plants/x9d1157914247c627:fermentation-and-the-amphibolic-pathway/a/hs-cellular-respiration-review>.
Arora, S & Tantia, P 2019, ‘Physiology of Oxygen Transport and its Determinants in Intensive Care Unit’, Indian Journal of Critical Care Medicine, vol. 23, no. S3, viewed 29 November 2021, <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6785823/>.
Intensive Care NSW 2021, Cardiovascular System, Intensive Care NSW, viewed 29 November 2021, <https://aci.health.nsw.gov.au/networks/icnsw/patients-and-families/patient-conditions/cardiovascular-system>.



“YOU’VE HAD A MAN LOOK!” Is this a real thing?

Alissa Tonkin (Year 10)

Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Abstract

A common assumption is that males, especially throughout their teenage years, are not as observant as females. In this experiment males' observational skills were compared to females' through a timed 'Where's Wally' test. Twelve males and twelve females were asked to locate Wally, and their times were averaged to test the hypothesis that males would take longer to locate Wally than females. The study found that females were almost three times slower than males at locating Wally, suggesting that the theory of a 'man look' is not reliable.

Introduction

A common stereotype in society is that females are more observant than males. An example of this is when someone says 'You're having a man look', which refers to a person looking for something and not being able to find it because they do not look for it properly. Often, whatever they are looking for is right in front of them. This is referred to as 'a man look', as men are often seen in society as not paying as much attention to detail, causing them to miss things that are right in front of them. While no previous studies have been conducted to compare males' observational skills to females', a related study by Krystnell Storr explains why women seem to mature faster than men. The study found that the human brain undergoes major changes anatomically and functionally as we age, and these changes make the connections in our brain more efficient. Storr's research found that this process tends to happen at an earlier age for women than men, which may explain why some women seem to mature faster than men.

In this study, males' observational skills were examined and compared to females' to discover whether gender is a factor in one's observational abilities. It is hypothesised that males will take a larger amount of time to locate Wally in a "Where's Wally Now?" book compared to females.

Method

Twenty-four subjects (twelve males and twelve females), within an age bracket of 15–16, were asked to locate Wally in a "Where's Wally Now?" book. Their time was measured with a stopwatch and recorded in a table.

Results

The time it took to locate Wally was longer for females than it was for males. While the time it took for females to locate Wally was almost three times longer than for males, females observed more objects throughout the test than males, for example Wally's sister and dog. This suggests that females pay more attention to detail than males, and that males only complete the task they were set.
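As a minimal sketch of the averaging described in the Method and Results, the comparison between the two groups could be computed as below. The times listed are hypothetical placeholders, not the data recorded in this experiment.

```python
# Minimal sketch of comparing the two groups by average search time.
# The times (in seconds) are hypothetical placeholders, not the recorded data.
male_times = [12.1, 9.8, 15.0, 11.2, 10.5, 13.7, 9.9, 14.2, 12.8, 11.0, 10.1, 13.3]
female_times = [30.4, 41.2, 28.7, 35.9, 39.1, 33.3, 44.0, 31.6, 37.8, 29.5, 40.7, 36.2]

def mean(values):
    return sum(values) / len(values)

male_avg = mean(male_times)
female_avg = mean(female_times)
print(f"Average male time:   {male_avg:.1f} s")
print(f"Average female time: {female_avg:.1f} s")
print(f"Ratio (female/male): {female_avg / male_avg:.1f}x")
```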



Discussion

It was hypothesised that females would be faster than males in a timed 'Where's Wally' test; however, the results suggested that males were almost three times faster than females. Although the results indicate that males are faster than females at locating an object, no previous studies have been conducted to test this idea, making comparisons impossible. The method was valid as the experiment was designed to investigate whether females are faster than males at locating an object in front of them, and only one aspect of the experiment, the independent variable, was changed. The independent variable was changed from male to female, thus testing the observational skills of the two groups, while the dependent variable, the time taken, was measured. The experiment is also valid as the same test was used on all 24 participants. While the experiment was conducted correctly, human error may play a factor in the accuracy of the results. Human error refers to the limitations of human ability: transcriptional error may have occurred throughout this experiment, as the exact time at which a participant located Wally may have been recorded slightly inaccurately. Through the repetition of this experiment twelve times for each gender, reliability was improved as the chance of error was reduced; random error tends to average out if enough measurements are taken and averaged, which is why repetition can improve the reliability of the final results of an experiment. One way that the experiment could be improved would be to conduct it over a variety of age groups. Through doing this, the observational skills of males and females could be compared across a range of ages, further allowing the audience to understand whether age plays a factor in observational skills.

References

Urban Dictionary: man look 2019, Urban Dictionary, viewed 29 November 2021, <https://www.urbandictionary.com/define.php?term=man%20look>.

Krystnell Storr 2015, Science Explains Why Women Are Faster to Mature Than Men, Mic, viewed 29 November 2021, <https://www.mic.com/articles/111226/science-explains-why-women-are-faster-to-mature-than-men>.



THEORETICAL YIELD vs EXPERIMENTAL YIELD IN A COMBUSTION REACTION

Suzanne Abou Shalah (Year 10)

Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Abstract

Stoichiometry in chemistry is the study of the quantitative relationships between the amounts of reactants used and the amounts of products formed. It is based on the Law of Conservation of Mass, which states that matter cannot be created or destroyed: in a chemical reaction, the amount of matter present at the end must be the same as the amount present at the beginning. The calculations depend on balanced chemical equations, which indicate the molar ratio of the reactants and products. In this study, stoichiometry was used to determine the amount of MgO produced in an exothermic combustion reaction of magnesium. The experiment was repeated multiple times, and the results found that although the experimental yield was similar to the theoretical yield, the mass of the products varied due to experimental errors.

Introduction This study aimed to investigate how experimental yield compares to theoretical yield in a combustion reaction of magnesium. The theoretical yield was calculated using stoichiometry and was used to calculate the percentage error when the experimental yield was found. Based on stoichiometry and the Law of Conservation of Mass, it was predicted that the amount of matter in the reactants will be the same as the amount of matter in the products.

Method

The following equipment was used in the experiment: crucible with lid, tongs, gauze mat, Bunsen burner, magnesium. Firstly, the experiment was set up as shown in Figure 1 and the mass of the empty crucible was measured. Secondly, the piece of magnesium was coiled tightly and also measured along with the crucible and lid. To ensure consistent mass throughout the trials, the crucible was heated over the Bunsen burner until the mass was stable and all moisture was released. Subsequently, the magnesium was placed in the crucible and heated over the Bunsen burner with the lid remaining closed. Oxygen was periodically allowed into the crucible to aid combustion. Combustion was complete when the magnesium no longer ignited when exposed to oxygen; the product was white and flaky. Thereafter, the crucible with the lid was weighed once again and the mass was recorded. These steps were repeated another 2 times and the results were recorded.


Figure 1: Equipment set up.

Results

Figure 2: Theoretical yield, actual yield and percentage error for each trial.

Yield                 | Trial 1 | Trial 2 | Trial 3
Theoretical yield (g) | 0.48097 | 0.3815  | 0.3317
Actual yield (g)      | 0.37    | 0.35    | 0.32
Percentage error      | 23.1%   | 8.2%    | 3.5%

Discussion

The experiment was repeated multiple times to test the Law of Conservation of Mass using stoichiometry in a combustion reaction, by burning magnesium. To ensure the reliability of the experiment, three trials were conducted, and the results varied noticeably between the first and last trials. Of course, in such an experiment, burning magnesium in a crucible will not give 100% accuracy in the amount of yield produced. This is due to various errors, predominantly the loss of matter whilst allowing oxygen into the crucible to react with the magnesium and aid combustion. Moreover, in the first trial (trial 1 in Figure 2) the crucible was not pre-heated over the Bunsen burner, so there may have been excess products from previous experiments and the mass of the crucible was not stabilised. Meanwhile, when the crucible was heated in trials 2 and 3, and weighed after it was heated, its mass remained stable and steady; the percentage error decreased, and thus the results were more accurate.

The theoretical yield was calculated using stoichiometry and molar ratios. Stoichiometry was used because it allows us to predict the mass of the product of a combustion reaction, given the mass of the reactants. With these calculations, I tested the magnesium combustion reaction, found the experimental yield (the mass of the product), and compared it to the theoretical yield. As seen in Figure 2, the percentage errors in the three trials varied from 3.5% to 23.1%. To improve this experiment in the future, the magnesium ribbon could be coiled loosely rather than very tightly. Since the magnesium was coiled very tightly in some of the trials, it took longer to react and in some cases did not completely react, hence less yield was produced, with a higher percentage error. Error can also occur due to moisture in the crucible prior to experimentation. To prevent this in the future, the crucible should be heated over the Bunsen burner until all moisture is released, which was done in trials 2 and 3.
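As a minimal sketch of the stoichiometry described above (2Mg + O2 → 2MgO, a 1:1 molar ratio of Mg to MgO), the theoretical yield and percentage error can be computed as follows. The starting mass of magnesium shown is an illustrative value, not a figure reported in this experiment; the trial 1 yields are taken from Figure 2.

```python
# Minimal sketch: theoretical yield of MgO from a given mass of Mg, and the
# percentage error against a measured (actual) yield. Molar masses in g/mol.
M_MG = 24.305
M_O = 15.999
M_MGO = M_MG + M_O

def theoretical_yield_mgo(mass_mg_g: float) -> float:
    """2Mg + O2 -> 2MgO, so moles of MgO equal moles of Mg (1:1 ratio)."""
    moles_mg = mass_mg_g / M_MG
    return moles_mg * M_MGO

def percentage_error(theoretical: float, actual: float) -> float:
    return abs(theoretical - actual) / theoretical * 100

# Illustrative starting mass of magnesium (not a value reported above):
print(round(theoretical_yield_mgo(0.290), 4))      # -> about 0.4809 g of MgO

# Trial 1 values from Figure 2: theoretical 0.48097 g, actual 0.37 g
print(round(percentage_error(0.48097, 0.37), 1))   # -> 23.1 (%)
```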


In conclusion, the results did not support the hypothesis, which predicted that the amount of matter in the reactants would be the same as the amount of matter in the products. As addressed in the discussion, there were various errors leading to the inaccuracy of the results. Nonetheless, the experiment itself was successful in determining and calculating theoretical and experimental yield using stoichiometry.

References

General Chemistry n.d., Determination of the Empirical Formula of Magnesium Oxide, viewed 26 November 2021, <https://www.webassign.net/question_assets/ucscgencheml1/lab_2/manual.html>.

Khan Academy n.d., Stoichiometry, viewed 26 November 2021, <https://www.khanacademy.org/science/ap-chemistry-beta/x2eef969c74e0d802:chemical

LibreTexts 2019, Stoichiometry Calculations, viewed 26 November 2021, <https://chem.libretexts.org/Courses/Bellarmine_University/BU%3A_Chem_103_(Christianson)/Phase_2%3A_Chemical_ProblemSolving/5%3A_Reaction_Stoichiometry/5.3%3A_Stoichiometry_Calculations>.

Lumen n.d., Reaction Stoichiometry, viewed 26 November 2021, <https://courses.lumenlearning.com/boundless-chemistry/chapter/reaction-stoichiometry/>.

OER Services n.d., Yields, viewed 26 November 2021, <https://courses.lumenlearning.com/suny-introductory-chemistry/chapter/yields/>.

Revolutionized 2019, What Is Stoichiometry?, viewed 26 November 2021, <https://revolutionized.com/what-is-stoichiometry/>.

Royal Society of Chemistry n.d., The change in mass when magnesium burns, viewed 26 November 2021, <https://edu.rsc.org/experiments/the-change-in-mass-when-magnesium-burns/718.article>.

School Work Helper 2020, Magnesium Oxide: Percent Yield Lab Report, viewed 26 November 2021, <https://schoolworkhelper.net/magnesium-oxide-percent-yield-lab-report/>.

The Organic Chemistry Tutor 2017, Stoichiometry Basic Introduction, online video, 11 August, viewed 26 November 2021, <https://www.youtube.com/watch?v=7Cfq0ilw7ps>.

ThoughtCo. 2019, Introduction to Stoichiometry, viewed 26 November 2021, <https://www.thoughtco.com/introduction-to-stoichiometry-609201>.



FACTORS AFFECTING BACTERIA GROWTH - TEMPERATURE

Isabel O’Brien (Year 10)

Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Abstract

Food consumed by humans is stored and refrigerated, or more commonly cooked, before being consumed. In this experiment, a sample of the bacterium Staphylococcus epidermidis was placed on 4 petri dishes: a control was incubated at 30 °C, and the other three were stored in the refrigerator at 5 °C. Staphylococcus epidermidis was also boiled at 95 °C for 10 minutes, placed on a further three petri dishes, and then grown in an incubator at 30 °C, to test the hypothesis that bacteria do not grow outside the temperature range of 5–60 °C, which is the reason food consumed by humans is stored outside this temperature range. The study found that the boiled and refrigerated bacteria did not grow at all after 24 hours, yet the control petri dish was covered in bacteria.

Introduction

The aim of this study was to investigate the way in which temperature affects the growth of bacteria. When bacteria are grown below or above their growth range, they will not grow, but if they are grown within the growth range they will thrive. Bacterial growth is defined as the increase in the bacterial population rather than the growth in size of individual cells. Bacteria multiply by binary fission: the parent cell's DNA is copied and the cell splits into two daughter cells. Many factors influence the rate and growth of bacteria, including temperature, nutrients, pH, water, salt, and gaseous concentration. Temperature affects the growth of bacteria in various ways. There is a minimum and maximum temperature at which bacteria can grow, and an optimal temperature within this range at which the bacteria thrive. This range changes between different types of bacteria. Bacteria that grow in food have a growth range of around 6 to 60

degrees Celsius. The bacteria will not grow below the minimum, as the membrane solidifies and nutrients cannot be transferred, or above the maximum temperature, where proteins and enzymes denature. If temperature increases steadily from the minimum, bacterial growth increases until a maximum growth rate is reached, known as the optimal temperature. If the temperature continues increasing, the growth rate of bacteria declines until the maximum temperature is reached, at which point bacterial growth ceases. Availability of nutrients also affects the growth of bacteria. Furthermore, pH affects bacterial growth, as do water availability and gaseous concentration, which are important factors in bacterial growth. The temperature at which bacteria grow is of particular importance, as bacteria can grow in the food that we eat and can therefore cause disease and food poisoning. When food is stored in the refrigerator or heated up before eating, this is to prevent or kill bacterial growth. When we leave food out on the bench, or when food is lukewarm for long periods of


time, this creates the optimal temperature for bacteria to grow, meaning bacterial growth is rapid at this stage. Some bacteria can be pathogenic and cause disease. Bacteria that cause food poisoning in food consumed by humans are directly affected by temperature and grow best at temperatures between 5 °C and 60 °C. Therefore, we store food in refrigerators and freezers to keep it below 5 °C, where bacteria will not grow in food, or heat it by cooking it past 60 °C. Hence, it is important for us to store food in refrigerators and to cook food, and to investigate the effects of storing food at temperatures outside the optimal range for bacterial growth.

Method

The bacteria sample of Staphylococcus epidermidis was used. 2–3 drops of the bacteria were dropped onto 4 of the dishes using the pipette and the inoculating loop was used to spread it thinly around the petri dish. 10–15 drops of the bacteria were dropped into a test tube and placed in boiling water for ten minutes; the temperature was measured as 95 °C. This boiled bacteria sample was dropped onto the remaining 3 petri dishes using the pipette and spread using the inoculating loop. The lids of the petri dishes were sealed with masking tape. The dishes were labelled as either control, boiled or refrigerated accordingly. 4 dishes were placed in the incubator (the 3 boiled and 1 control dish), and 3 dishes in a refrigerator. The dishes were left in their locations for 24 hours. The percentage coverage of bacteria was then estimated by

observation on each petri dish and this was recorded in a results table.

Results

The petri dishes which were stored in the refrigerator, and the petri dishes stored in the incubator that had been boiled, had no bacteria coverage. The control petri dish had around 85-90% coverage of bacteria.

Location            | % covered, Dish 1 | % covered, Dish 2 | % covered, Dish 3
Refrigerated        | 0%                | 0%                | 0%
Incubated (boiled)  | 0%                | 0%                | 0%
Incubated (control) | 85-90%            | n/a               | n/a

Figure: Average coverage of bacteria (%) for the refrigerated, boiled and incubated (control) dishes.

Discussion

It was hypothesised that the samples that had been boiled or refrigerated would show minimal bacterial growth, as these conditions are outside the optimum range.


The results therefore supported the hypothesis. This is because bacteria cannot grow outside a certain range of temperatures. For Staphylococcus epidermidis this temperature range is 15 °C to 45 °C, and it grows best at 30 °C to 37 °C. This is similar to other foodborne pathogens, such as norovirus, Salmonella, Clostridium perfringens, Campylobacter, and other staphylococci such as Staphylococcus aureus, which can contaminate food consumed by humans and cause illness. The bacteria that were incubated were incubated at 30 °C to ensure optimal conditions for growth. The reason the refrigerated bacteria did not grow is that they were kept in conditions below the temperature range in which they can survive (at 5 °C); bacteria will not grow below the minimum, as the membrane solidifies and nutrients cannot be transferred. The bacteria that were boiled at 95 °C for ten minutes and then incubated did not grow, since 95 °C is above the maximum range for most bacteria (including Staphylococcus epidermidis), and bacteria held above their maximum temperature cannot survive, as it is at these temperatures that proteins and enzymes denature. This supports the theory that refrigerating and cooking (heating) food is done to prevent bacterial growth and illness from food. The control, which was grown at 30 °C, close to room temperature, also supports this: perishable food not stored in a refrigerator, or not heated before consumption, may provide optimum conditions for bacterial growth and may cause illness if the bacteria that have grown are consumed. Reliability in this experiment comes from its repeatability: 3 samples were grown in each temperature condition, and in addition a control sample was utilised. The results were consistent across each petri dish that was used.
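As a minimal sketch of this reasoning (using the growth range for S. epidermidis stated above, with the temperatures being those used in the experiment), the expected outcome at a given temperature can be expressed as a simple range check:

```python
# Minimal sketch: classify expected growth using the range stated above for
# S. epidermidis (roughly 15-45 °C, with an optimum of about 30-37 °C).
MIN_T, MAX_T = 15.0, 45.0        # °C, growth range stated in the Discussion
OPT_LO, OPT_HI = 30.0, 37.0      # °C, optimal range stated in the Discussion

def expected_growth(temp_c: float) -> str:
    if temp_c < MIN_T:
        return "no growth (membrane solidifies; nutrients cannot be transferred)"
    if temp_c > MAX_T:
        return "no growth (proteins and enzymes denature)"
    if OPT_LO <= temp_c <= OPT_HI:
        return "rapid growth (optimal temperature)"
    return "some growth (within range, but not optimal)"

for t in (5, 30, 95):            # refrigerated, incubated control, boiled
    print(f"{t:>3} °C: {expected_growth(t)}")
```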

Validity in this experiment was achieved by controlling the variables. Only one variable was changed (temperature), and other variables were controlled, such as the bacteria sample used, the sterilisation of materials, the materials used and the period given for the bacteria to grow. Given more resources, more controls could have been used to find more consistent results. Future studies could look at exactly what temperature the bacteria stop growing at, by growing bacteria at specific temperatures clustered around 60 °C and 5 °C. In this experiment there was nonetheless a clear difference between the temperatures in terms of growth.

Conclusion

In summation, bacterial growth is significantly affected by temperature. Food is stored above and below certain temperatures to limit bacterial growth on food that may cause sickness. If bacteria are grown below or above their growth range (for Staphylococcus epidermidis, at or below 5 °C or at or above 60 °C), they will not grow. If they are grown within this range (e.g., at 30 °C), they will thrive.

References

Conditions needed for bacterial growth - Food safety – CCEA - GCSE Home Economics: Food and Nutrition (CCEA) Revision - BBC Bitesize 2021, BBC Bitesize, viewed 17 November 2021, <https://www.bbc.co.uk/bitesize/guides/z77v3k7/revision/1>.

CDC 2020, Foodborne Germs and Illnesses, Centers for Disease Control and Prevention, viewed 17 November 2021, <https://www.cdc.gov/foodsafety/foodborne-germs.html>.

Kundrat, L 2016, Environmental Isolate Case Files: Staphylococcus epidermidis, Microbiologics Blog, viewed 17 November 2021, <https://blog.microbiologics.com/environmental-isolate-case-files-staphylococcus-epidermidis/>.



ELECTRICAL CONDUCTIVITY THROUGH NEUTRALISATION OF A BASIC SOLUTION

Molly Mills (Year 10)

Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Abstract

pH and conductivity are both measurements of ions in solution. pH measures a specific ion (hydrogen/hydroxide), whereas conductivity is a non-specific measurement of the concentration of both positively and negatively charged ions in solution; the two are therefore related. In this experiment, a neutralisation reaction was carried out (using hydrochloric acid and sodium hydroxide) past the point of equivalence, whilst measuring conductivity (S/cm) and pH simultaneously, to test the hypothesis that as the solution is neutralised it will become less conductive and that, following the equivalence point, as the solution becomes more basic, the conductivity will begin to rise again. The study found this trend to be accurate, although the equivalence point was not measured exactly, so the theory could not be completely verified.

Introduction

Conductivity

Conductivity is the ease with which charged particles move through a specified distance of a material or solution, measured in this experiment in S/cm (Andy Connelly, 2017). In a material (e.g., a metal wire) the charged particles are electrons; in solution, however, they are ions. Two ways in which ions impact the size of the current are their nature (mobility, charge and size) and their concentration. The more ions there are in the solution, the greater the conductivity; therefore conductivity can also be used as a measure of concentration.

pH

The acidity or basicity of an aqueous solution can be measured on the pH scale, with acids defined as having a pH less than 7 and bases higher than 7 (Encyclopaedia Britannica, 2020), with 7 being neutral. In solution, acids produce H+ (hydrogen ions) and bases produce OH- (hydroxide ions). The higher the concentration of OH- in a solution, the more basic it is and the higher its pH. Conversely, the higher the concentration of H+ in a solution, the more acidic it is, and the lower the number on the pH scale will be. Pure water is formed from a reversible reaction of the two (Mark Bishop, 2013), as seen here:

H+ + OH- → H2O

Therefore, combining Acids and Bases will result in a more neutralised solution, and neutralising an acid by adding a base will result in the production of salt and water. In the case of this experiment,

HCl + NaOH → H2O + NaCl.


pH and conductivity through neutralisation of an acid

The neutralisation of an acid occurs when a base is added and reacts to form water and a salt. When a solution is neutralised, there are equivalent amounts of acid and base, as the amount of acid that would give one mole of H+ and the amount of base that would give one mole of OH- are equivalent. Salts form in neutralisation reactions, and equivalent amounts of acid will always neutralise equivalent amounts of base (LibreTexts, 2020). The point of neutralisation can also be referred to as the equivalence point: the point where the solution is neutralised and hence the ratio of acid to base is 1:1, or when pH = 7. In a solution, the charged particles are ions. The higher the concentration of OH- in a solution, the more basic it is; conversely, the higher the concentration of H+, the more acidic it is. Therefore, as the pH of a solution becomes more neutral, the conductivity will decrease due to the lower concentration of basic OH- ions or acidic H+ ions. Conductivity in an aqueous solution is measured by the ease of movement of ions, in S/cm. pH is a log scale which represents the acidity and basicity of a liquid. The relationship between these two concepts can be explored by neutralising an acid and measuring the pH. Every liquid can be measured on the pH scale, so understanding the relationship between the concentration of ions and the acidity or basicity of a given aqueous solution results in a greater understanding of how ions act in a compound and how chemicals react with one another, cultivating a greater understanding of the world.

Aim

To investigate the effect of pH level on the conductivity of a solution, by neutralising HCl (an acid) with NaOH (a base) and simultaneously measuring the two quantities to establish their relationship.

Hypothesis

In this experiment the base sodium hydroxide will neutralise the acid hydrochloric acid, and the pH and conductivity will be measured; as the solution is neutralised it will become less conductive and, following the equivalence point, as the solution becomes more basic, the conductivity will begin to rise again. This is because the higher the concentration of ions in a solution, the more conductive it is, and following this trend relative to the pH scale, the more H+ or OH- ions there are in a solution, the more acidic or basic, respectively, it will be. Upon neutralising the acid, which is measured using the pH log scale, the conductivity will decrease until the point of equivalence (a 1:1 ratio of acid and base) and then begin to rise again once the solution begins to become more basic.

Method

Dependent variables

Both pH and conductivity will be measured to identify their relationship in a neutralisation experiment. They will be measured using probes connected to a logger that records these results once every second.

Independent variable

The pH level will be deliberately changed throughout the experiment by neutralising the HCl acid. This will be changed through adding 3mL


(with dropper) of NaOH every two seconds, so that time and the amount of base added are equivalent measures of change. In short, the base NaOH will be added to alter the pH of the H2O-diluted HCl.

Controlled variables

To ensure validity of data, the following measures were put in place. Temperature plays a role in the movement of ions and subsequently the conductivity of a substance; to best monitor this variable, all three tests were conducted on the same day to warrant the most consistent external temperatures. The same measuring equipment was used in every experiment so that the results would be fair and thus the trend would be most accurate. In order for each experiment to be as valid as the other, all equipment was rinsed with neutral H2O before each trial to generate the same starting place for each reaction. An equal amount of NaOH was added to an equal amount of H2O-diluted HCl in each experiment to maintain consistency. 0.1 molar HCl and an equal concentration of NaOH were used in each experiment, also to maintain consistency.

Methodology

1. Place 25 mL of 0.1 molar HCl in a 150 mL beaker with 50 mL of H2O.
2. Place 50 mL of 0.1 molar NaOH in a 100 mL beaker.
3. Connect the conductivity probe and pH probe to the logger and open an appropriate app to measure and graph conductivity and pH levels.
4. Place the probes into the large beaker (with HCl and H2O).
5. Press start on the logger and begin adding NaOH with a 3 mL dropper every two seconds (starting 1 second in) until all the NaOH is gone. Ensure data is saved to the logger at the conclusion of the trial.
6. Once the trial is complete, rinse all beakers and probes with water and repeat steps one and two. Then, continuing with the same logger, repeat steps four and five.
7. Repeat this three times.

Results

The trend of lowered conductivity to the point of equivalence (and an increase on movement away from this point) on either side of the pH scale was shown in the results of this experiment. The equivalence point of pH = 7 differed in relation to time in each trial; however, each time it was in line with the lowest point of conductivity in the respective trial. In trial one, the pH level of 7.324 (the closest measured point to pH = 7) was in line with a conductivity of 317.773 S/cm, the lowest point in that test. Due to the pH scale being a log scale, the changes surrounding the equivalence point occurred very quickly, with the exact measurement of pH = 7 never being captured. Trial one had the closest measurement, thus most clearly displaying the hypothesised trend compared to the averages of all the trials. Figure 3 shows the results of trial one. The exact relationship was not hypothesised; however, it is of note that the conductivity never completely returned to the level it was at to begin with, regardless of the fact that the same number of ions were present (hydroxide ions following neutralisation of hydrogen ions) as the distance from the equivalence point was relatively equal.


The graphs shown below are the final results with outliers removed, to best display the trend of the relationship between pH and conductivity throughout the neutralisation of the acid.

Figure 1: pH over time (pH level vs time in seconds; Trials One to Three).

Figure 2: Conductivity over time (conductivity in S/cm vs time in seconds; Trials One to Three).

Figure 3: pH / conductivity relationship over time (pH level and conductivity in S/cm vs time in seconds; trial one).

Discussion

The reaction that took place was:

HCl + NaOH → H2O + NaCl

Positive hydrogen ions react with negative hydroxide ions to form water (H+ + OH- → H2O), and the remaining positive sodium ions and negative chloride ions form a salt (Na+ + Cl- → NaCl); these are the two substances produced in a neutralisation reaction (cK-12, n.d.). The pH and conductivity graphs show the trends of the data collected over 70 seconds of a neutralisation experiment. The pH shows a relatively steady increase, followed by a steep increase around the 40 second mark, which is the equivalence point of the experiment. The incline is so steep because the pH scale is a log scale, meaning that equal steps in pH correspond to increasingly large changes in ion concentration the further from neutral the solution becomes. The conductivity graph displays the decrease in conductivity to the equivalence point around the 40 second mark, which aligns with the equivalence point in the pH graph.
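To illustrate why the pH curve jumps so sharply at the equivalence point, here is a minimal sketch of an idealised strong acid–strong base titration using the volumes and concentrations from the Methodology (25 mL of 0.1 M HCl diluted with 50 mL of water, with 0.1 M NaOH added in 3 mL steps). This is a simulation under ideal assumptions, not the logged data from the trials.

```python
# Minimal sketch: pH of an idealised strong acid-strong base titration,
# showing the steep jump in pH near the equivalence point.
import math

c_acid, v_acid = 0.1, 0.025      # mol/L and L of HCl
v_water = 0.050                  # L of dilution water
c_base, step = 0.1, 0.003        # mol/L NaOH, 3 mL per addition

moles_acid = c_acid * v_acid
for n in range(0, 17):
    v_base = n * step
    total_v = v_acid + v_water + v_base
    moles_base = c_base * v_base
    diff = moles_acid - moles_base           # excess acid (>0) or excess base (<0)
    if diff > 0:
        h = diff / total_v                   # leftover H+ concentration
    elif diff < 0:
        h = 1e-14 / (-diff / total_v)        # [H+] = Kw / [OH-]
    else:
        h = 1e-7                             # exactly neutral
    ph = -math.log10(h)
    print(f"{v_base * 1000:4.0f} mL NaOH added -> pH {ph:5.2f}")
```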


These results confirm the hypothesis: 'as the solution is neutralised it will become less conductive and following the equivalence point, as the solution becomes more basic, the conductivity will begin to rise again'. The point at which the conductivity (and ion concentration) was lowest directly correlates with the spike in pH. This confirms that as the concentration of H+ from the acid decreases and reacts with the OH- from the base, a neutral solution forms, which has the lowest conductivity as it also has the lowest concentration of excess H+ or OH- ions. Once past the point of neutralisation, the conductivity begins to rise again, as hydroxide ions become more prevalent than hydrogen ions, which results in a more basic, no longer neutral, solution. It is possible that the incomplete return to the initial conductivity following the equivalence point is due to the unequal time spent on either side of pH 7. The trend is present, but the data does not completely display the theory behind the trend because of this. It could, however, also be due to an inaccurate method, which is discussed below. This experiment achieved the expected trend and relationship, with the lowest conductivity at the equivalence point as predicted. The neutralisation reaction was successful, as can be seen by the changes in pH, consistent with H+ + OH- → H2O producing a neutral aqueous solution. Although this experiment was successful, it was not without flaws. To further develop the hypothesis, the measurement of the point of neutralisation should have been included in the method. The exact amount of base used to neutralise the acid could have been measured in a slower, more controlled experiment, as only an estimation could be made in this experiment.

In the hypothesis, a trend was predicted as opposed to exact data, and this is most clearly shown through the visual display of the graphs. The control of the experiment had flaws; however, this did not negate the confirmation of the proposed outcome, as the neutralisation still occurred and the conductivity still followed the predicted trend closely. The method could be improved to increase the accuracy of the data. The greatest improvement would be to separate the amount of base being added from the timing, in other words to control the rate of addition independently of time. This was an issue because two seconds was not enough time to measure exactly 3 mL of liquid and check it was correct before placing it in the solution. This could be done by measuring precisely 3 mL of base in the dropper and mixing it through the solution to ensure it is stable before measuring the conductivity and pH level. This would, however, be a much slower experiment than the one we conducted, but it would certainly be much more accurate and would eliminate a great deal of human error. Conducting the experiment more times with this improved method would produce a much clearer and more valid trend in the results, further testing the hypothesis.

Conclusion

Although the method was flawed, the trend in the results described by the hypothesis was clear. As the solution was neutralised the conductivity decreased, and then, as it became basic, the conductivity began to increase. The point of equivalence was not measured accurately, which made this data less valid. The aim was achieved, as a strong relationship was found in all trials between pH and conductivity, and this trend corresponds with


the hypothesis. Although many improvements could be made, this experiment achieved the core desired outcomes of investigating the relationship between neutralisation and conductivity in a solution.

References

Bishop, M 2013, pH and Equilibrium, viewed 20 November 2021, <https://preparatorychemistry.com/Bishop_pH_Equilibrium.htm>.

cK-12 n.d., Neutralisation reaction, viewed 20 November 2021, <https://www.ck12.org/c/physical-science/neutralization-reaction/lesson/Acid-Base-Neutralization-MS-PS/>.

Connelly, A 2017, Conductivity of a solution, viewed 20 November 2021, <https://andyjconnelly.wordpress.com/2017/07/14/conductivity-of-a-solution/>.

Encyclopaedia Britannica 2020, pH chemistry, viewed 20 November 2021, <https://www.britannica.com/science/pH>.

LibreTexts 2020, Neutralization, viewed 20 November 2021, <https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Acids_and_Bases/Acid_Base_Reactions/Neutralization>.



THE EFFECT OF FONT STYLE ON MEMORY

Bridie De Lutiis (Year 10)

Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Abstract

Written information presented in different fonts can result in varied memory and learning outcomes. Some previous studies have suggested that text written in smaller type is harder to read and requires more concentration, so the information is retained for longer. Other studies have found the opposite, with whether participants remembered the study words depending on time rather than font. In this experiment, 10 participants were shown 15 words in 5 different fonts to test whether fonts that were harder to read were easier to remember. After a filler activity they were given 3 minutes to write down the words they remembered. The study found that the last 3 words of the test, which were the hardest to read (in the Edwardian Script font), were easier to remember after the filler test, although the first 6 study words shown were recalled a similar amount even though they were in easier fonts.

Introduction

How does the font of learning material affect memory? While remembering study material depends on the difficulty of the content and the learning strategy used, it can also be affected by the style of the text and how easy it is to read. 'Results from the prior experiments have converged on a similar pattern: participants regard large items as more memorable than easier to read items. We suggest that this occurred because participants regarded large items as subjectively more fluent and thus more memorable, than small items' (Rhodes & Castel 2008). However, in contrast, 'a second line of research that focused on other perceptual features of learning materials such as font type or clarity suggested that, in some cases, presenting materials in a perceptually degraded format can enhance rather than impair learning' (e.g., Diemand-Yauman, Oppenheimer, & Vaughan, 2011).

It is hypothesised that the last 3 words of the study in Edwardian font will be easiest to remember because they are harder to read.

Method

In this experiment risks were considered negligible, therefore a risk assessment was not conducted. 10 participants were seated in front of a table where I held up sheets with different words in multiple fonts. A series of 15 words at size 72 was held up for 5 seconds each, in 5 different fonts: Calibri, Bodoni MT Poster, Freestyle Script, Rage and Edwardian Script, which were progressively harder to read. After the reading, participants were asked to complete a filler test for 2 minutes, in which they had to recall as many states of America as they could, to act as a stimulus of no further interest to the experiment. Finally, each participant was given 3 minutes to recall and


write the words they remembered. This data was recorded and transferred into a graph.
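As a minimal sketch of how the recall data could be tallied per font (the word lists and responses below are hypothetical placeholders, not the words or answers used in this experiment):

```python
# Minimal sketch: count how many recalled words fall under each font.
# The word-to-font mapping and the recalled lists are hypothetical placeholders.
words_by_font = {
    "Calibri":          ["apple", "river", "stone"],
    "Bodoni MT Poster": ["cloud", "tiger", "lemon"],
    "Freestyle Script": ["piano", "grass", "chair"],
    "Rage":             ["ocean", "candle", "bridge"],
    "Edwardian Script": ["falcon", "meadow", "copper"],
}
font_of = {w: f for f, ws in words_by_font.items() for w in ws}

# One recalled list per participant (placeholders).
recalled = [
    ["apple", "falcon", "ocean"],
    ["river", "meadow", "copper", "cloud"],
]

counts = {font: 0 for font in words_by_font}
for answers in recalled:
    for word in answers:
        if word in font_of:               # ignore words not in the study list
            counts[font_of[word]] += 1

for font, n in counts.items():
    print(f"{font:18} {n}")
```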

Results

Figure: 'Which fonts are more effective on memory' (number of words remembered for each font; fonts ordered from easiest to hardest to read).

Discussion

It was hypothesised that the fonts that were harder to read would be easier to remember. The results of the experiment suggest that while the words in the hardest font to read were remembered by more participants, the first 6 study words used in the experiment were also recalled easily and often. Out of the ten participants, nine remembered at least one of the words in the hardest fonts, while 18% of the words recalled across all participants were the first 3 words that were read. This suggests that participants remembered the first few words because their focus was stronger and they had no previous words to think about, which does not relate to font style.

The use of different fonts was the easiest way to test styles of font that varied in how easy they were to read. The methodology of having 15 study words was chosen because it allowed for three words per font type. A disadvantage to the validity of the experiment is that the participants were all at different learning levels and ages, which was not controlled. However, other control variables, such as the amount of time allowed to review each word and which words were used, were the same for all participants. The experiment was repeated with 10 participants to reduce the effect of outliers and to make the results more reliable. The results of other current research show that very small font size can be a desirable difficulty, and hence the results provide support for the counterintuitive notion that perceptually degraded materials can enhance learning outcomes. This experiment suggests that words in a font that is harder to read are easier to remember, as are the study words that are read first during the experiment.

References

Halamish, V 2018, Can very small font size enhance memory?, viewed 9 November 2021, <https://link.springer.com/article/10.3758/s13421-018-0816-6>.

Carey, B 2011, 'Come On, I Thought I Knew That!', The New York Times, 18 April, viewed 15 November 2021, <https://www.nytimes.com/2011/04/19/health/19mind.html>.

Rhodes, M & Castel, A 2008, Memory Predictions Are Influenced by Perceptual Information: Evidence for Metacognitive Illusions, ResearchGate, viewed 27 November 2021, <https://www.researchgate.net/publication/23463867_Memory_Predictions_Are_Influenced_by_Perceptual_Information_Evidence_for_Metacognitive_Illusions>.


THE EFFECT OF BORAX ON THE ELASTICITY OF SLIME

Mieke Jones (Year 10)

Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Abstract

Slime is a toy enjoyed by children all over the world. In this work, PVA glue and a borax solution were used to create the non-Newtonian fluid, slime. The experiment involved creating 3 separate slimes with different amounts of borax solution, and the elasticity of each slime was evaluated to determine the effect of borax on the elasticity of the slime. To evaluate the elasticity, each slime was stretched along a ruler until it snapped and the final length at breaking was recorded. The study found that the slimes with a lower amount of borax solution had a higher elasticity.

Introduction Polymers are a class of natural or synthetic substances comprised of very large molecules which are made up of multiples of smaller units called monomers. When monomers join together to make polymer, they join by forming covalent bonds through sharing electrons (Evans, D & Watkins, S 2017). Polymers can be found in living organisms, minerals and many man-made materials. Polymers are used extensively in children’s toys (Polymer, 2021). Polyvinyl acetate (PVA) glue is a vinyl polymer (Wikipedia 2021) which is made by chemically combining vinyl acetate monomers into long chain-like network molecules (Polymerisation, 2020). The PVA polymer is used in many applications. It can be found in paints, food additives and adhesives for wood, paper and cloth (Oksman, K 2017). Crosslinking is the process of joining long chain-like network polymer molecules together. This process can alter the overall structure of the polymer and change its

properties, for example its elastic behaviour (Kuckling, D 2012). PVA can be cross-linked with different cross-linkers. One such cross-linker is sodium tetraborate, commonly known as borax. Many studies have been done on cross-linking PVA using borax or borax-based substances, and cross-linking of solutions of PVA and water to give gel-like materials has also been reported. Some studies also showed that borax-type additives cause the PVA to become more mouldable (Oksman, K 2017). In 1976 Mattel Toys released a product called Slime that was designed to be a gross, oozing substance. It was light green in colour and was supplied in a little green garbage can. Presumably, during the development of this product by Mattel, the effect of the cross-linker on the base polymer would have been studied to provide a product with the desired properties. This report investigates the cross-linking reaction between PVA glue and borax and the effect of an increasing volume of borax on the properties of the final polymer (The Science behind Slime, 2017).


Method Three batches of slime were made using PVA glue, water and borax solution. The borax solution was made by adding 1 teaspoon of borax into half a cup of water and stirring until the borax is completely dissolved. Into three different cups a PVA and water solution was made in the ratio of 1:1; one spoon of PVA glue and 1 spoon of water. These PVA solutions were mixed carefully until the PVA glue was evenly mixed into the water. A single drop of food colouring was added to the PVA solution and mixed in thoroughly. To the first cup of PVA solution 25 drops of borax solution was added and mixed to make slime. A 1m ruler was laid on the bench. The slime was removed from the cup, moulded into a ball and by pinching gently between the fingers of each hand the slime was pulled apart above the ruler until it broke. The final length of the slime when it broke was measured on the ruler. The slime was collected, remoulded into a ball and stretched a further 2 times (3 times total). Each measurement of the stretched slime was recorded. This process was repeated for the remaining PVA solutions by adding 35 drops of borax to one and 45 drops of borax to the other. The results of all experiments were collated and graphed.

Results

The results for the stretched length of each slime recipe are collated in Table 1 and represented graphically in Figure 1. In this work, the elasticity of the slime is represented by the length achieved during the pull test, where a longer length equates to a greater elasticity. The three pull-test results were averaged for each slime recipe. Based on the average length results from the pull test, the slime recipe with the greatest elasticity was the recipe with 25 drops of borax, and the recipe with the least elasticity was the recipe with 45 drops of borax. The average elasticity increased by 176% when the volume of borax was reduced from 45 drops to 25 drops. The average elasticity increased by 24% when the volume of borax was reduced from 45 drops to 35 drops. The change in elasticity with the volume of borax did not follow a linear relationship. This can be seen in Figure 1, where the trend line is the linear relationship and the average length in the pull test for 35 drops of borax is below this line.

Table 1: Results of slime pull test for elasticity (stretched length in cm).

Slime Recipe      | Trial 1 | Trial 2 | Trial 3 | Average
25 drops of borax | 60      | 28      | 50      | 46
35 drops of borax | 18      | 16      | 28      | 20.67
45 drops of borax | 10      | 18      | 22      | 16.67



Figure 1: Average elasticity for slime recipes with increasing amounts of borax.
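As a minimal sketch of the averaging and percentage comparisons reported in the Results, using the average lengths from Table 1:

```python
# Minimal sketch: percentage change in average stretched length (elasticity)
# between slime recipes, using the average values from Table 1.
averages_cm = {25: 46.0, 35: 20.67, 45: 16.67}   # drops of borax -> average length (cm)

def percent_increase(from_value: float, to_value: float) -> float:
    return (to_value - from_value) / from_value * 100

print(f"45 -> 25 drops: {percent_increase(averages_cm[45], averages_cm[25]):.0f}% increase")
print(f"45 -> 35 drops: {percent_increase(averages_cm[45], averages_cm[35]):.0f}% increase")
```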

Discussion

Making slime is a common experiment used in schools to discover and learn about monomers, polymers, bonding and cross-linking. Documentation on the experimental work done when Slime was developed by Mattel Toys in the 1970s is most likely unavailable because it was a company secret and to avoid direct copies. Many experiments have been done since then to create slime, but few have complete documentation available. The science, however, is well known, and this experimental work supported the hypothesis that the volume of borax changes the properties, in this case the elasticity, of slime. This experimental work showed that as the volume of borax is increased, the elasticity of the slime decreases. The PVA polymer solution consists of long molecules of the PVA polymer moving around in water. When the borax is added, the borate ions attach to the PVA molecules by the process known as cross-linking. Cross-linking reduces the ability of the PVA molecules to move, and the PVA solution produces a less

liquid substance known as slime. By adding more borax, more borate ions are available to attach to the PVA molecules and the number of cross-links increases (Carnegie Mellon University, n.d.), which in turn further reduces the ability of the PVA molecules to move or slide relative to each other (American Chemical Society, 2021). For the PVA polymer solution to have elasticity it needs some cross-linking (to make it slime and not liquid), but not so many cross-links that it can no longer stretch far without breaking (Questacon, n.d.). The graph in Figure 1 shows that there was variability in the results of the pull test. There are several variables that could be the cause of this variability. Firstly, the original moulded shape of the slime could vary, such that the slime is thinner or already stretched a little before starting the pull test. It is suggested that a more controlled and consistent method of measuring the elasticity of the slime would reduce the variability of the individual results.


A second variable relates to the slime acting like a non-Newtonian fluid. For a non-Newtonian fluid, its physical behaviour can change depending on the nature of the applied force (Helmenstine, A 2019). For example, it was observed that if the speed of the pull test changed, the results were different: for a faster pull speed, the slime broke at a shorter length (The Science Behind Slime n.d.). It was difficult to pull the slime at the same speed and force every time, which likely contributed to the variability in the results. Comparing the results for the average length of the pull tests for each slime recipe, the increase in elasticity from 45 drops to 35 drops was only 24%, while the increase in elasticity from 35 drops to 25 drops, the same change in volume, was much larger. The change in elasticity did not follow a linear relationship. It is acknowledged that the number of results in this experiment is limited, and it is recommended that more experiments be conducted with volumes below 25 drops and between 25 and 35 drops to better understand how the elasticity changes with borax volume and perhaps identify the optimum volume of borax for maximum elasticity. Looking at the individual results for the 45 drop and 35 drop recipes, it is suggested that the difference in elasticity might not be significant, because both recipes had results of 18 cm and the other results were not very far apart. The variation between these two recipes might just be experimental variation. To explore this, the number of individual tests could be increased from 3 to 10 to gain a better understanding of the results (Accuracy, Precision, and Error n.d.). If the results are similar, this could indicate that for the PVA solution in this experiment there is a limit to the number of cross-links that can

form and the extra borate ions available are not used.

References

Accuracy, Precision, and Error n.d., viewed 17 November 2021, <https://courses.lumenlearning.com/introchem/chapter/accuracy-precision-and-error/>.

American Chemical Society 2021, Time for Slime, viewed 15 November 2021, <https://www.acs.org/content/acs/en/education/whatischemistry/adventures-in-chemistry/experiments/slime.html>.

Evans, D & Watkins, S 2017, Polymers: from DNA to rubber ducks, Australian Academy of Science, viewed 10 November 2021, <https://www.science.org.au/curious/everything-else/polymers>.

Helmenstine, A 2019, The Science of How Slime Works, ThoughtCo, viewed 16 November 2021, <https://www.thoughtco.com/slime-science-how-it-works-608232>.

Kuckling, D 2012, Polymers for Advanced Functional Materials, viewed 14 November 2021, <https://www.sciencedirect.com/topics/engineering/cross-linked-polymer>.

Meza, V 2018, How does the amount of borax affect the elasticity of slime?, Prezi, viewed 17 November 2021, <https://prezi.com/p/pdcsu2qdeiuo/how-does-the-amount-of-borax-affect-the-elasticity-of-slime/>.

Polymer 2021, Britannica, viewed 12 November 2021, <https://www.britannica.com/science/polymer>.

Polymerisation 2020, Britannica, viewed 13 November 2021, <https://www.britannica.com/science/polymerization>.

Polyvinyl Alcohol Slime n.d., Carnegie Mellon University, viewed 16 November 2021, <https://www.cmu.edu/gelfand/lgc-educational-media/polymers/polymer-architecture/polyvinyl-alcohol-slime.html>.

Oksman, K 2017, 'Plasticizing and crosslinking effects of borate additives on the structure and properties of poly (vinyl acetate)', RSC Advances, no. 13, viewed 13 November 2021, <https://pubs.rsc.org/en/content/articlelanding/2017/ra/c6ra28574k>.



Questacon n.d., Borax Slime, Questacon, Canberra, viewed 17 November 2021, <https://www.questacon.edu.au/outreach/programs/science-circus/activities/borax-slime>.

Science Mom 2017, The Science behind Slime, online video, 9 December, viewed 15 November 2021, <https://www.youtube.com/watch?v=4F9ukCQvP20>.

The Science Behind Slime n.d., viewed 16 November 2021, <https://littlebinsforlittlehands.com/basic-slime-science-homemade-slime-for-kids/>.

Wikipedia 2021, 'Polyvinyl acetate', wiki article, 2 November, viewed 12 November 2021, <https://en.wikipedia.org/wiki/Polyvinyl_acetate>.



EFFECT OF ANTIBIOTICS ON THE GROWTH OF Staphylococcus epidermidis

Grace Schofield (Year 10)

Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Abstract

In this experiment, antibiotic disks containing tetracycline, penicillin, ampicillin, chloramphenicol, streptomycin and sulphatriad were placed on agar plates spread with the bacterium Staphylococcus epidermidis to determine the effectiveness of the antibiotics, as shown by the zone of inhibition. S. epidermidis was spread on an agar plate and an antibiotic disk was placed on top. To ensure reliability and sanitation, the tools used (tweezers and inoculation loops) were heated with a Bunsen burner to remove any unwanted bacteria. This was repeated another 2 times to ensure reliability. Agar plates were put in an incubator set at 30 °C for 2 days. The diameter of the zone of inhibition was measured, and the results showed which antibiotic was most effective against the growth of S. epidermidis. It was hypothesised that tetracycline, ampicillin, chloramphenicol and penicillin would have larger zones of inhibition, as they fight gram-positive bacteria. The results supported the hypothesis in that tetracycline had the largest zone of inhibition (3.9 cm); however, the results did not support the hypothesis for penicillin, which was essentially non-effective, with a zone of inhibition of 0.1 cm.

Introduction

Bacteria are small single-celled organisms found in all types of ecosystems and in the human body. Pathogens and parasites are bacteria that cause diseases; however, pathogens are comparatively rare, and many other bacteria are very helpful to ecosystems and to the human body, such as the bacteria found in the human gut. Staphylococcus epidermidis is a gram-positive, non-pathogenic bacterium and is part of normal human flora, typically skin flora. It is a facultative anaerobe: a bacterium that produces adenosine triphosphate (ATP) by aerobic respiration when oxygen is present, but is capable of switching to fermentation if oxygen is absent. The zone of inhibition is an area of media where bacterial colonies are unable to grow; it is measured to determine the effectiveness of antibiotics against bacteria.

Ampicillin (an extended-spectrum penicillin) and the tetracycline antibiotics are active against both gram-positive and gram-negative bacteria and therefore should be effective against S. epidermidis. Chloramphenicol is an antibiotic with a broad spectrum of activity against gram-positive and gram-negative bacteria and Rickettsia, hence it should also be effective against S. epidermidis. Penicillin works best on gram-positive bacteria by inhibiting peptidoglycan production; peptidoglycan is found in most bacterial cell walls, including that of S. epidermidis, making penicillin potentially effective against it. Streptomycin commonly fights gram-negative bacteria; however, it also fights a small group of bacteria including the staphylococci, so streptomycin is potentially effective against S. epidermidis. Sulphatriad fights gram-negative bacteria and essentially cannot fight gram-positive bacteria,


consequently sulphatriad is non-effective against S. epidermidis. The aim of the experiment was to determine which antibiotic is most effective against the bacteria, Staphylococcus epidermidis. It is hypothesized that the more effective antibiotics, ampicillin, tetracycline, chloramphenicol and penicillin will have a larger zone of inhibition against S. epidermidis. Method

Heat up the inoculation loop in the blue flame of a Bunsen burner; this removes unwanted bacteria. Gather the bacteria with a pipette and put three drops of S. epidermidis on the agar plate. Spread the bacteria with the inoculation loop until evenly and thinly spread out, and immediately shut the agar plate lid.

Heat the tweezers with the Bunsen burner and let them cool, making sure the tweezers do not touch any surfaces. Once cool, pick up the mastring antibiotic disks and gently place them on the agar plate. Gently press down the mastring antibiotics with the tweezers to ensure contact with the agar plate. Immediately shut the agar plate lid and tape the edges with masking tape. Repeat another 2 times to ensure reliability. Place the agar plates in an incubator at 30 °C for 2 days.

Results


Discussion
The results showed that tetracycline was the most effective antibiotic against Staphylococcus epidermidis, with an average zone of inhibition of 3.9 cm, followed by chloramphenicol (2.8 cm), ampicillin (2.5 cm), streptomycin (2.1 cm), penicillin (0.1 cm) and sulphatriad (0 cm). Tetracycline's effectiveness is consistent with its documented activity against both gram-positive and gram-negative bacteria; since S. epidermidis is gram-positive, tetracycline was expected to inhibit it. The hypothesis was only partly supported: penicillin was the second least effective antibiotic against S. epidermidis when it was predicted to be one of the most effective, but tetracycline, ampicillin and chloramphenicol were, as predicted, the most effective antibiotics. The results therefore answered the aim by identifying the antibiotic (tetracycline) most effective against S. epidermidis.

Variables were well controlled: the incubator was kept at 30 °C for 2 days, the same antibiotic disks were used for all three agar plates, and the same bacterium (S. epidermidis) and amount (three drops from a pipette) were used on each plate. The control was an agar plate with only S. epidermidis spread on it, also kept at 30 °C for 2 days in the incubator; the absence of inhibition on this plate showed that the zones of inhibition were caused by the antibiotics and not by other factors. The experiment was reliable, as it was repeated three times for each antibiotic with no faults in the method, giving consistent quantitative results for the zone of inhibition. For most antibiotics the replicate measurements differed by no more than 2 mm; the largest spread was for penicillin, where one plate showed a zone of inhibition of 0.4 cm while the other two showed 0 cm. To further evaluate the effectiveness of antibiotics against the growth of S. epidermidis, and of penicillin in particular, additional research and an experiment varying the concentration of penicillin could be conducted to determine whether concentration affects its action on the bacterium. Research indicated that penicillin should act against S. epidermidis because the bacterium is gram-positive; however, the results showed little effectiveness (average zone of inhibition 0.1 cm).

Conclusion
In conclusion, tetracycline was the most effective antibiotic, consistent with research showing that tetracycline acts against both gram-positive and gram-negative bacteria and that S. epidermidis is a gram-positive bacterium. The hypothesis was only partly supported: tetracycline was the most effective antibiotic (average zone of inhibition 3.9 cm), but penicillin was essentially ineffective (average zone of inhibition 0.1 cm), suggesting some form of resistance in the bacteria.

References
Bacteria 2021, Genome.gov, viewed 16 November 2021, <https://www.genome.gov/genetics-glossary/Bacteria>.
facultative anaerobe | microorganism | Britannica 2021, Encyclopedia Britannica, viewed 17 November 2021, <https://www.britannica.com/science/facultative-anaerobe>.



Hunter, JP & Gilbert, JA 2019, 'Access for Renal Replacement Therapy', Kidney Transplantation - Principles and Practice, pp. 69–89, viewed 18 November 2021, <https://www.sciencedirect.com/topics/medicine-and-dentistry/staphylococcus-epidermidis>.
Kirst, HA & Allen, NE 2007, 'Aminoglycosides Antibiotics', Comprehensive Medicinal Chemistry II, pp. 629–652, viewed 18 November 2021, <https://www.sciencedirect.com/topics/chemistry/streptomycin>.
Mastring Antibiotic Sets 2021, Southern Biological, viewed 16 November 2021, <https://www.southernbiological.com/mastring-antibiotic-sets/>.
Measuring Drug Susceptibility | Boundless Microbiology 2021, Lumenlearning.com, viewed 16 November 2021, <https://courses.lumenlearning.com/boundless-microbiology/chapter/measuring-drug-susceptibility/>.
Mulamattathil, SG, Esterhuysen, HA & Pretorius, PJ 2000, 'Antibiotic-resistant Gram-negative bacteria in a virtually closed water reticulation system', Journal of Applied Microbiology, vol. 88, no. 6, pp. 930–937, viewed 18 November 2021, <https://pubmed.ncbi.nlm.nih.gov/10849168/>.
Otto, M 2009, 'S. epidermidis — the "accidental" pathogen', Nature Reviews Microbiology, vol. 7, no. 8, pp. 555–567, viewed 17 November 2021, <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2807625/>.
Weird Science: Penicillin and the Cell Wall | manoa.hawaii.edu/ExploringOurFluidEarth 2021, Hawaii.edu, viewed 17 November 2021, <https://manoa.hawaii.edu/exploringourfluidearth/biological/aquatic-plants-and-algae/structure-and-function/weird-science-penicillin-and-cell-wall>.



THEORETICAL YIELD vs EXPERIMENTAL YIELD Leo Ding (Year 10) Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

Abstract
The experimental yield of a reaction always differs slightly from the theoretical yield because of practical errors. To test this, magnesium was combusted in a crucible; calculating the theoretical yield and measuring the experimental yield after combustion allowed a percentage error to be calculated. The experiment was completed 7 times to improve reliability, giving a more reliable percent error. The study showed that the experimental yield was consistently lower than the theoretical yield due to practical errors, such as the escape of magnesium oxide smoke, resulting in a loss of mass.

Introduction
Many experiments have been carried out to compare theoretical and practical yields. The theoretical yield is the result an experiment would give at 100% efficiency, whereas the practical (experimental) yield is the result actually obtained, which is reduced by practical errors and lower efficiency. The theoretical yield is taken as the benchmark because it assumes no error. In this study, the experimental and theoretical yields of the combustion of magnesium are compared, which gives a percent error for the experimental yield.

Hypothesis
The experimental yield will be affected by factors that allow the crucible to lose mass, because the burning magnesium ribbon produces magnesium oxide (MgO) smoke that can escape from the crucible into the atmosphere. The theoretical yield, by contrast, predicts that the crucible will gain mass due to the magnesium oxide produced during combustion. When magnesium is combusted, light and heat are released.

2Mg(s) + O2(g) → 2MgO(s)

The chemical equation for the combustion of a magnesium ribbon is shown above.

Method
Prior to the experiment, the masses of the empty crucible, the magnesium ribbon, and the crucible containing the magnesium ribbon were recorded using a balance. The crucible, with the magnesium ribbon inside, was then heated over a Bunsen burner placed on a heatproof mat. Oxygen was allowed to enter the crucible about once every minute by quickly opening and closing the crucible lid, which also restricted the escape of magnesium oxide smoke. This process was continued until the magnesium was fully combusted, leaving white ash (magnesium oxide) in the crucible. The crucible was left to cool and then weighed again to find the mass of the crucible and the combusted magnesium. The weights were recorded and compared with the theoretical yield.
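As a rough illustration of how the theoretical yield follows from the equation above, the short Python sketch below converts a measured mass of magnesium into the expected gain in crucible mass (the mass of oxygen combined, and hence of MgO formed). The molar masses are standard values and the 0.22 g ribbon mass is taken from trial 1 in the results below; this sketch is illustrative and not part of the original method.

# Illustrative sketch: theoretical mass gain for 2Mg + O2 -> 2MgO.
M_MG = 24.31   # molar mass of magnesium (g/mol)
M_O = 16.00    # molar mass of oxygen atoms (g/mol)

def theoretical_gain(mass_mg_g):
    # Each mole of Mg combines with one mole of O atoms, so the crucible
    # should gain the corresponding mass of oxygen.
    return mass_mg_g * M_O / M_MG

print(round(theoretical_gain(0.22), 3))  # ~0.145 g, close to the +0.15 g listed for trial 1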

Results

Experimental yield results

Trial | Empty crucible (g) | Magnesium (g) | Before combustion (g) | After combustion (g) | Change (g)
-10°  |
1     | 38.02 | 0.22 | 38.24 | 38.32 | +0.08
2     | 31.21 | 0.09 | 31.30 | 31.33 | +0.03
3     | 40.49 | 0.06 | 40.55 | 40.56 | +0.01
4     | 34.58 | 0.05 | 34.63 | 34.63 | 0
5     | 35.02 | 0.45 | 35.47 | 35.54 | +0.07
6*    | 34.63 | 0.08 | 34.71 | 34.70 | -0.01
7**   | 39.62 | 0.20 | 39.82 | 39.94 | +0.12

*Trial 6: oxygen was only allowed in before the magnesium ignited, and the crucible was then left for 10 minutes.
**Trial 7: the crucible was heated to remove any moisture before the magnesium was added.

Theoretical yield results

Trial | Empty crucible (g) | Magnesium (g) | Before combustion (g) | After combustion (g) | Change (g)
1 | 38.02 | 0.22 | 38.24 | 38.39 | +0.15
2 | 31.21 | 0.09 | 31.30 | 31.36 | +0.06
3 | 40.49 | 0.08 | 40.55 | 40.62 | +0.07
4 | 34.58 | 0.05 | 34.63 | 34.66 | +0.08
5 | 35.02 | 0.45 | 35.47 | 35.77 | +0.30
6 | 34.63 | 0.08 | 34.71 | 34.76 | +0.05
7 | 39.62 | 0.20 | 39.82 | 39.96 | +0.14

Percent error calculation

Trial | Percent error (%)
1 | 46.67
2 | 50
3 | 85.71
4 | 100
5 | 76.67
6 | 80
7 | 14.29

Average percent error through the combustion of magnesium: 64.76%

Percentage yield

Trial | Percentage yield (%)
1 | 53.33
2 | 50
3 | 14.29
4 | 0
5 | 23.33
6 | 20
7 | 85.71

Average percentage yield through the combustion of magnesium: 35.24%

Discussion
It was hypothesised that the experimental yield would be subject to factors causing a loss of mass compared with the theoretical yield, and the results above support this. The percent error is the difference between the experimental and theoretical yields, expressed as a percentage of the theoretical yield. A high percent error, such as >80%, indicates a very large error. The calculated average percent error sits around the middle of the range, at 64.76%, providing a valid, though not particularly reliable, experiment. The errors and flaws must have been considerable to produce such numbers; conducting the experiment several times and averaging the results does, however, make the experiment more reliable. The percentage yield is the ratio of the experimental yield to the theoretical yield expressed as a percentage: the higher the percentage yield, the better the experiment was carried out. The average percentage yield for this study was very low, and only trial 7 gave a reasonable result, because extra steps were taken in that trial to prevent the loss of mass; unlike all the other trials, the crucible was heated to remove any moisture before the magnesium was added. Given more time, every trial would follow this same procedure to give a more reliable and reasonable average. Research confirmed that:
- Theoretical yield is calculated from the stoichiometry of the chemical equation.
- Experimental yield is determined by measurement.
- Percent yield is the ratio of actual yield to theoretical yield.
By conducting the experiment multiple times to improve reliability, it can be seen that the experimental yield is consistently lower than the theoretical yield.
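As a minimal sketch of how the percent error and percentage yield figures above can be reproduced from the two Change columns, the Python snippet below applies those two ratios to trial 1 (theoretical change +0.15 g, experimental change +0.08 g). The function names are illustrative only.

# Illustrative sketch: percent error and percentage yield from the mass changes.

def percent_error(theoretical, experimental):
    # Difference between yields as a percentage of the theoretical yield.
    return (theoretical - experimental) / theoretical * 100

def percentage_yield(theoretical, experimental):
    # Experimental yield as a percentage of the theoretical yield.
    return experimental / theoretical * 100

# Trial 1 from the tables above.
print(round(percent_error(0.15, 0.08), 2))     # 46.67, as reported
print(round(percentage_yield(0.15, 0.08), 2))  # 53.33, as reported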

References
Theoretical Yield and Percent Yield 2016, Chemistry LibreTexts, viewed 29 November 2021, <https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Book%3A_Introductory_Chemistry_(CK12)/12%3A_Stoichiometry/12.09%3A_Theoretical_Yield_and_Percent_Yield>.
Steve 2021, Burning of Magnesium, Rutgers.edu, viewed 29 November 2021, <https://chem.rutgers.edu/cldf-demos/1016-cldf-demo-burning-magnesium>.
TutaPoint Online Tutoring Services 2021, TutaPoint, viewed 29 November 2021, <https://www.tutapoint.com/knowledge-center/view/theoretical-vs-actual-yield>.
Vedantu 2020, Percentage Error, VEDANTU, viewed 26 November 2021, <https://www.vedantu.com/maths/percentage-error>.



THE EFFECT OF AUDITORY AND VISUAL STIMULI ON REACTION TIME Loren Yusef (Year 10) Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

ABSTRACT
Our ability to react to stimuli is key to many daily activities we all perform. This experiment investigates whether the type of stimulus (auditory or visual) affects the speed of a person's reaction time. Participants were asked to complete each type of reaction time test 5 times so their averaged results could be compared. When all the data were collated, it was found that this set of participants, on average, had a faster reaction time to visual stimuli; however, a closer look at the data reveals that this result is not conclusive and that there is no clear difference in reaction time based on the type of stimulus. Although further experimentation and data collection are needed to confirm these results, it appears that a person's reaction time may simply depend on their personal factors.

Introduction
Our reflexes and ability to react to things in our surroundings are key parts of our everyday life. From catching a ball to driving a car, fast reactions are often very important to our survival or our ability to protect ourselves. Our brains receive large amounts of information and stimuli, which they process to determine whether a reaction is needed. This has been part of human survival instincts since primitive times, particularly in the fight, flight or freeze response.

There are different types of stimuli, the main two being visual and auditory. Visual stimuli are any information we process with our eyes; auditory stimuli are information we receive as sound through our ears. Some studies suggest that our brains can process auditory stimuli faster than visual stimuli, allowing us to react faster. This experiment aims to test this theory through online reaction time tests completed by several participants.

Method
A computer was set up with the visual and auditory reaction tests in a quiet place. Participants were then brought over and given instructions on how to complete the reaction time tests. Each participant completed the visual reaction test 5 times and then the auditory reaction test 5 times, with all results being recorded. A total of 20 participants took part in this experiment, providing a total of 200 reaction times to be averaged and compared.

Results
Although the averaged results did seem to show that responses to visual stimuli were faster, a closer look at the data reveals that the results are less conclusive than they first appear. 40% of participants had a faster reaction to auditory stimuli, 5% had equally fast reactions to both stimuli, and 55% had a faster reaction to visual stimuli. Increasing the number of participants, and therefore the pool of results, would make it easier to determine whether these data were influenced by the small pool of participants or whether reaction times simply depend on the person and the situation.
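As a minimal sketch of the analysis described above, the Python snippet below averages each participant's five trials per test and classifies them as faster on auditory stimuli, faster on visual stimuli, or tied. The reaction times shown are invented placeholders for two participants, not the actual measurements from this experiment.

# Illustrative sketch only: the reaction times (ms) below are placeholder values.
results = {
    "participant_1": {"visual": [310, 295, 305, 300, 290], "auditory": [320, 315, 300, 310, 305]},
    "participant_2": {"visual": [280, 290, 285, 275, 295], "auditory": [270, 265, 275, 280, 260]},
}

def mean(times):
    return sum(times) / len(times)

counts = {"auditory faster": 0, "visual faster": 0, "equal": 0}
for person, tests in results.items():
    vis, aud = mean(tests["visual"]), mean(tests["auditory"])
    if aud < vis:
        counts["auditory faster"] += 1
    elif vis < aud:
        counts["visual faster"] += 1
    else:
        counts["equal"] += 1

# Percentage of the participant group in each category (40% / 5% / 55% in this report).
total = len(results)
print({category: 100 * n / total for category, n in counts.items()})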

Discussion

Although this experiment had a well-planned method and effectively addressed its aim, further data and experimentation would be beneficial. The experiment was conducted on a relatively small group of participants and would therefore benefit greatly from further data collection from a larger group, such as 50-100 participants. This would help resolve some of the complications and differences reflected in the data. Although the averaged results did seem to show that responses to visual stimuli were faster, a closer look at the data reveals that the results are less conclusive than they first seemed.

This experiment fulfilled its aim to investigate the effect different stimuli (auditory or visual) have on a person's reflex reaction time. Although the results were not particularly conclusive (with almost a 50/50 split in the data), they do provide a baseline from which further investigation can continue towards more conclusive results.

The experiment ensured validity by controlling all other variables which could have affected the results. These included the age of participants (all being 15-17 years old), the time of day (all participants completed the test within a 1-hour window in the middle of the day), the removal of major distractions (the tests were conducted in a quiet space outside the classroom, away from other devices and students), and system variables (all tests were conducted on the same computer, using the same websites and the same volume and brightness for the auditory and visual tests). Controlling these factors and eliminating additional effects meant that valid data were collected from all participants.

The experiment also ensured the accuracy of the data collected. Because all other variables were controlled, the reaction times collected from individuals and from the overall group were within a small range of each other. Further, all reaction times were measured in milliseconds so that the data were precise and would clearly show any changes or patterns. The reliability of the data is also ensured through the method: reaction times were collected from 20 participants, preventing specific individual factors from skewing the data and making any outliers clear, and each participant repeated each test 5 times to ensure the accuracy of individual results and to eliminate any outlier reaction times within a participant's performance.

References
Jain, A, Bansal, R, Kumar, A & Singh, K 2015, 'A comparative study of visual and auditory reaction times on the basis of gender and physical activity levels of medical first year students', International Journal of Applied and Basic Medical Research, vol. 5, no. 2, p. 124, viewed 17 November 2021, <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4456887/>.
Shelton, J & Kumar, GP 2010, 'Comparison between Auditory and Visual Simple Reaction Times', Neuroscience and Medicine, vol. 01, no. 01, pp. 30–32, viewed 9 November 2021, <https://www.scirp.org/html/42400003_2689.htm>.
Solanki, J, Joshi, N, Shah, C, HB, M & PA, G 2012, 'A Study of Correlation between Auditory and Visual Reaction Time in Healthy Adults', International Journal of Medicine and Public Health, vol. 2, no. 2, pp. 36–38, <https://www.ijmedph.org/sites/default/files/IntJMedPublicHealth_2012_2_2_36_108395.pdf>.



VARYING THE ANGLE OF ATTACK OF A VEHICLE'S SPOILER AND ITS EFFECT ON AERODYNAMIC EFFICIENCY Austin Miao (Year 10) Science Faculty, The Illawarra Grammar School, Western Avenue, Mangerton, 2500

ABSTRACT
The experiment was conducted to find the ideal angle of attack of a spoiler for improving a vehicle's aerodynamic performance. The experiment showed that a spoiler angle of attack of 10° was the most effective. However, due to various issues faced during the experiment, a more precise answer could not be identified.

Introduction
The main concerns of automotive aerodynamics are reducing drag, reducing wind noise, and preventing undesired lift forces at high speeds. A spoiler* on a vehicle aims to 'spoil' undesirable airflow at the rear and so improve the overall laminar flow around the vehicle. The most turbulent airflow a vehicle experiences is at its rear edge, where the shape of the body pulls air downward, creating turbulent, low-pressure air pockets. This turbulent downward pull of high-speed air causes dangerous, undesirable lift at the rear of the vehicle. With a spoiler installed, the main laminar airstream over the vehicle flows around the spoiler without entering the low-pressure pocket. This also allows the airflow to leave the vehicle in a more horizontal manner, so it does not interfere with the vehicle's performance. By preventing airflow from entering a region with an unfavourable body shape, the flow around the entire vehicle, and therefore its overall aerodynamic efficiency, can be improved. An effective spoiler enhances performance, safety, manoeuvrability and fuel efficiency, and so decreases greenhouse emissions. However, does varying the angle of attack of a spoiler affect a vehicle's aerodynamic efficiency? Or does the design of the spoiler make no significant difference to performance, its mere presence being enough to maintain laminar flow at the rear of the vehicle?

*Not to be confused with a 'wing' or an 'automotive airfoil', which is an inverted airfoil mounted on the rear of a vehicle to improve manoeuvrability by generating downforce, which increases tire traction during high-speed turns.

Aim
The aim of the experiment is to determine the angle of attack of a spoiler that best mitigates drag. The horizontal force (air resistance) experienced by the car will be recorded for each set spoiler angle of attack (AOA): -10°, 0°, 10° and 20°.

Hypothesis
A spoiler angle (A) greater than zero degrees and smaller than ten degrees (0° < A < 10°) will allow the vehicle to be the most efficient, as a small angle of attack deflects airflow away from the low-pressure turbulent pocket, thus increasing aerodynamic efficiency. The spoiler angle of 20° will be the least effective at reducing drag, as the greater angle of attack will guide the laminar flow over the vehicle upwards, resulting in a greater area of low air pressure and turbulence. The spoiler angle of -10° will guide airflow towards the back of the vehicle to mitigate the low pressure behind it; however, it may not successfully deflect the laminar airflow over the main body of the vehicle enough to reduce turbulence.

Method
1. Place the car model shell on top of the force meter inside the wind tunnel, aligning the spoiler with the airflow around the model. Place the anemometer next to or near the model to measure the wind speed. Connect the force meter to a Data Logger.
2. Set the Data Logger to take data over a 60 second interval.
3. Adjust the spoiler angle to -10°.
4. Turn on the wind tunnel and immediately start the Data Logger.
5. As the wind speed slowly approaches the desired speed of ~8.74 m/s, light some incense and allow the smoke to enter the air intake of the wind tunnel.
6. Observe the airflow via the smoke travelling around the model to ensure the spoiler is in fact interacting with the laminar flow.
7. After the Data Logger stops taking data, turn off the wind tunnel.
8. Save the data entry on the Data Logger.
9. Repeat steps 2-7 (excluding step 6) with spoiler angles of 0°, 10° and 20°; repeat the run for each angle of attack three times and record the Data Logger results for each test.
10. Place the ramp in the designated indoor area, ensuring it has an angle of elevation of 20°.
11. Fit the car model shell onto its 'wheel base'.
12. Set the spoiler angle of attack to -10°.
13. Place the car at the top of the ramp and let go. Using the tape measure, record the distance the car rolls before coming to a stop.
14. Repeat steps 12-13 with angles of attack of 0°, 10° and 20°.
15. Repeat each spoiler angle of attack category 5 times and record all distances.
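Table 2 later reports the average horizontal force over the last 10 seconds of each 60-second log. As a minimal sketch of that data reduction, the Python snippet below averages the force samples in the final 10-second window; it assumes the logger exports time-stamped force samples, and the sampling rate and readings shown are illustrative rather than taken from the original method.

# Illustrative sketch: average the force samples from the final 10 s of a 60 s log.
# 'samples' is assumed to be a list of (time_s, force_N) pairs exported from the logger.

def average_final_window(samples, window_s=10.0):
    end_time = max(t for t, _ in samples)
    window = [f for t, f in samples if t >= end_time - window_s]
    return sum(window) / len(window)

# Example with made-up readings logged once per second for 60 s:
samples = [(t, 0.70 + 0.01 * (t % 3)) for t in range(61)]
print(round(average_final_window(samples), 3))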

Figure 1: A cardboard coupe car model shell fitted on top of a force meter in a wind tunnel.



Results

Table 1: Distance travelled by the car with varying spoiler angles after release from a ramp.

Spoiler angle | Test 1 (cm) | Test 2 (cm) | Test 3 (cm) | Test 4 (cm) | Test 5 (cm) | Ave. distance (cm)
-10° | 550 | 509 | 546 | 559 | 493 | 531.4
0°   | 526 | 530 | 538 | 528 | 498 | 530.5
10°  | 531 | 539 | 521 | 534 | 544 | 533.8
20°  | 536 | 548 | 537 | 502 | 528 | 530.2

Table 2: The horizontal force experienced by the model car shell with varying spoiler angles at a wind speed of 31.464 km/h.

Spoiler angle | Average horizontal force in the last 10 seconds (N)
-10° | 0.7685742967
0°   | 0.728385555
10°  | 0.58292365
20°  | 0.85831549
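The horizontal forces in Table 2 can also be expressed as a drag coefficient for easier comparison between runs. The Python sketch below rearranges the standard drag equation F = ½ρv²CdA; the air density and the model's frontal area are assumed placeholder values, not measurements from this experiment, so the resulting coefficients are indicative only.

# Illustrative sketch: converting the measured horizontal force to a drag coefficient.
RHO = 1.225    # air density (kg/m^3), assumed sea-level value
AREA = 0.015   # model frontal area (m^2), placeholder - not measured in this report
V = 8.74       # wind speed (m/s) from the method

def drag_coefficient(force_n, v=V, rho=RHO, area=AREA):
    # Rearranged drag equation: Cd = 2F / (rho * v^2 * A).
    return 2 * force_n / (rho * v ** 2 * area)

for angle, force in [("-10°", 0.7685742967), ("0°", 0.728385555),
                     ("10°", 0.58292365), ("20°", 0.85831549)]:
    print(angle, round(drag_coefficient(force), 2))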

Discussion
As seen in the graph, the spoiler angle of attack (AOA) of 10° gave the best aerodynamic efficiency. With the spoiler AOA set to 10°, the vehicle experienced the least horizontal force (0.583 N) and travelled the greatest average distance (533.8 cm) after being released from the ramp. The spoiler AOA of -10° achieved moderate aerodynamic efficiency, with an average force of 0.769 N and an average distance of 531.4 cm. The spoiler AOA of 20° was the least aerodynamically efficient, with the greatest average horizontal force (0.858 N) and an average distance travelled of 530.2 cm. The spoiler AOA of 0° gave conflicting results, as its average horizontal force and average distance travelled do not follow the trend, seen for the other angles, of force and distance being inversely related.

There were various random errors in this experiment that could have contributed to this inconsistency. One was the tyres of the model car occasionally rubbing against the wheelhouse of the model when going down the ramp, at the point where the ramp meets the floor. This occasional rubbing would significantly reduce the distance the vehicle could travel. When slight rubbing was observed, or an outlier in the distance identified, steps were taken to mitigate the issue, such as removing the outlier from the data or redoing the run. However, slight rubbing may have been overlooked, causing the final average distance travelled to be inconsistent with the overall trend of the data. Due to the time limit and the resources provided, the car model could not be modified in time to fix these issues. For future improvement, the car model shell should be fixed to the base with a strong adhesive such as PVA or duct tape, rather than placed on top of the wheel base using grooves cut into the bottom. Another random error was the amount of movement the model experienced in the wind tunnel. The car could not be completely stabilised on top of the force meter because of the construction of the wind tunnel, so the model occasionally tilted sideways at relatively high wind speeds. This tilting caused incorrect horizontal force data to be logged, as the force meter detected displacement and force that were not horizontal. As a result, there are positive readings indicating the model was pushed forward during the early stages of the wind tunnel being active, which would mean the model moved forward while the wind pushed backwards on it, a highly unlikely event. The tilting of the car is the main suspect for these positive horizontal forces; other reasons are unknown. Using fluid simulation software, such as computational fluid dynamics (CFD), would have allowed for a more reliable, valid and accurate experiment and results.

The experiment has limited validity due to the small number of AOAs tested. The 10° increments between spoiler AOAs, and data for only four different angles, meant there was insufficient data to determine the AOA that best mitigates drag. Increments of 1° from -90° to 90° would have been needed to establish a clear trendline, define which spoiler angle(s) give the most aerodynamic efficiency, and show whether the spoiler AOA makes any difference at all to the aerodynamic performance of a vehicle. Moreover, the experiment lacked a control: before the adjustable spoiler was attached, a separate test should have been conducted to determine the aerodynamic efficiency of the model car without a spoiler, to establish whether adding a spoiler mitigated drag at all. Overall, the experimental design had numerous flaws which resulted in inconsistent data and was not well suited to the time and resources provided. The experiment was also not very reliable, since each AOA category was repeated only 3-5 times under these less-than-ideal conditions. For improvement, each AOA category should be repeated at least 10 times, or a simulation software used.

Conclusion
In conclusion, the hypothesis was accurate in identifying 10° as the most effective spoiler angle for improving aerodynamic performance. Although the experiment was not able to pinpoint the single most ideal angle, it provided a range that gives a rough idea of where the ideal angle might lie.

Graph: Average horizontal force experienced by the vehicle (newtons) and its average distance travelled after being released from the ramp (cm), for spoiler angles of attack of -10°, 0°, 10° and 20°.
