Oculus Science Journal Issue 7



FOCUS — An Introduction to the Application of Statistics in Big Data By JIWON LEE

Abstract

Statistics, in its modern sense, is a field of study that aims to analyze natural phenomena through mathematical means. As a highly diverse and versatile discipline, statistics has been developed not only in STEM subjects, but also in the social sciences, economics, and the humanities. More recently, the use of statistics in big data has increased, mainly in connection with machine learning and artificial intelligence. In combination with these fields, statistics has been used in numerical and textual analysis, and is also starting to be applied in areas previously thought of as exclusively the domain of humans, such as the arts. Although there are differences between conventional statistics and these newer developments, their purpose, which is finding associations in data, remains the same.

Introduction


By reviewing the history and current developments of statistics, this article aims to outline a possible future trajectory for statistics in this new age of big data, as well as the larger place statistics now holds in primary and secondary schools as a result of its expanding role across disciplines. It also addresses some realistic criticisms and concerns about the subject in the context of our world's rapidly advancing technology.

Historical Development

While historical records date the first population census, an important part of statistics, to 2 AD during the Han Dynasty of China, the first form of modern statistics is cited to have emerged in 1662, when John Graunt founded the science of demography, the statistical study of human populations. Among his notable contributions, his census methods for analyzing the survival rates of human populations by age paved the way for the framework of modern demography. While Graunt is largely responsible for creating a systematic approach to collecting and analyzing human data, the first statistical work had emerged long before his time in the book Manuscript on Deciphering Cryptographic Messages. Written some time in the 9th century by Al-Kindi, it discusses how statistical inference (the use of sample data to draw conclusions about an entire population) and frequency analysis (the study of repetition in ciphertext) can be used to decode encrypted messages, and it laid the groundwork for modern cryptanalysis and statistics[1]. A major step toward statistics in its modern mathematical form came in the 19th century, when two mathematicians, Sir Francis Galton and Karl Pearson, introduced the notion of the standard deviation, a numerical representation of how far a set of data deviates from its mean; methods of measuring correlation, the strength of the linear relationship between two quantitative variables; and regression analysis, a statistical method for modeling the relationship between an independent variable and a dependent variable. These developments allowed statistics not only to be used more actively in the study of human demographics, but also to enter the analysis of industry and politics. Galton and Pearson later went on to found the first university statistics department and the first statistics journal, marking the beginning of statistics as an independent field of study[2]. The second wave of modern statistics came in the first half of the 1900s, when the subject began to be actively incorporated into research and higher-education curricula. A notable contributor in this period was Ronald Fisher, whose major publications helped outline statistical methods for researchers. He gave directions on how to design experiments to
avoid unintentional bias and other human errors; described how statistical data collection and analysis could be improved through means such as randomized design, a data collection method in which subjects are randomly assigned to the conditions being compared, removing possible unintentional bias on the part of both subjects and researchers; and set an example of how statistics could be used to explore various questions when a valid null hypothesis (the hypothesis in a statistical analysis stating that there is no significant difference, beyond what sampling variation would produce, between the two population variables in question), an alternative hypothesis (the hypothesis stating that there is a discernible difference between the two variables), and a set of data can be generated from an experiment. One such example was Fisher's 1921 analysis of variation in crop yields[3]. In the latter half of the 20th century, the development of supercomputers and personal computers led to greater amounts of information being stored digitally, causing a rapid increase in the volume of stored data. This resulted in the advent of the term "big data," which refers to sizable volumes of data that can be analyzed to identify patterns or trends. Applications of big data range from monitoring large-scale financial activities, such as international trade, to customer analysis for effective social media marketing, and with this growing role of big data has come a corresponding increase in the importance of statistics in managing and analyzing it.
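To make the null-versus-alternative framework concrete, the short sketch below simulates a comparison of crop yields from two hypothetical field treatments and runs a two-sample t-test. The yields, treatment names, and significance threshold are illustrative assumptions, not Fisher's actual figures.

```python
# A minimal sketch of null-hypothesis testing, assuming simulated (not historical) yield data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical yields (tons per hectare) for two fertilizer treatments.
yields_a = rng.normal(loc=5.0, scale=0.6, size=30)
yields_b = rng.normal(loc=5.4, scale=0.6, size=30)

# Null hypothesis: the two treatments have the same mean yield.
# Alternative hypothesis: the mean yields differ.
t_stat, p_value = stats.ttest_ind(yields_a, yields_b)

print(f"t statistic: {t_stat:.2f}, p-value: {p_value:.4f}")
if p_value < 0.05:  # conventional significance threshold
    print("Reject the null hypothesis: the treatments appear to differ.")
else:
    print("Fail to reject the null hypothesis: no significant difference detected.")
```

The same recipe (state the two hypotheses, gather data, compute a test statistic and p-value) underlies most of the hypothesis testing described later in this article.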

The role of statistics in education and research

Having become an official university discipline in 1911, statistics has since been incorporated into education at various levels. Notably, basic statistical concepts were first introduced to high schools in the 1920s. The 1940s and 1950s saw vigorous efforts to broaden the availability of statistical education to younger students, spurred on by governmental and social efforts during and after the Second World War, when statistical analysis became frequently used to analyze military performance, casualties, and more. While educational efforts slackened in the 1960s and 1970s, the boom of big data revived interest in statistics from the 1980s onward[4]. Presently, statistics is taught in both primary and secondary schools, and is also offered as Honors and Advanced Placement courses to many high school students hoping to study the subject at the college level and beyond. Statistics has also become a crucial element of research, ranging from predicting the best price of commodities based on consumer demand in the commercial sphere to determining the effectiveness of treatments in medicine. By incorporating statistics into their research, researchers have been able to represent the credibility of their findings through data analysis, and to gather evidence for causal relationships through hypothesis testing. Statistics is especially necessary and irreplaceable in
research in that, as mentioned, it is the most accurate way of measuring the reliability of the results drawn from a study. Whether that means constructing a confidence interval for a population mean or testing whether a new treatment has any effect on patients compared with a placebo, it places mathematical bounds on the objective aspects of research[5]. Moreover, statistics allows a study conducted on a sample from a defined population to be generalized to that population, provided the research satisfies a number of conditions, such as the sample being randomly chosen. This is one of the greatest strengths of statistics: the ability to extend findings from a sample to an entire population without having to analyze every single data point.
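As a concrete illustration of the confidence-interval idea mentioned above, here is a minimal sketch that estimates a population mean from a random sample; the sample values and the 95 percent confidence level are assumptions chosen purely for the example.

```python
# A minimal sketch: 95% confidence interval for a population mean from a random sample.
import numpy as np
from scipy import stats

# Hypothetical random sample (e.g., measured heights in centimeters).
sample = np.array([170.2, 165.8, 172.4, 168.9, 174.1, 166.5, 171.3, 169.7, 173.0, 167.4])

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
# t-based interval, appropriate for a small sample with unknown population variance.
low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

print(f"Sample mean: {mean:.1f} cm")
print(f"95% confidence interval: ({low:.1f}, {high:.1f}) cm")
```

The interval quantifies how far the sample mean might plausibly sit from the true population mean, which is exactly the kind of mathematical bound on uncertainty the paragraph above describes.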

Statistics in Big Data and Artificial Intelligence

In the age of big data and artificial intelligence (AI), the intellectual reasoning and ability demonstrated by machines rather than humans, statistics is being utilized in education and research more than ever. Often combined with computer science and engineering, statistics is used in many different capacities, such as building probability models through which complex data can be filtered to produce a model of best fit[6]. Statistics continues to be transformed and applied in new ways to cope with the growing size and complexity of big data, as well as the many other rapid advancements being made in artificial intelligence. While a large portion of big data consists of quantitative data, qualitative data also plays a large role in it. Notably, the statistical analysis of text by artificial intelligence has become one of the forefronts of the application of modern statistics. Text mining is the process of deriving information, such as underlying sentiments, from a piece of text. This method is intertwined with sentiment analysis, which, put simply, is the analysis of the subjective content of textual data. The fundamental purpose of sentiment analysis is the classification of text by its underlying sentiment: positive, negative, or neutral[7]. For example, "the chef was very friendly" has a positive underlying sentiment, while the sentence "but the food was mediocre" has a negative connotation. While earlier statistical techniques were too underdeveloped for sentiment analysis to work effectively, recent developments in deep learning, a subfield of AI dedicated to mimicking the workings of the human brain[8], have allowed for greater, more complex sentiment analysis. A main application of sentiment analysis is in natural language processing (NLP), the study of how computers can analyze human language and draw conclusions about the connotation of a piece of text, which is often used to measure sentiment within corporate financial statements. For example, when top management comments on quarterly or annual performance, the level of positivity in the comment can be analyzed through NLP. The top
management report is generally a piece of unstructured text, which NLP converts into a structured format that AI can then interpret. Through this process, the performance levels of companies can be gauged more effectively and accurately. To train computers to identify these implicit undertones, researchers must first provide them with a set of data related to the task at hand. This training method also goes beyond sentiment analysis; if a machine is being trained to recognize and locate a human face in an image, as is often done in phone camera applications, it must be given a large data set of pictures with human faces that can then be used for training purposes. Such a data set can be split into three sections: training data, validation data, and testing data. Training data is the data that helps the AI model learn by picking up patterns within the data set. It consists of two parts: input information and corresponding target answers. Given the input information, the AI is trained to output the target answers as often as possible, and the model can run over the training data numerous times until a solid pattern is identified. Validation data is structured similarly to training data in that it has both input and target information. By running the inputs in the validation data through the AI program, it is possible to see whether the model can produce the target information as results, which would indicate that it is successful. Testing data, which comes after both training and validation data, is a series of inputs without any target information. Mimicking real-world applications, testing data aims to recreate the realistic environment in which the model will eventually run. Testing data makes no improvements to the existing AI model; instead, it checks whether the model can make accurate predictions on a consistent basis[9]. If it proves successful in doing so, the program is judged ready for real-world usage. An example of these types of data used to create an AI program can be found in AlphaGo, a computer program designed to play Go, a two-player board game in which the players alternate placing black and white stones, with the goal of enclosing as much of the board's territory as possible. Countless records of previous professional Go games contributed to the training data used to teach the AlphaGo program. After analyzing the moves taken by those Go players, the creators of AlphaGo set up different versions of the program to play against each other, which served as its validation data. AlphaGo's widely broadcast matches against professional players, most notably Lee Sedol, were the program's testing data[10]. The quality and quantity of training data are also crucial in creating an effective AI model. A large set of refined data will aid the AI in identifying statistical patterns and thereby more accurately fulfilling its purpose. Using the aforementioned facial recognition example, this point can be elaborated more clearly; if a large set of images containing human faces is given to the AI during training, it will be able to recognize patterns within human faces, such as the existence of
two eyes, a nose, and a mouth, and thereby increase its success rate in identifying faces during testing. However, if images of trees and stones are mixed into the training data, the AI program may find it more difficult to accurately perceive patterns within the given data set, and consequently become less effective at fulfilling its initial purpose. Moreover, a larger set of training data allows an AI model to make more accurate predictions, since it has a larger pool of information in which to identify and apply patterns. Training data is used for a range of purposes, such as the aforementioned image recognition, sentiment analysis, spam detection, and text categorization. A common theme among these different types of training data, however, is the possibility of training being done wrongly. Artificial intelligence, with its ability to mimic the process of human thought, also raises the possibility that negative inputs paired with incorrect target results will create a machine with a harmful thought process. For example, if an AI program is continuously shown images of aircraft being bombed and taught that the target result should be positive, the machine may consider terrorist bombings or warfare to be positive when applied to real life. Artificial intelligence, like all things created by mankind, retains the potential to be used for malevolent ends. In particular, because we do not understand all of the statistical techniques being used by computers to analyze training data, we must continue to tread cautiously in our efforts to develop and understand AI through the application of statistics. The statistical methods used to understand and categorize big data are by no means as simple as those used by human statisticians; in fact, many of the mechanisms used by computers to find and analyze patterns in data sets still remain a mystery to us. They cannot be labeled with discrete descriptions such as "standard deviation" or "normal distribution." Instead, they are an amalgamation of various complex pattern-identifying and data-processing techniques. Furthermore, the statistical techniques used in the realm of big data and artificial intelligence are somewhat different from previous applications of statistics. For example, the previously mentioned training data is a novel concept that was only incorporated into statistics after the subject's introduction to AI. Statistics, which had almost exclusively dealt with quantitative data in the past, is now also used to analyze qualitative data, creating the need for such training data. Training data also points to another difference between conventional and modern applications of statistics: statistics in AI and machine learning typically relies on supervised learning to find relationships in data, whereas conventional statistics relies on methods such as regression analysis[11]. Conventional statistics is more intuitive to humans but limited in its usage. On the other hand, statistics in AI and machine learning is essentially a black box that cannot be explained through previous rules, but proves more efficient at deriving implications from larger and more diverse sets of data.
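The training, validation, and testing split described above can be made concrete with a small sketch. The snippet below assumes scikit-learn is available and uses a tiny invented set of labeled restaurant reviews (including the two example sentences quoted earlier) to fit a simple bag-of-words sentiment classifier; it is an illustration of the general workflow, not the method used by any particular production system.

```python
# A minimal sketch of the training/validation/testing workflow for sentiment analysis.
# Assumes scikit-learn is installed; the labeled sentences are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "the chef was very friendly", "but the food was mediocre",
    "wonderful service and great taste", "the soup was cold and bland",
    "friendly staff and a lovely atmosphere", "terrible wait and rude waiter",
    "great dessert, will come again", "the worst meal I have had",
]
labels = ["positive", "negative", "positive", "negative",
          "positive", "negative", "positive", "negative"]

# First carve out a held-back test set, then split the remainder into training and validation.
X_rest, X_test, y_rest, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.33, random_state=0, stratify=y_rest)

# Bag-of-words features feeding a logistic-regression classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(X_train, y_train)                                 # learn patterns from training data
print("validation accuracy:", model.score(X_val, y_val))    # check choices on validation data
print("test accuracy:", model.score(X_test, y_test))        # final check on unseen test data
print(model.predict(["the dessert was delightful"]))        # example prediction on new text
```

With such a toy data set the accuracies are not meaningful, but the division of labor is: the model learns from the training portion, the validation portion guides tuning, and the test portion is touched only once, as a stand-in for real-world use.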


However, despite these many distinctions, the subject's fundamental purpose has not changed; statistics, in the end, is an effort to approach phenomena mathematically, identify patterns in data, and apply our findings to new situations. Consequently, recent developments in statistics and its traditional applications should be used in conjunction with each other, offsetting each other's drawbacks with their strengths.

Criticisms about statistics

Apart from the concerns raised about the use of statistics in the realm of artificial intelligence and big data, conventional statistics also has its fair share of criticisms. As a constantly changing and improving discipline, statistics retains imperfections that we should always be cautious of when using statistical analysis in any situation. For example, in 2012, statistician Nate Silver used statistical analysis to successfully predict the results of the U.S. presidential election in all 50 states[12]. While this brought much media attention to the role of statistics in fields beyond those it was commonly associated with, the event arguably led to an overreliance on statistical prediction in the next U.S. presidential election. As this example shows, there certainly are shortcomings in statistics, both in the collection of statistical data and in our use of it. Among the criticisms frequently made about the subject, there is a recurring theme: that it distorts our perception of phenomena by oversimplifying them. While statistics is a tool for conveniently perceiving the message conveyed by large sets of data, it is, in the end, a discipline based on averages and predictions. The real world does not always act with this in mind, and therefore often deviates from statistical predictions. Moreover, data analysis is mostly done in the realm of quantitative data, so qualitative aspects of socioeconomic phenomena are often underrepresented in statistical results. This also makes it easier for statisticians to use data to understate or exaggerate the issue at hand, making some statistical data unreliable[13]. However, we do need some form of numeric representation for situations that require comparison, so utilizing statistics is necessary. This is why overreliance on statistical analysis is both easy and dangerous. One example is the overreliance on GDP statistics, which usually leads to the conclusion that the economic situation of most citizens of a country is improving. This is not always the case, especially for countries whose economic disparity is also widening. The individual welfare of the population is not accurately and entirely reflected in the GDP of a nation, which only tells us its overall economic status, including its corporations, government, and net exports. Therefore,
relying only on GDP statistics may lead to an inaccurate analysis of the personal welfare of the people. Statistics, in the end, is a discipline of averages and predictions. No matter how much effort researchers put into refining their methods for analyzing numerical data, they will always fall short of fully representing a real-life phenomenon using numbers alone. Statistics will always fall short of giving a definite answer about virtually anything: conclusions about hypotheses are never certain, and comparisons between two sets of data at best give a solid prediction. However, it must also be understood that this is the very nature of statistics. Statistics serves to give a better interpretation of complicated issues by setting aside certain factors that introduce uncertainty during the research process; thus, it may be too much to expect statistics to give an exact one-to-one portrayal of the situation it is analyzing. It is, like all other disciplines, used best when combined with other approaches and fields.
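To illustrate how an aggregate figure such as GDP, or any average, can mask what is happening to individuals, the brief sketch below uses a made-up income distribution in which total and mean income rise while the median income falls; the numbers are invented solely for illustration and do not describe any real economy.

```python
# A made-up example: aggregate growth can hide a decline for the typical person.
import numpy as np

# Hypothetical annual incomes (thousands of dollars) for the same ten people in two years.
year_1 = np.array([20, 22, 25, 28, 30, 32, 35, 40, 45, 60])
year_2 = np.array([18, 19, 22, 24, 27, 29, 33, 38, 44, 140])  # top earner surges, most fall

for label, incomes in [("year 1", year_1), ("year 2", year_2)]:
    print(f"{label}: total={incomes.sum()}, mean={incomes.mean():.1f}, "
          f"median={np.median(incomes):.1f}")
# Total and mean income rise from year 1 to year 2, yet the median (typical) income falls.
```

This is the same mechanism at work when a growing GDP coexists with stagnant or declining individual welfare: the summary statistic is correct, but it answers a narrower question than the one we care about.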

The future of statistics

Statistics, with its ability to explore different social phenomena using situational hypotheses and to reliably interpret nonphysical trends, is a rapidly growing discipline in the modern world. Able to be used in conjunction with a variety of other subjects such as mathematics, economics, the social sciences, and computer science, statistics is relevant and necessary in all kinds of fields. While the future of statistics is not entirely clear (predictions vary as to which domains it will be used in most often and which spheres of knowledge it will most frequently intermingle with), it is safe to say that statistics will take on a similarly important, if not greater, role in our future than it has now. Statistics has already played a large role in helping us understand general trends in data, and with the world becoming increasingly interconnected, this unique strength will only become more necessary. Big data and artificial intelligence are becoming the centerpiece of modern technological development, and because the statistical techniques used in these fields go far beyond the statistical mechanisms previously used by human statisticians, adapting data analysis and statistical practice to this new trend is all the more necessary. Combined with statistics, big data and AI have been explored in numerical and textual analysis for many years. This is not, however, the limit of their potential; efforts are already being made to expand their usage into fields of human creation, such as the arts. A major example is an artificial intelligence algorithm designed to find similarities between
paintings by various artists, developed by a team of researchers from MIT[14]. In a world that is increasingly reliant on more types and greater amounts of big data, statistics must evolve to fit its needs, and at this moment it seems to be walking down the right path.


References

[1] "History of Statistics." Wikipedia, Wikimedia Foundation, 15 Aug. 2020, https://en.wikipedia.org/wiki/History_of_statistics. Accessed 19 Aug. 2020.
[2] "Statistics." Wikipedia, Wikimedia Foundation, 17 Aug. 2020, https://en.wikipedia.org/wiki/Statistics. Accessed 19 Aug. 2020.
[3] Fisher, R. A. "Studies in Crop Variation. I. An Examination of the Yield of Dressed Grain from Broadbalk: The Journal of Agricultural Science." Cambridge Core, Cambridge University Press, 27 Mar. 2009, www.cambridge.org/core/journals/journal-of-agricultural-science/article/studies-in-crop-variation-i-an-examination-of-the-yield-of-dressed-grain-from-broadbalk/882CB236D1EC608B1A6C74CA96F82CC3. Accessed 6 Oct. 2020.
[4] Scheaffer, Richard L, and Tim Jacobbe. "Statistics Education in the K-12 Schools of the United States: A Brief History." Journal of Statistics Education, vol. 22, no. 2, 2014, pp. 1–14, doi:https://doi.org/10.1080/10691898.2014.11889705. Accessed 15 Aug. 2020.
[5] Calmorin, L. Statistics in Education and the Sciences. Rex Bookstore, Inc., 1997.
[6] Secchi, Piercesare. "On the Role of Statistics in the Era of Big Data: A Call for a Debate." Statistics & Probability Letters, vol. 136, 2018, pp. 10–14, https://www.sciencedirect.com/science/article/abs/pii/S0167715218300865. Accessed 16 Aug. 2020.
[7] Gupta, Shashank. "Sentiment Analysis: Concept, Analysis and Applications." Towards Data Science, Medium, 19 Jan. 2018, https://towardsdatascience.com/sentiment-analysis-concept-analysis-and-applications-6c94d6f58c17. Accessed 19 Aug. 2020.
[8] Brownlee, Jason. "What Is Deep Learning?" Machine Learning Mastery, Machine Learning Mastery Pty. Ltd., 16 Aug. 2019, https://machinelearningmastery.com/what-is-deep-learning/. Accessed 20 Aug. 2020.
[9] Smith, Daniel. "What Is AI Training Data?" Lionbridge, Lionbridge Technologies, Inc., 28 Dec. 2019, https://lionbridge.ai/articles/what-is-ai-training-data/. Accessed 20 Aug. 2020.
[10] "AlphaGo: The Story so Far." DeepMind, Google, 2020, https://deepmind.com/research/case-studies/alphago-the-story-so-far. Accessed 6 Oct. 2020.
[11] Shah, Aatash. "Machine Learning vs Statistics." KDnuggets, KDnuggets, 29 Nov. 2016, www.kdnuggets.com/2016/11/machine-learning-vs-statistics.html. Accessed 19 Aug. 2020.
[12] O'Hara, Bob. "How Did Nate Silver Predict the US Election?" The Guardian, Guardian News and Media, 8 Nov. 2012, www.theguardian.com/science/grrlscientist/2012/nov/08/nate-sliver-predict-us-election. Accessed 21 Aug. 2020.
[13] Davies, William. "How Statistics Lost Their Power – and Why We Should Fear What Comes Next." The Guardian, Guardian News and Media, 19 Jan. 2017, www.theguardian.com/politics/2017/jan/19/crisis-of-statistics-big-data-democracy. Accessed 21 Aug. 2020.
[14] Gordon, Rachel. "Algorithm Finds Hidden Connections between Paintings at the Met." MIT News, Massachusetts Institute of Technology, 29 July 2020, https://news.mit.edu/2020/algorithm-finds-hidden-connections-between-paintings-met-museum-0729. Accessed 6 Oct. 2020.


Supertasters: A Genetic Blessing or a Curse? By ERIC YOON

Urged on by your peers, you take your first sip of coffee, and a whirlwind of sensation hits you: the jolt of the caffeine hit, the sweetness of the extra milk added in preparation, and above all an overpowering aftertaste of bitterness that can't leave your tongue. Your friends are laughing at you (allegedly, they didn't break a sweat with their first espresso), but a purely biological reason could account for the difference in reaction: you may just be a supertaster. Constituting about 25 percent of the world's population, supertasters are gifted with a heightened sense of taste. The mark of a supertaster is not just in their tongue: besides having a greater number of taste buds concentrated in the bumps scattered across the organ, supertasters are highly sensitive to 6-n-propylthiouracil (PROP), a bitter chemical that openly reveals itself to the genetic elite of taste. The rest of the population has a very different experience when exposed to the chemical: regular tasters, making up half of the population, find PROP bitter but manageable, while non-tasters, who make up a fourth of the population, cannot taste the chemical at all. This translates to different eating habits among individuals for foods of strong flavor, on top of factors cultivated by culture and the environment. "Having more tastebuds means there are also more pain receptors, and that is why supertasters often cannot handle spicy foods and generally avoid anything bitter," says Dr. Linda Bartoshuk, an American psychologist specializing in genetic variations in taste perception. "Why would nature do that? Because bitter is our poison detection system." Notably, sensitivity to taste can also play a role in the severity of diseases. Researchers at Baton Rouge General divided one hundred patients with COVID-19 into groups of non-tasters, regular tasters, and supertasters, documenting the patients' sensitivity to taste with a taste strip test. They found that non-tasters were at greater risk for symptoms of the virus; in fact, all hospitalized patients were solely from the non-taster category, and not a single supertaster was present in the sample. With a loss of taste being one of the hallmarks of the virus, and documented evidence showing that the perception of taste affects how the body fights respiratory illnesses, the research offers insight into how sensitivity to taste could serve as an indicator in current approaches to treating disease. The comparatively better health of supertasters extends to benefits in lifestyle. Supertasters are less likely to smoke and drink due to an increased sensitivity to the taste of cigarettes and alcohol respectively. They also (wisely) avoid bitter vegetables like broccoli and kale, though science says this is a lost opportunity for the potential well-being of supertasters against diseases like
cancer. Supertasters experience taste in extremes when consuming food; that fact is theirs to make into a blessing or a curse.

Works Cited:

Barham, Henry P et al. "Does phenotypic expression of bitter taste receptor T2R38 show association with COVID-19 severity?." International forum of allergy & rhinology vol. 10,11 (2020): 1255-1257. doi:10.1002/alr.22692
Crosby, Guy. "Super-Tasters and Non-Tasters: Is It Better to Be Average?" The Nutrition Source, 31 May 2016, www.hsph.harvard.edu/nutritionsource/2016/05/31/super-tasters-non-tasters-is-it-better-to-be-average/.
Hayes, John E, and Russell S J Keast. "Two decades of supertasting: where do we stand?." Physiology & behavior vol. 104,5 (2011): 1072-4. doi:10.1016/j.physbeh.2011.08.003
Ly, A, and A Drewnowski. "PROP (6-n-Propylthiouracil) tasting and sensory responses to caffeine, sucrose, neohesperidin dihydrochalcone and chocolate." Chemical senses vol. 26,1 (2001): 41-7. doi:10.1093/chemse/26.1.41
Turner-McGrievy, Gabrielle et al. "Taking the bitter with the sweet: relationship of supertasting and sweet preference with metabolic syndrome and dietary intake." Journal of food science vol. 78,2 (2013): S336-42. doi:10.1111/1750-3841.12008
"What Makes a Supertaster and How to Know If You Are One | CBC Radio." CBCnews, CBC/Radio Canada, 19 Apr. 2019, www.cbc.ca/radio/what-makes-a-supertaster-and-how-to-know-if-you-are-one-1.5103847.


Titan: Our next home? By JOHN K. LEE

Figure 1: A 1024 x 1024 picture of Titan, taken by the Cassini Orbiter on the Cassini-Huygens mission to the Saturnian system. The photo portrays Titan as a fuzzy, orange ball. Source Credit: NASA Planetary Data System, Jet Propulsion Laboratory, Caltech (LINK)

For the past few years, humanity has given unrequited attention to Earth's next-door neighbor, Mars. With aerospace companies like SpaceX on the rise, and new rovers successfully landing on the surface of Mars, it is nearly impossible not to be excited about new discoveries. It has been three weeks since the landing of the most recent Mars rover mission, Perseverance, and this 3-meter-long metallic beast has been capturing nearly 400 photos every single day, allowing scientists to thoroughly investigate the geographic features and habitability of the planet. However, it seems as if the spotlight is slowly shifting from Mars to a lesser-known celestial body: Titan. Titan, characterized by its murky atmosphere and its musty, pear-like color, is the largest of Saturn's many moons. Discovered by Christiaan Huygens with the help of his brother Constantijn Huygens, Titan was first spotted through the brothers' 12-foot-long telescope on March 25, 1655. Despite being only about 1.5 times the diameter of and 1.8 times heavier than our moon, Titan has gained much recent attention for its potential habitability. Robert Zubrin, an American aerospace engineer, asserts in his 1999 book "Entering Space: Creating a Spacefaring Civilization" that "in certain ways, Titan is the most hospitable extraterrestrial world within our
solar system for human colonization" (Zubrin 163-166), citing Titan's climate and geological activity as some of the primary reasons for its possible habitability. While Titan is primarily composed of ice and rock, its seas, lakes, rivers, and clouds are teeming with an abundance of organic molecules. Accordingly, it has long been predicted that Titan could host some form of extraterrestrial organism, although none have been found as of now. In discussing Titan's potential habitability, we must talk about its atmosphere, climate, and geographic features. According to an article in the journal Planetary and Space Science, Titan's atmosphere is a thick layer of nitrogen (96.3 percent), isotopes of nitrogen (1.08 percent), methane (2.17 percent), and a variety of other organic compounds, including but not limited to acetylene, ethylene, ethane, propyne, propene, propane, and diacetylene. Due to this chemical composition and Titan's relatively small mass, the moon has a very extended, opaque atmosphere that blocks out nearly all visible light. Furthermore, Titan has both a greenhouse and an anti-greenhouse effect: while its methane atmosphere traps much of the sunlight that enters, it also reflects light back into space, keeping more from entering the atmosphere and helping regulate the moon's temperature. Titan is a cold, barren moon, but this might change in the future. Currently, Titan has a surface temperature of about -180°C. In contrast to our planet's roughly 70 percent cloud cover, only 1 to 8 percent of Titan's sky is covered by clouds, and those clouds bring infrequent liquid methane rains. However, when the Sun becomes a red giant, Titan is predicted to fall within the habitable zone and to have more suitable temperatures. Titan's geology mainly consists of wide, rocky plains, with occasional ice Ih, the hexagonal crystal ice we are used to seeing on Earth. More interesting are the hydrocarbon seas, the Kraken Mare being the largest of them all, and the possible liquid water sea underneath Titan's surface. This is very important in that, firstly, water is obviously a vital component of life. Secondly, in devising possible forms of life on Titan, a hydrocarbon solvent-based model has been proposed. While hydrocarbons are less useful as a solvent compared to water, it is possible that, analogous to marine life on Earth, populations of organisms might thrive in the hydrocarbon oceans of Titan, feeding off ethane and methane. Furthermore, a methanogenic, oxygen-free model of life has been proposed by a team of Cornell University researchers, according to a 2015 Phys.org article. Due to these valuable, possibly life-sustaining properties of the moon, several aerospace organizations have set out to probe deeper into Titan. In 1979, Pioneer 11 first visited the Saturnian system. Later, in 2004, the Cassini-Huygens spacecraft arrived at the Saturnian system and took close-up shots of Titan. On Jan. 14, 2005, the
Huygens probe landed on Titan and transmitted data from the surface for a short time before falling silent. The Dragonfly mission, consisting of a robotic rotorcraft designed to detect organic materials and discover new biochemical properties, will launch and head for Titan in 2027. According to one New York Times article, Valerio Poggiali, a researcher at Cornell University, has even suggested that a submarine could be on the way. With increasingly sophisticated technology on the rise, the future of Titan exploration seems bright. While the habitability of Titan is uncertain at best, such global scientific interest has created an opportunity for many people to work toward clearing up that uncertainty, and perhaps one day making Titan a habitable moon.


Works Cited:

"Catalog Page for PIA14602." NASA, NASA, photojournal.jpl.nasa.gov/catalog/PIA14602.
"Mars 2020 Perseverance Rover." NASA, NASA, mars.nasa.gov/mars2020/.
"Huygens and the Improvement of the Telescope." Digitaal Wetenschapshistorisch Centrum Huygens and the Improvement of the Telescope Comments, www.dwc.knaw.nl/biografie/christiaan-huygensweb/instrumenten-en-uitvindingen/huygens-and-the-improvement-of-the-telescope/?lang=en.
Zubrin, Robert. Entering Space: Creating a Spacefaring Civilization. Jeremy P. Tarcher/Putnam, 2000.
Magee, Brian A., et al. "INMS-Derived Composition of Titan's Upper Atmosphere: Analysis Methods and Model Comparison." Planetary and Space Science, vol. 57, no. 14-15, 2009, pp. 1895–1916, doi:10.1016/j.pss.2009.06.016.
Gohd, Chelsea. "Methane Rain Falls on Titan's North Pole from Cloudless Skies." Astronomy.com, 21 Jan. 2019, astronomy.com/news/2019/01/methane-rain-falls-on-titans-north-pole-from-cloudless-skies#:~:text=But%20while%20scientists%20aren%27t,gravity%2C%20the%20raindrops%20fall%20slower.
Shiga, David. "Titan's Changing Spin Hints at Hidden Ocean." New Scientist, 20 Mar. 2008, www.newscientist.com/article/dn13516-titans-changing-spin-hints-at-hidden-ocean/.
Ju, Byanne. "Life 'Not as We Know It' Possible on Saturn's Moon Titan." Phys.org, Phys.org, 27 Feb. 2015, phys.org/news/2015-02-life-saturn-moon-titan.html.
"In Depth." NASA, NASA, 16 July 2019, solarsystem.nasa.gov/missions/pioneer-11/in-depth/.
"Cassini Orbiter." NASA, NASA, 25 Apr. 2019, solarsystem.nasa.gov/missions/cassini/mission/spacecraft/cassini-orbiter/.
Overbye, Dennis. "Seven Hundred Leagues Beneath Titan's Methane Seas." The New York Times, The New York Times, 21 Feb. 2021, www.nytimes.com/2021/02/21/science/saturn-titan-moon-exploration.html.


What is soft robotics? By WOOSEOK KIM

Soft robotics is a relatively new field of study that has been skyrocketing in popularity. Simply put, soft robotics involves using softer, more flexible materials like rubber and silicone rather than the rigid components commonly used in mechanical devices. While it may seem somewhat counterintuitive to use elastic materials, as doing so may result in decreased stability and strength, the increased flexibility provides distinct advantages that cannot be achieved by rigid movement, mainly because elasticity is an uncommon attribute among typical machines. As a result, this idea of incorporating bendable elements into machines for increased fluidity has enabled multiple breakthroughs on previously impossible tasks. The first records of development in machines involving elastic attributes date back to the 1950s, when an American physicist named Joseph Laws McKibben invented pneumatic artificial muscles. Pneumatic artificial muscles, also known as PAM, were initially invented for the purpose of aiding polio patients who suffered from paralysis. Unlike the conventional actuators of the time, PAM was not only lighter but also more efficient and much easier to manipulate, resulting in better functionality. As a result, PAM was soon widely utilized in multiple different machine designs. Since this change in paradigm, the use of flexible machines in various fields has increased steadily. However, considering that well over half a century has passed since the birth of the first soft machines, the specific concept of 'soft robotics' was established quite recently, during the late 1990s. Even then, it was considered a minor field that did not receive much attention from the scientific community. The fact that there were fewer than a hundred research papers on the topic of soft robotics before the 21st century effectively displays its obscurity within the scientific community. It has only been a decade or so since the field of soft robotics started gaining more and more recognition. Beginning around 2008, research papers on soft robotics increased exponentially in number, and the phrase "soft robotics" was accepted as a keyword in academic articles. Spearheaded by prestigious universities including Harvard University and the Massachusetts Institute of Technology, publications on soft robotics came to dominate the robotics category of scholarly databases such as the Web of Science by 2015.


Figure 1: One of the multiple soft robot actuators being developed by researchers at NASA for space exploration. Source Credit: LINK

Nowadays, soft robotics is utilized in a wide range of fields, from manufacturing to medicine. One of the more novel breakthroughs in the use of soft robotics was the development of more effective implants. A key issue common to most implant devices until now has been the body's rejection of the implant. For instance, fibrosis, the buildup of fibrous tissue around the implanted device, is a potentially hazardous side effect caused by this foreign body response. However, researchers from the National University of Ireland Galway, AMBER, and the Massachusetts Institute of Technology discovered how this problem could be solved by implementing soft robotics technology in implants. Their study stated that the body's response could be mitigated by creating oscillations using flexible soft robotic mechanisms. As this example shows, soft robotics is slowly but steadily becoming essential to the development of various technologies focused on the safety and convenience of humanity.


Works Cited:

Alici, Gursel. "Softer Is Harder: What Differentiates Soft Robotics from Hard Robotics?" MRS Advances, vol. 3, no. 28, 5 Feb. 2018, pp. 1557-68. Springer, doi:10.1557/adv.2018.159. Accessed 20 Mar. 2021.
Bao, Guanjun, et al. "Soft Robotics: Academic Insights and Perspectives through Bibliometric Analysis." Soft Robotics, vol. 5, no. 3, June 2018, pp. 229-41. Liebert Online, doi:10.1089/soro.2017.0135. Accessed 20 Mar. 2021.
"Soft Robotics Breakthrough Manages Immune Response for Implanted Devices." MIT News, Massachusetts Institute of Technology, 4 Sept. 2019, news.mit.edu/2019/soft-robotics-breakthrough-manages-implanted-devices-immune-response-0904. Accessed 20 Mar. 2021.


Turning Down the Volume: Noise Pollution in the Ocean By XAVIER KIM

The ocean is celebrated worldwide as the home of some of the most biodiverse ecosystems in the world. It also happens to be the single most vulnerable biome on Earth, plagued with environmental issues like overfishing, acidification, pollution, warming, and habitat fragmentation; the list goes on. Perhaps one of the most urgent problems overlooked by the public and scientists alike is noise pollution. Also known as sound pollution, the phenomenon is caused by manmade noise intruding on nature and can have devastating consequences. That is not to say, however, that our understanding of the issue is entirely new. Though researchers in the field have known about the potential harm to marine life from anthropogenic noise for years, its severity was always overshadowed by its terrestrial counterpart. Unlike oceanic sound pollution, which the average person may be unfamiliar with, terrestrial sound pollution in places like cities enjoys significantly more public awareness, as it is often taught in schools. Nevertheless, scientists' studies and reports on the ocean's noise pollution have been accumulating quietly but steadily over the past few decades. On Feb. 5, a paper titled "The soundscape of the Anthropocene ocean" was published in the journal Science by marine ecologist Carlos M. Duarte and his team of over 20 other experts. They screened upwards of 10,000 papers written by other researchers and compiled their information to create the most extensive synthesis to date of the evidence of the environmental repercussions of noise pollution in the ocean. All in all, its findings show that the situation is far more critical than previously thought.


Figure 1: Model depicting the temporal duration in direct comparison to the spatial magnitude of various biotic, geologic, and anthropogenic sources of sound. Source Credit: science.sciencemag.org (LINK)

A soundscape is the sum of all the acoustic elements in a given setting, and the ocean soundscape is particularly vast. Sound plays a crucial role for aquatic organisms in activities such as communication, navigation, and mate attraction. The Science article claims that prior to the industrial revolution, the ocean soundscape was in a relative state of balance. It was composed primarily of geophony (sounds from geological sources) and biophony (sounds from biotic sources), with little to no interference from anthrophony (sounds from human sources). As technology advanced, so did the volume and ubiquity of manmade sounds in the ocean. The two most prominent sources of anthrophony are ship traffic, which is doubling every decade, and oil and natural gas exploration. The latter is particularly concerning, as it involves seismic air guns that fire massive blasts to map out where fossil fuels lie in the Earth's crust. These blasts are six to seven orders of magnitude louder than the noise produced by ships. Marine organisms are particularly sensitive to the increasing volume of anthropogenic sound in their natural habitat. The sound can hinder communication, cause permanent hearing damage, and even cause mortality. In 2002, for example, the orca whale population off the coast of the Broughton Archipelago in British Columbia faced a sharp decline when acoustic harassment devices were installed to prevent wild seals from feeding on salmon farms. The orca whales
migrated out of the area to avoid the sound and were thereby forced to give up territory and compete for the same resources in a smaller area.
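For readers unfamiliar with how "orders of magnitude" relate to the decibel figures usually quoted for underwater noise, the small sketch below converts an intensity ratio into a decibel difference. The six-to-seven-orders-of-magnitude figure is taken from the text, and the conversion assumes the comparison is in acoustic intensity.

```python
# Converting an intensity ratio to a decibel difference: dB = 10 * log10(I1 / I2).
import math

for orders_of_magnitude in (6, 7):
    ratio = 10 ** orders_of_magnitude        # intensity ratio between air-gun blast and ship noise
    db_difference = 10 * math.log10(ratio)   # each order of magnitude adds 10 dB
    print(f"{orders_of_magnitude} orders of magnitude is about {db_difference:.0f} dB louder")
```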

Figure 2: Diagram illustrating four separate situations of varying levels of noise pollution: the state of the ocean before the Industrial Revolution, the current conditions, and both a poorly managed and an optimal scenario as anthropogenic activity continues to increase into the future. Source Credit: science.sciencemag.org (LINK)

Though the overall situation may initially seem bleak, there are multiple solutions that could, in theory, be put into practice to reduce oceanic sound pollution. Some solutions are relatively straightforward: slow down ships to reduce the sound created, or shift maritime trade routes to avoid sensitive ecosystems. Other solutions require a more technical approach. The implementation of quieter propellers is a viable solution that not only significantly reduces cavitation, the loud noise produced by the collapse of vapor bubbles, but also increases fuel efficiency. Scientists are also looking into alternative methods of oil and gas exploration that do not require seismic air guns. Marine vibrators, or vibroseis, do not produce the loud impulse noises that current practices do; at the same time, the limited sound they produce falls within a considerably narrower band of frequencies. Currently, companies are already working on prototypes where the
vibroseis devices are mounted on submarine vehicles that roam the seafloor, avoiding harmful impacts on marine life. With all the tools set to mitigate noise pollution in the oceans, there are no tangible obstacles standing in the way of taking vital steps to resolve the issue. All that needs to happen at this stage is to inspire purposeful political action: the exact goal of Duarte’s article published last month. Historically, the law has never recognized noise as an impactful factor in the degradation of marine life. “There needs to be a policy that mandates acoustic mitigation in the marine environment,” said Duarte. “We have noise standards for cars and trucks, why should we not have them for ships?”

Works Cited:

Baker, Aryn. "Humans Are Making Oceans Noisier, Harming Marine Life." Time, Time, 5 Feb. 2021, time.com/5936110/underwater-noise-pollution-report/.
Duarte, Carlos M., et al. "The Soundscape of the Anthropocene Ocean." Science, vol. 371, no. 6529, 2021, doi:10.1126/science.aba4658.
Imbler, Sabrina. "In the Oceans, the Volume Is Rising as Never Before." The New York Times, The New York Times, 4 Feb. 2021, www.nytimes.com/2021/02/04/science/ocean-marine-noise-pollution.html.
Schiffman, Richard. "How Ocean Noise Pollution Wreaks Havoc on Marine Life." Yale E360, 31 Mar. 2016, e360.yale.edu/features/how_ocean_noise_pollution_wreaks_havoc_on_marine_life.


New hope for sustainable fuel: Green Hydrogen By SUNMIN LEE

Figure 1: Mechanism used to produce green hydrogen through electrolysis. Source Credit: columbia.edu (LINK)

Did you know that about 90 percent of all atoms in the universe are hydrogen? This might be exciting news to those who have read about President Biden's plan to invest in clean energy. Hydrogen fuel has been a well-known resource for a long time, especially since it produces only one byproduct when burnt: water. Unfortunately, hydrogen atoms do not exist by themselves, but rather in combined forms, as they do in water and plants. Therefore, hydrogen must be separated from other atoms before it can be turned into energy, and this is not as easy as it sounds. This separation step has in fact been the major problem for researchers. For years, the only practical way to turn hydrogen into usable energy was by using fossil fuels. Through a process called steam methane reforming, a catalyst helps methane and high-temperature steam react, producing hydrogen as well as carbon monoxide. The carbon monoxide then reacts with more steam, producing additional hydrogen. However, hydrogen is not the only product of this chemical reaction. In both steps of steam methane reforming, a substantial amount of carbon dioxide is produced, and due to the high demand for hydrogen energy, mainly from chemical industries such as ammonia, fertilizer, and oil-refining manufacturers, some 830 million metric tons of carbon dioxide are produced every year. Because of this negative side effect, the hydrogen created through this process is called 'grey hydrogen.' Although grey hydrogen is an efficient energy source for many industries, its continuous production poses a great threat to the environment through air pollution. Therefore, scientists have been focusing heavily on searching for new hydrogen-producing methods that could be more eco-friendly.


One alternative is to capture and store the CO2 from the steam methane reforming process; the resulting hydrogen is called 'blue hydrogen.' However, there is a better mechanism. Instead of using fossil fuels to generate energy, as the two aforementioned methods do, we can use renewable energy to produce what we call 'green hydrogen.' Using solar or wind power, an electric current can split water into hydrogen and oxygen, a process called electrolysis. If electrolysis could be scaled up and the majority of hydrogen fuel replaced with green hydrogen, it could provide a valuable new resource that is both efficient and nonpolluting. So why are we not using green hydrogen yet? The reason for the slow development of green hydrogen is the extreme amount of electricity needed for electrolysis. However, with recent technological advancements leading to cheap renewable energy and its growing availability, it seems as if green hydrogen could be produced at scale in the near future, possibly within the next decade. Moreover, the electrolyzer, the cell in which electrolysis occurs, has become more efficient, and with continued progress in the industry it will soon be advanced enough that the expense is no longer a concern. Recently, green hydrogen has been of interest to various sectors for its great potential as a top sustainable fuel. Some companies have started to attach electrolyzers directly to their renewable power projects, and many countries, including Chile, China, and South Korea, have revealed plans to implement hydrogen fuel in vehicles and other technologies. Furthermore, the U.S. Department of Energy and the European Union have committed 100 million and 430 billion dollars, respectively, to support research on hydrogen fuel cells, with high expectations for the future green hydrogen could bring. As many people believe, the success of green hydrogen could introduce a new level of energy efficiency. Even the Paris Agreement goal of eliminating approximately 10 gigatons of CO2 per year does not seem so distant with green hydrogen.
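To get a rough sense of why electrolysis is so electricity-hungry, the sketch below estimates the theoretical minimum energy needed to split water, using the standard enthalpy of the reaction (roughly 286 kJ per mole of hydrogen produced). Real electrolyzers need noticeably more than this, so the figure is a lower bound offered for illustration only.

```python
# Back-of-the-envelope estimate of the electricity needed for water electrolysis:
# 2 H2O -> 2 H2 + O2, with roughly 286 kJ required per mole of H2 produced (standard enthalpy).
ENTHALPY_KJ_PER_MOL = 286.0     # kJ per mole of H2 (higher-heating-value basis)
MOLAR_MASS_H2_G = 2.016         # grams per mole of H2

kj_per_kg = ENTHALPY_KJ_PER_MOL * (1000.0 / MOLAR_MASS_H2_G)  # kJ per kilogram of H2
kwh_per_kg = kj_per_kg / 3600.0                               # 1 kWh = 3600 kJ

print(f"Theoretical minimum: about {kwh_per_kg:.0f} kWh per kg of hydrogen")
# Prints roughly 39 kWh/kg; practical electrolyzers consume more, which is why cheap
# renewable electricity is the key to making green hydrogen competitive.
```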


Works Cited:

"From Grey and Blue to Green Hydrogen." TNO, www.tno.nl/en/focus-areas/energy-transition/roadmaps/towards-co2-neutral-industry/hydrogen-for-a-sustainable-energy-supply/.
Cho, Renee. "Why We Need Green Hydrogen." State of the Planet, 7 Jan. 2021, blogs.ei.columbia.edu/2021/01/07/need-green-hydrogen/.
Carbeck, Jeff. "Green Hydrogen Could Fill Big Gaps in Renewable Energy." Scientific American, Scientific American, 10 Nov. 2020, www.scientificamerican.com/article/green-hydrogen-could-fill-big-gaps-in-renewable-energy/.


Biomimetic glue opens door to new opportunities By HANNAH KIM

Figure 1: A diagram of the mussel plaque and the chemical compositions of its glue.

When things need a quick fix, there are certain tools that we all turn to. One of them is glue. However, the glues we use have not only limits but also several problems. Many are too weak or cannot hold in a wet environment. Even without these limitations, most glue is unsuitable, even toxic, for use inside the body during medical procedures. In addition, glues are bad for the environment: the toxic chemicals within them can pollute the soil and damage the fauna and flora of an area. Recognizing these problems, scientists have striven to find an alternative. The solution, to the surprise of many scientists, was closer than we thought. Mussels are bivalve mollusks found in oceans, lakes, and streams. Spending most of their lives attached to surfaces, mussels feed on the plankton around them. What is more fascinating, however, is that mussels are able to adhere to almost any surface, even underwater. This is in stark contrast with the current glues we have, which cannot stick properly underwater. As a result, researchers began investigating how mussels maintain their adhesiveness underwater. Mussels extend thin fibers known as byssal threads from their shells and attach to surfaces using a strong adhesive. The end of each thread has a flat, circular surface called a plaque, which secretes adhesive proteins with unique chemical structures. The most notable component among these proteins is the amino acid DOPA, also known as 3,4-dihydroxyphenylalanine. DOPA has a
catechol functional group, which allows it to form strong bonds with the catechols on adjacent molecules, as well as with the metal atoms that make up most natural solid substrates. Another notable feature is the ability of the catechol chains to work their way into the surface instead of interacting with water molecules, which allows the adhesive to remain fixed even in the presence of water. Research on mussel glues started decades ago, and hundreds of research papers have already been published on the topic. The reason we are still not able to produce a glue that mimics this behavior of mussels is that the mechanism behind the adhesive proteins is more complicated than the mere assembly of proteins. Factors such as salt concentration, timing, and pressure must be considered, and these are harder to control. Companies such as ACatechol are looking for ways to recreate these precise conditions and to mass-produce the resulting glues for future use. Though not yet ready for commercial use, a number of plans are already set up to bring this glue into effect. Among the various fields that aim to use mussel adhesion, most of the attention is concentrated in medicine. In dentistry in particular, dentists have struggled to attach dental crowns and dentures because the inside of the mouth is always humid and wet. Most glues are not very effective in wet conditions and eventually fall off, leading to many problems for patients. The development of "mussel glues" has the potential to resolve this issue, since they can stick even in water. In addition to dentistry, these glues could also be used to treat blood vessels or even to deliver drugs to specific regions of the body. Medical researchers are also considering the possibility of using this glue for tissue adhesion, as it is not only safe for the human body but also very effective. This glue could also benefit the environment. Glues made through biotechnology are easily degradable, meaning that objects put together with such an adhesive could be recycled despite having glue on them, and the risk of polluting the soil, as glues made from toxic chemicals do, is reduced. Despite the challenges associated with creating these adhesives, some predict that they could be on the market in just a few years. Although the glues have not yet been used in real-life settings, judging from what the experiments have shown so far, many people are expecting great results in the future.


Works Cited:

AskNature Team. "AskNature - Biological Strategy - Sticky Proteins Serve as Glue." AskNature Sticky Proteins Serve as Glue Comments, asknature.org/strategy/sticky-proteins-serve-as-glue/.
Venere-Purdue, Emil. "Mussels Inspire Glue That Sticks despite Water." Futurity, 13 Mar. 2017, www.futurity.org/mussels-glue-adhesives-1377312-2/.
Service, Purdue News. "Shellfish Chemistry Combined with Polymer to Create New Biodegradable Adhesive." Purdue University News, www.purdue.edu/newsroom/releases/2017/Q1/shellfish-chemistry-combined-with-polymer-to-create-new-biodegradable-adhesive.html.
Patel, Prachi. "Mussel-Inspired Polymer Glue Sticks to Wet Surfaces." C&EN, cen.acs.org/materials/adhesives/Mussel-inspired-polymer-glue-sticks/98/web/2020/07.
Mian, Shabeer Ahmad, and Younas Khan. "The Adhesion Mechanism of Marine Mussel Foot Protein: Adsorption of L-Dopa on - and -Cristobalite Silica Using Density Functional Theory." Journal of Chemistry, Hindawi, 15 Jan. 2017, www.hindawi.com/journals/jchem/2017/8756519/.
"Solvent-Based Adhesives & The Environment." Sure Tack Systems, 31 May 2018, suretacksystems.com/2016/11/are-solvent-based-adhesives-bad-for-environment/#:~:text=Soil%20Pollution,comes%20into%20contact%20with%20it.

