The Journal of Young Physicists - Issue 1

(journalofyoungphysicists.org)

CONTENTS

Preface
Our founder on the theory of everything
Will the expansion of the universe result in a Big Freeze, Big Crunch or Big Rip?
General relativity: a simple discussion
An introduction to the brachistochrone problem
Two problems with scientific research
Black holes
The battle of the milk bags
White holes: a place of no entry
Quantum chemistry and chemoinformatics
The Standard Model of particle physics
Changing the paradigm: from a childhood dream to the first person on Mars
Velocity of falling objects with non-negligible air resistance
Grape plasma: finally explained
The physics behind oximeters
The automobile rebirth: electric and autonomous
Quantum immortality: could you survive 50 gunshots?
We shouldn’t exist: the matter-antimatter annihilation
Ultra cold atomic systems: the investigation and application of Bose-Einstein condensates
The theory of everything: quantum gravity


PREFACE

Let me preface this edition of the Journal of Young Physicists by setting up an important expectation: the articles published here may not be on par with what most journals would consider worthy of publication. But you must keep in mind that the JYP is essentially a physics blogging site. You’re not going to find any groundbreaking primary research in these articles, and if you read the Journal with that expectation, you will inevitably be disappointed.

Instead, we chose to publish this collection of articles because we believe in giving the next generation of researchers in the physical sciences a chance to share what they find significant. We also think it can be immensely valuable for students with an interest in physics to read articles written by researchers their own age, as research papers written by graduate students, postdocs and professors can be very hard reads for many young physicists. Hence, we find this collection of essays to be a nice introduction to some interesting, more advanced physics, narrated from the perspectives of fellow aspiring scientists. This is not to say that we’re only targeting high schoolers; in fact, we would absolutely encourage more experienced researchers to give this a read. I’m sure you’ll be interested in the perspectives that some of our authors give on certain topics!

Overall, as the Chief Operational Officer of this Journal, I have to say that we’ve created something that is, at the very least, unique. We deeply hope that the first edition of the Journal of Young Physicists offers you, regardless of your age and experience, some insight into the intriguing subject of physics. Thank you for supporting the Journal, and we hope you enjoy it.

Leonhard Nagel


OUR FOUNDER ON THE THEORY OF EVERYTHING

It gives me immense satisfaction to bring out the first issue of the Journal of Young Physicists. I founded the JYP nearly two years ago, with the aim of providing young students a platform to get their physics articles reviewed and published. At the same time, the JYP aims to communicate physics to a broader audience and spark interest in the subject. We started our journey in July 2020, and we’ve already published nearly 40 physics articles and formed a team of over 30 young physicists from around the world. We publish articles on physics, math and other related topics for a general audience. Our articles mostly review a known topic in an engaging manner. We also publish blog articles, where the author may share his/her personal views (which should, of course, be scientifically and ethically acceptable) on a physics topic. We hope to grow into a leading platform for young students to publish their physics articles one day. Before we start, you should know that if you are a young student interested in physics and mathematics, you can join our team! You can be a contributor or editor, or both. Or you can simply read our articles and explore physics. If you want to know more about anything related to the JYP, feel free to reach out to us at journalofyoungphysicists@gmail.com.

Physics has been progressing at dizzying speeds from the very beginning. Although fundamental physicists have been stuck with theories like string theory and loop quantum gravity for a long time, there have been exciting breakthroughs in other areas of physics. In our first issue, we present some very interesting articles written by our contributors. While most of these articles review a known physics topic, there are also articles discussing recent developments in physics. Take a look at the contents to get an idea of the variety of topics covered.

Some of us do physics for a living, and some of us focus on how we can make our lives easier and more convenient using physics. But a good number of us (I think) are deeply interested in the fundamental nature of reality, in the theory of everything, or in other words, in fundamental physics. A theory of everything. We all know what that is: a theory that would explain all possible interactions in Nature. Our universe is dynamic, and everything in it is changing. As you may know, force causes this change. Force can change the state of motion, direction and configuration of an object. There are many different forces, but often many forces are just different forms of the same fundamental force. Over the years, physicists have come to the conclusion that there are four fundamental forces: the gravitational force, the electromagnetic force, the weak nuclear force and the strong nuclear force. These forces can pretty much explain every interaction that can possibly take place in our universe. Just think about it. All the possible interactions. Physicists today are trying to unify these four forces into a single theoretical framework. Then, in a single theoretical framework, we could explain all the interactions that can take place in this universe. Such a theory deserves to be called a theory of everything.

Perhaps the best example of unification in physics is the unification of electricity and magnetism by James Maxwell. Electricity and magnetism were initially understood to be different phenomena. There are electric fields and magnetic fields. A charge at rest produces an electric field, and a charge in motion produces a magnetic field. Electric field lines point away from positive charges and toward negative charges. A positive or a negative charge can exist independently, and so electric field lines can emerge out of a positive charge and spread to infinity. But in magnetism, we see that magnetic monopoles can’t exist (or can they?). If there is a north pole, there must be a south pole; in other words, a north pole can’t exist independently of a south pole. Magnetic lines of force travel from the north to the south pole outside the magnet. Since monopoles can’t exist, the number of magnetic field lines entering a region must be equal to the number of field lines leaving that region: as there are no magnetic charges at which magnetic field lines begin or end, any magnetic field line entering a closed surface must also exit through it. Next, Michael Faraday discovered the phenomenon of electromagnetic induction. Just like a moving charge can produce a magnetic field, a changing magnetic flux can produce an electric current. This principle is used in generators, for instance. What Maxwell did was describe all of electromagnetism in four equations. These were not his own equations, but he did modify Ampère’s law. Maxwell also found that electric fields and magnetic fields can ‘reinforce’ each other and propagate as an electromagnetic wave even in vacuum. He also found that the speed of this electromagnetic wave is equal to the speed of light, and from this he concluded that light is an electromagnetic wave. Then it was well established that light has a wave nature. In Newton’s day, light was thought to be made of a stream of ‘corpuscles,’ but to explain phenomena like interference and diffraction, it was proposed that light has a wave nature.
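For reference (this block is not part of the original essay), the four equations now known as Maxwell’s equations can be written in modern SI differential form as

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$

The second equation encodes the absence of magnetic monopoles mentioned above, the last term of the fourth equation is Maxwell’s modification of Ampère’s law, and together the equations admit wave solutions travelling at $c = 1/\sqrt{\mu_0 \varepsilon_0}$, the speed of light.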


Now, let us discuss gravity. Gravity here is the odd one out, because all the other forces can be explained in terms of the exchange of certain fundamental particles, but we haven’t yet discovered such a particle for gravity. Today, most physicists see gravity as a force arising out of the curvature of spacetime. Albert Einstein, in his theory of relativity, proposed that matter and energy are basically the same thing, and so are space and time. The presence of matter/energy can curve the spacetime around it, and this curvature of spacetime affects the movement of the bodies around it. So in this view, two bodies are not exactly pulling each other: they are moving toward each other due to the curvature of spacetime. There is no particle exchange involved. However, recent discoveries, like gravitational waves from two colliding black holes, hint at the existence of gravitons. There are some problems with gravitons, like the fact that a quantum field theory of gravitons is not renormalizable, but we may be able to get around such problems with the mathematical framework of string theory. It is also possible that the graviton may exist in a higher dimension.

When Einstein was busy fighting his battle in the thickets of spacetime, other physicists were developing a new and revolutionary theory: quantum mechanics. (It should be noted that Einstein himself also played an important role in the development of quantum mechanics, although he later refused to accept quantum mechanics as a complete theory.) Quantum theory was born when Max Planck showed that energy can’t be radiated continuously, but only in discrete packets called quanta. This was an essential step in resolving the ultraviolet catastrophe. Planck showed that the energy of a quantum is directly proportional to its frequency, $E = h\nu$, where the constant of proportionality $h$ is what we call Planck’s constant. It was Einstein’s insight that to explain the photoelectric effect, we must think of light as a stream of discrete quanta, called photons. Einstein reintroduced the particle nature of light, but it was necessary. Then we had Louis de Broglie’s great idea that matter has a wave nature. This has been verified by studying the diffraction of electrons, and it has wide-ranging applications, like the electron microscope. The electron had been thought of as a point-like particle; then quantum theory showed that the electron, and in fact everything else, has a wave nature too. Werner Heisenberg showed that it is impossible to determine both the position and momentum of a particle simultaneously: deep down, there is a certain amount of uncertainty in Nature. Then came Schrödinger’s equation, and the interpretation of the wave function as a function whose squared magnitude gives the probability of finding the particle in a given region. Only on observing the particle (don’t get me started on consciousness here) does the wave function ‘collapse’ to a distinct ‘value,’ and we observe the particle in a distinct location. Before observation, the particle has a non-zero probability of existing everywhere, even in distant galaxies.


(Here I’m referring to the quantum measurement problem, and there are a lot of philosophical difficulties here, but we’ll discuss that another time.) And, roughly speaking, Nature tries out all possibilities, so if we wait long enough, we can observe the effect of quantum tunneling: a particle can instantaneously disappear from here and appear in a distant location. Tunneling has been observed already. But don’t expect we will be walking through walls any time soon! Anyway, quantum mechanics overturned all common-sense notions about reality. In this new world, a particle can simultaneously pass through two slits, you have a certain probability of tunneling to a distant galaxy (although not in your lifetime; such extremely rare events would occur only after a really long time), and you get the idea. There is a certain amount of randomness in Nature. We can’t know everything precisely. Everything is reduced to probability. Some physicists accepted this revolution, while others were not so happy about it. But quantum mechanics has been repeatedly verified experimentally, and it has given rise to successful theories like the Standard Model.

Now, we all know what a charge is. It is an intrinsic property of some particles, and the electron carries the smallest amount of charge that can exist independently. The electron carries a charge which is equal in magnitude but opposite in sign to the charge carried by protons. There are two types of charges, positive and negative; unlike charges attract one another, while like charges repel one another. This attraction and repulsion fall under the electromagnetic interaction, and this attraction holds the nucleus and electrons in an atom together. The strong nuclear and weak nuclear forces operate on a much smaller scale, and the strong force holds the nucleus of the atom together. These three forces are all caused by the exchange of fundamental particles called bosons. In fact, there are two types of fundamental particles: bosons, which give rise to forces, and fermions, which make up matter.

At this point, it should be noted that even the unification of just these three forces was not easy. After Einstein published his geometrical theory of gravity, he, along with many other scientists, started looking for a geometric interpretation of electromagnetism. (The weak nuclear and strong nuclear forces were not known at that point.) It was soon discovered by Theodor Kaluza that the existence of an extra, hidden dimension can account for electromagnetism in a world which is consistent with Einstein’s general relativity. In other words, Kaluza, as Lee Smolin writes in The Trouble With Physics, “applied Einstein’s general theory of relativity to a five-dimensional world and found electromagnetism.”


Oskar Klein further developed the theory and, as Smolin writes, “gravity and electromagnetism [were] unified in one blow, and Maxwell’s equations are explained as coming out of Einstein’s equations, all by the simple act of adding a single dimension to space.” The weak and strong nuclear forces can also be, in some sense, unified with gravity and electromagnetism by adding even more dimensions. But why don’t we see these dimensions? The initial response was that these dimensions are curled up to very small lengths. But to make these theories work, you also have to ‘freeze’ the geometry of the extra dimensions, and such solutions are unstable. Moreover, there were a lot of possible unified theories, and it was difficult to choose one out of them. As Smolin writes, “Over and over again in the early attempts at unifying physics through extra dimensions, we encounter the same story. There are a few solutions that lead to the world we observe, but these are unstable islands in a vast landscape of possible solutions, the rest of which are very unlike our world.” Fundamentally, we can ask why some particular solutions are fulfilled while others are not, and we can also speculate that in some parallel universe, Nature has followed the alternate solutions. But let’s not get into that here.

Before we move forward, it should be noted that there has been no significant breakthrough in fundamental physics since the discovery and verification of the Standard Model. There are, as Lee Smolin writes in his book The Trouble With Physics, five major problems in theoretical physics. The first is the unification of the four forces, and the second is the reconciliation of general relativity and quantum mechanics. These two problems are perhaps interdependent: to solve one, we need to solve the other. The third problem is about the Standard Model. Although most of us believe it to be the most successful theory there is, it has a lot of constants whose values we don’t know the origin of. We can find the values experimentally, but we don’t know why these constants have to take on those particular values. Plus we need to explain the mass of neutrinos and a lot of other things. The fourth problem is about dark matter and dark energy. We know very little about them, and they were postulated simply to explain away some crazy cosmological observations which couldn’t be explained by our current theories. The last problem is, according to me, the most important problem in all of science: making sense of quantum mechanics. What causes the wave function to collapse? Did we just accidentally come to this world, which would exist even if we didn’t? Or is consciousness something fundamental, and the very existence of reality depends on consciousness? You must keep in mind that a theory of everything must answer each of these questions.

The best candidate we have for a theory of everything is, perhaps, string theory.


If string theory turns out to be correct, then the correct questions about the final theory should be asked not in terms of forces or fields or particles, but in terms of strings. String theory proposes that one-dimensional ‘strings,’ whose different modes of vibration correspond to the different particles, are the most fundamental building blocks of the universe, and that no particle is any more fundamental than any other particle. In string theory, forces arise from the joining and breaking of strings. All forces and particles can be explained by assuming strings propagate in a fixed background in such a way as to minimize the area they sweep out. String theory basically replaces the idea of zero-dimensional point particles with the idea of a one-dimensional string of energy. Interestingly, string theory was initially developed as a theory of the strong nuclear interaction. But then it was discovered that string theory includes all three forces plus gravity; the latter, as a requirement, must be included for the theory to work. Gravitons, according to string theory, arise from the vibrations of closed strings. This suggested that instead of just describing the strong nuclear interaction, string theory is in fact the theory that unifies all four forces of Nature.

There were problems, however. Some string theories predicted the existence of faster-than-light particles called tachyons, which rendered the theories unstable. Tachyons can be eliminated by using supersymmetry, but bosonic string theory requires twenty-five spatial dimensions and one time dimension to work. (At this point, it would be interesting to consider the arrow of time, and to ponder whether string theory allows for more than one time dimension, or whether time can after all be treated as just another spatial dimension.) After supersymmetry was incorporated into string theory, the number of required dimensions was reduced to ten (nine spatial dimensions and one of time). The initial explanation was that the extra dimensions are curled up to such small lengths that they are not perceivable. But, as Smolin writes in The Trouble With Physics, “This gave rise to great opportunities, and great problems... earlier attempts to use higher dimensions to unify physics has failed, because there were too many solutions... It also led to problems of instabilities, because there are processes by which the geometry of the extra dimensions unravels and becomes large and other processes whereby it collapses to a singularity.” More problems remained, and new problems came up as string theory developed. There was still excitement, for it was shown that string theory is finite and consistent, whereas all the previous quantum gravity theories were not. String theory promised to be a ray of hope. String theory is beautiful and elegant. (But we must also remember that beautiful theories have failed before.)


And then it was discovered that string theory is not a unique theory: different versions of string theory were discovered. The hope was that all these theories are different manifestations of some deeper, underlying theory.

Before we go further, there is one more problem with string theory. And this, I believe, is why string theory, as we know it, can’t be the final theory. (Note that by the final theory, we mean the only, or at least the simplest, theory that explains everything in Nature.) String theory is background dependent: we describe strings moving in a fixed background, in fixed space and time. But general relativity is background independent, and as far as we know, a final theory must also be background independent. Background independence requires, quoting from Wikipedia, “the defining equations of a theory to be independent of the actual shape of the spacetime and the value of various fields within the spacetime. In particular this means that it must be possible not to refer to a specific coordinate system - the theory must be coordinate free. In addition, the different spacetime configurations (or backgrounds) should be obtained as different solutions of the underlying equations.” The background evolves, and is not fixed. And this should be the case with a fundamental theory: the background must be derivable from first principles, and not be fixed. As is explained in the Wikipedia article, we must not increase the number of inputs the theory needs to make its predictions. Well, string theorists assume the background to be almost fixed, with small disturbances, and use perturbation techniques to account for these disturbances. And there is also some hope that the different background-dependent versions of string theory are emergent from a deeper, background-independent theory. In an interview I conducted with the renowned physicist Edward Witten, he said, “Spacetime, gravity, and everything we see are in some sense emergent from a much deeper structure.”

There are alternatives to string theory. The best alternative is loop quantum gravity, which attempts to apply the principles of quantum mechanics to gravity. General relativity, as we know, describes gravity as a consequence of the curved geometry of spacetime. Loop quantum gravity, however, suggests that space itself is discrete, quantized and granular (not continuous): space is emergent from discrete building blocks. Loop quantum gravity makes some testable predictions, and may also be a successful, finite and consistent theory of quantum gravity. And loop quantum gravity is background independent as well.


Some string theorists are, interestingly, trying to apply the methods of loop quantum gravity to string theory, and, as the Quanta Magazine article String Theory Meets Loop Quantum Gravity explains, loop quantum gravity and string theory may be different sides of the same coin. Personally, I believe that understanding the emergence of spacetime from a deeper structure will indirectly shed light on the emergence of consciousness, especially if, as I believe, consciousness is a fundamental phenomenon.

Anyway, the problem with string theory is that there is no concrete experimental support for it. String theory has made no unique and viable prediction. The predictions of string theory, if proved to be true, will not conclusively prove that string theory is true; and even if these predictions are false, string theory might still be true. So far, there have been no exciting findings regarding cosmic strings, extra forces or, if you believe in supersymmetric string theory, superpartners. In conclusion, it can be said that physics may take an unexpected turn any moment now (for instance, the muon g-2 experiment points at the existence of new particles and makes us think twice about the Standard Model, which is supposed to be perhaps the most successful theory; and although the Standard Model can be modified to account for the mass of the neutrino, there are other problems with it), and it is really difficult to predict whether we are actually close to finding a theory of everything, because future discoveries can make the task more complicated than it seems now. As of now, we have some really good insights, like the AdS/CFT correspondence, which will likely take us a long way toward a theory of everything. The journey so far has been incredible, and is bound to be more so in the future.

Coming to the Standard Model, it is mainly based on two principles: gauge principles (which have unified the three non-gravitational forces) and spontaneous symmetry breaking (which explains the differences between these forces). It may be the case that initially there was a single force, which due to spontaneous symmetry breaking gave rise to three different forces. Breaking of symmetry is essential for stability, and this is a very fundamental idea, but I am not going into it here. I have left out important bits of the story, like the Higgs field (which is responsible for spontaneous symmetry breaking), SU(5) symmetry and proton decay. But let’s keep those for another time.

In the end, I would like to address the belief, held by some, that once we find a theory of everything, physics will come to an end. Of course not. We have just started unravelling the mysteries of the universe, and there are more surprises in store for us.

Arpan Dey


WILL THE EXPANSION OF THE UNIVERSE RESULT IN A BIG FREEZE, BIG CRUNCH OR BIG RIP?

Author: Aman Burman
Mentor: Sandip Roy (Ph.D. student at Princeton University)

ABSTRACT

The eventual fate of the universe is a question that people have tried to answer for millennia. Currently there are five main theories about the ultimate fate of the universe: the Big Freeze, Big Rip, Big Crunch, Big Bounce and Big Slurp. A lot of research has been conducted, for example by NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) satellite, on the different variables that are important in deducing how the universe is going to end, and we discuss these variables in this article. By assessing them and what they mean for the momentum of expansion and the pull (or push) of gravity, we will see the most probable way the universe could end. However, as with most concepts and theories in astronomy and astrophysics, our ideas are changing every day, so what we believe today may differ from what we will think tomorrow.

INTRODUCTION

Since the dawn of the earliest forms of humans, there has always been an innate sense of inquisitiveness and curiosity. For example, the invention of the wheel and the ability to control fire were the result of trial and error and the ‘survival of the fittest’ mentality that has been ingrained in the human brain, our brains having taken their form through thousands and tens of thousands of years of evolution by natural selection. This interest gradually turned to space and the cosmos. Greek astronomers would draw the sky every day and examine the change in the positions of the stars, and stars were used to navigate the oceans. The expanding universe has slowly become one of the main points of interest for scientists. Along with the discovery of, and discussions over, the universe expanding at an accelerating rate, scientists have wondered how the universe is going to eventually end. Will it keep expanding forever, or will it collapse in the same way stars collapse?

DISCUSSION

First discovery of the universe expanding

It was first discovered in 1929 by Edwin Hubble, through observations of the light from distant galaxies, that the universe is expanding. Hubble analyzed the light that he recorded and measured the wavelength of the spectrum of light that was detected. He discovered that the wavelength of light from distant galaxies is redshifted: the light is shifted toward the red end of the spectrum. Essentially, this means that the wavelength of the light has become longer. Galaxies that are further away are redshifted more, meaning they are moving away from the Earth faster. Thus, it was established that the universe is in fact expanding and that galaxies are moving away from each other. From the fact that the universe is expanding, scientists also concluded that the universe must have been extremely small at the very beginning. Thus, the Big Bang theory of the universe was established: the theory that the universe started as an extremely small, hot, dense ball and expanded and cooled into the universe that we know today, forming stars, galaxies, galaxy clusters and many more astronomical bodies along the way.
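Hubble’s observation is summarized by Hubble’s law, v = H0·d. As a quick illustration (a minimal sketch; the distances are made-up values, and H0 ≈ 70 km/s/Mpc is the approximate present-day value quoted later in this article):

```python
# Hubble's law: recession velocity grows in proportion to distance.
H0 = 70.0  # Hubble constant in km/s/Mpc (approximate present-day value)

for d_mpc in (10, 100, 1000):      # hypothetical galaxy distances in megaparsecs
    v = H0 * d_mpc                 # recession velocity in km/s
    print(f"d = {d_mpc:4d} Mpc  ->  v = {v:8.0f} km/s")
```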


The accelerated expansion of the universe

After the discovery that the universe is expanding, it was thought to be expanding at a decelerating rate due to gravity. It seemed intuitive to scientists and theoretical physicists that the force of gravity must be pulling the universe back together. However, in 1998, two projects found evidence that the opposite was in fact the case. The Supernova Cosmology Project and the High-Z Supernova Search Team both analyzed type Ia supernovae. A type Ia supernova is thought to result from a white dwarf exploding after going over the Chandrasekhar limit, the mass threshold (1.4 solar masses) beyond which a white dwarf ceases to exist and explodes as a type Ia supernova. The luminosity of these supernovae is essentially always the same, which makes them extremely useful for analyzing the structure of the universe. The researchers found that the light produced by these supernovae was dimmer than expected. Since light emitted from astronomical bodies is dimmer when they are farther away, this showed that the universe had expanded more than previously thought, and thus that the universe is expanding at an accelerating rate.
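The ‘dimmer means farther’ reasoning is just the inverse-square law for flux: if every type Ia supernova has (nearly) the same luminosity L, the measured flux pins down the distance. A minimal sketch (the luminosity and flux values below are placeholders, not real data):

```python
import math

# Inverse-square law: flux = L / (4*pi*d^2), so d = sqrt(L / (4*pi*flux)).
L = 3.8e36        # assumed peak luminosity of a type Ia supernova, in watts
flux = 1.0e-14    # hypothetical measured flux, in W/m^2

d_m = math.sqrt(L / (4 * math.pi * flux))   # distance in meters
d_mpc = d_m / 3.086e22                      # 1 Mpc = 3.086e22 m
print(f"inferred distance: {d_mpc:.1f} Mpc")
```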


Why is the universe expanding at an accelerated rate?

There have been many propositions for why the universe is expanding at an accelerated rate. One of the most famous explanations is dark energy, which is thought to make up 68% of the universe. Very little is actually known about what dark energy is, but there have been different theories about what it could be. One of these theories is that dark energy is a property of space. Einstein’s field equation containing a cosmological constant says that ‘empty space’ can possess its own energy. Since dark energy is a feature of space itself in this scenario, its density would not be reduced by expansion. Thus, as the universe expands and more space appears, there is more of this energy, causing even more expansion and creating a feedback loop in which space is created at an ever faster rate. Another explanation is that dark energy is a kind of energy with properties opposite to those of normal matter and standard energy; this exotic energy has been named quintessence. Dark energy could actually be pushing space apart in all directions, causing it to expand.

Cosmological principle

Cosmological equations can help us model the expansion of the universe. For example, the Friedmann equations describe how the universe will expand or contract in the future. The Friedmann equations are built on two essential assumptions (the cosmological principle): that the universe is homogeneous and isotropic. But what does that actually mean?

Homogeneity

The universe is described as homogeneous, which means that it is translation invariant.

The universe is the same everywhere: there is no special or unique part. This was established through N-body simulations, which model and simulate the behavior and movement of particles in systems. It was thereby established that on scales of 260/h Mpc or more, the universe is homogeneous.

Isotropy

When we describe the universe as isotropic, we are saying that it looks the same in all directions and that there is no preferred direction in the universe. Again, this is the case mainly on very large scales. Isotropy is similar to homogeneity, but homogeneity is about how uniform the universe is, whereas isotropy is about the universe looking the same in every direction from one point. It is also important to realize that the two are different: something being homogeneous does not imply that it is also isotropic.

Robertson-Walker metric

Before explaining the Robertson-Walker metric, it is important to define what a metric actually is. In mathematics, a metric is a function that gives the distance between two points in a set of points in n-dimensional space. In the simplest sense, length is a metric and is measured in meters. There are metrics for 2-dimensional and 3-dimensional space, but once we get to 4-dimensional space, metrics become slightly difficult to conceptualize. For very large cosmological distances, Einstein’s general relativity needs to be taken into account. General relativity links 3-dimensional space and 1-dimensional time together to give 4-dimensional spacetime. The metric that gives distances in flat 4-dimensional spacetime is called the Minkowski metric:

$$ds^2 = -c^2\,dt^2 + dr^2 + r^2\,d\Omega^2$$

However, the Minkowski metric does not take into account the assumptions we have established, namely that the universe is homogeneous and isotropic at extremely large scales. Thus, Howard Robertson and Arthur Walker independently (Walker building on the work produced by Robertson) introduced a new metric that satisfies the cosmological principle. This is the Robertson-Walker metric:

$$ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dx^2}{1 - kx^2/R_0^2} + x^2\,d\Omega^2\right]$$

There are two very important terms that appear in this equation, a(t) and k, which appear in the Friedmann equation as well and will be described later in the article.

The Friedmann equation

The Robertson-Walker metric can be developed further using the Einstein field equations to finally arrive at the Friedmann equation. This can be done using the equation

$$G_{\alpha\beta} = \frac{8\pi G}{c^4}\,T_{\alpha\beta}$$

where $G_{\alpha\beta}$ describes the curvature of space and $T_{\alpha\beta}$ is the stress-energy tensor, which describes the distribution of mass/energy. Additionally, we have

$$G_{\alpha\beta} = R_{\alpha\beta} - \frac{1}{2}\,g_{\alpha\beta}\,R$$

where $R_{\alpha\beta}$ is the Ricci tensor and $R$ is the Ricci scalar. The Ricci tensor measures curvature: how the shape of an object changes and deforms as it moves along the geodesics of spacetime. Through careful manipulation of the terms, we obtain the Friedmann equation:

$$H^2 = \frac{8\pi G}{3c^2}\,\varepsilon(t) - \frac{kc^2}{R_0^2\,a(t)^2}$$

Terms

$a(t)$ is the scale factor of the expansion of the universe and is a function of time; it can be defined as how much a volume of space expands in a certain amount of time. The Hubble parameter and the scale factor are linked by $H = \dot{a}/a$, so in the equation above we can replace $H^2$ with $(\dot{a}/a)^2$. $\varepsilon(t)$ is the energy density. $k$ is an extremely important term, the curvature parameter: it is a constant of the Robertson-Walker metric and represents the geometry of the universe. $R_0$ is the scaling parameter. The curvature of the universe depends on whether its average density is above or below the critical density threshold, which is about $10^{-29}\,\mathrm{g\,cm^{-3}}$. $H$, the Hubble parameter, describes how fast a galaxy moves away from another galaxy; $H_0$ is its value today, roughly $70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$. Another form of the Friedmann equation, which we are going to use for modelling the expansion, is:

$$\frac{H^2}{H_0^2} = \frac{\Omega_r}{a^4} + \frac{\Omega_m}{a^3} + \frac{1 - \Omega_m - \Omega_\Lambda}{a^2} + \Omega_\Lambda$$

where the $\Omega$'s are the cosmic density parameters of the various components.
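This density-parameter form is straightforward to evaluate numerically. A minimal sketch (the density parameters below are assumed, roughly ΛCDM-like values, not taken from the article):

```python
import numpy as np

def hubble_ratio(a, Om, OL, Or=0.0):
    """H(a)/H0 from the density-parameter form of the Friedmann equation."""
    Ok = 1.0 - Om - OL - Or                     # curvature contribution
    return np.sqrt(Or / a**4 + Om / a**3 + Ok / a**2 + OL)

# Assumed flat-universe parameters: Omega_m ~ 0.3, Omega_Lambda ~ 0.7.
for a in (0.5, 1.0, 2.0):
    print(f"a = {a}:  H/H0 = {hubble_ratio(a, Om=0.3, OL=0.7):.3f}")
```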



Curvature

If the universe is flat, its geometry corresponds to that of a flat sheet. In a flat universe, the value of $\Omega_0$ is 1 and $k$, the curvature parameter, is 0. If the universe is closed, its geometry corresponds to that of the surface of a ball. In a closed universe, $\Omega_0$ is greater than 1 and $k$ is greater than 0. If the universe is open, its geometry corresponds to that of the surface of a saddle. In an open universe, $\Omega_0$ is less than 1 and $k$ is less than 0.

[Figure: the flat, closed and open geometries of the universe]

It has been discovered that the curvature of the universe is in fact flat. To establish the flatness of the universe, astronomers have used the cosmic microwave background radiation (CMBR): relic radiation from the hot early universe, released about 380,000 years after the Big Bang and visible in all directions as redshifted energy. When this radiation was released, the universe was at a temperature of around 2700 degrees Celsius. Over time, as the universe expanded, the photons got stretched out, shifting them down into the microwave spectrum. By measuring the variations in the temperature of the CMBR across the sky, we have found that the universe is flat: if the universe were curved, the temperature variations would be different from what we detect today. The flat-universe theory corresponds to the measurements that have been recorded.


MODELLING THE EXPANSION OF THE UNIVERSE USING THE FRIEDMANN EQUATIONS

Results obtained from Python modelling

Flat universe

An Einstein-de Sitter universe is one which is matter-dominated and contains only matter. In this model, the curvature $k$ of the universe is 0 and $\Omega$ is exactly 1.0, and this is the value used in the model below. The graph of this universe is:

[Figure: scale factor a(t) against time, modelled in Python by solving the Friedmann equation]

It is clear that an Einstein-de Sitter universe is going to expand forever. The density of the universe in this model is exactly the critical density (the hypothetical average density at which the expansion would only halt after an infinite amount of time), so there is not enough gravitational pull to reverse the expansion, and the universe keeps expanding forever. Therefore, in this scenario, the universe will end in a Big Freeze. According to the second law of thermodynamics, entropy increases in a closed system: the universe will slowly cool as it expands, matter and energy will become uniform and evenly spread, and the temperature will approach absolute zero, which is −273.15 degrees Celsius.

Closed universe

A closed universe, or gravitationally bound universe, has a value of $k > 0$. A closed universe has a spherical shape and its $\Omega$ value is greater than 1.


[Figure: scale factor a(t) against time, modelled in Python by solving the Friedmann equation]

As can be seen in the model, the universe expands, but at a decelerating rate. The scale factor increases until a peak, after which the expansion reverses and the scale factor starts decreasing. In this scenario, the average density of the universe is greater than the critical density, so gravitational attraction overcomes the expansion and causes the universe to collapse back on itself.

Open universe

An open universe has a value of $k < 0$ and an $\Omega$ value less than 1. In the model below, $\Omega$ is 0.9; a sketch of the kind of Python code behind these models follows.
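The article's actual modelling code is not reproduced here; the following is a minimal sketch of how such a model could be built (matter-only universe, dark energy and radiation omitted, and the parameter values are assumptions):

```python
# Minimal sketch: integrate the matter-only Friedmann equation forward from
# today (a = 1) for open, flat and closed universes. Differentiating
#   (da/dt)^2 = H0^2 * (Omega_m / a + 1 - Omega_m)
# gives the acceleration equation used below,
#   d(adot)/dt = -H0^2 * Omega_m / (2 a^2),
# which also captures the recollapse of the closed case naturally.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

H0 = 70.0 / 977.8  # 70 km/s/Mpc expressed in 1/Gyr

def friedmann(t, y, Om):
    a, adot = y
    return [adot, -H0**2 * Om / (2.0 * a**2)]

def crunched(t, y, Om):
    return y[0] - 1e-3  # stop the integration if the universe recollapses
crunched.terminal = True

t_span = (0.0, 2000.0)  # Gyr; long enough to see the closed case turn around
t_eval = np.linspace(*t_span, 2000)
for Om, label in [(0.9, "open"), (1.0, "flat"), (1.1, "closed")]:
    sol = solve_ivp(friedmann, t_span, [1.0, H0], args=(Om,),
                    t_eval=t_eval, events=crunched, rtol=1e-8)
    plt.plot(sol.t, sol.y[0], label=f"$\\Omega_m$ = {Om} ({label})")

plt.xlabel("time from today (Gyr)")
plt.ylabel("scale factor $a(t)$")
plt.legend()
plt.show()
```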


CONCLUSION

Many observations made in the last few decades indicate that the universe is likely to end in a Big Freeze. The biggest factor that makes this theory more probable than alternatives such as a Big Crunch is the theorized existence of dark energy. Even the large force of gravity acting on the universe will likely not cause it to collapse onto itself, because gravity is not strong enough to overcome the inflating effect of dark energy. Estimates say that the Hubble rate is likely to drop but will asymptote to around 45 km/s/Mpc; it will never go to 0, because of the expanding universe and the dark energy behind the expansion. As the universe expands, the densities of the different matter constituents get diluted, but, as described above, dark energy is a feature of space itself and thus its average density will remain the same. The fate of the universe is extremely dependent on this dark energy. In fact, if dark energy gets stronger over time, the universe might end not in a Big Freeze but in a Big Rip, which would ‘rip’ the fabric of space apart, unbinding all astronomical bodies from each other. Alternatively, it is possible that dark energy reverses sign and, instead of causing the universe to expand, causes it to collapse onto itself; this is the Big Crunch fate of the universe, as modelled above for a closed universe. It is even possible that a Big Bounce occurs, where a Big Crunch causes the universe to collapse and cease to exist and another universe is created, in a loop of destruction and creation. Although it is difficult to predict what will happen in the future, with the current understanding of the universe and measurements of the different variables, it is likely that the universe will in fact end in a Big Freeze.

REFERENCES

https://www.space.com/25126-big-bang-theory.html
https://www.scientificamerican.com/article/expanding-universe-slows-then-speeds/
https://universe-review.ca/R02-07-candle.htm
https://www.youtube.com/watch?v=9DrBQg_n2Uo
https://people.ast.cam.ac.uk/~pettini/Intro%20Cosmology/Lecture03.pdf
https://uncw.edu/phy/documents/thefriedmannequations.pdf
https://astronomy.com/news/magazine/2021/01/the-beginning-to-the-end-of-the-universe-the-big-crunch-vs-the-big-freeze


GENERAL RELATIVITY: A SIMPLE DISCUSSION

Author: Arpan Dey

Albert Einstein developed his special theory of relativity, which predicted modifications to the structure of space and time to account for the fact that the speed of light in vacuum is always constant for any observer, regardless of their frame of reference. However, Einstein was not the type to be satisfied by the special theory of relativity. He sought a deeper theory that would apply to all sorts of reference bodies (accelerating or non-accelerating). After over ten years of struggle through the thickets of spacetime and matter-energy, Einstein wrote down the field equations for gravity. This is perhaps the best example of a theory comparable to quantum mechanics that was developed almost single-handedly. General relativity, like quantum mechanics, remains one of the greatest theories in physics, and it has been repeatedly verified experimentally. It has predicted the existence of black holes, dark energy, gravitational waves and more. General relativity has successfully superseded Newton’s classical theory of gravitation; it forms the basis of modern astrophysics and has provided a new interpretation of spacetime.

General relativity treats gravity not as a force, but as a consequence of the movement of bodies in curved spacetime. For instance, the Sun ‘sinks’ the spacetime continuum, due to which planets like the Earth follow a circular path. The Sun is not exactly pulling the Earth. Have you noticed how water in a basin spirals inward toward the center? The basin is curved inward, as if a huge, invisible body had been placed on a flat surface which has sunk under the body’s weight. The same is the case with the Sun and the Earth. Imagine spacetime to be a big, flat sheet which stretches away infinitely in all directions. If you put a heavy object (the Sun) at a point on this sheet, it will sink; the sheet will be curved downward at that point. Now drag another, smaller body (the Earth) into the picture. Give it a minimum velocity, and it will continue to move around the big body in circles. (The water in the basin comes closer and closer to the center because it lacks the velocity required to orbit continuously; the Earth, by contrast, doesn’t move closer and closer to the Sun, but moves in an almost fixed orbit.) Now, is the bigger body exerting a force on the smaller one to keep the latter in orbit? No! It appears so, but it’s just a consequence of the curvature of spacetime.

Very accurate experiments will show that time near the ground runs slower than time a considerable distance above. Consider a vertical object moving with some velocity. Time near its top will, by however little an amount, run faster than time near its bottom. This means it will not follow a straight path, but a curved path. We can predict the effects of gravity in this way as well. Gravity is not a force in the same way the other forces are forces.
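The size of this effect can be estimated with the weak-field approximation, in which two clocks separated by a height h differ in rate by a fraction of roughly gh/c². A quick back-of-the-envelope sketch (the height is an arbitrary assumed value):

```python
# Weak-field gravitational time dilation: a clock raised by height h runs
# fast relative to the ground by a fraction of about g*h / c^2.
g = 9.81        # m/s^2, surface gravity of the Earth
c = 2.998e8     # m/s, speed of light
h = 1000.0      # m, assumed height difference between the two clocks

fraction = g * h / c**2
seconds_per_year = 365.25 * 24 * 3600
print(f"fractional rate difference: {fraction:.2e}")
print(f"drift per year: {fraction * seconds_per_year * 1e6:.1f} microseconds")
```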

Another interesting prediction of general relativity is that the stars you see in the night sky are not where they seem to be. Why? We saw that gravity can bend the path of a body, like the Earth. Can gravity bend light as well? Yes! The apparent positions of stars are not their actual positions, because light bends a little if it passes near a massive body (a deep curvature in spacetime). The apparent position is given by the tangent to the light’s curved path nearest the observer. Light from the star bends around the gravitational field of another body (the Sun, in this case) and reaches the observer on Earth, to whom it appears, on tracing the light backward in a straight line, that the star is at a different position. Einstein’s predictions regarding this were experimentally verified during a solar eclipse.

So how exactly did Einstein figure out that gravity bends light? Imagine the following situation. A body is accelerating upward with a person in it. If a light is switched on outside it, and the light beam propagates in a straight line, then from inside the accelerating body the light would seem to follow a bent path: as the body accelerates upward, light that entered near the top of the body reaches the far side nearer its bottom. Light seems to be following a curved path. And gravity also causes acceleration. Thus, light must also follow a curved path under the influence of gravity (i.e., in a gravitational field). Einstein reasoned that light is actually following the shortest path, which isn’t always the straight-line path. (The shortest path from country A to country B on the globe is a curved one, for instance.) In his book Relativity, Einstein writes:

“In contrast to electric and magnetic fields, the gravitational field exhibits a most remarkable property, which is of fundamental importance for what follows. Bodies which are moving under the sole influence of a gravitational field receive an acceleration, which doesn’t in the least depend either on the material or on the physical state of the body. For instance, a piece of lead and a piece of wood fall in exactly the same manner in a gravitational field (in vacuo), when they start off from rest or with the same initial velocity. This law, which holds most accurately, can be expressed in a different form in the light of the following consideration. According to Newton’s law of motion, we have: (Force) = (inertial mass) X (acceleration), where the “inertial mass” is a characteristic constant of the accelerated body. If now gravitation is the cause of the acceleration, we then have: (Force) = (gravitational mass) X (intensity of the gravitational field), where the “gravitational mass” is likewise a characteristic constant for the body. From these two relations follows: (Acceleration) = (gravitational mass) X (intensity of the gravitational field) / (inertial mass). If now, as we find from experience, the acceleration is to be independent of the nature and the condition of the body and always the same for a given gravitational field, then the ratio of the gravitational to the inertial mass must likewise be the same for all bodies. By a simple choice of units, we can thus make this ratio equal to unity. We then have the following law: The gravitational mass of the body is equal to its inertial mass.”

This is the equivalence principle. An observer in a closed body can’t distinguish between the effects produced by a gravitational field and those produced by an actual acceleration of the body. So, in a sense, there are no gravitational fields. You are accelerating upward right now: in curved spacetime, you must accelerate just to remain stationary. Of course you can’t feel it, since everything around you is accelerating upward at the same rate as well. A man who is falling downward can see this acceleration. Suppose you fall from the roof of a very high building. While you are falling through the air, you don’t feel your weight. If you drop something, it falls with you at the same rate. You are not accelerating; you are not in any gravitational field. There is no such thing as a gravitational field. This was Einstein’s ‘happiest thought.’ Now consider a man in a spaceship in deep space, away from large masses. This man feels no acceleration. The spaceship is at rest or moving at a constant velocity.


This situation is equivalent to that of the falling man. The general principle of relativity states that all reference bodies are equivalent for the description of natural phenomena, irrespective of their state of motion. This, in some sense, restores the symmetry among all reference bodies. Suppose the spaceship we discussed above comes close to a planet. It will gradually stop moving in a straight line and start to move toward the planet. However, the man inside will not be able to feel anything. (Maybe he could see what’s happening if the planet were visible through a window in the spaceship, but for the sake of argument assume that the spaceship has no window.) To the man, he and the spaceship are moving in a straight line. So why does the path of the spaceship curve? The answer is simple: the spacetime around the planet is curved. You are moving in a straight line in curved spacetime, so ultimately, to an external observer, it seems that you are following a curved path. Suppose you and your friend start moving toward the North Pole from the equator, some distance apart. If you both keep walking in a straight line toward the north, you will come closer and closer to each other until you bump into each other at the North Pole. It appears as if some force is pushing you together. Both of you followed a straight path, so what caused you to come closer? Of course, the curvature of the Earth.

In conclusion, we can say that while unifying general relativity with quantum mechanics has been a great problem in physics, general relativity remains the only theory which satisfactorily explains the gravitational interaction, and it will undoubtedly be remembered as one of the greatest revolutions in physics, and in science.

Before we end this article, let’s discuss a very interesting possibility. Gravity may not be a fundamental force in the same way the other three forces are. Gravity may be an emergent force, even an entropic force. Although there are a lot of problems with this idea, we can certainly say that gravity is different from the other three forces. Whether this difference arises due to some spontaneous symmetry breaking, and whether at the beginning of the universe gravity and the other three forces were one single force, we don’t know. But this was brought up here to draw the reader’s attention to emergence. So far, we have been breaking things down into smaller and smaller parts and studying these parts to understand the whole. This is reductionism. But we have been missing out on something crucial: how these parts interact to give rise to a whole which is greater than just the sum of the parts.


The whole can’t be understood just by studying the parts. As the Quanta Magazine article To Solve The Biggest Mystery In Physics, Join Two Kinds Of Law says, “Reductionism breaks the world into elementary building blocks. Emergence finds the simple laws that arise out of complexity.” Simply put: zoom in, and reductionism wins; zoom out, and emergence wins. It would be interesting, at this point, to consider whether gravity is an emergent force, and if so, what the consequences would be. Of course, there are a lot of problems with emergent gravity as we know it, and experiments don’t support it. But that may only be due to our incomplete understanding of gravity. Gravity may not be just a statistical effect, an entropic force. But gravity could well be emergent from some deeper structure, in some manner we don’t understand yet.

REFERENCES

https://youtu.be/F5PfjsPdBzg
https://youtu.be/XRr1kaXKBsU
https://www.quantamagazine.org/to-solve-the-biggest-mystery-in-physics-join-two-kinds-of-law-20170907/


AN INTRODUCTION TO THE BRACHISTOCHRONE PROBLEM

Author: Satyam Sharma

What is the fastest way to get from point A to point B, viewed two-dimensionally, where point B is at a lesser height than point A (but not directly below it)? It is well known that the shortest distance between two points on a piece of paper is a straight line, so it might be natural to assume that the fastest path is the straight one, reasoning from the relation

$$\text{time} = \frac{\text{distance}}{\text{velocity}}$$

Logically speaking, the distance is minimized, and since time is directly proportional to distance (at a fixed velocity), time must apparently be minimized as well.


However, the fallacy in this chain of thought appears once something fundamental is realized: this equation only holds for constant velocity. Yet we never put any such constraint on our initial question; the path between the two points, a ramp, can just as validly be curved as straight.

Taking another logical approach, it might seem that the curve with the greatest slope provides the greatest acceleration, and thus the greatest velocity. After all,

$$v = v_0 + at, \qquad x = x_0 + v_0 t + \frac{1}{2}at^2$$

which means that as acceleration increases, so does velocity, and as velocity increases, time logically decreases. However, the fallacy in this chain of thought appears once something even deeper is realized: these equations only hold for uniform (constant) acceleration. In fact, what happens on the pictured steep curve is that the object, when dropped onto the ramp, accelerates rapidly at first, then barely accelerates at all. Maximizing the initial acceleration, while seemingly an insightful path to follow, negates its own benefits, because the latter half of such a ramp must be essentially flat (and thus provides little to no acceleration). We have now intuitively realized that the ‘ideal curve’ must be somewhere in between: steep enough to gain acceleration and velocity quickly, but not too steep, because then it will negate itself later on.



This shortest-time problem has a name: the brachistochrone problem (from the Greek ‘brachistos,’ shortest, and ‘chronos,’ time). It has been around for several centuries, and it confounded mathematicians, geometers, and physicists alike until Johann Bernoulli solved it using Newton’s then new-found tool of calculus. This article is intended only as an introduction to the problem, showcasing some errors in seemingly logical lines of thinking. Resources on the actual solution to the problem can be found in the references.
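The known solution is an arc of a cycloid (see the references). As a quick numerical sanity check, the sketch below (with an assumed endpoint, chosen so both travel times have closed forms) compares the descent time along a cycloid with the descent time along a straight ramp between the same two points:

```python
# Compare descent times from rest between A = (0, 0) and B = (pi*R, 2*R),
# with the y-axis pointing downward; B is the cycloid point at theta = pi.
import numpy as np

g = 9.81   # m/s^2
R = 1.0    # m, assumed cycloid radius; B is then (pi*R, 2*R)

# Cycloid x = R(theta - sin theta), y = R(1 - cos theta): with v = sqrt(2gy),
# the travel time integrates to t = theta_final * sqrt(R / g).
t_cycloid = np.pi * np.sqrt(R / g)

# Straight ramp of length d with constant acceleration g*sin(alpha),
# where sin(alpha) = drop / d: t = sqrt(2 * d^2 / (g * drop)).
a, b = np.pi * R, 2.0 * R           # horizontal run and vertical drop
d = np.hypot(a, b)
t_line = np.sqrt(2.0 * d**2 / (g * b))

print(f"cycloid:  {t_cycloid:.3f} s")   # ~1.00 s
print(f"straight: {t_line:.3f} s")      # ~1.19 s, noticeably slower
```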

REFERENCES

https://books.google.com/books?id=4q1RAAAAcAAJ&pg=PA269#v=onepage&q&f=false
https://mathcurve.com/courbes2d.gb/brachistochrone/brachistochrone.shtml
https://4ccb06ba-5733-4d01-9652-1f173bc0e51c.filesusr.com/ugd/4d55eb_43090d54f6384c568c2f0f5e116d123f.pdf
https://www.youtube.com/watch?v=Cld0p3a43fU
https://www.youtube.com/watch?v=zYOAUG8PxyM


TWO PROBLEMS WITH SCIENTIFIC RESEARCH Author: Arpan Dey

It is highly likely that you have heard this question sometime: “Is most published research false?” Yes and no. In his book How Not To Be Wrong: The Hidden Maths Of Everyday Life, Jordan Ellenberg asks us to suppose you visit a hospital and examine the gunshot cases. You find that about 90% of the shots hit the legs, while only 10% hit the chest. If you were asked where one is more likely to get shot, you would say in the leg, although the answer could just as well be the chest. It would be incorrect to conclude that the leg is more vulnerable to gunshots than the chest, simply because most of the people who got shot in the chest never made it to the hospital. If you want to determine where you are most likely to get shot, you must take into account all the people who got shot, not just those who reached the hospital. Thus, you should always keep the big picture in mind. Just as that example leaves out the people who did not make it to the hospital, scientists 'intentionally' leave out data as well. Scientists have to publish papers, and most journals only accept papers whose results are statistically significant. But statistics is a highly dangerous tool. As Ellenberg says, at the conventional threshold there is a 1-in-20 chance of getting a result that matches your prediction even when it has no real significance. Suppose scientists want to find out whether jelly beans cause acne. Initially they find that there is no link at all. Then they try jelly beans of different colors. Let us say they choose 20 colors. By pure chance alone, one of them (say, green) displays some link with acne; with 20 tests, the math makes it quite likely that at least one will look statistically significant by luck alone (the short simulation below makes this concrete). However, the next day's headlines would read something like: “Link found between green jelly beans and acne; statistically significant results.” This is enough to convince most laymen and even some scientists. Now suppose that the green jelly bean is tested 20 times by 20 different scientists in 20 different laboratories. 19 of the scientists find no statistically significant effect. They hush it up, since they know that no journal is going to publish their findings. The other scientist's results agree with the already published results, and he trumpets the fact. This is why much of published research involving statistical analysis is questionable.
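Here is a minimal simulation of that multiple-testing trap (a sketch with made-up numbers, not from Ellenberg's book): twenty independent tests in a world where there is truly no effect, repeated many times to estimate how often at least one test looks significant anyway.

import random

random.seed(1)
TRIALS, COLORS, ALPHA = 10_000, 20, 0.05

# Each color's test falsely comes out "significant" with probability ALPHA,
# because the null hypothesis (no link with acne) is true for every color.
hits = sum(
    any(random.random() < ALPHA for _ in range(COLORS))
    for _ in range(TRIALS)
)
print(hits / TRIALS)              # empirically about 0.64
print(1 - (1 - ALPHA) ** COLORS)  # exact value: 1 - 0.95**20 = 0.64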
Selective publication can give rise to a lot of unnecessary problems and misconceptions. For instance, a group of scientists claimed to have found a secret code in a book, and thereby to have proved the existence of God. However, almost all such results become insignificant if the experiment is repeatedly carried out in an unbiased manner. What determines whether results are statistically significant is the null hypothesis. Your data must not only be consistent with your predictions; they must also be inconsistent with the negation of your predictions. To quote Ellenberg, “If I claim I can make the Sun come up at dawn with my mind, and it does, you should not be impressed by my powers; but if I claim I can make the Sun not come up, and it does not, then I have demonstrated an outcome very unlikely under the null hypothesis, and you would best take notice.” Suppose you want to prove that there is some relation between A and B. Then your null hypothesis will be that there is no relation between A and B. Your results will be statistically significant only if the null hypothesis can be safely rejected, and a null hypothesis is rejected only if the observed data would be significantly unlikely to occur if the null hypothesis were true. In null hypothesis significance testing, the p-value is the probability of obtaining test results at least as extreme as the results actually observed, under the assumption that the null hypothesis is correct. A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis; a p-value less than about 0.05 is conventionally taken to mean that the result is statistically significant. However, scientists (both intentionally and unintentionally) tweak and ignore circumstances to get the desired p-value, as they inwardly know that their results must be statistically significant to be accepted for publication in most journals. Thus, we come back to the same problem: we do not see the big picture. Just because 90% of the patients in the hospital have been shot in the leg, we can't conclude that the leg is more vulnerable to gunshots. Another example Ellenberg gives to emphasize this point is the following. Suppose you receive 10 emails from a stockbroker, predicting, correctly each time, the rise or fall of stocks. You would be tempted into subconsciously believing that his next prediction is also likely to be correct. However, let us look at the big picture. The stockbroker emails 10000 people. To 5000, he says that the stock will collapse; to the other 5000, he says that it will rise. One of the two predictions must come true. He then divides the 5000 people who received the correct prediction into two halves, and emails them again.
To the first half, he says a certain stock will collapse; to the rest, he says that it will rise. If this process continues 10 times, he can get 10 predictions in a row correct for at least a few people (after ten halvings, about 10000/2^10, roughly 9 or 10 people, remain). Those people think that he is always correct about such predictions, only because they are unaware of the thousands of other predictions that failed. (And those who receive incorrect predictions get no further emails, so they simply forget about it.) That is the problem. In science, we are unaware of the thousands of 'statistically insignificant' results that never made it to the journals. Now, let us come to the second problem with scientific research: regression to the mean. This phenomenon has been observed in businesses, hereditary traits and elsewhere. Things have a tendency to move toward the mean: the best tend to worsen with time, and the worst tend to improve toward the average. If your parents are very tall, you are likely to be taller than the average, but not as tall as them. Why? Because heredity alone cannot account for being very tall. Your parents are not just taller than the average but far taller than the average, and your grandparents being tall cannot by itself explain this; luck, environment and circumstances must have played a part as well. It is very unlikely that you will get lucky in exactly the same way as your parents and end up as tall as them. Similarly, if your parents are very short, you are likely to be shorter than the average, but not as short as them. Tall parents do not go on producing ever-taller offspring indefinitely. Over the generations, the offspring drift toward the average height: you are tall, but not as tall as your parents; your children will be shorter still; your grandchildren even shorter. But if they get too short, their children will gradually grow taller than them. You move downward if you are above the mean, and upward if you are below. Now, suppose you develop a drug to treat obesity and test it on ten extremely obese people. Extremely obese, not just obese. As regression to the mean predicts, after some time (maybe a few months or years), these people will very likely lose some weight anyway. Although you would want to believe that your drug worked, that may not be the case; from such a trial alone it is impossible to determine whether your drug was really effective. Thus, the problem, in the context of scientific research, is clear. But we must admit that scientific research is the only trustworthy way we have of investigating the world around us, so full steam ahead!
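The height story can be made concrete with a toy simulation (all numbers invented): if a child inherits only part of the parents' deviation from the mean, with the rest down to chance, then the children of very tall parents come out tall, but closer to the average.

import random

random.seed(0)
MEAN, SD = 170, 7  # population mean and spread of height in cm (made up)

def child_height(parent_height):
    # Assumption for illustration: half the parental deviation is inherited,
    # the rest of the variation is luck, environment and circumstance.
    inherited = 0.5 * (parent_height - MEAN)
    luck = random.gauss(0, SD * 0.85)
    return MEAN + inherited + luck

tall_parents = [MEAN + 2 * SD] * 10_000  # parents 14 cm above the mean
children = [child_height(p) for p in tall_parents]
average = sum(children) / len(children)
print(f"parents: {MEAN + 2 * SD} cm, children on average: {average:.1f} cm")
# The children are still above the mean, but only about half as far: regression.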


REFERENCES
https://youtu.be/42QuXLucH3Q
https://youtu.be/1tSqSMOyNFE


BLACK HOLES Author: Kavya Pullanoor

Many people mistake black holes for empty space. Remember, black holes are anything but empty space. Most of us are familiar with what they basically are; if not, read on to get a brief idea of one of the most fascinating objects in astrophysics.
What are black holes?
Imagine an object with ten times the mass of the sun squeezed into a sphere approximately the diameter of New York City. Can you imagine the gargantuan density of such an object? This is what a black hole is. The gravitational field of a black hole is so strong that not even light can escape it. Black holes are formed when massive stars die. These stars exhaust the thermonuclear fuel in their cores at the end of their lives. The core then becomes unstable and gravitationally collapses inward upon itself, while the outer layers of the star are blown away. The weight of all the constituent matter falling in from all directions compresses the star to a point of zero volume and infinite density called a 'singularity.' Black holes can form only from the most massive stars, those with more than about three solar masses. Stars with less mass evolve into less compressed bodies, such as white dwarfs or neutron stars.
Why can't even light escape a black hole?
The structure of the black hole follows from Albert Einstein's general theory of relativity. The singularity constitutes the center of the black hole and is hidden by the black hole's 'surface,' also known as the 'event horizon.'
At and beyond the event horizon, the escape velocity exceeds the velocity of light, which is why not even light can escape a black hole.
Schwarzschild and Michell
In the late 1700s, a little-known scientist named John Michell was the first person to conceive of the possibility of a gravitational mass so large that light could not escape from it, and he was even able to estimate how large such a body must be. Michell's calculation did not produce the right answer, as he was working with Newton's laws rather than Einstein's, and the speed of light was not known to high accuracy at the time. More than a century later, Karl Schwarzschild was the first to correctly analyze the relation between the size of a black hole and its mass. His work revealed the limit at which gravity overwhelms all other physical forces to form a black hole; the corresponding size is called the 'Schwarzschild radius.'
First picture of a black hole
A black hole had never been directly imaged until April 2017, when a picture of the supermassive black hole at the center of M87, a galaxy 54 million light years away, was captured after five days of observation using eight telescopes around the world by a collaboration known as the Event Horizon Telescope. The picture shows a dark region surrounded by a bright ring. The dark region marks the event horizon, whose gravitational field is so strong that even light cannot escape, as mentioned above. Previously, researchers had captured a jet of light emerging from where the M87 black hole was predicted to be, but they couldn't definitively see the black hole because their instruments were nowhere near as sharp as the Event Horizon Telescope. This black hole is about 6.5 billion times the mass of the sun. To see the black hole's boundary between light and dark, the astrophysicists captured radio waves 1.3 millimeters in wavelength, invisible to the human eye. These radio waves are emitted by the hot gas swirling around the black hole. The researchers chose this particular wavelength because it can sail through entire galaxies and even Earth's own atmosphere without being absorbed. But they still needed good weather at all eight of their telescope sites to see the black hole.
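As a side note (not derived in this article), the Schwarzschild radius mentioned above is given by the well-known formula r_s = 2GM/c^2. A quick Python sketch evaluates it for the Sun and for M87's black hole, using the mass quoted above:

# Schwarzschild radius r_s = 2*G*M/c**2 (standard result)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_SUN))           # about 2.95e3 m, i.e. roughly 3 km
print(schwarzschild_radius(6.5e9 * M_SUN))   # M87*: about 1.9e13 m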


M87's black hole is relatively close to Earth, as the light coming from it was emitted only 54 million years ago, so we are seeing it at a fairly mature moment in its existence. It took two decades of work to capture the image; part of that effort was designing, building, and hauling the hardware to the various telescope sites. Everything observed so far about M87, its mass and the size of its event horizon, is consistent with Einstein's theory. This image is just the beginning, says Feryal Özel, a member of the Event Horizon Telescope (EHT) collaboration that released the first image of a black hole. The team plans to take more, better-quality pictures of this black hole to understand it in more detail. Now that they've finally stared into the eyes of the beast, it's time to watch how it behaves. Cheers to the team! Meet the team behind this amazing discovery: https://www.smithsonianmag.com/science-nature/meet-team-captured-image-first-black-hole-180973495/ Read more about the M87 black hole here: https://www.jpl.nasa.gov/edu/news/2019/4/19/how-scientists-captured-the-first-image-of-a-black-hole/ Read more about black holes here: https://www.space.com/15421-black-holes-facts-formation-discovery-sdcmp.html


THE BATTLE OF THE MILK BAGS Author: Parmin Sedigh

Did you just say milk bags? Yep. In Eastern Canada, which includes Ontario, Quebec, and the Maritime provinces (New Brunswick, Nova Scotia, Prince Edward Island, and Newfoundland and Labrador), most large quantities of milk come in bags. Go into almost any family home, for example, and you'll probably find a large bag, like the one below, that holds three smaller bags full of milk, amounting to 4L of milk in total. Each small bag is then put into a jug and a corner (or two) is snipped so the milk can come out.

Milk bags found in an Ontarian grocery store (Courtesy of CBC)
If you've never heard of bagged milk, you're not alone. Not even all of Canada uses milk bags. But why? Why are Eastern Canadians the only ones buying milk in bags? According to the CBC, it goes back to the 1960s. That's when dairy producers were experimenting with milk bags as cheap alternatives to glass bottles. Bags still weren't very popular until some new regulations regarding measurements came into effect. In the 1970s the Canadian government took a strong position against the imperial system and enforced lots of new rules to shift Canada over to the metric system. This included getting the milk industry to move away from quarts and toward liters. Changing from one-quart glass bottles to 1.3L ones was expensive, since so many were already in circulation and changing the machinery was also costly.
But the experimental milk bags could very easily be changed to 1.3L instead. So that's what happened. That doesn't mean milk bags were consumers' favorite. When milk jugs became cheaper than plastic bags, much of Canada moved over to them. On top of that, the Progressive Conservative government of Brian Mulroney relaxed some of the metric regulations in the 1980s, further allowing the milk industry to say bye-bye to milk bags. Nowadays, you can find milk jugs in Quebec and the Maritimes. But Ontario just can't seem to let go of the milk bag, and it's because of even more regulations. Up until 2018, retailers or producers who wanted to sell or produce 4L milk jugs had to either pay a deposit or have a recycling program in place. Even now, Ontarians are simply used to their milk bags, however much this perplexes people south of the border or out west.
How do milk bags work?
Glad you asked; let's get down to some physics. Here's the part that everyone agrees on: you grab your bag of milk, pop it into a reusable plastic jug with no lid, and then you cut it open. But that's where the science comes in. According to a Huffington Post poll of 500 people, 70% always snip off just one corner of the milk bag, 20% always snip off two corners, and the rest are somewhere in the middle. Now, whenever the milk bag is full, pouring is very likely to get messy. Experiments were conducted, and both methods (one and two snips) resulted in a messy first pour. However, with a half-full bag, the pour was quite simple and clean, again regardless of the number of cuts. So two questions pop up! Why are people so set on either method when the two perform nearly the same? And why is pouring from a full bag so much messier? (This involves a little more physics than just the fact that there's more milk!) Let's answer the two questions.
Does the number of cuts matter?
Short answer: not really in this case. But that's not what you're here for! The reasoning that most of those in favor of two snips point to is that the second hole relieves air pressure and allows better air flow, which prevents a lot of spillage. They aren't exactly wrong, but they aren't right either. This is absolutely true for containers like cans, where having just one hole will lead to the liquid not coming out.
This is because air really wants to get into the can, and when the hole only has enough room for the milk, the air tries something different: it pushes the liquid back in. When there are two holes, air can go into one and the liquid can come out of the other. Having one larger hole also works: the air can get in while the liquid comes out simultaneously. Going back to our milk bag, you don't want to cut a huge hole that gives you little control over the pour, but you do want the milk to come out easily and without struggle. So you cut another small hole on the other side. This would all be true except for the fact that, nearly always, the hole cut for the milk is already big enough for air to pass through too! So team two-snip was really close and had some solid physics behind it, but the argument falls apart at the end.

What supporters of two cuts believe happens during a pour (Image courtesy of Food Network Canada; modified by Parmin Sedigh)
Why's pouring a full bag so messy?
You might be really confused by this question; “Well, because it's full of milk, so it can easily spill out!” That is partially true, but there's something else at play here. Going back to our air flow above: when the bag is half full, the other half is full of something else, air! Since there's already some air inside, the pressure difference isn't as large when pouring milk out. However, when the bag is nearly full of milk and contains very little air, the air from outside the bag is desperately trying to get in. And a three-way fight ensues between you and gravity on one side and the air on the other, and the only thing left at the end is a milky mess. Who would have thought there'd be physics at work in such a simple (but bizarre to some) task as pouring milk out of a milk bag? Now you know the history of milk bags and the science behind them, so go out and inform the world!


WHITE HOLES: A PLACE OF NO ENTRY Author: Shiven Arora

White holes are a figment of the mathematics, an apparently impossible possibility. In simple terms, they are the hypothetical opposite of a black hole: literally, mathematically and physically. There is no concrete physical evidence for their existence, but they are mathematically possible. Before developing an understanding of white holes, you need to understand what black holes are like. Black holes are regions of spacetime where a huge amount of mass is packed into an infinitely dense point known as a singularity. This singularity warps the fabric of spacetime so severely that the gravitational pull becomes infinite, and the known laws of physics break down there. Past the event horizon, nothing can escape the black hole (including light) due to the strong gravitational pull. No one truly knows what happens inside a black hole, because light from inside can't reach us; hence, we have to rely on theories and equations. Conversely, white holes only spew matter out from their own singularity of infinite density back into the universe. A white hole is a region of outward-flowing spacetime whose event horizon prohibits entry. Light can't enter a white hole; it can only be radiated away from it. Both white holes and black holes have a mass so large that they warp the fabric of spacetime to such an extent that an object can only travel in one direction. Physicists describe a white hole as a black hole's 'time reversal,' a video of a black hole played backward.


Mathematics allows for white holes to exist because there are two solutions to the equations of Einstein's general theory of relativity: one solution describes a black hole, while the second produces a white hole. There are two solutions because the direction of time takes no preference in general relativity (the theory is time-symmetric), so both solutions are, mathematically, equally valid. Neil deGrasse Tyson (an astrophysicist) compared this phenomenon of two solutions to taking the square root of the number 9: you also get two answers, positive 3 and negative 3, and neither answer is better or worse than the other; both are correct. Likewise, in Einstein's equations, black holes and white holes are equally correct solutions mathematically. However, if white holes arise from the collapse of stars, you are left with a massless singularity, and physicists debate how a white hole could spew matter with actual mass out of a massless singularity. It doesn't seem to make much sense. However, some believe that a black hole can only shrink until it hits a natural limit; at that point it would rebound outward, due to an immense amount of outward pressure, in a 'quantum bounce.' This would turn the shrinking black hole into an expanding white hole and eject all the mass the black hole had sucked in, all at once. Such a white hole would be unstable and would not last very long. This theory, which uses quantum mechanics, solves several issues that would arise if only black holes existed. Firstly, it addresses the 'black hole information paradox,' which is related to the laws of thermodynamics. Once matter is sucked into a black hole, no one knows what happens to the information it carried; it appears to be deleted from the universe. As per the laws of thermodynamics, this can't occur, because information can never be destroyed, and Einstein's theory of general relativity and quantum mechanics also rely on those laws being true. Whatever comes out of a white hole would be a mangled version of the matter that went into the black hole, but the information of its former self would not be eradicated. Another idea that holds onto the notion of information being retained is that a white hole sits on the other end of a black hole, connected by a wormhole (a theoretical tunnel of spacetime). In this picture, the information of matter going into the black hole is retained and transferred to another universe.


The image below shows the complete Schwarzschild geometry, representing a white hole and a black hole connected by a wormhole. Such a wormhole connecting two separate universes is called an Einstein-Rosen bridge.

Some physicists associate the idea of white holes with the Big Bang, due to the instability of them both. The Big Bang's explosion of matter and energy looks like potential white hole behavior, and the formation of our universe could conceivably be the result of a white hole spewing out all the matter we observe. The idea is alluring to our sense that things regularly come in pairs: if there's an off switch, there is surely an on switch elsewhere. White holes feel like a required balance to the conclusiveness of black holes: where does all of that sucked-up stuff go? However, physics imposes its own limits on the idea. White holes would violate the second law of thermodynamics by decreasing the entropy of a system, which would make their existence impossible, and this makes many physicists sceptical of these mathematical possibilities. On the other hand, they do provide answers to certain questions left unanswered by black holes alone. The only potential piece of evidence for white holes was a gamma ray burst from outer space in 2006. It lasted 102 seconds and was accompanied by an explosion of white-hot light coming from nowhere, which immediately vanished. This explosion left physicists confused, because most gamma ray bursts, from supernovae and neutron stars, last only a couple of seconds. Some believe this may have been a white hole, and that it could lead us to the secrets of our universe…


REFERENCES
https://www.space.com/white-holes.html
https://astronomy.com/news/2019/06/white-holes-do-black-holes-have-mirror-images
https://jila.colorado.edu/~ajsh/bh/schww.html


QUANTUM CHEMISTRY AND CHEMOINFORMATICS Author: Anupa Bhattacharya

Density functional theory (DFT) is one of the most popular quantum mechanical energy minimization methods used in physics, chemistry and materials science to investigate the electronic structure of many-body systems. The name "density functional theory" comes from the use of functionals, i.e., functions of functions, which in this case act on the spatially dependent electron density. For one-electron systems like the hydrogen atom, it is easy to solve the Schrödinger equation exactly and calculate the possible energy states of the system by obtaining the energy eigenvalues. The Schrödinger equation is

$$\hat{H}\psi = E\psi,$$

where $\hat{H}$ is the Hamiltonian operator and $E$ is the energy eigenvalue. However, the difficulty of the calculation increases with the dimension of the problem, so it is very difficult to solve this equation for many-electron systems like polyatomic molecules. DFT is essentially a computational technique for achieving an approximate solution to the Schrödinger equation for many-body systems. The wave function of an N-electron system is a function of 3N variables (the coordinates of all N electrons), whereas working with the electron density reduces the number of variables to 3 (x, y, z), making the calculation easier. Moreover, the Hohenberg-Kohn theorem establishes that the electron density of any system determines all the ground-state properties of that system. By focusing on the electron density, it is possible to derive an effective one-electron-type Schrödinger equation for a many-electron system and obtain all possible energy states by solving it. Presently one of the most successful and promising approaches to investigating electronic structure, DFT is applicable to atoms, molecules and solids as well as to nuclei and quantum and classical fluids. It predicts a great variety of molecular properties: structures, atomization energies, reaction pathways, etc. It is also an extraordinary tool in the in silico screening and design of active pharmaceutical ingredients. A small piece of work along these lines was done by the author for her Master's dissertation, involving the in silico screening of Non-Steroidal Anti-Inflammatory drugs using DFT and molecular docking.
Among the various properties that can be predicted with DFT, we mainly focused on the energy difference between the Highest Occupied Molecular Orbital (HOMO) and the Lowest Unoccupied Molecular Orbital (LUMO) of a molecule. Studies have shown that the inhibitory activity of a molecule is generally affected by its total energy, entropy, polarity and electron transition probability. The electron transition probability in turn depends on the HOMO-LUMO energy gap of the molecule. This energy gap also gives an idea of the stability of a molecule, which follows the reverse order of the electron transition probability: the lower the HOMO-LUMO energy gap, the more active and less stable the molecule, and vice versa. The DFT calculations were performed using the DMol3 code under the Generalized Gradient Approximation (GGA). The BLYP density functional was used along with the Double Numeric plus Polarization (DNP) basis set. The optimization was carried out without any structural constraint, and the HOMO-LUMO energy gap was computed in electron volts (eV). Molecular docking is another popular technique in structural molecular biology and computer-assisted drug design. It is used to predict the preferred orientation of one molecule relative to another when they bind to form a stable complex, thus also giving an idea of the binding affinity of one molecule for the other. Because this technique can predict the binding conformation of a small molecule to its appropriate target (essentially macromolecules like proteins), molecular docking finds extensive use in structure-based drug design. Molecular docking mainly aims at optimized conformations of both the protein and the ligand, and their relative orientation, such that the free energy of the overall system is minimized; the ligand and the protein adjust their conformations to achieve a 'best fit' both conformationally and energetically. The mechanics of docking broadly involves two steps.
Search algorithms
Theoretically, a search space consists of all possible conformations and orientations of the ligand bound to the protein. In actual practice, it is not possible to explore the search space exhaustively, as molecules are dynamic and exist as an ensemble of various conformational states. Therefore most docking programs employ simpler conformational search strategies. Among these, the most widely used is the 'genetic algorithm,' which evolves low-energy conformations, with the score of each bound conformation acting as the fitness function for selecting molecules for the next iteration.



Scoring function
Among the potential protein-ligand conformations generated by a docking program, some are rejected outright due to clashes. The rest are evaluated using a scoring function, which represents favorable binding interactions and ranks the conformations relative to each other. The scoring function is generally expressed as a cumulative binding free energy that takes into account all other contributions, including solvent effects, conformational changes, etc. A lower (more negative) binding energy indicates a more stable system and a more likely binding interaction.

Protein-ligand docking
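As a small illustration of the HOMO-LUMO bookkeeping described above, the following sketch locates the gap from a list of orbital energies. The numbers are invented for the example; a real calculation would take them from the DFT output.

# Hypothetical orbital energies in eV, sorted from lowest to highest.
orbital_energies = [-18.2, -14.7, -11.3, -9.8, -6.1, -0.9, 1.4, 3.2]
n_electrons = 10  # a closed-shell molecule: 2 electrons per occupied orbital

homo_index = n_electrons // 2 - 1      # index of the highest occupied orbital
homo = orbital_energies[homo_index]
lumo = orbital_energies[homo_index + 1]

gap = lumo - homo
print(f"HOMO = {homo} eV, LUMO = {lumo} eV, gap = {gap:.1f} eV")
# A smaller gap suggests a more reactive (and less stable) molecule.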

REFERENCES
Belaidi, S., Mellaoui, M., (2011), “Electronic Structure and Physical-Chemistry Property Relationship for Oxazole Derivatives by Ab-Initio and DFT Methods,” Organic Chemistry International, 2011, 7 pages.
Hohenberg, P., Kohn, W., (1964), “Inhomogeneous Electron Gas,” Physical Review, 136, B864.
Ghosh, D.C., Jana, J., (1999), “A study of correlation of the order of chemical reactivity of a sequence of binary compounds of nitrogen and oxygen in terms of frontier orbital theory,” Current Science, 76, 570-573.


Bhattacharya, A. et al., (2010), “Crystal structure and electronic properties of two nimesulide derivatives: A combined X-ray powder diffraction and quantum mechanical study,” Chemical Physics Letters, 493, 151-157.
Delley, B., (1990), “An all-electron numerical method for solving the local density functional for polyatomic molecules,” The Journal of Chemical Physics, 92, 508.
Perdew, J.P. et al., (1996), “Generalized Gradient Approximation Made Simple,” Physical Review Letters, 77, 3865.
Becke, A.D., (1988), “Density-functional exchange-energy approximation with correct asymptotic behaviour,” Physical Review A, 38, 3098.
Lee, C. et al., (1988), “Development of the Colle-Salvetti correlation-energy formula into a functional of the electron density,” Physical Review B, 37, 785-789.
Lengauer, T., Rarey, M., (1996), “Computational methods for biomolecular docking,” Current Opinion in Structural Biology, 6, 402-406.
Wei, B.Q. et al., (2004), “Testing a flexible receptor docking algorithm in a model binding site,” Journal of Molecular Biology, 337, 1161-1182.
Murcko, M.A., (1995), “Computational Methods to Predict Binding Free Energy in Ligand Receptor Complexes,” Journal of Medicinal Chemistry, 38, 4953-4967.


THE STANDARD MODEL OF PARTICLE PHYSICS Author: Krishnasri Gollakota

What are the fundamental building blocks of the universe, the ones that make up you, me, the galaxies and everything else? “What is matter made of at the most fundamental level?” is a question we have been pursuing for a long time! In this article, we will talk about the Standard Model of particle physics, viewed as one of the most successful theories of all time. The Standard Model describes how everything in the universe is made up of twelve different types of matter particles, interacting through three forces, all bound together by a rather special particle called the Higgs boson. Note that there is a fourth fundamental force, gravity, which is not included in the Standard Model. Therefore, either the Standard Model is an incomplete theory or gravity is an emergent force, i.e., not a fundamental one. One can say that all matter is made out of three kinds of elementary particles: leptons, quarks, and mediators. For now, these can be viewed as the actors of a drama, each with peculiar characteristics.


There are six 'flavors' of quarks (up, down, charm, strange, top and bottom) and six leptons (electron, electron neutrino, muon, muon neutrino, tau and tau neutrino). Everything we see around us can be reduced to just three matter particles: the electron, the up quark and the down quark. The proton and neutron each contain three quarks: the proton has two up quarks and a down, while the neutron has two down quarks and an up. Protons and neutrons together make up the nucleus, which with the electrons makes up the atom. And atoms make up molecules, and so on... Next, neutrinos. Neutrinos are not like the particles mentioned above; they are extremely light and barely interact with anything else. As you are reading this, around 100 trillion neutrinos are passing through your body. So far we have discussed four matter particles. But Nature has a twist in the plot! There exist two heavier generations, or copies, of the above-mentioned particles. We don't see these second and third generations in everyday life because, being unstable, they quickly decay to the first generation, but they do exist (as experimental observations confirm). Now, how could we miss the forces, which are among the most important actors of this drama, and without which the drama would be quite boring to watch? In the Standard Model of particle physics, we have the strong force, the weak force and the electromagnetic force, and there are particles called bosons which act as force carriers. The electromagnetic force acts on anything that carries electric charge (it will not act on neutrinos, because neutrinos are electrically neutral); the particle associated with this force is the photon. The strong force acts on quarks and holds together the nuclei of atoms; the particle associated with it is the gluon. The weak force acts over subatomic distances; it governs the decay of unstable subatomic particles and also initiates the nuclear fusion reaction that fuels the stars. The associated particles are the W and Z bosons. The whole cast is summarized in the short listing below.
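For reference, here is the matter content and the force carriers described above, arranged as a few lines of code (standard textbook content, nothing beyond what the text already lists):

# The Standard Model's matter particles, generation by generation.
quarks = [
    ("up", "down"),        # generation 1
    ("charm", "strange"),  # generation 2
    ("top", "bottom"),     # generation 3
]
leptons = [
    ("electron", "electron neutrino"),
    ("muon", "muon neutrino"),
    ("tau", "tau neutrino"),
]
force_carriers = {
    "electromagnetic": ["photon"],
    "strong": ["gluon"],
    "weak": ["W boson", "Z boson"],
}

for gen, (q, l) in enumerate(zip(quarks, leptons), start=1):
    print(f"Generation {gen}: quarks {q}, leptons {l}")
print("Force carriers:", force_carriers)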
Next, there are some problems with the Standard Model. As quoted in Quanta Magazine, “The Standard Model has been a boon for physics, but it's also had a bit of a hangover effect.” We discussed three fundamental forces, while there are actually four. The most obvious force, gravity, is not part of the Standard Model of particle physics (which is one reason why we have no idea how to incorporate the general theory of relativity into the quantum world). We are also unable to account for several major features of the wider universe, including the action of gravity at short distances and the presence of dark matter and dark energy. Physicists would like to move beyond the Standard Model to an even more encompassing physical theory. But, as the physicist Davide Gaiotto put it, the glow of the Standard Model is so strong that it's hard to see beyond it. Yet the quest continues!

REFERENCES
https://www.quantamagazine.org/a-video-tour-of-the-standard-model-20210716/
https://youtu.be/Unl1jXFnzgo


CHANGING THE PARADIGM: FROM A CHILDHOOD DREAM TO THE FIRST PERSON ON MARS Author: Elsa Shiju

At the age of three, I decided to become an astronaut and go to Mars. At thirteen, I am an Astronaut Trainee. “How is that possible?” you might ask. Well, I changed the paradigm. A simple decision that I made when I was three years old sparked an idea that I never lost track of, and ever since that moment I have dedicated my life to training, learning, teaching and taking all the steps required to make this dream a reality. At the age of seven, I attended my first Space Camp at Huntsville, Alabama. Over the last few years, I have been working on my scuba certification, to begin feeling the sensations of being in an environment without breathable air, a little like the vacuum of space. My next steps are to continue my education, train with Project POSSUM, work toward my private pilot license and obtain my skydiving license. As you can see, throughout all of these different experiences, I've been trying to change the paradigm. A paradigm is a typical pattern or model of how something is done. For instance, the paradigm for becoming an astronaut is getting a bachelor's degree, working in a particular career for several years, applying to the astronaut selection process and completing basic astronaut training.
Then you have to wait to be assigned a mission, complete the training just for that mission, and finally you can go to space. Why is it that our youngest American astronaut is twenty-eight years old, and the youngest astronaut ever is twenty-five? If a kid has an interest in a career, they can begin studying it and learning how to pursue it. Think about it: when you begin studying a topic, you go on to pursue your bachelor's, then your master's, and eventually your PhD, so a person will have studied that topic for eight to ten years. If a kid gets interested in a career topic at seven and studies and works hard toward it, what's to say that ten years later, at seventeen, they can't have achieved it? In my journey I have learned many lessons, and I would like to share three of them with you.
Putting in the hard work
Each one of us has a passion and drive for something. Finding that passion is only the first step; more importantly, you have to be willing to put in the hard work. Dreams are special in the sense that they can't be bought and they can't be given to you: you have to want them. A sixteen-year-old recently got a phone call from NASA offering him a paid internship and a guaranteed job in the next couple of years. His interest was rocketry; ever since he was little, he worked to become the president of his rocketry club, built as many rockets as he could from scratch, and put in the hard work. The hard work you put in to follow your dream will not go unnoticed and can bring you closer to your dream than you might have thought. So what about you? When can you start putting in the hard work? And how can you do it now, to one day achieve your dreams?
Sacrifice
As part of my training, I have learned that with the hard work comes some sacrifice. Many of the opportunities and trainings I have taken along the way have fallen during the school week or clashed with fun activities. Things such as video games or attending an event may sound a lot more appealing; with any job, the play is always going to sound more interesting than the work. When talking about our dreams, it's important to stick with them and to find the right balance between buckling down to put in the hard work and taking time to relax and enjoy your free time. This balance is essential in following your dream, and there will be sacrifices along the way.
The sacrifices I have made have brought my dream closer to me, and without them, who knows? My dream could have faded away. Are you willing to make those sacrifices to eventually reach your dream and your goals?
Never give up
I sometimes wonder what was going through my little brain at three, when I decided that becoming an astronaut was exactly what I was going to do. Choosing to become an astronaut, with the goal of going to Mars, was absolutely the craziest career option I could ever have picked. However, I was not knocked down by this; I was not discouraged. Because of the hard work I'd put in, the support from my family and everyone around me, and never giving up, I am where I am today. And it matters, because now the Mission to Mars is becoming more and more of a reality. So with your dream, no matter how crazy it might sound at the moment or how far-fetched it might be, if you continue and never give up on it, it could become a reality. This idea of changing the paradigm is about bringing jobs that appear impossible within our reach. This generation is definitely changing the world, breaking out of the paradigms that we all find ourselves following. By encouraging kids to keep breaking these paradigms, we can make a difference in the world. Never stop dreaming, never give up, and don't be afraid to talk about your dreams and tell people what you really want to do. This is a time of change, a time to explore, a time to evolve. Change the paradigm, change the future of the next generations!


VELOCITY OF FALLING OBJECTS WITH NON-NEGLIGIBLE AIR RESISTANCE Author: Leonhard Nagel

We all know what the velocity of a free-falling object is when air resistance is negligible. But what if it's not?

ABSTRACT
For many falling objects, air resistance is assumed to be negligible; however, in many cases this assumption is not valid. A function relating velocity to time, taking drag into account, can then be used to better estimate the velocity of a falling object. In an idealized description, two forces act upon a falling object: weight and drag. These two forces can be assumed to act in opposite directions. This means that a simple equation for the acceleration, based on Newton's second law of motion, can be found, which can then be manipulated to yield an expression for the velocity.

DERIVATION
Our goal is to find a function relating velocity $v$ to time $t$ in terms of the mass of the object $m$, the gravitational acceleration $g$ and the proportionality constant $z$ between the velocity squared and the drag. Here we will assume that the only two forces acting on the object are weight ($F_G$) and drag ($F_D$). The magnitudes of these two forces are given, respectively, by

$$F_G = mg, \qquad F_D = zv^2, \qquad \text{where } z = \frac{C_D A \rho}{2},$$

with $C_D$ the drag coefficient, $A$ the reference area of the body and $\rho$ the density of the medium. Newton's second law of motion can be stated as the following relationship between net force $F_{net}$, mass $m$ and acceleration $a$:

$$F_{net} = ma \quad\Rightarrow\quad a = \frac{F_{net}}{m}$$

For the falling object considered here, drag and weight act in opposite directions, so the net force equals the difference between the weight and the drag force. Note that for the purposes of this calculation, the positive direction is taken to be toward the surface of the Earth:

$$F_{net} = F_G - F_D = mg - zv^2$$

This expression can now be substituted into Newton's second law:

$$a = \frac{mg - zv^2}{m}$$

The acceleration $a$ can be rewritten as the derivative of the velocity with respect to time, $\frac{dv}{dt}$, which yields the following differential equation:

$$\frac{dv}{dt} = \frac{mg - zv^2}{m}$$

This differential equation is separable: all terms involving $v$ can be moved to the left side, leaving everything else on the right. The left side is then integrated with respect to $v$ and the right side with respect to $t$:

$$\int \frac{1}{mg - zv^2}\,dv = \int \frac{1}{m}\,dt$$

The integral on the right-hand side can be evaluated with relative ease. Note that we assume the initial velocity and the initial time to be zero, so we can ignore the limits of the integrals. To evaluate the integral on the left-hand side, the constant factor $\frac{1}{mg}$ is first pulled out of the fraction:

$$\frac{1}{mg}\int \frac{1}{1 - \frac{z}{mg}v^2}\,dv = \frac{t}{m}$$

The $\frac{z}{mg}$ in the denominator can be absorbed into the $v^2$ term to yield

$$\frac{1}{mg}\int \frac{1}{1 - \left(v\sqrt{\frac{z}{mg}}\right)^2}\,dv = \frac{t}{m}$$

The integral on the left-hand side can be simplified by performing a $u$-substitution, where the function $u$ is defined as $u = v\sqrt{\frac{z}{mg}}$:

$$\frac{du}{dv} = \frac{d}{dv}\left(v\sqrt{\frac{z}{mg}}\right) = \sqrt{\frac{z}{mg}} \quad\Rightarrow\quad du = \sqrt{\frac{z}{mg}}\,dv$$

Inside the integral, the fraction is multiplied by $\sqrt{\frac{mg}{z}}\sqrt{\frac{z}{mg}} = 1$:

$$\frac{1}{mg}\int \frac{1}{1 - \left(v\sqrt{\frac{z}{mg}}\right)^2}\sqrt{\frac{mg}{z}}\sqrt{\frac{z}{mg}}\,dv = \frac{t}{m}$$

Here $v\sqrt{\frac{z}{mg}}$ can be substituted by $u$, and $\sqrt{\frac{z}{mg}}\,dv$ by $du$; the constant $\sqrt{\frac{mg}{z}}$ is brought out of the integral and combined with $\frac{1}{mg}$:

$$\frac{1}{\sqrt{zmg}}\int \frac{1}{1 - u^2}\,du = \frac{t}{m}$$

The following identity, valid for $|x| < 1$, can now be used to evaluate the integral on the left-hand side:

$$\frac{d}{dx}\tanh^{-1}x = \frac{1}{1 - x^2} \quad\Rightarrow\quad \tanh^{-1}x = \int \frac{1}{1 - x^2}\,dx$$

Thus, using the identity with $x = u$,

$$\frac{1}{\sqrt{zmg}}\tanh^{-1}u = \frac{t}{m}$$

Substituting back $u = v\sqrt{\frac{z}{mg}}$ gives

$$\frac{\tanh^{-1}\left(v\sqrt{\frac{z}{mg}}\right)}{\sqrt{zmg}} = \frac{t}{m}$$

Both sides can now be multiplied by $\sqrt{zmg}$ (note that $\frac{\sqrt{zmg}}{m} = \sqrt{\frac{zg}{m}}$), after which the hyperbolic tangent is applied to both sides:

$$v\sqrt{\frac{z}{mg}} = \tanh\left(\sqrt{\frac{zg}{m}}\,t\right)$$

Finally, we get

$$v(t) = \sqrt{\frac{mg}{z}}\,\tanh\left(\sqrt{\frac{zg}{m}}\,t\right)$$
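As a sanity check on this result (not part of the original derivation), the short Python sketch below evaluates the closed form and compares it with a direct numerical integration of dv/dt = g - (z/m)v^2; both approach the terminal velocity sqrt(mg/z). The parameter values are arbitrary.

import math

M, G, Z = 0.145, 9.81, 0.0012  # mass (kg), gravity (m/s^2), drag constant z (kg/m)

def v_closed_form(t):
    # v(t) = sqrt(m*g/z) * tanh(sqrt(z*g/m) * t), as derived above
    return math.sqrt(M * G / Z) * math.tanh(math.sqrt(Z * G / M) * t)

def v_euler(t_end, dt=1e-4):
    # Direct integration of dv/dt = g - (z/m) v**2 with forward Euler steps
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += (G - (Z / M) * v**2) * dt
    return v

print(v_closed_form(5.0))    # analytic value at t = 5 s
print(v_euler(5.0))          # numerical value; should agree closely
print(math.sqrt(M * G / Z))  # terminal velocity, about 34.4 m/s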


GRAPE PLASMA: FINALLY EXPLAINED Author: Aniruddha Sharma

INTRODUCTION
Did you know that the plasma state of matter can be produced in our homes? All we need is a grape and a microwave oven. First, we slice the grape so that a thin isthmus of skin still holds the two halves together. Then we place it in the microwave oven and irradiate the grape hemispheres. In this way, we can create plasma. We don't necessarily need grapes; hydrogel water beads give the same results. The usual explanation was that the skin acted as a short dipole antenna and that the conducting, ion-rich skin 'bridge' played an important role, but a correct explanation was not known until a recently published paper.

THE GRAPE PLASMA PHENOMENON
When the grape is placed in the microwave oven, the key thing is to consider how large the wavelength is inside the grape. The wavelength of the microwaves inside the oven is about 12 cm, whereas it is approximately 1.2 cm inside the grape. The wavelength is therefore roughly ten times shorter in the grape than in air, as the short calculation below illustrates.
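A quick back-of-the-envelope check (the refractive index of water at microwave frequencies is taken here to be roughly 10, an approximation):

C = 2.998e8   # speed of light in vacuum, m/s
F = 2.45e9    # standard microwave-oven frequency, Hz
N_WATER = 10  # approximate refractive index of water at 2.45 GHz

wavelength_air = C / F                       # about 0.122 m, i.e. 12 cm
wavelength_grape = wavelength_air / N_WATER  # about 1.2 cm, a grape's diameter

print(wavelength_air, wavelength_grape)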



When we place the grape in the microwave oven and pass microwaves through it, the waves get trapped inside the grape. This is because the grape's size is comparable to the wavelength inside it and its watery interior has a high refractive index.
Total internal reflection
Once trapped, the microwave energy bounces around inside the grape. Unable to escape through the grape's surface, it sets up resonant modes: the fields oscillate inside the grape and generate the maximum electromagnetic field at its center.

We expect the grape to heat from the outside, but in reality it heats from the inside, as we can see above
Intersection of two grapes
When we bring the two grape halves together and pass microwaves through them, the strongest oscillating electromagnetic field forms at the point where the grapes touch. We can catch sight of sparks at this intersection point, i.e., the area with the strongest electromagnetic field. This is because the strong electric field ionizes the air, creating sparks, and these sparks lead to the formation of plasma. The ions that are generated then gain further energy from the microwaves.


APPLICATIONS
This phenomenon could find use in the fabrication of semiconductor microchips and in lithography techniques.

REFERENCES
https://www.pnas.org/content/116/10/4000
https://physicsworld.com/a/grape-plasma-phenomenon-explained-at-long-last/


THE PHYSICS BEHIND OXIMETERS Author: Samiha Sehgal

What is an oximeter?
An oximeter, or pulse oximeter, is a medical device that measures the oxygen saturation of a person's blood, changes in blood volume, and the pulse rate, using red and infrared light. Oxygen saturation is simply the percentage of the available hemoglobin in the blood that carries oxygen. A regular oximeter consists of light-emitting diodes (LEDs) as the light source, mounted opposite light detectors or sensors. The device takes the form of a clip that can easily be attached to a finger. An illustration of what has been described so far is shown below.

A typical oximeter
In the current global pandemic, oximeters are extremely useful for detecting infections early, since low arterial oxygen saturation can initially go unnoticed.


Advantages
Oximeters are a non-invasive method for determining oxygen saturation and pulse rate. They can be used anywhere: in intensive care units, by pilots in unpressurised aircraft, and even by mountain climbers. Thanks to their size and simplicity, these devices are very compact and easy to carry and use. The LEDs used are cheap and easily accessible; they emit light of accurate wavelengths and do not heat up easily during use.
Working and functionality
Oxygen saturation is the percentage of the available hemoglobin that carries oxygen. When oxygen is attached to the hemoglobin (Hb) molecule, it is known as oxygenated hemoglobin; when the Hb molecule is without oxygen, it is called deoxygenated hemoglobin. Now, when a finger is placed between the light source and the detector, the finger blocks the path of the light: part of the light is absorbed by the finger and only the remainder reaches the detector. In the figures that follow, arteries, which carry the blood in, are bordered in red, and veins, which serve as a passage for the blood to exit, are shown in blue. The amount of light absorbed by the finger depends on the following factors.
Concentration of the absorbent
What is the light-absorbing substance in your finger? Hemoglobin. The amount of light absorbed is directly proportional to the Hb concentration in the blood, as shown below, and is governed by an important law in physics: Beer's law simply states that a more concentrated solution absorbs more light than a dilute one.

Beer's law using pulse oximetry



Length of the path of light
Even if the Hb concentration per unit volume is the same, the width of the artery plays an important role in determining the amount of light absorbed. In the figure below, the artery on the right is wider than the one on the left, so the light has to travel a longer path. Another law summarises this: Lambert's law states that the amount of light absorbed is directly proportional to the length of the path the light travels through the absorbent.
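Taken together, these two statements form the Beer-Lambert law: the absorbance A = εcl is proportional to both the concentration c and the path length l (ε is a constant for the substance). A tiny sketch with made-up numbers:

# Beer-Lambert law: transmitted intensity I = I0 * 10**(-eps * c * l)
def transmitted(I0, eps, c, l):
    return I0 * 10 ** (-eps * c * l)

I0 = 1.0  # incident intensity (arbitrary units); eps, c, l below are made up
print(transmitted(I0, eps=0.5, c=1.0, l=1.0))  # baseline: about 0.316
print(transmitted(I0, eps=0.5, c=2.0, l=1.0))  # doubled concentration (Beer)
print(transmitted(I0, eps=0.5, c=1.0, l=2.0))  # doubled path length (Lambert)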

Lambert's law using pulse oximetry
Amount of red and infrared light absorbed
An oximeter uses red and infrared (IR) light to measure oxygen saturation. Red light has a wavelength of about 650 nm, and IR light (which is not visible to us) has a wavelength of almost 950 nm. If we take a light source of varying wavelength (i.e., not monochromatic) and pass its light through oxygenated Hb, the graph given below is observed. (Graphs are not drawn to scale.)

Oxygenated Hb absorbing lights of different wavelengths



As can be seen from the graph, oxygenated Hb absorbs more light at 950 nm, i.e. infrared light, than red light. Deoxygenated Hb also absorbs different wavelengths differently, but it absorbs more red light (650 nm) than infrared light, as can be seen from the graph below.

Deoxygenated Hb absorbing lights of different wavelengths
Thus, oxygenated Hb absorbs more IR light than red light, and deoxygenated Hb absorbs more red light than IR light. So an oximeter compares the amounts of red and IR light absorbed by the finger, inferred from what reaches the detector, in order to display the oxygen saturation. The more IR light absorbed, the more oxygenated Hb there is and the higher the oxygen saturation. Similarly, if more red light is absorbed, there is more deoxygenated Hb, resulting in a lower oxygen saturation percentage. However, there are some important factors we must consider.
Calibration adjustment
Beer's law and Lambert's law only hold true if light travels straight from the source to the detector, as shown on the left of the figure below. This does not happen in reality: as the light passes through the blood, it interacts with red blood cells, white blood cells, plasma and platelets, which obstruct its path, as shown on the right of the figure below.


To resolve the potential errors this causes, a test oximeter is used on a person who is asked to breathe in extremely low concentrations of oxygen; the oxygen levels, however, are not allowed to drop below 75-80%. While doing so, blood samples are taken at intervals and their readings are compared with those shown on the oximeter. The errors shown by the oximeter are taken into account and a calibrated graph is made. A copy of this corrected graph is stored inside the device, and the computer refers to it before displaying the final reading. Because oxygen levels are never made to fall below 75-80% during calibration, the oximeter is not very accurate for readings lower than that.
Plethysmography
Usually, along with the oxygen saturation percentage, an oximeter also displays the quality of the pulsatile signal as a plethysmographic graph. This is more important than the percentage alone, because with slight changes or movement the percentage values can be inaccurate and give false information. The pulsatile-signal graph gives a more reliable picture and should be referred to at all times.
Ambient light
The oximeter contains a red LED and an infrared LED. In addition to these, there is a third source of (unwanted) light: the light present in the surroundings. An important fact to know is that the device does not switch on both LEDs together. The red light passes through the finger first; it is then switched off, and only then is the IR light switched on. This process repeats continuously. So the oximeter first switches on the red light, causing both red and ambient light to reach the detector.
The red light is then switched off and the same process takes place, now between IR and ambient light. Next, the oximeter switches off both LEDs, allowing only the ambient light to fall on the detector. This gives a measurement of the (pure) ambient light, which is subtracted from the previous mixed readings, leaving only the red and infrared signals (a short code sketch of this cycle appears at the end of this section). However, the ambient light should not be too strong, or it will drown out the light from the LEDs.
Limitations
Small signal strength
An oximeter is able to analyse only about 2% of the total light it receives at the detector. The device is therefore extremely sensitive, and even the slightest movement of the person can result in very different readings.
Optical shunting
The light is supposed to pass through the arteries for correct detection. However, if the probe is the wrong size or the finger is placed incorrectly, the light, instead of passing through the artery, goes past its side, shunting the artery. Naturally, this results in false readings.
Electromagnetic interference
If an oximeter is used in the vicinity of medical devices that rely on electromagnetic waves (an MRI machine, an X-ray machine, diathermy, etc.), the strong fields they emit induce small currents that are 'read' by the oximeter as if they came from the detector.
Poor peripheral perfusion
In conditions such as hypotension, the arteries are not very pulsatile, so the pulsatile absorption signal is much weaker, and the oximeter may find the signal inadequate to display results correctly.
Hyperoxia
Hyperoxia is a medical condition in which the body receives oxygen in excess, which can in some cases be harmful. Usually, oxygen is attached only to hemoglobin, but some additional oxygen can also dissolve in the plasma. The oximeter is unable to take this extra oxygen into account and will not display it, so people with hyperoxia can't rely on an oximeter for correct readings.

Dyes and color
Dyes such as methylene blue, or nail polish on the fingernails, can act as an artificial barrier to the light and artificially lower the oxygen saturation reading.

Carboxy-hemoglobin
Carbon monoxide (CO) combines with hemoglobin to form carboxy-hemoglobin. The oximeter can't distinguish carboxy-Hb from oxygenated Hb, so it reports a falsely high oxygen saturation. Carboxy-Hb carries no oxygen and is harmful.

CONCLUSION
In conclusion, oximeters are useful devices, especially during the present COVID-19 situation. Many doctors advise patients to keep an oximeter at hand, and patients being treated for the infection at home are often provided with one to help monitor their oxygen levels.

REFERENCES
https://en.wikipedia.org/wiki/Pulse_oximetry
https://www.howequipmentworks.com/pulse_oximeter/

THE AUTOMOBILE REBIRTH: ELECTRIC AND AUTONOMOUS
Author: Sarthak Jain

Every day, millions of tons of carbon emissions are released into the air, in India and across the world, by internal combustion engine (ICE) vehicles. This has become a contributing factor in global climate change, and the air pollution it feeds contributes to millions of lost lives. But, as there is a solution to every problem, there is a solution to this one too. Electric vehicles (EVs) run on an electric motor instead of the conventional internal combustion engine that relies on fuels and gases. They are seen as the most viable replacement for our life-endangering ICE vehicles, which drive global warming and rising pollution. Electric vehicles have attracted long-awaited attention in the past decade because of the rising carbon footprint and other environmental impacts of fuel-based vehicles. For the first time in the history of the automobile industry, ICE vehicles face a challenge that could completely wipe them out. Electric vehicles are safer, more reliable, quieter and lighter. Above all, they are a green technology that spares their owners the guilt of harming the Earth. With so many benefits, wouldn't you buy an EV? Unfortunately, ask this question in India and the answer would usually be NO. "The world has turned the corner on tobacco. Now it must do the same for the 'new tobacco' – the toxic air that billions breathe every day," said Dr Tedros Adhanom Ghebreyesus, the WHO's director-general. "No one, rich or poor, can escape air pollution. It is a silent public health emergency." According to a report in Bloomberg, India is home to 14 of the world's most polluted cities; two-thirds of the world's most polluted cities are, unfortunately, in India. ICE vehicles bear much of the blame, and their number on Indian roads is increasing by leaps and bounds. In 2013, India unveiled the National Electric Mobility Mission Plan (NEMMP) to make electric vehicles part of its huge automobile industry and to address vehicular emissions, including subsidies and infrastructure for electric vehicles. But, sad to say, this policy remained only on paper.

The major problem with electric vehicles in India is their high price. The average cost of a car in India is around $10,000, but the average cost of an electric car is around $50,000: five times what an Indian consumer would typically spend. Heavy government duties and regulations further discourage consumers from considering an EV. Charging infrastructure in India is also almost nil: currently there are only about 150 charging stations in the world's seventh-largest country. These factors deter a rational consumer from even considering an electric vehicle. Electrification has only just begun in India. The central government has recently ordered around 10,000 electric vehicles, and many state governments are taking their own initiatives. The Delhi government has launched its Electric Vehicle Policy 2020, which aims to make Delhi a world leader in the field of electric vehicles, and the Tamil Nadu government is targeting a 600 million investment to build its electric vehicle ecosystem. But there is still a lot that needs to be addressed. To bring electric vehicles onto Indian roads, the government should provide heavy subsidies on e-vehicles, especially two-wheelers, reduce duties, and encourage foreign companies to invest in the Indian electric vehicle ecosystem. Models designed around India's price-sensitive consumers could also boost EV sales. Electric vehicles would additionally help India reduce its dependence on foreign oil.

The world has made a lot of progress in the electric vehicle ecosystem during the last decade. Tesla, an electric-vehicle maker, has become the world's most valuable automaker. In Norway, electric vehicles are breaking all the sales records. In the U.S., automakers like Tesla and Faraday Future are working relentlessly to develop new technologies that would give electric vehicles a competitive edge.
Undoubtedly, the future of EVs seems bright. Electric vehicles could be the driving force that curbs the spread of the 'new tobacco.' But as we look toward the future, the automobile industry is gearing up for yet another change: the introduction of self-driving vehicles. Can you imagine a car driving by itself, without any human input? Autonomous vehicles, or self-driving vehicles as they are usually called, combine a variety of sensors to perceive their surroundings, such as radar, lidar, sonar, GPS, odometry and inertial measurement units. Advanced control systems interpret this sensory information to identify appropriate navigation paths as well as obstacles and relevant signals. Almost every major automobile company, including Volkswagen, Tesla and Toyota, is pumping billions of dollars into research and development of self-driving vehicles, and non-automotive tech companies such as Google and Sony are showing interest too. Waymo, Google's self-driving car project, is one of the front-runners to develop the first self-driving car for public use; it has even started letting people ride its self-driving taxis in some areas of Phoenix, Arizona.

The level of automation of these vehicles is classified into six stages, with level zero being the lowest and level five the highest. At level zero, the vehicle merely issues precautionary warnings, with only very rare, small interventions; almost all the cars we drive today are at this stage. The best examples are the vehicle beeping when a seat belt is not worn or when it comes closer than a preset distance to another object.
Then comes level one, in which the driver and the vehicle share some of the controls, though the steering wheel must always remain under the driver's control. Cruise control, in which the vehicle maintains a constant speed on open roads, and automatic parking belong to level one, and many luxury cars boast these features. Then comes level two, for which Tesla's beloved electric cars are famous. Here the driver can take his hands off the steering wheel: the automated system takes full control of accelerating, braking and steering, but the driver must always be ready to take back control in an emergency. In many level-two vehicles, cameras monitor the driver's eyes to check that he is paying attention to the traffic. This is also the highest stage available to the general public so far. Level three is where the fun begins. At this stage the driver can take his eyes off the road and safely shift his attention away from the driving task; he can even watch a movie or text someone. The vehicle can perform emergency tasks such as emergency braking, and it will tell the driver when he needs to take over, for example on busy roads. After this comes level four, at which the driver can even keep his mind off the car, i.e. he can sleep or leave the driver's seat, though it works only in certain geographical conditions; the best example would be a robotic delivery service. Finally there is level five, which various companies are trying to achieve. At level five, driving is entirely optional for the human: no intervention is required at all, the vehicle works on all roads, and it will never ask for help. It does require a sophisticated network of devices and the internet. This is the stage at which even human drivers become a thing of the past. In the first three stages the human has primary responsibility; in the last three, the automated system does, as the sketch below summarises.
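As a compact summary of this six-level taxonomy, here is a small sketch in Python. The wording of each level paraphrases the descriptions above; the class and field names are invented for illustration and are not an official SAE data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    description: str
    responsibility: str  # who has primary responsibility for driving

# The six levels of driving automation, as described in the text above.
LEVELS = [
    AutomationLevel(0, "warnings only (seat-belt beeps, proximity alerts)", "human"),
    AutomationLevel(1, "shared control (cruise control, automatic parking)", "human"),
    AutomationLevel(2, "hands off, but driver must stay alert", "human"),
    AutomationLevel(3, "eyes off; car asks the driver to take over when needed", "system"),
    AutomationLevel(4, "mind off, but only within limited geographic conditions", "system"),
    AutomationLevel(5, "full automation on all roads, no human input at all", "system"),
]

for lv in LEVELS:
    print(f"Level {lv.level}: {lv.description} (responsibility: {lv.responsibility})")
```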

Various governments are also pushing companies and providing incentives to develop autonomous vehicles, which would bring a new set of advantages for society. Millions of people lose their lives in fatal vehicle accidents, but with the introduction of self-driving vehicles this number could fall dramatically: driving is controlled by a system, which removes human error and leaves little room for destruction. The vehicles can also coordinate with each other, lowering the chances of accidents further; since they are coordinated, they also reduce congestion on the road. By continuously sharing their locations, they help predict traffic problems and handle road detours instantly, and they could even pick up hand signals from motorcyclists and react accordingly. And these are not the only benefits. Autonomous vehicles offer stress-free parking: they can drop you off at your destination and head directly to a vacant parking spot. They also save humans a lot of time: since the vehicle does the driving, passengers can attend a meeting, continue working or spend quality time catching up with family without fearing for road safety. They would even let the disabled or elderly, who are unable to drive, travel hassle-free. Just as every coin has two sides, autonomous vehicles have their own set of disadvantages. They use sophisticated and expensive equipment, requiring the finest software and sensors, and large sums have been invested in their research and development, so their cost may initially be very high; we can only hope to see them in the hands of the common consumer in a couple of decades. They may also be prone to a serious problem that concerns the current generation: hacking. Since these vehicles operate over the internet, they could be the next target of hackers, and a hacked vehicle could behave in unpredictable ways. They may also collect their owners' personal data. And, as with any technology, there is always the chance of a glitch. Above all, there is another big problem: the loss of millions of jobs. Most professional drivers would lose their work, and unemployment could rise by leaps and bounds. Even so, the advantages of these vehicles outweigh their disadvantages. Deaths on roads could fall to nearly zero, harmful emissions would be reduced significantly, and human productivity would rise as travel time is freed. The combination of electric and autonomous vehicles makes the future of driving look really bright. They will transform the way we look at our cars.

QUANTUM IMMORTALITY: COULD YOU SURVIVE 50 GUNSHOTS?
Author: Elizabeth Field

Quantum immortality seems almost oxymoronic in terminology. Quantum physics is scientific, mathematical, (dis)provable. Immortality, on the other hand, is an inconceivable idea, implausible in any realm of physical, chemical or biological science. Fused together, however, the two words describe a mind-blowing concept. Quantum immortality, also known as the quantum suicide experiment, is a thought experiment formulated by Hans Moravec and Bruno Marchal, two scientists from the robotics and health-care fields respectively, and later also proposed by other scientists such as Max Tegmark. It is a direct consequence of Hugh Everett's many-worlds interpretation, which states that the possibilities that do not take place in this universe must take place in some parallel universe. Despite Moravec's and Marchal's lack of background in quantum mechanics, the basic principles they built the theory upon hold, and remain consistent with subsequent studies in quantum physics. The quantum immortality theory mirrors what is probably the best-known quantum physics thought experiment: Schrödinger's cat, which presents phenomena occurring at the subatomic level as perceptible events that even people without doctorates in quantum physics can understand. Basically, you place a cat, a flask of poison and a radioactive source in a sealed steel box.
The idea is that the radioactivity gives a 50% chance of a single atom decaying, which would break the flask, release the poison and kill the cat. The equal chances of the cat dying and remaining biologically intact create what is called a superposition, in which both possibilities happen simultaneously. In quantum physics, particles are suspended in a state of superposition before settling into one option or another, and as far as current understanding goes it is impossible to tell definitively which outcome will result. Hence the superposition of dead and alive states within Schrödinger's box. While Schrödinger intended this as a paradoxical thought experiment rather than a real possibility, Moravec and Marchal took his work a step further, giving birth to quantum immortality. If a cat could theoretically be both dead and alive, why couldn't humans, or any other being for that matter? Hypothetically, they could. Imagine a gun; not your average gun, but one triggered by an electron's spin. If the electron spins clockwise, the gun fires and you are dead. If the electron spins counterclockwise, the gun does not fire and you live. According to quantum physics, it is impossible to know which way the electron is spinning, so the two options give a 50/50 chance at life and a 50/50 shot at death. The moment comes. You are about to be either lucky and alive or unlucky and dead. Just kidding! You are now both alive and dead. However, the mind can't fathom death, an unfamiliar realm, so even if you existed in both life and death, you would not know it; your mind ends up alive, settling into the counterclockwise-spin option. You conduct a second trial, again with 50% probability of each outcome. The gun goes off, and you are once again suspended in a superposition, settling mentally into consciousness and life. This happens a third time, then a fourth, then a fifth, and so on until you decide to stop pointing a quantum-mechanically triggered gun at yourself (although the theory technically says 50 times). In essence, you are creating a branch in the multiverse, itself a far-out and highly contested idea. In one universe you have died, and everyone around you sees that you have died. In another, you are alive and thriving, probably buzzing to tell someone that you have just proved quantum immortality by beating the odds and surviving 50 gunshots. Well, sort of. But what's to say it's not just dumb luck, literally beating the odds? Just like quantum immortality, that COULD happen, and it is a far more plausible explanation to the immortality non-believers frequenting society.
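Just how unlikely is surviving by luck alone? Assuming 50 independent trials at even odds, a couple of lines of Python give the number:

```python
# Probability of surviving 50 independent 50/50 trials by pure chance.
p_survive = 0.5 ** 50
print(p_survive)      # ~8.9e-16
print(1 / p_survive)  # ~1.1e15, i.e. about 1 in 1.1 quadrillion
```

Under the many-worlds reading, that one-in-a-quadrillion branch is guaranteed to exist somewhere; under the ordinary reading, it is merely absurdly improbable.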

So, you decide to prove them wrong. You will conduct the experiment again, and then show them your survival through another round of incredibly minute odds. Except there are two problems. One, there are still odds, and, as a psychologically based generalization, people are inherently against revising their beliefs without concrete proof to the contrary, which no one but you has. Two, the universe supposedly splits with each trial, so those who denied your admittedly outlandish theory see you perish, as opposed to the continued life you predicted for yourself; they are unaware that your consciousness lives on in another version of yourself, in an alternate branch of the multiverse. Theoretically, you could do this a million times, and because the universe splits each time to settle the superposition, it would remain impossible to prove. Nor is the quantum immortality theory limited to electron spins inside quantum guns. It applies to any life-or-death matter, or, in an extreme version, to every decision. If a vital organ is failing, there is a superposition in which the organ failed and one in which it did not, and your consciousness flows through the living option. Even if you die of old age, there is a chance that you did not, and hence you remain intact. In this picture, everything has its 50/50 direct opposite, and consequently you are forced into a superposition with everything. There is no concrete proof that quantum immortality is real or even possible, and some have pointed out that your self in a parallel world is different from your self in this universe. Still, the ability to follow a line of scientific reasoning backing the idea, a theoretical proof of sorts, is remarkable. In any case, we can each resolve the issue for ourselves when we all become the oldest people on Earth in our respective branches of the multiverse.

REFERENCES
https://arxiv.org/ftp/arxiv/papers/0902/0902.0187.pdf
https://interestingengineering.com/a-theory-of-quantum-mechanics-thatsuggests-everyone-is-immortal
https://web.archive.org/web/20070827000748/http://www.higgo.com/quantum/qti.htm

WE SHOULDN'T EXIST: THE MATTER-ANTIMATTER ANNIHILATION
Author: Jay Nolledo

If we were to ask random people how the universe was created, the popular answer would be the Big Bang theory: the one you always see in those thick and confusing science books (the ones you never actually read). The Big Bang theory is the leading explanation of how the universe began; however, some things just don't add up. According to the theory, the Big Bang should have created equal amounts of matter and antimatter in the early stages of the universe. Yet today there is almost no antimatter to be found, and no one knows why. Everything we see on Earth and even beyond it (the stars, galaxies, superclusters) seems to be made of matter, not antimatter. A particle and its antiparticle are identical in mass but opposite in charge. During the early stages of the Big Bang, when our universe was hot and dense, matter-antimatter pairs were randomly popping in and out of existence. They are always created in pairs, and if a particle and its antiparticle come into contact, they annihilate one another, leaving behind nothing but energy. So in theory, we shouldn't exist; everything in the universe should be nothing but leftover energy. But we do exist, right? Why is that? Scientists are eager to explain this asymmetry between matter and antimatter. Studies suggest there are really two options: either the universe was simply born with more matter than antimatter,
or something happened during the early stages of the Big Bang that caused the asymmetry. Both ideas are interesting, but the first is scientifically untestable, since testing it would require recreating the entire universe. The second idea, in contrast, can be probed: it requires replicating the extremely high-energy conditions immediately after the Big Bang, to understand how matter and antimatter particles behaved in such an environment. How exactly do you replicate the high-energy Big Bang environment? The best way is to smash particles together at near-light velocities in a particle accelerator, and numerous experiments have been conducted to further understand the matter-antimatter asymmetry. In the late 1960s, the Russian physicist Andrei Sakharov proposed a set of three conditions necessary for interactions to produce matter and antimatter at different rates, a process called baryogenesis: the creation of more baryons than anti-baryons. The conditions are:
1. C-symmetry and CP-symmetry violation,
2. baryon-number-violating interactions, and
3. interactions out of thermal equilibrium.

C-symmetry and CP-symmetry violation
C-symmetry means replacing particles with antiparticles, while CP-symmetry means replacing particles with mirror-reflected antiparticles. This condition is the easiest to satisfy, since both symmetries are violated in many weak interactions involving strange, charm and bottom quarks.

There must be baryon-number-violating interactions
Baryon-number-violating interactions are obviously needed to produce an excess of baryons over anti-baryons, but that is easier said than done. Experiments show that the balance of quarks to antiquarks, and of leptons to antileptons, is conserved in every process observed so far, even though there is no straightforward conservation law guaranteeing either of those quantities individually.

Interactions out of thermal equilibrium
Lastly, this condition states that the rate of the reactions generating the baryon asymmetry must be less than the expansion rate of the universe. The particles and their antiparticle counterparts then fail to reach thermal equilibrium, because the universe is expanding and cooling while unstable particles (and/or antiparticles) decay. As mentioned earlier, the early universe was an extremely high-energy environment, with enough energy to create every known particle in great amounts as per Einstein's famous equation $E = mc^2$. If particle creation and matter-antimatter annihilation work according to theory, equal amounts of matter and antimatter would be created, all interconverting into one another while the energy of the environment remains extremely high. As the universe expands and cools, the unstable particles created in huge amounts decay, and if Sakharov's three conditions are met, this can lead to an excess of matter over antimatter. Physicists, however, are still working on viable frameworks that could actually produce the observed excess of matter.

Fun fact: did you know that bananas produce antimatter? Studies show that a banana emits a positron (the antimatter counterpart of the electron) roughly once every 75 minutes. This occurs primarily because bananas contain potassium-40, a naturally occurring isotope of potassium; a positron is occasionally ejected as the potassium-40 in the banana decays.
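Two quick checks give a feel for the numbers behind $E = mc^2$ here. The first is the minimum energy needed to conjure a proton-antiproton pair out of pure energy, the kind of pair creation described above; the second is the energy released when the banana's positron annihilates with an electron. Both use only standard particle masses; the sketch is illustrative arithmetic, not tied to any particular experiment.

```python
c = 2.9979e8       # speed of light, m/s
m_p = 1.6726e-27   # proton mass, kg
m_e = 9.109e-31    # electron (and positron) mass, kg

# Minimum energy to create a proton-antiproton pair: E = 2 * m_p * c^2
E_pair = 2 * m_p * c**2
print(f"pair creation: {E_pair / 1.602e-10:.2f} GeV")  # ~1.88 GeV

# Energy released when a positron annihilates with an electron: E = 2 * m_e * c^2
E_annihilation = 2 * m_e * c**2
print(f"annihilation: {E_annihilation / 1.602e-13:.3f} MeV")  # ~1.022 MeV, as two gamma rays
```

Modern colliders reach energies thousands of times the pair-creation threshold, which is what makes them a stand-in for the hot early universe.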

REFERENCES
https://home.cern/science/physics/matter-antimatter-asymmetry-problem
https://www.forbes.com/sites/startswithabang/2019/02/08/theres-almostno-antimatter-in-the-universe-and-no-one-knows-why/?sh=23c0d9fe9c6b
https://www.abc.net.au/news/science/2016-06-23/antimatterexplainer/7487354
https://sites.psu.edu/dfnpassion2blog/2016/02/04/antimatter/

ULTRA COLD ATOMIC SYSTEMS: THE INVESTIGATION AND APPLICATION OF BOSE-EINSTEIN CONDENSATES
Author: Kian Jagtiani

ABSTRACT
The emergence of highly effective cooling and trapping techniques for neutral atoms in the late 1990s was undeniably one of the largest scientific breakthroughs in atomic physics. Bose-Einstein condensates, first theorized by Albert Einstein and Satyendra Bose in the 1920s and first realized experimentally in 1995, enable us to study multitudinous phenomena: to investigate the behavior of atoms at the quantum scale, to make precise measurements, and to pursue many other research opportunities that were otherwise impossible. In constant pursuit of ever colder temperatures, physicists in a laboratory at JILA, a joint institute of the University of Colorado Boulder and NIST, created the first Bose-Einstein condensate at barely 5 nanokelvins. This article revolves around this fifth state of matter. It begins with a concise explanation of what a BEC is, followed by a succinct description of how one can be attained; it then highlights the properties exhibited by BECs and ultimately elaborates on a few of their potential applications.

THE NEWEST STATE OF MATTER: BOSE-EINSTEIN CONDENSATES
Satyendra Nath Bose, an Indian physicist, sent Albert Einstein his work on the behaviour of photons. Bose's work noted that the two classes of submicroscopic particles, bosons and fermions, behave differently. As dictated by Pauli's exclusion principle, fermions naturally tend to avoid one another: no two can share the same quantum state. Bosons, on the other hand, disobey the principle, and Bose reasoned that multiple bosons can share the same quantum state. Building upon Bose's paper, Einstein predicted that when the energy of the particles is decreased to temperatures extremely close to absolute zero (−273 °C), a number of bosons can amalgamate into a single quantum body described by a single wavefunction. Unfortunately, due to the limitations of the technology of the time, it wasn't possible to create an environment colder than the critical temperature, the temperature below which a Bose-Einstein condensate forms.
The critical temperature of an element is dictated by the following equation:

$$T_c = \left( \frac{n}{\zeta(3/2)} \right)^{2/3} \frac{2\pi \hbar^2}{m k_B} \approx 3.3125 \, \frac{\hbar^2 n^{2/3}}{m k_B}$$

where $T_c$ is the critical temperature, $n$ is the particle density, $m$ is the mass per boson, $\hbar$ is the reduced Planck constant, $k_B$ is the Boltzmann constant and $\zeta$ is the Riemann zeta function, with $\zeta(3/2) \approx 2.6124$. In the 1990s, two breakthroughs in the field finally made it possible to cool substances down below their critical temperature: laser cooling and demagnetization. These discoveries won their inventors the Nobel prize and opened up a world of applications related to BECs. As far as we know, there are no naturally occurring BECs in our solar system or beyond, although theoretical physicists believe they could occur naturally in close proximity to neutron stars, where the extremely high pressure could pack particles together so densely that they act like BECs.
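To get a feel for the numbers, the formula above can be evaluated for a typical dilute-gas experiment. The sketch below assumes rubidium-87 atoms at a peak density of $10^{20}$ atoms per cubic metre, a representative value for such experiments rather than the parameters of any specific one:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J*s
k_B = 1.3807e-23    # Boltzmann constant, J/K
zeta_32 = 2.6124    # Riemann zeta(3/2)

m = 87 * 1.6605e-27 # mass of one rubidium-87 atom, kg
n = 1e20            # assumed particle density, atoms per m^3

# T_c = (n / zeta(3/2))^(2/3) * 2*pi*hbar^2 / (m * k_B)
T_c = (n / zeta_32) ** (2 / 3) * 2 * math.pi * hbar**2 / (m * k_B)
print(f"T_c = {T_c * 1e9:.0f} nK")  # a few hundred nanokelvin
```

The answer, a few hundred nanokelvin, shows why the cooling techniques described next had to be invented before a BEC could exist in the lab.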

CONTRIVANCE
There are two main methods that scientists use to cool a substance by such a large amount: Doppler cooling and adiabatic demagnetization.

Doppler Cooling
Doppler cooling is based on the idea that atoms can absorb and emit photons at their resonant frequency. The method uses three pairs of orthogonal laser beams, which together cool the atoms found at their intersection. For the sake of clarity, consider just one of these beams at first. If the energy of the photons in the laser is exactly equal to the difference between the excited and ground states of an atom, the atom can absorb one. Once the absorption has occurred, the atom is in its excited state, from which it returns to its ground state by emitting a photon identical to the one absorbed, but in a completely random direction.
Through this process the atom's momentum changes, decreasing upon each absorption in accordance with the law of conservation of momentum, while the random directions of emission average out to no net push. Since the temperature of a body is simply the average kinetic energy of its atoms, it is safe to say: less momentum = less velocity = less kinetic energy = lower temperature. However, the process described above would only work if all the atoms were travelling directly at the laser; it would not work for gas molecules, which undergo Brownian motion, as some atoms would be sped up instead. This is where the Doppler effect comes into play. If a frequency slightly below the resonant frequency is chosen, all atoms that are stationary or moving in the wrong direction observe a frequency that does not enable them to absorb the photon, whereas atoms moving toward the beam observe a higher, Doppler-shifted frequency and do interact with the photons. The use of three orthogonal pairs of lasers ensures atoms moving in all directions are slowed down.

Adiabatic Demagnetization
Adiabatic demagnetization is a process that exploits the paramagnetic properties of certain materials and can cool them down to a few millikelvins. It is primarily meant for gases that have already been cooled, so it is the step after Doppler cooling in the creation of a Bose-Einstein condensate.

Process
Let X be the substance to be cooled and Y the cold liquid that helps X cool. X is first brought into contact with Y, typically liquid helium, and a magnetic field is applied to X. Once X and Y are in thermal equilibrium, the strength of the magnetic field is increased, producing a more ordered system and hence a decrease in entropy. X is then isolated from Y and the strength of its magnetic field is reduced. This prevents the backflow of heat and results in X becoming cooler. Regular adiabatic demagnetization can cool substances down to approximately 0.001 kelvin. To obtain even lower temperatures, an extremely similar process known as adiabatic nuclear demagnetization is used. The process is beyond the scope of this article, but in essence it makes use of nuclear dipoles rather than atomic ones.
These nuclear dipoles are roughly 900 times smaller and can cool substances down about 1000 times further (0.001 K / 1000 = 0.000001 K, i.e. a microkelvin).
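These temperature scales can be made concrete with the Doppler cooling limit, the lowest temperature the laser-cooling stage alone can reach, set by the standard result $T_D = \hbar \Gamma / (2 k_B)$, where $\Gamma$ is the natural linewidth of the cooling transition. The sketch below plugs in the commonly quoted linewidth of the rubidium-87 D2 line; the article itself does not specify an atom or transition, so that choice is an assumption for illustration:

```python
import math

hbar = 1.0546e-34             # reduced Planck constant, J*s
k_B = 1.3807e-23              # Boltzmann constant, J/K
gamma = 2 * math.pi * 6.07e6  # natural linewidth of the Rb-87 D2 line, rad/s

# Doppler limit: the residual temperature left by random photon recoils
T_D = hbar * gamma / (2 * k_B)
print(f"T_D = {T_D * 1e6:.0f} microkelvin")  # ~146 uK
```

At roughly 146 microkelvin, the Doppler limit sits far above the few-hundred-nanokelvin critical temperature computed earlier, which is exactly why demagnetization (and related techniques) must take over where laser cooling stops.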

CORRELATION TO COUNTER-INTUITIVE PHENOMENA

Superfluids
Superfluids are fluids with the ability to flow with zero viscosity. The first superfluid to be discovered was helium-4, and it was quickly understood that its superfluidity was due to partial Bose-Einstein condensation. Many BECs to date have been seen to exhibit superfluid-like properties, which can be explained theoretically by the fact that a BEC is effectively a 'super atom': all parts of it move in the same direction, without any internal friction.

Supersolids
Scientists have observed that BECs form high-density 'droplets' that repel each other. Under certain conditions, including a trap, these droplets arrange themselves into an ordered lattice. When the trap presses the droplets close together, the BEC behaves as a substance known as a supersolid: while maintaining the lattice structure, the droplets allow atoms to transfer between them, keeping the condensate in a collective state.

Bosenovas
The self-interaction of the wavefunction that defines a BEC can be controlled by changing the magnetic field the BEC sits in. Adjusting the interaction to 'repulsive' causes the condensate to expand at a constant rate, as theoretically expected. However, when the interaction is adjusted to 'attractive', rather than contracting until it becomes extremely small, the condensate first shrinks a little (as expected) and then undergoes an explosion. The remnants of this explosion include a smaller, colder condensate surrounded by the gas of the explosion. Since this resembles a supernova, scientists call it a 'Bosenova.' The weird part is that the explosion remains unexplained to date; the process behind it is completely unknown.

Coherence
A property of BECs, one made abundantly evident throughout this article, is that they exhibit coherence. This property proves extremely useful, as discussed in the next section.

Superconductors
Superconductivity occurs when a circuit has zero electrical resistance. A form of matter similar to a BEC, the Bardeen-Cooper-Schrieffer (BCS) state, is known to have its particles paired in a particular order that lets electrons flow easily, producing superconductivity. When the two regimes (BCS and BEC) are made to overlap, and a technique called photoemission spectroscopy is used to observe electron behaviour, BECs can be seen to exhibit superconductivity as well.

Slow/stationary light
Perhaps the most unusual property of BECs is that they can slow down light. For decades, people have taken $c$, the speed limit of the universe, to be the speed at which light eternally travels. However, in 1998 researchers at Harvard University slowed light from 300,000,000 m/s to 17 m/s, and many others have since taken this further, even completely stopping a pulse within a BEC. A simple way to understand the theory behind this is to consider what light is made of: photons. When photons interact with atoms, they form hybrid particles known as polaritons. These hybrid particles have a greater effective mass, so their propagation slows down. A surfeit of other phenomena is observed in Bose-Einstein condensates, such as swirling vortices and neutral particles that behave as if they carry a charge. Unfortunately, these are much more complex and demand equations and concepts that can't be summarised in this article.
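Returning to the slow-light result above, the scale of the slowdown is worth pausing on. The ratio of the vacuum speed of light to the measured speed in the condensate, the group index, comes out around twenty million, compared with roughly 1.5 for ordinary glass:

```python
# Group index n_g = c / v_g for light crossing the condensate
c = 2.9979e8   # vacuum speed of light, m/s
v_g = 17.0     # group velocity measured in the BEC slow-light experiment, m/s
print(f"n_g = {c / v_g:.2e}")  # ~1.8e7, versus ~1.5 for window glass
```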

APPLICATIONS
Most of the work done in this field is a form of research known as 'basic' research, meaning that it aids us in understanding other concepts and
facilitates other fields, rather than applying to one specific process or piece of equipment. That being said, there are multifarious examples of the latter.

Superfluids
Even now, the best lubricants in the world suffer frictional losses, as their molecules still interact with each other to some extent. Superfluidity overcomes this problem and enables BECs to reduce friction by values approaching 100%. Furthermore, superfluids are known to be closely interrelated with superconductors, so BECs could be used to make various superconductors and even superconducting magnets. Like all other superfluids, they can also be used in gyroscopy, as quantum solvents, and in a variety of other settings.

Atomic lasers
When a BEC is crafted into a beam, it acts similarly to a laser: its coherence ensures that every part of the beam is identical and behaves the same way. Atomic lasers (which emit atoms from a BEC instead of photons) nonetheless offer advantages over normal lasers, being much more precise and relatively higher in energy. They are expected to revolutionize atomic physics and have a positive impact on fields such as atom optics, interferometry, lithography, holography, and the measurement of fundamental units through the enhancement of devices such as atomic clocks.

Cognizance
One of the biggest problems of quantum mechanics, and particle physics in general, is that many of its principles are counter-intuitive and contradict Newtonian physics, which makes the concepts difficult to visualise and understand. BECs, being super atoms, provide a substance that acts like an atom but has a volume observable by the naked eye. In fact, researchers at MIT have produced clearly visible interference patterns using sodium BECs, inherently demonstrating a micro-effect on a macro scale.

Slow/stationary light
The fact that light has been slowed down, even to the point of standing still, is quite remarkable. It raises the possibility of storing light, which opens up multitudinous applications related to telecoms, optical data storage and enhanced quantum computing. Organised traffic flow in networks, single-photon switches and the spatial compression of optical energy via photonic crystals are some of the more real-world-applicable solutions likely to be developed in the near future.

Entangled atoms
Researchers at the Georgia Institute of Technology have observed a 'sharp magnetically-induced quantum phase' in a sodium BEC. They expect this to lead toward the observation of entangled atomic pairs, which are predicted to have applications in computing, sensing and other technologies. They believe they are extremely close to observing entanglement, with a predefined window of time in which this should be possible. Once discovered, entangled atoms could be used to increase the sensitivity of sensors in detecting physical stimuli, and to enhance the speed at which quantum computers can perform a number of identified calculations.

SPECULATIONS
There are also multiple mathematical theories related to dark matter and string theory that are fundamentally based on BECs. For example, a dark-matter Bose-Einstein condensate formed from a 'cloud' of dark bosons is said to form something known as a 'Bose star' under the effect of gravity. Scientists in Russia have conducted research at a smaller scale and have speculated that predicting the number of Bose stars in the universe, and determining their mass in terms of 'light dark matter', is a key step toward answering one of the biggest questions of our time: what is dark matter? Of course, this last part is speculative, and while it is backed to some extent by experiment, quantum physics has proven itself too unpredictable to support these theories confidently without concrete evidence.

REFERENCES
https://newatlas.com/physics/bose-einstein-condensate-superconductor/
https://www.researchgate.net/publication/303524583_UltraCold_atoms_and_Bose-Einstein_condensates_in_Optical_Dipole_Traps
https://boulderschool.yale.edu/sites/default/files/files/Vortices.pdf
https://hal.archives-ouvertes.fr/hal-01166054/document
https://web.stanford.edu/~rpam/dropoff/Phys041N/lecture6-lasercooling.pdf
https://www.nist.gov/news-events/news/2001/03/implosion-and-explosionbose-einstein-condensate-bosenova
https://www.physicscentral.com/explore/action/light.cfm

THE THEORY OF EVERYTHING: QUANTUM GRAVITY
Author: Hazal Kara

INTRODUCTION
There are four fundamental forces in our universe: the weak force, the strong force, electromagnetism, and gravity. The latter is different from the others in the sense that it is described as a consequence of the curvature of spacetime and the presence of mass within it. The Standard Model of particle physics (a classification system of all known elementary particles in our universe) unites three fundamental forces but is missing gravity (as described by general relativity). In order to fit gravity into the Standard Model, certain physicists are working on what is described as the theory of everything. The two major candidates for this are string theory and loop quantum gravity, which will be discussed in this article. The theory of everything would provide a comprehensive description of all possible interactions that can take place in our universe and significantly advance our understanding of it.

STRING THEORY
String theory is one candidate for the 'theory of everything'. In it, particles are modeled as one-dimensional 'strings' rather than zero-dimensional point-like entities, and the particular vibrations of these strings are what give rise to particle properties such as mass and charge. One of the main challenges in string theory is that it requires at least a total of 10 spacetime dimensions in order for the math to hold up.
No experimental results have yet shown signs of the additional six space dimensions. Consequently, string theorists propose that the three space dimensions we can observe are large, whereas the other six are 'curled up' and much harder to detect. These dimensions have been an important area of study for string theorists, because understanding their geometry can lead to more information on how these string-like particles may vibrate (see the figure below).

A Calabi-Yau manifold; these manifolds are used for compactifying dimensions

Superstring theory is a version of string theory that encompasses both fermions and bosons (unlike bosonic string theory) while also incorporating 'supersymmetry,' which basically states that every particle in the Standard Model has a partner particle (see the figure below). In 10 spacetime dimensions, superstring theory separates into five subtheories: type I, type IIA, type IIB, SO(32) heterotic, and E8 × E8 heterotic. Fortunately, there is M-theory, which unifies these five; it requires seven additional spatial dimensions rather than six. How can string theory be tested? Evidence of supersymmetric particles from particle accelerators would definitely provide a strong case for string theory. In fact, many physicists had hoped that the Large Hadron Collider at CERN would show the existence of supersymmetry, but so far the searches have come up empty. Aside from that, if a graviton (a hypothetical force-carrying elementary particle for gravity) were to be detected in a particle accelerator, this would also help the case for string theory.


Standard Model particles and supersymmetric particles side by side

LOOP QUANTUM GRAVITY
Another candidate theory is called loop quantum gravity. It incorporates the concepts of space and time from general relativity into quantum mechanics. In loop quantum gravity, both space and time are discrete and granular: essentially, there is a scale at which space can't be divided any further. In contrast to general relativity, quantum field theory suggests that spacetime is a fixed background. Since loop quantum gravity is a theory that adopts general relativity, it assumes no fixed background (a property known as background independence). Where does the 'loop' come in? Loop quantum gravity borrows an idea from Faraday's 'lines of force.' These lines can be thought of as quantum excitations of their respective field, and when there are no charges, the lines close in on themselves and form loops. In loop quantum gravity, the quantum gravitational field is described in terms of these loops; space itself is made of them. It is important to note that loop quantum gravity is not a theory that unifies all the forces; rather, it explains the gravitational field in a way that fits quantum mechanics. Loop quantum gravity faces the same obstacle as string theory in that no experimental results support it yet. One proposed method uses evaporating black holes.
The radiation these black holes emit could carry quantum-gravity signatures, separate from evidence of Hawking radiation. However, for such an experiment to take place, physicists would first need to actually detect evaporating black holes, which will be a difficult feat.
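The granularity loop quantum gravity has in mind is usually taken to be of the order of the Planck length, the natural length scale built from the constants governing gravity ($G$), quantum mechanics ($\hbar$) and relativity ($c$). The formula is standard; its identification with the 'smallest division' of space is the theory's working assumption. A quick sketch of the number:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant, J*s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.9979e8       # speed of light, m/s

# Planck length: l_P = sqrt(hbar * G / c^3)
l_P = math.sqrt(hbar * G / c**3)
print(f"l_P = {l_P:.2e} m")  # ~1.6e-35 m, about 10^20 times smaller than a proton
```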

CONCLUSION
While much progress has been made in the search for a 'theory of everything', there is still a whole lot more to research and discover. String theory and loop quantum gravity, the two theories described in this article, are the main candidates for merging quantum mechanics with general relativity. However, both are difficult to test, and as of now there is no experimental evidence for either. Perhaps one day there will be concrete evidence for one of them (or for another, less studied theory) that will revolutionize the field of physics.

REFERENCES
https://www.britannica.com/science/Standard-Model
https://www.britannica.com/science/string-theory
https://home.cern/science/physics/supersymmetry
http://cgpg.gravity.psu.edu/people/Ashtekar/articles/rovelli03.pdf
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5567241/
https://arxiv.org/abs/1109.4239

Did you enjoy reading our articles? Bet you did! Visit our website for more.

https://journalofyoungphysicists.org
