

Dear Reader,
We are delighted to release the Computer Science Society’s 2024/2025 edition of ENIGMA. The articles featured here show the absolute best of what HABS’ STEM students are capable of, even from the youngest of years. With articles ranging from the 1,000 km/hr Hyperloop to Brain Computer Interfaces, take your pick and dive into some incredibly detailed and impressive pieces.
ENIGMA would not have been possible without the hard work of the HABS Marketing Team, and of course, everyone who took the time to write the incredible pieces this magazine features. We hope you enjoy!
Yours algorithmically,
Aaron Varma, Yunfei Fan, Pasha Soneji, Kayan Shah and Samira Glynn
Chairs, Computer Science Society
BRAIN COMPUTER INTERFACES (BCIs) ARE SYSTEMS THAT ALLOW DIRECT COMMUNICATION BETWEEN THE BRAIN AND AN EXTERNAL DEVICE, LIKE A COMPUTER OR ROBOTIC LIMB, BYPASSING THE BODY'S USUAL OUTPUT PATHWAYS (MUSCLES, NERVES). BUT COULD THEY BE OF ANY USE TO MANKIND, OR DO THEY POSE TOO GREAT A THREAT TO OUR ABILITY TO THINK FREELY?
BCIs HAVE ALREADY SHOWN POTENTIAL IN MEDICINE, SUCH AS IN CONTROLLING PROSTHETICS. SOME EVEN SPECULATE THAT THEY COULD BE USED FOR REHABILITATION PURPOSES; HOWEVER, LIMITED TRIALS HAVE HINDERED THEIR PROGRESS TOWARDS CLINICAL USE.
FURTHERMORE, WITH A PRICE RANGE OF OVER 1,000 TO 5,000 USD AND CONCERNS OVER RELIABILITY, THERE IS NOT YET ADEQUATE INCENTIVE FOR COMMERCIAL USES SUCH AS GAMING OR SENSOR TECHNOLOGY.
"My brain is in the cylinder, and I see, hear, and speak through these electronic vibrators.”
What's more fun than having something constantly beat you? This article will guide you through coding your own bot to play any game of your choice, with a near 100% (if you do it right) success rate.
The code that I share in this article is geared towards games where each player places a piece of some sort on a board, one at a time. If you want to test it out, I would recommend starting with a straightforward game like Tic-Tac-Toe and then doing something more complicated, but equally simple, like Connect 4. Despite that, you could adapt the same principles to work for more complicated games, even as far as Chess.
To visualise how the Minimax Algorithm works, imagine that you are playing Tic-Tac-Toe. You take the current game-state and place it at the top of a tree. Then, from it, you draw arrows, with each possible move that follows placed underneath (based on whose turn it is). You continue building the tree until Player 1, who we will call the Maximising Player, has won; Player 2, who we will call the Minimising Player, has won; or the game has been drawn. Then, next to each finished game, write +10 if the Maximising Player won, -10 if the Minimising Player won, or 0 if it is a draw.
Hopefully, you can see that the Maximising Player wants the score to be as high as possible, as a score of +10 means they have won. Likewise, the Minimising Player wants the score to be as low as possible. So, start to trace back up the tree. If a game-state has two options below it, one with a score of -10 and the other with a score of +10, and it is the Maximising Player's turn, they will want to choose the move with the highest score, and so you can write +10 next to the game-state above. Continue doing this, paying attention to whether it is the Maximising or the Minimising Player's turn, working upwards until you get to the last layer before the current game-state. If it is the Minimising Player's turn, for example, look at which score is the lowest and then play that move.
Now, there is another aspect to the Minimax Algorithm called the Heuristic Function (as if it couldn't get simpler!). Think of this as being like the evaluation bar in online Chess games: it takes in the board, inspects it, and then returns a value, such that a positive value means that the Maximising Player's position is better and a negative value means that the Minimising Player's position is better. The magnitude of the value (how positive or negative it is) shows how much a player is winning by. For example, in two different game-states, if the Heuristic Function returns -6 and -3, in both cases the Minimising Player is in a better position, but for the game where the value is -6, they are winning by a greater margin.
So how does this help us?
This function essentially saves time. You wouldn't need a Heuristic Function for a game as simple as Tic-Tac-Toe, as you can easily traverse every single combination of moves from the very start to the very end, but for a more complicated game you can just tell your code to stop after, let's say, 5 moves into the future and then use the Heuristic Function to assign that position a score.
This is the one thing that really affects the ability of your AI. The more accurate the Heuristic Function is, the more accurately the computer can judge the game, and the more accurate the moves it makes. No pressure.
Again, not much explanation is required: this just returns the moves a player in this situation can possibly make. This requires you to be familiar with your game's rules. For example, in Tic-Tac-Toe you can place a piece anywhere that there is an empty space; however, in Connect 4 you can only place a piece in a fashion that follows gravity, so it falls into the lowest empty space of the chosen column.
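The article's original listing is not reproduced here, but a minimal Python sketch of such a function for Connect 4 might look like this (the function and board names are illustrative; the board is assumed to be a 2D list of 6 rows by 7 columns with 0 meaning an empty cell, as set up in a later sketch):
def get_valid_moves(board):
    # In Connect 4 a move is simply a column choice: a column is playable
    # if its top cell is still empty (the new piece then falls to the lowest gap).
    return [col for col in range(len(board[0])) if board[0][col] == 0]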
There are some essential functions that you will need, and I am afraid, for this bit, you might have to think for yourself.
Let's start coding, shall we?
One of the first and most obvious things is to establish a board that you can play on. In my code, I created the 7x6 board that is used in Connect 4 as a 2D array. Of course, depending on what game you play, you can change the dimensions and nature of the board.
You must also create a function that can print the board in whatever format you want, so that your user can actually see what is happening. You may want it to print row-by-row, for each row in the 2D array.
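The original code listing is not included in this extract, but a minimal Python sketch of these two pieces (with illustrative names, matching the helper used above) might be:
# A Connect 4 board: 6 rows by 7 columns; 0 = empty, 1 = Maximising Player, -1 = Minimising Player.
ROWS, COLS = 6, 7
board = [[0 for _ in range(COLS)] for _ in range(ROWS)]

def print_board(board):
    # Print the board row by row so the user can see the current position.
    symbols = {0: ".", 1: "X", -1: "O"}
    for row in board:
        print(" ".join(symbols[cell] for cell in row))
    print()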
Now is the time to make the Heuristic Function. Beforehand, decide which factors you want the Heuristic Function to take into account. For example, when I made an Othello (Reversi) AI player (great game, you should check it out), the factors I chose were the number of pieces each player had, the mobility of each player (the number of moves they could make), and the number of edge pieces and corners a player had captured (which is important in that game).
Here I show you how to code one of the factors (number of pieces); for the rest, you would just repeat the whole block and adjust it depending on what the factor is.
First, we assign the Maximising Player's and the Minimising Player's value for the factor to 0. Then, depending on what the factor is, we adjust those values; in this case, we just add one for every piece they have. Then, if both of those values are not 0 (to avoid a Zero Division Error), we apply the following formula. The weight essentially signifies how important the value is and tends to be in the range of 0–2, but you can adjust that value as much as you want (here I chose 12 after testing a range of values and observing that 12 worked the best). Clearly, if the factor is more important, you want the weight to be bigger, as it then has a larger effect on the heuristic value.
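The formula itself did not survive in this extract; a common normalised form that fits the zero-division check just described is weight x (max value - min value) / (max value + min value). As a hedged Python sketch of the piece-count factor (names illustrative, and the weight left as a tunable parameter):
def piece_count_score(board, weight=1.0):
    # Count the pieces belonging to each player (1 = Maximising, -1 = Minimising).
    max_pieces = sum(row.count(1) for row in board)
    min_pieces = sum(row.count(-1) for row in board)
    # Guard against a Zero Division Error when the board is empty.
    if max_pieces + min_pieces == 0:
        return 0
    # Weighted, normalised difference: positive favours the Maximising Player.
    return weight * (max_pieces - min_pieces) / (max_pieces + min_pieces)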
Now we get into the fun(ish) bit: how to actually code the Minimax Algorithm.
In order to do this, it requires two functions and some nifty aspects of dynamic programming, but luckily for you, it is not game-specific, and you can essentially use the same code for whatever game you are making.
And take a breath
This may seem complicated, but it is essentially doing the exact same thing you did earlier with the Tic-Tac-Toe tree, so I would recommend looking at the code and trying to visualise this process. There are a few things that I should point out, however:
The reason why I set the bestScore for the Maximising Player to be -1,000,000 to begin with (and for the Minimising Player +1,000,000) is that this is an absurdly large number, such that I am confident my Heuristic Function cannot produce a value at these extremes.
Furthermore, if a player had won, for example the Maximising Player, I would return a value of 1,000,000 - depth, as this is a number large enough that the computer will naturally try to go for this route, and I subtracted the depth so that if there are two situations where the player can win, one with a depth of 3 (after 3 moves) and another with a depth of 4 (after 4 moves), it would prioritise the situation in which they win faster.
Notice that more nodes are added to the tree only when the depth is less than five (five moves into the future); otherwise it just resorts to our Heuristic Function. You can adjust this value however you want: as you increase it, the moves become more accurate, but it takes exponentially more time to travel through all the options.
N.B. As previously mentioned, for Tic-Tac-Toe you do not need to create a Heuristic Function and you can get rid of the IF statement, as the computer can easily travel through every option, and you will have an AI that is literally unbeatable.
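The article's own listing is not reproduced here, but a minimal Python sketch of the two functions, following the structure described above (starting scores of plus or minus 1,000,000, a win scored as 1,000,000 - depth, and a cut-off five moves into the future), might look like the code below. The helper check_winner is assumed to return 1, -1 or None, the heuristic stands in for a function combining factor scores like the one sketched earlier, and the other helpers come from the previous sketches; none of this is the author's exact code.
def minimax(board, depth, is_maximising):
    winner = check_winner(board)            # assumed helper: returns 1, -1 or None
    if winner == 1:
        return 1_000_000 - depth            # prefer faster wins
    if winner == -1:
        return -1_000_000 + depth           # prefer slower losses
    if not get_valid_moves(board):
        return 0                            # no moves left: a draw
    if depth >= 5:
        return heuristic(board)             # assumed: combines factor scores like piece_count_score

    if is_maximising:
        best_score = -1_000_000
        for move in get_valid_moves(board):
            make_move(board, move, 1)
            best_score = max(best_score, minimax(board, depth + 1, False))
            undo_move(board, move)
        return best_score
    else:
        best_score = 1_000_000
        for move in get_valid_moves(board):
            make_move(board, move, -1)
            best_score = min(best_score, minimax(board, depth + 1, True))
            undo_move(board, move)
        return best_score

def make_move(board, col, player):
    # Drop a piece into the lowest empty row of the chosen column.
    for row in range(len(board) - 1, -1, -1):
        if board[row][col] == 0:
            board[row][col] = player
            return

def undo_move(board, col):
    # Remove the topmost piece from the chosen column.
    for row in range(len(board)):
        if board[row][col] != 0:
            board[row][col] = 0
            return

def best_move(board):
    # The computer plays as the Maximising Player: pick the move with the highest score.
    best_score, chosen = -1_000_000, None
    for move in get_valid_moves(board):
        make_move(board, move, 1)
        score = minimax(board, 1, False)
        undo_move(board, move)
        if score > best_score:
            best_score, chosen = score, move
    return chosen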
Now, the only thing that we have to do is get our game started and make sure everything that we have created ticks over in the right order.
This code makes the human play first, then the computer, and then the human, and so on, and so forth. If you want, you can add a coin flip or rock-paper-scissors algorithm to decide who plays first, and/or wrap the whole thing in a while loop so that every time a game has ended the game starts again (making sure to reset the board), that is, until you throw your laptop at the wall after losing for the billionth time to your unbeatable AI player.
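As a rough sketch of that loop (again building on the earlier illustrative helpers, with the human as the Minimising Player and check_winner still assumed):
def play():
    board = [[0 for _ in range(COLS)] for _ in range(ROWS)]
    while True:
        print_board(board)
        # Human move (Minimising Player, -1).
        col = int(input("Choose a column (0-6): "))
        while col not in get_valid_moves(board):
            col = int(input("That column is full or invalid, try again: "))
        make_move(board, col, -1)
        if check_winner(board) == -1:
            print_board(board)
            print("You win!")
            return
        if not get_valid_moves(board):
            print_board(board)
            print("It's a draw.")
            return
        # Computer move (Maximising Player, +1).
        make_move(board, best_move(board), 1)
        if check_winner(board) == 1:
            print_board(board)
            print("The computer wins. Again.")
            return
        if not get_valid_moves(board):
            print_board(board)
            print("It's a draw.")
            return

play()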
As the field of computer science and technology grows exponentially, we humans need to adapt and learn with it. Whilst the technological singularity remains a slightly dystopian theory, computers' intelligence and capabilities are evolving rapidly, meaning that the way in which humans interact with and use these systems must in turn change, while maintaining control over them. To facilitate this, fields such as HCI and UX play a vital role in helping us understand these complex technologies, as well as integrate them more effectively into our daily lives, so we can leverage their capabilities and allow everyone in society to benefit from them.
The field of Human-Computer Interaction (HCI) focuses on how humans (the users) interact and engage with computers through interfaces, hardware, and engineering. It is a multidisciplinary study involving subjects like computer science, cognitive science, ergonomics, and design engineering. The four key elements of HCI include:
The user
A goal-oriented task
The interface
The context
HCI must account for the user's behaviours, needs, abilities and so on, through conducting usability tests (discussed later). The overall task must be clearly defined, and maps or structure plans created to guide the development process. The most crucial element is the interface, where the interaction type and physical characteristics of the interface are defined, although these are subject to change based on the user's requirements. Lastly, the context of the system and user must be considered to provide a more holistic experience: for example, its performance without a network connection or its visual appearance in different lighting. HCI principles are the foundation of various digital fields such as User Experience Design (UX) and Interaction Design (IxD), which focus on the overall human experience with a product or, more specifically, examine the interface whereupon the interaction between human and computer takes place.
At the crux of HCI, usability and user efficiency are the primary goals when it comes to designing and operating computer systems. In order for a product to succeed, it must include highly engaging features as well as being usable and easy to operate for the diverse range of possible users. According to the Nielsen Norman Group, a leading UX organisation, usability depends on how well a product's features accommodate users' needs and contexts. Some heuristics for usability from Jakob Nielsen include error prevention, visibility of system status and minimalist design. The aim of optimal usability is to guide users through the least labour-intensive route of the system in order to reduce their cognitive load.
One key example of reducing cognitive load is limiting the options or decisions necessary to complete a task. Having too many choices can cause a cluttered display on the interface as well as overwhelm the user, making their response time slower and decreasing the overall usability. This is demonstrated in the Hick-Hyman Law, which states that the time it takes to make a decision (reaction time, RT) increases with the number n and complexity of choices:
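In its standard formulation, the law is usually written as:
RT = a + b · log₂(n + 1)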
Here a is the time not involved with decision making and b is the cognitive processing time per option. This shows that the more stimuli there are to choose from, the longer it takes the user to decide which to interact with, hence limiting the usability of the system.
Another way of improving the usability of HCI is the GOMS model (Goals, Operators, Methods and Selection rules), proposed by Card et al. (1983). It is a cognitive model providing a general framework for representing how a user performs tasks by breaking each task down into its components. GOMS aims to improve the efficiency of HCI by identifying and eliminating unnecessary user actions, similar to reducing cognitive load.
Furthermore, GOMS adopts a hierarchical structure when executing tasks, whereby operators can establish sub-goals which in turn get accomplished by sub-methods.
Unsurprisingly, artificial intelligence has had a growing impact on the fields of HCI and UX. As AI continues to revolutionise society and transform a plethora of sectors and industries, HCI ensures AI systems are user-friendly, intuitive and responsive to the user's needs, therefore allowing a wider range of customers and businesses to utilise them. Examples of AI where HCI has been incorporated are voice assistants and gesture recognition. Voice assistants like Siri and Alexa encompass conversational AI, using natural language processing (NLP) and deep learning techniques, all of which are facilitated by the voice interaction type, allowing for hands-free communication and human-like conversations and making their usability highly effective.
Human-in-the-loop (HITL) is a collaborative approach integrating human input and expertise into the lifecycle of AI systems and machine learning. HCI is heavily relied upon in this approach, as humans have a direct interaction with and influence on complex AI applications, and therefore require highly efficient and understandable interfaces. Essentially, humans participate in the training, evaluation and operation of the AI model, providing feedback and guidance on improvements. Unlike the 'HABA-MABA trap' (Humans Are Better At vs Machines Are Better At), HITL aims to enhance the accuracy and reliability of AI by embedding human and machine capabilities together in one system.
Another model similar to GOMS is the SOAR model, proposed by Laird et al. (1987). It is a general cognitive architecture of human intelligence and is used for creating AI systems with intelligent behaviour. Whilst SOAR hasn't been applied extensively in HCI research, it has the potential to answer questions not addressed or ignored by the GOMS model, suggesting that introducing this model could be a pivotal step in the future of HCI.
The future of HCI may be shaped by many advancements in technology as it continues to grow and strengthen its importance in society. We are already seeing innovations in wearable technology, such as smartwatches and fitness trackers, where HCI is a fundamental element of usability, moving beyond traditional touch-based interfaces through motion sensing, voice commands and gesture recognition. Another area where HCI is likely to thrive is Brain-Computer Interfaces (BCIs).
BCIs acquire and analyse brain signals, then translate them into commands that are relayed to output devices that carry out a desired action. They are a type of neural interface, providing direct communication between the brain and external devices. They are a shift from traditional HCI input systems (touch, keyboard, mouse, etc.): instead, they gather input from neural signals inside humans, enabling an emergent interaction type as well as a deeper connection between humans and computers.
Nevertheless, even in today's society, the symbiotic relationship between humans and computers is burgeoning. From smartphones to household machines, computers are becoming increasingly integrated into our lives, and along with that, the way we interact with them is becoming more important. Human-Computer Interaction aims to bridge the gap between the complexity of these technologies and how we understand and maximise their capabilities. Whether this is through enhancing usability or employing AI systems, HCI will no doubt have a prominent influence on the future of technology and innovation.
Arafat, Md Shahriare Hossain. “AI and Human-Computer Interaction.” Medium, 21 Feb. 2024, medium.com/@Shahriare/ai-and-human-computer-interaction-481e39f7d032.
Skarlatidou, Artemis, and Carol Iglesias Otero. “Design Approaches and Human–Computer Interaction Methods to Support User Involvement in Citizen Science.” UCL Press EBooks, 4 Feb. 2021, pp. 55–86, https://doi.org/10.2307/j.ctv15d8174.11. Accessed 23 Nov. 2023.
Gerlach, James H., and Feng-Yang Kuo. “Understanding Human-Computer Interaction for Information Systems Design.” MIS Quarterly, vol. 15, no. 4, Dec. 1991, p. 527, https://doi.org/10.2307/249456. Accessed 12 Apr. 2020.
Google. “What Is Human in the Loop?” Google Cloud, 2025, cloud.google.com/discover/human-in-the-loop.
MacKenzie, I. Scott. Human-Computer Interaction: An Empirical Research Perspective. Amsterdam etc., Morgan Kaufmann, 2013.
Interaction Design Foundation. “What Is Usability?” The Interaction Design Foundation, 1 June 2016, www.interaction-design.org/literature/topics/usability.
Kanade, Vijay. “What Is HCI (Human-Computer Interaction)? Meaning, Importance, Examples, and Goals.” Spiceworks, 22 July 2022, www.spiceworks.com/tech/artificial-intelligence/articles/what-is-hci/.
Kieras, David E. “An Overview of Human-Computer Interaction.” Journal of the Washington Academy of Sciences, vol. 80, no. 2, 1990, pp. 39–70. JSTOR, www.jstor.org/stable/24531047, https://doi.org/10.2307/24531047.
Nielsen, Jakob. “10 Heuristics for User Interface Design.” Nielsen Norman Group, 24 Apr. 1994, www.nngroup.com/articles/ten-usability-heuristics/.
Prasad, Manjusha, et al. HCI in Mobile and Wearable Computing.
Shih, Jerry J., et al. “Brain-Computer Interfaces in Medicine.” Mayo Clinic Proceedings, vol. 87, no. 3, Mar. 2012, pp. 268–279, pmc.ncbi.nlm.nih.gov/articles/PMC3497935/, https://doi.org/10.1016/j.mayocp.2011.12.008.
“What Is Human-Computer Interaction?” Figma, www.figma.com/resource-library/human-computer-interaction/.
Yablonski, Jon. “Hick’s Law.” Laws of UX, 2022, lawsofux.com/hicks-law/.
Google's Vice President of Engineering and founder and manager of the Quantum Artificial Intelligence Lab.
Could there be a multiverse?
There is nothing cooler in the universe than quantum computers. Or should I say the multiverse? Yes, you heard me: the Spider-Verse series could be real.
A quantum chip is a hardware component which can process information exponentially faster than a normal computer for certain tasks, utilizing characteristics such as superposition and entanglement. It represents data using qubits, which can be 0, 1, or both simultaneously.
According to Google, the 105-qubit Willow chip solved a computational task in under 5 minutes that would take a normal computer over 10 septillion years. Quantum AI founder Hartmut Neven claims such a fast solution would be near impossible to occur within one human timeframe, and that such algorithms may proceed in alternate realities, using superposition, which allows a quantum system to exist in multiple states simultaneously: in one universe the qubit holds 0, whilst in another reality it holds 1. However, this is just a speculative claim, and nothing has yet been proven.
Imagine if you could choose between an experienced doctor who has memorized each procedure and would help you calculate each risk statistically in the blink of an eye, or a doctor with a few years of experience who would need three weeks to inform you of your diagnosis. How about a doctor who knows every language in the world and can help you in any way possible, or a doctor who requires a translator at hand to communicate? AI is a rapidly evolving area of technology and has shown much potential in revolutionizing aspects of medicine, yet despite AI improving efficiency and accuracy, there is a limit to how willing patients are to risk their heartbeats to hard drives.
The integration of AI in medicine offers substantial benefits, backed by evidence from various studies. For instance, a study published in Nature demonstrated that AI algorithms for diagnosing skin cancer outperformed dermatologists, achieving an accuracy rate of 95% compared to 86% for human experts. A report from the McKinsey Global Institute indicated that AI could potentially reduce healthcare costs by up to 20% through improved efficiency and optimized resource allocation. By streamlining administrative tasks and enhancing clinical decision support, AI not only improves diagnostic precision but also allows healthcare providers to focus more on patient care, ultimately leading to better health outcomes and patient satisfaction. These case studies highlight the significant impact of AI in assisting diagnostic accuracy, personalising treatment, and reducing costs in the healthcare sector.
The fear of the unknown is a significant barrier to the widespread adoption of AI in medicine, stemming from concerns about reliability, data privacy and human interaction. Many healthcare professionals worry that AI systems may produce inaccurate results or make decisions without human oversight, potentially jeopardizing patient safety. For example, a survey conducted by the American Medical Association found that 75% of physicians expressed concerns about the reliability of AI algorithms in diagnosing diseases, fearing that a misdiagnosis could have severe consequences. Additionally, issues related to data security and patient confidentiality heighten apprehensions; the misuse of sensitive health data by AI systems can lead to breaches of trust between patients and healthcare providers. A notable case is the backlash faced by companies like IBM Watson, which, despite its advanced capabilities, has struggled to gain traction in clinical settings due to skepticism about its recommendations.
Many healthcare professionals worry that reliance on AI could undermine their clinical judgement and decision-making authority, leading to a scenario where algorithms dictate patient care. Many also question how a humane balance can be struck between statistics and patients' emotions and wishes. For instance, a study published in JAMA Network Open highlighted that 60% of physicians felt that AI could erode their role in the patient care process, fostering anxiety about becoming obsolete. This loss of control is particularly concerning in high-stakes situations, such as diagnosing cancer or managing chronic diseases, where nuanced understanding and empathy are crucial.
Moreover, the shift towards AI-driven solutions can diminish the essential human interaction that forms the foundation of patient-centered care. Patients often value the personal connection with their healthcare providers, and the introduction of AI could lead to a more transactional experience. For example, telehealth platforms that utilize AI for initial assessments may reduce face-to-face consultations, which can leave patients feeling less supported and more isolated. Research indicates that a significant portion of patients prefer in-person interactions when discussing sensitive health issues, as the human touch fosters trust and emotional support. This growing concern about losing the relational aspect of healthcare can make both providers and patients hesitant to embrace AI technology, ultimately hindering its potential benefits in improving medical outcomes.
AI's potential to challenge ethical norms raises another layer of fear, one that stems from human intelligence itself. Questions about accountability (who is responsible when an AI system makes a mistake?) complicate the narrative surrounding AI implementation. The capacity for moral reasoning inherent in human intelligence leads to a heightened awareness of the ethical implications of deploying AI technologies.
Moreover, the fear of creating machines that could inadvertently perpetuate biases or make unethical decisions leads to calls for stringent regulations and oversight. While such measures are necessary for responsible AI development, they can also slow down the pace of innovation. The desire to ensure ethical standards can lead to bureaucratic inertia, preventing organizations from fully realising the potential of AI.
Zou, F.-W., et al. (2020) 'Concordance study between IBM Watson for Oncology and real clinical practice for cervical cancer patients in China: A retrospective analysis', Frontiers in Genetics. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC7105853/ (Accessed: 09 January 2025).
Singla, A., et al. (2024) 'The state of AI in early 2024: Gen AI adoption spikes and starts to generate value', McKinsey & Company. Available at: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai (Accessed: 09 January 2025).
Council on Scientific Affairs (no date) 'Polygraph', JAMA. Available at: https://pubmed.ncbi.nlm.nih.gov/3735653/ (Accessed: 09 January 2025).
Although AI is a strong tool to use for research and sometimes diagnosis, at this point it has not been developed enough to consider more holistic medicine. Furthermore, the concept of AI is complicated in terms of ethics and responsibility, so although AI could help with the more statistical and data-analysing aspects, the unsettling idea of patients waiting for a machine to inform them of their impending death still remains.
Virtual Reality (VR) and Augmented Reality (AR) have revolutionized the ways in which we interact with and perceive the digital universe, immersing humans in ways which blur the lines between the physical and virtual worlds. These technologies have rapidly evolved from futuristic visions to daily applications in life, profoundly shaping the modern lifestyle. The historical journey of VR and AR shows a timeline of innovation which has led to the expansion of the technology into fields such as gaming, healthcare, and education. In this essay we will explore the vast impact VR and AR have had on society, industries and individuals, making vivid the transformative powers of the technology.
Before delving into the complexities, it is worth addressing a common misconception that AR and VR are the same, though they are completely different: AR overlays digital content onto the real world, whereas VR immerses the user in an entirely virtual environment.
VR dates to the 1960s, when the first VR headset was created, and in the 1990s the 'Virtual Boy' was released by Nintendo and became the first VR technology publicly available. In 1970 the US Air Force used AR for training simulations, whilst the first AR technology publicly available was 'Google Glass', released in 2013.
VR and AR are applied in numerous ways and are already integrated into the daily lives of billions of people around the world. AR is commonly seen in apps like 'Pokémon Go' and in Instagram and Snapchat filters, whilst an example of VR in day-to-day life is video games. In the field of healthcare we are seeing VR used to train doctors and nurses with surgical simulations and to help patients recover from trauma with therapy, whilst AR helps with vein visualization and surgical navigation. In education, VR allows students to conduct virtual experiments and explore key historical events safely from the classroom or from home, which helped the educational system through the years of the Covid-19 pandemic. Perhaps the most significant impact of VR and AR on an industry is in gaming, where VR offers realistic and immersive experiences in which players can finally feel what it is like to be inside their favourite game. AR gives mobile gamers location-based gaming, which gives players a greater feeling of control, encouraging them to invest more time and effort into the game.
Though there are numerous good things about VR and AR, they may have negative impacts on society, as being in a virtual world different to our own for too long may create a disconnect between people and their relationships. Though nothing is proven yet, doctors believe VR and AR could trigger abnormal brain function and rewire the brain to think in potentially harmful ways. It has also been found that using VR for a prolonged period may cause fatigue and cybersickness.
Yet the social impacts of VR and AR are not all that bleak: they have the potential to change the lives of those with disabilities for the better. AR can overlay audio descriptions of the environment a person is in, helping people with visual impairments navigate and interact with their surroundings. AR can create interactive learning spaces for those with cognitive disabilities, making education more fun and effective. VR allows people with mobility challenges to experience virtual environments that may be impossible for them to access in the real world. VR and AR hold the power to close the accessibility gap, providing new ways for people to interact with the world around them; if designers keep people with disabilities in mind, they can create a much more equal world to live in.
When it comes to ethical considerations in technology, there are several important aspects to consider. Due to the nature of the information they collect, AR technologies raise significant privacy concerns, and users may be at risk of data theft or spoofing. Immersive technologies like VR also pose risks such as identity theft: without proper safeguarding, data about the user's body, behaviour and environment could be at significant risk. Another concern is whether AR and VR technologies can distort or manipulate the user's perception of reality.
In conclusion, the profound impacts of VR and AR reverberate through society, unveiling an interplay of benefits and challenges that shape our future in technology. We see this in areas like accessibility, healthcare, education and gaming. As the AR/VR industry charts a path of exponential growth and innovation, the impacts on society highlight the need for conscientious design, ethical frameworks, and user-centric approaches. By using the transformative powers of VR and AR while addressing the social implications through inclusive design methods, we can create a future where immersive technologies enrich human experiences, foster empathy, and bridge societal divides in a digitally augmented landscape.
In 2024 the AR/VR industry is expected to make US$40.4 billion, yet by 2029 it is predicted this number could reach US$69 billion, growing at an annual rate of 8.97%. This is not the only way in which the industry is growing: there was a surge in startups in the AR and VR market over 2023, reflecting the broader shift towards immersive technologies in recent years. As companies in the AR and VR industry develop lighter VR headsets, AI integration and smart glasses to improve their products, there are still challenges ahead, like technology limitations, privacy concerns and user adoption barriers; addressing these issues is crucial for growth, as fixing even one of them could lead to the expansion of AR and VR technologies to even rural areas around the world.
(No date) AR and VR in Education. Available at: https://vectiontechnologies.com/solutions/industries/education/ (Accessed: 31 May 2024).
Cross, R.J., et al. (2023) VR risks for kids and teens, U.S. PIRG Education Fund. Available at: https://pirg.org/edfund/resources/vr-risks-for-kids/ (Accessed: 03 June 2024).
Druzhinin, A. (2019) The history of AR and VR, Medium. Available at: https://arvrjourney.com/the-history-of-ar-and-vr-3faea3f1e94b (Accessed: 31 May 2024).
Dutertre, A. (2023) The ethical challenges of AR/VR, Medium. Available at: https://medium.com/@alex24dutertre/the-ethical-challenges-of-ar-vr-a5333594f909 (Accessed: 03 June 2024).
Mishra, S. and Agarwaal, P. (2023) Top 5 applications of Augmented Reality (AR) and Virtual Reality (VR), ET Edge Insights. Available at: https://etinsights.et-edge.com/top-5-applications-of-augmented-reality-ar-and-virtual-reality-vr/ (Accessed: 31 May 2024).
Recovery, O. (2023) Virtual reality: The impending revolution and risky consequences, Omega Recovery. Available at: https://omegarecovery.org/virtual-reality-the-impending-revolution-and-risky-consequences/ (Accessed: 03 June 2024).
Sokołowska, B. (2023) Impact of virtual reality cognitive and motor exercises on brain health, International Journal of Environmental Research and Public Health. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10002333/ (Accessed: 03 June 2024).
Statista (no date) AR & VR - worldwide: Statista market forecast, Statista. Available at: https://www.statista.com/outlook/amo/ar-vr/worldwide (Accessed: 03 June 2024).
Thampan, A. and Razak, A. (2023) 'Evolution of Augmented Reality (AR) and Virtual Reality (VR)', International Journal of Research Publication and Reviews, 04(02). doi:10.55248/gengpi.2023.
Quantum Computing and AI are two of the most impactful developments of our modern era, because they are able to complete certain tasks far more efficiently than the resources we have now. Therefore, as the two are put together, they will allow humans to explore new, uncharted territory in the world of business and human development as a whole. In this essay I will be exploring the significant influence which fields such as physics are having on the AI industry right now, and what the emergence of quantum computers means in terms of the boost it may give to AI development.
HOW DO PHYSICS CONCEPTS RELATE TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE?
Artificial intelligence is intelligence exhibited by computer systems. It is a field of computer science that develops and studies software and data that enable computers and machines to use learning and intelligence in order to succeed in given tasks. Much of modern AI is built from transformers, which are the neural networks of the AI model, allowing it to learn on its own and to use deep learning and machine learning to advance to new heights.
AI leverages branches of science and mathematics that enable it to make better predictions with the lowest error rate while utilising the least amount of computational power. In today's context, heavy reliance on AI means that error rates have to be extremely low in order to inspire confidence in the power of AI. Similarly, low computational power means a smaller environmental impact and lower cost, which is important to ensure better use of AI.
AI is a massive emerging tool for businesses because of the exciting potential which it has for the future. It is already appearing in many industries, and companies use it not only to automate their services and data sorting, but to make better business decisions. The scope and potential of AI in our future is undeniable; it is only a matter of how advanced it will get and how quickly we will adapt to its new challenges.
Physics concepts play a crucial role in the development of Artificial Intelligence. Physics provides rules and models that govern physical systems and interactions, which have been instrumental in explaining natural phenomena and engineering machines. However, traditional physics can only help us to a certain extent, and in the internet era tasks such as image and speech recognition cannot be advanced by the traditional studies of physics alone. This is where AI, particularly deep neural networks, has excelled by introducing a data-driven computational framework. The combination of physics and AI concepts can overcome the challenges faced by both fields, extending AI's development and advancing research in engineering and physical science.
Physics-inspired models, such as quantum neural nets and physics-inspired machine learning methods, have the potential to address challenges related to data quality and availability in AI, and these approaches can benefit AI and machine learning alongside deep learning methods. In particular, concepts like quantum physics (in the form of quantum computing), electrostatics (in the field of data sampling) and thermodynamics (in the field of prediction accuracy) are the most useful in AI advancement, offering models and computational rules that enhance machine learning. Quantum computers, based on quantum physics, introduce qubits and superposition, which allow AI systems to process data more efficiently than classical computers. Quantum computing is set to revolutionise AI by increasing speed and accuracy, enabling tasks like data analysis and image generation to be completed faster and more efficiently. However, challenges such as error rates in quantum systems remain. The combination of AI and quantum computing could lead to huge advancements in fields like finance, cybersecurity and scientific research.
Quantum physics is the branch of physics that deals with the behaviour of matter and energy at tiny scales, such as atoms and subatomic particles. It was developed in the early 20th century to explain phenomena that classical physics could not describe.
Quantum mechanics can describe many systems that classical physics cannot. This is because classical physics can describe many aspects of nature at an ordinary (macroscopic and microscopic) scale but is not sufficient for describing them at very small, sub-microscopic (atomic and subatomic) scales.
For many years the scientific fields of quantum physics and computer science were separate sections of the academic community. Modern quantum theory developed in the 1920s to explain the wave-particle duality observed at atomic scales, and digital computers emerged to replace human computers for tedious calculations. Both disciplines had practical applications during World War II: computers played a major role in wartime cryptography and code breaking, and quantum physics was essential for the nuclear physics used in the Manhattan Project. This led to more experimentation and research into how the two subjects could be merged to create new science, and eventually scientists such as Benioff and Feynman drafted the first academic papers on the subject of quantum computing.
Just as the bit is the most basic concept of classical computers, the qubit is the fundamental unit of quantum computers. The term qubit (literally 'quantum bit') refers to a mathematical model and to any physical system that is represented by that model. A classical bit exists in either of two physical states, 0 or 1. A qubit is also described by a state, and two states, often written |0⟩ and |1⟩, serve as the quantum counterparts of the binary 0 and 1. However, a qubit can exist as |0⟩, as |1⟩, or as a linear combination of the two states, meaning somewhere in between the two. This unique property of quantum physics is known as superposition and is the building block of all quantum computers.
Quantum computing can make certain tasks exponentially quicker because it needs far fewer qubits than classical computers need bits to perform the same task. This is due to the superposition explained above.
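As a toy illustration of superposition (a classical simulation in Python with NumPy, not real quantum hardware):
import numpy as np

# A qubit state a|0> + b|1> stored as two complex amplitudes with |a|^2 + |b|^2 = 1.
ket_0 = np.array([1, 0], dtype=complex)
ket_1 = np.array([0, 1], dtype=complex)

# An equal superposition of |0> and |1>.
psi = (ket_0 + ket_1) / np.sqrt(2)

# Squared magnitudes give the probability of measuring 0 or 1.
print(np.abs(psi) ** 2)   # [0.5 0.5]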
Thermodynamics is essentially the study of the random behaviour of particles, for example when particles diffuse through a liquid. This can be compared to how AI looks at pixels in a picture and makes a decision about what the pixels are representing. This technique can be seen in diffusion models like DALL-E by OpenAI, which work by adding noisy static to a picture and then learning to trace the random movement of the pixels back to the original picture. This technique is utilised by AI models to create imagery, allowing us to tell the model what we want the generated image to look like.
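As a very rough sketch of that idea in Python (forward noising only, on a random stand-in for an image; a real diffusion model trains a neural network to run these steps in reverse):
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))     # stand-in for a real picture
beta = 0.04                    # how much noise is mixed in at each step

# Forward diffusion: repeatedly blend in Gaussian noise until only static remains;
# the generative model's job is to learn to undo these steps, recovering a picture.
noisy = image.copy()
for step in range(100):
    noisy = np.sqrt(1 - beta) * noisy + np.sqrt(beta) * rng.normal(size=noisy.shape)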
Electrostatics is the study of electric charges. Charge densities are continuous objects that have different amounts of charge in different areas; a place with a high charge density would exert a greater force on electrons than areas with low charge density. Interestingly, the distribution of electrons in space looks very similar to a probability density curve, which is useful in the context of AI. Specifically, one such class of AI models, Poisson Flow Generative Models (PFGM), simulates probability distributions as charge densities. This is particularly useful in the field of sample data generation, as these models can generate new training data for AI models.
This is done by visualising each piece of data as an electron, because the data set is so vast and complex that it is not feasible to pick out a specific, realistic piece of data for training manually. As each data point is visualised as an electron, the points repel each other outwards, forming the data set into a hemisphere; this makes it easier to pick out a specific data point, because we can follow the trajectories that map the points on the hemisphere back down to the original data set, giving us new data to use for training AI models.
Quantum Computing can bring AI to a whole new level due to the speed of processing which can be accessed by AI models, allowing them to perform more tasks at higher speed. This would ease the limitations of current AI models, which are only able to perform relatively rudimentary tasks without knowledge of what they are doing. Additionally, quantum computers allow for better and quicker analysis of data, so AI doesn't waste as much time in processing.
The transformer, as mentioned before, is a neural network inspired by the human brain, allowing AI models to learn and progress at an unprecedented rate. Currently, transformers and AI models are limited by the processing speed of classical computers, meaning that they aren't able to progress as quickly and as far as they could. However, if transformers could be run on quantum computers, then AI advancement could be taken to new heights.
Quantum computers are going to revolutionise the way AI performs, due to the fact that qubits do not operate simply with 1s and 0s. The properties of superposition mean that quantum computers can use far fewer qubits than a classical computer uses bits in order to complete the same task at the same speed. Therefore, when a quantum computer with several thousand qubits runs an AI model, the potential of machine learning in that model will be substantially more than what classical computers currently allow for.
Quantum systems currently have a very high error rate, because qubits are extremely fragile and it is very difficult to guarantee that a qubit will be measured as the intended 1 or 0. As a result, quantum computers have not yet been implemented much in industry. But looking at the vast amounts of research and development going into the sector by companies such as IBM and Google and universities such as MIT and Princeton, it seems that these issues will be resolved in our lifetimes and quantum computers will begin to play a huge role in our lives.
There are a few different methods of resolving and correcting the errors that quantum computers make, which are being implemented or experimented with so that the quantum computer can eventually become a usable product. For example, you can use Quantum Error Correction methods, which use codes like the Shor code or the Surface Code that take into account the unique challenges posed by quantum properties such as superposition and entanglement. Additionally, Fault-Tolerant Quantum Computing structures quantum circuits so that they can continue to function even after some components fail; these systems also rely on Quantum Error Correction to detect and correct errors before they accumulate.
Researchers have also been exploring methods of improving qubit quality itself, in order to minimise qubit decoherence (when qubits lose their quantum properties). This can be achieved by isolating qubits from external noise or using higher-quality materials. However, it is most essential that greater control is held over qubit states and interactions, to make sure there are no drifts in qubit behaviour. Additionally, quantum gates, which are the operations that manipulate qubits, must be enhanced by improving control mechanisms and shortening the time taken to perform these gates, so that there is as little exposure to noise as possible.
Noise can also be mitigated by noise extrapolation, which measures the level of noise in the system and then applies corrections to mitigate its effect. Dynamical decoupling can help by applying a series of pulses to qubits which 'decouples' them from environmental noise. Quantum computers are already kept at extremely low temperatures to reduce thermal noise, but these systems need to be further stabilised by reducing physical vibrations and lapses in the cryogenic environment.
Increasing the number of physical qubits can create logical qubits, which have inbuilt error correction, improving accuracy. Whilst these logical qubits are less prone to errors, they do require a significant overhead in physical qubits, which means that custom architectures must be built to support this large number of qubits while optimising their layout and connectivity to remove error.
In my opinion, the most interesting form of error management in quantum computers is hybrid quantum-classical computing, which combines quantum computations with classical computers' post-processing to compensate for errors which happen during quantum computations. An example of this is the VQE (Variational Quantum Eigensolver) algorithm, where a quantum computer evaluates a function and a classical computer adjusts the parameters and improves the result despite the presence of noise.
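As a toy sketch of that hybrid loop (everything here is simulated classically with NumPy; in a real VQE the energy estimate would come from measurements on quantum hardware):
import numpy as np

rng = np.random.default_rng(1)

def measure_energy(theta, shots=500):
    # "Quantum" step (simulated): prepare cos(theta/2)|0> + sin(theta/2)|1>
    # and estimate <Z> from a finite number of noisy measurements.
    p0 = np.cos(theta / 2) ** 2                 # probability of measuring 0
    zeros = (rng.random(shots) < p0).mean()     # fraction of shots giving 0
    return 2 * zeros - 1                        # estimate of <Z> = p0 - p1

# Classical step: adjust theta to minimise the measured energy, despite the noise.
theta = 0.1
for step in range(200):
    gradient = (measure_energy(theta + 0.1) - measure_energy(theta - 0.1)) / 0.2
    theta -= 0.1 * gradient

print(round(theta, 2), round(measure_energy(theta), 2))   # theta tends towards pi, where <Z> = -1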
These are all methods which have been researched and are being put into practice currently, vastly improving the performance of quantum computers and bringing them one step closer to the quantum advantage: the point at which quantum computers can complete a task more efficiently than classical computers. Once that point is reached, AI models will be able to be run on quantum hardware and their potential fully explored.
Quantum computers have not been implemented very much yet, because the quantum advantage has not yet been met. However, quantum computers will get more useful as hardware improves and they become able to complete tasks more efficiently with lower error rates; some examples of their use cases will be going through data quickly and performing tasks more efficiently for companies. Quantum computing will, as mentioned previously, bring AI to a whole new level through the speed of processing it offers AI models and the quicker analysis of data, so AI doesn't waste as much time in processing. It can finally also be used for financial modelling, encryption and cyber security, drug research and batteries.
The chipmaker Nvidia has already started implementing AI and ML in scenarios such as weather forecasting, clean energy solutions, greenhouse gas emissions mapping and modelling car and industrial parts. This has immediate use in our world today and is a development which will be further accelerated by the advent of AI running on quantum computers.
To conclude, physics concepts support AI development by offering models and computational rules that enhance machine learning, by finding new training data more efficiently and developing methods of picture recognition. Quantum computing, based on quantum physics, introduces qubits and superposition, which allow AI systems to process data more efficiently than with classical computers. Quantum computing is set to revolutionise AI by increasing speed and accuracy, enabling tasks like data analysis and image generation to be completed faster and more efficiently. However, challenges such as error rates in quantum systems remain big issues to be solved. The potential combination of AI and quantum computing could unlock powerful advancements in fields like finance, cybersecurity and scientific research; and looking at the rapid progress being made in the fields of Quantum Computing, Artificial Intelligence and Machine Learning, supported by physics concepts, it seems that the combination of these huge developments will bring about a revolution in the way companies go about their business and also the way we live our lives.
Tom Garlinghouse, 'Researchers discover an abrupt change in quantum behaviour that defies current theories of superconductivity', Princeton University Department of Physics, January 19, 2024. https://www.princeton.edu/news/2024/01/19/researchers-discover-abrupt-change-quantum-behavior-defies-current-theories
'Quantum Entanglement', Wikipedia, 8 September 2024. https://en.wikipedia.org/wiki/Quantum_entanglement
Brian Clegg, 'What is Schrödinger's Cat?', BBC Science Focus Magazine. https://www.sciencefocus.com/science/what-is-schrodingers-cat
'Superposition', QuEra. https://www.quera.com/glossary/superposition
Matt Swayne, 'Top 18 Institutions Leading Quantum Computing Research in 2024', The Quantum Insider, May 16, 2022. https://thequantuminsider.com/2022/05/16/quantum-research/
'Surface Codes', QuEra. https://www.quera.com/glossary/surface-codes
Caltech Faculty, 'What is Quantum Physics?', Caltech Science Exchange. https://scienceexchange.caltech.edu/topics/quantum-science-explained/quantum-physics
'Quantum Computing', Wikipedia, 5 September 2024. https://en.wikipedia.org/wiki/Quantum_computing
Ryan O'Connor, 'How Physics Advanced Generative AI', AssemblyAI, April 19, 2023. https://www.assemblyai.com/blog/how-physics-advanced-generative-ai/
Beth Stackpole, 'Quantum Computing: What leaders need to know now', MIT Sloan School of Management, January 11, 2024. https://mitsloan.mit.edu/ideas-made-to-matter/quantum-computing-what-leaders-need-to-know-now
Jacob Roundy, 'Explore 7 future potential quantum computing uses', TechTarget, 10 February 2023. https://www.techtarget.com/searchdatacenter/tip/Explore-future-potential-quantum-computing-uses
Bhoomi Gadhia, Ram Cherukuri and Kristen Perez, 'Physics-Informed Machine Learning Platform NVIDIA Modulus Is Now Open Source', NVIDIA Developer, 23 March 2023. https://developer.nvidia.com/blog/physics-ml-platform-modulus-is-now-open-source/
'Artificial Intelligence', Wikipedia, 8 September 2024. https://en.wikipedia.org/wiki/Artificial_intelligence
COMPUTATIONAL MATHS >
There is no doubt that the world is mathematical. Mathematics is an integral part of the solution of many problems, and for this reason it is recognised as one of the most important educational courses in the new curriculum. Problem solving has always been a crucial aspect of teaching and learning mathematics: not only can students improve their thinking and ability with problem solving, but they can further apply procedures and deepen their conceptual understanding. Many occupations require adequate problem-solving skills, like architecture and engineering, because there is a need to design buildings that are not only pleasing and functional but also meet strict safety requirements. For example, in psychology research, many mathematical puzzles are used as a tool to investigate how humans develop their problem-solving ability. One such puzzle is called the Tower of Hanoi, which I will explore in detail in this paper.
The Tower of Hanoi problem, proposed by Lucas over 100 years ago, contains deep, foundational mathematical truths, which require knowledge of the basic, if not fundamental, properties of odd and even numbers. It is a mathematical puzzle that involves transferring a certain number of disks to a goal position in the fewest moves possible. Initially you start with 3 vertical pegs and are given a certain number of disks (typically 3) of mutually different diameters placed in order on the first peg.
The aim of the game is to get from an initial state to a target state by moving a single disk from the top of one peg to the top of another possible one, while obeying the following rules: Each time, only one disk is moved
Only the topmost disk can be moved
At any moment, a larger disk cannot reside on a smaller one
The original description from the leaflet that accompanied the puzzle, written by Lucas, reads:
D'après une vieille légende indienne, les brahmes se succèdent depuis bien longtemps, sur les marches de l'autel, dans le Temple de Bénarès, pour exécuter le déplacement de la Tour Sacrée de Brahma, aux soixante-quatre étages en or fin garnis de diamants de Golconde. Quand tout sera fini, la Tour et les brahmes tomberont, et ce sera la fin du monde !
That is, the Holy Tower of Brahma is to be found in a temple in the Indian city of Benares. It has 64 disks, each of pure gold and embossed with diamonds, on a cylindrical stele. The disks are to be transferred by the temple priests from the initial stele to one of two other steles, although it is not stated to which stele the disks will be moved, nor how often a disk is moved, but in the legend this is believed to be one move a day. When the task is completed, the priests will collapse and so will the tower, meaning the world will end. It is also stated that before the priests complete their task, the temple will crumble into dust and the world will vanish in a clap of thunder.
The problem now is to determine how worried we must be about when the world will end
Iteration is a programming technique that uses loops (for, while, etc.) to repeatedly execute a set of instructions until a condition is met. It is the repetition of a block of code using control variables, typically in the form of for, while or do-while loop constructs.
Recursion is a programming technique where a function calls itself within its own definition to solve a problem. A recursive function breaks down a problem into smaller, self-similar subproblems, then solves the subproblems in the same way as the original problem. Recursion is a technique that enables a function to save its local variables and parameters on an activation stack, which is then popped off when the function exits or the base case is reached. For any problem that can be solved via iteration there is a corresponding recursive solution, and vice versa. Similar algorithms are used in both approaches; however, recursive solutions can be more elegant and easier to understand for certain problems, like solving the Tower of Hanoi. In this case the solution is conceptually easier to implement via recursion and harder to implement via iteration: breaking the problem down into a smaller version of itself is the best approach, and a recursive algorithm naturally matches the puzzle's structure, since to move a stack of disks you must first solve smaller instances of the same problem. Even though problems like the Tower of Hanoi are naturally recursive problems with easy implementations using recursion, an iterative solution could be equally efficient, since there would be no overhead associated with the activation stack. Hence, the Tower of Hanoi puzzle is a great tool for understanding and implementing the two key types of problem solving used extensively in computer programming. It helps with honing problem-solving skills as well as conceptually appreciating both problem-solving methods.
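To make the recursive idea concrete, here is a short Python sketch (not from the original article) that solves the puzzle and counts the moves it makes:
def hanoi(n, source, target, spare, moves):
    # Move n disks from source to target, using spare as the intermediate peg.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks out of the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # put the n-1 smaller disks back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(moves)        # the sequence of moves
print(len(moves))   # 7 moves, exactly 2**3 - 1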
Mathematical induction is a proof technique used to establish the truth of a statement for all natural numbers. It involves proving a base case (the statement is true for the first number) and an inductive step (assuming the statement is true for a given number, proving it is true for the next number). Typically, this technique helps derive formulas for problems which require recursive algorithms to compute. The Tower of Hanoi puzzle, where patterns repeat and build on earlier steps, is a classic example where mathematical induction can be used to prove the minimum number of moves required to solve the puzzle for any number of disks.
If we were to use the minimum number of moves to move the disks so that all n disks end up on a specified different peg, we need a mathematical way to count those moves. To move n disks, we must first move n-1 disks out of the way, then move the largest disk, and then move the n-1 disks again, on top of the largest. Now, to move 1 disk takes exactly 1 move, and to move 2 disks we move the smaller one, then the bottom one, then the smaller one again: 3 moves at a minimum. To move 3 disks, we must move 2 disks (3 moves), then the bottom one (1 more move), and then 3 more moves to place the two disks onto the target peg while obeying the rules, making 7 moves in total. This is illustrated in the table below, and it is a beautiful example of recursion; using induction to prove the formula is a great application of the principle. Suppose we want to move n disks, and call the number of moves required to move n-1 disks M(n-1). The number of moves for n disks is then M(n-1) twice, plus 1, which we call M(n). The table below captures M(n) for 1 to 4 disks, which is 1, 3, 7, 15 respectively, suggesting an exponential series. Solving this series mathematically, we arrive at the hypothesis that M(n) = 2ⁿ - 1, which is proved by induction below.
Disks   Minimum moves
1       1
2       1 + 1 + 1 = 3
3       3 + 1 + 3 = 7
4       7 + 1 + 7 = 15
n-1     M(n-1)
n       M(n-1) + 1 + M(n-1) = M(n)
Analysis of Algorithm: M(n) = M(n-1) + 1 + M(n-1) = 2 × M(n-1) + 1
Claim: M(n) = 2ⁿ - 1
We now use proof by induction to prove this claim about our recursive algorithm.
Proof:
Claim: M(n) = 2ⁿ - 1
Thus our predicate P(n), the basis of our induction proof, is the claim that M(n) = 2ⁿ - 1:
P(n): M(n) = 2ⁿ - 1
Base case: n = 1: M(1) = 1 = 2¹ - 1 = 2ⁿ - 1
We counted manually that it takes 1 move to move 1 disk, which is what the formula gives. However, this alone does not prove the claim.
Induction step:
We must prove: P(n) → P(n+1)
That is, if M(n) = 2ⁿ - 1 then M(n+1) = 2ⁿ⁺¹ - 1.
Assume: P(n) is true → M(n) = 2ⁿ - 1
M(n+1) = ?
When we analysed the problem we found that to move n+1 disks we must first move the n smaller disks out of the way, move the largest disk (1 move), and then move the n smaller disks back on top; that is 2 × M(n) moves plus 1, so this becomes:
2 × M(n) + 1
From the induction hypothesis we know what M(n) is 2ⁿ - 1 Then we substitute it into the formula and do a bit of algebra
= 2 × (2ⁿ - 1) + 1
= 2 × 2ⁿ - 2 + 1
= 2ⁿ⁺¹ - 1
Thus we have finished our proof: we have successfully shown that if P(n) is true then so is P(n+1). Combined with the base case, this proves that for all n ≥ 1, M(n) = 2ⁿ - 1.
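The induction argument can also be sanity-checked numerically; the short Python loop below builds M(n) from the recurrence and compares it with the closed form (a quick check, not a substitute for the proof):

```python
# Build M(n) from the recurrence M(n) = 2*M(n-1) + 1 and compare with 2**n - 1.
M = 1                        # M(1) = 1, counted by hand
for n in range(1, 21):
    assert M == 2**n - 1, (n, M)
    M = 2 * M + 1            # recurrence step: M(n+1) = 2*M(n) + 1
print("M(n) = 2**n - 1 holds for n = 1 to 20")
```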
The mathematical mystery is solved by inductive proof, and yet there is a mystical element to this. If one is still wondering, or anxious, about when the world will end once the priests complete their task of moving all 64 disks, here is the calculation.
A rough estimate of this number is as follows. Using the formula 2ⁿ - 1 with n = 64 gives 2⁶⁴ - 1 = 18,446,744,073,709,551,615 moves, and, since we are still assuming one move per day, the same number of days. To find the number of years we simply divide by 365, which gives roughly 5 × 10¹⁶ years; expressed as a decimal this number is 17 digits long. Given that the Earth is around 4.5 billion years old, it would take over 10,000,000 times that long for the priests to complete their task. Hence the prophecy that "the temple and tower will fall down" could, scientifically speaking, come true trillions of years before the task is completed.
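For readers who want to reproduce the arithmetic, a few lines of Python (assuming, as the legend does, exactly one move per day) give the same order of magnitude:

```python
moves = 2**64 - 1                      # minimum number of moves for 64 disks
days = moves                           # the legend's rate: one move per day
years = days / 365
print(f"{moves:,} moves")              # 18,446,744,073,709,551,615
print(f"about {years:.2e} years")      # roughly 5.05e+16 years
print(f"about {years / 4.5e9:.0f} times the age of the Earth")   # ~11 million
```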
I hope you may get this before you leave tomorrow, as it will give you something to do in the train.
"It is just to tell you how to do the solitaire puzzle. I find it helps, if I am trying to do the puzzle, to use four kinds of pieces like this or, better still, to use a board with the squares in four colours.
Each piece always stays on the same colour until it is taken. You start with only four X's and you must still have [them] on at the end, so you must be very careful of them. But there are 12 O's to be got rid of. One needs to remember this all the time."
I hope you all have a very nice holiday in Italian Switzerland. I shall not be very far away at: Club Méditerranée
See: https://www.cityam.com/letter-from-alan-turing-explaining-the-formula-to-win-solitaire-sells-for-136000-at-auction
Alan Turing's letter about Solitaire, written during the final years of his life, offers an intriguing mathematical view of computing puzzles. In this letter, he describes a version of the solitaire game he played, known as "Turing Solitaire", along with its rules.
PHYSICS >
Modern physics relies upon the principles of general relativity and quantum mechanics to comprehend the universe. Quantum mechanics is the study of how things move and work at the smallest, most fundamental level, and helps us to understand how individual particles interact and behave to make the universe work. Quantum concepts challenge our intuitive understanding of the universe by exposing the fact that particles behave in ways that are far more complicated and nuanced than our day-to-day experiences might suggest.
For example, according to Heisenberg's uncertainty principle, if you were to shrink a car down to the size of an electron, you would only be able to calculate its speed or its position precisely, not both of those values simultaneously. Or imagine an electron: surely it has to be in one specific, observable position, say Position A or Position B? In the quantum realm, this electron can be in superposition, where it exists in both positions simultaneously. Only when this electron is actually observed does the probability distribution over its different states collapse into one definite position or state. These concepts contradict our everyday understanding of the universe because these phenomena are not observable at a macro level, so we never experience them.
There is no uncertainty when we are trying to observe both the position and speed of a car simultaneously, in the same way that there is no chance that before being observed, the car might exist in two different positions simultaneously
Despite the success of quantum mechanics in explaining sub-atomic interactions, its theories break down when describing gravity, suggesting a missing piece in our theories of the universe. Quantum mechanics quantizes gravity using hypothetical gravitons, which causes mathematical inconsistencies in the form of non-renormalizable infinities; this is why we turn to Einstein's theory of general relativity when attempting to understand gravity. General relativity uses the principle of spacetime to explain gravity, and it can be visualised as a huge elastic net that has been stretched out. When you introduce a massive object onto the net, it weighs the net down and creates a dip, causing any smaller objects introduced near the dip to roll towards the more massive object because of the slope created. Fundamentally, this is the concept of gravity.
Einstein's theory of general relativity portrays events in the universe as deterministic, where each cause has a specific effect that can be precisely calculated, whereas quantum mechanics portrays events in the universe as probabilistic, where the behaviour of individual particles can never be determined with complete certainty. The outcomes of quantum calculations are governed by probability distributions, so we are only dealing with the probabilities of certain events happening until we actually perceive them to happen.
Quantum principles state that at very small scales, particles (even in a vacuum) experience spontaneous energy fluctuations due to Heisenberg's uncertainty principle, and when this idea is applied to spacetime, incorporating these fluctuations would go against the fundamentally smooth structure required by general relativity. For example, at extreme moments such as the initial instants of the Big Bang, these fluctuations would in theory be magnified to the point of introducing randomness at the Planck scale, undermining the fundamental predictability required by general relativity.
When scaling general relativity down to apply its calculations to how the universe works at the quantum level, we have similarly little success. We end up with infinite values for certain calculations, most importantly those to do with gravity. When we describe a particle using general relativity, its energy is concentrated in an extremely small volume, and the magnitude of this energy concentration curves spacetime immensely, resulting in a gravitational singularity, where the curvature of spacetime is infinite. This would mean that, according to general relativity, even elementary particles would collapse into tiny black holes if their gravitational effects were fully incorporated.
Furthermore, when general relativity is applied to subatomic particles we hit another major problem. As distances shrink, the relative strength of gravity hugely increases, which means that at the Planck scale gravity becomes as strong as the other fundamental forces, causing the equations of general relativity to break down with infinite energy densities and infinite curvature.
These problems show us the fundamental incompatibility of the two theories, because of the clashes that occur when trying to explain events at a macro scale with concepts that are used to explain the behaviour of individual particles, and vice versa.
I believe that the incompatibility between quantum mechanics and general relativity is not merely a mathematical failure but an indication that our current frameworks are only approximations of a deeper reality. While these two theories are incredibly successful at explaining their own subsets of the universe, attempting to understand the quantum world using general relativity, or the macro world using quantum mechanics, results in failure. I believe that our lack of a unified theory does not stem from mere mathematical inconsistencies, but from a conceptual misalignment in the way we use physics to understand the universe. The reason we encounter non-renormalizable infinities when scaling quantum mechanics up and general relativity down is that they are two distinct, incomplete representations of a greater, more fundamental framework.
The problem with gravity is not that it cannot be quantized, but that it is being used in a way in which it is inapplicable. Spacetime could be an information structure, an emergent phenomenon rather than a fundamental concept, which would mean that the fabric of our reality is not made from matter or energy alone but is comprised of fundamental bits of information, similar to how a computer simulation is built from binary machine code. If this is true, then gravity and quantum mechanics would merely be large-scale side effects of the processing of this information.
This would mean that what we experience as space and time could instead be a high-level illusion emerging from quantum rules that we don't yet understand, and any attempt to understand this would just be an approximation of a more profound framework of reality. The breakdown of relativity at the Planck scale should be a signal that gravity is a concept that dissolves when we zoom into the universe beyond the resolution at which we experience it in our day-to-day lives. This suggests the next breakthrough in modern physics will not be yet another advancement in string theory or another attempt to reconcile these two theories; instead it will leave us looking back at our two previous breakthroughs as primitive, while completely transforming our outlook on the universe by providing a single, complete framework.
A quantum, plural 'quanta', is the smallest discrete unit (e.g. a particle) of a phenomenon. Quantum mechanics is the fundamental theory that describes the behaviour of nature at and below the scale of atoms.
While we would typically assume, based on quantum experiments, that quantum mechanics only involves the smallest parts of our universe (such as electrons and photons), quantum phenomena are all around us, acting on every scale.
Whether you are making some toast in the toaster, turning on a fluorescent light or simply using your computer, you are surrounded by quantum phenomena.
The idea of quantum physics arose in the late 1800s and early 1900s, when scientists noticed that classical physics could not be used to explain some of the behaviours of subatomic particles.
Previously, scientists had pictured the atom using J.J. Thomson's 'plum pudding' model, in which the atom consisted of a positive 'dough' with electrons randomly stuck in it. However, in 1913 Niels Bohr revised this picture, suggesting that electrons orbit the nucleus at specific distances and at different energy levels (what we now know as shells). This allowed him to explain why certain chemicals burn with certain coloured flames: the pattern of energy released by the electrons must be the same for every atom of that element (due to the fixed energy levels). This means that electrons cannot be arranged at random and must have fixed levels of energy.
There are many central concepts that helped to form our understanding of quantum physics.
Niels Bohr, quantum physicist
The wave-particle duality theory states that waves can exhibit particle-like properties, while particles can exhibit wave-like properties. Albert Einstein's theory that light, which is considered a form of electromagnetic wave, must also be thought of as particle-like led the French physicist Louis de Broglie to propose that electrons and other discrete bits of matter (which had previously only been conceived of as material particles) must also have wave-like properties (e.g. wavelength and frequency). The experiment done to demonstrate this is called the double-slit experiment. In it, electrons are beamed at two slits in a wall, with another wall behind it. With our previous knowledge of electrons we would assume that the electrons would land on the back wall exactly behind the two slits. However, this is not the case. If we think of electrons as subatomic particles with wave-like qualities, the waves go through the two slits and then interfere with each other.
Due to the interference of the two waves, the electrons do not land where we would expect (directly behind the slits) but in a pattern centred behind the middle of the back wall, as seen on the diagram. The line showing the wave intensity shows that the electrons actually gather behind the centre of the wall. This experiment demonstrates that electrons have wave-like qualities, because of where they gather once beamed at two slits. Wave-particle duality is just one key concept in quantum mechanics.
Quantum superposition is the ability of a quantum system to act as if it is in multiple states at the same time until it is measured; this is a property that applies to all waves. The Schrödinger's cat thought experiment was originally used to point out the absurdity of superposition, but it is now used to illustrate the phenomenon. In this experiment a cat is placed in a sealed box along with a vial of poison that will kill the cat if a certain event (such as the decay of a radioactive atom) happens. From outside, whether the cat is dead or alive is unknown, so the cat is in a state of superposition and is both dead and alive at the same time (because it is not until the box is opened that the state of the cat can be known). This idea also carries over to the double-slit experiment (a real example of superposition), which demonstrates that light can behave as both a wave and a particle until it is observed in a specific way.
The uncertainty principle states that the momentum and position of a particle cannot both be precisely determined at the same time. This sounds silly to us, because it is easy to know the speed and position of a car, but the uncertainties this principle is talking about are far too small to be observed at that scale. The principle says that the product of the uncertainties in position and momentum is greater than or equal to a tiny physical constant: Δx × Δp ≥ h/(4π), where h is Planck's constant (about 6.6 × 10⁻³⁴ joule-seconds). It also says that any attempt to precisely measure the velocity of a subatomic particle (like an electron) will knock it about in an unpredictable way, so that a simultaneous measurement of its position has no validity.
This new knowledge has had profound impacts on science and technology today. The understanding of quantum mechanics has led to the development of lasers, light-emitting diodes, transistors, medical imaging, electron microscopes and many other modern-day devices. Without the discovery of quantum mechanics, our phones would not exist today!
Figures 1 & 2: Models of the atom over time - What is radioactivity? - OCR
Figure 3: Double-slit experiment - Wikipedia
Figure 4: Double-slit experiment illustrating the wave behaviour of light
Wave-particle duality | Quantum Mechanics, Electrons, Photons | Britannica
What Is Quantum Superposition? - Caltech Science Exchange
Uncertainty principle | Definition & Equation | Britannica
PHYSICS >
So, what is special relativity? Special relativity, in short, is the scientific theory of the relationship between space and time. Albert Einstein proposed this theory in 1905, and it specified many concepts we see in physics today, such as time dilation, length contraction and relativistic mass. Einstein arrived at these concepts and postulates through a series of elaborate thought experiments and by contemplating constants in physics such as the speed of light and Planck's constant. Special relativity has two main postulates:
1. The principle of relativity: the laws of physics are the same for all observers travelling at a constant speed, no matter how fast they are going.
2. The constancy of the speed of light: the speed of light in a vacuum is always about 300,000,000 m/s, regardless of the motion of the light source or of the observer.
These postulates affect our perception of space and time.
The first key concept that arose from the special theory of relativity was time dilation: the idea that time passes differently for observers in different reference frames. The twin paradox is often used to explain this (1). Two twins, Bo and Frankie, are born at the same time and have just turned 20. Bo decides to jump on a spaceship and travel to a distant planet 3 light years away at 86.6% of the speed of light.
Once she reaches the planet, she will turn around and head back to Earth at the same speed. Because Bo is travelling at such a significant fraction of the speed of light, time passes more slowly for her than for observers on Earth. To calculate by what factor, we can use the Lorentz factor: at 86.6% of the speed of light the Lorentz factor is 2, meaning time passes half as fast on the spaceship as it does on Earth (2). To find how long the journey takes as measured on Earth, we use speed = distance/time: the round trip covers 6 light years at 0.866 times the speed of light, so 6/0.866, roughly 6.93 years, passes on Earth. The time that passes on the spaceship is the Earth time divided by the Lorentz factor, so roughly 6.93/2 ≈ 3.46 years. From this we can see that time has passed much faster on Earth than it has on the spaceship; not only that, but the twin on Earth will look biologically older than the twin on the spaceship.
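These figures are easy to check with a few lines of Python; the sketch below uses the example's values of 86.6% of the speed of light and a 3-light-year one-way distance:

```python
import math

v_over_c = 0.866            # Bo's speed as a fraction of the speed of light
distance_ly = 3             # one-way distance in light years (value used in the example)

gamma = 1 / math.sqrt(1 - v_over_c**2)       # Lorentz factor, ~2.00
earth_years = 2 * distance_ly / v_over_c     # round-trip time measured on Earth
ship_years = earth_years / gamma             # proper time measured on the spaceship

print(f"Lorentz factor: {gamma:.2f}")        # 2.00
print(f"Years on Earth: {earth_years:.2f}")  # ~6.93
print(f"Years on ship:  {ship_years:.2f}")   # ~3.46
```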
Relativistic mass is the mass of an object as perceived by an observer, which increases as the object's speed increases (3). If an object were travelling at exactly the speed of light its relativistic mass would be infinite (which is why no object with mass can ever reach the speed of light). An object always has a rest mass, represented by the symbol m₀, and a relativistic mass m given by the formula m = m₀ / √(1 - v²/c²), where v is the object's speed and c is the speed of light.
This phenomenon is the idea that whether two events happen simultaneously is not absolute but depends on the observer's reference frame. The laws of classical physics stated that two simultaneous events are simultaneous for everyone, no matter their reference frame; special relativity disproved this. In special relativity there is no such thing as 'now' and no absolute present applicable to everyone everywhere. Essentially, the relativity of simultaneity states that if two events happen at the same time in one reference frame, the times at which they happen are not absolute. We can use a thought experiment designed by Einstein to understand this concept further (4). Envision a train moving at a constant speed, with one observer in the middle of the train and one on the platform, standing midway between two points where lightning bolts strike the train's rear and its front. The observer on the platform sees the two bolts strike at exactly the same time. However, the observer on the train, who is moving relative to the platform, sees the bolt at the front of the train before the one at the rear, because she is moving towards the light from the front bolt, so the light from the bolt at the back of the train must travel further to reach her than the light from the bolt at the front.
This relies on the fact that the observer on the platform is midway between the two strike points: since light travels at the same speed in all directions, the two flashes reach them at the same moment, so they perceive the strikes as simultaneous.
However, the situation would be different again if the person on the train were sitting right at the front, in first class. In that case the light from the bolt at the front would reach them almost immediately, while the light from the bolt at the rear would take even longer, so the gap between the two flashes would appear even larger. A similar shift applies if the person were sitting at the back of the train. As we can see, what you observe depends on where you are and how you are moving relative to the events.
This key point of special relativity states that no object with mass can exceed the speed of light, because doing so would require infinite energy. Light is the fastest-moving thing in our universe, and since no signal can travel faster than it, instantaneous communication across large distances is impossible. One light year is the distance that light travels in one year. Let's use the spaceship analogy again: Bo is currently 5 light years away from Frankie. Since the speed of light is finite, light takes 5 years to travel from Bo's position to Frankie. By the time that information reaches Frankie, Bo will have moved a long way, because the image Frankie sees is 5 years old. Frankie therefore sees Bo as she was, 5 light years away, when in reality Bo may be much further away by now.
To conclude, special relativity is an extremely complex and difficult theory to comprehend, let alone master, and these are just a few of the concepts and postulates involved in it.
1. https://www.youtube.com/watch?v=yuD34tEpRFw
2. https://ffden-2.phys.uaf.edu/webproj/211_fall_2014/Jackson_Page/jackson_page/page3.html
3. https://www.einstein-online.info/en/explandict/relativistic-mass/
4. https://www.nationalgeographic.com/science/article/einstein-relativity-thought-experiment-train-lightning-genius
5. https://phys.libretexts.org/Bookshelves/Relativity/Spacetime_Physics_(Taylor_and_Wheeler)/06%3A_Regions_of_Spacetime/6.01%3A_Light_Speed-Limit_on_Causality
I recently read a profound book on the future of artificial intelligence by Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind. The book aims to wake readers up to the stark reality of the serious change that will come about in the next few years, with Suleyman suggesting that the AI revolution may be more consequential for our species than any prior technological revolution.
As with all innovation, the exciting prospects, and the future they hold, are coupled with dangerous new implications, the likes of which have never been seen before with this kind of technology. The reason this period of innovation and progress is referred to as the "coming wave" in the book is because of the risks it will undoubtedly pose, with the problem of its containment being, in his opinion, the "defining challenge of our era".
Failure to do so, from his rather disturbing perspective, could lead to our unfortunate demise. We are already familiar with the products of years of research and experimentation in the form of LLMs like ChatGPT, Gemini and Copilot. What are currently forms of AI that mostly help with menial work-related tasks will, Suleyman predicts, very quickly become core components of our lives, organising our routines, operating businesses and managing fundamental government services.
This AI revolution we are just stepping into will lead to advancements in an array of fields. Whether through DNA printers or robot assistants, these seemingly far-off wonders of the tech world are not as distant as once believed.
Whilst we are presently unaware of it, Suleyman strongly argues that the coming decade will be defined by this wave of powerful yet rapidly proliferating technology. He gives the example of the jaw-dropping collapse in the cost of sequencing DNA, which has dropped a millionfold in under twenty years, a thousand times faster than Moore's Law. Suleyman believes that a similar chain of events will unfold in the AI industry.
This has already been seen since the release of ChatGPT and other LLMs, with years of extremely expensive, groundbreaking research amalgamating into game-changing chatbots used by hundreds of millions of people daily around the world.
Price collapse and widespread availability of tech:
The release of GPT-4 in March 2023 seems lost to the past now, with the prospect of GPT-5 already drawing closer. OpenAI CEO Sam Altman has suggested a similar price collapse in this industry, estimating that the cost to use a given level of AI falls roughly tenfold every 12 months.
What was seen as pioneering and futuristic just three years ago is now a common aspect of the lives of millions. In the next few years the boundaries of what AI is capable of will continue to be broken again and again, paving the way for innovation in many different areas. AI could combat major global issues such as poverty, disease and climate change far more effectively than any human can. It may well even play a role in the future of warfare, taking cyberwarfare and autonomous weapons to a whole other level.
From all this it seems that AI could be a phenomenal tool for building the ideal world, if it stays on our side.
The classic dystopian forecasts of AI world domination and its goal to wipe out mankind may be quite far-fetched, but to a lesser extent, AI going rogue has to be a serious consideration according to Suleyman. That is because we are currently at the stage of narrow artificial intelligence, far from the prospect of reaching AGI (Artificial General Intelligence), the stage where AI can successfully outperform humans in all cognitive skills.
We are fast approaching something in the middle, ACI (Artificial Capable Intelligence), and will most likely see the arrival of AGI at some point in our lifetimes. What it holds in store for us, we can only speculate.
What we do know for certain is that we must be ready to meet and adapt to the challenges we will face as AI grows more powerful and intelligent.
Suleyman doesn't want us to fear AI but to accept its incredible power. If we manage it wisely, it could be the greatest tool humans have ever created; if not, Suleyman strongly believes it may well become the most dangerous. Managed properly, AI could be the most important invention of our species.
"Containment is the defining challenge of our era" - Mustafa Suleyman
Bill Gates has also written a review of The Coming Wave, naming it his favourite book about AI. You can see a short extract of his review below:
When people ask me about artificial intelligence, their questions often boil down to this: what should I be worried about, and how worried should I be? For the past year I've responded by telling them to read The Coming Wave by Mustafa Suleyman. It's the book I recommend more than any other on AI to heads of state, business leaders, and anyone else who asks, because it offers something rare: a clear-eyed view of both the extraordinary opportunities and genuine risks ahead.
The author, Mustafa Suleyman, brings a unique perspective to the topic. After helping build DeepMind from a small startup into one of the most important AI companies of the past decade, he went on to found Inflection AI and now leads Microsoft's AI division. But what makes this book special isn't just Mustafa's firsthand experience; it's his deep understanding of scientific history and how technological revolutions unfold. He's a serious intellectual who can draw meaningful parallels across centuries of scientific advancement.
Most of the coverage of The Coming Wave has focused on what it has to say about artificial intelligence, which makes sense given that it's one of the most important books on AI ever written. And there is probably no one as qualified as Mustafa to write it. He was there in 2016 when DeepMind's AlphaGo beat the world's top players of Go, a game far more complex than chess, with 2,500 years of strategic thinking behind it, by making moves no one had ever thought of. In doing so the AI-based computer program showed that machines could beat humans at our own game, literally, and gave Mustafa an early glimpse of what was coming.
Nuclear reactions are slowly reshaping our future: a renewable source of energy, limited waste, the perfect answer to our problems. However, they could also spell our demise: destructive weapons that cause calamities all over the world and lay waste to civilisation. That is why nuclear reactions should be handled with care; they have the power to create and to destroy.
Nuclear reactions are a key part of nanoscience, as well as nuclear physics and chemistry. They are so powerful, yet they occur on such a small scale! Nuclear reactions can help fill in the gaps in our knowledge of the universe, and they will be key to expanding that knowledge soon. They have many applications in the modern world, ranging from medicine to power to weaponry.
There are two main types of nuclear reaction: fission and fusion. Briefly, they are near opposites: whilst fission is the splitting of an atomic nucleus, fusion is the combining of atomic nuclei. Both release monumental amounts of energy; however, fusion releases roughly four times the energy of fission for the same mass of fuel. That is why we are looking to fusion as a new, reliable energy source.
Fission: Nuclear fission occurs when a neutron, called the incident neutron, hits a fissile nucleus (a nucleus that can undergo fission). The nucleus is excited by the incident neutron, splitting into two nuclei and releasing some more neutrons. These neutrons in turn collide with other surrounding fissile nuclei, causing more fission, more neutron release and eventually a chain reaction. [1]
Some of the best examples of fissile nuclei are the plutonium isotope plutonium-239 (Pu-239) and the uranium isotope uranium-235 (U-235), which are the most active and fissile isotopes. They also easily absorb low-energy neutrons, making it easy for them to split and for nuclear fission to take place. [2] However, these isotopes are not abundant. Only 0.7% of the Earth's natural uranium is U-235, meaning that a large amount of uranium must be mined, which can harm the environment and expose miners to dangerous radioactive elements. Pu-239 is not a natural isotope, which means a large amount of energy is needed to produce it synthetically. However, it can be produced from the natural, abundant uranium isotope U-238, meaning it is unlikely that we will run out of this resource any time soon. [3]
Fusion: Nuclear fusion, on the other hand, occurs when two light atomic nuclei hit each other and combine to form a heavier nucleus. It must take place at an extremely high temperature; to occur on Earth, the temperature must exceed about 10⁸ °C. Two nuclei always experience what is called a 'mutual electrical repulsion', a force that pushes them apart whenever they move too close to each other. However, when the temperature exceeds 100,000,000 °C the nuclei have enough energy to overcome this force; they are then pulled together by the attractive nuclear force, and if the conditions are right, fusion occurs. [4]
Nuclear fusion must occur in plasma, another state of matter: an extremely hot, electrically charged gas full of charged ions. Plasma is the only state of matter that can withstand such extreme temperatures, which is why fusion must take place within it. Plasma makes up around 99% of the visible universe, which explains why so much fusion occurs in space. However, plasma is extremely rare on Earth, and even when we glimpse it, it is only in things like lightning and the Aurora Borealis (the Northern Lights), making it very hard to harness. This is one of the things that makes fusion so difficult to achieve on Earth. [5]
The most common nuclei used for nuclear fusion are two hydrogen isotopes, hydrogen-2 (²H or D) and hydrogen-3 (³H or T), commonly known as deuterium and tritium. They are so common in fusion because of their compatibility: they readily fuse together in the right conditions to form helium. Deuterium has one neutron and tritium has two, so when they fuse, three neutrons are brought together. However, the helium nucleus produced keeps only two of them, so the remaining high-velocity neutron escapes, carrying away energy that can be harnessed for power. [6]
Deuterium is the more abundant isotope of hydrogen and can be found in most of the world's water; about 1 in every 6,420 hydrogen atoms is deuterium, which in context means we have a steady supply of deuterium that is virtually inexhaustible. Tritium, on the other hand, is a radioactive isotope with a half-life of about 12 years, which is very short relative to other elements and isotopes. As well as this, tritium is very rare in nature and, for all fusion purposes, must be produced synthetically. This takes fuel, energy and money, although it is possible that tritium production could be merged into the plant itself, which might save some energy. [7] However, it must be considered that the energy released from just one gram of deuterium-tritium fusion is equivalent to the energy obtained from around 9,000 litres of oil. [8]
Humans have been investigating nuclear physics and radioactivity since the late 1700s. From Bohr to Rutherford, Curie to Einstein, many scientists have made important discoveries in this field. Theoretical physicists have been exploring the realms of fission and fusion for well over a century, and we have also begun to act on these ideas, making massive leaps in these mysterious fields.
In 1789 Martin Klaproth discovered the first radioactive element, uranium, although its radioactivity was only uncovered in 1896 by the French physicist Henri Becquerel, who discovered it by accident. After this, Marie and Pierre Curie began experimenting with other radioactive elements, such as radium, in the hope of finding a cure for cancer. This principle is still used in treatment today: radiotherapy uses radiation to try to kill cancer cells. In 1902 Ernest Rutherford showed that radioactivity involves one element spontaneously transforming into another, and he later learnt how to transmute elements artificially using alpha particles: the first steps towards full-scale nuclear reactions. Over the next 30 years or so, many significant discoveries were made in this field;
Einstein's general and special theories of relativity, Niels Bohr's original atomic structure and James Chadwick's discovery of the neutron, to name a few. However, possibly the most significant breakthrough in nuclear science came in 1942, when Enrico Fermi achieved the first controlled, self-sustaining nuclear fission reaction: the first step towards a nuclear future. [9] [10]
Humans have been working with practical fission for over 70 years, and we have made many outstanding advancements. The first major man-made fission explosion came in 1945, when the Manhattan Project produced the first nuclear detonation on the 16th of July. This eventually led to the bombing of Hiroshima and Nagasaki, showing how destructive nuclear energy can be. Thankfully we have not had any large-scale nuclear bombing in warfare since then, but the world still holds a substantial nuclear arsenal; the global stockpile peaked during the Cold War, reaching over 60,000 warheads in 1986. The first power plant to produce usable electricity from atomic (nuclear) fission was EBR-1 (Experimental Breeder Reactor 1), which powered four 200-watt lightbulbs on the 20th of December 1951. Today a single reactor can generate over 1 gigawatt of electricity, just one example of how much we have advanced over the last 70 years. One of the first commercial fission plants, Yankee Rowe, was commissioned in 1960 and operated until 1992. [11]
Fission and fusion reactions release monumental amounts of energy, and humans have discovered many applications to harness it. However, whilst we have made significant developments in the field, we have only seen the tip of the iceberg in this fascinating realm.
One of the main applications of nuclear fission on Earth is the nuclear power plant, whose purpose is to generate electrical energy from a controlled nuclear fission reaction: a relatively clean, low-carbon source of electricity. The reactor core used in the reaction contains over 200 fuel rods filled with small pellets of U-235, providing a controlled chain reaction that generates power. The reactor core also contains many control rods made of materials such as boron, cadmium and indium. These absorb some of the neutrons, keeping the reaction from getting out of control: a safe way to generate power. [12]
Another important application of nuclear fission is in bombs; nine countries now have a nuclear arsenal. A nuclear explosion kick-starts a rapid, uncontrolled chain reaction, and the energetic neutrons escape in an explosion of light, sound and shockwaves far more powerful than any conventional explosion. [13]
The most important example of nuclear fusion is in stars, the closest to us being the Sun. The Sun is made up of hydrogen and helium in an incredibly hot plasma. These conditions are perfect for fusion to occur, and fusion is the reason for the light and heat we get from the Sun: hydrogen nuclei fuse together to form helium nuclei, and the high-energy output is expelled and received on Earth as warmth and light.
Both reactions have their pros and cons. Whilst fission is a very reliable source of energy, it produces waste as well. Although the amount is quite low, only around 25-30 tonnes of spent fuel a year for a typical reactor, the waste is highly radioactive and a big hazard for our health and for the environment. As well as this, whilst nuclear power plants are quite eco-friendly and release more energy than other forms of power generation, they cost huge amounts of money, and the radioactive elements required are not inexhaustible. Finally, many people do not understand nuclear power, which causes opposition and is one reason why nuclear energy is not more widespread.
Fusion, on the other hand, has virtually no waste or impact on the environment. As well as this, hydrogen is the most abundant element in the known universe, meaning that we have an almost inexhaustible supply of fuel. However, we do not yet have the technology to sustain such high temperatures economically, and the cost of developing it is monumental. And whilst fission has the possibility of running out of control, fusion is self-regulating, meaning it cannot run away.
It is likely that nuclear reactions will play a big part in our lives in the near future. Once we find a cost-effective way of achieving fusion on Earth, its prospects will be nearly unlimited. Fusion could eventually replace fossil fuels, and although it may be many years until we reach this level, it is a good target to aim for.
Bibliography
1. Andrea Galindo (2022) What is Nuclear Energy? The Science of Nuclear Power. Available at: What is Nuclear Energy? The Science of Nuclear Power | IAEA
2. (2025) Physics of Uranium and Nuclear Energy. Available at: Physics of Uranium and Nuclear Energy - World Nuclear Association
3. (n.d.) Why Uranium and Plutonium? Available at: Why Uranium and Plutonium?
4. Andrea Galindo (2022) What is Nuclear Energy? The Science of Nuclear Power. Available at: What is Nuclear Energy? The Science of Nuclear Power | IAEA
5. (n.d.) About Plasmas and Fusion. Available at: About Plasmas and Fusion | Princeton Plasma Physics Laboratory
6. (n.d.) DOE Explains Deuterium-Tritium Fusion Fuel. Available at: DOE Explains Deuterium-Tritium Fusion Fuel | Department of Energy
7. M.R. Gordinier, J.W. Davis, F.R. Scott, K.R. Schultz (2004) Nuclear Fusion Power. Available at: Nuclear Fusion Power - ScienceDirect
8. (n.d.) DOE Explains Deuterium-Tritium Fusion Fuel. Available at: DOE Explains Deuterium-Tritium Fusion Fuel | Department of Energy
9. (2024) Outline History of Nuclear Energy. Available at: Outline History of Nuclear Energy - World Nuclear Association
10. (n.d.) What is nuclear energy? Available at: What is Nuclear Energy? - Nuclear Industry Association
11. (2024) Outline History of Nuclear Energy. Available at: Outline History of Nuclear Energy - World Nuclear Association
Is it a bird? Is it a plane?
AI TOOLS COULD BE USED TO FIGHT CLIMATE CHANGE, BUT THEY COULD ALSO BE A KEY CONTRIBUTOR TO DESTROYING THE ENVIRONMENT.
AI! AI! Could AI save the world???
AI has exceptional data pattern recognition and analysis capabilities, which has led to high hopes for a more sustainable environment through better-informed decisions or detecting methane levels. Despite this, critics argue that the 800 kg of raw materials required to build a measly 2 kg of computer hardware is more trouble than it's worth.
According to earth.org, the average CO2 emissions of a car over its lifetime are 126,000 lbs; training an AI model can emit 626,600 lbs.
However, technology such as satellite imagery and machine learning has proved significant in removing plastic from the oceans and preventing illegal logging: over 32 billion waste items across 67 waste categories were recovered and recycled in 2022 alone.
Artificial Intelligence, commonly referred to as AI, is one of the most promising and rapidly developing fields of the 21st century. As Large Language Models (LLMs) such as ChatGPT become more integrated into our day-to-day lives, their relevance will only increase. LLMs have greatly influenced how we interact with technology, and with the introduction of Apple Intelligence, there is no telling just how far this will go. Currently over 77% of businesses are either using or exploring the use of AI to optimise efficiency and workflow. But LLMs are not perfect; in fact they are far from it. They can hallucinate, miss crucial details, or, more often, misunderstand the prompt. Either way, we must do something to fix this issue. In this article I will discuss a widely recognised technique to address this problem, Retrieval Augmented Generation (RAG), which is already implemented in many top-performing LLMs such as ChatGPT.
RAG is a technique that enhances an LLM's ability to answer by giving it an external information retrieval system. This is particularly useful and impactful as it means the model can access external sources, such as the Web, at any point and pull data from there. This results in more accurate and up-to-date information, enabling the LLM to give a more in-depth and detailed response. It improves the overall performance of the LLM, making it much more effective; thus knowing how RAG works will certainly give an advantage to you, yes you, in the coming world.
RAG is composed of two main components: the retriever and the generator.
The retriever's aim is to extract information relating to the prompt from an external collection of documents. It does this through the use of semantic search methods, the most commonly used being cosine similarity. A plain keyword search is less useful than other methods of search, as it can often miss the context or intent behind the query. Semantic search improves on this by attempting to grasp the query's meaning, taking context into account, and then retrieving content that is relevant in meaning, not just in word match.
Let's take the example of the prompt: "Most promising fields of artificial intelligence in the coming century relating to societal change". A basic keyword search may focus on the words "promising", "fields", "intelligence", "century", "societal change". Looking at these words together, it is clear how a basic search can easily focus on the wrong components of the query and thus provide an irrelevant response that does not actually answer it. With semantic search, on the other hand, the LLM understands that you are asking it to state the most promising fields of artificial intelligence that have the potential to impact society: essentially, connecting the dots between words and meanings, providing you with results that are actually constructive.
The diagram above displays a traditional cosine similarity graph.
Cosine similarity excels here as it is extremely precise at finding the exact information required, which makes it the perfect method. Cosine similarity is a metric that evaluates how similar two documents are, regardless of their size, by calculating the cosine of the angle between two non-zero vectors. The diagram above is a 3D representation of a graph with three axes, each representing a different topic: McDonalds, Popeyes and the Mona Lisa. There are four documents in the graph, represented by the circles; the angle between these points is then used to measure how closely related the documents are to each topic. The smaller the angle, the more closely linked the documents. In this graph we can see that the Mona Lisa document has the largest cosine distance (angle), which conveys that the Mona Lisa is a different topic altogether, given its distance from the other documents.
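As a minimal sketch of the metric itself, the following Python snippet computes cosine similarity between made-up three-dimensional "topic" vectors for documents like those in the diagram; the numbers are invented purely for illustration:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two non-zero vectors: near 1 = similar, near 0 = unrelated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Invented coordinates along three "topic" axes: McDonalds, Popeyes, Mona Lisa.
doc_mcdonalds = np.array([0.9, 0.2, 0.1])
doc_popeyes   = np.array([0.8, 0.3, 0.1])
doc_mona_lisa = np.array([0.1, 0.1, 0.9])

print(cosine_similarity(doc_mcdonalds, doc_popeyes))    # high: closely related topics
print(cosine_similarity(doc_mcdonalds, doc_mona_lisa))  # low: a different topic altogether
```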
At this point you may be wondering how the positions of the documents are decided. Well, fret no longer, your requests have been answered. This is done by another very famous tool, necessary for all NLP (Natural Language Processing) systems, called word embedding. A word embedding is a vector representation of a word in a high-dimensional space. In general, the dimensionality of the vector corresponds to the number of features encoded in the representation: effectively, the more features that are encoded, the more precise the representation of the word. In the simplest scheme these vectors are assigned through one-hot encoding, where each word in the vocabulary is given its own binary vector, which can then be mapped onto the graph.
Alternatively, TF-IDF can be used, which sits between keyword and semantic searching and is more advanced and more useful. TF-IDF is a numerical statistic that reflects how important a word is to a document. As the system processes more documents, the TF-IDF matrix becomes more detailed and weights certain words as more important. These weights can either feed a keyword search focused on the more important words, or the weighted vectors can be compared with cosine similarity to further determine what the prompt is actually asking for.
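Putting TF-IDF and cosine similarity together gives a toy retriever. The sketch below uses scikit-learn's TfidfVectorizer; the documents and query are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "promising fields of artificial intelligence and their impact on society",
    "a history of fast food chains such as McDonalds and Popeyes",
    "the Mona Lisa and other famous Renaissance paintings",
]
query = "most promising fields of artificial intelligence for societal change"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)   # TF-IDF matrix for the documents
query_vector = vectorizer.transform([query])        # the query, in the same vocabulary

scores = cosine_similarity(query_vector, doc_vectors)[0]
print(scores)                         # similarity of the query to each document
print(documents[scores.argmax()])     # the AI document scores highest
```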
The second component of a RAG system is the generator, whose predominant aim is to actually craft the coherent and informed AI response. It does this by utilising the famed transformer architecture, which Google researchers pioneered in their revolutionary paper 'Attention Is All You Need'. This architecture uses context and data to predict the most likely words to follow, by looking at all of the prompt at the same time, allowing the model to generate fluent statements. In essence, the retriever component acts as a search engine, compiling all the relevant information and facts, and the generator strings it all together to form the AI's response.
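In outline, the whole retrieve-then-generate loop is short. The sketch below is a deliberately simplified stand-in: the similarity score is a plain word-overlap count and the generate function is a placeholder for whatever LLM is being augmented, so none of these names correspond to a real RAG library:

```python
def similarity(query, doc):
    """Toy relevance score: how many query words appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, documents, top_k=2):
    """The retriever: rank the documents by relevance and keep the best few."""
    return sorted(documents, key=lambda d: similarity(query, d), reverse=True)[:top_k]

def generate(prompt):
    """Placeholder for the generator; in a real system this would call the LLM."""
    return f"[LLM response to a prompt of {len(prompt)} characters]"

def rag_answer(query, documents):
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)

docs = [
    "RAG couples a retriever with a generator.",
    "Cosine similarity measures the angle between two vectors.",
    "The Mona Lisa hangs in the Louvre.",
]
print(rag_answer("How does RAG combine retrieval and generation?", docs))
```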
The combination of the retriever and the generator makes RAG systems novel and particularly useful for certain applications. Whilst RAG models still have limitations when generating text, due to their ability to hallucinate and to amplify existing social stereotypes in the training data, they have many popular uses, including image generation. RAG systems are extremely intriguing to analyse, as their image outputs can interpret user prompts very differently from other AI image generators, resulting in new (and often interesting) AI images. Despite the overwhelming urge to automate everything with some variant of AI, it is important to be considerate and ethically just in how we use it; ensuring our usage of AI is moderate and carefully considered will result in a more equitable and better society for all. Let us ensure the future we craft together is one for all of humanity.
BCIs are devices that can process brain activity and, based on this activity, send signals to external devices, which then perform actions based on those signals. This essentially allows a user to control external devices, such as phones or laptops, with just their thoughts.
The use cases for such devices are immense, potentially helping patients with paralysis or motor diseases regain motor functions through this BCI technology
There are two different types of BCI: invasive and non-invasive. Invasive BCIs are connected directly to the brain tissue via surgical procedures and are most appropriate for patients looking to overcome severe conditions, because of the risks associated with the procedures. Non-invasive BCIs involve wearing an electrical device on the head of the patient; they produce weaker signals, since they are not directly connected to the brain tissue, and are therefore better suited for use in virtual reality, video games and robotics.
BCIs work around the electrophysiology of the brain's neural network: every time our brain makes a decision, or even thinks, it sparks electrochemical signals. "This phenomenon is located in our nervous system; more specifically in the gaps between neurons, known as synapses, as they communicate back and forth" ("Brain Computer Interfaces (BCI), Explained" - Built In).
In order to capture this brain activity, BCIs place electrodes proximal to these synapses. These electrodes detect electroencephalogram (EEG) waves: they measure voltages, recording the frequency and intensity of each 'spike' as neurons fire. That information is then fed through local computer software, where it is translated in a process called neural decoding. This is where a variety of machine learning algorithms and other artificial intelligence agents take over, converting the complex data sets collected from brain activity into a programmable understanding of what the brain's intention might be.
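As a toy illustration of what 'neural decoding' means in practice, the sketch below trains a simple classifier on synthetic feature vectors standing in for EEG windows and maps them to invented command labels; a real BCI pipeline involves far more sophisticated signal processing and models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
commands = ["left", "right", "forward", "back"]   # invented command labels

# Synthetic stand-ins for EEG feature windows: each command's features are
# clustered around a different mean (real brain signals are far messier).
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(50, 8)) for i in range(4)])
y = np.repeat(np.arange(4), 50)

decoder = LogisticRegression(max_iter=1000).fit(X, y)   # the "neural decoder"

# Decode a new (synthetic) window of brain activity into an intended command.
new_window = rng.normal(loc=2, scale=0.5, size=(1, 8))
print(commands[decoder.predict(new_window)[0]])         # most likely "forward"
```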
BCIs are somewhat of a sci-fi concept to most of the world, being a relatively new advancement. However, with the investment of tech tycoons like Elon Musk, the sector is expected to grow rapidly over the course of the next few decades. BCIs have actually already been implemented in certain areas, and their main potential applications can be grouped into two uses: neurorehabilitation and direct control of assistive technologies.
BCIs have already helped victims of paralysis by supplying a neural feedback loop that rewires the brain. BCIs are also capable of restoring movement, mobility and autonomy for quadriplegic or partially paralysed and disabled patients, heightening their quality of life. The feedback loop involves EEG sensors placed on the scalp to monitor brainwave activity. The brainwaves are displayed on a screen or translated into visuals, and the person gets real-time feedback: for example, a game character moves correctly only when the brain is producing the desired pattern while the patient plays a game. Over time, the brain 'learns' to produce healthier or more regulated activity. This method has been used successfully for ADHD and epilepsy, and the next stage of development for the treatment is partial paralysis.
In more chronic cases, robotic devices and limbs are integrated into this system so that the robotic limbs themselves are controlled by the BCI. The BCI picks up signals directed towards the non-functioning limb directly from the brain, bypassing sites of injury or disease, and sends them straight to the robotic limb, giving the patient direct control over this new limb.
Similarly, BCIs can help paralysed patients communicate with others, as the devices allow patients to type or speak through a third-party device. A team from Stanford University found that its brain chip could decode 62 words per minute, which is on pace with natural conversation. The study featured a non-verbal patient who suffered from amyotrophic lateral sclerosis (ALS) and used a pre-programmed vocabulary of 125,000 words, marking "a feasible path forward for using intracortical speech brain-computer interfaces to restore rapid communication to people with paralysis who can no longer speak". This is done by recording EEG readings at the same time as acoustic recordings of a subject speaking a set of words, then training a deep learning model with this information. This allows the model to correlate the brain's signals with the syllables or letters that the subject is trying to speak. Over a period of training, the model becomes able to identify the words a user is thinking of, correlate them to words, and output them to a third-party device. This technology would be helpful not only to patients with paralysis but to able-bodied users as well, making communication online more efficient.
A study by the IEEE noted that subjects with neurological conditions such as epilepsy, ALS, cerebral palsy, brainstem stroke, spinal cord injuries, muscular dystrophies or chronic peripheral neuropathies may be treatable with BCIs. Even if not completely cured by the devices, their quality of life would certainly be improved by assistive devices that could perform routine tasks using just their thoughts. In the specific case of epilepsy, BCIs would be able to easily identify spikes in brain activity, and therefore predict when seizures will happen more effectively, and even provide targeted stimulation to the affected regions of the brain in order to prevent the seizures entirely.
Finally, the use of BCIs is hypothesised to make it possible to control or ease mental health conditions. Research at the California Institute of Technology has theorised that psychiatric conditions such as bipolar disorder, OCD, depression and anxiety could be eased by BCIs. Neurofeedback therapy already exists, involving a form of non-invasive BCI to treat more pedestrian conditions such as migraines, ADHD, fatigue and burnout; however, this process could be revolutionised in the future with the use of more invasive BCIs within the brain, which can pick up brain signals much more clearly and so treat these conditions more effectively.
An amazing use case is with drones: in January 2025, a study was published from Jackson State University showing a user controlling a drone through a BCI. This feat was achieved using a non-invasive BCI called an EEG headset, which measures electroencephalogram waves from the brain. These readings were then processed and classified by a deep learning (machine learning) algorithm into four directions in which the drone should move. The US Department of Defense has already funded research on BCIs for hands-free control of drones.
The Federal Aviation Administration has also started investigating how to medically certify pilots who may one day use BCIs to control airplanes, all signalling how a telepathically controlled air force could be the future of modern warfare as we know it.
Video: https://www.youtube.com/watch?v=PR3KPwkeyQc&t=60s
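As a rough illustration of the final step of such a pipeline, the sketch below maps an already-decoded class index (one of four directions) onto a drone command. The drone object and its move() method are hypothetical placeholders, not a real drone SDK.

```python
# Sketch of the last step of a BCI drone pipeline: a classifier has already
# turned an EEG window into one of four direction labels, and we translate
# that label into a drone command. `FakeDrone` stands in for real hardware.
DIRECTIONS = {0: "forward", 1: "backward", 2: "left", 3: "right"}

def act_on_prediction(predicted_class, drone, step_cm=50):
    """Convert a decoded class index (0-3) into a single drone movement."""
    direction = DIRECTIONS.get(predicted_class)
    if direction is None:
        return                     # unknown class: safer to do nothing
    drone.move(direction, step_cm)

class FakeDrone:                   # stand-in so the sketch runs without hardware
    def move(self, direction, distance_cm):
        print(f"moving {direction} by {distance_cm} cm")

act_on_prediction(2, FakeDrone())  # prints: moving left by 50 cm
```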
A use which would impact our lives the most is the development of BCIs to access and control electronic devices in the home, such as smartphones, lights, virtual assistants and messaging apps. Companies such as Neuralink and Synchron have already managed to let their users control video games and social media applications with their thoughts. In studies by the Universities of Málaga and Valladolid, users have exercised control of social networking apps, email administration, virtual assistants and instant messaging services without physical motor skills. Dimming the lights or changing the channel on a TV are examples of how BCIs could be adapted in the home. Similarly, another use in our daily lives would be wearable headsets with non-invasive BCI technology built in. For example, the company Neurable has built a pair of headphones with 12 EEG sensors that detect brain activity and feed it to a mobile app, showing the user how productive they have been and helping them avoid burnout and fatigue. The picture shows the Neurable headset's readings of a subject's brain activity, allowing you to see when you are and are not productive during a period of work.
There are some significant issues which need to be addressed and resolved in order for BCIs to progress and become a leading technology in our future lives.
Some companies, such as Elon Musk's Neuralink, have tested their invasive BCIs on real human subjects. Noland Arbaugh was their first human patient, having been diagnosed with quadriplegia after a diving accident. Within a month of the surgery to implant the chip, 85% of the threads implanted in his brain had retracted and become unresponsive. This made the device much slower, and at times unresponsive, leaving it less effective at completing tasks.
Another possible issue with these devices is how dangerous the surgical procedures can be, specifically with invasive BCIs. Because they are implanted into brain tissue, invasive BCIs can damage nerve cells and blood vessels, increasing the risk of infection. Additionally, the body's natural defence system may reject the implant, treating it as a foreign entity (a biocompatibility concern).
Another safety concern with invasive BCIs is the possible formation of scar tissue after surgery, a consequence that may gradually degrade the quality of the acquired brain signals the BCI needs to function effectively.
With non-invasive BCIs, the most common issue seen in tests is poor functionality due to weak detected signals and a low signal-to-noise ratio. This is because the signals from neurons in the brain become much weaker the further the EEG detectors are from the brain, and they get mixed up with signals from unwanted parts of the brain (noise). As a result, non-invasive BCIs may not work as well with external devices and may not produce the effect the user intended.
Finally, BCIs require heavy regulatory approval from the FDA and other organisations, given the risk and unfamiliarity of this technology. In spite of this, the FDA did approve Neuralink for human testing in May 2023. However, as developments become more advanced, the restrictions imposed will inevitably become more of an issue as questions about ethics and privacy are raised.
These questions have led to the creation of neuroethics: a growing field which looks at the ethical, social and legal implications of neuroscience. With the advent of BCIs, neuroethics is beginning to focus on subjects such as cognitive liberty (freedom of thought, ensuring that corporations or powerful individuals cannot influence people's minds) and mental privacy and neurodata (ensuring that sensitive data from within an individual's mind is kept private). As BCIs grow, much like AI, we will need to address the ethical concerns of how they may be misused, as this could be an extremely dangerous power to hold over someone.
The future looks bright for BCIs: astounding developments are being made in our time, and the sector looks set to become increasingly advanced in the years ahead.
Companies such as Neuralink and Synchron have connected brain-computer interfaces with AI models from the likes of Nvidia and OpenAI. Deep learning models are used with BCIs for efficient pattern recognition in EEG waves, interpreting what each pattern means. These models train themselves over time, learning to find patterns and recognise them the next time they arise, making BCIs faster and better. With thought-to-text, AI models can enhance BCIs so that they predict words based on patterns in the user's previous activity, much like predictive text on keyboards, but with your thoughts instead.
Synchron is developing technology which may bypass the dangers of the open brain surgery needed to implant invasive BCIs. The device can be implanted through an endovascular procedure, in which the BCI is fed through the jugular vein in the neck via stents and released at the site in the brain where it is required to work (the motor cortex). Over a 90-day period after insertion, cells in the vessel wall grow around the 'stentrode' (the BCI), integrating the device into the body and reducing the main risk of blood clots around the device. So far, Synchron has tested this on 10 patients and met the required safety guidelines from the FDA. This exciting technology could make BCIs much more accessible and safer in the near future.
We have already seen examples of BCI technology integrated with day-to-day devices such as headphones and phones. This ecosystem will only become more extensive as the technology becomes more integrated with our daily lives: we could see AR glasses like Meta's or VR headsets like the Apple Vision Pro integrated with BCI technology. It sounds like a far-off science fiction world, but it may become reality sooner than you would think. (Video: BCI using the Apple Vision Pro, powered by Nvidia.)
Finally, BCIs will give us a deeper insight into how the brain functions and is structured, something we still understand poorly. The deep learning algorithms used to decode brain signals learn rapidly and improve themselves on their own, increasing their knowledge of the brain and its functions. Using these algorithms, scientists can also gain a deeper understanding of both the structure of the brain and psychiatric conditions.
Brooke Becher. Brain-Computer Interfaces (BCI), Explained. Built In, 7th May 2025. https://builtin.com/hardware/brain-computer-interface-bci
Rachel Tompa, PhD. Why is the human brain so difficult to understand? We asked 4 neuroscientists. Allen Institute, 21st April 2022. https://alleninstitute.org/news/why-is-the-human-brain-so-difficult-to-understand-we-asked-4-neuroscientists/
A Year of Telepathy. Neuralink, 5th February 2025. https://neuralink.com/blog/a-year-of-telepathy/
Tamara Bhandari. Stroke-recovery device using brain-computer interface receives FDA market authorization. WashU Medicine, 27th April 2021. https://medicine.washu.edu/news/stroke-recovery-device-using-brain-computer-interface-receives-fda-market-authorization/
Francisco Velasco-Álvarez. Brain–Computer Interface (BCI) Control of a Virtual Assistant in a Smartphone to Manage Messaging Applications. Departamento de Tecnología Electrónica, Universidad de Málaga, 26th May 2021. https://www.mdpi.com/1424-8220/21/11/3716
Víctor Martínez-Cagigal, Eduardo Santamaría-Vázquez & Roberto Hornero. Controlling a Smartphone with Brain-Computer Interfaces: A Preliminary Study. University of Valladolid, June 2018. https://www.researchgate.net/publication/325803835
Joseph N. Mak & Jonathan R. Wolpaw. Clinical Applications of Brain-Computer Interfaces: Current State and Future Prospects. IEEE Rev Biomed Eng, PubMed Central, 3rd May 2010. https://pmc.ncbi.nlm.nih.gov/articles/PMC2862632/
Joshua I. Glaser, Ari S. Benjamin, Raeed H. Chowdhury, Matthew G. Perich, Lee E. Miller & Konrad P. Kording. Machine Learning for Neural Decoding. eNeuro, PubMed Central, 27th August 2020. https://pmc.ncbi.nlm.nih.gov/articles/PMC7470933/
Implanted Brain-Computer Interface (BCI) Devices for Patients with Paralysis or Amputation – Non-clinical Testing and Clinical Considerations: Guidance for Industry and Food and Drug Administration Staff. US Department of Health and Human Services, Food and Drug Administration, Center for Devices and Radiological Health, 20th May 2021. https://www.fda.gov/media/120362/download
Brittany Loggins. What Is Neurofeedback Therapy? Verywell Mind, 17th December 2024. https://www.verywellmind.com/neurofeedback-therapy-definition-techniques-and-efficacy-5193195
Mokhles M. Abdulghani, Arthur A. Harden & Khalid H. Abed. A Drone Flight Control Using Brain Computer Interface and Artificial Intelligence. International Conference on Computational Science and Computational Intelligence (CSCI), 2022. https://american-cse.org/csci2022-ieee/pdfs/CSCI2022-2lPzsUSRQukMlxf8K2x89I/202800a246/202800a246.pdf
Science & Tech Spotlight: Brain-Computer Interfaces. US Government Accountability Office, 8th September 2022. https://www.gao.gov/products/gao-22-106118
Alicia Howell-Munson, Walter T. Piper, Theresa Guarrera, David Stanley, Davide Valeriani, Michelle Lim & Ramses E. Alcaide. Detecting Focus States in Office Environment with Neurable EEG Headset. BCI Meeting 2023, Graz University of Technology Publishing House. https://davidevaleriani.it/pub/BCIMeeting2023.pdf
Rafeed Alkawadri. Brain–Computer Interface (BCI) Applications in Mapping of Epileptic Brain Networks Based on Intracranial EEG: An Update. Frontiers in Neuroscience, 27th March 2019. https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2019.00191/full
Baraka Maiseli, Abdi T. Abdalla, Libe V. Massawe, Mercy Mbise, Khadija Mkocha, Nassor Ally Nassor, Moses Ismail, James Michael & Samwel Kimambo. Brain–computer interface: trend, challenges and threats. Brain Informatics, PubMed Central, 4th August 2023. https://pmc.ncbi.nlm.nih.gov/articles/PMC10403483/
Dingguo Zhang. New Horizons 21 – Decoding Speech using Invasive Brain-Computer Interfaces based on Intracranial Brain Signals (dSPEECH). Department of Electronic & Electrical Engineering, Centre for Bioengineering & Biomedical Technologies (CBio), 31st December 2024. https://researchportal.bath.ac.uk/en/projects/new-horizons-21-decoding-speech-using-invasive-brain-computer-int
Brain-Computer Interface: No Open Brain Surgery Required. CNET, 24th September 2023. https://www.cnet.com/videos/brain-computer-interface-no-open-brain-surgery-required/
It was 12 years ago that the concept of Hyperloop was published by Elon Musk. The rush for development which followed brought a surge of excitement, with the thought of travelling at over 600 mph an irresistible prospect. Yet since then it has found little success beyond passenger testing, bringing about the rise and fall of Hyperloop One and speeds which struggle to exceed 400 mph.[i] In the public eye, Hyperloop is an airplane brought down to the ground. In reality, it is an airplane struggling to take off.
Keeping the tube at near-vacuum requires pumps to vent out air and a sealant coating to maintain the low pressure. Of course, the pressure differential between the outside air and the inside of the tube will exert a huge force on the tube, given by the formula F = (P₁ − P₂) × A.
Here, P₁ = atmospheric pressure (101,300 Pa) and P₂ = pressure inside the tube (100 Pa). For each metre of tube length, the area over which this pressure difference acts is the circumference of the tube; with Musk's specification suggesting the tube's diameter would be 3.5 m,[iv] A = π × 3.5 m per metre of tube. Plugging these values back into the force equation:
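The worked result did not survive the page layout; a reconstruction, assuming the force is taken per metre of tube length:

```latex
F = (P_1 - P_2)\,\pi d
  \approx (101\,300 - 100)\,\text{Pa} \times \pi \times 3.5\,\text{m}
  \approx 1.1 \times 10^{6}\,\text{N per metre of tube}
```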
It is clear the tensile strength of the tube would have to be immense, bringing into question whether Musk's initial intention of a steel tube would be possible, not least without concrete reinforcement.
There are three main components which go into the construction of Hyperloop: vacuum tubes, magnetic levitation (maglev) and linear induction motors. As with any high-speed system, the key to success involves minimising energy loss, and for vehicles this energy loss is a result of air resistance and friction. And of course, there is no better way to eliminate air resistance than simply removing the air. Hypothetically. The original publication of Musk's Hyperloop Alpha paper suggested an internal pressure of 100 Pa inside the tube, through which small pods could travel, each one accommodating 28 people.[ii] With the tube being almost a vacuum, air resistance could be decreased by almost 90%.[iii]
What doesn't seem a heedless interpretation of science fiction, however, is the use of maglev technology. Maglev is well established at high speeds, with a flawless safety record of zero fatalities in 60 years.[v] Electromagnetic suspension (EMS) has been used on Japan's SCMaglev, relying on the simple concept of the repulsion of like poles: the underside of the train has the same pole as the track from which it is levitating. This enables the complete removal of friction between the track and the train. In fact, recent developments have led to the rise of electrodynamic suspension (EDS), whereby superconducting magnets induce currents within the track to maintain repulsion and keep the pod floating.[vi] The third key component is the linear induction motor, which provides the necessary propulsion. Unlike standard motors, which spin a rotor to generate motion, linear motors use an electromagnetic field to propel the pods forwards in a straight line, relying on tilt angle for steering much like an airplane.[vii] It is through the careful control of these magnetic forces that smooth acceleration and deceleration can be achieved, ultimately removing the need for wheels, brake pads and fuel-burning engines.
It is through these theoretical mediums that the promise of Hyperloop has been conveyed, with only recent developments by the China Aerospace Science and Industry Corporation (CASIC) showing practical promise.[viii] The engineering challenges associated with each element cannot be understated; yet, like many prospective commercial projects, it remains largely gatekept from the public, giving rise to premature excitement and unjustified predictions.
Hyperloop’s failure lies in the structural challenges involved in building a safe and effective system. Whilst hypothetically Hyperloop appears to be a sound idea, in practice these challenges have proved its undoing. Below are a few of the problems the design faces, with some basic proposed solutions; however, many of these procedures are either too costly or have not yet been developed. The majority of these solutions are hypothetical, reflecting why Hyperloop has been unable to succeed as a project.
Maintaining a Low-Pressure Vacuum – a low-pressure vacuum is required in order to avoid air resistance and friction, as mentioned above; however, in a long tube it is extremely difficult to create a near-vacuum environment. As it stands, the longer the tube, the harder it is to maintain that environment without regular leaks, meaning that long journeys will be very difficult.[ix] In order to contain these leaks, engineers will need to explore advanced sealing technologies and employ redundant vacuum pumps to maintain stability.
Safety at extreme speeds – due to Hyperloop's proposed speeds of over 600 mph, any small obstacle can pose a major issue, and any risk is amplified by the pods travelling extremely close to the ground. One major safety concern is a tube breach: in that event, air would rush in at supersonic velocities, which can lead to dangerous pressure waves.[x] Engineers will need to install emergency pressure-equalisation systems to prevent these dangerous waves, and use reinforced tubes to prevent structural failures so that the emergency systems are rarely needed.
Cost and Infrastructure challenges – even if safety regulations were met and effective designs developed, Hyperloop would still face large economic challenges. Billions would need to be spent, as hundreds of kilometres of vacuum-sealed tunnels would need to be built. This would be significantly more expensive than other forms of land travel such as high-speed rail. Some companies have proposed that Hyperloop systems could be built underground or elevated to reduce land acquisition costs; however, the long and expensive legal processes involved could hinder the development of an already stalling idea.
Emergency Evacuation Systems – if there is a safety breach or another issue inside the Hyperloop pod or tube, there will need to be an escape mechanism for passengers. However, these are significantly harder to implement, as passengers sit in sealed pods inside sealed tubes and would be trapped if there were a loss of power or depressurisation. Whilst possible safety measures include emergency airlocks, escape pods and side exits at regular intervals, the low-pressure environment, which is already difficult to create, means there will be considerable challenges in implementing these without significantly ramping up costs.
Passenger comfort due to acceleration – at high speeds, acceleration and deceleration can cause major discomfort if not carefully controlled. Sharp turns and sudden stops can exert several g of force on passengers due to fast changes in motion. Passengers will experience high centripetal forces, expressed by F = mv²/r,
where m = mass of the passenger, v = speed of the Hyperloop pod and r = radius of the turn.[xi] This suggests that if Hyperloop moved beyond supersonic speed, passengers could experience over 5 g,[xii] similar to a fighter jet pilot, which can lead to a loss of consciousness. Routes will therefore need to be designed with long, gradual curves (at around 600 mph, keeping lateral acceleration to roughly half a g would require curve radii on the order of 15 km) alongside smooth deceleration to ensure a comfortable ride.
Whilst there may be solutions to the structural and engineering problems Hyperloop faces, these will require large amounts of innovation to develop successfully and vast costs to implement. The solutions have already been considered, but they need extensive research and funding and therefore cannot yet be carried out. So, despite Hyperloop's promise to rejuvenate travel by land, in reality it will be extremely difficult to develop in the coming decade and will not be able to take flight without significant scientific breakthroughs in engineering.
i. https://www.independent.co.uk/tech/hardt-hyperloop-elon-musk-europe-b2609728.html
ii. https://docs.hardt.global/what-is-hyperloop/hyperloop-product-specifications
iii. https://eurotube.org/hyperloop/
iv. https://www.hyperloopdesign.net/vacuum
v. https://www.jrailpass.com/blog/maglev-bullet-train
vi. https://www.techrxiv.org/users/660107/articles/850128-a-reality-check-on-maglev-technology-for-the-hyperloop-transportation-system-status-update-after-a-decade-of-development
vii. https://www.linearmotiontips.com/why-use-a-linear-induction-motor-and-will-they-come-to-drive-a-hyperloop-one-day/
viii. Faster than a plane: Hyperloop race speeds up as China tests 'flying train' system | RailTech.com
ix. Realities and uncertainties of the hyperloop | PierNext
x. Hyperloop faces challenges as it attempts to get back on track | ASCE
xi. Centripetal Force
xii. How does Virgin Hyperloop work? | Virgin
The industry surrounding Artificial Intelligence has boomed over the last decade, reaching a revenue of $94.41 billion in 2024 (Howarth, 2025). The efficiency and availability of AI make it extremely useful in many sectors across the global market, from online chatbots to patient diagnosis. However, this heavy reliance on AI can have severe negative impacts if it is not used with caution, such as threatening data privacy and discriminating against minority groups.
Several courts across America use an Artificial Intelligence algorithm called COMPAS to estimate the likelihood of criminals becoming repeat offenders, using information such as criminal records and demographic data, and the algorithm's output plays a large role in deciding a criminal's punishment. However, in 2016 several reports of inaccuracies in the system were identified, resulting in unjust decisions biased against minority groups. For example, an investigation by Julia Angwin for ProPublica (Angwin & Larson, 2016) took data from over 7,000 cases and found that African Americans were twice as likely to be falsely labelled as high-risk as White people. Similar trends have been identified with gender: Dr Melissa Hamilton found that the COMPAS assessment was half as accurate for women as for men, with 25% of women classified as 'high risk' re-offending, compared to 52% of high-risk men.
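The disparity ProPublica described is essentially a difference in false positive rates between groups. The snippet below shows the calculation with invented counts purely for illustration; they are not ProPublica's actual figures.

```python
# Compare how often each group is wrongly flagged "high risk" (a false
# positive). The counts below are invented purely to show the calculation.
def false_positive_rate(flagged_but_did_not_reoffend, did_not_reoffend_total):
    return flagged_but_did_not_reoffend / did_not_reoffend_total

group_a = false_positive_rate(450, 1000)   # hypothetical group A: 45%
group_b = false_positive_rate(230, 1000)   # hypothetical group B: 23%
print(f"group A is {group_a / group_b:.1f}x more likely to be falsely flagged")
```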
Figure 1 – Comparing false test results in African-American and Caucasian communities (Angwin & Larson, 2016)
There are a variety of reasons why AI may demonstrate bias, most notably skewed training data. Artificial Intelligence algorithms are built using large amounts of training data, with which they can categorise information (unsupervised learning) or associate characteristics of the data with given answers (supervised learning). However, the data sets used to train AIs may under-represent reality, portraying only one demographic group. This means the AI has a poorer understanding of other groups and produces more inaccurate results. For example, Amazon used an AI-based hiring tool to make its recruitment process more efficient, collecting résumés over a 10-year period to use as training data. However, a large majority of these came from male applicants, making the AI favour male-oriented language and causing it to discard female candidates. Another example is Google, which teaches its AI with data from thousands of published books. As AIs base their information on human-generated data, the bias of humans is replicated in the decision-making of AI. Figure 2 shows the consequences of this, with a case from 2016 where a Google image search for Black people returned pictures of criminals, while the equivalent search for White people returned results of people smiling.
Figure 2 – Left: Google image search of 'three black teenagers'. Right: Google image search of 'three white teenagers'. (Sini, 2016)
Another feature of AI algorithms which makes them susceptible to bias is a mechanism known as 'positive feedback'. This means they re-implement the feedback they receive from previous outcomes to learn and strengthen their source of knowledge. However, as the morality of AI is limited, they can just as easily reinforce stereotypes they have already formed. Since 2018, an AI software called PredPol has been used throughout America to help police forces predict the locations of crime. The algorithm initially identified neighbourhoods containing Black and Latino people as having the highest chance of crime, based on the previous crime-rate records in its training data. As a result, more arrests would happen in those areas as police became more focused on them. This information is then fed back into the system, reinforcing its pre-made ideas about those areas being more dangerous.
As a result of the AI's attention to those areas, it is less likely to identify crimes which occur outside them, despite such crimes being broadly equal in number and severity. This loop greatly magnifies any initial bias in the system and plays a significant role in the targeting of minority groups.
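A toy simulation makes the feedback loop easier to see. Both neighbourhoods below have identical real crime, but patrols follow the recorded data and crime is mostly recorded where police are looking; every number is invented purely to show the mechanism.

```python
# Toy model of the feedback loop: identical real crime in two areas, but
# patrols go to wherever the most crime has been *recorded* so far, and crime
# is mostly recorded where police are looking. All numbers are invented.
recorded = {"A": 11, "B": 10}        # a tiny initial difference in the data
TRUE_CRIMES_PER_WEEK = 100           # identical in both neighbourhoods

for week in range(10):
    hotspot = max(recorded, key=recorded.get)     # the model "predicts" this area
    for area in recorded:
        # 80% of patrols go to the predicted hotspot, 20% to the other area,
        # and only half of real crime is ever detectable at all
        patrol_share = 0.8 if area == hotspot else 0.2
        recorded[area] += TRUE_CRIMES_PER_WEEK * 0.5 * patrol_share

print(recorded)
# After 10 weeks area A's recorded crime dwarfs area B's, even though the
# underlying crime was identical: the model's output fed its own input.
```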
The Markup and Gizmodo analysed over 5 million predictions made by PredPol and identified that, in Indianapolis, Black and Latino neighbourhoods were targeted up to 400% more than neighbourhoods containing mostly White people. For example, two neighbourhoods in New Jersey, only a mile apart, received vastly different numbers of crime predictions, simply because of the percentage of White residents in those areas, as shown in Figure 3 (The Markup & Gizmodo, 2021).
The causes of bias in AI discussed so far have largely been unintentional, simply a result of the way AI is developed. However, there have been many scenarios where data in AI has been purposefully skewed and algorithms maliciously implemented to favour certain groups of people. A popular example was Microsoft's chatbot Tay, which had to be shut down within just 24 hours of its release. It was designed to imitate human behaviour and conversation by following similar speech patterns to users on Twitter. However, users on the platform started to repeatedly and deliberately communicate with the bot using offensive language, reinforcing this behaviour in the model. As a result, it began repeating these phrases, making racist and discriminatory comments.
On a small scale, bias as a concept is essential to the proper functioning of AI and doesn't pose any issues, as it allows these algorithms to make educated guesses based on patterns they have recognised. However, when the bias is against certain groups of people, it becomes a much greater issue. In particular, it may influence court cases and crime detection, which is an unacceptable ethical flaw in society. As well as this, it may make racist and untrue claims, which can spread damaging misinformation. As humans are so reliant on AI, what solutions are there to deal with these consequences effectively?
AI software typically operates as a 'black box', which refers to the hidden way in which AIs make decisions. Due to the complex structure of AI algorithms and their attempt to imitate the human brain, it is difficult for humans to identify flaws in the logic of an AI model and engineer bias out of the system. As a result, many AI models have moved to a more 'transparent' method of implementation (Figure 4), publishing their source code online and providing clear documentation. This allows for better accountability and enables users to identify and prevent biases in AI applications, limiting any discrimination that may otherwise arise. The importance of transparency is illustrated by the documentary film industry's Ethical AI Guidelines of September 2024 (The Guardian, 2024).
Governments also implement ethical AI guidelines, with the Vatican releasing a set of principles in February 2025 (The Independent, 2025). These highlight that AI shouldn't replace humans altogether, but that the two should work cooperatively. This is an effective solution, as it allows the efficiency and reliability of AI to assist decision-making while allowing human intellect and morality to correct the mistakes and misjudgements that AIs make due to unintentional biases.
While there are many strategies people can implement to reduce the issues of bias in AI, it can never be truly erased and some prejudice will always exist. This is due to the inherent nature of AI: relying on human data which may be distorted, using feedback learning techniques that reinforce pre-existing stereotypes, and containing complex processing algorithms that make it very difficult to implement moral neutrality. As a result, many people are advocating for the removal of AI from public service positions, such as law enforcement and healthcare, and even for the banning of AI, due to the violations of human rights and morality that come with it. The NAACP, an organisation combatting racism, is greatly concerned about the evidence of AI bias in predictive policing models and is requesting that people in authority regulate its usage (NAACP, 2024).
In conclusion, it is evident how the fast-growing nature of AI has come with the significant issue of poorly implemented algorithms reinforcing biases. This has been supported by various case studies which show the significant impact this has on targeted minority groups. While some argue for the banning of AI, I believe a hybrid solution of AI-operated tasks and human intervention is ideal. Artificial Intelligence is vital to the functioning of modern society and, in sectors such as the police force, it can reduce workload by up to 40%. With humans identifying biases, the loss of accuracy from AI can be avoided, and with the development of transparent algorithms and binding government legislation, many of the issues with AI can be mitigated. Furthermore, with the time saved on crime prediction or determining the likelihood of recidivism, humans in public roles can devote more attention to their duties, such as preventing more crimes or evaluating more cases.
Angwin, J. & Larson, J., 2016. Machine Bias. [Online] Available at: https://www.ibm.com/think/topics/shedding-light-on-ai-bias-with-real-world-examples [Accessed 10 February 2025]
AP News, 2025. Vatican City offers AI guidelines. [Online] Available at: https://apnews.com/article/vatican-artificial-intelligence-ethics-pope-risks-warnings-231b4b7b8ed6a195ec920f1f362c15e2 [Accessed 10 February 2025]
Ashe, J., 2020. Algorithmic Bias and Fairness. [Online] Available at: https://www.youtube.com/watch?v=gV0_raKR2UQ [Accessed 10 February 2025]
Grasso, C., 2024. Black Box vs Explainable AI. [Online] Available at: https://blog.dataiku.com/black-box-vs-explainable-ai [Accessed 10 February 2025]
Howarth, J., 2025. Artificial Intelligence Statistics. [Online] Available at: https://explodingtopics.com/blog/ai-statistics [Accessed 10 February 2025]
NAACP, 2024. Artificial Intelligence in Predictive Policing. [Online] Available at: https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief [Accessed 10 February 2025]
Sini, R., 2016. 'Three black teenagers' Google search. [Online] Available at: https://www.bbc.co.uk/news/world-us-canada-36487495 [Accessed 10 February 2025]
The Guardian, 2024. Documentary producers release AI guidelines. [Online] Available at: https://www.theguardian.com/film/2024/sep/13/documentary-ai-guidelines [Accessed 10 February 2025]
The Independent, 2025. Pope Francis issues AI warning. [Online] Available at: https://www.independent.co.uk/news/world/europe/ai-vatican-pope-francis-deepseek-b2688259.html [Accessed 10 February 2025]
The Markup & Gizmodo, 2021. Crime Prediction Software. [Online] Available at: https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them [Accessed 10 February 2025]
PROGRAMMING >
Polymorphism is defined as the "provision of a single interface for entities of different types" [1] and is one of the four pillars of Object-Oriented Programming. It can be used in programming to allow objects of different classes to respond to a common function with different behaviour. For example, the len() function in Python can be used to find the number of elements in an array or the number of characters in a string. It is therefore polymorphic, because len() behaves differently depending on the type of object passed into it.
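A minimal Python illustration of this kind of polymorphism, including opting a custom class into the same len() interface:

```python
# len() is polymorphic: the same call works on any object that knows how to
# report its own length, and each type answers in its own way.
print(len([1, 2, 3]))          # 3  - number of elements in a list
print(len("polymorphism"))     # 12 - number of characters in a string

class Playlist:
    def __init__(self, songs):
        self.songs = songs
    def __len__(self):              # opt our own class into the same interface
        return len(self.songs)

print(len(Playlist(["a", "b"])))   # 2 - len() now works on Playlist too
```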
Machine Learning is everywhere in the news.
It is an algorithm which "trains" Artificial Intelligence to learn from past experiences and examples. It does this by continuously performing an action (or receiving training data) and recording which actions are more "successful" than others.
Subsequently, it gains insights from its previous outcomes, which means its future actions become more sophisticated and accurate as it builds on past successes and avoids making the same mistakes.
For example, a simulation by AI Warehouse taught an AI how to walk by controlling its limbs. Initially the AI struggled: in the first 100 attempts it could not even stand up and could only crawl, as in Figure 1 (before the rule was set to punish the AI for touching the ground).
Listing 1: OOP example
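The listing itself did not survive the page layout; below is a minimal sketch consistent with the description that follows. Only the class names come from the text; the speak() method is an assumption used to show the shared interface.

```python
# Sketch of what Listing 1 describes: a parent class Animal and three
# sub-classes that each respond to the same method in their own way.
class Animal:
    def speak(self):
        return "some generic animal noise"

class Dog(Animal):
    def speak(self):
        return "Woof"

class Cat(Animal):
    def speak(self):
        return "Meow"

class Cow(Animal):
    def speak(self):
        return "Moo"

for animal in (Dog(), Cat(), Cow()):
    print(animal.speak())          # same call, three different behaviours
```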
As shown in Listing 1, the code defines a series of classes.
The first class, Animal, is called the parent class. It defines the category that the sub-classes (Dog, Cat, Cow) lie in; the sub-classes are a subset of the parent class. In depth, OOP consists of four core concepts, as shown in Figure 1:
1. Abstraction
2. Inheritance
3. Encapsulation
4. Polymorphism
Abstraction refers to the removal of unnecessary information from an algorithm to simplify the program and hide its complexity. It can be either data abstraction, such as hiding how data is stored or structured, or process abstraction. Data abstraction can be used to simplify the use of complex data, protect data integrity or encapsulate data. Process abstraction refers to hiding the internal operations of a process and focusing only on its function or syntax.
Inheritance is defined as the "mechanism that allows a class to inherit properties and behaviours from another class". This means that code can be reused often, as well as establishing a hierarchy which makes code easier to manage and represents a clear relationship between classes. As demonstrated in Listing 1, the parent class is defined as Animal, whilst the sub-classes of Animal are indicated by the parent class being contained inside the brackets of the classes Dog, Cat and Cow.
Listing 3: OOP in cars
Example: Statistical Modelling
Listing 2: OOP in cars
This is an example of inheritance. "self" represents the instance of the class being used. Notice how the parent class is Car and the sub-classes are GasCar and ElectricCar, which fall under the "car" category. As a GasCar is part of the Car class, the brackets specify which parent class GasCar inherits from, so Car is placed inside the brackets. The parent class defines the default behaviour of these functions in the context of a generic car.
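The car listings themselves were lost in the layout; below is a minimal sketch consistent with the description above, in which the refuel() method and its messages are assumptions:

```python
# Sketch of the Car example described in the text: GasCar and ElectricCar
# inherit from Car and override its default behaviour.
class Car:
    def __init__(self, model):
        self.model = model
    def refuel(self):
        return f"{self.model}: refuelling a generic car"

class GasCar(Car):
    def refuel(self):
        return f"{self.model}: filling the tank with petrol"

class ElectricCar(Car):
    def refuel(self):
        return f"{self.model}: charging the battery"

print(GasCar("Fiesta").refuel())
print(ElectricCar("Model 3").refuel())
```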
One core part of a machine learning algorithm is statistical modelling. The program observes previous patterns in the "training data", giving it a sense of the relationship between variables so it can make predictions based on these patterns. Some data may suit an alternative method depending on its advantages over others. One of the simplest statistical models is the Linear Regression Model, where data is fitted using a straight line (equation y = mx + c), hence assuming that the data follows a linear relationship. On the other hand, other methods such as the Random Forest Model combine multiple decision trees built from different subsets of the data before calculating an average of all predictions, which makes them suitable for non-linear data. In this example, polymorphism would be used to allow different model types to be used interchangeably through a common interface. This design pattern promotes code reusability, modularity and extensibility, making it easier to experiment with different algorithms.
In a regression Machine Learning model, an abstract class may be used to define the functions needed to operate on and manipulate the data. An abstract class is a blueprint for other classes: it cannot be instantiated itself, but other classes can inherit and extend its structure. When working with multiple regression methods, the code below ensures that functions such as fit() and predict() can be called on every method, whilst each serves a purpose tailored to the method being used.
This is because the method to fit() a Linear Regression model differs from fitting a Random Forest model. Therefore, this ensures the interface is easier to program and manage, as well as making models easier to use interchangeably.
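The regression listing was not preserved either; the sketch below shows the described pattern, assuming scikit-learn's LinearRegression and RandomForestRegressor as the two underlying methods.

```python
# Abstract base class fixing a common fit()/predict() interface, so different
# regression models can be swapped without changing the surrounding code.
from abc import ABC, abstractmethod
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

class Model(ABC):
    @abstractmethod
    def fit(self, X, y): ...
    @abstractmethod
    def predict(self, X): ...

class LinearModel(Model):
    def __init__(self):
        self._impl = LinearRegression()
    def fit(self, X, y):
        self._impl.fit(X, y)
        return self
    def predict(self, X):
        return self._impl.predict(X)

class ForestModel(Model):
    def __init__(self):
        self._impl = RandomForestRegressor(n_estimators=50, random_state=0)
    def fit(self, X, y):
        self._impl.fit(X, y)
        return self
    def predict(self, X):
        return self._impl.predict(X)

X = [[1], [2], [3], [4]]
y = [2.1, 3.9, 6.2, 8.1]
for model in (LinearModel(), ForestModel()):          # polymorphism in action:
    print(model.fit(X, y).predict([[2.5]]))           # same calls, different models
```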
Polymorphism and OOP are also used in neural networks. A neural network is a computational model inspired by the human brain, used in artificial intelligence to process data and make predictions. Originally proposed by Donald Hebb in 1949, the idea suggested a machine learning technique reminiscent of how the human brain processes data. A neural network does this by having three distinct layers:
Input Layer
Hidden Layer
Output Layer
For a neural network aiming to identify images, the input layer would contain the characteristics, or raw data, of the image and pass it on to the next layer. Each node would represent a pixel's raw data, such as its RGB value (a pixel is the smallest identifiable component in an image). The hidden layer would process the input using weights and biases, then perform an activation function. The activation function allows the model to look at complex patterns in the data, such as colour gradients, contrasts and outlines, which help identify the picture whilst taking 'rules' into account. For example, if it has detected a living mammal, it may identify the image as a face because contextual data is stored that helps identify the animal, such as the texture of a zebra or the facial proportions of a human. The output layer would then convert the final prediction or result into a label.
NeuralNetwork() is a base class. This acts as the template for all classes to inherit from, i.e. it provides common attributes and functions for all child classes. Polymorphism would then be used so that different layers share the same interface but perform different operations. For example, for the think(self, inputs) function: when acting on the input layer, it would receive the data and pass it on, possibly with data normalisation.
In the hidden layer, the function would perform the weighted sums and an activation function such as the sigmoid. The output layer would then convert the result into the final prediction or label.
Listing 4: Base class for a single-layer neural network
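The listing did not survive extraction; below is a minimal single-layer sketch in the spirit of the description (and of the simple-neural-network repository cited below). The layer size, training loop and example data are assumptions.

```python
# Single-layer network base class: think() does the forward pass, and
# subclasses for input/hidden/output layers could override it while keeping
# the same interface. Weights, sizes and data here are assumptions.
import numpy as np

class NeuralNetwork:
    def __init__(self, n_inputs=3):
        rng = np.random.default_rng(1)
        self.weights = 2 * rng.random((n_inputs, 1)) - 1   # start in [-1, 1)

    def _sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def think(self, inputs):
        """Forward pass: weighted sum of the inputs through the sigmoid."""
        return self._sigmoid(np.dot(inputs, self.weights))

    def train(self, inputs, targets, iterations=10_000):
        for _ in range(iterations):
            output = self.think(inputs)
            error = targets - output
            # gradient-style update scaled by the sigmoid's slope
            self.weights += np.dot(inputs.T, error * output * (1 - output))

# Tiny training set: the target simply copies the first input column.
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

net = NeuralNetwork()
net.train(X, y)
print(net.think(np.array([1, 0, 0])))   # close to 1
```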
[1] Cowan, D. Polymorphism.
[2] AI Warehouse. AI Learns to Walk (deep reinforcement learning). 2023.
[3] Vukovic, S. What is object-oriented programming? Explain OOP in depth. 2024.
[4] Exploring Inheritance in Object-Oriented Programming. 2024.
[5] Gillis, A.S. What is Object Oriented Programming? TechTarget, 2023.
[6] The Power of Polymorphism. 2023.
[7] Neural network (machine learning).
[8] Ali, M. Introduction to Activation Functions in Neural Networks.
[9] Bostoen, J. simple-neural-network. GitHub.
[10] Thorben. Abstraction in Programming: A Beginner's Guide. 2023.
habsboysschool habsgirlsschool officeboys@habselstree.org.uk officegirls@habselstree.org.uk 020 8266 1700 020 8266 2300
www.habselstree.org.uk Computer Science Society compscisoc@habselstree.org.uk
Haberdashers’ Elstree Schools Butterfly Lane, Elstree, Hertfordshire, WD6 3AF marketing@habselstree.org.uk