
Computer Science Society Print() Statement N25



PRINT() STATEMENT

Presented by the Dubai College Computer Science Society

Lead Editor: Anna Z
Heads of DC CSS: Ali-Mansur V, Harihar R

TABLE OF CONTENTS

An Introduction to Pine

How does a Large Language Model Work?............................................................................Harihar

The Leap to Agentic Large Language Models

What was the First Programming Language?

Why Programming Languages Die.....................................................................................................Eklavya

Esoteric Languages

Algorithms

Why you can’t stop scrolling.........................................................................................................................Anna

The Rise of Quantum Computing

Neuromorphic Computing..............................................................................................................................Riya

AI in Face and Voice Recognition

From AlphaGo to ChatGPT

The Illusion of AI Objectivity...........................................................................................................................Geet

TERM 1 OVERVIEW

Heads of the Computer Science Society

Term 1 for CS Society has been nothing short of exceptional. A multitude of projects, speaker sessions and hackathons have taken place through the society’s weekly sessions, giving students the opportunity to truly involve themselves with Computer Science beyond what they learn within the curriculum.

Firstly, the DC CarShare initiative, in partnership with the Sustainability Club, has seen significant progress. Students have gotten involved with hands-on programming, brainstorming ideas and turning the app vision into reality. This has all culminated in a presentation to members of the SLT to discuss the viability and next steps for integrating the app within the school. Thank you to all the students who have been involved with the app; we hope to launch it soon!

Additionally, both the internal and external speaker list for this half-term’s sessions has been incredible. Firstly, the society welcomed the 2026 InnovAIte team to present on the upcoming 3rd edition of the UAE-wide AI Hackathon.

Next, Myra Sobti (Year 9) was invited to present on her robotic arm project, where she discussed her work in creating a robotic arm that could be controlled through the use of a glove, mimicking the actions of the wearer. This provided great insight for students into how robotics knowledge can be used to create real and meaningful projects with direct applications in the wider world.

In one of the recent sessions, the society had the privilege of hosting Tucker Highfield, who discussed the applications of the Bittensor protocol in the AI startup landscape. This seminar provided great insights for students interested in fintech, AI, and crypto, allowing them to understand how important these concepts are in a real working environment. This was the first guest speaker the society has hosted in over two years, so it was a major milestone to reintroduce this unique experience.

Lastly, the Society also hosted two hackathons this term. The first was a huge success, with over 8 teams of students creating their own projects under the prompt “Tech for Social Good”. Projects ranged from elderly support forums to frictionless health advice using AI. Congratulations to the winners:

3rd Place: Eklavya Tomar (Y12)
2nd Place: Myra Sobti, Anya Dewan, Guoguo Li (Y9)
1st Place: Annika Baberwal (Y9)

Our second hackathon, a classic HackerRank challenge, took place in our last session of the term. Students competed to solve tough coding problems under timed conditions in an intense finale. Weekly Code of the Week (COTW) challenges were released to hone students’ programming skills throughout this half-term, and events like this are a great opportunity for dedicated and skilled students to receive their due recognition.

Congratulations to the winners:

3rd Place: Myra Sobti

2nd Place: Youxi Isabella Pan

1st Place: Rishabh Khaund

We thank everyone who has contributed and been involved with the Society this term, and we hope to continue this momentum into Terms 2 and 3.

AN INTRODUCTION TO PINE

Since the first machine algorithm, created by Ada Lovelace in 1843, programming languages have evolved rapidly. Each language created since has its own use case, with its own strengths and weaknesses. However, there is one programming language that you might never have heard of before – Pine Script. In short, Pine Script is the language of TradingView, a platform where millions of traders and investors analyze charts. It has been used to create a variety of trading tools by traders all over the world. This essay will delve into the possibilities available in a language that is known for being accessible and easy to learn, yet incredibly difficult to master.

Pine Script Uses

Custom Indicators:

Coding your own technical indicators – for example, a customized moving average or a unique oscillator – is the most popular use of Pine. Technical indicators, in a nutshell, are tools that help traders get a better understanding of the market by deriving information from charts. Traders around the world use Pine Script to code these algorithms, which help inform them when executing trades. Algorithms can be based on simple logic or complex mathematical formulae, but all are coded using Pine. These indicators can appear as extra visuals on the trader’s chart – in the form of a direct overlay on market price – or in a separate pane, such as a histogram. The way an indicator can look is limitless and ultimately depends on a user’s skill level, experience and, most importantly, creativity when using Pine.

Trading Strategies:

Beyond just displaying information, traders can use Pine Script to code strategies that act in real time. By defining buying and selling conditions, traders can create strategies that simulate buy and sell orders according to their rules. What this means is that traders can test out different scenarios of buying and selling different stocks, commodities and indices, all without actually using their own money. This allows traders to backtest different conditions for strategies without any manual requirements, with detailed performance metrics that give investors insight into the effectiveness of a strategy.

Alerts and Signals:

Additionally, using Pine, traders around the world can save time by creating alerts based on price movement or technical conditions, signaling to them that a potential entry may be near. This allows traders to only get on the chart when the market conditions are right, ensuring that they can trade during the best times according to their strategies. Alerts can also provide traders with entry setups without any need for human analysis, reducing the bias that many traders experience in the market. This allows traders to watch many different charts at once and easily know which ones have the best conditions for trading.

As a language, Pine is very similar to Python in its syntax. Below is a simple Pine Script algorithm which creates a simple moving average, written in TradingView’s very own Pine editor.
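A minimal sketch of such an indicator in Pine Script v5, assuming a 50-bar length (the exact parameters in the original screenshot may differ):

```pine
//@version=5
indicator("Simple Moving Average", overlay=true)   // script type, name, overlay on chart
length = 50                                        // moving-average length
smaValue = ta.sma(close, length)                   // built-in simple moving average
plot(smaValue)                                     // draw the result on the chart
```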

The code above shows the core structure of any Pine Script indicator. The most important part of the code is line 2, which defines what type of script is being created – in this case an indicator – its name, and some other optional key factors, such as whether it is to be overlaid on the chart. Line 3 is simply used to create a variable, which in this case is the length of the moving average. Line 4 uses the function ‘ta.sma’ to create a simple moving average, and line 5 uses the plot function to actually plot the moving average onto the chart.

When executing the above code, the difference in the two NASDAQ 30-second charts can be seen: the chart on the left is before execution, and the chart on the right is after execution.

The code executed above is the simplest indicator you can make using Pine. However, it can have profound uses for traders. Traders can use the simple moving average as a way to gauge whether the market is bullish, bearish or consolidating. Additionally, the simple moving average can be used as a dynamic support and resistance zone, as confirmation for entries. The use cases for this tool are endless.

Under the Hood

The reason why Pine Script can so easily run this moving average – which would be much more complex to code in other languages – is because of the way it is built.

In Pine Script, functions like ‘ta.sma’ exist for basic trading tools like simple moving averages, which helps traders save time when coding more complex indicators. Instead of writing code that walks back over every single bar for 50 periods, programmers can simply use the function to create a moving average of a specified length. Functions like this exist for a variety of staple trading indicators, including pivot points, linear interpolation and MACD.

Moreover, Pine Script is a language built for time-series data – data that is recorded over time in a specific order. Each piece of data has a time and a value, and each has a place in the sequence of time. Because of that, Pine code doesn’t behave like a traditional top-down language. Instead, it executes automatically on each new bar of data, while considering the data that came before it. On live data, it updates every second for the current bar, allowing the flow of information to the trader to be continuous.
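To illustrate this execution model, here is a small Python sketch (not Pine itself) of a script body that runs once per bar, with the full history of earlier bars available to it:

```python
def sma(values, length):
    # simple moving average of the last `length` values; None until enough bars exist
    if len(values) < length:
        return None
    return sum(values[-length:]) / length

closes, results = [], []
for bar_close in [10, 11, 12, 13, 14]:   # each element is one new bar arriving
    closes.append(bar_close)             # the series grows bar by bar
    results.append(sma(closes, 3))       # the "script body" re-runs on every bar
```

The first two bars produce no value because a 3-bar average needs three bars of history – the same reason Pine indicators often start partway into a chart.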

This structure is what makes Pine so powerful for technical analysis. It is also what makes it so unpredictable if you don’t fully understand how it runs. Ultimately, it is what has allowed traders to create such powerful indicators to assist them with making money in the markets.

A More Complex Approach

However, this is only the tip of the iceberg for Pine Script as a language. The image below shows a snippet of code from a more complex indicator.

This code utilizes functions, for loops, if statements, arrays, and percentile-ranking calculations to create a sophisticated ranking of volume for every bar on which it executes. Essentially, it ranks the bars it analyzes from most impressive to least impressive in terms of volume. This code is certainly a step up from the moving average seen before. It provides a skeleton base for a variety of different indicators that can help traders make more informed decisions in the markets.
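As a rough illustration of the core idea – not the original code – a percentile ranking of volume can be sketched in Python as:

```python
def percentile_rank(history, value):
    # percentage of earlier values that `value` matches or exceeds (0-100 score)
    if not history:
        return 100.0
    below = sum(1 for v in history if v <= value)
    return 100.0 * below / len(history)

volumes = [100, 250, 80, 400, 300]
seen, ranks = [], []
for v in volumes:                       # rank each bar's volume against prior bars
    ranks.append(percentile_rank(seen, v))
    seen.append(v)
```

A bar scoring near 100 has more volume than almost everything before it – the kind of score that could drive candle coloring or zone drawing.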

Here are some possibilities listed below:

- Volume Candle Coloring

- High Volume Candle Zones

- Volume Accumulation based on Buying and Selling Volume

The possibilities – again – are only limited by the creativity of the programmer. Using this one function in Pine, programmers can build sophisticated tools with this core logic as a base, tailored to each trader’s specific use case.

Pine Script Gallery

Although Pine Script code is interesting in and of itself, it is only when looking at what it can create that you can understand its true potential. Here are just a few things that Pine has allowed me to build over the last year.

Volume Entry Assistant:

Price Action Candle Colouring

By considering moving averages, price action and standard deviation, this indicator colors the candles to signal a shift in market trend in the short, medium or long term. The use of this indicator is up to the trader themselves, but generally it is utilized as confirmation for biases or the direction of general price movement. The red coloring signals a bearish movement, purple signals a change in movement, while green signals a bullish movement.

The indicator shown on the pane below the chart is a tool created to show overbought and oversold conditions in the markets. The green highlight shows when price is overbought, signaling that a reversal to the downside is to come, while the red highlight signals that price is oversold and a reversal to the upside is to come.

This can help traders time their entries with more ease, and it has helped me analyze the markets for both longer-term investments and shorter-term trades.

Shown above is a custom liquidity zone indicator that tracks liquid areas in the market – areas that price is likely to react to, based on a combination of price, moving averages, standard deviation and volume. As you can see above, this indicator can be either green or red, signaling a general trend direction. Additionally, price can enter the zone, which signals an optimal place to enter in the general direction of the trend if other factors align.

The Developer Experience

From the perspective of a Pine developer, developing in Pine Script feels significantly different from other programming languages. Before starting to code in Pine, there are a few things to consider that may either help or hinder your programming experience.

The Syntax

At first, the syntax in Pine was surprisingly tedious. Although Pine is very similar to Python, there are certain syntax differences which have – on many occasions – significantly increased debugging time. One example is the assignment of variables. When first defining a variable, it is created as in Python, using the ‘=’ sign. However, to assign a different value to an existing variable, the ‘:=’ operator has to be used. Mixing these up is technically not a syntax error in Pine, so it often goes unnoticed by the editor, causing logic errors during programming. Various small differences like this may initially make it difficult to code in Pine, considering that many programmers are used to coding heavily in one specific language’s syntax, such as Python’s.
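A short, illustrative Pine sketch of the two operators (the condition here is arbitrary):

```pine
//@version=5
indicator("Assignment demo")
counter = 0                    // '=' declares a new variable
if close > open
    counter := counter + 1     // ':=' reassigns it; writing '=' here would
                               // silently declare a new local variable instead
plot(counter)
```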

Instant Feedback

However, one feature of Pine that makes it so rewarding to use is the built-in editor’s instant feedback. When you save the code, it is automatically executed on the chart you are viewing. This means that as soon as you create something new, you can see it on the chart, ready to use for analysis. This makes creating new tools exciting, as it allows for smooth and responsive programming. Additionally, the fact that almost every tool can be easily seen on any chart – whether it’s stocks, crypto, forex or futures – makes most Pine tools versatile for any type of trader.

Community Support

Although almost all programming languages nowadays do have community support, none do it quite like Pine. In TradingView, there is a tab where you can see your created indicators and strategies, as shown below.

At the bottom of the menu is the community page. This page allows you to view all published Pine Script indicators and strategies created by other users. This serves two main purposes. The first is to use other people’s tools on your chart, which can bring your own trading to a new level. The second is to look under the hood of the tools people have created. Most indicators on this tab are open source, meaning anyone can view the code. Direct copying of indicators is monitored, but this feature gives programmers a pathway to level up their Pine programming skills by viewing other people’s code and learning from what they have created.

Conclusion

For any computer science student looking to apply programming skills outside of traditional problems in Python or other mainstream languages, Pine offers an opportunity to code in a unique and fascinating environment. I have not yet seen any other language quite like it. If you are looking for a new language to pick up, Pine would be a great option to broaden your range and sharpen your thinking.

HOW DOES A LARGE LANGUAGE MODEL WORK?

What is an LLM?

A Large Language Model, or LLM for short, is a generative, artificially intelligent program that specializes in understanding and producing human-like, general-purpose text. The GPTs of OpenAI; BERT, PaLM, and Gemini from Google; and Meta’s Llama are all examples of LLMs. They power some well-known tools such as ChatGPT, which runs on the GPT models, or Bard/Gemini, which runs on Google’s models. How exactly does this work, though?

Training

Like the beginning of any journey, LLMs need to go through the process of training. This occurs in multiple steps:

1. Training on a corpus: Here, petabytes of data are fed to the model. The data is unstructured and unlabeled, and the training is unsupervised. The advantage of this is the sheer amount of data that can be provided this way compared to alternatives, but it can also lead to bias. The LLM begins to understand connections between words here. The data should, however, be obtained consensually.

2. Self-supervised learning: Here, the model undertakes some self-supervised training with some labeled data. It begins to improve its accuracy in understanding words and phrases.

3. Deep learning: Here, the LLM is trained using the transformer neural network architecture. This is a system that enables the LLM to understand words and phrases fully by assigning a score (weights) to each part of the text (tokens).

Now, your LLM is completely ready for use. Depending on the quality of the data, labelling, fine-tuning, and transformer architecture, LLMs can vary widely in quality.

How Does It Work?

The LLM first converts text into embeddings in the embedding layer. It then processes the text in the feed-forward layers, while the attention layers capture inter-word relationships. It then filters to get the specifics of the task to generate its output. Put simply (very simply), the reverse process occurs when generating the text.

What can LLMs be used for?

LLMs are highly adaptable. This flexible nature allows them to perform to a high standard on almost any task you give them. They can also be fine-tuned with some additional training to specialize them for a certain function. Think of this like taking a general-purpose LLM (a zero-shot model) like GPT-3.5, which can talk and code, and converting it into a coding specialist, like OpenAI Codex. Here, it can lose some communication skills, but it becomes an expert programmer with high-quality CS knowledge and coding skills. With or without fine-tuning, most LLMs can be used for a variety of tasks such as human-like communication, translation, programming, and more. Newer models can even receive images, sound, and/or video as input (multimodal models), such as GPT-4 or Google’s Gemini Ultra.

The Advantages and Disadvantages of LLMs

Advantages:

Adaptability to numerous situations

Open to fine-tuning and specialization

Rapid improvement due to large investment in the field, e.g. multimodal abilities

Specialization in human-like text, but able to generate more through zero-shot learning

Disadvantages:

Bias and hallucinations (skew from following biased data too closely, or outright misinformation when asked a question, going against training data or inventing new data to fill a gap). Some language models seem to have a political bias. Some make up information when they cannot access the internet.

Producing dangerous or controversial responses due to a lack of censorship, or losing originality and usefulness due to too many restrictions. Models such as the GPTs have come under fire for refusing to speak of certain political figures due to restrictions while speaking of others due to lack of controversy. A model named Dolphin Mistral is uncensored, and this leads to more obvious issues. A lack of guardrails can allow the model to participate in wrongful and even illegal activities and schemes, such as phishing, malware development, or the production or acquisition of illegal goods.

When it comes to LLMs, professionals need to get the parameters just right. If they do, they get great results; if they do not, they could be accused of propaganda, misinformation, polarization, abetting, and more.

Conclusion: What do LLMs mean for the future?

Generative AI is one of the most revolutionary fields on the globe, being a force of great potential to make our lives easier or a disruptive power that can rapidly uproot millions of jobs. LLMs are one of the biggest players in this zone: they could, in the future, do what many professionals depend on for a living, but they can also give rise to intelligent voice assistants and catalyze futuristic smart homes and more. In an age where we witness the rise of AI, it is important that we educate ourselves on the power these programs have to change everything we know. It is important to note that this article is a simplified explanation of Generative AI and LLMs. There is much more to know, learn, and discover in this emerging technology, and none of it is certain.

KNOWLEDGE-BASING - THE LEAP TO AGENTIC LLMS

With the rise of Large Language Models (LLMs), and especially LLM-based AI Agents, a new field in the technological landscape is gaining traction. AI Agents are LLMs that have access to some knowledge and can perform some actions in their environment. These are the tools opening the possibility of AI job replacement, such as in customer service, where an input, human-like, data-driven processing, and a response with an adequate action would suffice for many real customer response jobs. However, LLMs have a limitation. Since they have no information outside of their training, they need to be fed this information in some way, shape, or form. That’s where knowledge-basing comes in. This article will cover some of the prominent knowledge-basing methods, including how they work, their advantages and disadvantages, their use cases, and more.

A Brief Overview of how an LLM Works

An LLM is an AI system made up of a group of layers whose sole aim is to predict the next word in a string. Starting from your base prompt, LLMs predict the next word, then the next, and so on; by repeatedly predicting the next word, they are able to form a cohesive response.
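That loop can be sketched in Python, with a toy lookup table standing in for the model’s actual prediction step:

```python
def next_token(context):
    # toy stand-in for a trained model's next-word prediction
    continuations = {"The": "cat", "cat": "sat", "sat": "down"}
    return continuations.get(context[-1], "<end>")

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):       # predict the next word, then the next...
        tok = next_token(tokens)
        if tok == "<end>":            # stop when the model predicts an end marker
            break
        tokens.append(tok)
    return " ".join(tokens)
```

A real LLM replaces the lookup table with a probability distribution over its whole vocabulary, but the outer loop is the same.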

LLMs do this by repeating a set of operations many times per word prediction. These are known as attention blocks and multi-layer perceptrons, which together make up the ‘transformer’ in Generative Pretrained Transformer (GPT). ‘Attention’ allows the words – or, more accurately, tokens – in the sentence to communicate with each other and share meaning. The word ‘bat’, for example, can mean two different things in ‘vampire bat’ and ‘cricket bat’. Multi-layer perceptrons (MLPs) store a lot of the facts and inherent knowledge an LLM has, including some of its knowledge of facts and grammar.

LLMs do this with pure math. The sentence first undergoes tokenization, where it’s split into tokens, which then undergo embedding, where they are turned into vectors – long lists of numbers. Think of these lists of numbers as representing an arrow in some higher-dimensional space, say 3-dimensional space for now. These vectors store information about the word, so the vectors for ‘father’ and ‘mother’ may point in a similar direction, compared to words like ‘milk’ or ‘cheese’, which may point in another direction. Attention moves these vectors based on their place among the words via matrix multiplication. MLPs do the same, but store facts instead, and will often leave a word untouched (often using what’s called a ReLU function) if it’s not related to the fact.
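The idea that related words point in similar directions can be made concrete with cosine similarity; the 3-dimensional vectors below are made up purely for illustration:

```python
import math

# toy 3-d "embeddings": related words point in similar directions
embeddings = {
    "father": [0.9, 0.1, 0.2],
    "mother": [0.85, 0.15, 0.25],
    "cheese": [0.1, 0.9, 0.05],
}

def cosine(a, b):
    # cosine of the angle between two vectors: 1.0 means identical direction
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Here `cosine(embeddings["father"], embeddings["mother"])` comes out much higher than the similarity between ‘father’ and ‘cheese’; real models do the same comparison, just in hundreds or thousands of dimensions.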

Whilst attention moves the vectors around based on neighboring tokens, MLPs move vectors to truly encode their meaning. For example, the word ‘Einstein’ carries little meaning on its own. An MLP may leave hundreds of words untouched, but if it sees ‘Einstein’, it may align it with other words like ‘Physics’ or ‘Genius’. This is a fact, and it, like attention, is stored using matrices. The contents of these matrices are decided in training and are known as ‘weights’. Many facts, however, are not stored here. For example, until recent upgrades to ChatGPT (using methods we will discuss later), it had a knowledge cutoff: certain pieces of information were simply unavailable to it.

Now that an understanding of LLMs and their inner workings has been established, their processes can be abstracted. The article will now progress through a variety of methods, some of which modify the input, the weights, and/or the output format, to overcome LLMs’ lack of inherent knowledge.

Fine-tuning

Fine-tuning is a relatively simple method to understand, but a difficult one to accomplish. It involves taking a pre-trained model and continuing its training, but in a niche. This allows it to adjust its weights to the specific field of interest, adjusting its inherent knowledge and output language style.

For example, AI firm Mistral AI recently released the LLM Mixtral 8x7B. It was soon fine-tuned into a Dolphin variant, adjusting its weights to make it uncensored. This is an example of fine-tuning that affects output language style. The fine-tune changed the training data of the model to remove cues for censorship, removing the model’s internal blockades against addressing questionable or controversial topics (blockades which are themselves fact-based mechanisms).

One major application of this kind of knowledge-basing is legal LLMs – LLMs whose aim is to imitate a lawyer. Fine-tuning gives them domain-specific information, allowing for not only a good understanding of legal facts, but also of legal language and terminology (legalese) when framing output.

Fine-tuned LLMs can provide fast response times whilst remaining highly customizable and fit for purpose. Fine-tuning is, however, an incredibly resource-intensive process, and can be very expensive compared to other methods. The data is also baked into the training, so just like current LLMs, it needs to be periodically updated to stay current, incurring further costs; without updates, the data is a static addition to the LLM’s existing knowledge (which may be acceptable in some cases). With fine-tuning, there is also the risk of hallucination, where the LLM simply outputs false information.

Retrieval Augmented Generation (RAG)

To overcome the issue of static data, the idea of Retrieval Augmented Generation (RAG) was developed. RAG involves breaking down a large corpus of information and adding it to a vector database. When a user sends a prompt, a simple embeddings model analyzes the query and retrieves the top-k (such as the top five) most useful pieces of information using a semantic search over the vector DB. This information is sent to the LLM to enhance its factual knowledge, which in turn returns an appropriate response.
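A toy Python sketch of this retrieval step (the two-dimensional “embeddings” and stored chunks are invented for illustration; a real system would use a proper embeddings model and a vector database):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# a tiny "vector DB": (chunk, embedding) pairs
vector_db = [
    ("Contract law governs agreements.", [0.9, 0.1]),
    ("Tort law covers civil wrongs.",     [0.2, 0.8]),
    ("Cheese is made from milk.",         [0.1, 0.1]),
]

def retrieve(query_vec, k=2):
    # semantic search: score every chunk against the query, keep the top-k
    scored = sorted(vector_db, key=lambda item: dot(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]

def build_prompt(question, query_vec):
    # prepend the retrieved chunks so the LLM answers with this context in view
    context = "\n".join(retrieve(query_vec))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Limiting `k` is exactly the hallucination/coverage trade-off described above: fewer chunks mean less noise, but also less information.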

Not only does this bypass the expensive process of fine-tuning, but it also means that the independent vector database can be updated periodically (which is relatively inexpensive) to build the LLM’s knowledge base. By limiting the quantity of data that is extracted from the vector DB and appended to the prompt, hallucination can also be avoided, though this does reduce the total information available to the LLM on the topic in question.

In the legal context, a case with thousands of documents which are all relevant will likely need fine-tuning, whilst a case with a select few relevant documents, or data points from a wide array of sources which could be dynamic and ever-changing, would be better off with RAG. It is also important to note that RAG systems have increased latency due to the additional step of the semantic search.

Search RAG

RAG has spawned numerous offshoots sharing some of its characteristics. Search RAG, for example, takes in a query but, rather than searching a vector DB, queries the internet via a search engine to index and leverage the information available there. There can be many such queries, and these can produce a summary of all the relevant details for the primary model. This works by employing a second model, which operates the queries to the search engine. The second model summarizes the top-k results and passes this to the higher-up model, which generates the final response.

This method guarantees up-to-date information, and it is used by major LLM services such as ChatGPT to overcome the knowledge cutoff. The idea of using additional LLMs is also not new; major reasoning models use these to query themselves, so in this context, secondary LLMs query the internet instead. Of course, the benefits of up-to-date data and a reduced risk of hallucination are accompanied by even higher latency and reliance on a corpus of data that may be flawed or inaccurate, since you are literally querying the internet, which is not always 100% accurate. These models are also reliant on an internet connection, and the queries across networks bring about the biggest effect on latency.

GraphRAG

GraphRAG is a newer RAG framework that aims to further increase accuracy and decrease the risk of hallucinations in RAG systems. GraphRAG constructs a Knowledge Graph (KG) from the data, which introduces a structure that can help LLMs understand the data more effectively and accurately.

Like all RAG systems, a prompt is input and a search occurs. Here, it is graph traversal with multi-hop reasoning, which essentially involves moving between the nodes in the graph to collect information relevant to the prompt that may not be picked up by a simple semantic search. GraphRAG systems leverage this method to provide LLMs with much richer and more structured data, which the LLM is less likely to hallucinate on and more likely to use as a basis for reasoning. However, the graphs can be difficult to construct initially, and latency may increase further. GraphRAG systems have applications in healthcare, finance, and academic research. Another application is our running example of law, where KGs can be used to represent the relationships between statutes, cases, and legal entities in an LLM-friendly manner.
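The multi-hop traversal idea can be sketched in Python over a toy knowledge graph (the nodes here are hypothetical legal entities, not real data):

```python
# toy knowledge graph: each node maps to the nodes it is linked to
graph = {
    "Case A":    ["Statute 1", "Judge X"],
    "Statute 1": ["Statute 2"],
    "Judge X":   [],
    "Statute 2": [],
}

def multi_hop(start, hops):
    # follow edges outward `hops` times, collecting every node reached
    frontier, seen = {start}, {start}
    for _ in range(hops):
        frontier = {n for node in frontier for n in graph.get(node, [])} - seen
        seen |= frontier
    return seen
```

With two hops, the traversal surfaces “Statute 2” even though it is not directly linked to “Case A” – the kind of indirect connection a flat semantic search would miss.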

Cache Augmented Generation (CAG)

Thus far, these RAG systems have followed a trend of reducing hallucination and providing richer dynamic data at the cost of latency. To combat this, the final RAG offshoot of this article, CAG, was developed. Cache Augmented Generation ignores all of the prior ideas of storing information in a structured format and querying it for relevance. Rather, it passes the entirety of the pure, unstructured content to the LLM to decipher.

This drastically reduces latency and, for obvious reasons, provides the LLM with the entire scope of the data, but it does limit how much data can be input. The hard limit is the context window of the LLM, though in practice the input should be kept smaller still to reduce the risk of hallucination. Since CAG skips the search procedure entirely, it works best when only the most relevant information on a very narrow field is kept in the cache.

Model Context Protocol (MCP)

MCP is an evolution of knowledge-based LLMs, often called ‘the USB-C of AI applications’. Developed by Anthropic, MCP brings knowledge-based LLMs to life, making them real LLM agents. It sets a standard for communication between LLMs and MCP servers, where LLMs are imbued with knowledge of the tools they have access to.

Based on a user query, the LLM will use its available information to decide what tool to use, output a response in the appropriate structure to invoke that tool, and invoke the callable on the MCP server. The MCP protocol shows the applications of knowledge-based LLMs in real agentic use cases by teaching the LLM about an arbitrary set of functions or callables and allowing the LLM to act within its environment.
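The decide-then-invoke loop can be sketched in Python; note that the JSON shape and tool names here are illustrative stand-ins, not the actual MCP wire format:

```python
import json

# a registry of callables the "server" exposes (names are hypothetical)
tools = {
    "get_weather": lambda city: f"Sunny in {city}",
}

# the model emits a structured tool call rather than plain prose
model_output = json.dumps({"tool": "get_weather", "arguments": {"city": "Dubai"}})

def dispatch(raw):
    # parse the model's structured output and invoke the named callable
    call = json.loads(raw)
    return tools[call["tool"]](**call["arguments"])
```

The standardization is the point: any model that emits calls in the agreed structure can drive any server that registers its callables.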

Conclusion

Knowledge-basing, and other leaps in the field of AI agents, are at the center of global technology, economics, politics, and public debate. AI stands to change the world with its decision-making capacity in a way no prior technology has, and being able to teach it and feed it data is the first step towards a learning AI, which itself may be the first step to the coveted Artificial General Intelligence.

WHAT WAS THE FIRST PROGRAMMING LANGUAGE?

If it were not for programming languages, we would not have anything close to a website; computer software would be incredibly restrictive, and we would essentially be stuck in the 1900s. Programming languages have created the world we live in today, technologically. But where did it begin? What was the original programming language, and how did it lay the grounds for the software that exists today?

The Origin of Programming

If we are to learn about the history of programming languages, then we have to go back in time to the 19th century and visit the life of Ada Lovelace, an English mathematician better known as the world's first programmer. In the 1840s, she worked on Charles Babbage's proposed plan for the Analytical Engine, a mechanical general-purpose computer. Lovelace developed an algorithm to be run on the Analytical Engine, and her writings contain some of the first known computer programs.

Assembly Language and Machine Code

The earliest electronic computers were programmed directly in machine code: raw binary instructions specific to each machine. In the late 1940s and 1950s, assembly languages replaced these binary opcodes with human-readable mnemonics, and a program called an assembler translated the mnemonics back into machine code. This made programs far easier to write and debug, though each assembly language still worked on only one family of hardware.

Plankalkül: The First High-Level Language

While assembly languages improved programming efficiency, they were still machine-dependent. In the 1940s, German engineer Konrad Zuse created Plankalkül, which is regarded as the first high-level programming language. Plankalkül introduced structured programming concepts: programmers could declare data types and procedures independently of the machine hardware. Regrettably, Plankalkül never caught on at the time.

FORTRAN: The First Widely Used Language

IBM built FORTRAN (Formula Translation) during the 1950s under John Backus. FORTRAN was the first commercially successful programming language, making it simple for scientists and engineers to program. It excelled at complex mathematical calculations and was the pioneer for many follow-on languages.

The Legacy of Early Programming Languages

The development of early computer languages paved the path to the programming paradigms of today. COBOL, Lisp, BASIC, C, and Python are among the languages that followed, making programming more efficient and flexible. From Lovelace's algorithm to the popularity of FORTRAN, these developments took us to the advanced, software-driven world that we enjoy today.

In hindsight, computer programming languages have come a long way, but it all started with ideas conceived long ago. As we continue to advance computing to even more unimaginable heights, it is worth remembering that it all began with pioneers like Lovelace, Zuse, and Backus.

WHY PROGRAMMING LANGUAGES DIE (AND HOW NEW ONES ARE BORN)

Programming languages are like living organisms. They are developed, they mature, and eventually, some of them die. From Pascal to Perl, former industry leaders have now gone out of fashion or out of existence. Meanwhile, newcomers such as Rust, Kotlin, and Zig have appeared, capturing the enthusiasm of programmers all over the world. But why do some programming languages wither away while others thrive?

The Lifecycle of a Language

Just like technology products, programming languages have an informal life cycle. At the beginning, a language is born, usually to solve a specific problem or fix flaws in existing tools. If programmers find it handy and convenient to use, it gains traction. Communities form, libraries are created, and usage spreads.

But popularity is hard to sustain. If the language fails to adapt to new trends, performance needs, or modern developer preferences, it starts to become obsolete. Soon enough, it can be replaced by newer, more efficient, or more elegant languages.

This is a natural phenomenon and not necessarily indicative of the language's quality. In fact, many dying languages are technically well designed, but building software is about more than syntax.

Why Languages Die

A programming language doesn't just disappear overnight; it fades away slowly, typically through technical stagnation, community erosion, and changing industry requirements.

One of the main reasons is loss of community support. When developers leave a language behind, its forums go quiet, tutorials become outdated, and libraries deteriorate. New students steer clear of it, creating a cycle of decline. Languages like Ada and Tcl saw their communities dwindle to the point where they are nearly irrelevant in today's development.

Another factor is modernization. Some languages were excellent in their day but never evolved. Visual Basic, for example, dominated the early 2000s as the Windows programming language of choice but was left behind by the web and mobile platforms. It could not evolve, and developers moved on to more versatile and forward-looking tools.

The language ecosystem also plays a very important role. Even a technically beautiful language won't survive without an adequate supply of libraries, frameworks, and tools. A good example is D, a well-designed language whose poor third-party support restricted its mainstream usage.

Backward compatibility is also a concern. Whenever an update to a language breaks existing code, it fills developers with anxiety, especially in large organizations with established systems. Python 3 was resisted for years because of its incompatibility with Python 2, and it survived only because the language had a large user base and a gradual transition plan.
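The transition pain is easy to demonstrate. The snippet below runs under Python 3; the comments note how the same-looking code behaved under Python 2, which is exactly the kind of silent change that made migration scary.

```python
# In Python 2, `7 / 2` evaluated to 3 (integer division), and print was a
# statement, not a function. In Python 3, / is true division and print is
# a function, so old code could produce different answers without erroring.
print(7 / 2)   # 3.5 in Python 3 (would have been 3 in Python 2)
print(7 // 2)  # // recovers the old floor-division behaviour: 3
```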

Finally, corporate adoption or abandonment can greatly affect the destiny of a language. Objective-C was inextricably bound to writing iOS applications for years, but after Apple introduced Swift, it lost favor. Similarly, when Oracle acquired Sun Microsystems and with it Java, licensing and innovation concerns affected its grassroots support for a while.

In short, languages die when they can no longer cope with the evolving needs of developers and when no one is left to advocate for them.

The Genesis of a New Language

So what inspires a new language to come out of the woodwork? Contrary to popular myth, new programming languages do not spring up on a whim. They usually materialize with intent: to address shortcomings in current tools or to meet new challenges.

For instance, Rust was created at Mozilla to deliver the memory safety of languages like Python without sacrificing the performance and control of C++. Rust was widely adopted by systems programmers, who could avoid memory bugs without taking a performance hit.

Kotlin, from JetBrains, solved Java's verbosity problems while remaining fully interoperable with Java code. With Google officially endorsing Kotlin for building Android apps, its popularity grew almost overnight.

Meanwhile, Go (Golang) came out of Google as a response to growing software infrastructure complexity. Its simplicity and native concurrency features made it appealing in cloud-native environments.

Should you learn a “Dying” Language?

You might be asking yourself: is learning a dying language worth the effort? Interestingly enough, in some cases the answer is yes.

Legacy code still runs on COBOL, Fortran, and even very old versions of C. Banks, airlines, and governments still employ developers who know these "ancient" systems because it is too expensive or risky to rewrite millions of lines of time-tested code.

That said, it makes sense to focus on future-proof and adaptable languages as the foundation of your education. Python, JavaScript, Java, and C++ still dominate the job market, and newcomers like Rust and TypeScript are certainly worth keeping an eye on.

Final Thoughts

Programming languages are not immortal. They're born with innovation, die with neglect, and occasionally linger in surprising places. As a beginner or young programmer, it's worth knowing this history, not just to select the right tools, but to appreciate the ever-changing character of the technology landscape.

No matter whether you're learning your first language or your fifth, remember this: the best developers are not committed to syntax; they're committed to solving problems.

ESOTERIC LANGUAGES

Esoteric means obscure, unusual, and specialised. Esoteric languages, or esolangs, are programming languages that are weird: unconventional, and perhaps pushing the limits of what can be considered a programming language at all. But why? Why care about these wayward rules? Why bother trying something that has neither a practical application nor an intuitive format?

First, some examples.

The most famous esoteric language is BrainF. You only need 8 symbols to program with it, yet it is still Turing-complete, meaning it supports variables, iteration, and selection.

Here’s how you would write “Hello, World!” in this language: ++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.

It works with a series of boxes (cells) that hold numbers, and a pointer. < and > move the pointer between boxes, + and - change the value in the current box by 1, , and . are input and output, and [ and ] allow iteration.
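Because the language is so small, you can write a complete interpreter for it in a few dozen lines. Here is a minimal sketch in Python (error checking omitted), demonstrated on a tiny program that prints “Hi”:

```python
def run_bf(code: str, inp: str = "") -> str:
    """A minimal BrainF interpreter: a tape of cells and one data pointer."""
    cells, ptr, pc, out = [0] * 30000, 0, 0, []
    inp_iter = iter(inp)
    # Pre-match brackets so [ and ] can jump to each other.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        c = code[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            cells[ptr] = (cells[ptr] + 1) % 256
        elif c == "-":
            cells[ptr] = (cells[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(cells[ptr]))
        elif c == ",":
            cells[ptr] = ord(next(inp_iter, "\0"))
        elif c == "[" and cells[ptr] == 0:
            pc = jumps[pc]  # skip the loop body
        elif c == "]" and cells[ptr] != 0:
            pc = jumps[pc]  # jump back to the loop start
        pc += 1
    return "".join(out)

# 8 iterations of adding 9 gives 72 ('H'); 33 more gives 105 ('i').
print(run_bf("++++++++[>+++++++++<-]>." + "+" * 33 + "."))
```

Tracing this by hand is a great way to internalise how the loop brackets behave.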

As you can see, it is minimalistic, but not at all simple or intuitive.

Similarly, Whitespace is a language that only uses spaces, linefeeds and tabs to write programs, so “Hello, World!” in Whitespace looks like an entirely blank block of text.

The most difficult esolang is said to be Malbolge, aptly named after the eighth circle of hell. It took two years for anyone to produce a working “Hello, World!”, and the solution had to be discovered by a computer search.

It looks like this:

(=<`#9]~6ZY327Uv4-QsqpMn&+Ij"'E%e{Ab~w= :]Kw%o44Uqp0/Q?
xNvL: H%c#DD2^WV>gY;dts76qKJImZkj

However, there’s more to these experiments than pure difficulty. Chef is a language whose programs read like recipes. ><> (pronounced “Fish”) and Befunge are two-dimensional languages, where the code is no longer bound to running linearly, allowing for some complex loops. Back to the question: if these languages are so difficult, so obtuse, so convoluted, why on earth should we try them? Because by trudging through all the chaos, you will reach clarity.

You don’t get pretty error messages or syntax that is almost identical to pseudocode. And you’ll learn how to solve problems.

You don’t get the patterns and templates that you reuse for every program. And you’ll uncover the processes computers go through.

You don’t get the machine code pre-wrapped in layers of interpreters. And you’ll build resilience.

You don’t get to have bugs as simple as a missing closing bracket. And you will have fun.

Now, pick a language, one that grabs your attention. Skim over the rules. Try to print a number. Move a value from one memory cell to another. Loop three times. Be confused. Get stuck. Then push further. Try something more complicated: a calculator, a pattern generator, a simple game. The deeper you go, the more you’ll work out about how a machine interprets instructions, how data flows, and how every command propagates like a ripple through the program.

You might say, “don’t reinvent the wheel”, but that’s not what this is. Esolangs aren’t about efficiency or practicality; they’re about understanding. You can put thick, strong wheels on your car, but if you don’t know how the wheel got those attributes or why they’re helpful, and you try to race with that car, you will fail miserably.

Through playing with these esoteric languages, you’ll stop writing code and start understanding code. You’ll train your mind to step back, zoom in, flip the problem over, and solve it. And it will be worth it.

ALGORITHMS

The Invisible Powerhouse

Have you ever wondered why Netflix always has what you would like to watch? Or why Google Maps always knows the fastest route? That is the magic of algorithms. They’re the “behind the scenes” act: they learn from what we do, protect our data and help us solve problems. You may not realize it, but they shape our world.

Contrary to what you may think, algorithms existed even before computers! In ancient times, people used them in mathematics and astronomy, and long before modern computers, step-by-step procedures were used to sort mail, draw up timetables and assist with taxes. In fact, the earliest computer algorithm was written by a woman named Ada Lovelace in the nineteenth century.

Now, imagine an AI inventor experimenting with how to solve problems more quickly. Instead of humans designing everything, it picks the best out of thousands of possibilities that it comes up with itself. It’s like giving a computer the ability to think creatively. AlphaEvolve is one such AI tool in use today.

All this talk about algorithms got me thinking about how many algorithms are already part of my life. My alarm clock rings and I wake up. If you search for something once, it appears on your For You page. There are even cooler ones: clap your hands and the lights turn on, or shift into mood lighting. Then there are algorithms I wish existed; for example, if the weather turned cold, all my winter clothes would move to the front of the pile.

To summarize, even though you may not be able to see algorithms, they're omnipresent, silently shaping the world around us. They've been around for centuries, quietly smoothing out the everyday inconveniences we come across. Algorithms may be invisible, but they’re the reason our world works the way it does. It's not magic; it's algorithms!

WHY YOU CAN’T STOP SCROLLING

Most people think marketing is all about catchy slogans or bright visuals. In reality, modern digital marketing is driven by computer science. Every time you scroll through your TikTok “For You” page or see an Instagram post recommendation, you are experiencing systems designed and refined through algorithms, data structures and large-scale experimentation, all working out of sight to keep you engaged and coming back.

A big part of this is data. Apps collect far more information than most users realise. They track what you click, how long you pause on a video, which posts you ignore, the time of day you usually open the app and even how fast you scroll. None of this is shocking on its own, but combined it builds a detailed model of your behaviour that the algorithms exploit to hold your attention for as long as possible. Behind the scenes, this information is stored in huge databases that can handle millions of read and write operations every second. Engineers then run queries to find patterns across users.

How Recommendations Actually Work

For example, if thousands of people who like a certain genre of music also tend to watch cooking videos, the system will notice and use that connection when making recommendations. The actual recommendation logic often uses an approach called collaborative filtering. It is simpler than it sounds: if two users interact in similar ways, the system assumes they have similar tastes. Suppose you and another user both like the same sports videos. If that person then starts to watch a new series of match highlights, the system will recommend that series to you as well. The algorithm does not understand football or game strategy; it only understands patterns in numbers. To make this work at scale, engineers represent each user as a long vector of numbers based on their behaviour. The closer two vectors are in a mathematical space, the more likely the users are to get similar recommendations.
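A common way to measure how “close” two behaviour vectors point is cosine similarity. The sketch below uses invented three-number behaviour vectors purely for illustration; real systems use vectors with hundreds of dimensions.

```python
import math

def cosine_similarity(u: list[float], v: list[float]) -> float:
    """How closely two behaviour vectors point the same way (1.0 = identical taste)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented behaviour vectors: [sports watched, cooking watched, music watched]
you = [9.0, 1.0, 4.0]
similar_user = [8.0, 2.0, 5.0]
different_user = [0.0, 9.0, 1.0]

print(cosine_similarity(you, similar_user))    # close to 1: recommend their videos
print(cosine_similarity(you, different_user))  # much lower: different tastes
```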

Digital marketing also relies on constant testing. Companies rarely release a new feature without running A/B tests: two different versions of a page or a button are shown to two random user groups, and the system measures which version performs better on criteria like click-through rate or session length. These tests might seem small, but they are very influential; something as tiny as changing the shade of a button can increase sign-ups by a significant amount. A/B testing lets companies iterate quickly because decisions are based on real user data rather than guesses.
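One standard way to judge such a test is a two-proportion z-test, which asks whether variant B's click-through rate is higher than A's by more than chance would explain. The click numbers below are invented; this is a sketch of the statistics, not any particular company's pipeline.

```python
import math

def ab_test(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Two-proportion z-test: is variant B's click-through rate really higher?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)   # rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the normal CDF: chance of a gap this big by luck.
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return p_a, p_b, p_value

p_a, p_b, p = ab_test(clicks_a=200, n_a=10_000, clicks_b=260, n_b=10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  p-value: {p:.4f}")
```

A small p-value (conventionally below 0.05) is the signal that the new button colour really did move the metric.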


Another system to think about is the growth loop, where user actions create more users. For example, when you send an invite link to a friend, you are doing free marketing for the app. Once your friend joins, they might share content that brings in someone else. Engineers track this loop with formulas that measure virality: if each user brings in more than one extra user, the app grows naturally without huge advertising budgets.
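The usual number behind this is the viral coefficient k: invites sent per user multiplied by the conversion rate of each invite. If k > 1, every cohort of new users recruits an even bigger cohort. A toy projection with invented numbers:

```python
def project_users(initial_users: int, invites_per_user: float,
                  conversion_rate: float, cycles: int) -> list[int]:
    """Project total users when each new cohort sends invites in the next cycle."""
    k = invites_per_user * conversion_rate  # the viral coefficient
    total, cohort = initial_users, float(initial_users)
    history = [initial_users]
    for _ in range(cycles):
        cohort = cohort * k      # the new cohort recruited by the previous one
        total += cohort
        history.append(round(total))
    return history

# Hypothetical numbers: each user sends 5 invites, 25% of invitees join (k = 1.25).
print(project_users(1000, invites_per_user=5, conversion_rate=0.25, cycles=4))
```

With k above 1 each cycle's jump is bigger than the last; with k below 1 the jumps shrink and growth stalls.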

Tricks and Ethics

Of course, none of this works without good design. Interface design uses concepts like friction, which measures how hard it is for a user to complete a task. Companies try to reduce friction for actions they want you to take, such as watching another video or signing up for a trial. At the same time, they sometimes add friction to actions they do not want you to take. This is where the ethics become complicated: delays on account deletion screens and deliberately confusing settings are examples of dark patterns that take advantage of human behaviour.

When you put these elements together, you get the modern world of digital marketing. It is not just psychology or advertising; it is algorithms, large datasets and constant optimisation. Understanding the computer science behind these systems helps you see apps for what they truly are: carefully engineered machines built to capture your attention. And when you know what they actually are, you know how to avoid falling into the endless trap they've built.


THE RISE OF QUANTUM COMPUTING

Imagine a computer so powerful it could solve in mere seconds what today’s supercomputers would take centuries to do. With the recent emergence of quantum computing, researchers believe that this futuristic vision could become a reality.

Quantum computing is a rapidly emerging technology that uses principles of fundamental physics to solve complex computational problems, even ones that are far beyond the reach of today’s computers. While the digital computers we have been using for decades rely on binary processing (bits that are either 0 or 1), quantum computers use qubits, the basic unit of quantum information. In contrast to bits, qubits can exist in the 0 state, the 1 state, or a blend of both at the same time, a property known as superposition. This means they can explore many calculations simultaneously. Furthermore, qubits also take advantage of quantum entanglement, a phenomenon in which the state of one qubit is linked to the state of another, no matter how far apart they are. This connection means that quantum computers can coordinate processes across multiple qubits at once, which significantly increases their efficiency for some problems.
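Superposition can be sketched in a few lines by representing a single qubit as two complex amplitudes and applying a Hadamard gate, the standard gate that puts |0⟩ into an equal superposition. This is a toy simulation on an ordinary computer, not how real quantum hardware is programmed:

```python
import math

# A single qubit as two complex amplitudes [a, b], meaning the state
# a|0> + b|1> with |a|^2 + |b|^2 = 1.
ket0 = [1 + 0j, 0 + 0j]  # the definite state |0>

def hadamard(state):
    """Apply the Hadamard gate, which puts |0> into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    return [abs(amp) ** 2 for amp in state]

plus = hadamard(ket0)
print(probabilities(ket0))  # [1.0, 0.0] -> always measures 0
print(probabilities(plus))  # ~[0.5, 0.5] -> a genuine 50/50 superposition
```

The point of the sketch: before measurement the qubit genuinely carries both amplitudes at once, and only measurement collapses it to a single 0 or 1.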

However, building a quantum computer is not an easy task: they are extremely expensive to build and maintain. Qubits are also extremely delicate and can lose their quantum state if disturbed by their environment, a process called decoherence. To prevent this, quantum computers usually operate in carefully controlled conditions, often just a tiny fraction of a degree above absolute zero, the lowest possible temperature, where particles have almost no movement. Despite these challenges, researchers around the world are making rapid progress with quantum computing and bringing it closer to practical use.

The potential applications of quantum computing span a wide range of industries. In medicine, it could model molecular interactions for drug discovery and analyse complex health datasets. In finance, it could optimise investment strategies and detect fraud. Meanwhile, in environmental science, it could model climate systems and help guide better decisions for our planet.

For us students and young coders, quantum computing is a fascinating field of research that could prepare us for a world where computers are faster, smarter, and capable of solving problems we once thought were impossible. The emergence of quantum computing makes this one of the most exciting times to explore the world of computer science.

NEUROMORPHIC COMPUTING

The next wave in revolutionising AI

As technology advances and artificial intelligence (AI) drives transformation across industries, traditional computing systems are hitting fundamental barriers. Classical chips process data sequentially and rely on binary logic, which falls short in tasks that require parallelism, adaptability, and real-time learning. Neuromorphic computing offers a paradigm shift, drawing inspiration directly from the human brain to create systems that are more efficient, adaptive, and intelligent.

Neuromorphic chips are designed to emulate the brain's architecture by mimicking the behavior of biological neurons and synapses. Instead of executing instructions linearly, these chips use parallel, event-driven processing to handle complex information dynamically and efficiently. This structural innovation significantly improves performance in areas such as pattern recognition, sensory data interpretation, and autonomous decision-making.
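The classic mathematical model behind such neurons is the “leaky integrate-and-fire” neuron: charge builds up from incoming signals, slowly leaks away, and the neuron emits a spike only when a threshold is crossed. A toy software sketch with invented numbers (real neuromorphic chips implement this in hardware):

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """A leaky integrate-and-fire neuron: charge accumulates, leaks away,
    and a spike fires (an 'event') only when the threshold is crossed."""
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leak, then integrate the input
        if potential >= threshold:
            spikes.append(t)   # event-driven: output happens only now
            potential = 0.0    # reset after firing
    return spikes

# Weak inputs leak away without ever firing; a burst of strong inputs spikes.
print(simulate_lif([0.1, 0.1, 0.0, 0.0, 0.6, 0.6, 0.0, 0.0]))
```

Notice that nothing is output on most timesteps, which is exactly why event-driven hardware can sit nearly idle and save power.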

A key strength of neuromorphic computing lies in its energy efficiency. Unlike traditional processors that continuously consume power during operation, neuromorphic chips are largely event-driven: they activate only when they need to process specific inputs, so they consume extremely little power. For example, certain neuromorphic systems can perform advanced predictions and calculations using as little as 20 watts of energy [1].

These capabilities make neuromorphic chips ideal for a range of applications. In robotics, they enable real-time sensory data processing and adaptive control. In autonomous vehicles, they help identify and respond to complex environments by processing visual, auditory, and spatial data simultaneously. By simulating how biological neurons adapt and learn, neuromorphic systems have the potential to revolutionise AI, giving machines the capability to learn and evolve more naturally and efficiently.

Fundamentally, neuromorphic computing reimagines the chip as a living, learning entity. Similar to how a neural network's artificial neurons interact, neuromorphic chips feature transistor nodes that operate like biological neurons. When combined with synaptic plasticity (the ability of connections between neurons to strengthen or weaken over time), these chips can make temporal decisions and perform predictive tasks more effectively than conventional digital systems [2].

One of the major limitations of existing AI models like deep learning is their insatiable demand for computational resources. They require vast amounts of energy and memory, making them difficult to scale. Neuromorphic computing sidesteps this issue with hardware architectures that operate more like the brain. These include parallel processing units, analog circuits, and novel materials that support controlled ion flow for synaptic communication. Such materials include single-crystalline silicon layered with silicon germanium, as explored by researchers at MIT, and tantalum oxide for durability and precision, investigated by a team in South Korea [3].

New hardware architectures are also emerging. For example, the University of Manchester’s SpiNNaker (Spiking Neural Network Architecture) system demonstrates how traditional digital components like ARM cores and routers can be configured to simulate the cortex of the human brain. SpiNNaker has achieved performance parity with supercomputers in simulating cortical activity, showing how neuromorphic systems can match current high-performance standards while using far less energy [4].

These developments reflect an ongoing global effort to move beyond the limits of silicon-based computing. Simple binary logic isn’t enough for the complex and fast decisions modern systems need to make. Neuromorphic computing enables communication between artificial neurons in a way that closely resembles human cognition, involving intricate electrical signals and synaptic weights instead of simple on/off binary signals. This opens the door to machines capable of processing information with more depth, adaptability, and context.

While software-based neural networks have achieved significant success in machine learning, the challenge now lies in translating those advances into physical neuromorphic systems. As researchers improve materials, architectures, and algorithms, the goal is to create chips that can learn and adapt, becoming more brain-like in both structure and function.

This technology impacts much more than just speed and power. Neuromorphic computing could play a pivotal role in medical research, particularly in simulating and understanding the brain. For example, researchers hope these systems will enable simulations of neurodegenerative diseases like Alzheimer’s at a level of detail previously unattainable. By replicating how biological brains operate, these chips offer a platform for studying, and potentially finding solutions to, neurological disorders.

In conclusion, neuromorphic computing represents a transformative step toward the next generation of intelligent machines. By aligning more closely with the structure and behavior of the human brain, neuromorphic chips will lead to improvements in energy efficiency, learning capacity, and real-time decision-making. Whether it’s enabling autonomous robots, accelerating AI, or helping decode the brain itself, the frontier of neuromorphic computing is one of the most promising and exciting in modern science and engineering.

References

[1] IBM, "Neuromorphic Computing: Thinking Beyond the Binary," IBM, 2024. [Online]. Available: https://www.ibm.com/think/topics/neuromorphic-computing

[2] Intel, "Neuromorphic Computing Research," Intel Corporation, 2024. [Online]. Available: https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html

[3] Science Alert, "Neuromorphic Computers Are Getting Smarter Thanks to Better Materials," 2024. [Online].

[4] University of Manchester, "SpiNNaker: Simulating the Human Brain," 2024. [Online].

AI IN FACE AND VOICE RECOGNITION

Artificial Intelligence can now detect human emotions by analysing the way we speak and the way we look. This is done using machine learning systems trained to identify emotional patterns in sound and facial expressions. These systems are already being used in apps, virtual assistants, security tools, and mental health platforms.

Emotion detection through voice begins with feature extraction. When someone speaks, their voice carries acoustic signals that reflect emotional states. AI models analyse features like pitch, intensity, speed, and rhythm; these paralinguistic features are then measured. For example, an angry voice might be louder, sharper, and more abrupt than a calm one. A sad voice might be slower, softer, and lower in pitch [1].
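Two of the simplest such features can be computed directly from a waveform: RMS energy (a loudness proxy) and zero-crossing rate (a rough pitch/sharpness proxy). The sketch below compares two synthetic sine-wave "voices"; real systems use far richer features, and the calm/angry framing here is purely illustrative.

```python
import math

def rms_energy(samples):
    """Loudness proxy: root-mean-square of the waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    """Rough pitch proxy: how often the signal changes sign."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / len(samples)

def sine_wave(freq_hz, amplitude, n=8000, rate=8000):
    """One second of a pure tone, as a stand-in for a recorded voice."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / rate)
            for t in range(n)]

calm = sine_wave(120, amplitude=0.2)   # lower, quieter "voice"
angry = sine_wave(300, amplitude=0.9)  # higher, louder "voice"

print(rms_energy(calm), rms_energy(angry))                  # angry is louder
print(zero_crossing_rate(calm), zero_crossing_rate(angry))  # angry crosses more
```

A classifier never sees the audio directly; it sees a list of numbers like these, which is what makes the problem learnable.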

The system does not understand these patterns by default. It is trained on large datasets of recorded speech, with each recording labelled according to the emotion being expressed. The training process lets the model learn the relationships between vocal features and emotions. Once trained, the model takes new audio input, processes the sound into numerical features, and uses them to predict the emotion with a certain level of confidence [2].

In facial emotion recognition, the process begins with detecting the face in a video or image. Once detected, the system maps specific points on the face, called facial landmarks: the corners of the mouth, the shape of the eyebrows, the edges of the eyes, and so on. Movements in these landmarks form recognisable patterns linked to different emotions. A genuine smile, for example, changes both the mouth and the area around the eyes [3].

Machine learning models process these facial features and compare them with previously learned patterns. Some systems use traditional algorithms to measure distances and angles between facial points. Others use deep learning models, such as convolutional neural networks, which can process raw pixel data and extract features automatically. The result is a prediction of the most likely emotion expressed in the image or video.
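A toy example of the traditional, hand-crafted approach: measuring mouth width relative to face width from (x, y) landmark coordinates. The landmark names and coordinates below are invented for illustration; real systems use dozens of landmarks and feed many such measurements into a classifier.

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def smile_ratio(landmarks):
    """A hand-crafted feature: mouth width relative to face width.
    Landmark names and values here are hypothetical."""
    mouth = dist(landmarks["mouth_left"], landmarks["mouth_right"])
    face = dist(landmarks["jaw_left"], landmarks["jaw_right"])
    return mouth / face

neutral = {"mouth_left": (40, 70), "mouth_right": (60, 70),
           "jaw_left": (20, 60), "jaw_right": (80, 60)}
smiling = {"mouth_left": (33, 68), "mouth_right": (67, 68),
           "jaw_left": (20, 60), "jaw_right": (80, 60)}

print(smile_ratio(neutral))  # narrower mouth -> smaller ratio
print(smile_ratio(smiling))  # wider mouth -> larger ratio
```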

Combining voice and face data gives better results; this is known as multimodal emotion recognition. By analysing both what a person is saying and how their face is reacting, the system increases its accuracy. If the audio suggests someone is frustrated but the facial expression suggests boredom, the model adjusts its prediction based on the full picture. Multimodal models are built from systems that can process different types of input at once and synchronise them in real time [4].

These models are trained using supervised learning, meaning the system learns from examples that are already labelled. The more diverse and high-quality the data, the better the model becomes. Many datasets contain thousands of voice recordings and facial images of people expressing various emotions under controlled conditions.

There are many real-world applications. In customer service, AI tools can detect when a caller is frustrated, allowing systems to respond more carefully or transfer the call. In cars, driver monitoring systems analyse the driver’s voice and face to detect drowsiness or stress. In mental health, some apps analyse tone to identify signs of depression or anxiety. Even classroom tools are being developed to gauge student engagement from facial expressions and tone of voice.

Despite this progress, emotion detection is still a difficult problem. Human emotions are complex, and people express them differently. Cultural differences, personality, and context all affect how emotions appear. A neutral face does not always mean a neutral emotion; a raised voice does not always mean anger. Models can misread these signals, especially when trained on limited or biased data.

Emotion recognition is one of the most interesting examples of how AI is moving closer to understanding human behaviour. Machines are learning to respond not just to what we say, but to how we say it.

References

[1] B. Schuller et al., "The INTERSPEECH 2009 Emotion Challenge," Proc. INTERSPEECH, 2009.

[2] C. Busso et al., "IEMOCAP: Interactive emotional dyadic motion capture database," 2008.

[3] P. Ekman and W. V. Friesen, "Facial Action Coding System (FACS)," 1978.

[4] S. Poria et al., "Multimodal Sentiment Analysis: Addressing Key Issues and Setting Up the Baselines," IEEE Intelligent Systems, 2017.

FROM ALPHAGO TO CHATGPT

In 2016, AlphaGo, an AI model from Google’s DeepMind, was trained to play the ancient, complex board game Go, and it won against one of the best players in the world, Lee Sedol. This was a major milestone of the 21st century, as it showcased AI’s remarkable capabilities. Unlike traditional algorithms that follow fixed rules, AlphaGo was trained by analysing millions of positions and playing games against itself. The same family of ideas helped lead to today’s systems, such as ChatGPT and Gemini.

Most importantly, AlphaGo’s success came from its ability to learn and adapt from data rather than follow fixed instructions, unlike traditional algorithms. Using deep neural networks to estimate the probability that each possible next move leads to a win, AlphaGo gradually learned which moves produce better outcomes. The model combined reinforcement learning techniques with a Monte Carlo Tree Search (MCTS) that predicts the promise of each possible move on the board. During a match, AlphaGo uses MCTS to simulate thousands of potential future move sequences. The system was trained with supervised learning on human expert games and with reinforcement learning, where it played millions of games against itself.

MCTS is a method for finding the best move in a game by simulating many possible sequences; the results build a ‘tree’ of possible moves. The algorithm has four main steps. The first is selection, where MCTS picks moves by balancing promising moves against moves not yet explored. Next is expansion: when the algorithm reaches a new state, it adds a node to the tree for that state. Once the node is added, the algorithm simulates a random sequence of moves until the game ends. The last stage is backpropagation, where the result of the simulation (win, loss or draw) is propagated back up the tree, enabling the algorithm to ‘learn’ which moves lead to better outcomes.
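These four steps can be sketched on a much simpler game than Go: a pile of stones where each player takes 1 or 2, and whoever takes the last stone wins. This toy version keeps the tree one level deep, so "expansion" is just trying an unvisited move, but the selection/simulation/backpropagation loop is the genuine MCTS pattern:

```python
import math
import random

def mcts_best_move(pile: int, iterations: int = 2000) -> int:
    """Toy MCTS for 'take 1 or 2 stones; whoever takes the last stone wins'."""
    visits, wins = {}, {}  # statistics per (pile, move)
    moves = [m for m in (1, 2) if m <= pile]

    def rollout(p):
        """Simulation: random play; return 1 if the player to move at p wins."""
        turn = 0
        while p > 0:
            p -= random.choice([1, 2]) if p >= 2 else 1
            turn ^= 1
        return 1 if turn == 1 else 0  # whoever just moved took the last stone

    for _ in range(iterations):
        total = sum(visits.get((pile, m), 0) for m in moves) + 1

        def ucb(m):
            """Selection: UCB1 balances good moves against unexplored ones."""
            n = visits.get((pile, m), 0)
            if n == 0:
                return float("inf")  # expansion: try every unvisited move once
            return wins[(pile, m)] / n + math.sqrt(2 * math.log(total) / n)

        m = max(moves, key=ucb)
        # Simulation from the resulting position, where the opponent moves.
        opponent_wins = rollout(pile - m) if pile - m > 0 else 0
        # Backpropagation: an opponent loss counts as a win for this move.
        visits[(pile, m)] = visits.get((pile, m), 0) + 1
        wins[(pile, m)] = wins.get((pile, m), 0) + (1 - opponent_wins)

    return max(moves, key=lambda m: visits.get((pile, m), 0))

random.seed(0)
print(mcts_best_move(4))  # taking 1 (leaving 3 stones) is the winning move
```

With 4 stones on the pile, taking 1 leaves the opponent facing 3, a losing position, and the visit counts converge on that move without the algorithm knowing any game theory.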

Both systems, AlphaGo and ChatGPT, are fundamentally similar: while AlphaGo predicts the best move in a game, ChatGPT predicts the next word in a sentence. Both rely on patterns learned from data and use probabilities for decision-making. However, the type of data and the tasks differ greatly: AlphaGo predicts moves in a structured environment, whereas ChatGPT works with natural language, which is more ambiguous and sometimes unpredictable.
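The next-word idea can be illustrated with a toy bigram model: count which word follows which in a small corpus, then predict the most probable continuation. This is vastly simpler than the transformer behind ChatGPT, and the corpus here is invented, but the probability-of-the-next-word principle is the same:

```python
from collections import Counter, defaultdict

# Tiny invented corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def next_word(word):
    # Normalise counts into probabilities, then take the most likely word.
    options = counts[word]
    total = sum(options.values())
    probs = {w: c / total for w, c in options.items()}
    return max(probs, key=probs.get)

print(next_word("the"))  # "cat": it follows "the" twice, more than any other word
```

A real language model conditions on far more than the previous word and assigns probabilities over a huge vocabulary, but it is making the same kind of prediction.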

The techniques underlying AlphaGo and ChatGPT are not limited to games and chatbots; they are increasingly integrated into the modern, everyday world. Similar methods are used in healthcare for early diagnosis, in finance to predict stock markets, and in natural language processing of literary texts. Everyday tools, such as streaming platforms and recommendation systems, rely on the same kind of pattern recognition, thanks to AI’s remarkable ability to understand, improve and analyse.

Despite their impressive capabilities, all AI systems have clear limitations. One significant limitation is their inability to think or understand like humans, as they are the product of the dataset they were trained on. If the data is biased or incomplete, predictions may be inaccurate or misleading; such confidently wrong outputs are known as ‘hallucinations’ in Large Language Models (LLMs). Because these systems rely on data rather than pure human logic, the risk of error rises significantly in high-stakes situations, such as clinical settings.

To conclude, AlphaGo and ChatGPT show how AI models can train and learn from previous outcomes through techniques such as reinforcement learning. Many of these deep learning concepts are increasingly used in the modern world across numerous domains, such as healthcare and finance. However, these models have real limitations; their lack of common sense and true understanding may pose serious risks, especially in high-stakes situations. Despite these disadvantages, their successes showcase the potential of artificial intelligence in real-world applications.

THE ILLUSION OF AI OBJECTIVITY

Why “neutral” models can still be biased

While Artificial Intelligence (AI) is becoming more and more integral to our daily lives, there is ongoing debate about whether AI is neutral, objective and free of human bias or prejudice, since it is a system that makes choices purely from data. This idea stems from the perception that machines process data and output information without human emotion, background or bias, leading to objective results.

However, this idea conceals a much more complex truth. AI systems are built on man-made foundations: their training data, design decisions and development all carry some bias.

One of the most prominent sources of bias is training data. Many AI systems are trained on datasets that do not reflect the full diversity of the real-world population the AI is built to assist. This means the model may be unable to operate accurately for marginalised groups and other minorities.

Secondly, the individuals who design and develop AI carry their own views. Design choices inherently mirror the backgrounds and assumptions of the developers, subtly integrating human prejudice into the system. These biases can seep into the model through decisions such as how data is collected and how the model is evaluated, all of which can greatly affect how the AI behaves.

Finally, historical prejudice is one of the biggest sources of bias. This occurs when AI models are trained on data that reflects past societal inequalities, such as stereotypes and gender imbalance, which can cause the AI to replicate, if not amplify, those biases. This can have a significant impact in areas such as hiring and recruitment, healthcare and image generation.

A real-life example is a study Bloomberg conducted in 2023, in which an AI image generator was asked to produce over 5,000 images of people working in different ‘high-paying’ or ‘low-paying’ jobs. The study found that the image sets generated for every high-paying job were dominated by subjects with lighter skin tones, while darker skin tones were associated with prompts like ‘fast-food worker’ or ‘social worker’. Categorising the images by gender showed similar prejudice: for each image depicting a woman, the AI generated almost three times as many images of men, and most occupations were dominated by men, except for low-paying jobs such as caretaking.

Tackling these problems and creating truly unbiased AI programs will not be an easy task. At the end of the day, AI is created by humans and will always reflect some sort of bias. It is also difficult to fix because we do not fully understand how AI learns and generates information.

In conclusion, the belief that AI programs are neutral is misleading. What looks like objectivity actually contains numerous human biases embedded within the system. Although complete neutrality is almost unattainable, we can actively identify, mitigate and manage biases in a transparent and ethical manner.

CYBERSECURITY

Introduction

In an increasingly digital world, it is becoming essential to understand and deploy cybersecurity systems in our networks. At its core, cybersecurity refers to the protection of data, systems, and networks from digital attacks. It does this by raising firewalls, requiring strong passwords, encrypting data, and much more.

Common Cyber Threats

One may fall prey to a cyber threat quite easily. Here are some common ways:

Phishing – fraudulent emails or messages that trick victims into handing their passwords and data (usually through a booby-trapped link) to attackers. This can obviously be harmful: imagine sensitive information being exposed that could destroy a victim’s life or work.

Malware – by tricking a victim into downloading fake software (sometimes distributed through platforms like GitHub), hackers can insert viruses, worms, spyware, and more into the operating system.

Ransomware – when a hacker gains access to a victim’s computer and encrypts their files, denying access until the hacker’s demands are met (usually a hefty sum of money).

Data Privacy and Student Info Breach – when hackers gain access to a network as a whole and compromise sensitive or confidential information (like student data).

Source: European Parliament

Common Cybersecurity Strategies

To combat these unfortunate threats, experts have designed many ways to counteract these hackers. A few include:

Firewalls – experts can install digital “walls” to prevent hackers from gaining access to networks and data. They do this through traffic-filtering software and specially configured routers.

Anti-Virus and Anti-Malware Software – these programs scan for and remove malicious software from your computer. Many computers come preinstalled with them; our Surface Pros, for example, ship with Windows Defender.

Software Updates and Maintenance – regularly updating your computer’s software patches security loopholes (like the Log4j vulnerability, a flaw in a widely used logging library that hackers exploited).

Sources: NIST (National Institute of Standards and Technology), NCSC (National Cyber Security Centre)
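The firewall idea above can be sketched as a toy rule-based packet filter: check each incoming connection against a blocklist and an allowlist of ports. Real firewalls are far more sophisticated; the addresses (from reserved documentation ranges) and rules here are invented purely for illustration:

```python
# Toy stateless packet filter, loosely illustrating what a firewall does.
BLOCKED_IPS = {"203.0.113.9"}   # known-bad address (documentation range)
ALLOWED_PORTS = {80, 443}       # only web traffic permitted

def allow_packet(src_ip, dst_port):
    """Block anything from a blocklisted IP, then only allow listed ports."""
    if src_ip in BLOCKED_IPS:
        return False
    return dst_port in ALLOWED_PORTS

print(allow_packet("198.51.100.7", 443))  # True: web traffic from an unlisted IP
print(allow_packet("203.0.113.9", 443))   # False: source is blocklisted
print(allow_packet("198.51.100.7", 23))   # False: telnet port is not allowed
```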

What can you do?

Along with these expert solutions, you too can play your part in battling cybercrime.

Some include:

Be cautious with spam or suspicious emails and messages – these could be hackers trying to gain access to your computer (remember: official organisations never ask you to reveal sensitive information via email).

Learn to spot AI-generated content – AI-generated content often has odd inconsistencies, such as weird pauses and unnatural facial expressions.

Strengthen your passwords – using complex passwords (with a mix of capital letters, symbols, and sufficient length) can help deter cyberattacks.
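The password advice can be illustrated with a minimal checker. The scoring rules below are our own illustrative thresholds, not an official standard (NIST’s actual guidance emphasises length above all):

```python
import string

def password_score(pw):
    """Score a password 0-4 with simple illustrative rules: one point
    each for length >= 12, an uppercase letter, a digit, and a symbol."""
    score = 0
    if len(pw) >= 12:
        score += 1
    if any(c.isupper() for c in pw):
        score += 1
    if any(c.isdigit() for c in pw):
        score += 1
    if any(c in string.punctuation for c in pw):
        score += 1
    return score

print(password_score("password"))       # 0: short, no capitals, digits or symbols
print(password_score("Tr!ck13r-Pa55"))  # 4: long, with capitals, digits and symbols
```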

Conclusion

Cybersecurity isn’t just a fancy tech buzzword; it’s a basic survival skill in a world where threats evolve faster than most people update their passwords. From phishing scams to full-scale data breaches, the risks are real, and ignoring them is basically inviting hackers in for tea. By understanding common threats, using the right protective tools, and staying alert online, we can significantly reduce the chances of becoming the next easy target. In short: stay updated, stay cautious, and don’t assume “it won’t happen to me”, because that’s exactly what every victim thought right before it did.
