
Artificial intelligence is not new but is now democratised

By Dr Vathiswa Papu-Zamxaka, Deputy Vice-Chancellor for Research, Innovation and Engagement at the Tshwane University of Technology.

For the person on the street, the launch of ChatGPT by OpenAI in November 2022 spurred interest in large language models and brought artificial intelligence (AI) into their reality. No longer the realm of researchers, coders, science fiction literature and blockbuster movies, it’s now a tool accessible to anyone with an internet connection. It was the moment AI became democratised.

Since its launch, ChatGPT has raised important questions for society, education, employment, the economy, and governance.

AI through the ages

AI has been a human preoccupation since antiquity. The Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent automata such as Talos and artificial beings such as Galatea and Pandora.

Real and fictional examples abound through the ages. In the 10th century BC, Yan Shi is said to have presented King Mu of Zhou with mechanical men capable of moving their bodies independently. In the fourth century BC, Aristotle (384–322 BC) described means-end analysis, an algorithm for planning, in the Nicomachean Ethics; Newell and Simon’s General Problem Solver used the same algorithm in 1959.

In 1206, Ismail al-Jazari created a programmable orchestra of mechanical human beings in Mesopotamia. He has been described as the father of robotics and of modern-day engineering.

In 1642, the French mathematician and physicist Blaise Pascal invented a mechanical calculator.

In 1726, Jonathan Swift published Gulliver’s Travels, which described The Engine, a machine for improving speculative knowledge by practical and mechanical operations.

In 1863, the English writer Samuel Butler suggested that Darwinian evolution also applied to machines and speculated that they would one day become conscious and eventually supplant humanity.

In 1925, German physicists Wilhelm Lenz and Ernst Ising created the Ising model, which has been described as the first artificial recurrent neural network architecture.

In 1936, German engineer Konrad Zuse filed his patent application for a programme-controlled computer.

But it was in 1955 that a young Assistant Professor of Mathematics at Dartmouth College, John McCarthy, coined the phrase ‘Artificial Intelligence’ when he organised a group to clarify and develop ideas about thinking machines. This led to the Dartmouth Summer Research Project on Artificial Intelligence in 1956, widely considered the founding event of artificial intelligence as a field.

AI has also long been part of our modern existence. Think of Siri or Alexa, personalised content recommendations on retail platforms, autonomous vehicles, and navigation systems. It is only since the introduction of ChatGPT that alarm bells have sounded, pressing countries and industries to make decisions to ensure that AI’s power is wielded to the advantage of humanity. In 2023, there have been calls for a pause on giant AI experiments and for countries to implement AI rules and safety regulations. On April 13, 2023, OpenAI CEO Sam Altman confirmed that the company was not training GPT-5, the successor to GPT-4, which had been released in March 2023. Geoffrey Hinton, widely described as a ‘godfather of AI’, resigned from Google, citing existential risks. Legislators and prime ministers are calling for a global summit on AI.

What is it about AI that fuels hype and fear?

The advantages include streamlining work, saving time, automating repetitive tasks and facilitating decision-making, amongst many others. The disadvantages include the potential exclusion of sectors of society, mainly in developing countries, that cannot keep up with advancements in AI technologies, the potential for bias, the ease of generating deepfakes, and AI’s lack of human emotion.

Artificial Intelligence must be implemented ethically to ensure it benefits all spheres of society.

Some argue that AI threatens public purpose in areas such as discrimination, accountability and privacy. AI systems are trained on data collected by humans and are subject to sampling bias introduced by the people who collect and annotate that data.

For instance, researchers found that commercial facial analysis software from IBM, Microsoft, Amazon and Face++ misclassified darker-skinned women far more often than lighter-skinned men, largely because the training data contained few examples of women of colour. Accuracy improved once the vendors retrained their systems on more diverse data.

Therefore, to avoid entrenching bias and discrimination, engineers and data scientists must ensure that AI systems are audited for bias before being approved for public deployment.
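A bias audit need not be elaborate to be useful. As a purely illustrative sketch (the groups, labels and records below are hypothetical, not drawn from any real system), it can start with something as simple as comparing error rates across demographic groups:

```python
# A minimal, hypothetical bias audit: compare a classifier's
# error rate across demographic groups before approving deployment.
# The records below are invented for illustration only.

from collections import defaultdict

# Each record: (demographic_group, true_label, predicted_label)
test_results = [
    ("darker-skinned women", 1, 0),
    ("darker-skinned women", 1, 1),
    ("lighter-skinned men", 1, 1),
    ("lighter-skinned men", 1, 1),
    # ... in practice, thousands of labelled test cases per group
]

tallies = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, truth, prediction in test_results:
    tallies[group][0] += int(truth != prediction)
    tallies[group][1] += 1

for group, (mistakes, total) in tallies.items():
    print(f"{group}: {mistakes / total:.0%} error rate over {total} cases")

# A deployment gate could then reject any model whose error rates
# differ between groups by more than an agreed threshold.
```

Where the rates diverge sharply, as they did in the facial analysis study above, the model goes back for retraining on more representative data.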

Additionally, some experts caution about the potential danger to democracy, lamenting that large language models may be used to manipulate, persuade and engage in one-on-one interactions with voters. This includes the use of synthetic images and deepfakes, which can be generated easily. One example is the deepfake video of President Volodymyr Zelensky telling his people to surrender, which circulated in March 2022.

Despite these challenges, AI can potentially advance essential elements of public purpose, such as health, accessibility, safety, and fairness.

Health industry: AI is improving detection rates and accelerating diagnoses for cancer and other serious illnesses. In addition, the massive amounts of data from wearables could unleash a new revolution of insights into disease.

Access and safety: the rise of autonomous vehicles is expected to significantly enhance mobility, especially for people with disabilities and senior citizens, while drastically reducing driving fatalities.

Security: AI can be used to detect and defend against cyberattacks more effectively. It has also been deployed for environmental purposes, such as determining the location and status of endangered species and improving flood and wildfire predictions.

Zooming in on the higher education milieu, large language models such as ChatGPT force us to rethink how we deliver teaching and learning, particularly how students are assessed through assignments. One lecturer at Harvard University asked his students to video themselves while working on an assignment: if asked to write code, for example, they had to record themselves writing it.

Writing an essay or motivation letter for university admission must also be reassessed in light of ChatGPT. The debate likewise continues in research circles around the originality of work produced with ChatGPT: some argue that any use of it must be referenced, whilst others claim work produced with its help as their own.

Among the positives, ChatGPT improves access to education by removing barriers for people with disabilities and non-English speakers, and it offers the potential for personalised learning.

For instance, ChatGPT’s responses can be read aloud to students with sight impairments. It can summarise topics or concepts for students with learning disabilities and, paired with speech-to-text tools, let students who have trouble typing pose questions verbally. Moreover, ChatGPT can translate English content into a language students are more comfortable with, enabling them to understand their course material better.
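As a minimal sketch of what such translation might look like in practice (assuming the official OpenAI Python SDK and an API key in the environment; the model name, target language and sample text below are illustrative choices, not prescriptions):

```python
# Illustrative only: translating a snippet of course material with
# the OpenAI Python SDK. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat model would do
    messages=[
        {"role": "system",
         "content": "Translate the user's text into isiZulu."},
        {"role": "user",
         "content": "Photosynthesis converts light energy into chemical energy."},
    ],
)
print(response.choices[0].message.content)
```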

We can’t deny that AI has impacted almost every sector of society. I agree with the sentiments of former IBM chair Ginni Rometty that although it is referred to as artificial intelligence, the reality is that it will enhance us. So, instead of calling it artificial intelligence, we should embrace the concept of augmented intelligence: a human-centred partnership model in which people collaborate with AI to enhance cognitive performance.
