The Future of Gen AI


Introduction

In recent years, companies have built algorithms – Large Language Models (LLMs) – that use deep learning techniques to learn from massive datasets. These datasets are now so large that the models have effectively been trained on everything published on the internet over a long period of time.

Because these models are so complex, we find they can make rapid predictions that are often highly accurate. However, they can also make incorrect predictions, often referred to as “hallucinations.” This is because the models are trained on data that contains errors and biases.

Another challenge with LLMs is that they are expensive to train and run. For example, the training of GPT-4 reportedly cost over $100 million. This means that only large companies and organisations can afford to build models at this scale.

Despite these challenges, LLMs have the potential to be very powerful tools – a force for good in applications such as generating text, translating languages, and summarising conversations, across a diverse range of sectors.

In this short guide, we’ll offer greater insight into the world of LLMs and Gen AI, along with use cases that may inspire your next great AI-powered business solution.

If you’d like to learn more, feel free to get in touch with me or any member of the Appsbroker team.

Penton, Head of Data & Analytics, Appsbroker

Generative AI: It’s All About the History

1950s and 1960s

The history of generative AI can be traced back to the early days of machine learning. One of the first examples of generative AI was the Markov chain, a statistical model that could be used to generate new sequences of data based on input. However, the computational power and data resources needed for generative AI to flourish were not yet available at that time.

1970s and 1980s

This period saw the introduction of neural networks – a type of machine learning algorithm that can learn to recognise patterns in data. Neural networks were used to create a variety of generative AI applications, such as image generators, speech synthesisers, and natural language processing systems.

This was also a time of great experimentation in the field of generative AI, as researchers explored a variety of different approaches – including genetic algorithms, rule-based systems, and fuzzy logic – that helped lay the foundation for the more sophisticated models to come.

1990s and 2000s

As advances were made in machine learning and computing power, generative AI began to see renewed interest. In 2006, Geoffrey Hinton and his colleagues published a paper that introduced the concept of deep belief networks, which are a type of generative model that can learn from large amounts of data.

The real breakthrough for generative AI came in 2014, with the introduction of generative adversarial networks (GANs). GANs are a type of generative model that learns by competing with another model, called a discriminator. The discriminator tries to distinguish between real and fake data, while the generator tries to create fake data that is indistinguishable from real data.

As generative AI continues to develop, it is likely to have a profound impact on many different industries, including healthcare, entertainment, and education.


A Timeline of Generative AI

1950s – The Markov chain is introduced.
1960s – Joseph Weizenbaum develops ELIZA, one of the first chatbots.
1990s – Deep learning is developed.
2006 – Geoffrey Hinton and his colleagues publish their paper on deep belief networks.
2012 – The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is won by a deep convolutional neural network.
2014 – Ian Goodfellow and his colleagues introduce generative adversarial networks (GANs).
2018 – The Generative Pre-trained Transformer (GPT) is introduced, and ELMo, an open-source language model with 94 million parameters, is published.
2020 – GPT-3 is released with 175 billion parameters.
2022 – DALL-E 2, a generative AI model that can create realistic images from text descriptions, is released.
2023 – GPT-4 and Bard are released; their parameter counts are not publicly disclosed.

Challenging the Model

One of the key challenges with LLMs is that they are “black box” models: it is difficult to understand why they make the predictions they do. That makes their outputs hard to trust, especially for applications where the stakes are high.

However, there are a number of techniques that can be used to improve the explainability of LLMs. These techniques include using attention mechanisms, visualising the model’s predictions, and providing human feedback.
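As a concrete illustration of the first of those techniques, the sketch below inspects attention weights in a small open-source model. It assumes the Hugging Face transformers library and the public gpt2 checkpoint – stand-ins for illustration, not any particular production LLM:

import torch
from transformers import AutoModel, AutoTokenizer

# Load a small public checkpoint; any transformer that can return
# attention weights would work the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_attentions=True)

inputs = tokenizer("Large language models are black boxes.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one (batch, heads, seq, seq) tensor per layer.
# Averaging over heads in the final layer shows which earlier tokens each
# position attends to most strongly.
attn = outputs.attentions[-1].mean(dim=1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for i, tok in enumerate(tokens):
    print(f"{tok!r} attends most to {tokens[attn[i].argmax().item()]!r}")

Attention maps are only a partial window into model behaviour, but they are a cheap first step towards explaining individual predictions.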

As LLMs continue to develop, it is important to address the challenges of explainability so that users can understand – and trust – the predictions these models make.

Large Language Models have the potential to revolutionise the way we interact with computers. By providing us with access to vast amounts of information and the ability to generate text, these models can make our lives easier and more productive. However, it is important to use these models responsibly and ethically.

We need to ensure that they are not used to spread misinformation or create harmful content – and that means putting regulatory measures in place before it becomes a problem.

Q: How do we avoid misinformation when Gen AI models are trained on data from the internet?

A: The internet is a vast and complex place, and it can be difficult to ensure that the models don’t pick up harmful or misleading content. Equally, without fresh information, the value of responses can diminish rather than move the models forward, creating a big echo chamber.

However, even if this echo chamber did emerge, it would take a long time to get there. And the presumption is predicated on your generative AI content being published and then used to train LLMs. Some providers keep submissions private; others that don’t are facing concerns over IP.

Q: How do we combat bias in LLMs?

A: Although these models are trained on data from the internet – which can be a biased source – there are clever ways to mitigate this bias, such as by using a variety of data sources and manually reviewing the models.

Eventually, there will be models that can distinguish between human-generated and AI-generated content. However, this will take some time, as the models will need to be trained on a large amount of data.

Q: How do we know when AI is wrong?

A: The answer is that AI is not always right. However, the goal of AI is to be right most of the time. If it’s wrong, then – in the case of auditors – it will be flagged and corrected. While there is the potential for bias and echo chambers in LLMs, there are steps that can mitigate these risks. By being aware of these issues and taking steps to address them, we can ensure that LLMs are used for good.

Waste & Recycling

Appsbroker is working on a really interesting use case for the technology. Believe it or not, fires in recycling centres can cost businesses hundreds of millions of pounds – all because of electric scooters, power tools, and other lithium-ion battery devices.

The Challenge

Lithium-ion batteries, when damaged in the waste management process, can catch fire – disrupting operations, posing health risks, and carrying financial implications. ESA wanted an industry-wide, collaborative cloud solution that could safely detect hazardous items at the earliest stage of the waste management process, to reduce potential risks, limit insurance cost rises, and improve efficiency.

The Solution

We delivered a proof of concept in just seven weeks, manually annotating images from CCTV recordings at waste collection sites and training object detection models on Vertex AI with advanced deep learning techniques, achieving a precision rate of up to 69%. This rapid delivery of a working solution highlights how Google Cloud technology optimises workflows and drives actionable insights.
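For illustration, a proof of concept like this can be kicked off with a few lines of Python against the Vertex AI SDK. The project ID, dataset ID, and training budget below are placeholders, not the details of this engagement:

# A minimal sketch of training an AutoML object detection model on
# Vertex AI, assuming images have already been annotated and imported
# into a managed dataset. All identifiers are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="europe-west2")

dataset = aiplatform.ImageDataset(
    "projects/my-project/locations/europe-west2/datasets/1234567890"
)

job = aiplatform.AutoMLImageTrainingJob(
    display_name="battery-detector",
    prediction_type="object_detection",
)

model = job.run(
    dataset=dataset,
    model_display_name="battery-detector-v1",
    budget_milli_node_hours=20000,  # 20 node hours of training
)

Once trained, a model like this can be deployed to an endpoint and run against frames sampled from the CCTV feed.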

The Benefits

Google AI and ML can collate, manage, and report on data to transform risk management with operational improvements and better safety measures in the supply chain. This helps to mitigate risks and limit rising insurance costs.

Customer Quote

“Working with Appsbroker to develop an ML solution for the ongoing detection of hazards within recycling operations, such as lithium batteries, has been very successful. The sprint process was well documented and communicated, engagement across all stakeholders was great, and we proved a solution could be applied to an enormous problem for the waste industry.”

Healthcare

AI isn’t perfect, but it’s often better than humans at making diagnoses, delivering faster results with greater accuracy. And because LLMs perform so well on language, they can be used to detect early signs of mental illness in speech.

The Challenge

Early-stage dementia and Alzheimer’s can be difficult to diagnose, and there are not enough professionals with the necessary training to do so.

The Solution

The project uses a Speech-to-Text engine to transcribe a patient’s speech, and then uses an LLM to analyse the transcript for signs of illness.
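A simplified sketch of that pipeline is shown below, assuming Google Cloud Speech-to-Text and a Vertex AI model; the model name, prompt, and file name are illustrative assumptions rather than the project’s actual implementation:

# Transcribe a short speech sample, then ask an LLM to flag linguistic
# markers. All identifiers below are placeholders.
from google.cloud import speech
import vertexai
from vertexai.generative_models import GenerativeModel

# 1. Transcribe the patient's speech (synchronous recognition suits
#    short samples).
speech_client = speech.SpeechClient()
with open("patient_sample.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())
config = speech.RecognitionConfig(language_code="en-GB")
response = speech_client.recognize(config=config, audio=audio)
transcript = " ".join(r.alternatives[0].transcript for r in response.results)

# 2. Ask an LLM to analyse the transcript for signs of illness.
vertexai.init(project="my-project", location="europe-west2")
model = GenerativeModel("gemini-1.5-pro")  # illustrative model choice
prompt = (
    "Review the following transcript and list any linguistic markers "
    "associated with early-stage dementia, such as word-finding pauses, "
    f"repetition, or simplified grammar:\n\n{transcript}"
)
print(model.generate_content(prompt).text)

In practice, output like this would support – never replace – a clinician’s judgement.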

The Benefits

The project has shown that LLMs can be used to accurately identify early signs of these conditions, with an accuracy of 90%. This could also help to reduce the burden on healthcare systems by making it easier to diagnose early-stage mental illness, thereby preventing more serious cases from developing.

Although still in its early stages, this technology has the potential to make a real difference in the lives of people with these conditions. And as the models become larger and more sophisticated, the accuracy of the predictions is likely to improve even further – potentially revolutionising the early detection of mental illness.


Capital Markets

The global capital markets industry is a complex and sophisticated system worth trillions of dollars. To maintain trust and prevent hefty fines, companies need an accurate and automated monitoring system that’s fast and reliable.

The Challenge

Humans aren’t very good at going through thousands of pages of transcripts to detect market abuse – and unstructured data is a huge problem. As much as one-third of all data is unstructured, and that includes documents, transcripts, and voice files.

The Solution

Organisations can unlock value by structuring the data contained within audio recordings. AI automates the highly accurate transcription of those recordings, can decipher which traders are speaking, and can summarise the conversation into a few paragraphs.
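The sketch below illustrates the idea: transcribe a recording with speaker diarisation so individual traders can be separated, then have an LLM summarise it. It assumes Google Cloud Speech-to-Text and Vertex AI; all names and identifiers are placeholders:

# Transcribe trading-floor audio with speaker tags, then summarise.
from google.cloud import speech
import vertexai
from vertexai.generative_models import GenerativeModel

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    language_code="en-GB",
    diarization_config=speech.SpeakerDiarizationConfig(
        enable_speaker_diarization=True,
        min_speaker_count=2,
        max_speaker_count=6,
    ),
)
with open("trading_call.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())
response = client.recognize(config=config, audio=audio)

# With diarisation enabled, the final result carries a speaker tag on
# every word, so each part of the transcript can be attributed.
words = response.results[-1].alternatives[0].words
transcript = " ".join(f"[speaker {w.speaker_tag}] {w.word}" for w in words)

vertexai.init(project="my-project", location="europe-west2")
model = GenerativeModel("gemini-1.5-pro")  # illustrative model choice
print(model.generate_content(
    "Summarise this trading conversation in a few paragraphs and flag "
    f"any terms that could suggest market abuse:\n\n{transcript}"
).text)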

The Benefits

A powerful tool that helps prevent market abuse and protect investors, AI can also identify salient terms and create alerts for auditors. This helps them to focus on the most important information and minimises the risk of missing something. This approach is especially useful for regulatory compliance, making it easier for auditors to find relevant information.


Cycling World Records

The Challenge

In pursuit of multiple world records, James MacDonal – Customer Engineer at Google – needed to accurately record lap times with an automated, secondary system that removed the need for someone to press a button every 22 seconds, for up to 24 hours. James’ team also wanted to see timing data in real time in a single pane of glass and make adjustments to keep the attempt on track and improve performance.

The Solution

Appsbroker put certainty around the manual timing system with two additional backup systems using Google’s Vertex AI. These learned what a bike and rider look like, recognised them in the camera feed, cross-checked each detection against the laser trigger, and logged the lap time. BigQuery collated information into a data dashboard that ingested multiple datasets, so James and his team could focus on other things during Ride 24 rather than constantly having to crunch the numbers.
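As a flavour of the data plumbing involved, the snippet below streams a lap record into BigQuery as soon as it is detected, so a dashboard can pick it up in near real time. The dataset and table names are placeholders, not the project’s actual schema:

# Stream detected lap times into BigQuery for a live dashboard.
from datetime import datetime, timezone
from google.cloud import bigquery

client = bigquery.Client(project="my-project")
TABLE_ID = "my-project.ride24.lap_times"  # placeholder table

def log_lap(lap_number: int, lap_seconds: float) -> None:
    """Insert one lap record; the dashboard queries this table live."""
    rows = [{
        "lap_number": lap_number,
        "lap_seconds": lap_seconds,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }]
    errors = client.insert_rows_json(TABLE_ID, rows)
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")

log_lap(lap_number=1, lap_seconds=22.4)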

The Benefits

James broke two World Ultra Cycling Association (WUCA) age category world records, with data captured and verified by Appsbroker. This was a perfect challenge for Appsbroker, as our engineers love solving problems – and from hardware and software to app dev and data, this project offered it all.


About

Appsbroker partners with Google to help customers in various industries, such as retail, financial services, media and entertainment, and manufacturing, to overcome digital transformation challenges. As a trusted Google Premier Partner and Managed Service Provider, we have over fifteen years of experience working with innovators seeking to leverage the power of Google Cloud and deliver impactful business outcomes.


PROUD TO BE A B CORP

We’re dedicated to positively impacting both people and the planet.

FIND OUT MORE
#ExpectExtraordinary © Appsbroker 2023
