
PEEKING INSIDE THE BLACK BOX
from CHAIN - Issue I, Nov 2023
BY MOAYYAD MESLEH
Artificial intelligence is advancing rapidly and mysteriously, transforming the world in profound ways, and changing how we interact with and harness the power of machines. AI is the science of creating intelligent agents – machines, software, and systems that can simulate human-like thinking, learning, and problem-solving.
Did you notice the word “mysteriously”?
Yep, AI is mysterious because, in many cases, it produces results without providing a clear explanation of its decision-making process. This lack of transparency can make AI seem opaque, as it may be challenging to discern why a particular decision or prediction was made. In essence, this is what we often refer to as a “black box” – a term frequently used to describe the operations that happen behind the scenes in AI.
Why Do AI Systems Have Black Boxes?
AI systems end up with black boxes because they often rely on complex machine learning models, such as deep neural networks. These models consist of layer upon layer of interconnected artificial neurons, and they learn from data to make predictions or decisions. During training, a model adjusts its internal parameters to minimize errors, making it exceptionally proficient at specific tasks. However, as models become more intricate, understanding how they reach particular conclusions becomes challenging. It’s like trying to decipher a secret code – the more complex it is, the harder it becomes to crack.
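To get a feel for how quickly this opacity arises, here is a minimal sketch that counts the learned parameters of even a tiny neural network. The scikit-learn library and the Iris dataset are illustrative choices, not something the article prescribes:

```python
# A minimal sketch: count the learned parameters of a small neural
# network. scikit-learn and the Iris dataset are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Two hidden layers of 32 neurons each - tiny by modern standards.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
clf.fit(X, y)

# Weights (coefs_) plus biases (intercepts_) across all layers.
n_params = sum(w.size for w in clf.coefs_) + sum(b.size for b in clf.intercepts_)
print(f"Learned parameters: {n_params}")  # ~1,315 for 4 features and 3 classes
```

None of those parameters has an individual, human-readable meaning – and a modern deep network has millions or billions of them, which is exactly why tracing a single decision back through the model is so hard.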
The Quest for Explainability
In 2018, a landmark challenge in artificial intelligence (AI) took place, namely, the Explainable Machine Learning Challenge. The goal of the competition was to create a complicated black box model for a real-world financial dataset and explain how it worked. One team did not follow the rules. Instead of sending in a black box, they created a model that was fully interpretable. This raises a question: does real-world machine learning actually require black boxes, or could interpretable models often do the job? Imagine AI not only making a recommendation but also saying, “I think you should watch this movie because it has a high rating and it matches your previous preferences.” Talk about helpful!
BRING LIGHT TO THE BOX
With the advent of XAI (Explainable AI), the situation changed. Now, we are looking through a dual-tube night vision device. With XAI, we can uncover the reasons behind certain decisions. For instance, if we use deep learning for an image classification problem with multiple classes, XAI allows us to see the features that the neural network has extracted from images and on which it bases its predictions. Therefore, XAI highlights the features that the model relies on to make decisions.
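One simple way to surface those features is gradient-based saliency: backpropagate the winning class score to the input pixels and see which ones influenced it most. The sketch below is illustrative only – it assumes PyTorch, a pretrained ResNet-18, and a random tensor standing in for a real preprocessed image:

```python
# A minimal gradient-saliency sketch. PyTorch, ResNet-18, and the
# random input are illustrative assumptions, not the article's setup.
import torch
from torchvision import models

# Pretrained classifier (torchvision >= 0.13; older versions use
# pretrained=True instead of the weights argument).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in for a real, normalized 224x224 RGB image.
x = torch.randn(1, 3, 224, 224, requires_grad=True)

# Backpropagate the top class score to the input pixels.
scores = model(x)
scores[0, scores.argmax()].backward()

# Per-pixel saliency: how strongly each pixel moved the score.
saliency = x.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # torch.Size([224, 224]); plot as a heatmap
```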
Explainable AI (XAI) Techniques
LIME (Local Interpretable Model-agnostic Explanations) – LIME approximates the model locally around a single prediction with a simple, interpretable surrogate model, revealing which factors were most influential in that prediction. To implement LIME in Python, you can use the lime package, which provides tools for generating and interpreting LIME explanations.
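As a minimal sketch, here is LIME explaining a single prediction; the random-forest classifier and the Iris dataset are illustrative choices, not part of LIME itself:

```python
# A minimal LIME sketch: explain one prediction of a random-forest
# classifier. The model and Iris dataset are illustrative choices.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
clf = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    mode="classification",
)

# LIME perturbs this one sample and fits a simple local surrogate
# model around it to estimate each feature's influence.
exp = explainer.explain_instance(iris.data[0], clf.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because LIME only needs the model’s prediction function, the same code works for any classifier that exposes predicted probabilities – that is what “model-agnostic” means in practice.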
SHAP (SHapley Additive exPlanations) – SHAP uses the Shapley value from cooperative game theory to attribute a model’s prediction to its input features, quantifying how much each feature contributed. To implement SHAP in Python, you can use the shap package, which provides tools for generating and interpreting SHAP explanations.
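A minimal sketch follows, assuming a tree-based model (for which shap ships a fast TreeExplainer) and scikit-learn’s diabetes regression dataset as illustrative data:

```python
# A minimal SHAP sketch. The random-forest regressor and diabetes
# dataset are illustrative choices, not part of the shap package.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# One row per sample, one column per feature; each value is that
# feature's additive contribution to the prediction.
shap.summary_plot(shap_values, X)
```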
ELI5 (Explain Like I’m 5) – ELI5 inspects machine learning models and presents their learned weights and individual predictions in a simple, intuitive format that non-experts can understand. To implement ELI5 in Python, you can use the eli5 package, which provides tools for generating and interpreting ELI5 explanations.
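A minimal sketch, assuming a scikit-learn logistic-regression model on the Iris dataset (both illustrative choices), where eli5 renders the learned per-class feature weights as a readable table:

```python
# A minimal eli5 sketch. The logistic-regression model and the Iris
# dataset are illustrative choices, not part of the eli5 package.
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
clf = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

# explain_weights collects the learned coefficient of every feature
# for every class; format_as_text renders it as plain text.
explanation = eli5.explain_weights(
    clf,
    feature_names=iris.feature_names,
    target_names=list(iris.target_names),
)
print(eli5.format_as_text(explanation))
```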
In the end, I would say that XAI acts as a guiding beacon. It leads us toward a future where AI decisions are no longer mysteries but well-lit paths, making the technology fairer and more ethical. While we may not fully unravel the enigma of the AI black box, we’re on a path towards greater transparency and accountability in artificial intelligence. The journey continues, and with each step, XAI brings us closer to a world where AI is a trusted and indispensable tool.