
Understanding Explainable AI

“It’s important that we never lose sight of the fact that every AI application uses rules, data and inferences that are often derived from inputs originally sourced from human beings. As a consequence, bias, stereotypes and other influences are all in play, so we sometimes need a way to understand how and why certain conclusions are reached by the system. Accurate and understandable explanations are a key challenge as these systems evolve to deeper learning and complex neural network algorithms.” - Mimi Brooks

 

As humans, we typically use our instincts, expertise, and understanding to reach decisions. Unlike machines, we may also use empathy or creativity to solve a particular problem. Conversely, machines can process vast amounts of information and uncover relationships in data much faster than even the most intelligent among us. As a result, we now rely on machine ‘intelligence’ more than ever before to produce outcomes and generate predictions across a broad range of everyday transactions. This trend will grow exponentially as the Fourth Industrial Revolution evolves.

Yet when we rely on AI to make routine decisions, there are often many unanswered questions. In most cases, we have no idea why the system made a given recommendation or prediction, but in the absence of any dissent, we assume that the reasoning behind it was sound.

But what happens when we need to know how and why the machine determined a particular outcome? For example, why was a loan application turned down when the customer's credit history appeared to be impeccable? Why did an autonomous car behave in a certain way in an unusual set of circumstances? Why did a diagnostic algorithm reach conclusions whose rationale the treating doctor could not follow?

The fact is that complex AI systems are black boxes that provide minimal, if any, insight into how they work, which is precisely why such questions go unanswered. And as neural networks and deep learning algorithms gather momentum, the models behind contemporary AI-powered applications and tools are becoming even harder for users to understand.


Figure 1. Adding the Explainable Model and Interface to AI Applications

 

The traditional AI project identifies a use case for the technology, and then accesses historical data from existing sources. This is followed by building and training the AI model before conducting testing and validation. The model is then deployed into production and monitored to ensure optimal performance.

Yet as users demand ever greater transparency and interpretability in the decision-making process, the tools and methods that can assist users in explaining outcomes must become part of the validation process. This capability is the basis for what is being called ‘Explainable AI’ (XAI).

As shown in Figure 1, adding an explainable model and an explanation interface to the deployment allows the user to understand why a certain decision was, or was not, reached by the system. This capability rests on three components: Explainable Data, Explainable Predictions, and Explainable Algorithms (Figure 2). Although some AI systems attempt to improve transparency and interpretability by reporting a confidence level alongside each answer, XAI remains an enigma for most users.


Figure 2. The Components of Explainable AI
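
To make this concrete, the short sketch below adds an explanation step on top of a conventional training-and-validation pipeline. It is only an illustration under assumptions: the dataset, the model, and the use of permutation feature importance (one common way to produce Explainable Predictions) are choices made for the example, not a prescribed method.

# A minimal sketch: train and validate a model as in the traditional
# lifecycle, then add an explanation step that estimates how much each
# input feature contributes to the model's predictions.
# The dataset and model choice here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Build, train, and validate the model (the traditional pipeline).
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("validation accuracy:", model.score(X_test, y_test))

# Explainable Predictions: rank features by how much shuffling each one
# degrades performance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")

In an enterprise deployment, the output of such a step would feed an explanation interface, such as a dashboard for business users, rather than a print statement.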

 

If a business wants to understand exactly why and how an AI model generates certain responses and predictions, it needs visual or plain-language visibility into the logic of its Machine Learning (ML) method. ML draws on three types of techniques: supervised learning, which trains a model on known input and output data so that it can predict future outputs; unsupervised learning, which finds hidden patterns or intrinsic structures in input data; and reinforcement learning, in which an intelligent agent interacts with its environment and learns how to behave within that domain. Irrespective of the ML technique, Explainable AI is intended to let the user more easily interpret how the model weights its features to arrive at a given output.
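
As a simple illustration of what interpreting weighting and features can mean, the sketch below trains a small supervised model whose learned weights can be read directly. The loan-style feature names and toy data are hypothetical, chosen only to echo the loan-application example above.

# A minimal sketch, assuming a simple supervised model (logistic regression)
# whose learned weights can be inspected directly. Feature names and values
# are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income_k", "credit_history_years", "existing_debt_k"]
X = np.array([[55.0, 10, 5.0],
              [20.0, 1, 15.0],
              [80.0, 15, 2.0],
              [30.0, 3, 12.0],
              [65.0, 8, 4.0],
              [25.0, 2, 11.0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan approved, 0 = declined (toy labels)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# For a linear model, each weight multiplied by the (scaled) feature value is
# that feature's contribution to the decision score, which reads as a simple,
# plain-language style explanation of a single prediction.
applicant = np.array([[40.0, 2, 9.0]])
contributions = model.coef_[0] * scaler.transform(applicant)[0]
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f}")
print("prediction:", "approved" if model.predict(scaler.transform(applicant))[0] == 1 else "declined")

Deep neural networks do not expose their reasoning this directly, which is precisely why dedicated XAI tooling becomes necessary as models grow more complex.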

Importantly, the transition to XAI requires the establishment of a governance framework that sets standards and guidance for AI development teams, the implementation of best practices, and the careful selection of tools that can unlock the 'black box' of decisions reached by AI systems. A talent strategy to support AI governance is also essential, one that includes legal and risk experts who can navigate applicable regulations and help future-proof core AI applications.

Explainable AI is the future of business decision-making and a critical part of the future of AI because XAI models expose the reasoning behind their decisions. This provides a greater level of understanding between humans and machines, which in turn helps to build trustworthy, ethical, and responsible AI solutions.

 
