Keeping people in the loop with eXplainable AI (XAI)

What role does explainability (XAI) play in Machine Learning and Data Science today? The challenge of the last ten years in data science has been to find the right "algorithmic recipe" to create ML models that are ever more powerful, ever more complex, and therefore ever less understandable.

14/09/2022

AI systems based on these algorithms often make predictions without the designer or user of the AI being able to understand what drives them. These systems are nicknamed "black boxes" because of the opacity of their data processing.

The black-box phenomenon, and the natural distrust it generates, is a major obstacle to the adoption of AI. R&D effort must be devoted to trusted AI in order to find a form of explainability for every algorithm, even the most complex ones.

Among the actions needed to create a trusted AI (explainability / confidentiality / frugality / fairness), XAI is particularly important because it concerns the model as a whole, including algorithms and data. Trust then becomes the responsibility of the creator of the AI: the one who designed the model and chose the algorithm.

Many software companies whose solutions are put to malicious use hide behind the supposed neutrality of their algorithms. They shift the blame onto end-users and their use of the tool, absolving themselves of any ethical responsibility.

Social networks are the most telling example. They prefer not to have to explain their algorithms, even if it means losing control of them, rather than assume the ethical responsibility of their creation. The very purpose of explainability is to prevent such abuses and to enable a trusted AI.

Beyond these ethical and regulatory aspects, explainability has operational value. The autonomous car is probably the most telling example of what explainability (XAI) can and should bring to our daily lives. Whether from the point of view of the designer or of the end user, XAI gives access to the three levels of understanding of AI:

  • Understanding, or knowing in detail what the AI does and what influences it.
  • Mastery, or knowing what the AI can do.
  • Trust, or knowing what the AI cannot do.

The value of XAI from the designer's point of view

The process of creating a machine learning model is most often made up of three phases: design, training, and evaluation, iterated until the model is "frozen" in its desired form and put into production.

This is where explainability shows its first major operational benefit. With a better understanding of the algorithmic recipe, the designer can choose not to put a model into production if its explanations are not satisfactory, despite good statistical results. They can then restart a "design, training, evaluation" iteration and avoid deploying an uncertain AI.

XAI provides an understanding of what the algorithm is doing in the training phase and offers the opportunity to: 

  1. correct what it does (understanding),
  2. know its potential, what it can do (mastery), and
  3. see its limits (trust).

XAI lets us verify that the model we have built corresponds to what we wanted to model, and thus accelerates the development phase by helping data scientists "freeze" the model with confidence before putting it into production.
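As an illustration, here is a minimal sketch of such a pre-deployment sanity check, assuming a scikit-learn workflow; permutation importance is just one XAI technique among many (SHAP, LIME, etc.), and the dataset and model here are stand-ins, not Craft AI's method.

```python
# Sanity-check a candidate model's "algorithmic recipe" before freezing it:
# if the features driving predictions are not those domain experts expect,
# restart a design/training/evaluation iteration instead of deploying.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```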

During the deployment phase, the model in production is confronted with dynamic data, and explainability must be called upon once again to:

  1. Debug and understand design errors that could not emerge from simply training the model on static data (understanding).
  2. Understand how the model makes its predictions (mastery); a sketch of such a single-prediction explanation follows this list. Take the example of an autonomous car that stops at a stop sign. Obviously, the model must make the car stop every time, without exception. But it is crucial to understand how the car stops at the sign, and what its level of understanding is. Does it recognize the stop sign itself, or will it stop at every red sign? Can it adapt to unknown data, such as a degraded sign (stickers, deformation, etc.)?
  3. Know what the machine learning model cannot do and how it will react to unknown data (trust). Explainability makes it possible to be confident that the model will react well to a case not anticipated during training, where a simple statistical evaluation will not help. To continue with the autonomous-car example, several cases of cars stopping for no reason in the middle of a lane have been reported. The only parameter common to these cases was that they all happened at night. That is where statistical evaluation ends, but XAI made it possible to understand that the AI in these cars was confusing the moon with traffic lights.
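The stop-sign case is a computer-vision problem, where techniques such as saliency maps apply; as a lighter illustration of the same idea, here is a minimal sketch that explains one "live" prediction of a tabular model with SHAP values. It assumes the third-party shap package, and the dataset and model are stand-ins.

```python
# Explain a single prediction in production: which inputs pushed the model
# toward this particular output, and by how much?
import shap  # third-party library: pip install shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sample = X.iloc[[0]]                              # one incoming "live" observation
contributions = explainer.shap_values(sample)[0]  # one value per feature

# Rank features by the magnitude of their contribution to this prediction
ranked = sorted(zip(X.columns, contributions), key=lambda p: -abs(p[1]))
for name, value in ranked[:5]:
    print(f"{name}: {value:+.3f}")
```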

The value of XAI from the user's point of view

From an end-user perspective, explainability is also of major interest. We believe that XAI has a decisive role to play in the adoption of AI by the general public, by enabling a trusted AI and by making the user experience more fluid. Broader adoption is an essential step in the development of AI today. 

If software companies want to continue to promote artificial intelligence, they need to open the black box, expose and explain its workings. They need to give the end-user elements of understanding to keep them in the decision loop: should they follow the predictions or recommendations of the AI or not?

As with designers, AI users need access to all three levels of AI understanding:

  • Understanding, or knowing in detail what the AI does and what influences it.
  • Mastery, or knowing what the AI can do.
  • Trust, or knowing what the AI cannot do.

Understanding AI

To understand a service that uses artificial intelligence, each user must be able to question the decisions made by the algorithm.

It is possible to keep the human in the AI loop. This is done by using XAI methods.
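One way to support this questioning is to translate a model's raw feature contributions (such as the SHAP values computed above) into plain-language reasons the user can accept or override. A minimal, hypothetical sketch; the helper, feature names, and values are illustrative, not a real API:

```python
# Hypothetical helper: turn feature contributions into reasons an end-user
# can read, question, and ultimately follow or reject.
def explain_to_user(contributions: dict[str, float], top_n: int = 3) -> str:
    ranked = sorted(contributions.items(), key=lambda p: -abs(p[1]))
    reasons = []
    for name, value in ranked[:top_n]:
        direction = "increased" if value > 0 else "decreased"
        reasons.append(f"- '{name}' {direction} the score by {abs(value):.2f}")
    return "This recommendation was mainly driven by:\n" + "\n".join(reasons)

# Illustrative contributions only (e.g. the output of an XAI method)
print(explain_to_user({"payment history": 0.41,
                       "account age": -0.18,
                       "current balance": 0.07}))
```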

Mastering AI

To master an artificial intelligence, you need to know what it can do and understand the cause-and-effect relationships between its data and its decisions:

  • Knowing that my autonomous car will turn right or left depending on the country I am in.
  • Understanding the shortcomings that led to a recommendation.
  • Knowing that the assignment of a schedule or course has been done fairly.
  • Being able to act on my energy consumption to save money and pollute less.

The challenge of mastering AI is to create an empowering complementarity between humans and algorithms. Augmenting humans with machines can only be envisaged within the framework of a trusted AI, in particular thanks to its explainability.

Trusting AI 

XAI is the pillar of trusted AI, alongside fairness (removing the bias and discrimination present in data), frugality (minimizing the amount of energy needed to train models), and confidentiality (respecting privacy and data confidentiality).

Trust is based on tangible evidence; it is the result of understanding and mastery.

An ML model is built on a limited set of past data. Having explanations about how it works will give confidence in its behavior in production, especially if it is confronted with data it has never seen.

It is the role of explainability to go beyond statistical guarantees to convince the greatest number of people that trusted AI, which keeps humans at the heart of the technology, is possible and already operational.

Developing trust to encourage adoption is also the approach taken by the European Union. With the aim of creating a safe and innovation-friendly environment, the European Commission proposed a series of initiatives in 2021 that will help strengthen trusted AI and make explainability the norm in artificial intelligence.
