Go from rear-view to predictive with AI

21/02/2022

In this article, we will try to define what Artificial Intelligence is and what role data plays in it, understand how Machine Learning is the future of AI, and discover the use cases that are already revolutionising the daily life of our schools, hospitals, communities and companies.

"Artificial intelligence is when the machine tries to do what humans do, but not as well."

This definition, which might seem counter-intuitive or even crudely iconoclastic, is in fact one of the most precise available. It makes it easy to separate what is and is not AI through its aim of replicating human cognitive processes. A supercomputer, for example, is not AI: a computer will always be better than a human at performing a mathematical calculation. On the other hand, software that can reliably recognise a person in photographs does contain AI, because this is typically an exercise a human would be better at. But if an AI is systematically less effective at its task than a human could be, where does its value lie? In scaling up! What a human does perfectly, an AI will do almost as well; more importantly, it will do it millions of times over in the same amount of time. Photo recognition software is just one of the countless AI use cases that are already part of our daily lives.

The deployment of AI solutions is made possible by the multiplication of the data we produce and the explosion in the computing power of our equipment. This datafication of society, combined with a recently reached algorithmic maturity, enables a decisive shift for companies: from the "rear-view mirror" management that has prevailed until now (learning from the past) to a predictive vision.

To better understand how our data is used to make predictions, we must first distinguish the four levels of data analysis: observe, understand, predict and prescribe.

  • Observe: the famous rear-view mirror, which makes it possible to identify what happened through a fine-grained analysis of historical data.
  • Understand: a higher level of analysis that draws concrete lessons from the data in order to explain why it happened.
  • Predict: the first level of analysis that is no longer focused on the past but on the future: anticipating, in probabilistic terms, what will happen.
  • Prescribe: the final level of data analysis, which allows companies to make the best decisions by leveraging everything they have learned.
Source: Mick Levy, Get your data out of the fridge, Dunod, 2021.

The predictive and prescriptive levels are made possible by the advent of Machine Learning tools. Machine Learning rests on three phases: the learning phase (the machine detects patterns and correlations in historical data), the inference phase (the machine applies what it has learned to new data) and the supervision phase (managing the life cycle and performance of the models over time, with re-training when needed).
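
To make these three phases concrete, here is a minimal sketch using scikit-learn on synthetic data; the choice of model, the 0.9 alert threshold and the re-training step are illustrative assumptions, not a recommended setup.

```python
# Minimal sketch of the three Machine Learning phases on synthetic data.
# All modelling choices here (model, threshold, metric) are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # historical observations
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # pattern hidden in the data

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Learning phase: the machine finds patterns in historical data.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 2. Inference phase: the machine applies what it has learned to new data.
predictions = model.predict(X_test)

# 3. Supervision phase: monitor performance over time, re-train if it drifts.
accuracy = accuracy_score(y_test, predictions)
if accuracy < 0.9:                 # hypothetical alert threshold
    model.fit(X_train, y_train)    # placeholder for a re-training step
print(f"accuracy: {accuracy:.2f}")
```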

These Machine Learning models now allow companies to move into the predictive era and deploy operational AI solutions of three types: forecasting & optimisation, hyper-personalisation and automation.

Forecasting & optimisation

Accurately anticipate the needs, risks and failures of an industrial process in order to implement appropriate actions in the right place at the right time. 

Example: Predictive maintenance (Industry)

Detect structural defects, premature wear and tear and loss of machine efficiency in order to prioritise maintenance operations and maximise production.
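
As a hedged illustration of this kind of system, the sketch below scores machines by how far their sensor readings deviate from a healthy baseline, using scikit-learn's IsolationForest; the sensor features and contamination rate are hypothetical choices, not a production recipe.

```python
# Sketch: flag machines with anomalous sensor readings for priority maintenance.
# Sensor names and the contamination rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: vibration, temperature, power draw (synthetic healthy readings).
healthy = rng.normal(loc=[0.5, 60.0, 10.0], scale=[0.1, 2.0, 1.0], size=(500, 3))
# A few degraded machines drift away from the healthy pattern.
degraded = rng.normal(loc=[1.2, 75.0, 14.0], scale=[0.2, 3.0, 1.5], size=(10, 3))

detector = IsolationForest(contamination=0.02, random_state=0).fit(healthy)

readings = np.vstack([healthy[:5], degraded[:5]])
scores = detector.decision_function(readings)  # lower = more anomalous
# Prioritise maintenance on the most anomalous machines first.
priority = np.argsort(scores)
print("maintenance priority (machine indices):", priority)
```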

Hyper-personalisation

Anticipate the specific needs of users on the basis of their data and make tailor-made proposals to support them on a daily basis.

Example: Customised learning (Education)

The aim of this application is to support students through Adaptive Learning, recommending content based on their background, cognitive profile and educational gaps.
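
A very simple way to picture such a recommender is content-based matching between a student's skill gaps and the skills each piece of content covers; in the sketch below, the skill names, mastery scores and content vectors are all invented for illustration.

```python
# Sketch: recommend learning content closest to a student's gap profile.
# Profiles, content vectors and skill names are all hypothetical.
import numpy as np

skills = ["algebra", "geometry", "statistics"]
# 1.0 = fully mastered; gaps are where the student scores low.
student_mastery = np.array([0.9, 0.3, 0.5])

contents = {
    "geometry_basics":  np.array([0.0, 1.0, 0.0]),
    "stats_intro":      np.array([0.0, 0.0, 1.0]),
    "algebra_advanced": np.array([1.0, 0.0, 0.0]),
}

gap = 1.0 - student_mastery  # biggest gaps get the most weight
ranked = sorted(contents, key=lambda name: -(contents[name] @ gap))
print("recommended order:", ranked)  # geometry first: the largest gap
```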

Automation

Relieve employees of thankless, highly repetitive tasks in which the human adds no value to the process.

Example: Energy management of a building stock (Energy)

The objective is to develop an energy manager that automates the supervision of a group of buildings, tracks any drift in consumption and forecasts it to improve overall energy efficiency. 
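
One simple way to sketch such drift tracking is to compare daily consumption against a rolling baseline built from the past month; in the example below, the 30-day window and the alert threshold are illustrative assumptions on synthetic data.

```python
# Sketch: detect drift in a building's energy consumption with a rolling baseline.
# The window size and drift threshold are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
days = pd.date_range("2022-01-01", periods=120, freq="D")
consumption = rng.normal(100, 2, size=120)
consumption[90:] += 15  # simulated drift after day 90

series = pd.Series(consumption, index=days)
baseline = series.rolling(30).mean().shift(1)  # expected level from past month
drift = series - baseline

alerts = drift[drift > 10]  # hypothetical alert threshold
# Expected shortly after the injected drift begins.
print("first drift alert on:", alerts.index.min().date())
```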

At Craft AI, we are deploying more than ten use cases in key sectors of our society, such as energy, education and health.

This is possible because we are approaching maturity on the algorithmic side and are now able to collect enough data. However, a third ingredient is missing before we can really talk about a revolution: usage.

There is a deficit in the use and adoption of AI in companies, as Gartner found in a study: 85% of AI projects never make it into production and therefore generate no ROI. There are two main reasons for this lack of adoption:

  • The difficulty of industrialising AI projects (the famous industrialisation wall);
  • Society's distrust of AI: the fear of having machines make decisions for us.

If these two obstacles are properly addressed, the first by MLOps and the second by trustworthy AI, companies can very easily develop multiple use cases and fully enter the predictive era.


A platform compatible with the entire ecosystem

AWS, Azure, Google Cloud, OVHcloud, scikit-learn, PyTorch, TensorFlow, XGBoost, Jupyter, Python, R, Rust, MongoDB
