"Artificial intelligence is when the machine tries to do what humans do, but not as well."
This definition, which may seem counter-intuitive or even crudely iconoclastic, is in fact one of the most precise available. It makes it easy to separate what is AI from what is not, by its aim of replicating human cognitive processes. For example, a super calculator is not AI: a computer will always be better than a human at performing a mathematical calculation. On the other hand, software that can reliably recognise a person in photographs does involve AI, because this is typically a task a human would do better. If an AI is systematically less effective at its task than a human could be, what is the point? Scaling up. What a human does perfectly, an AI will do almost as well, but above all it will do it millions of times over in the same amount of time. A super calculator whose calculations can recognise photos is just one of the countless AI use cases that are part of our daily lives.
The deployment of AI solutions is made possible by the multiplication of the data we produce and the explosion in the computing power of our equipment. This datafication of society, together with a recent algorithmic maturity, enables a decisive shift for companies: from the "rear-view mirror" management that prevailed until now (learning from the past) to a predictive vision.
To better understand how our data is used to make predictions, we must first distinguish the four levels of data analysis: observe, understand, predict and prescribe (a short sketch of the four levels follows the list below).
- Observe: the famous rear-view mirror, which identifies what happened through detailed analysis of historical data.
- Understand: the next level of analysis, which draws concrete lessons from the data in order to explain why it happened.
- Predict: the first level of analysis that is no longer focused on the past but on the future, anticipating in a probabilistic way what will happen.
- Prescribe: the final level of data analysis, which allows companies to make the best decision by leveraging what they have learned.
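As a rough illustration, here is a minimal Python sketch of the four levels applied to a toy monthly-sales series; the figures, the linear trend and the restock threshold are all invented for the example.

```python
import numpy as np

# A toy monthly-sales series; the numbers are invented for illustration.
sales = np.array([100, 104, 110, 108, 115, 121, 119, 126, 130, 128, 135, 141])
months = np.arange(len(sales))

# 1. Observe: what happened? Summarise the historical data.
print("average monthly sales:", sales.mean())

# 2. Understand: why did it happen? Relate sales to a driver (here, time).
trend = np.polyfit(months, sales, deg=1)   # slope is the estimated growth per month
print("estimated growth per month:", trend[0])

# 3. Predict: what will happen? Extrapolate the fitted trend one month ahead.
next_month = np.polyval(trend, len(sales))
print("forecast for next month:", next_month)

# 4. Prescribe: what should we do? Turn the forecast into a decision rule.
reorder = next_month > 135                 # hypothetical stock threshold
print("trigger a restock order:", reorder)
```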

The predictive and prescriptive levels are made possible by the advent of Machine Learning tools. The concept of Machine Learning is based on three phases: the learning phase (the machine recognises patterns and correlations in historical data), the inference phase (the machine applies what it has learned to new data), and the supervision phase (managing the model's life cycle, monitoring its performance over time and retraining it when necessary).
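To make the three phases concrete, here is a minimal sketch using scikit-learn on a synthetic classification task; the dataset, the model choice and the retraining threshold are placeholders, not a prescription for a production pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic historical data standing in for a real business dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_live, y_train, y_live = train_test_split(X, y, test_size=0.3, random_state=0)

# 1. Learning phase: the model extracts patterns from historical data.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 2. Inference phase: the model is applied to new, unseen observations.
predictions = model.predict(X_live)

# 3. Supervision phase: performance is tracked over time; if it drifts
#    below an agreed threshold, the model is retrained on fresh data.
live_accuracy = accuracy_score(y_live, predictions)
if live_accuracy < 0.80:   # hypothetical service-level threshold
    model = RandomForestClassifier(random_state=0).fit(X_live, y_live)
```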
These Machine Learning models now allow companies to move into the predictive era and deploy operational AI solutions of three types: forecasting & optimisation, hyper-personalisation and automation.
Forecasting & optimisation
Accurately anticipate the needs, risks and failures of an industrial process in order to implement appropriate actions in the right place at the right time.
Example: Predictive maintenance (Industry)

Detect structural defects, premature wear and tear and loss of machine efficiency in order to prioritise maintenance operations and maximise production.
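As an illustration only, the sketch below scores a fleet of machines by predicted failure risk so that maintenance can be prioritised; the sensor features, the synthetic labels and the model choice are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Columns: vibration level, temperature, operating hours (invented units).
telemetry = rng.normal(size=(500, 3))
# Synthetic failure labels standing in for real maintenance history.
failed = (telemetry[:, 0] + 0.5 * telemetry[:, 1] + rng.normal(size=500)) > 1.0

# Learn the relationship between sensor readings and past failures.
model = GradientBoostingClassifier().fit(telemetry, failed)

# Score the current fleet and put the machines most at risk first
# in the maintenance queue.
fleet = rng.normal(size=(10, 3))
risk = model.predict_proba(fleet)[:, 1]
maintenance_order = np.argsort(risk)[::-1]
print("inspect machines in this order:", maintenance_order)
```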
Hyper-personalisation
Anticipate a user's specific needs on the basis of their data and make tailor-made proposals to help them on a daily basis.
Example: Customised learning (Education)

The aim of this application is to support students through Adaptive Learning and to recommend content based on their background, cognitive profile and educational gaps.
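A deliberately simplified sketch of the recommendation idea: content is ranked by how many of a student's current gaps it covers. The skill tags, the catalogue and the scoring rule are all hypothetical.

```python
# Skills the student has not yet mastered (hypothetical profile).
student_gaps = {"fractions", "percentages"}

# A toy content catalogue tagged with the skills each item covers.
catalogue = {
    "Intro to fractions":      {"fractions"},
    "Percentages in practice": {"percentages", "fractions"},
    "Geometry basics":         {"angles", "triangles"},
}

# Rank content by how many of the student's current gaps it addresses.
ranked = sorted(
    catalogue.items(),
    key=lambda item: len(item[1] & student_gaps),
    reverse=True,
)
for title, skills in ranked:
    print(title, "covers", len(skills & student_gaps), "gap(s)")
```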
Automation
Relieve employees of thankless tasks that require numerous iterations in which the human adds no value to the process.
Example: Energy management of a building stock (Energy)

The objective is to develop an energy manager that automates the supervision of a group of buildings, tracking any drift in consumption and forecasting it in order to improve overall energy efficiency.
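In the same spirit, here is a toy sketch of the drift-tracking and forecasting step for a single building; the meter readings, the baseline window and the 10% tolerance are illustrative assumptions.

```python
import numpy as np

# Toy daily consumption readings for one building, in kWh.
daily_kwh = np.array([220, 218, 225, 221, 223, 219, 252, 255, 249])

# Baseline: expected consumption learned from the first six days.
baseline = daily_kwh[:6].mean()

# Supervision: compare recent days with the baseline and flag a drift.
recent = daily_kwh[-3:].mean()
if recent > 1.10 * baseline:   # hypothetical 10% drift tolerance
    print(f"Consumption drift detected: {recent:.0f} kWh vs baseline {baseline:.0f} kWh")

# Naive forecast for tomorrow: extend the recent linear trend.
days = np.arange(len(daily_kwh))
slope, intercept = np.polyfit(days, daily_kwh, deg=1)
print("forecast for tomorrow:", slope * len(daily_kwh) + intercept)
```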
At Craft AI, we are deploying more than ten use cases in key sectors of our society such as energy, education and health.
This is possible because we are approaching maturity on the algorithmic side and are able to collect enough data. However, a third ingredient is still missing before we can really talk about a revolution: usage.
There is a deficit in the use and adoption of AI in companies, as Gartner found in a study: 85% of AI projects do not go into production and generate no ROI. There are two main reasons for this lack of adoption:
- The difficulty of industrialising AI projects (the famous industrialisation wall)
- Society's distrust of AI, the fear of having machines make decisions for us.
If these two obstacles are properly addressed, the first by MLOps and the second by trustworthy AI, you can quite easily develop multiple use cases and fully enter the predictive era.