[Replay] The industrialisation of AI & the concept of MLOps





MLOps has become a necessity for overcoming the difficulties of scaling up AI within companies: reproducibility, versioning, continuous integration... This was the subject of one of the talks on the industrialisation of artificial intelligence at Enjeu Day Industrie & Services 2022. Couldn't attend? Watch the replay.


MLOps (Machine Learning Operations) is not just a role, it is also a culture that involves training both data science teams and developers.

The Enjeu Day Industrie & Services 2022, organized by the Pôle Systematic Paris Région, had as its theme "Technological innovation at the service of the challenges of Industry". For the occasion, Matthieu Boussard, Head of R&D at Craft AI, spoke about the value of MLOps alongside:

  • Arnaud Renouf, President of Datexim;
  • and Jérémie Abiteboul, CTO of SPIDEO.

The replay of Enjeu Day Industrie & Services 2022

You can find the replay of the event on the YouTube channel of the Systematic Paris Région cluster.

A platform compatible with the entire ecosystem

Google Cloud
OVHcloud
TensorFlow
MongoDB

You may also like


MLOps ROI for companies

When speaking of Artificial Intelligence, the efficiency and profitability of projects depend on the ability of companies to deploy reliable applications quickly and at low cost. To succeed, you need to organize and improve the processes for creating, implementing, and maintaining AI models with a diverse and sizable team.

Read the article


Don't just build models, deploy them too!

You don't know what "model deployment" means? When you try to find out, do you end up searching for the meaning of too many baffling tech terms like "CI/CD", "REST HTTPS API", "Kubernetes clusters", and "WSGI servers", and feel overwhelmed or discouraged by this pile of new concepts?

Read the article

Trustworthy AI

Un-risk Model Deployment with Differential Privacy

As a general rule, all data ought to be treated as confidential by default. Machine learning models, if not properly designed, can inadvertently expose elements of their training set, with significant privacy implications. Differential privacy is a mathematical framework that enables data scientists to measure the privacy leakage of an algorithm. It comes with a tradeoff, however: the stronger a model's privacy guarantee, the lower its utility tends to be. In deep learning, training algorithms that achieve differential privacy are available, and various libraries make it possible to apply them with minimal modifications to a model.
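The privacy/utility tradeoff can be illustrated with the Laplace mechanism, one classic building block of differential privacy: noise is added to a query result, with a scale proportional to the query's sensitivity and inversely proportional to the privacy budget epsilon. This is a minimal sketch, not the method from the article; the dataset, sensitivity bound, and epsilon values are illustrative.

```python
import numpy as np

def noise_scale(sensitivity, epsilon):
    """Laplace noise scale: grows with sensitivity, shrinks as epsilon grows."""
    return sensitivity / epsilon

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise calibrated to (sensitivity, epsilon)."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(0.0, noise_scale(sensitivity, epsilon))

# Illustrative query: the mean age over 100 records, ages clipped to [0, 100],
# so adding or removing one record shifts the mean by at most 100 / n.
ages = np.clip(np.array([23, 35, 47, 29, 61] * 20), 0, 100)
sensitivity = 100 / len(ages)
rng = np.random.default_rng(42)

for epsilon in (0.1, 1.0, 10.0):
    noisy = laplace_mechanism(ages.mean(), sensitivity, epsilon, rng)
    print(f"epsilon={epsilon:>4}: noisy mean = {noisy:.2f} (true mean = {ages.mean():.2f})")
```

A small epsilon (strong privacy) yields a noisy, less useful answer; a large epsilon yields an accurate answer but a weaker privacy guarantee. Real deep-learning workflows use the same budget idea, but apply the noise during training rather than to a single query.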

Read the article