Model Serving: how to implement real-time inference?




In this webinar, we demonstrated how to build a machine learning pipeline, put it into production in just a few minutes, serve its results through a secure API, and analyze its execution with experiment tracking. As an example, we forecast electricity consumption by region, every 30 minutes over a 24-hour horizon, based on live data collected by the French grid operator (RTE).
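To make the forecasting setup concrete, here is a minimal sketch of the prediction grid implied by the use case: one forecast every 30 minutes over a 24-hour horizon yields 48 prediction points per run. The function name and its defaults are illustrative, not part of the Craft AI platform.

```python
from datetime import datetime, timedelta

def forecast_horizon(start, step_minutes=30, horizon_hours=24):
    """Build the timestamps a real-time inference run must cover:
    one prediction per step over the full horizon (48 points here)."""
    steps = (horizon_hours * 60) // step_minutes
    return [start + timedelta(minutes=step_minutes * i) for i in range(steps)]

points = forecast_horizon(datetime(2023, 5, 1, 0, 0))
print(len(points))  # 48 half-hourly prediction points
```

In production, each of these timestamps would be filled by the model served behind the secure API, using the latest RTE data as input.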

🎙 Speakers
Matteo Lhommeau, Data Scientist at Craft AI
Roman Vennemani, AI Architect at Craft AI
Hélen d'Argentré, Head of Marketing at Craft AI


👋 About
Craft AI has developed a SaaS platform that facilitates the development and deployment of Python applications without requiring DevOps skills. The platform makes it easy to set up computing infrastructure, create Python code pipelines, and deploy them in production as secure APIs.

We position ourselves in the market as an MLOps pure player, specializing in model deployment and management. We offer an easy-to-use solution with an ergonomic interface and clear pricing. As a French and European player, we offer our customers close proximity and responsiveness, along with a particular focus on sovereignty and data protection.

Thanks to our offer, our customers can cut the time needed to deliver an application to their end-users by a factor of 5, and reduce the associated costs by 80%.