Why MLOps is every data scientist's dream? Part 2

We will try to answer this question in two parts. This second article focuses on the first deployment and the iterations that quickly improve it, while the first one covered conception, data collection, exploration and application prototyping.






As we saw in the previous article (Why MLOps is every data scientist's dream? Part 1: Starting an AI project), MLOps accelerates the design of an AI application from its very beginning (infrastructure set-up, data collection and prototyping). However, it also brings tremendous benefits to the users themselves by allowing the data team to deliver and improve an application in a real-life context of use (or "in production").

Deploy & Monitor

The main MLOps feature, the one that unlocks all the others, is giving data scientists the ability to deploy their machine learning pipelines. An estimated 87% of AI projects never make it to the production phase, and the first goal of MLOps is to make this number smaller and smaller.

There are two main reasons behind this number.

  • First, deploying a robust and secure AI application in production requires close interaction between data science knowledge and DevOps skills: setting up servers, writing an API, adding authentication mechanisms, embedding your code in a Docker image, orchestrating the different services with Kubernetes. DevOps resources are not always available, and when they are, the collaboration between teams makes this whole process long and tedious. In an MLOps context, the DevOps team simply sets up and configures tools that then enable data scientists to put their models into production completely independently, breaking the wall of industrialization.
  • The second reason is that when deploying your models is hard, you tend to stay in the experimentation phase for a long time. When the time finally comes to put your sophisticated machine learning pipeline into the hands of the end user, whether a person or another program, you often realize that your application does not fully fit your users' needs. This is completely normal and should be expected: you can never anticipate everything, data changes over time, so do user needs, and there are things you only discover when you put your application to the test of the real world. But seeing your model fail after months of experimentation kills a lot of AI projects. That is why it is so crucial to put a first, simple model in production as soon as possible and iterate.
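To make the "writing an API" step above concrete, here is a minimal sketch of a prediction endpoint using only Python's standard library. The `predict` function and its weights are purely illustrative stand-ins for a real trained model, and a production service would of course add the authentication, containerization and orchestration mentioned above.

```python
import json
from wsgiref.simple_server import make_server

# Hypothetical stand-in for a trained model: a fixed linear scorer.
def predict(features):
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

def app(environ, start_response):
    """Minimal WSGI prediction endpoint (stdlib only, no framework)."""
    try:
        size = int(environ.get("CONTENT_LENGTH") or 0)
        payload = json.loads(environ["wsgi.input"].read(size))
        result = {"prediction": predict(payload["features"])}
        status = "200 OK"
    except (KeyError, ValueError):
        result = {"error": "expected a JSON body with a 'features' list"}
        status = "400 Bad Request"
    body = json.dumps(result).encode()
    start_response(status, [("Content-Type", "application/json")])
    return [body]

# To serve locally on port 8000 (blocks the process):
# make_server("", 8000, app).serve_forever()
```

In practice a framework such as FastAPI or Flask would replace the raw WSGI plumbing, but the shape is the same: parse the request, call the model, return a structured response.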

However, deploying an AI application is nowhere near the end of a data scientist's role in the AI project life cycle. What happens to the model once it is released? How does its performance evolve over time? Does the model need retraining, and on which criteria? Are there specific cases in which performance is low? Does it sometimes fail, and why? Is it actually used by the end user? Will it require bigger machines soon, and if so, what is the bottleneck: CPU, RAM, disk? These are some of the questions data scientists have to ask themselves to improve an AI application in a meaningful way. To answer them, they need to monitor a wide variety of metrics: model performance metrics, statistics on input data and predictions, usage metrics, execution logs, machine usage... Setting up all this monitoring can be very challenging, and data scientists usually don't have the expertise or the time to build it. MLOps tools not only let you deploy your models easily, they also provide all the monitoring you need to make the right decisions: both ready-made metrics, such as machine usage, and custom metrics that you can design to get the most useful insights for your application.
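As an illustration of the kind of custom metrics described above, here is a small, self-contained sketch that tracks a rolling window of predictions, latencies and errors. The `PredictionMonitor` class and its metric choices are hypothetical, not the API of any particular MLOps tool; a real platform would export such metrics to a dashboard or alerting system.

```python
import statistics
import time
from collections import deque

class PredictionMonitor:
    """Collects rolling-window stats on a model: prediction distribution,
    serving latency and error count (illustrative custom metrics)."""

    def __init__(self, window=1000):
        self.latencies = deque(maxlen=window)
        self.predictions = deque(maxlen=window)
        self.errors = 0

    def record(self, model, features):
        """Run one prediction while recording latency and outcome."""
        start = time.perf_counter()
        try:
            pred = model(features)
            self.predictions.append(pred)
            return pred
        except Exception:
            self.errors += 1
            raise
        finally:
            self.latencies.append(time.perf_counter() - start)

    def report(self):
        """Summarize the window: volume, mean prediction, p95 latency, errors."""
        lat = sorted(self.latencies)
        return {
            "count": len(self.predictions),
            "mean_prediction": statistics.fmean(self.predictions) if self.predictions else None,
            "p95_latency_s": lat[int(0.95 * (len(lat) - 1))] if lat else None,
            "error_count": self.errors,
        }
```

A sudden shift in `mean_prediction` or a rising `error_count` is exactly the kind of signal that tells a data scientist the model may need retraining or the input data has drifted.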

With the ability to deploy and monitor your AI applications in your own hands, you can now work in fast iterations through the whole machine learning cycle. That is why the primary job of MLOps is to enable data scientists to put their own models into production in a reliable, scalable and easily maintainable way, and to make it so easy that it becomes just another data science task.

Let's iterate on the MLOps cycle! Engage the business teams, collect feedback and quickly adjust your solution to the real needs

MLOps aligns all stakeholders on a common goal: the delivery of a user-centric AI application that end users can quickly bring into play. Indeed, any AI solution that is not used, no matter how technically good it is, cannot be considered a success. As we have seen previously, with the automation of repetitive tasks and the ability to reuse previous work, the data scientist can deliver product increments faster and continuously adapt to shifting user needs. MLOps can be seen as an application of the agile framework to machine learning solutions. For instance, if you need a means of transportation available shortly, a bike may be sufficient for the first few weeks; you do not have to wait for a complete, state-of-the-art electric car that would take several years to deliver and whose features might not even suit you.

MLOps reduces the duration of iterations on AI projects and puts the end-users at the heart of them. Data scientists no longer focus solely on the performance of their models but also on the design of a comprehensive and user-centric product. The benefits are numerous:

  • Business teams, seeing results integrated into their daily processes and available within a few weeks, feel much more involved in the project and are willing to devote more time to it
  • Even if the solution is imperfect, presenting concrete results lets data scientists collect more precise and structured feedback on their work
  • The metrics shared with the team are no longer centered on model performance, but rather on product use: availability of the service and results, acceptable latency, frequency of retraining...
  • When the need evolves, adaptation is faster thanks to the reusability of elements already developed (data ingestion and inference pipelines...). The data scientist saves time to focus on end-user satisfaction

We see here that MLOps brings teams together and breaks down silos. It enables a much more collaborative, iterative way of working, focused on the final value delivered to end-users.

Finally, MLOps provides strong benefits along the hard road of delivering AI applications to end-users

At every stage of the project, MLOps empowers data scientists and gives them more autonomy and comfort in their day-to-day activities. MLOps unlocks a new way of working and provides data scientists with all the tools they need to reduce friction and personal frustration along the difficult road of delivering AI applications.

In addition, it improves collaboration with all stakeholders, especially end-users, and simplifies the iteration process in order to build an AI solution centered on real business needs.

Finally, MLOps reduces the risks associated with the project and closes the gap between experimentation and industrialization by allowing data scientists to build a solution that is "production-ready" from the start.

Written by Raphaël Graille, Senior Data Scientist & Roman Vennemani, AI Architect
