Craft LLMOps
Your trusted AI operationalization platform

Craft AI's MLOps & LLMOps platform contains everything an ML Engineer or Data Scientist needs to build and operationalize an AI application. From generative AI to Machine Learning: fine-tune, deploy, and monitor model performance.
Cut the time needed to industrialize your AI by a factor of 16.

Craft AI platform

LLMOps at the service of Data Science teams

Without MLOps, the industrialization of an AI application takes 7 months on average, and costs €500K per project.
With an MLOps platform, operationalize your AI in just a few days.

Deploy your ML models & LLMs

Streaming, Real-time, Batch
A/B Testing, Canary

Scale up your models

Build robust, massively used AIs

Monitor and explain your models and LLMs

Drift, Metrics, Toxicity, Bias, Evaluation, Alerts

Deploy and build infrastructures

In just a few clicks, without DevOps skills

Discover the platform

Our platform under the microscope

Machine Learning pipeline

Develop your models in a workspace compatible with all open-source libraries. Import and fine-tune your LLMs in dedicated pipelines or create RAG workflows. Identify key steps in your code. Assemble them into complete pipelines from data preparation to model training.

The pipeline is versionable and easily deployable. Our containerization technology ensures that it scales without code rework.
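The idea of assembling named steps into a single runnable pipeline can be sketched in a few lines. This is a hypothetical illustration in plain Python, not the platform's actual API: the step functions and `build_pipeline` helper are invented for the example.

```python
# Hypothetical sketch: chaining named steps into one pipeline,
# from data preparation to model training.

def prepare_data(raw):
    """Data-preparation step: scale values to the [0, 1] range."""
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]

def train_model(features):
    """Training step: here, a trivial mean-threshold 'model'."""
    threshold = sum(features) / len(features)
    return {"threshold": threshold}

def build_pipeline(steps):
    """Chain steps so each step's output feeds the next."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

pipeline = build_pipeline([prepare_data, train_model])
model = pipeline([3.0, 7.0, 5.0])
```

Because each step is an ordinary function, the assembled pipeline can be versioned and containerized as a unit without reworking the step code.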

Machine Learning Pipelines - Screenshot of the platform

Environments (CPU / GPU)

Set up environments made up of databases (vector database) and compute servers (CPU & GPU) in just a few clicks. Easily configure the size and power of each component. Monitor costs for each environment in real time. Build environments dedicated to experimentation and production.

Environments are easily parameterized by Data Scientists without DevOps skills. Control your infrastructure budget with a FinOps module.

Model Serving

Deploy your Machine Learning pipelines in production in just a few clicks. Create a service to expose the pipeline via API to end-users in real time. Define execution conditions (temporal or metric-based) to automate retraining. Redeploy your pipelines using multiple methods: A/B testing, Canary, Shadowing, Failover.

Deployment takes just a few clicks and requires no DevOps skills, saving significant time and giving Data Scientists full autonomy.
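A canary redeployment sends a small share of traffic to the new model version while the rest continues to hit the stable one. Here is a minimal in-process sketch of that routing logic, with invented names; the platform's actual mechanism may differ.

```python
import random

def make_canary_router(stable_fn, canary_fn, canary_share=0.1, seed=None):
    """Route a small share of requests to the canary model version."""
    rng = random.Random(seed)
    def route(request):
        # With probability `canary_share`, serve the canary version.
        if rng.random() < canary_share:
            return canary_fn(request)
        return stable_fn(request)
    return route

stable = lambda request: {"model": "v1", "score": 0.8}
canary = lambda request: {"model": "v2", "score": 0.9}
route = make_canary_router(stable, canary, canary_share=0.1, seed=42)

results = [route({})["model"] for _ in range(1000)]
share_v2 = results.count("v2") / len(results)
```

If the canary's monitored metrics hold up, its share can be raised progressively to 100%; otherwise traffic falls back entirely to the stable version.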

Deployments - Screenshot of the MLOps platform

Execution tracking

Get detailed tracking of your executions, step by step. Analyze the execution time for each step, as well as the resources used. Easily visualize the results and models generated by your pipelines. Retrieve your pipeline metrics and input & output settings. Be alerted in the event of a failure during pipeline execution, so you can proceed with debugging.

Run tracking takes the guesswork out of monitoring executions and gives Data Scientists confidence that their pipelines are running properly.
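Step-by-step tracking amounts to wrapping each pipeline step so its duration and outcome are recorded. A minimal sketch of that idea, with hypothetical names:

```python
import time

def tracked(step_name, fn, log):
    """Wrap a pipeline step to record its duration and status."""
    def wrapper(*args):
        start = time.perf_counter()
        status = "failed"
        try:
            result = fn(*args)
            status = "succeeded"
            return result
        finally:
            # Runs whether the step succeeded or raised, so failures
            # are logged too and can be surfaced as alerts.
            log.append({"step": step_name,
                        "status": status,
                        "seconds": time.perf_counter() - start})
    return wrapper

log = []
clean = tracked("prepare_data", lambda xs: [x * 2 for x in xs], log)
result = clean([1, 2, 3])
```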

Tracking executions - Screenshot of the MLOps platform

ML Monitoring

Monitor the performance of your production models in real time. Evaluate the reliability of your LLMs (loss of context, accuracy drift, hallucinations or tone alteration). Automatically detect when your models drift and lose accuracy. Trigger re-training to correct drifts. Manage your infrastructures and deployments by monitoring their health. Set alerts to react as quickly as possible in the event of a problem.

The production model monitoring tool provides a 360-degree view of the health of your AI applications in real time.
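One common way to detect drift is to compare the distribution of live data against the training distribution, for example with the Population Stability Index (PSI). The sketch below is a stdlib illustration of that technique, not the platform's own detector; a PSI above roughly 0.2 is conventionally read as significant drift.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample
    (e.g. training data) and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def shares(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 100 for i in range(100)]      # training distribution
stable = [i / 100 for i in range(100)]         # live data, unchanged
drifted = [0.5 + i / 200 for i in range(100)]  # live data shifted upward
```

In practice the PSI (or a similar statistic) is computed on a schedule for each input feature, and crossing the threshold triggers an alert or an automated retraining run.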

ML Monitoring - Screenshot of the MLOps platform

Explainable AI (XAI)

Analyze the weight of each explanatory variable in the prediction. Visualize your decision trees. Obtain a local or global explanation, agnostic of the underlying algorithm, using SHAP. Inspect and analyze the behavior of each explanatory or predicted variable. Monitor the evolution of explainability even in production.
Explain and evaluate the capabilities and performance of your LLMs using our LLM monitoring pipeline.

With the model explainability module, remove the "black box" nature of algorithms and keep a human in the loop.
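SHAP attributes a prediction across input features. A crude, model-agnostic proxy for the same idea can be sketched with the stdlib: measure how much the prediction moves when a single feature is reset to a baseline value. This simplified perturbation is for illustration only, not full Shapley values and not the SHAP library's API.

```python
def attribute(predict, instance, baseline):
    """Per-feature attribution: how much the prediction changes
    when one feature is replaced by its baseline value."""
    full = predict(instance)
    contributions = {}
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline[i]
        contributions[i] = full - predict(perturbed)
    return contributions

# Toy linear model whose weights make the attributions easy to verify.
weights = [2.0, 0.0, -1.0]
predict = lambda x: sum(w * v for w, v in zip(weights, x))

contrib = attribute(predict, [1.0, 5.0, 3.0], baseline=[0.0, 0.0, 0.0])
```

For this linear model the attribution recovers each feature's weighted contribution exactly; for non-linear models, Shapley-style methods average such perturbations over feature coalitions.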

XAI - Screenshot of the MLOps platform

Request a demo

A platform compatible with the entire ecosystem

Google Cloud
OVHcloud
TensorFlow
MongoDB

Get started!

Import your code

From your individual workspace, retrieve your existing code from a GitHub or GitLab repository and start a new Data Science project, alone or in a team. Simply run your code without worrying about Dockerfile configuration, which is handled natively by the pipeline.

Create pipelines

Build your own Machine Learning pipelines by identifying a series of steps ranging from data preparation to model training. You could, for example, import an open-source LLM and build a fine-tuning pipeline to specialize the LLM, followed by an inference pipeline to serve the results. Save and version each pipeline in the platform.

Run the pipelines

Run a pipeline directly in the workspace or on the platform for additional resources and access to various monitoring, control and explainability features.

Deploy pipelines in production

Once the pipeline is validated, you can deploy it to production very easily either by exposing it via an endpoint or by defining runtime rules. Multiple redeployment methods are also available to you.


We answer the most common questions you may have about the MLOps platform.

See all questions

In which language(s) is the platform available?

The platform is available in English at first. We will progressively add a multi-language mode allowing you to set the language of your choice.

What are MLOps and LLMOps?

MLOps is a set of practices aimed at deploying and maintaining Machine Learning models in production reliably and efficiently. LLMOps builds on MLOps and adds the functionality needed to specialize, operationalize and monitor LLMs.

What can the MLOps platform do?

The platform, currently in version 1.1, can be used to set up computing and storage infrastructures, create and deploy Machine Learning pipelines, and fine-tune, operationalize and monitor LLMs. In short, it is a platform for both MLOps and LLMOps.

How does the subscription / billing of the platform work?

You pay a subscription fee that gives you access to all the features of the MLOps platform, present and future. On top of that, you will be billed monthly based on your usage of computing and storage resources.

How to stay informed about Craft AI news?

Follow us on LinkedIn & Twitter and visit our blog, where we regularly publish articles on artificial intelligence.

Lead your AI to production

Test our MLOps platform free of charge to accelerate the deployment and management of Machine Learning models and LLMs.

Test for free
