Keep up to date with the latest developments in trusted AI, MLOps and their applications in your sector.
News in the spotlight
Why is MLOps every data scientist's dream? Part 2
We will try to provide some answers to this question in two parts. This second article focuses on the first deployment and the iterations to quickly improve it, while the first one covered conception, data collection, exploration, and application prototyping.
Why is MLOps every data scientist's dream? Part 1
We will try to provide some answers to this question in two parts. This first article focuses on conception, data collection, exploration, and application prototyping, while the second one covers the first deployment of the solution and the iterations to quickly improve it.
Improve your ML workflows with synthetic data
As a data scientist, you know that high-performance machine learning models cannot exist without a large amount of high-quality data to train and test them. Most of the time, building an appropriate machine learning model is not a problem: there are plenty of architectures available, and since it is part of your job, you know exactly which one will best suit your use case. However, obtaining a large amount of high-quality data can be much more challenging: you need a labeled, cleaned dataset that matches your use case exactly. Unfortunately, such a dataset is usually not already available. Maybe you only have a little data matching your requirements, maybe you have data that does not match exactly what you want (it can be biased or have unbalanced classes, for example), or maybe a dataset exists but you cannot access it because it contains private information. You therefore need to collect, label, and clean new data, which can be a time-consuming and costly process, or may not be possible at all.
How will MLOps streamline your AI projects?
When speaking of Artificial Intelligence, the efficiency and profitability of projects depend on the ability of companies to deploy reliable applications quickly and at low cost. To succeed, you need to organize and improve the processes for creating, implementing, and maintaining AI models with a diverse and sizable team.
All our news
Don't just build models, deploy them too!
You don't know what "model deployment" means? And when you try to understand it, do you end up searching for the meaning of too many baffling tech terms like "CI/CD", "REST HTTPS API", "Kubernetes clusters", "WSGI servers"... and feel overwhelmed or discouraged by this pile of new concepts?
Un-risk Model Deployment with Differential Privacy
As a general rule, all data ought to be treated as confidential by default. Machine learning models, if not properly designed, can inadvertently expose elements of their training set, which can have significant privacy implications. Differential privacy, a mathematical framework, enables data scientists to measure the privacy leakage of an algorithm. However, it is important to note that differential privacy necessitates a tradeoff between a model's privacy and its utility. In the context of deep learning, algorithms are available that achieve differential privacy, and various libraries make it possible to attain it with minimal modifications to a model.
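The privacy/utility tradeoff mentioned above can be illustrated with the classic Laplace mechanism on a simple count query (a minimal sketch in Python with NumPy; `laplace_count` and its parameters are our own illustrative names, not part of any particular differential-privacy library):

```python
import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count under epsilon-differential privacy.

    Adding noise drawn from Laplace(0, sensitivity / epsilon) to the
    exact answer yields an epsilon-differentially private release: the
    smaller epsilon is, the stronger the privacy guarantee and the
    noisier (less useful) the answer.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_count + rng.laplace(loc=0.0, scale=scale)

# The tradeoff in action: a strict privacy budget (epsilon = 0.1)
# produces far noisier answers than a loose one (epsilon = 10).
rng = np.random.default_rng(42)
strict = [laplace_count(1000, epsilon=0.1, rng=rng) for _ in range(1000)]
loose = [laplace_count(1000, epsilon=10.0, rng=rng) for _ in range(1000)]
```

Here `sensitivity` is how much one individual's record can change the count (1 for a counting query); the noise scale grows as the privacy budget `epsilon` shrinks, which is exactly the privacy/utility tradeoff the article discusses.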
A Beginner's Guide to MLOps
MLOps is the combination of Machine Learning and Operations. Like DevOps in the software world, the concatenation of "ML" with the agile execution methodology "Ops" augurs a coming of age for Machine Learning.
Keeping people in the loop with eXplainable AI (XAI)
What role does explainability (XAI) play in Machine Learning and Data Science today? The challenge of the last ten years in data science has been to find the right "algorithmic recipe" to create ML models that are more and more powerful, more and more complex and therefore less and less understandable.
The industrialisation of AI & the concept of MLOps
MLOps appears to be a necessity to overcome the difficulties in scaling up AI within companies: reproducibility, versioning, continuous integration... This was the subject of one of the conferences on the industrialisation of artificial intelligence at Enjeu Day Industrie & Services 2022. You couldn't attend? Watch the replay.
A guide to the most promising XAI libraries
Using Machine Learning to solve a problem is good; understanding how it does so is better. Indeed, many AI-based systems are characterized by their obscure nature. When seeking an explanation of how a given result was produced, exposing the system's architecture or its parameters alone is rarely enough. Explaining that a CNN recognized a dog by detailing the neural network's weights is, to say the least, obscure. Even for models deemed glass-box, such as decision trees, a proper explanation is never obvious.
The difficulties of industrialising AI
For a long time, the main objective of a Data Scientist has been to find the best algorithmic recipe to answer a given business problem. To facilitate this prototyping phase, many tools have emerged such as open-source libraries and Data Science platforms; the latter even offer a no-code experience.
Implement PostgreSQL Pool connection in Rust
At Craft AI, we are building a new product so data scientists can quickly and easily code their machine learning algorithms and push them to production. Our purpose is to make life easier for data scientists: for example, we handle data storage so they do not have to bother with saving and loading data from a database.
Enter the predictive age with AI!
In this article, we will try to define what Artificial Intelligence is and what role data plays in it, understand how Machine Learning is the future of AI, and discover the use cases that are already revolutionizing the daily life of our schools, hospitals, communities, and companies.