Trusted AI: can you explain an LLM?

17/10/2023

The explosion in the use of generative AI raises a major problem: can we trust and understand AI results?

💡 Description
The race for LLM power that we have witnessed over the last few months makes the question of algorithm explainability as crucial as it is difficult: can we still understand the inner workings of a model with several tens of billions of parameters?
While Shapley values have become the standard for explainability in Machine Learning, their usefulness breaks down on a model as complex as an LLM. The very nature of the explanation sought for an LLM differs from that sought for a classification model.
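To see why the scale matters, here is a minimal sketch of the kind of Shapley-value explanation that works well for a classical classification model, using the open-source `shap` package and a scikit-learn example (the dataset and model are illustrative choices, not part of the webinar):

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a small classification model on a tabular dataset with 30 features.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shapley values assign each input feature an additive contribution
# to one individual prediction.
explainer = shap.TreeExplainer(model)
explanation = explainer(X.iloc[:1])

# Thirty signed contributions are readable by a human; nothing comparable
# exists for an LLM with tens of billions of parameters.
for name, value in zip(X.columns, explanation.values[0, :, 1]):
    print(f"{name}: {value:+.4f}")
```

Each printed value tells you how much one feature pushed this prediction toward the positive class. It is exactly this one-number-per-input reading that has no useful analogue for an LLM.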
So how do you know whether a Large Language Model is acting like a "parrot", repeating what it has learned, or whether it is answering based on a concept it has genuinely acquired?
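One naive heuristic, shown below purely as a sketch and not as the method presented in the webinar: a model that merely parrots a memorized string tends to fail when the prompt is rephrased, while a model that has acquired the underlying concept should answer consistently across paraphrases. The model choice and prompts here are illustrative assumptions, using the Hugging Face `transformers` library:

```python
from transformers import pipeline

# Any causal language model works here; gpt2 is just a small, public example.
generator = pipeline("text-generation", model="gpt2")

# Three paraphrases of the same underlying question.
paraphrases = [
    "The capital of France is",
    "France's capital city is named",
    "If you fly to the capital of France, you land in",
]

# Greedy decoding, so any difference comes from the prompt, not sampling noise.
for prompt in paraphrases:
    out = generator(prompt, max_new_tokens=5, do_sample=False)
    print(repr(out[0]["generated_text"]))
```

If the completions agree, that is weak evidence of a robust internal representation; if they diverge, the "answer" may just be surface-level recall. The webinar goes further than this simple probe.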

Schedule
14:00-14:10: Introduction
14:10-14:30: Presentation
14:30-14:45: Q&A

🎙 Speakers
Bastien Zimmermann, R&D Engineer at Craft AI
Hélen d'Argentré, Head of Marketing at Craft AI

👋 About
A French startup pioneering trusted AI, Craft AI specializes in the industrialization of artificial intelligence projects. Over the past 8 years, Craft AI has developed unique technological expertise in the operationalization of Machine & Deep Learning models. In particular, the company enables its customers to develop, specialize, and operate their own generative AI. Finally, it promotes a responsible vision of AI: energy-efficient, explainable, and respectful of privacy.