How to detect silent failures in ML models | Wojtek Kuberski | Conf42 Machine Learning 2022
The objective of this talk is to build an understanding of why and how you need to monitor ML in production. We'll cover a taxonomy of failures based on use cases, data, the characteristics of the systems models interact with, and human involvement. You'll learn the tools (both statistical and algorithmic) used to deal with these failures, their applications, and their limits. The fact is, the world changes: data drift and concept drift lead to model degradation and losses to the business. We'll leverage real-life use cases to showcase the importance of ML monitoring in one of the biggest industries. Finally, we'll show you how to address this by monitoring ML performance.

Other talks at this conference 🚀🪐 https://www.conf42.com/ml2022

0:00 Intro
0:22 Talk
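As a flavor of the statistical tools the talk refers to, here is a minimal sketch (not taken from the talk itself) of a univariate data-drift check: it compares a reference feature distribution against recent production values with a two-sample Kolmogorov-Smirnov test. The data, threshold, and variable names are illustrative assumptions.

```python
# Hypothetical example: detect a shift in one feature's distribution
# between training-time (reference) data and production data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # stand-in for training-time feature values
production = rng.normal(loc=0.3, scale=1.0, size=5_000)  # stand-in for (shifted) production values

# Two-sample KS test: small p-value suggests the two samples come from different distributions.
statistic, p_value = ks_2samp(reference, production)

if p_value < 0.01:  # threshold chosen arbitrarily for illustration
    print(f"Possible data drift detected (KS={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```

A univariate check like this flags drift in a single feature; catching silent failures in practice also requires multivariate drift detection and, ultimately, estimating the model's performance on unlabeled production data, which is what the talk focuses on.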