Monitoring Model Performance with Crowdsourcing
Online event
Building a human in the loop system for ML observability - Magdalena Konkiewicz
About this event
In this hands-on workshop, we’ll go through all the stages of building a human-in-the-loop system that monitors the performance of a machine learning model.
We’ll cover:
- Designing a system for monitoring model performance
- Sampling the model’s predictions and using crowdsourcing to evaluate whether they are correct
- Visualizing the collected data
- Automatically retraining the model when performance degrades
- Using the collected data for retraining
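The monitoring loop described above can be sketched roughly as follows. This is a minimal illustration, not the workshop's actual implementation: the names (`monitor`, `crowd_verdict`, `ACCURACY_THRESHOLD`) are made up for this example, and in a real system the verdicts would come back from the crowdsourcing platform rather than from a local comparison.

```python
import random

ACCURACY_THRESHOLD = 0.8  # assumed: retrain when crowd-measured accuracy drops below this
SAMPLE_SIZE = 50          # assumed: number of predictions sent for human review

def sample_predictions(predictions, k=SAMPLE_SIZE):
    """Randomly sample predictions to send to crowd annotators."""
    return random.sample(predictions, min(k, len(predictions)))

def crowd_verdict(item):
    # Placeholder: in a real pipeline this answer would be collected from
    # annotators on a crowdsourcing platform; here we simulate it by
    # comparing against a known label.
    return item["prediction"] == item["true_label"]

def monitor(predictions):
    """Sample predictions, collect human verdicts, and flag degradation."""
    sampled = sample_predictions(predictions)
    correct = sum(crowd_verdict(item) for item in sampled)
    accuracy = correct / len(sampled)
    needs_retraining = accuracy < ACCURACY_THRESHOLD
    return accuracy, needs_retraining
```

When `needs_retraining` is flagged, the human-verified samples collected along the way can double as fresh training data, which is the idea behind the last two stages in the list.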
By the end of this workshop, you’ll be able to design a similar system for monitoring the quality of your own ML models.
Important: Register an account in Toloka before the workshop: https://alexeygrigorev.notion.site/Registering-in-Toloka-bec507b358594c3f8a245c5bff12508a
About the speaker:
Magdalena is a Data Evangelist at Toloka, a global data labeling company serving around 2,000 large and small businesses worldwide. Magdalena holds a Master's degree in Artificial Intelligence from the University of Edinburgh. She has worked as an NLP Engineer, Developer, and Data Scientist for businesses in Europe and America. She now teaches and mentors Data Scientists, and regularly contributes to publications such as Towards Data Science.
DataTalks.Club is the place to talk about data. Join our Slack community!
This event is sponsored by Toloka.ai.