MLOps Best Practices



Today, we are joined by Piotr Niedźwiedź, founder and CEO of the company behind a powerful tool for tracking and managing machine learning models. Piotr discusses common MLOps activities and how data science teams can take advantage of the platform for better experiment tracking.

Piotr started with some background on his life and how he got into coding and machine learning. He then gave some insight into how the typical user works with the platform, and mentioned at what point beginners are advised to start using machine learning tools.

Piotr also gave some advice on key activities machine learning specialists should carry out during model development. These include logging the evolution of your training metrics, the data used, and the hyperparameters tuned. Furthermore, Piotr compared and contrasted the logging practices of a data science role with those of a software development role.
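The logging activities described above can be sketched in plain Python without assuming any particular tracking tool: each run records its hyperparameters, the dataset version, and per-epoch metrics, so the evolution of training can be reviewed later. The function names and record layout here are hypothetical, for illustration only.

```python
# Minimal sketch of experiment tracking (no specific tracking library assumed).
import json
import time


def start_run(hyperparams, dataset_version):
    """Create a run record capturing what should be logged up front."""
    return {
        "started_at": time.time(),
        "hyperparams": hyperparams,          # e.g. learning rate, batch size
        "dataset_version": dataset_version,  # which data the model was trained on
        "metrics": [],                       # evolution of training metrics
    }


def log_metric(run, epoch, name, value):
    """Append one metric observation so its trajectory can be inspected later."""
    run["metrics"].append({"epoch": epoch, "name": name, "value": value})


# Hypothetical usage: a tiny "training loop" that only logs.
run = start_run({"lr": 0.01, "batch_size": 32}, dataset_version="v1.2")
for epoch in range(3):
    log_metric(run, epoch, "loss", 1.0 / (epoch + 1))

print(json.dumps(run["metrics"], indent=2))
```

A dedicated experiment tracker adds storage, comparison UIs, and team access on top of this idea, but the core of what gets logged is the same.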

Speaking of teamwork, Piotr discussed the collaboration potential for teams using such ML tools. He also explained how the platform is useful not only for collaboration within a team but across the entire organization.

Piotr then spoke about the platform's place in the ML tech stack ecosystem. He mentioned its typical users and the growth trajectory of experiment tracking as a core MLOps activity. He then talked about the short-term and long-term benefits of experiment tracking and a model registry for machine learning developers.

You can read more about how to use the platform on their blog, or learn more from their documentation.

Piotr Niedźwiedź

Piotr Niedźwiedź is an accomplished competitive programmer (top 30 worldwide in Google Code Jam 2009) and a serial entrepreneur who founded two successful software companies: Codilime (software-defined networking) and a second focused on machine learning. “When I came to machine learning from software engineering, I was surprised by the messy experimentation practices, lack of control over model building, and a missing ecosystem of tools to help people deliver models confidently. It was a stark contrast with the software development ecosystem, where you have mature tools for DevOps, observability, or orchestration to operate in production. So when ML engineers at my previous company showed me a tool they built for experiment tracking, I felt it might be something that could be big. Fast forward to today, and we are one of the most popular tools for experiment tracking and model registry on the market. We thought about expanding to other use cases in MLOps, but model metadata management is such an important component of the stack that we figured it is better to focus on solving this well. I am glad we did, and we will continue building a better product that ML engineers and data scientists can plug into their workflows.”