March 26, 2022

Building a Machine Learning Platform

This blog post was written by John Roberts, summarizing the MLOps coffee chat session with Orr Shilon of Lemonade.

Machine learning has long centered on building accurate models, a focus that drove the development of frameworks like TensorFlow, PyTorch, and scikit-learn. But the job isn't done once the model is developed, because the purpose of developing a model is to solve real-world problems.

What do you do after you build your model?

You need a process to deploy the model to production so that it can actually solve those real-world problems. But before discussing deployment, we must first define a machine learning platform: a set of tools for automating the development, deployment, and improvement of machine learning models.

Let’s dive into the pillars of a machine learning platform and highlight the tools Lemonade uses, as well as some other options that may fit your use case. Remember, the right tools aren’t necessarily the ones Lemonade uses, although those can serve as inspiration. The right tools are the ones that fit your use case!

Features of a Machine Learning Platform

  • Allows teams to execute at scale
  • Makes models easy to update and retrain
  • Makes it easy for teams to share models, code, and data
  • Automates deployment

Five Pillars of a Machine Learning Platform

  • Feature Management
  • Workflow Management
  • Monitoring
  • Tracking 
  • Model serving

Feature Management

Features? Are there any differences between features and datasets? Yes!

A dataset is raw data retrieved from data storage, while features are preprocessed data: they are the direct input to the machine learning model.

Feature management is sometimes referred to as the feature store. Feature stores are used to create, store, and manage the features used in training machine learning models. Feature engineering is the process of creating a feature, and it varies depending on the task, dataset, and project.
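To make the dataset-versus-feature distinction concrete, here is a minimal sketch of feature engineering: a raw record from data storage is turned into model-ready features. The record shape and feature names are illustrative, not Lemonade's actual schema.

```python
from datetime import date

# Hypothetical raw record pulled from data storage (illustrative fields).
raw_claim = {"policy_start": "2021-03-01", "claim_date": "2021-06-15",
             "claim_amount": 1250.0, "premium": 100.0}

def build_features(record):
    """Feature engineering: turn a raw record into model-ready features."""
    start = date.fromisoformat(record["policy_start"])
    claim = date.fromisoformat(record["claim_date"])
    return {
        "days_active": (claim - start).days,                      # derived feature
        "claim_to_premium": record["claim_amount"] / record["premium"],
        "large_claim": int(record["claim_amount"] > 1000),        # binary flag
    }

features = build_features(raw_claim)
print(features)
```

A feature store would compute, version, and serve features like these so that training and production use exactly the same transformations.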

Fun fact: There were no off-the-shelf feature stores at the early stage of Lemonade.

Feature Stores

  • Feast
  • Tecton
  • Iguazio
  • Hopsworks
  • Databricks Feature Store
  • Sagemaker Feature Store
  • Google Cloud Vertex AI Feature Store

Workflow Management

Workflow management involves managing the tasks in a machine learning workflow and pipeline. A workflow is the sequence of tasks in the machine learning lifecycle, while a pipeline is the infrastructure used to automate that workflow.

ML requires a sequence of tasks. These tasks run sequentially, but sometimes you need to return to a previous task if certain conditions are not met.
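That loop-back behavior can be sketched in a few lines: sequential tasks with a quality gate that sends the workflow back to training when the model isn't good enough. The task bodies are stubs; a real pipeline would run ingestion, training, and evaluation jobs here.

```python
def run_workflow(max_attempts=3):
    """Sequential ML tasks, looping back to training when evaluation fails."""
    # 1. Ingest / prepare data (stub).
    data = {"rows": 10_000}
    for attempt in range(1, max_attempts + 1):
        # 2. Train (stub: pretend accuracy improves each attempt, in percent).
        accuracy = 70 + 5 * attempt
        # 3. Evaluate: quality gate. If it fails, return to the training step.
        if accuracy >= 80:
            # 4. Deploy only once the gate passes.
            return ("deployed", attempt)
    return ("failed", max_attempts)

status, attempts = run_workflow()
print(status, attempts)  # → deployed 2
```

Workflow tools like Airflow express the same idea declaratively, as a DAG of tasks with retry and branching rules instead of a hand-written loop.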

Tools Lemonade Uses

It is intriguing to see how Lemonade manages their pipeline with a Slack bot. The Slack bot is called Cooper, and it is built with the Rasa framework. Cooper runs commands to automate model training and deployment. For instance, Cooper can start and shut down an AWS SageMaker notebook, kick off model training, and more. Lemonade combines this Slack bot with Airflow to manage their workflow.
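The core of such a ChatOps bot is a command dispatcher: chat messages are mapped to operational actions. The toy sketch below is not Cooper's actual code; Rasa handles the language understanding and the real actions would call SageMaker or Airflow, which are stubbed here so the control flow is visible.

```python
# Illustrative ChatOps dispatcher. Command names and handlers are hypothetical.
COMMANDS = {}

def command(name):
    """Decorator that registers a handler for a chat command."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("start notebook")
def start_notebook():
    return "notebook starting"        # a real bot would call SageMaker here

@command("train model")
def train_model():
    return "training job submitted"   # a real bot would trigger the pipeline

def handle_message(text):
    """Route an incoming chat message to its registered handler."""
    handler = COMMANDS.get(text.strip().lower())
    return handler() if handler else "unknown command"

print(handle_message("train model"))  # → training job submitted
```

The appeal of this pattern is that routine operations become shared, auditable chat commands instead of scripts living on one engineer's laptop.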

Other Accessible Workflow Management Tools

  • KubeFlow
  • Airflow
  • Kedro
  • Luigi
  • Dagster

The list above can be overwhelming, and it keeps growing. This article outlines the differences between these tools and can guide you to the best choice for your use case.


Monitoring

Monitoring means observing and checking the progress or quality of something over time. That begins with defining what you want to monitor.

Artifacts to monitor in machine learning:

  • Code
  • Data
  • Model

Code Monitoring

Code versioning tools like GitHub, GitLab, and Bitbucket are ubiquitous, but you also need to track which version of the code generated a given result.

Data Monitoring

Data is one of the most important artifacts in machine learning. Once your data goes wrong, everything else goes wrong. In data monitoring, you monitor:

  • Data drift – a change in the input data. Over time, the data seen in production can diverge from the data that was used to build the model, for example through a change in data distribution or new features that affect the data. Data drift causes decreased accuracy. You can detect it with a Kolmogorov-Smirnov (KS) test, the population stability index, adaptive windowing, or a model-based approach.
  • Concept drift – this focuses on the statistical properties of the target variable. Machine learning models map independent variables to the target variable; once this mapping changes significantly, the model's accuracy suffers.
  • Imbalanced data – caused by skewness in the class proportions of your data. For example, in a fault detection dataset, most examples belong to the negative class.
  • Bias – caused by how the data was acquired. Imagine wedding dress data collected only in the US: it is biased toward US weddings and should not be used to train a model intended for use worldwide, because the model will not recognise traditional wedding wear from other countries.
  • Invalid data – check for wrong data types and NaN values
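Of the drift detectors listed above, the population stability index (PSI) is simple enough to implement from scratch. The sketch below bins a baseline sample and a live sample and compares the bin proportions; the sample data and the commonly cited rule-of-thumb thresholds are illustrative.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live one.

    PSI = sum over bins of (p_actual - p_expected) * ln(p_actual / p_expected).
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = max(0, min(int((x - lo) / width), bins - 1))
            counts[idx] += 1
        # Small floor avoids ln(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p_exp, p_act = proportions(expected), proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(p_exp, p_act))

train_scores = [i / 100 for i in range(100)]        # training-time distribution
live_scores = [0.5 + i / 200 for i in range(100)]   # shifted production data
print(round(psi(train_scores, live_scores), 3))
```

In practice you would compute PSI per feature on a schedule and alert when it crosses your chosen threshold.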

For an in-depth presentation on ML monitoring, see the four types of drift covered in this MLOps Meetup by Amy Holder.

Tools Lemonade Uses

Model monitoring involves tracking the model hyperparameters, model architecture, and model performance. Lemonade uses Aporia for monitoring. They also recommend that the monitoring dashboard be built by data scientists, to avoid overwhelming alerts. You can find a comprehensive look at ML monitoring tools on our monitoring comparison page.

Other Accessible Monitoring Tools

  • Fiddler
  • Superwise
  • Arize
  • Aporia
  • Neptune 
  • Grafana
  • WhyLabs
  • Evidently AI


Tracking

Tracking is sometimes confused with monitoring. Tracking involves logging the metadata of experiments. Machine learning involves running many experiments, and tracking the results and metadata of each one tells you which experiment to deploy to production. Just like in monitoring, you also need to track the code, model, and data versions used for each experiment.
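The metadata a tracking tool records per run can be sketched with the standard library alone. This is not MLflow's API, just an illustration of the kind of record such tools persist: parameters, metrics, and the code and data versions behind a result.

```python
import json
import time
import uuid

def log_run(params, metrics, code_version, data_version):
    """Serialize one experiment run's metadata (illustrative schema)."""
    run = {
        "run_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "params": params,                # e.g. hyperparameters
        "metrics": metrics,              # e.g. accuracy, loss
        "code_version": code_version,    # git commit of the training code
        "data_version": data_version,    # snapshot of the dataset used
    }
    return json.dumps(run)

record = log_run({"lr": 0.01, "epochs": 20}, {"accuracy": 0.91},
                 code_version="a1b2c3d", data_version="claims-2022-03")
print(record)
```

A real tracker stores these records in a central server so the whole team can compare runs and reproduce any past result.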

Tools Lemonade Uses

At Lemonade, MLflow is the go-to tracking tool for machine learning experiments.

Other Accessible Tracking Tools

  • Weights & Biases
  • Neptune 
  • Comet 
  • MLflow
  • ClearML

Model Serving

Model serving is how you make your model available for use by others. A model can be served as an API endpoint or as a library.
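The simplest of these options, serving as a library, can be sketched in a few lines: a trained model artifact is persisted, loaded once at serving time, and wrapped in a small predict function that other code imports. The model here is a stub; serving tools wrap this same idea in an HTTP endpoint.

```python
import pickle

class ThresholdModel:
    """Stand-in for a trained model artifact (illustrative only)."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        return int(x >= self.threshold)

# "Training" produces an artifact, which a deployment pipeline persists.
blob = pickle.dumps(ThresholdModel(threshold=0.5))

# At serving time: load the artifact once, then answer predictions.
model = pickle.loads(blob)

def predict(score: float) -> int:
    """The library's public entry point for consumers of the model."""
    return model.predict(score)

print(predict(0.7))  # → 1
```

Tools like SageMaker, BentoML, or TorchServe add the missing production pieces around this core: an HTTP layer, scaling, batching, and versioned rollouts.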

Model serving tools

  • SageMaker
  • Cortex
  • BentoML
  • TorchServe
  • TensorFlow Serving

Challenges with Creating a Platform

  • Because there are numerous tools for each phase of machine learning, deciding which one to use is a challenge. We’ve gone through the MLOps pillars and highlighted the tools Lemonade uses for each, in addition to other accessible tools. But which tool is ideal for you is unique to your use case and influenced by the rapid evolution of machine learning.

To select the right tool, answer the following questions: 

– Which of your processes would you like to automate?

– What would you gain in terms of business value if you automate this process?

– What infrastructure is available?

– What are the users’ skill sets (data scientists, business analysts, etc.)?

– Can you build it yourself, use open-source, or purchase a tool?

– What kind of model are you going to use? 

– What platform (web, mobile, embedded system) are you deploying to?

Find a list of tools for different tasks in machine learning here.