To confidently deploy models to production, you need to know how each model was built, trained, retrained, and evaluated. That is where tools for model metadata storage and management come in. They give you a central place to store, organize, display, compare, search, review, and access all your models and model-related metadata. Tools in this category can serve as an experiment tracking tool, a model registry, or both. Read more about what model metadata storage and management is, and check out additional resources below:
The MLOps Community has worked with vendors and community members to profile the major solutions available in the market today, based on our model store evaluation framework.
Are you looking to add metadata storage and management to your ML stack? The MLOps Community, in collaboration with many experiment tracking, model store, and metadata management vendors, has created an evaluation framework to help you choose the right product for your needs.
First, you need to assess whether the product’s commercial characteristics meet your needs. We recommend evaluating the following commercial criteria:
You will want to make sure that the model store fulfills all the capabilities you need across the operational data workflow. We’ve broken down the capabilities as follows:
Setup and Maintenance
How much work is needed to set up the infrastructure, deploy the tool, maintain it, and connect it to your training workflow?
Flexibility, Speed, and Accessibility
Can you adjust the metadata structure to your needs, is the API and UI fast enough to handle your workload, and can you access models and metadata easily from other tools in your stack?
Log and Display of Metadata
What model and experiment metadata can you log and display in the tool, what gets logged automatically, can you see it live?
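To make the question concrete, here is a minimal sketch of the kinds of metadata a tracking tool typically logs per run: parameters logged once, metrics logged as time series over training steps, and free-form tags. The `Run` class and its method names are hypothetical, not the API of any particular tool.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical run record illustrating typical logged metadata:
# parameters, metric histories keyed by step, and tags.
@dataclass
class Run:
    run_id: str
    params: dict = field(default_factory=dict)
    metrics: dict = field(default_factory=dict)  # name -> list of (step, value)
    tags: dict = field(default_factory=dict)

    def log_param(self, name, value):
        self.params[name] = value

    def log_metric(self, name, value, step):
        # Metrics are appended, not overwritten, so the tool can
        # display them live as a chart over training steps.
        self.metrics.setdefault(name, []).append((step, value))

run = Run(run_id="run-001")
run.log_param("learning_rate", 0.01)
run.tags["stage"] = "dev"
for step, loss in enumerate([0.9, 0.5, 0.3]):
    run.log_metric("loss", loss, step)

print(json.dumps(asdict(run), indent=2))
```

A real tool adds to this automatically captured metadata (git commit, environment, hardware) and streams it live to a UI rather than printing JSON.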
Comparing Experiments and Models
What model and experiment metadata can you compare, which comparison visualizations does the tool provide, and are there special comparison utilities for your modality (e.g., computer vision)?
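The simplest form of comparison is ranking runs by a chosen metric. This sketch, with invented run names and metric values, shows the idea behind a tool's comparison table; real tools add side-by-side diffs of parameters and visual overlays of metric curves.

```python
# Hypothetical comparison utility: rank runs by a final metric value,
# as a tool's comparison table would when you sort a metric column.
runs = {
    "run-a": {"accuracy": 0.91, "f1": 0.88},
    "run-b": {"accuracy": 0.93, "f1": 0.86},
}

def compare(runs, metric):
    # Highest value first, mirroring a descending sort in a UI.
    return sorted(runs.items(), key=lambda kv: kv[1][metric], reverse=True)

best_run, best_metrics = compare(runs, "accuracy")[0]
print(best_run)  # -> run-b
```

Note that which run "wins" depends on the metric: sorting by f1 instead of accuracy ranks run-a first, which is exactly why multi-metric comparison views matter.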
Organizing and Searching Experiments and Models
How can you organize experiments/models in the tool, how advanced are the search capabilities, can you customize what you see both for a single run and many experiments/models?
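Organizing usually means attaching tags to runs, and searching means filtering and sorting on those tags and metrics. This sketch, with invented run data, mimics a query like "all runs tagged stage=dev, ordered by accuracy descending":

```python
# Hypothetical search over run metadata: filter by tag, sort by metric.
runs = [
    {"id": "r1", "tags": {"stage": "dev"},  "metrics": {"accuracy": 0.90}},
    {"id": "r2", "tags": {"stage": "prod"}, "metrics": {"accuracy": 0.93}},
    {"id": "r3", "tags": {"stage": "dev"},  "metrics": {"accuracy": 0.95}},
]

def search(runs, tag_key, tag_value, sort_metric):
    hits = [r for r in runs if r["tags"].get(tag_key) == tag_value]
    # Best metric first, as a leaderboard view would show it.
    return sorted(hits, key=lambda r: r["metrics"][sort_metric], reverse=True)

top = search(runs, "stage", "dev", "accuracy")
print([r["id"] for r in top])  # -> ['r3', 'r1']
```

More advanced tools expose this as a query language and let you save searches and customize the columns shown for both single runs and whole tables of experiments.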
Model Review, Collaboration, and Sharing
How does it support model audit, review, approval, and transitions between stages (dev/prod)? Can you lock experiments/models/artifacts downstream for published models?
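The review and stage-transition workflow can be sketched as a tiny registry: model versions move through named stages, promotion to production requires an approver, and a published version is locked against downstream modification. The class, stage names, and `approved_by` argument are illustrative assumptions, not any specific registry's API.

```python
# Hypothetical model registry sketch: staged transitions with approval,
# and locking of published versions.
class RegistryError(Exception):
    pass

class ModelVersion:
    STAGES = ("None", "Staging", "Production", "Archived")

    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.stage = "None"
        self.locked = False

    def transition(self, stage, approved_by=None):
        if stage not in self.STAGES:
            raise RegistryError(f"unknown stage: {stage}")
        if stage == "Production" and approved_by is None:
            # Audit/approval gate: production needs a named reviewer.
            raise RegistryError("production transition requires an approver")
        self.stage = stage
        self.locked = stage == "Production"  # freeze published artifacts

    def update_metadata(self, **changes):
        if self.locked:
            raise RegistryError("version is locked downstream")
        # (metadata update would happen here)

mv = ModelVersion("churn-model", 3)
mv.transition("Staging")
mv.transition("Production", approved_by="alice")
print(mv.stage, mv.locked)  # -> Production True
```

Recording who approved each transition is what makes the registry useful for audit as well as for deployment.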
CI/CD Workflow Support
How does it support continuous integration and delivery, and how does it connect to continuous training and testing workflows?
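One common CI/CD integration point is a promotion gate: a pipeline step pulls the candidate model's evaluation metrics from the metadata store and blocks promotion if they regress against the current production model. The function below is a minimal sketch of that check, with invented metric values.

```python
# Hypothetical CI/CD promotion gate: fail if any metric of the candidate
# model is worse than production by more than the allowed tolerance.
def promotion_gate(candidate_metrics, production_metrics, tolerance=0.0):
    failures = []
    for name, prod_value in production_metrics.items():
        cand_value = candidate_metrics.get(name)
        if cand_value is None or cand_value < prod_value - tolerance:
            failures.append(name)
    return failures  # an empty list means the gate passes

failures = promotion_gate(
    candidate_metrics={"accuracy": 0.94, "f1": 0.90},
    production_metrics={"accuracy": 0.93, "f1": 0.91},
    tolerance=0.02,
)
print("PASS" if not failures else f"FAIL: {failures}")  # -> PASS
```

In a real pipeline this script would fetch both metric sets from the metadata store's API and exit non-zero on failure, so the CI system stops the deployment.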
Integrations
Which third-party data and ML tools does the model store integrate with?