Metadata Storage and Management

What is Metadata Storage and Management?

To confidently deploy models to production, you need to know how each model was built, trained, re-trained, and evaluated. That is where tools for model metadata storage and management come in. They give you a central place to store, organize, display, compare, search, review, and access all your models and model-related metadata. You can use a tool from this category as an experiment tracker, a model registry, or both. Read more about what metadata storage and management is and check out additional resources below.

Metadata Storage and Management Comparison

The MLOps Community has worked with vendors and community members to profile the major solutions available in the market today, based on our model store evaluation framework.


How to choose a solution for Metadata Storage and Management

Are you looking to add a metadata storage and management tool to your ML stack? The MLOps Community, in collaboration with many experiment tracking, model store, and metadata management vendors, has created an evaluation framework to help you choose the right product for your needs.

Criteria 1

Commercial Information

First, you need to assess whether the product’s commercial characteristics meet your needs. We recommend evaluating the following commercial criteria:

  • Delivery model: Is the product delivered as commercial software, open source software, or a managed cloud service?
  • Is it a standalone metadata store or part of a broader ML platform?
  • Is the product available on-premises and / or in your public cloud?
  • What is the pricing model?  
  • SLOs / SLAs: Does the vendor provide guarantees around service levels?
  • Support: Does the vendor provide 24×7 support?
  • SSO, ACL: Does the vendor provide user access management?
  • Security policy and compliance

Criteria 2

Model Store Capabilities

You will want to make sure that the model store fulfills all the capabilities you need across your model development and deployment workflow. We’ve broken down the capabilities as follows:

Setup
How much work is needed to set up the infrastructure, deploy the tool, maintain it, and connect it to your training workflow?
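
A useful proxy for setup cost is how little code it takes to start recording runs from an existing training script. The sketch below is purely illustrative: it uses a local JSON file in place of a real tracking backend, and the function names are assumptions rather than any vendor's API.

    # Minimal, file-backed run logger. A low-setup tool should require little more
    # than this: one init call and a few log calls inside the training loop.
    import json
    import time
    import uuid
    from pathlib import Path

    RUNS_DIR = Path("runs")  # assumption: a local directory stands in for a tracking server

    def init_run(params: dict) -> dict:
        return {"id": uuid.uuid4().hex, "started_at": time.time(),
                "params": params, "metrics": {}}

    def log_metric(run: dict, name: str, value: float) -> None:
        run["metrics"].setdefault(name, []).append(value)

    def finish_run(run: dict) -> None:
        RUNS_DIR.mkdir(exist_ok=True)
        (RUNS_DIR / f"{run['id']}.json").write_text(json.dumps(run, indent=2))

    # Usage inside an existing training loop:
    run = init_run({"lr": 0.01, "epochs": 3})
    for epoch in range(3):
        log_metric(run, "loss", 1.0 / (epoch + 1))  # placeholder for a real training step
    finish_run(run)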

Flexibility, Speed, and Accessibility
Can you adjust the metadata structure to your needs, are the API and UI fast enough to handle your workload, and can you access models and metadata easily from other tools in your stack?
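
"Accessible from other tools" in practice means any script or service in your stack can read run metadata without going through the training code. The sketch below reuses the local runs/ layout from the setup sketch above; a real product would expose the same data through a REST or Python client API.

    # Illustrative only: a downstream tool (report generator, dashboard, deployment
    # job) loads run metadata independently of the code that produced it.
    import json
    from pathlib import Path

    def load_runs(runs_dir: str = "runs") -> list[dict]:
        return [json.loads(p.read_text()) for p in Path(runs_dir).glob("*.json")]

    for run in load_runs():
        latest = {name: values[-1] for name, values in run["metrics"].items()}
        print(run["id"], run["params"], latest)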

Log and Display of Metadata
What model and experiment metadata can you log and display in the tool, what gets logged automatically, and can you see it live?
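
When evaluating what can be logged, it helps to think in categories: hyperparameters, per-step metrics (possibly streamed live), environment details, and references to artifacts. The record below is a sketch of those categories with made-up field names, not any product's schema.

    # Illustrative run record covering typical metadata categories.
    import platform
    import sys
    import time

    run_record = {
        "params": {"lr": 0.01, "batch_size": 32},      # hyperparameters
        "metrics": {"loss": [(0, 0.92), (1, 0.41)]},   # (step, value) pairs, streamable live
        "environment": {
            "python": sys.version.split()[0],
            "platform": platform.platform(),
        },
        "artifacts": {"model": "s3://bucket/models/run-42/model.pt"},  # assumption: artifact stored elsewhere
        "logged_at": time.time(),
    }
    print(run_record)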

Comparing Experiments and Models
What model and experiment metadata can you compare, which comparison visualizations does the tool provide, and are there special comparison utilities for your modality (e.g., computer vision)?
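
Whatever the UI looks like, a comparison boils down to a diff over the logged parameters and metrics of two or more runs. The sketch below shows that idea in plain Python; real tools layer tables, charts, and modality-specific viewers on top of it.

    # Illustrative diff of two runs: which hyperparameters changed, and how the
    # final metric values compare.
    def diff_runs(run_a: dict, run_b: dict) -> dict:
        keys = set(run_a["params"]) | set(run_b["params"])
        changed = {k: (run_a["params"].get(k), run_b["params"].get(k))
                   for k in keys if run_a["params"].get(k) != run_b["params"].get(k)}
        return {"changed_params": changed,
                "final_loss": (run_a["metrics"]["loss"][-1], run_b["metrics"]["loss"][-1])}

    run_a = {"params": {"lr": 0.01, "bs": 32}, "metrics": {"loss": [0.9, 0.4]}}
    run_b = {"params": {"lr": 0.10, "bs": 32}, "metrics": {"loss": [0.8, 0.5]}}
    print(diff_runs(run_a, run_b))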

Organizing and Searching Experiments and Models
How can you organize experiments/models in the tool, how advanced are the search capabilities, and can you customize what you see both for a single run and across many experiments/models?
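
Organizing usually means projects and tags; searching means filtering runs by parameters, tags, or metric thresholds. Real tools expose this through a query language or UI filters; the predicate below just shows the underlying idea, with assumed field names.

    # Illustrative search: find tagged candidate runs above an accuracy threshold.
    runs = [
        {"id": "a1", "tags": ["baseline"], "params": {"lr": 0.01}, "best_acc": 0.81},
        {"id": "b2", "tags": ["resnet", "prod-candidate"], "params": {"lr": 0.001}, "best_acc": 0.90},
    ]

    matches = [r for r in runs if "prod-candidate" in r["tags"] and r["best_acc"] >= 0.85]
    print([r["id"] for r in matches])  # -> ['b2']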

Model Review, Collaboration, and Sharing
How does it support model audit, review, approval, and transitions between stages (dev/prod)? Can you lock experiments, models, and artifacts downstream once a model is published?
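
Most registries model review as a small state machine over a model version: it moves between stages, transitions require approval, and published versions become read-only. The stage names and rules below are assumptions for illustration, not a specific product's workflow.

    # Illustrative stage machine for model review and promotion.
    ALLOWED = {"dev": {"staging"}, "staging": {"prod", "dev"}, "prod": {"archived"}}

    def transition(model: dict, new_stage: str, approved_by: str) -> dict:
        if new_stage not in ALLOWED.get(model["stage"], set()):
            raise ValueError(f"cannot move {model['stage']} -> {new_stage}")
        model["stage"] = new_stage
        model["approvals"].append(approved_by)
        model["locked"] = new_stage == "prod"  # published models get locked downstream
        return model

    model = {"name": "churn-clf", "version": 7, "stage": "staging",
             "approvals": [], "locked": False}
    transition(model, "prod", approved_by="reviewer@example.com")
    print(model)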

CI/CD/CT compatibility
How does it support continuous integration and delivery, and how does it connect to continuous training and testing workflows?
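
In a CI/CD/CT pipeline the metadata store is typically the source of truth the pipeline queries before promoting a model. A common pattern is a gate step that compares the candidate's metrics against the current production model and fails the job otherwise. The metric name, threshold, and exit-code convention below are assumptions for the sketch.

    # Illustrative promotion gate, runnable as a step in a CI pipeline.
    import sys

    def promotion_gate(candidate: dict, production: dict, tolerance: float = 0.01) -> bool:
        # Promote only if the candidate is at least as good as production (within tolerance).
        return candidate["accuracy"] >= production["accuracy"] - tolerance

    candidate = {"accuracy": 0.91}    # would be fetched from the metadata store
    production = {"accuracy": 0.90}   # metrics of the currently deployed model

    if not promotion_gate(candidate, production):
        sys.exit(1)  # non-zero exit fails the CI job and blocks deployment
    print("candidate passes the gate; proceed to deployment")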

Integrations
Which 3rd-party data and ML tools does the metadata store integrate with?