Model Store Overview

Vendor Name


Stand-alone vs. Platform


Delivery Model

Available as a managed cloud service (SaaS) and as a commercial on-premises deployment.

Clouds Supported

Once you receive the on-premises package, you can deploy it on any cloud or in your own data center.
See the on-premises installation guide at

Pricing Model

Usage-based (consumption) pricing on top of a free monthly quota.
For example, the Team plan costs $49 per month for your entire team and includes 200 monitoring hours and 100 GB of storage free.
You can get an additional 200 monitoring hours for $18 or an additional 100 GB of storage for $8.
See pricing at
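As a rough illustration of the pricing arithmetic above, the following sketch estimates a monthly bill from the $49 base, the $18-per-200-hours, and the $8-per-100-GB figures quoted here. Billing overage in whole blocks is an assumption of this sketch, not a statement of the vendor's actual billing rules:

```python
import math

def monthly_cost(monitoring_hours, storage_gb,
                 base=49, free_hours=200, free_gb=100,
                 hours_block=200, hours_price=18,
                 gb_block=100, gb_price=8):
    """Estimate a monthly bill for the Team plan described above.

    Assumes overage is billed in whole blocks (an illustration only).
    """
    extra_hours = max(0, monitoring_hours - free_hours)
    extra_gb = max(0, storage_gb - free_gb)
    cost = base
    cost += math.ceil(extra_hours / hours_block) * hours_price
    cost += math.ceil(extra_gb / gb_block) * gb_price
    return cost

# 500 monitoring hours and 150 GB: $49 + 2 * $18 + 1 * $8 = $93
print(monthly_cost(500, 150))
```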

Service Level Guarantees

On-premises plans


Available during working hours, 08:00 to 18:00 CEST


Available in the Scale plan.
You can manage organization- and project-level access in the Scale plan.

Security and Compliance

Depending on your security needs, you can use the SaaS or the on-premises version.

Model Store Capabilities


Hosted version: Minimal. You install the client library and start logging.
On-premises deployments: You set it up on a Kubernetes cluster, which needs at least 8 CPUs and 32 GB of RAM.
Read the installation guide for more details here

Flexibility, Speed, and Accessibility

You can use the dictionary-like metadata structure to log and display metadata in any custom way.
Both the API and the UI scale to thousands of runs.
There are various logging modes (sync, async, offline) to adjust logging to your setup.
Every logging method has a mirror query method to access any metadata you want via the client library.
There is no CLI for logging or querying metadata.
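The dictionary-like metadata structure and the mirrored log/query interface described above can be sketched with a toy in-memory store. The class and method names here (`MetadataRun`, `log`, `fetch`) are hypothetical stand-ins for illustration, not the vendor's actual client API:

```python
class MetadataRun:
    """Toy dictionary-like metadata store: every path you log to
    can be queried back the same way (mirrored log/fetch)."""

    def __init__(self):
        self._store = {}  # path -> single value or list of values

    def __setitem__(self, path, value):
        self._store[path] = value          # log a single object

    def log(self, path, value):
        self._store.setdefault(path, []).append(value)  # log a series

    def fetch(self, path):
        return self._store[path]           # mirrored query method

run = MetadataRun()
run["params/lr"] = 0.001                   # arbitrary nested paths
run.log("metrics/loss", 0.9)
run.log("metrics/loss", 0.7)

print(run.fetch("params/lr"))    # 0.001
print(run.fetch("metrics/loss")) # [0.9, 0.7]
```

The same path-based layout is what makes custom display possible: any logged path can be pulled into a table column or dashboard widget.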

Model Versioning, Lineage, and Packaging

You can version models by recording anything you want during training: hyperparameters, model weights, code, environment configuration, and data versions.
There are no dedicated utilities for model packaging, but you can use any packaging format (ONNX, TensorFlow SavedModel) and save the model, or a reference to it, in Neptune.
Model lineage is not supported yet but is on the roadmap.
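A minimal sketch of the "version by recording" idea above: bundle the hyperparameters, a hash of the weights, a code reference, and a data version into one record with a derived version id. The record layout and function name are illustrative assumptions, not a fixed schema:

```python
import hashlib
import json

def make_model_version(hyperparams, weights_bytes, code_ref, data_version):
    """Bundle the training artifacts listed above into one versioned
    record. The schema here is an illustrative assumption."""
    record = {
        "hyperparams": hyperparams,
        "weights_sha256": hashlib.sha256(weights_bytes).hexdigest(),
        "code_ref": code_ref,          # e.g. a git commit hash
        "data_version": data_version,  # e.g. a dataset tag
    }
    # Derive a stable, content-addressed version id from the record
    record["version_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:12]
    return record

v = make_model_version({"lr": 0.001}, b"fake-weights", "abc123", "v2")
print(v["version_id"])
```

Because the id is derived from the record contents, logging the same artifacts twice yields the same version id.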

Log and Display of Metadata

Supports logging and visualizing as single objects or series of values:
- metrics, parameters, text
- code, config files (.yml, .dvc and others), notebook files (rendered interactively)
- images, videos, audio
- interactive visualizations (Bokeh, Altair, Plotly, and any other HTML-compatible charts)
- tables and arrays: pandas, NumPy, torch, TensorFlow
- hardware metrics: CPU, GPU, Memory

Doesn't support:
- TensorBoard-like histograms for gradients/activations

To see the full list read:

Comparing Experiments and Models

You can compare metrics, parameters, learning curves, hardware consumption, and text between models and experiments.
Available comparison visualizations are:
- table with parameters/metric/text diff
- parallel coordinates plot
- overlayed learning curves
- overlayed hardware consumption metrics
Comparisons for images and rich media are on the roadmap.
For more information, see

Organizing and Searching Experiments and Models

You can:
- search experiments by tags, metrics, parameters, text
- customize any table view with metrics and parameters you want to see
- create dashboards that combine many metadata types
- save table configurations and dashboards for later use
For more information, see
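The tag/metric search described above can be sketched over plain dictionaries. The run records and field names below are made up for illustration and do not reflect the product's actual data model:

```python
def search_runs(runs, tags=None, min_metrics=None):
    """Return ids of runs that carry all given tags and meet
    all given metric thresholds."""
    tags = set(tags or [])
    min_metrics = min_metrics or {}
    hits = []
    for run in runs:
        if not tags <= set(run.get("tags", [])):
            continue  # missing a required tag
        if any(run.get("metrics", {}).get(m, float("-inf")) < v
               for m, v in min_metrics.items()):
            continue  # below a metric threshold
        hits.append(run["id"])
    return hits

runs = [
    {"id": "RUN-1", "tags": ["resnet"], "metrics": {"acc": 0.91}},
    {"id": "RUN-2", "tags": ["resnet", "aug"], "metrics": {"acc": 0.88}},
]
print(search_runs(runs, tags=["resnet"], min_metrics={"acc": 0.9}))  # ['RUN-1']
```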

Model Review, Collaboration, and Sharing

You can collaborate on experiments and models by:
- sharing persistent links to comparisons, visualizations, and runs
- creating persistent table views for each teammate
Model review, audit, and transition are not available but are on the roadmap.
Read more about collaboration:

CI/CD/CT Compatibility

There are no built-in utilities for CI/CD/CT workflows, but you can implement them for most workflows thanks to the straightforward log/access interface.
See example:
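One common pattern for building CI/CD/CT on top of the log/access interface is a quality-gate script: query the candidate model's metric, compare it against the current baseline, and fail the pipeline on regression. In this sketch the metric values are hard-coded; in a real pipeline they would come from the metadata store's query API:

```python
import sys

def quality_gate(candidate_acc, baseline_acc, tolerance=0.01):
    """Return True if the candidate model may be promoted,
    allowing a small tolerance below the baseline."""
    return candidate_acc >= baseline_acc - tolerance

def main(candidate_acc, baseline_acc):
    if quality_gate(candidate_acc, baseline_acc):
        print("PASS: promoting model")
        return 0
    print("FAIL: candidate regressed against baseline")
    return 1

if __name__ == "__main__":
    # In CI these values would be fetched from the metadata store.
    sys.exit(main(candidate_acc=0.92, baseline_acc=0.90))
```

A nonzero exit code is what lets the CI system stop the deployment step.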


Focused on metadata storage and management but integrates with 25+ tools in the ecosystem:
- model training frameworks
- model visualization libraries
- hyperparameter optimization frameworks
- IDEs and Notebooks
See all integrations:

