Spell

Model Store Overview

Vendor Name

Spell

Stand-alone vs. Platform

Stand-alone platform with community, self-serve, and managed deployment options.

Delivery Model

Primarily a managed cloud service, with custom delivery as needed for enterprise.

Clouds Supported

Available on public clouds (AWS, GCP, Azure), with full enterprise-level on-prem deployment also supported.

Pricing Model

Seat-based license pricing or custom

Service Level Guarantees

Varies by plan.

Support

Every company with a subscription or contract is provided a private Slack channel for direct 24/7 engineering support.

SSO, ACL

All users are provided user-access management within the collaboration features.
Enterprise-level customers are provided customized SSO and authentication features.

Security and Compliance

User access management features

Model Store Capabilities

Setup

Fully automated, CLI-based deployment: a pip installation and a single command connect the user's cloud profile and compute access to the platform.
Enterprise-level deployment is a fully managed service.
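The self-serve setup described above amounts to a short terminal session. The exact package and command names below are assumptions based on the description, not a verified transcript:

```shell
# Hypothetical session; package and command names are assumptions.
pip install spell    # install the Spell CLI from PyPI
spell login          # authenticate and link cloud profile / compute access
```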

Flexibility, Speed, and Accessibility

Combination of pre-defined metadata and user-defined custom metadata.
Multiple entry points to add and adjust metadata, including web console UI, CLI, and Python API.
Tools available for model and data access (web console hierarchical resource manager, and model registry), integrations to visualize model and run data, and features for direct metadata downloads (logs, metrics, runs metadata).

Model Versioning, Lineage, and Packaging

Supports end-to-end model lineage.
Users begin with data, code, and versioning history executed on specified infrastructure for model training; this unit is called a "run". Runs are organized into projects, and subsets of runs within a project can be grouped into "experiments"; from these, users select the best model outputs, which are stored in a model registry.
Models in the registry are available for instant deployment to a Kubernetes cluster, along with built-in monitoring tools. The relationships between runs, models, and deployments are documented, linked, and reproducible end-to-end.
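The run → model → deployment lineage described above can be sketched as linked records. This is a minimal illustration of the concept, not Spell's actual data model; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical names: a minimal sketch of run -> model -> deployment
# lineage, not Spell's actual schema.

@dataclass
class Run:
    run_id: int
    project: str
    code_commit: str              # git hash the run was launched from
    dataset: str                  # data version the run consumed
    metrics: dict = field(default_factory=dict)

@dataclass
class ModelVersion:
    name: str
    version: int
    source_run: Run               # link back to the run that produced the model

@dataclass
class Deployment:
    cluster: str                  # target Kubernetes cluster
    model: ModelVersion           # link back to the registered model version

# Wire up one end-to-end lineage chain.
run = Run(1, "sentiment", "abc123", "reviews-v2", {"val_loss": 0.21})
model = ModelVersion("sentiment-clf", 1, source_run=run)
deploy = Deployment("prod-k8s", model)

# The deployment is traceable all the way back to code and data.
print(deploy.model.source_run.code_commit)   # -> abc123
```

The point of the chain is that every deployed model resolves back to the exact code commit and data version that produced it.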

Log and Display of Metadata

During experimentation and training, Spell tracks and displays live: hardware metrics (e.g., disk, GPU), built-in framework-specific metrics (e.g., Keras validation and loss metrics), user-defined metrics (from the Python API), execution information (e.g., run start/stop), compute machine state, user-specified hyperparameters, source control information (e.g., git hash, last commit message), stdout/stderr logs, and more.
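User-defined metrics from the Python API follow a simple pattern: call a logging function once per step and the value streams to the console live. The function name and signature below are assumptions, shown with a local stand-in so the sketch is self-contained:

```python
# Stand-in for Spell's metric-logging call; the real API name and
# signature are assumptions, not verified.
metrics_log = []

def send_metric(name, value):
    """Record one user-defined metric value (hypothetical signature)."""
    metrics_log.append((name, value))

for epoch in range(3):
    val_loss = 1.0 / (epoch + 1)        # dummy training signal
    send_metric("val_loss", val_loss)   # would stream live to the web console

print(metrics_log)
```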

Comparing Experiments and Models

Provides both tabular and visual metrics inspection and comparison tools.
The default run table allows comparison and filtering on any built-in metric as well as user-defined metrics from the Python API.
Within a project, users can select a specific subset of runs (an "experiment"), which persists this grouping and offers an additional interface for metrics comparison and visualization.
Each metric can be tracked over time and across training epochs, and hyperparameters, metrics, and other stored training data can be visualized and compared in a scatterplot.
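The filter-then-compare workflow on the run table can be illustrated in plain Python. The run fields here are invented for illustration and are not Spell's schema:

```python
# A toy run table; field names are illustrative, not Spell's schema.
runs = [
    {"run_id": 1, "lr": 0.10, "val_acc": 0.81, "gpu": "V100"},
    {"run_id": 2, "lr": 0.01, "val_acc": 0.88, "gpu": "V100"},
    {"run_id": 3, "lr": 0.01, "val_acc": 0.84, "gpu": "K80"},
]

# Filter on a hyperparameter, then rank on a user-defined metric,
# mimicking the run table's filter + compare workflow.
candidates = [r for r in runs if r["lr"] == 0.01]
best = max(candidates, key=lambda r: r["val_acc"])
print(best["run_id"])   # -> 2
```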

Organizing and Searching Experiments and Models

Runs are organized into project directories and can be further grouped into subsets ("experiments").
Users can add custom labels to runs or groups of runs, and can pin specific runs to the top of run tables.
Model registries are organized by version, with the ability to drill into a model version's history and the run history linked to it.

Model Review, Collaboration, and Sharing

Supports review and collaboration across training runs, the model registry, and deployed models through a shared organizational interface.
Models and runs store versioning data, and users can add notes, comments, labels, stars, and pins on specific runs.
No specific audit workflow for transitioning between stages.
Can lock and version models and artifacts for downstream model deployment.

CI/CD/CT Compatibility

Tightly integrated with git, and leverages GitHub Actions for CI/CD.
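A training job can be triggered from a GitHub Actions workflow. The workflow below is a hedged sketch: the `spell` package name and `spell run` invocation are assumptions based on the description, not a verified configuration:

```yaml
# Hypothetical workflow; the spell CLI invocation is an assumption.
name: train-on-push
on: [push]
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: pip install spell              # assumed package name
      - run: spell run "python train.py"    # assumed CLI entry point
```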

Integrations

Deep integrations with Arize, Grafana, WandB, GitHub/GitHub Actions, and TensorBoard, as well as premade AMIs with common ML libraries pre-installed (PyTorch, TensorFlow, etc.).

Reviews

There are no reviews yet.