Arize

What do you do?

Arize Team: The founding team comes from Uber’s ML infrastructure and Adobe’s analytics teams.

The Arize ML Observability platform lets teams monitor, explain, troubleshoot, and improve production models. It enables teams to analyze model degradation and root-cause any model issue, and it is unique in the space in taking teams from finding a problem, to understanding why it happened, to actually improving outcomes.

Arize is underpinned by an evaluation store, a key building block of ML infrastructure (alongside feature and model stores) that lets ML teams store and index model evaluations across training, validation, and production to power services such as monitoring and performance improvement. Our platform helps teams go from validating a model offline, to real-time performance analysis once it is deployed, to insights that enable active learning.
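
As an illustration of the evaluation-store idea, the sketch below shows how evaluations might be keyed by model, version, and environment so that training, validation, and production results can be compared side by side. This is a minimal, hypothetical data structure for exposition only; it is not Arize's actual schema or API.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical, simplified evaluation store: records are indexed by
# (model_id, model_version, environment) so that training, validation,
# and production evaluations of the same model can be compared.
@dataclass
class EvaluationRecord:
    metric: str              # e.g. "accuracy", "rmse"
    value: float
    slice_name: str = "all"  # optional prediction slice, e.g. "region=EU"

class EvaluationStore:
    def __init__(self):
        self._records = defaultdict(list)

    def log(self, model_id, model_version, environment, record):
        self._records[(model_id, model_version, environment)].append(record)

    def compare(self, model_id, model_version, metric):
        """Return the average value of one metric per environment."""
        out = {}
        for (mid, ver, env), recs in self._records.items():
            if mid == model_id and ver == model_version:
                values = [r.value for r in recs if r.metric == metric]
                if values:
                    out[env] = sum(values) / len(values)
        return out

store = EvaluationStore()
store.log("churn", "v3", "validation", EvaluationRecord("accuracy", 0.91))
store.log("churn", "v3", "production", EvaluationRecord("accuracy", 0.84))
print(store.compare("churn", "v3", "accuracy"))
# {'validation': 0.91, 'production': 0.84}
```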

How much does it cost?

Pricing is based on the number of models and prediction volume per month.

Arize can be deployed as SaaS or on-premises.

What’s a sample use case? Where can I learn more?

  1. Catch drift in features, predictions, or actuals from training/validation to production (see the drift-check sketch after this list)
  2. Compare model performance metrics from training/validation to production and troubleshoot performance degradation
  3. Catch a model’s data quality issues by setting up benchmarks from training, validation, and production datasets
  4. Explain why your model made certain decisions, whether during troubleshooting or for regulatory purposes
  5. Surface where your model isn’t performing well so you can actively improve its performance
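
For example, drift between a training feature and its production counterpart (use case 1) is commonly quantified with a metric such as the Population Stability Index (PSI). The sketch below is a generic illustration of that kind of check, not Arize's internal implementation; the function name and threshold are illustrative.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Generic PSI between a baseline (e.g. training) sample and a
    production sample of the same feature. PSI > 0.25 is a common
    rule-of-thumb threshold for significant drift."""
    # Bin edges come from the baseline distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)  # baseline distribution
prod_feature = rng.normal(0.5, 1.2, 10_000)   # shifted in production
print(f"PSI: {population_stability_index(train_feature, prod_feature):.3f}")
```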

Feature List

  • Consolidated Real-Time Analysis with Dynamic Cohort Analysis: Analyze every data input and every model prediction ever made. Instantly analyze any time frame and compare any slice of predictions against previous validation datasets. Aggregate data across disparate model environments and a wide variety of model feature data
  • Robust Alert Engine to Catch Model and Data Drift: Trigger alerts on model performance drift or data drift
  • Detect Data Quality Issues: Catch subtle data quality problems. Detect drift in data and data distributions
  • Automated Model Evaluation Metrics: Evaluate any slice of prediction data on any model performance metric. Purpose-built to analyze models and data in both validation and production environments
  • Integrated Explainability: Explainability designed to help teams understand model outcomes. Holds up for both troubleshooting and regulatory analysis
  • Designed for Troubleshooting: Advanced analytics tools to discover data quality issues, data distribution changes, or model design problems
  • Accelerate Model Validation: Compare validation results to production results in seconds. Validate that the model is working for any slice of predictions and compare any time frame
  • Scales with Your Needs: Use the same platform for validation and production, scaling as your prediction volume grows without sacrificing real-time analysis
  • Lightweight Integration: Activate in any development and serving environment with a couple of lines of code (see the sketch after this list)
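
As a rough sketch of what such a lightweight integration might look like, the snippet below logs a single prediction and its features to an observability client from the serving path. The client class and log() signature here are hypothetical placeholders, not Arize's actual SDK; refer to Arize's documentation for the real interface.

```python
import uuid

# Hypothetical observability client standing in for a real SDK; the class
# name and the log() signature are illustrative only.
class ObservabilityClient:
    def __init__(self, api_key: str, space: str):
        self.api_key, self.space = api_key, space

    def log(self, model_id, model_version, prediction_id, features, prediction_label):
        # A real client would send this record to the platform's ingestion API.
        print(f"logged {prediction_id} for {model_id}:{model_version}")

client = ObservabilityClient(api_key="YOUR_API_KEY", space="demo")

# In the serving path, each prediction is logged alongside its features
# so it can later be joined with actuals and analyzed for drift.
client.log(
    model_id="churn",
    model_version="v3",
    prediction_id=str(uuid.uuid4()),
    features={"age": 42, "plan": "premium", "tenure_months": 18},
    prediction_label="will_churn",
)
```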
