Vendor Name | Modeld |
Stand-alone vs. Platform | Stand-alone tool with a self-serve deployment option. |
Delivery Model | The API and SDK are open source. |
Clouds Supported | The user can deploy modeld.io on any Kubernetes cluster (managed or on-premises). The system is managed by the user. |
Pricing Model | Seat-based license. |
Service Level Guarantees | Service level guarantees depend on your plan. |
Support | 24/7 support is available depending on your plan. |
SSO, ACL | SSO and ACL support is exposed through the API. User access is based on Kubernetes RBAC. |
Security and Compliance | Based on Kubernetes RBAC. All objects are native Kubernetes resources. |
Setup | The tool is installed as a Helm chart on any Kubernetes cluster (about 10 minutes). The user then needs to connect the platform to storage services (another 10 minutes). |
Flexibility, Speed, and Accessibility | The metadata schema is fixed: all objects are predefined Kubernetes API objects. CRDs are provided for most concepts in the data science/MLOps domain (e.g. Datasource, Dataset, Model, ModelPipeline). Since the objects are Kubernetes-native, the user can add labels to them. |
Model Versioning, Lineage, and Packaging | Lineage is tracked between all objects in the system: a dataset is linked to its datasource, and a model is linked to its dataset. Each model is assigned a model ID. The system automatically packages a trained model as a Docker image tagged with the model version. |
Log and Display of Metadata | The system is AutoML-only, so all logging is done automatically. All metadata is logged automatically to a Postgres database as well as to etcd. Logged data includes model hyperparameters, model metrics per task (e.g. RMSE, LogLoss), and model processing times. All logs from training jobs are stored in cloud storage. The system also automatically generates performance charts for datasets and models and stores them in cloud storage. |
Comparing Experiments and Models | You can compare profile information for datasets and models, including all performance metrics and charts. |
Organizing and Searching Experiments and Models | Recently trained models and datasets are stored as objects in Kubernetes, so you can search for them in the UI or with kubectl (see the first sketch after this table). For long-term search, the system stores the objects in Postgres, and a search interface is provided in the UI. |
Model Review, Collaboration, and Sharing | The system supports a Review custom resource that can be attached to any other object, allowing a team to record a discussion about it (see the second sketch after this table). It also supports a Todo custom resource, which can be assigned to an Account resource. The system supports approval of deployments to production. |
CI/CD/CT Compatibility | The system supports CI/CD via the ModelPipeline object. Using a model pipeline, the user can train a model with AutoML and then test the resulting model in different environments using ML-specific tests (tests based on test data and ML metrics). |
Integrations | The system does not integrate with other ML tools or feature stores. The API contains feature store objects, but no implementation is provided in this release. The system does integrate with third-party databases and cloud providers' storage services. |
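
Because every Modeld object is a native Kubernetes resource, the usual Kubernetes tooling applies to organizing and searching. The sketch below is a minimal, hypothetical example of listing Model objects by label with the official `kubernetes` Python client; the CRD group, version, and plural names ("modeld.io", "v1alpha1", "models") are assumptions for illustration, since the vendor does not publish them here (check `kubectl get crds` on a real install).

```python
# Minimal sketch: search Modeld's Kubernetes-native objects from Python.
# The CRD coordinates below are assumed, not the vendor's documented values.
from kubernetes import client, config

config.load_kube_config()              # or load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

# List Model objects in a namespace, filtered by a user-supplied label,
# much like `kubectl get models -l team=fraud-detection`.
models = api.list_namespaced_custom_object(
    group="modeld.io",                 # assumed CRD group
    version="v1alpha1",                # assumed CRD version
    namespace="default",
    plural="models",                   # assumed plural for the Model CRD
    label_selector="team=fraud-detection",
)

for item in models.get("items", []):
    meta = item["metadata"]
    print(meta["name"], meta.get("labels", {}))
```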
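
The Review resource described above is, by the same logic, just another custom resource, so it can in principle be created programmatically. The following is a sketch under that assumption; the field names shown (`targetRef`, `comments`) and the CRD coordinates are invented for illustration and are not the vendor's documented schema.

```python
# Hypothetical sketch: attach a Review to a Model, treating Review as an
# ordinary custom resource. Schema and CRD coordinates are illustrative guesses.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

review = {
    "apiVersion": "modeld.io/v1alpha1",       # assumed group/version
    "kind": "Review",
    "metadata": {"name": "churn-model-review", "namespace": "default"},
    "spec": {
        "targetRef": {"kind": "Model", "name": "churn-model-v3"},  # object under review
        "comments": [
            {"author": "dana", "text": "Metrics look fine; please double-check the test split."},
        ],
    },
}

api.create_namespaced_custom_object(
    group="modeld.io", version="v1alpha1",    # assumed group/version
    namespace="default", plural="reviews",    # assumed plural for the Review CRD
    body=review,
)
```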