Meetup #83

KServe Live Coding Session

We will start by serializing TensorFlow, PyTorch, and scikit-learn models to files and deploying an inference service on a Kubernetes cluster. Great MLOps means great model monitoring, so we will then look at inference service metrics, model server metrics, payload logs, and class distributions. For AI ethics in production, we will use the explainer pattern with several different explainers, fairness detectors, and adversarial attack detection. For integrations, we will use the transformer pattern to pre-process inference requests and enrich them with online features from a feature store. Finally, we will look at how to build a custom inference service using the KServe SDK.
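
To give a flavor of the first step, here is a minimal sketch of serializing a scikit-learn model in the format KServe's sklearn runtime expects (a model.joblib file). The dataset and estimator are placeholder choices for illustration:

```python
# Train a small scikit-learn model and serialize it for KServe.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import joblib

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=100).fit(X, y)

# KServe's sklearn server looks for a file named "model.joblib"
# under the storage URI the InferenceService points at.
joblib.dump(model, "model.joblib")
```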
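Deployment can be done with a YAML manifest or programmatically. Below is a sketch using the KServe Python SDK; the service name, namespace, and storage URI are placeholders you would swap for your own:

```python
# Create a KServe InferenceService pointing at the serialized model.
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
    constants,
)

isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_V1BETA1,
    kind=constants.KSERVE_KIND,
    metadata=client.V1ObjectMeta(name="sklearn-iris", namespace="default"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            sklearn=V1beta1SKLearnSpec(
                # Placeholder bucket; point at wherever model.joblib was uploaded.
                storage_uri="gs://my-bucket/models/iris",
            )
        )
    ),
)

KServeClient().create(isvc)
```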
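On the monitoring side, metrics are exposed by the model server for Prometheus to scrape, while payload logging can be switched on declaratively per component. A sketch of enabling it on the predictor, with a placeholder event sink URL:

```python
# Enable payload logging: each request/response pair is POSTed
# as a CloudEvent to the configured sink for offline analysis
# (e.g. class distributions, drift detection).
from kserve import V1beta1LoggerSpec, V1beta1PredictorSpec, V1beta1SKLearnSpec

predictor = V1beta1PredictorSpec(
    logger=V1beta1LoggerSpec(
        mode="all",  # log both requests and responses
        url="http://message-dumper.default/",  # placeholder sink
    ),
    sklearn=V1beta1SKLearnSpec(storage_uri="gs://my-bucket/models/iris"),
)
```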
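The explainer pattern attaches an explainer component alongside the predictor, so the same service answers `:explain` calls next to `:predict`. A sketch using the Alibi explainer spec, assuming a KServe SDK version that still ships it; the explainer type and storage URI are placeholders:

```python
# Attach an Alibi explainer so clients can request explanations
# from the same InferenceService that serves predictions.
from kserve import (
    V1beta1AlibiExplainerSpec,
    V1beta1ExplainerSpec,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
)

spec = V1beta1InferenceServiceSpec(
    predictor=V1beta1PredictorSpec(
        sklearn=V1beta1SKLearnSpec(storage_uri="gs://my-bucket/models/iris"),
    ),
    explainer=V1beta1ExplainerSpec(
        alibi=V1beta1AlibiExplainerSpec(
            type="AnchorTabular",  # one of Alibi's black-box explainers
            storage_uri="gs://my-bucket/explainers/iris",  # placeholder
        )
    ),
)
```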
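The transformer pattern works the same way: a separate component that intercepts requests before they reach the predictor, which is where feature enrichment happens. In the sketch below, `lookup_features` is a stand-in for a real online feature-store client (for example, Feast's `get_online_features`):

```python
# A KServe transformer: preprocess() runs on each request before it is
# forwarded to the predictor behind predictor_host.
from typing import Dict

import kserve


def lookup_features(entity_id) -> list:
    # Placeholder for a real online feature-store lookup.
    return [0.0, 1.0, 2.0]


class FeatureTransformer(kserve.Model):
    def __init__(self, name: str, predictor_host: str):
        super().__init__(name)
        self.predictor_host = predictor_host
        self.ready = True

    def preprocess(self, payload: Dict, headers: Dict[str, str] = None) -> Dict:
        # Replace each entity id in the request with its online feature vector.
        enriched = [lookup_features(entity_id) for entity_id in payload["instances"]]
        return {"instances": enriched}


if __name__ == "__main__":
    transformer = FeatureTransformer(
        "sklearn-iris",  # must match the InferenceService name
        predictor_host="sklearn-iris-predictor.default.svc.cluster.local",
    )
    kserve.ModelServer().start([transformer])
```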
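And for the final step: a custom inference service is a `kserve.Model` subclass whose `load()` and `predict()` you control. A minimal, self-contained sketch, with a toy "prediction" standing in for a real model:

```python
# A custom KServe inference service: implement load() and predict(),
# then hand the model to the built-in model server.
from typing import Dict

import kserve


class CustomModel(kserve.Model):
    def __init__(self, name: str):
        super().__init__(name)
        self.load()

    def load(self):
        # Real code would deserialize model weights here; we just mark readiness.
        self.ready = True

    def predict(self, payload: Dict, headers: Dict[str, str] = None) -> Dict:
        # Toy "model": sum the features of each instance.
        return {"predictions": [sum(instance) for instance in payload["instances"]]}


if __name__ == "__main__":
    kserve.ModelServer().start([CustomModel("custom-model")])
```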