Blog

MLOps: More Oops than Ops

As model complexity increases exponentially, so too does the need for effective MLOps practices. This post acts as a transparent write-up of all the MLOps frustrations I’ve experienced in the last few days. By sharing my challenges and insights, I...

Building the Future with LLMOps: The Main Challenges

The following is an extract from Andrew McMahon’s book, Machine Learning Engineering with Python, Second Edition. Available on Amazon at https://packt.link/w3JKL. Given the rise in interest in LLMs recently, there has been no shortage of people expressing the desire to integrate...

Explainable AI: Visualizing Attention in Transformers

And logging the results in an experiment-tracking tool. In this article, we explore one of the most popular tools for visualizing the core distinguishing feature of transformer architectures: the attention mechanism. Keep reading to learn more about BertViz and how...

Concepts for Reliability of LLMs in Production

Traditional NLP models are trainable, deterministic, and for some of them, explainable. When we encounter an erroneous prediction that affects downstream tasks, we can trace it back to the model, rerun the inference step, and reproduce the same result. We...

Is AI/ML Monitoring just Data Engineering? 🤔

While the future of machine learning and MLOps is being debated, practitioners still need to attend to their machine learning models in production. This is no easy task, as ML engineers must constantly assess the quality of the data that...