LLM Avalanche

At the end of June, I flew out to San Francisco to do three things. In this post, I want to break down LLM Avalanche: aside from being basically a mini-conference that we called a meetup, it yielded incredible learnings.

MLOps: More Oops than Ops

As model complexity increases exponentially, so too does the need for effective MLOps practices. This post is a transparent write-up of all the MLOps frustrations I’ve experienced in the last few days. By sharing my challenges and insights, I…

Building the Future with LLMOps: The Main Challenges

The following is an extract from Andrew McMahon’s book, Machine Learning Engineering with Python, Second Edition, available on Amazon. Given the recent rise in interest in LLMs, there has been no shortage of people expressing the desire to integrate…

Explainable AI: Visualizing Attention in Transformers

And logging the results in an experiment-tracking tool. In this article, we explore one of the most popular tools for visualizing the core distinguishing feature of transformer architectures: the attention mechanism. Keep reading to learn more about BertViz and how…
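What tools like BertViz render is the matrix of softmaxed query–key scores produced by each attention head. A minimal NumPy sketch of that computation, using random toy tensors rather than a real BERT model:

```python
import numpy as np

def attention_weights(Q, K):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) over keys.
    # Each row of the result is the weight distribution one token places
    # on all tokens -- exactly the quantity attention visualizers display.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query tokens, head dimension 8
K = rng.normal(size=(4, 8))  # 4 key tokens
A = attention_weights(Q, K)

# Every row is a probability distribution over the attended tokens:
assert A.shape == (4, 4)
assert np.allclose(A.sum(axis=1), 1.0)
```

In practice you would obtain these matrices from a Transformers model (e.g. by requesting attention outputs at inference time) and hand them to a visualizer instead of computing them by hand.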

Concepts for Reliability of LLMs in Production

Traditional NLP models are trainable, deterministic, and, for some of them, explainable. When we encounter an erroneous prediction that affects downstream tasks, we can trace it back to the model, rerun the inference step, and reproduce the same result.