‘If you’re not embarrassed by your first product release, you’ve released it too late’ — Reid Hoffman
We’ve all heard of the study that says more ML projects fail than succeed. Even if you don’t accept the figure that’s usually bandied about, it’s reasonable to say that a fair number of AI-related initiatives die on the vine. More go bad than they should.
While there are lots of reasons why an ML project might not see the light of day, today I want to focus on one in particular: the tendency in some corners of data science to strive single-mindedly toward delivering perfection.
It’s a pretty common issue across technology. From engineers to programmers, system architects to data scientists, it’s no surprise that people steeped in advanced maths and trained to quantify the most complex numeric interactions aim for 100% certainty in their professional pursuits.
But what if perfection isn’t readily available? Sure, the truth is out there. But the commercial considerations driving most ML initiatives come with timelines and pressures that won’t wait for a bulletproof model or data pipeline. Too often, the deadlines are simply too tight.
On the other hand, you probably will have enough time and resources to create something ‘good,’ by which I mean workable, reliable, and proven in a handful of well-defined (even if slightly narrower than you’d like) applications or use cases.
Keeping it real in a culture of perfection
If you’re building a predictive model to guide critical business decisions, obviously, you want it to be as accurate as possible. That instinct to aim for perfection is complicated by the culture of data science in academia.
The Kaggle competitions are a great example of this. The world’s best data scientists and data engineers gather and put forward models that are innovative and impressive, but often generate pretty small performance wins. What do they get in return? Massive jobs and rock star salaries at the likes of Alphabet and Uber.
And good on them! But the problem with rewarding model perfection is that it demands a considerable amount of organisation and effort around infrastructure to reach that performance last mile. The data hygiene has to be ultra-clean … which doesn’t often happen in the real world.
Every cleaning task you add to a training set must be mirrored by a systems task that ensures the data you run through the model in production is just as clean. Otherwise you risk one of two failure modes: the model that performs best falls down in production because the data fed into it isn’t as clean as the training set, or you realise the data pipeline infrastructure itself is a barrier to adoption.
Either of those can bring progress to a screeching halt.
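One common way to guard against this kind of train/serve mismatch is to route training data and production data through the same cleaning code, so no hygiene step can exist in one path but not the other. A minimal sketch (all names and rules here are invented for illustration):

```python
# Sketch: one shared cleaning function used by both the training path
# and the serving path, so every transformation applied to the training
# set is guaranteed to run on production inputs too.

def clean(record: dict) -> dict:
    """Apply the same hygiene rules everywhere (illustrative rules only)."""
    out = dict(record)
    out["email"] = out.get("email", "").strip().lower()
    out["age"] = max(0, int(out.get("age", 0) or 0))
    return out

def build_training_set(raw_rows):
    # Training path: clean every row before it reaches the model.
    return [clean(r) for r in raw_rows]

def predict(model, raw_row):
    # Serving path: the identical cleaning step runs at inference time.
    return model(clean(raw_row))
```

The design choice is the point, not the rules themselves: if cleaning lives in one function, the training set and the production inputs can’t silently drift apart.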
Trying to boil the ocean
Another issue that can knock AI initiatives off the rails is over-scoping — trying to wrap models around business processes that are simply too big or complex and overloaded with variables.
Maybe in these cases, there’s pressure from senior leadership to use AI as a sort of computing crystal ball that can take all business data and use it to predict business outcomes. It’s OK to reach for the stars, but like striving for perfection, practical barriers could mean you’re destined to fall short.
Take sales forecasting as an example. You might be handed an objective to predict what next year’s revenue will be. So you train a model using data from the previous year or years and use it to forecast next year’s numbers. What you’d get with that methodology is probably about as reliable as predicting the weather. The further out you try to forecast, the less accurate you’re likely to be.
With a scope that’s so wide-reaching, the model will be undone by complexity. The underlying business drivers the model has to track will be constantly changing as new products launch, new advertising and SEO campaigns break, and sales opportunities move through the funnel. Any change at the top of the funnel will cascade downstream and affect revenue. The model, however, won’t be able to register changes until they’ve had time to move through the system. While it’s doing that, more changes will be coming behind it. You end up on a treadmill that makes an accurate prediction almost impossible.
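The weather-forecast analogy can be made concrete with a toy simulation (every number and function name here is invented for illustration): if monthly revenue drifts like a random walk, the spread of plausible outcomes widens as the horizon grows, so a twelve-month forecast is inherently fuzzier than a one-month one.

```python
import random

def simulate_paths(start=100.0, months=12, sigma=5.0, n_paths=2000, seed=42):
    """Simulate many possible revenue paths under a toy random-walk model."""
    rng = random.Random(seed)
    outcomes_by_month = [[] for _ in range(months)]
    for _ in range(n_paths):
        level = start
        for m in range(months):
            level += rng.gauss(0, sigma)  # random monthly shock
            outcomes_by_month[m].append(level)
    return outcomes_by_month

def spread(values):
    """Standard deviation: how far apart the simulated outcomes are."""
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

paths = simulate_paths()
# The spread of outcomes at month 12 is far wider than at month 1,
# which is why long-horizon point forecasts are so unreliable.
one_month, twelve_month = spread(paths[0]), spread(paths[11])
```

Under this toy model the uncertainty grows roughly with the square root of the horizon; real businesses are messier still, since the drivers themselves keep changing, as above.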
So what do you do? Aim a bit lower. Find joy in the achievable, and embrace the idea that the path towards an ideal model or pipeline is likely to be iterative.
‘You have to start somewhere,’ says Satish Chandra, Co-Founder of Slang Labs. ‘It may not be possible to have a schema for each and everything in your system at the beginning, but you can’t use that as an excuse for not doing anything.’
Begin by applying a bit of foresight and working out how the data will be used, he says. Look at the statistics, and think about what actions you have at your disposal to affect a business outcome.
‘What are the practical levers available that the business can move?’ Those will define your limitations and help show you what’s achievable.
Watch the full MLOps Community podcast with Satish here.
Beauty is in the eye of the beholder
Another thing to think about is that there might be more than one definition of ideal when it comes to creating schema, pipelines, and projects.
Look at all the great answers you get to community queries on Stack Overflow. Is there one gospel truth to solve any data pipeline problem? And what if imperfect pipelines are actually a necessary condition for continuous improvement?
Maybe the best way to approach running a machine learning project is to follow this simple dictum: first you deliver, then you iterate.