After years of failed POCs, one of our models is suddenly accepted and will be used in production. The next morning we join the main scrum stand-up meeting and a DevOps engineer is assisting us. A strange feeling, unknown to us until then, starts growing in the AI team: we are useful!
Deploying models to production is challenging, but MLOps is more than that. MLOps is about making an AI team useful and iterative from the beginning. And it requires a role that takes care of the technical challenges this implies, given the experimental nature of the ML field, while also serving the product and business needs. If your AI team does not include this role, maybe it's time for you to step up and do it yourself! Today, we chat with Ale about the transition from data scientist to self-described MLOps engineer. And yes, you'll need to study computer science.
- There's a generation that joined ML in 2015-2020 with no computer science background. Neural networks were cool and suddenly easy to train, giving mind-blowing results with minimal coding experience. I am from that generation, and to that generation I speak.
- Machine learning becomes boring when your work is constantly thrown away. Escape from the POC wonderland, where the most valuable output is a meetup talk. Has anybody used something you built?
- Fall in love with the problem, not with the technology. ML is just another tool. It can suddenly be replaced by a non-ML solution that makes the product better, and that's awesome. If you want to have more impact, you may want to be more versatile. It's your time to catch up and study computer science.
- The motivation behind MLOps is to manage different ML models and seamlessly promote one to production once it's better than the rest. For this, infrastructure that tracks experiments and that stores and compares models is necessary. It's great that you love reading papers and trying new architectures, but make sure your team has an MLOps mindset and infrastructure or you will get trapped in the POC labyrinth forever.
- The fact that the word MLOps exists is a sign that machine learning development is reaching maturity and it's not a buzzword anymore.
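The takeaways above mention infrastructure that tracks experiments and promotes the best model to production. The core of that idea can be sketched in a few lines of Python; everything here (the class, the run names, the accuracy numbers) is a hypothetical illustration, not any real tracking tool:

```python
# Minimal sketch of the promote-the-best-model idea: record experiment
# runs with their metric, then promote the highest-scoring one.
# All names and numbers are made up for illustration.

class ExperimentTracker:
    def __init__(self):
        self.runs = []          # each run: {"name": ..., "accuracy": ...}
        self.production = None  # the currently promoted run

    def log_run(self, name, accuracy):
        """Record one experiment together with its evaluation metric."""
        self.runs.append({"name": name, "accuracy": accuracy})

    def promote_best(self):
        """Promote the run with the highest accuracy to 'production'."""
        best = max(self.runs, key=lambda r: r["accuracy"])
        self.production = best
        return best

tracker = ExperimentTracker()
tracker.log_run("baseline-logreg", accuracy=0.81)
tracker.log_run("resnet-v2", accuracy=0.89)
tracker.log_run("resnet-v1", accuracy=0.86)
print(tracker.promote_best()["name"])  # → resnet-v2
```

Real stacks replace the in-memory list with an experiment-tracking service and a model registry, but the comparison-and-promotion loop is the same.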
[The Journey from Data Scientist to MLOps Engineer // Ale Solano // MLOps Coffee Sessions #80](https://www.youtube.com/watch?v=4kIUXlP7SqE&t=10s&ab_channel=MLOps.community)
**Welcome back. It looks like you're coming on and you're being a guest host of honor these days.**
**Yeah, you’ll get bored of me now, surely.**
**For those who do not know Adam Sroka, whose last name means “whale” in Welsh, he is… [laughs] It's not that, is it? It's not that.**
**It might be the third time you’ve made that joke on the podcast as well. [laughs] [cross-talk]**
**[laughs] Every time I'm gonna just say that it's something different than it is. “It's a horse. Adam Sroka means a horse’s tooth in Slovenian,” or whatever. But anyway. [cross-talk] [laughs] I'm sorry, I can't help myself. Anyway, we got a special, special episode today. We talked with Ale Solano, who took the leap from data scientist to machine learning engineer to – basically, now as we found out in the call – he's going deeper and deeper into being a software engineer.**
**Down the rabbit hole.**
**Yeah, he got the itch. He got the bug and he wanted to do more. It was awesome. For me, a few takeaways – and then I'll let you give some takeaways. Or why don't you start? Give me some takeaways that you had.**
**Oh, for me, it was really interesting. Because it’s a very similar route to what I've done, right? I've been a data scientist, but I kind of went more on than the manager track. But now I’m doing more machine learning engineering, I've seen that to be quite the trend with a few people I know as well – peers. A major takeaway for me actually was the mindset. Ale's got a really cool mindset and it's something that, if more people emulate that approach, they'd have a lot more success quicker. Yeah, it’s really quite a mature way of approaching ML software and all that. So it was quite refreshing. It's all the kind of stuff you read and you think, “Yeah, that's neat.” To have someone say it and just the self-discovery as well. It was really quite cool.**
**Yeah. And not only say it – embody it. **
**Which is also really cool to see. I mean, that's very true. And it feels like there were multiple conversations going on here. We talked about some just nitty-gritty “how to” stuff or “what did you do” practical things. But then there were the higher level things with “what is the mindset?” or “what are the strategic ways of bringing this in?” And I enjoyed that. The other thing that I thought was just an awesome callout – and it makes total sense that he said it – but when he wrote us before coming on here, he talked about how, from 2015 to 2020, data science was all the rage and you had this influx of data scientists coming on to the job market, who _never_ had to do _anything remotely_ close to something like software engineering. They lived in their Jupyter notebooks and they didn't have to worry about it – as you know quite well, you mentioned that somebody you worked with a few years ago never worked in Git.**
**He wouldn’t use it, yeah. [cross-talk] He just refused.**
**Oh, man. Yeah. I wonder how he's doing now. [chuckles] So that just shows, from that time period, it was possible – from 2015 to 2020. It feels like we're out of that phase. Like, you can't be a data scientist who refuses to use Git these days. And I really appreciate Ale talking about this because he went from being a data scientist to saying “I need more.” And it was _because_ he was heartbroken when all of these models that he was spending all this time on, and that he would put all this love and care into, never actually realized any business value. Nothing happened to them, so he mentioned how he was crushed by that. And he said “There must be a better way,” and hence, he started getting into ML Ops and he started going down the rabbit hole.**
**Hey, I think it was a really cool chat. Ale is a great person to talk to. He’s a real advocate for the community as well and some of the value you get from all the lovely people in the ML Ops community.**
**That's it – all of you listening, everybody that is here in the community. I hope you enjoy this conversation with Ale Solano. Just to remind you all – we've got cool merch. If you are not watching YouTube right now, I'm wearing a sweater that is our merch. You can pick it up at MLOps.community. It has been by far the best way I've found to burn through sponsors’ money so far [laughs] because nobody's buying this shit. Come on, people, get out there and buy it. Help us out. Support the community. And if you are listening on Spotify, or iTunes, or Apple Podcasts, whatever they call it – you can leave reviews. You can hit that “follow” button to get updates or notifications when we release new episodes. That would be a huge help for us. And what would be even _more_ of a help, if you're feeling _really_ ambitious, is to leave a review and let people know how much you like (or dislike) the show. If you dislike it, maybe tell us first before you leave a bad review and we'll do something. We'll send you some merch. [chuckles] Cause we've got a ton on hand. I'm joking. Merch has been selling great. So, Adam, thanks for coming on here, as always. And let's get into the conversation with Ale.**
**Ale Solano joins us today and he has to be the _number one_ intro that I've ever seen in the ML Ops community slack. If you missed it, I posted it all over Twitter and LinkedIn at the time. But it went something like this, it said “I was a happy data scientist until the day my boss came and told me I needed to productionize the model that I was building.” And then he found the ML ops community – Ale, it's great to have you here, man.**
Yeah. Great to be here. I mean, it's a pleasure. It's so weird because I'm used to listening to the podcast and to listening to a lot of big names and experts – and now I'm here, which is… I'm just a normal guy. I’m kind of a fan.
**Alright. [cross-talk] You're a little bit humble, because you've done some real soul searching from that time that you had to productionize models. And I would say – what I've seen – the growth that I've seen and the desire to actually learn ML Ops from the data science side, first of all, that's not an easy task. But it's been incredible to see what you've done. So there's a lot that I want to get into. I think the first thing that we can get into is just a little bit of a story of what you're doing now. What is the use case? What are you working on? And maybe explain to us that intro that you gave. What was happening at that point in time? I think it was over a year ago now that you came into the community. Can you give us the use case and all of that?**
Okay. Yes, of course. So, it happened a year, a year and a half ago, I think. And I was in this company – the company is a content creation company – they create content, like photos, like videos, like text for brands, especially for marketing purposes. For example, if you're McDonald's, and you need to create a new campaign for a new promotion plan for families, you need pictures, you need images, you need a video for television, you need pictures for Instagram, text, whatever. This company was already doing that but they wanted to automate the process of content creation, because it's the most expensive one – creating the images. So they hired me to try to automate this part. Well, this is a very challenging task. But I joined because of the challenge. And what I found is that I was into, “okay, do a POC… and then another POC… and then another POC,” and they were building a software product to arrange all the marketing campaigns, kind of in parallel with me. So I was kind of in a dark room doing POCs. At some point, we needed to promote the POC to a product. This story is interesting. But yeah, at some point I needed to do it. And I found you and it was kind of a miracle because I found a lot of people like me that were having the same problem, who said “This is what we need.”
**Really interesting. It’s funny because it kind of mirrors my experiences as well, actually. It's that thing where you end up being the data scientist in the cupboard – just kind of an extra thing. I think there was a big hiring spike about five, six years ago where people thought “Ooh! We need to get on board the ML train,” and then it never went anywhere. I’ve actually been reading a lot of stuff recently about “the last mile of decision making” and this kind of thing. It's actually beyond getting into production – it's taking action from it. Yeah, if you're not doing that, then it’s just an expensive hobby, right?**
Yeah, exactly. I think that it makes sense, and that's one of the points that I want to make today. It makes sense for a company – in my case, it was kind of “Okay, let's try and see if this works.” But for the individual, it's kind of “Oh, I'm wasting time. All my work is being thrown away.” What I would advise my younger self is: “Focus on the things that you feel are more useful. Learn computer science, learn software, and try to help as much as possible. Do not commit yourself, or restrict yourself, to machine learning.” _Or_ “If you want to do machine learning, do it correctly.” And that's why we need ML Ops. That's the problem that ML Ops solves.
**There's something super interesting that you’ve mentioned to me before, and that was, from the time period of like 2015 to 2020, you had this influx of data scientists coming onto the market, who didn't need to know anything about computer science and they could realize value – or they could just play in their Jupyter Notebooks and toy around on things, and _still_ make it through their degrees and university and potentially, from their first and second jobs – they could only live in that environment. And when they read papers, they hear about “What's the newest, coolest, and cutting edge in the ML scene.” But you also mentioned, there is that whole piece, which is that they're not realizing value, and so – you need to go outside of that box, I guess, is what I'm trying to say. Can you dig into that a little bit more?**
Yes. So that's my story. I didn't study computer science – I studied Robotics Engineering. But then, in this 2015 to maybe even 2020 period, machine learning was an explosion. There were _amazing_ use cases. Then there was this new technology that was very well explained and intellectually very interesting – neural networks. And that's why I kind of jumped into it. Because mathematically, it was simple, it was beautiful, and every paper was a new thing. You felt like, “Oh, my God, I want to do this.” In my experience, in all the teams that I've been on, I think only two people had studied computer science. It was all physicists, mathematicians, maybe even electrical engineers. And then we are all in the same academia mindset, in which we don't really care about doing a product – we don't really care about creating value. And you join companies that are in the same state. Maybe they want to, but they don't know how to do it. So it's kind of a bubble until it explodes.
**I completely get that. I almost started laughing because I was there. I remember in my early days – I was a physicist and kind of self-taught in software engineering and stuff. I’d read books and things. I actually read _Code Complete_ on lunch breaks at one point. But I worked with a guy who was a PhD data scientist and he refused to use Git. He just wouldn't use it. He just wouldn't do Git… But it was like, “That's how you work in a team. What do you mean?” [laughs] It was wild west stuff, though. You had all these people. And it attracted people because it was the “sexiest job of the 21st century” and all the money and it's exciting and it's interesting, right? It's good stuff. And actually, I think it was quite dangerous, because a lot of people could hop jobs, like after a year, without ever doing anything. I used to call it “CV-driven development” – and I've had other, ruder phrases for it as well – but people would know the solution before they knew the problem. Like, “Oh, yes – the recurrent neural network!” “I haven’t asked you anything. Wait a minute.” And it's because they want to get experience with that and move on to the next thing. I suppose…**
Yeah, that's exactly what I feel like. So… you said something that I just forgot. So yes, continue.
**Then, I guess, you come into the real pain of actually – when you dig into ML Ops, I'm just interested to know, what were the first things that really gave you a headache? Because there's so much in ML Ops and things you have to think about. Even now I'm rewriting old models of delivery because I have new artifacts in them and new approaches and stuff. So what was the first big “Ah, one minute. I'm gonna need some help” moment?**
Yeah, I mean, yes. What you said before is that there are people out there that are like, “Okay, I refuse to use Git. I refuse to learn anything that transforms my work into a product.” And I think that there are a lot of people that can live with that and maybe it's fine. It's fine if you have a manager that handles that. But for me, I can’t live with that. I thought I could live with that. But at some point, I just said, “Oh, my God. It's been how many years of me working and I haven't built anything that is used by people.”
**A rude awakening. **
Yeah, that's an awakening. There was kind of no point where I said, “Boom, I need to do something.” It was just kind of gradual. I could say that, yes, at some point that I realized that software people were doing it correctly. They were iterating very quickly. They were transitioning from development to production very quickly – every two weeks, and were delivering more value. And I said, “Man, I want this. I want to be part of the product that is used by people. I want to release value every two weeks. I don't want a POC that lasts for three months.” And I read this book that I recommend. [shows book]
**What’s it called? Sprint?**
Sprint by Jake Knapp. You don't have to read the whole book – you just have to read the introduction. Or maybe just this [points to the cover]: it says “test new ideas in just five days.” From my experience, my reaction was, “Can you do that?” And yes, you can. If I had to pick the moment of “Yes, I need this [shows book]. I need ML Ops,” it was reading the introduction of the book.
**That really made me laugh, that. I've come down to the office for the week and _that_ is the book that's in my bag. I read it _ages_ ago and I'm just rereading it. So it’s sort of funny, really. It's a really good book and there are cool workshops and other material that follow on from it. Yeah, it’s really well done.**
**Tell us, Ale, how did you start implementing it? So you went from data scientists to really sinking your teeth into ML Ops. What were some things you started doing? And what did that transition look like?**
Yes. At the beginning, I was kind of alone in my team. I was the only member of the ML team. And I tried to do this myself. I tried to say, “Okay, I need to do the POCs better. I need to make something that is iterative.” So I began by trying to encompass everything into a Docker image, trying to upload it to Amazon, and then to do version control of data and models. It was difficult for a single person. But I learned a lot, and all the learnings were because of the MLOps community. But then everything changed when the product manager of the team changed. He said, “Okay. We have this person here. What is he doing? Is there a way that we can either kick him away or integrate him into the whole product cycle?” And he managed to do it. He hired more people onto the machine learning team, and one of these people was a girl who is amazing. She also had this MLOps mindset, and what we managed to do was say, “Okay, if we first need to do POCs, we're going to test in just one day, two days, three days. The first thing that we're going to do is deploy. After we deploy, we're going to create a repository and we're going to just iterate, iterate, iterate.”
**How did you go from a POC that was three months to three days? Was it because you set up all that stuff on the back end and you said, “All right, here's the foundation.”? And then you brought on the new teammate. But it seems like… what was the big shift there? How do you have such a Delta – from three months to three days?**
I think most of it was the product manager knowing what we wanted, knowing the time that we had, what was possible, what was not possible, and what we could do in the time that we had. So it was just testing ideas, and the outcome was, “Can we do it? Can we do it quickly? Is this really something useful? Is there something better that we can do?” And at some point, we became able to do this very quickly. We could get the outputs very quickly. Then we got to the idea that we wanted: “Okay, this is the one. We're going to go serious with this.” And we just started.
**I'd be interested to hear, for someone joining the community today, someone that's where you were a year ago – what would be the first thing you'd point them at? What was the biggest win, the low-hanging fruit, that you think “Actually, that's the thing I should have done first because that had the biggest effect”?**
First of all, to know what the problem is. “Okay, what is the problem? We are solving this problem right now, how? How do we measure this problem? Let's put a metric here. Do we have some benchmark? Can we solve this easily right now with maybe software?” And then “Can we do it with machine learning? Can we improve it with machine learning?” So the first thing is to know what you're trying to do. Know what is valuable. The second thing is to measure. And the third thing is to have a benchmark. After that, you just have to improve it with machine learning and it's easier – a lot easier.
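Ale's order of operations – know the problem, pick a metric, set a benchmark, and only then ask whether ML improves on it – can be sketched concretely. The data, the baseline, and the "model" predictions below are all made-up stand-ins used purely to illustrate the workflow:

```python
# Hypothetical sketch: define a metric, measure a simple non-ML baseline,
# then adopt ML only if it beats the benchmark on that same metric.

def mean_absolute_error(y_true, y_pred):
    """The agreed metric for this (toy) problem."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy ground-truth values (hypothetical numbers).
y_true = [10, 12, 9, 14, 11]

# Benchmark: plain software, no ML -- always predict the overall mean.
baseline_pred = [11.0] * len(y_true)
baseline_mae = mean_absolute_error(y_true, baseline_pred)

# Candidate "ML" model's predictions (stand-in values, not a real model).
model_pred = [10.5, 12.2, 9.4, 13.5, 11.1]
model_mae = mean_absolute_error(y_true, model_pred)

# The decision: does ML actually improve on the benchmark?
use_ml = model_mae < baseline_mae
print(f"baseline MAE={baseline_mae:.2f}, model MAE={model_mae:.2f}, use ML: {use_ml}")
```

The point is the shape of the decision, not the numbers: with a metric and a benchmark in place first, "can we improve it with machine learning?" becomes a measurable question.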
**So you also mentioned to me that CI/CD became really important for you. Can you go into that?**
Yes. So, we were not alone. We began with a DevOps person from the software team who was trying to help us. We had this idea of making everything deployable by “next Friday”. The first thing that we needed was the CI/CD that uploads everything to Amazon so that it is immediately usable. What we had was two pipelines. One pipeline was for training and the second pipeline was for inference. So part of the CI/CD was to build the new development, upload it into a kind of development stage, and run the training pipeline. And all of this was done with CI/CD. We also did it with Terraform. That was pretty useful. It was pretty amazing to see how suddenly everything that you do – every experiment that you do – becomes something where you can see the results right away: “Okay, for this model, the accuracy was better or was lower. Have we improved with this development?” For this, CI/CD is very, very, very important. Just one click and you have the results.
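The check Ale describes – every commit runs the training pipeline in a development stage and reports right away whether accuracy improved – might look roughly like this in spirit. The function names, commit hashes, and scores are all hypothetical; the real setup used AWS, Terraform, and separate training and inference pipelines:

```python
# Hedged sketch of a one-click CI gate for model changes. The lookup table
# stands in for the real training pipeline; in practice this would build a
# Docker image, deploy to a dev stage, train, and read back the metric.

def train_and_evaluate(commit_sha):
    """Stand-in for the training pipeline; returns an accuracy score."""
    fake_results = {"abc123": 0.90, "def456": 0.84}  # hypothetical runs
    return fake_results.get(commit_sha, 0.0)

def ci_gate(commit_sha, production_accuracy=0.87):
    """Did this development beat the model currently in production?"""
    new_accuracy = train_and_evaluate(commit_sha)
    improved = new_accuracy > production_accuracy
    print(f"{commit_sha}: accuracy {new_accuracy:.2f} "
          f"({'improved' if improved else 'regressed'} vs {production_accuracy:.2f})")
    return improved

ci_gate("abc123")  # a commit whose model beats production
ci_gate("def456")  # a commit whose model regresses
```

A real pipeline would wire this gate into the CI system so a failing comparison blocks promotion; the "one click and you have the results" experience comes from automating exactly this comparison.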
**[chuckles] Yeah. You mentioned that you enlisted, or you recruited, someone from DevOps to come and help you with that. One question that I had earlier was like, as your team was growing and as there was that reorganization, or as your product manager realized that you were doing something useful and maybe “we can actually get some value out of what you were doing,” who were the other stakeholders that were involved in that? You had the product manager, I guess you enlisted someone from the DevOps crew, you had a new hire that came on that was very MLOps-thinking. Who else was there?**
We had a team of four. In this team of four, we were three developers, and then there was this girl who was kind of a mini product manager of the ML team. This girl was crucial because she could communicate with the product manager of the real software product. She could transform ideas into tasks and it made things very easy – like super-easy for us. So it's very important to have this conversation with a product manager – to understand what he or she is looking for and then to translate that into tasks – into real things, into things that you can test in two weeks, into things that you can deploy in two weeks. That was crucial. Then there was this DevOps guy who was translating everything into code, automating everything, just as he was doing with the software team. Then there was another person – he was like a product owner – who was constantly checking with customers whether the mission we had was the right one or not.
**It's interesting, because I think that this transition to a product and a product team that works that way – that can be really jarring for some. And actually, I think the verdict’s still out – there's like countless things going on about how Agile is not the right approach for data science, for machine learning, or it is – it's the only approach. I think Laszlo posted an article recently about how it's the only way to do it. I'm just interested in your experience then. I think I know what side of the fence you fall on, but I just want to hear about your experience of learning to be agile as a data scientist.**
I think it's possible. I think it's possible and I think that's the dream that we in this community have – to make everything agile, iterative, quick, closer to the user, closer to the value that we're creating. It makes work life much, much better for the developers, because you see what you're doing and you see progress. I think it's possible, but as we are discussing in the community, the ML stack is not here yet. I think it will be because, in my mind, this thing that software achieved – delivering value every two weeks – is going to become the standard for every field. Machine learning is very close to software. It’s the closest thing. So machine learning is going to be the next thing that does this: agile, quick, measurable. We're gonna get there.
**Yeah, it's one of those things I strongly agree on. I think it's not perfect – the big argument that comes back is with the exploratory bit, right? “How do I actually get time to explore and dig around and apply models?” But my thought was always, just timebox that stuff and sort of iterate that way. That doesn't fly with a lot of people, like purists and things like that.**
Yeah, I think that that's the difference – or one of the main idiosyncrasies of machine learning – that you need to start with a lot of experiments, a lot of things that you don't know. You don't know the result, you don't know the outcome, you don't know if this is going to work or not. So you need to do a lot of exploration, and you need to store the data and the models. But if you have the problem that you want to solve in mind, and you think of machine learning as just the technology that maybe can help solve it – “Okay, how can we solve this problem? This is the data that we have. But is this the data that is going to help us solve the problem?” – that gets rid of the exploration that you don't need. I think we are learning, and the tools that are appearing in the ML stack take this into consideration: “Okay, machine learning is built not just on commits, but on experiments.” And with that difference, I think you can do agile.
**So, along those lines, I mean, there's something that is fascinating for me, just hearing about your journey and really this evolution that you've gone through. Particularly, this is probably one of the reasons why I'm so attracted to this story – how did you recognize what was a necessity when you were first starting out? And then what you wanted to get to later on? Speaking primarily on the infrastructure side. Sorry, I didn't mention that. But I'm thinking, you mentioned their CI/CD, I think you also mentioned before that you were doing something with experiment tracking, right? So what were the pieces of the stack that you said, “I need to figure this out?” You also said data versioning was important. What are these pieces for you that were like the foundational building blocks where maturity level is zero and then as you've evolved, and you started to see, “Well, maybe we can add this. Or maybe it's better if we trade out this piece.” What were those foundational pieces and then what did the evolution look like?**
Yeah, I think what I wanted was “I want to make this iterative.” And I started just adding technologies to test. One of the things that I wanted was version control – not only for data, but for the models that we have. And I was just looking for tools that helped me do that. So yes, I tried DVC for the data version control. When I jumped into trying to do version control for models, that's where I found, “Wow, there's a lot of variety for doing this.” At that point, we decided to just deploy with CI/CD and measure as we went. We never got to the point where we were doing experiment tracking correctly. But the first thing was, “We need to make this iterative, and technologies can help us.”
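The version-control idea Ale reached for with DVC – tying every experiment to an exact, hashed snapshot of the data it saw – can be illustrated with just the standard library. This is a toy sketch of the concept, not how DVC actually stores or tracks anything:

```python
# Toy illustration of content-addressed data versioning: identify each
# dataset version by the hash of its contents, so you can always tell
# whether the data behind an experiment has drifted.

import hashlib
import json

class DataRegistry:
    def __init__(self):
        self.versions = {}  # tag -> content hash

    def _digest(self, data):
        """Deterministic hash of any JSON-serializable dataset."""
        payload = json.dumps(data, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def snapshot(self, tag, data):
        """Register a named version of a dataset."""
        self.versions[tag] = self._digest(data)
        return self.versions[tag]

    def changed(self, tag, data):
        """Has the data drifted since the tagged snapshot?"""
        return self._digest(data) != self.versions[tag]

registry = DataRegistry()
registry.snapshot("train-v1", [{"x": 1, "y": 0}, {"x": 2, "y": 1}])
print(registry.changed("train-v1", [{"x": 1, "y": 0}, {"x": 2, "y": 1}]))  # False
print(registry.changed("train-v1", [{"x": 1, "y": 0}, {"x": 3, "y": 1}]))  # True
```

Tools like DVC apply the same principle to files on disk, storing the hashes in Git so data versions travel with code versions.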
**Excellent. Yeah. Super helpful to think about. Did you have something, Adam? I see you. [cross-talk]**
**So, it just brings me on – I always like to hear people's thoughts. On the subject of tools and technologies, then – what are you most excited about? What's next to learn? What's up and coming? And what's missing? Like, what do you think – where's the big hole that no one's quite plugged yet?**
Okay, I have to say that I don't know yet, because I've been outside machine learning for a couple of months. Especially because I’m trying to center myself and focus: “Okay, let's see machine learning as a technology – as one possible solution – but I'm going to park it here. Let's leave it there and try to solve things without machine learning.” So the technologies that I'm learning, and that I'm using more, are related to system architecture, cloud computing, networking, and microservices. I'm trying to do it properly with software and to learn as much about software system design as possible. And then I will jump back into machine learning and say, “Okay, how can we do the same thing with machine learning?”
**Now, that makes sense. I get that. It kind of leads me on to something I think a fair bit about – but is it too much? You're talking about any one of those things, like software engineering, system design – they’re whole jobs in their own right. Is it too much of an expectation? I’m a big strong believer in the T-shaped engineer but there is a limit to how much you can pick up with competence, right?**
I think so. I think that it's very difficult for all people to know _everything_. I don't expect everybody to do it. And I don't think that's the thing we should be looking for. That's why we are developing all these tools – so that it isn't a requirement. But I am just acting as an individual – as a person. I mean, I want to know. I want to be as useful as possible. One of the decisions I took is – I left the company and took some months to learn this, because I have these huge knowledge gaps that I just needed more time to fill. But you can’t do this with every person. You can’t just say, “Okay, everybody leave your jobs and start learning computer science.” There are a lot of problems here, and a lot of possible tools that can save time and avoid requiring people to learn everything. The data scientists focus on making the best models. The DevOps people try to make everything as smooth as possible. And everybody has their own job that they can just focus on and be the best at. It would be great if everybody knew a lot, but I think it's not tractable.
**So you've got a bit of a hot take when it comes to why machine learning models fail and this famous quote that – I have no idea where it came from, but it haunts us in the ML Ops community – that 80% of the models don't make it into production. And I want to hear your take on that. It's really interesting when you talk about this idea of the managers and data scientists not really understanding the problem that they're attacking, and the clear lines of communication not being there either.**
Yeah. Well, that line is kind of old, but I think it reflects the buzzword status machine learning had some years ago. I don't know if it applies today. But, as you say, there were a lot of companies that thought they needed machine learning that maybe didn't. And, of course, if you don't need machine learning, how are those models going to end up in production? So maybe there were more jobs and use cases than real problems machine learning can solve. Maybe that's one of the reasons for that sentence. Another reason is that, for machine learning models, you need to run experiments, as we said. You need to get rid of the rest and just promote the best one. So yes, I think that even with that, there were some companies that thought, “Okay, yes, we need machine learning,” but still didn't have the mindset and tools to manage these people and these problems – because the people have this academia mindset and the problems have this experimental idiosyncrasy. So it's difficult to put everything together. But these were the first years of machine learning, so it's kind of normal.
**Dude, awesome stuff. This is so good. I love this journey that you've taken as a data scientist – going into the software engineering field via the machine learning engineer route, and you're going deeper and deeper. And I know David Aponte – who is also a guest host on here sometimes – he did the same thing. He was a data scientist, then became an ML engineer and now he's going really deep into software engineering. So, it feels like that is a very viable route and a very interesting one, but the higher level things that you're talking about here are probably my biggest takeaway. Like, do we need to use machine learning? If so, how can we make sure that we iterate on it as fast as possible in order to see how much value we can get out of it? That's huge. So I just want to end with something that you wrote me when we were talking about coming on here. We kind of talked about it, but I want to make sure that it stays in everyone's head and call it out real clear. I'll just read it off now. It's basically: going from a data scientist to an ML Ops engineer – first, you should know about agile and iteration. Second, know about continuous integration and continuous delivery. Third, know about DevOps tools like Docker, CI/CD pipelines, and REST APIs. Fourth, know about cloud services. Fifth, check out the website i.am.ai/roadmap – we'll leave a link to that in the description. And sixth comes computer science courses, like algorithms, computer networking, and operating systems. That has been your path, and it looks like it's bringing you a ton of success. I really appreciate the insights you've given us here today. Thanks for coming on here, Ale. It's been awesome, man. [outro music]**
Thanks for having me. It's been a pleasure. Yes. Call me whenever you want.
**Great to speak to you**
In this episode
Ale is born and raised in a mid-small town near Malaga in southern Spain. Ale did his bachelor's degree in robotics because it sounded cool and then he got into machine learning because it was even cooler.
Ale worked in two companies as an ML developer. Now he's on a temporary hiatus to study business and computer science and get a motivation boost.
Demetrios is one of the main organizers of the MLOps community and currently resides in a small town outside Frankfurt, Germany. He is an avid traveller who taught English as a second language to see the world and learn about new cultures. Demetrios fell into the Machine Learning Operations world, and since then has interviewed the leading names in MLOps, Data Science, and ML. Since diving into the nitty-gritty of Machine Learning Operations he has felt a strong calling to explore the ethical issues surrounding ML. When he is not conducting interviews you can find him stacking stones with his daughter in the woods or playing the ukulele by the campfire.
Dr. Adam Sroka, Head of Machine Learning Engineering at Origami Energy, is an experienced data and AI leader helping organizations unlock value from data by delivering enterprise-scale solutions and building high-performing data and analytics teams from the ground up. Adam shares his thoughts and ideas through public speaking, tech community events, on his blog, and in his podcast.