Coffee Sessions #44

Autonomy vs. Alignment: Scaling AI Teams to Deliver Value

Setting AI teams up for success can be difficult, especially when you’re trying to balance giving teams the autonomy to innovate and solve interesting problems with keeping them aligned to the organisation’s strategy. Operating models, rituals, and processes can really help to set teams up for success; but there is no right answer, and as you scale and priorities change, your approach needs to change too. Grant shares some of his lessons from establishing a cross-functional team of data scientists, engineers, analysts, product managers, and ontologists to solve employment information problems at SEEK, and how the team has evolved as it has scaled from 30 people in Melbourne, Australia to over 100 team members across 5 countries in the past three years.

Take-aways

1. There is no perfect blueprint for an AI team (or MLOps or AI Ethics). Operating models and ways of working need to evolve as teams grow and objectives change, which requires clear leadership and principles.

2. Balancing the need for alignment AND autonomy is key for successful AI teams and MLOps. Teams want autonomy, but they also need alignment to organisation and platform goals to deliver outcomes and feel a sense of purpose. Product strategy, ambitions, and OKRs can be really helpful to drive alignment without dictating road maps or stifling innovation.

3. High-impact AI breaks existing constraints (e.g. reach vs. efficiency in search) to solve information problems in new ways, which in turn solves important jobs to be done for users. When designing solutions to users’ jobs to be done, it’s important to think about the experience and the information problem together, and to consider what should be solved via algorithms (e.g. recall and ranking), context (e.g. badges and info), and control (e.g. filters and product features). AI, product, and UX teams each tend to skew toward a specific solution, and the answer is generally some combination of all three.

A bit more detail on 1 and 2 below, given I don’t have a blog post to steer you towards:

1. There is no perfect blueprint for an AI team (or MLOps or AI Ethics), and operating models and ways of working need to evolve as teams grow and objectives change, which requires clear leadership and principles.

- Set clear direction and expectations
  - Teams have strong social and status incentives to maintain their key norms. Even when things are not working, they will struggle to evolve if you don’t constantly reinforce a clear rationale for change and a vision for a better future.
  - There is no obviously right answer. Leaders need to define a starting point for the culture, principles, rituals and processes, and governance.
  - Team health is highly correlated with delivery of outcomes. There will be storming and norming - be decisive and learn fast.
  - “The goal is better, not perfect – we will continually adapt and improve.”
- Define boundaries and interfaces
  - Set guiding principles that define team boundaries (e.g. end-to-end accountability, small teams, strategic alignment, capability).
  - Clearly define how interfaces and accountability will be managed (e.g. contracts, shared OKRs, collaboration goals), and actively manage compliance.
  - Be prepared to change in line with the principles as market and internal context changes, e.g. AIPS phase 1 = know who our customers are and be accountable to them; phase 2 = deliver the core platform and data building blocks for the long-term strategy.
- Let constraints surface
  - It’s easier to add than take away - start with the minimum required to deliver the ‘thin slice’ end to end.
  - Focus on capabilities, not existing roles.
  - When adding capability, consider the trade-offs between dedicated in-team vs. common support (e.g. level of specialisation, level of expertise, demand level / certainty).

2. Balancing alignment and autonomy is critical for successful teams, innovation, and MLOps.

High-performing teams want and require autonomy to be successful, but autonomy without alignment to organisational and team goals is unproductive, and actually quite unfulfilling for teams (they work on interesting stuff they like, but have limited impact and get frustrated). A clear strategy that links to clear team ambitions for the next 2-3 years, plus OKRs, can be really helpful here to keep alignment without killing autonomy, and to line up dependencies across different teams to get things into production. It’s a difficult balance that you never quite get right (see “better, not perfect” above).

The autonomy vs. alignment tension also plays out in broader MLOps objectives, such as common approaches and platforms.

- Teams typically want to do things their own way, and there is a lot of duplication and re-work across teams disguised as local optimisation.
- Building a centralised platform or feature set in the hope others will use it typically doesn’t work (in our experience). But hoping that a standard will emerge also isn’t sufficient.
- Our current view is that you need to set the goals for common use and collaboration, let teams develop promising approaches and gain followership, but then pick a standard where common vs. custom really matters and force the laggards across. Only getting 60% of teams on the common solution typically means you have all of the cost and very little of the benefit.

In this episode

Grant Wright

Director - AI Platform Services & Product Analytics, SEEK Ltd.

Grant heads the Artificial Intelligence & Product Analytics teams at SEEK, where he leads a global team of over 120 Data Scientists, Software Engineers, Ontologists, and AI Product Managers who deliver AI services to online employment and education platforms across the Asia Pacific and the Americas. Grant has held various strategy, product, and tech leadership roles over the past 15 years, with experience scaling AI teams to deliver outcomes across multiple geographies. Grant holds a Bachelor of Computer and Information Science (Software Development) and a Bachelor of Business (Economics) from the Auckland University of Technology.

Demetrios Brinkmann

Host

Demetrios is one of the main organizers of the MLOps community and currently resides in a small town outside Frankfurt, Germany. He is an avid traveller who taught English as a second language to see the world and learn about new cultures. Demetrios fell into the Machine Learning Operations world and has since interviewed the leading names in MLOps, Data Science, and ML. After diving into the nitty-gritty of Machine Learning Operations, he felt a strong calling to explore the ethical issues surrounding ML. When he is not conducting interviews, you can find him stone stacking with his daughter in the woods or playing the ukulele by the campfire.

Vishnu Rachakonda

Host

Vishnu Rachakonda is the operations lead for the MLOps Community and co-hosts the MLOps Coffee Sessions podcast. He is a machine learning engineer at Tesseract Health, a 4Catalyzer company focused on retinal imaging. In this role, he builds machine learning models for clinical workflow augmentation and diagnostics in on-device and cloud use cases. Since studying bioengineering at Penn, Vishnu has been actively working in the fields of computational biomedicine and MLOps. In his spare time, Vishnu enjoys suspending all logic to watch Indian action movies, playing chess, and writing.