5 Machine Learning lessons for Product Managers

Anna Buldakova
13 min read · Sep 5, 2019

AI is the biggest commercial opportunity in today’s economy. What does it mean for us as product managers?

We all use ML-driven products almost every day, and the number of these products will keep growing rapidly over the next couple of years. According to Crunchbase, in 2018 there were about 5,000 startups relying on machine learning for their main and ancillary applications, products, and services. In 2019, just one year later, there were almost 9,000 of them!

ML and, one level higher, AI — coupled with robotics and the Internet of Things — are considered to be the fourth industrial revolution. By 2030, AI is expected to contribute $15.7 trillion to global GDP, “making it the biggest commercial opportunity in today’s fast changing economy,” according to a recent report by PwC. And, like all industrial revolutions before it, it will have a great impact not only on our economy but on all other aspects of our lives.

What does this mean for us as product managers? First, as business owners realise the impact of AI and integrate it into their key business processes, it will be increasingly important to understand at least some AI basics, even for those who don’t work on AI products. Second, as great product managers are also great capacity builders for their teams, you should start looking for opportunities that AI can open up for your product. I’d like to share five lessons from my experience that can help you start your own journey.

Lesson #1: Understand the problem you’re trying to solve with ML

Every product development process starts with identifying the right problem to solve: you all remember that users don’t buy a drill for the drill itself, or even for the beautiful hole it can make; they buy it for the nice dining room they can decorate with a family picture. With machine learning, though, the solution itself is so exciting and novel that it’s tempting to forget to ask yourself why you need it in the first place. It’s like when a new iPhone comes out: for some people, it’s more about joining the hype and standing in the queue than about the new functionality.

From my experience, problems that ML can help solve would usually fall into one of these buckets:

  • Could we make user experience more tailored and personalised?

Imagine that you are going to a coffee shop. Which one would you prefer: the one where the barista knows your name and your favourite drink, your favourite music is playing, and the chair fits your height and build, or the one where everything is made for an average customer? For a long time we built products for the majority (that’s how mass production works), but in a world where personalisation becomes possible at scale, we actually can and should build for everyone. People have a limited budget, not only of money but of attention as well. At Workplace, News Feed is a great example: we help people connect to the most relevant work updates, so that when they have only one minute to spend, they read the most important piece of content they should know about.

→ To identify these problems, you would usually need to combine observation + data analysis. In the case of the News Feed, do people have a lot of new posts that they don’t go through? If so, ranking could help. Or do they have very few posts in their inventory but there is much more discovery content available? In this case it’s probably a recommendation problem.
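
This kind of diagnosis can be sketched as a crude heuristic. Everything below is illustrative: the thresholds, the user data and the function name are hypothetical choices of mine, not how News Feed actually decides.

```python
# Hypothetical per-user inventory snapshot:
# (posts from subscribed sources the user hasn't seen yet,
#  posts available in the wider discovery pool).
users = {
    "alice": (120, 10),   # huge unread backlog
    "bob": (4, 300),      # thin inventory, rich discovery pool
}

def diagnose(unread, discoverable):
    """Crude rule of thumb: too much unread inventory suggests a ranking
    problem; too little inventory with lots of discoverable content
    suggests a recommendation problem."""
    if unread > 50:
        return "ranking"
    if unread < 10 and discoverable > 100:
        return "recommendation"
    return "neither"

for name, (unread, discoverable) in users.items():
    print(f"{name}: {diagnose(unread, discoverable)} problem")
```

The point is not the numbers but the decision logic: the same observation ("users miss content") maps to two very different ML problems depending on what the data says.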

  • Could we make user experience safer?

Spam engines are the most famous example here, but there is much more. Anomaly detection is used to identify suspicious bank transactions or fake accounts. Integrity classifiers allow us to flag harmful or malicious user-generated content. Previously, most of these things required a lot of human effort; now, given the enormous growth of the digital world, they would be impossible without ML.
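
To give a flavour of the simplest form of anomaly detection, here is a sketch that flags transaction amounts far from the median, using the median absolute deviation so that the outlier cannot hide itself by inflating the spread. The amounts and the 3.5 threshold are illustrative; production systems use far richer features and models.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts whose modified z-score (based on the median absolute
    deviation, which outliers can't easily inflate) exceeds the threshold."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []
    # 1.4826 rescales the MAD to be comparable with a standard deviation.
    return [a for a in amounts if abs(a - med) / (1.4826 * mad) > threshold]

transactions = [12.5, 9.9, 11.2, 10.7, 13.1, 950.0, 10.4, 12.0]
print(flag_anomalies(transactions))  # [950.0]
```

Note that a naive mean-and-standard-deviation rule would miss this outlier, because the 950.0 transaction inflates the standard deviation itself; that is exactly the kind of failure mode a PM should probe for.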

→ To identify these problems, you would need to do a very thorough risk analysis and understand potential implications for your product.

  • Could we help users achieve their goals easier or faster?

When I have a task to complete, how can I reduce the number of steps to completion? For example, if I need to write an email, an autocomplete feature enables me to do it faster. If I need to buy food for the week, there is a section called “With this product customers usually buy…”. At Workplace, we have an auto-translate feature that allows people to read a post in their native language without any effort from the content creator’s side.

→ To identify these problems, we should know what the user journey looks like: what they are trying to achieve with our product and what steps they have to take.

  • Could we create a new experience that previously wasn’t possible?

One of my favourite examples is the automatic alternative text feature used on both Facebook and Workplace. With more than 39 million people who are blind, and over 246 million who have a severe visual impairment, many people may feel excluded from online conversations around photos. Advances in object recognition technology allow us to generate descriptions of photos that can later be read out to those users by a screen reader, so they can join the discussion.

→ The only way to identify these problems is to have a very deep understanding of the user needs and pain points.

Overall, the key challenge here is that users rarely talk about these kinds of problems and rarely request features that could help us infer their existence. That’s why it’s incredibly important to develop a good level of empathy and user understanding, not only for yourself but for the whole team. Problem understanding will affect what data we decide to collect, which features to build, which model to choose and, finally, how we define success, so it’s essential to ensure that all team members are on the same page.

Lesson #2: Assess if ML is the best way to solve the problem

When I was working at a startup building hotel-guest communication through in-room tablets, one of the engineers had an idea: build a chatbot that would help guests find relevant information about their stay faster and also reduce the workload for receptionists, who usually have to answer those questions. We talked to receptionists and quickly identified three questions they got asked 85% of the time:

  • When is the check-out time?
  • When is breakfast?
  • What is the wi-fi password?

So we built a widget that answered all these questions as soon as a guest picked up a tablet, and the size of the opportunity shrank to 15%. The next thing we did was try to understand whether there was any unique value in answering those remaining 15% of queries with ML. We ran a classic “Wizard of Oz” experiment: we installed a chat widget on our tablets, but all the questions were handled not by bots but by real people. We learnt that most of those queries required human assistance to resolve (for example, bringing an iron), so there was no value either in using chat or in any complex models to answer them.

You should be aware that ML takes time and effort to build into your product. You need good people, good data and a good number of iterations to achieve sufficient quality; sometimes it might take a year of work or even more. Is that something you are okay with, or would a simpler, more basic heuristic be enough?

Here is another example: if we are developing an email client and would like to catch cases where users may have forgotten to add an attachment, we could simply do a keyword search for “attachment” and “attached”. An ML system would probably catch more mistakes but would be far more expensive to build.
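
The keyword heuristic might look something like this sketch (the hint list and the function name are my own, illustrative choices):

```python
import re

# Warn if the email body mentions an attachment but none is attached.
ATTACHMENT_HINTS = re.compile(r"\b(attach(ed|ment|ing)?|enclosed)\b", re.IGNORECASE)

def maybe_missing_attachment(body: str, has_attachment: bool) -> bool:
    """True if the text suggests an attachment but none is present."""
    return not has_attachment and bool(ATTACHMENT_HINTS.search(body))

print(maybe_missing_attachment("Please see the attached report.", False))  # True
print(maybe_missing_attachment("See you tomorrow!", False))                # False
```

A dozen lines, shippable in an afternoon, and it already covers the most common phrasings; that is the bar an ML alternative would have to beat to justify its cost.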

This is actually a great way to think about the zero state of your product. Data takes time to collect, but that shouldn’t prevent you from launching. Take a look at Instagram in 2010: the tab called Popular had no ML yet, just a list of pictures sorted by overall popularity. Over time this feature evolved into the Instagram Explore tab: through a number of experiments, engineers turned it into a personalised and exciting experience.

(Image: Todd Wickersty, https://www.flickr.com/photos/toddwickersty/5069404490/in/photostream/)

Lesson #3: Account for model mistakes and biases

One of the key responsibilities of a PM is to brainstorm how your model might fail, and how to mitigate those failures at an early stage. Fixing a bias in the model later on can be a far more difficult and costly process.

Imagine that you want to build a ranking model for your e-commerce website. Your website is super popular, so you take just the last couple of months of data to train the model. You forget, though, that your dataset includes December and the Christmas holidays, which skew your data due to unusual user activity and behaviour. A mitigation in this case might be to use a wider time range to train the model.

The next example: suppose you’d like to build a model that predicts good candidates in tech based on their resumes. You train your model on 10 years of data, and it shows good results, until you realise it has a strong gender bias, reflecting male dominance in the industry. How could you have addressed this before training the model? If you had enough data for both genders, sampling would probably help. If there is not enough data, the key mitigation is simply not to build the model, as its predictions will be biased.
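
If you do have enough data for both groups, the sampling mitigation can be as simple as downsampling the over-represented group, as in this toy sketch (the data, group key and function name are hypothetical, and real bias mitigation usually needs much more than this):

```python
import random

def balance_by_group(examples, key, seed=0):
    """Downsample every group to the size of the smallest one, so each
    group contributes equally to training."""
    groups = {}
    for ex in examples:
        groups.setdefault(ex[key], []).append(ex)
    n = min(len(g) for g in groups.values())
    rng = random.Random(seed)  # fixed seed for reproducibility
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n))
    return balanced

# 900 male vs. 100 female resumes -> 100 of each after balancing.
resumes = [{"gender": "m"}] * 900 + [{"gender": "f"}] * 100
balanced = balance_by_group(resumes, "gender")
print(len(balanced))  # 200
```

Balancing the inputs does not guarantee unbiased predictions (the labels themselves may encode past discrimination), which is why the "don't build it" option from the paragraph above stays on the table.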

While in the first example the mistake would simply lead to less relevant results and a lower conversion rate, in the second case it might cause genuinely bad decisions. That’s why it’s important to at least try to open the ML “black box” to people, so they can understand what is happening and react accordingly.

In News Feed, for example, adding a comment gives us a signal that a user wants to see more similar stories. How could they signal that they would like to see fewer of them? Specifically for this case we built a mechanism (“Hide this post”) that allows people to share negative feedback. At the beginning it can be used just for data analysis and model debugging, but as time goes on, it can be used to train a separate model or be incorporated as a feature in existing models.

Another interesting feature is “Why am I seeing this post?”. Explaining to users how an ML-driven product works not only gives them more confidence in the system but also helps them understand what is and isn’t expected behaviour, which, in turn, improves the quality of the product.

To be honest, opening the black box might be as complicated as building the model itself, but the ML world is gradually getting there. For example, researchers from MIT recently announced an interactive tool that lets users see and control how automated machine-learning systems work. So for a PM it’s important to remember three things:

  1. If it’s possible, provide people with visibility into what’s happening.
  2. Provide people with a way to signal back and change the situation.
  3. If the consequences of the model failure can be neither fixed (as in the example with hiring) nor mitigated, it probably isn’t worth developing a model in the first place.

Lesson #4: Find counter metrics

The other side of people understanding how the model works is that they will try to game the system. For example, SEO is a way to “game” search algorithms and create content that will be shown in the first position on the SERP.

What can we do about that? Let’s take a step back. When we first decide to build an ML-driven product, we start from the problem definition: in the case of Search, it will probably be “users would like to find relevant information on the internet quickly”. As ML requires something more specific and operational from us, we try to simplify this statement and work out what could quickly tell us whether we got it right or wrong: for example, if a user clicked on a search result, they probably found it valuable. We start optimising for CTR, but soon enough we see that users are starting to churn. Why is that happening? Some people have learnt that a sure way to get to the top of our Search is to create a clickbaity title and an intriguing description, so users click but don’t find anything relevant. In our interpretation, people clicking more is good; in reality, they just can’t find what they are looking for. So we need to go back to our problem definition and think about what defines “relevant information”:

  • Is it something coming from a trustworthy source?
  • Is the content original and unique?
  • Does this page contain hundreds of pop ups? :)
  • Etc.
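
To make the CTR trap concrete, here is a sketch of pairing the optimisation metric with a counter metric. The counter metric here is “pogo-sticking”: clicks where the user bounces straight back to the results page. The event fields and the 10-second cutoff are hypothetical.

```python
def search_metrics(events, bounce_seconds=10):
    """Return (CTR, bounce rate) for a list of search impressions.
    A 'bounce' is a click where the user returned almost immediately."""
    impressions = len(events)
    clicks = [e for e in events if e["clicked"]]
    bounces = [e for e in clicks if e["dwell_s"] < bounce_seconds]
    ctr = len(clicks) / impressions if impressions else 0.0
    bounce_rate = len(bounces) / len(clicks) if clicks else 0.0
    return ctr, bounce_rate

events = [
    {"clicked": True, "dwell_s": 120},  # satisfied click
    {"clicked": True, "dwell_s": 3},    # clickbait: instant bounce
    {"clicked": True, "dwell_s": 2},    # clickbait: instant bounce
    {"clicked": False, "dwell_s": 0},   # no click
]
ctr, bounce_rate = search_metrics(events)
print(ctr, bounce_rate)  # 0.75 0.666...
```

In this toy data the CTR looks healthy at 75%, but two thirds of those clicks bounced; watching only the first number, a team would conclude the model is winning while users churn.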

For Search we had to come up with a number of qualitative aspects that are constantly assessed manually and summed up into a score. These scores are used to evaluate the quality of a new model and compare it against our online results. How do you identify qualitative aspects? Through a combination of user research and expert consultation: industry experts can provide you with unique insights and domain knowledge that will help you come up with a v1; user research will help you stress-test and refine those assumptions.

Another interesting aspect is that over time the system might start gaming itself. This is called the exploration/exploitation problem: if for all your previous queries about “jaguars” you clicked on links about cars, what should we show you in the first position now, the page about the car or the one about the animal? In most cases a good model would go for the former, but who knows: perhaps you’ve never clicked on the animal simply because we’ve never shown it to you, not because you don’t like it. Randomisation and diversity are two more counter metrics you should be thinking about.
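
One standard way to implement that randomisation is an epsilon-greedy policy; the sketch below is my illustration of the idea, not necessarily what any production search system does. Most of the time we show the historically best result, but with a small probability we explore an alternative so the system can keep learning.

```python
import random

def choose_result(click_counts, epsilon=0.1, rng=random):
    """With probability epsilon pick a random option (explore),
    otherwise pick the historically most-clicked one (exploit)."""
    options = list(click_counts)
    if rng.random() < epsilon:
        return rng.choice(options)              # explore
    return max(options, key=click_counts.get)   # exploit

# Toy history: the user has only ever clicked on car results.
clicks = {"jaguar the car": 42, "jaguar the animal": 0}
picks = [choose_result(clicks, rng=random.Random(i)) for i in range(1000)]
print(picks.count("jaguar the animal"))  # roughly 5% of picks explore the animal
```

Without the epsilon term the animal page would never be shown, so the system could never discover whether the user actually dislikes it, which is exactly the self-gaming loop described above.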

Lastly, keep up to date. One of the key challenges for ML products is that they are part of a living organism (your business), and they react sensitively to all changes. In most cases you are not creating a standalone ML model; you’re designing a feedback loop: if something changes on the interface, user-action or data side, your model might change without you even noticing. Moreover, it’s not only changes in these components that might affect you.

In 2006 Netflix organised a one-million-dollar competition to improve its recommendations: the goal was to improve predictions of how many stars a user would give a particular movie. The problem was very complex; the winning team had to combine 100 different algorithms and achieved fantastic accuracy, but the solution was never implemented. While the competition was running, the business changed: Netflix moved to an online streaming model, and it became possible to collect user interactions like clicks or likes, data that was much better suited to recommendation engines. Whether a user would interact with a movie became more important than whether they would rate it highly.

Lesson #5: Set correct expectations

In general, ML product development is not so different from the regular process: you identify the problem, size the opportunity, assess risks, measure the results and monitor regressions. On the other hand, the devil is always in the detail. In machine learning there are multiple moving parts and no universal solution, and it’s very unlikely that you will solve the problem on the first try. So one of the most important things an ML PM should do is set correct expectations. ML product development is not for sprinters; it’s for marathon runners. You can’t quickly hack a solution together in two days: it’s continuous, exploratory and scrupulous work, and your teammates, external partners and leadership alike should understand that. Here are a couple of recommendations:

  • Clearly communicate the vision and the people problem you’re trying to solve. People often get lost when you go into too many technical details; keep it simple and high-level.
  • If ML is a completely new area for your company, share the steps of the ML development process and where you are in it. That will make it easier for people to track your progress.
  • Don’t commit to numeric goals at the start of the project: this is a case where the whole is greater than the sum of its parts, so you can’t evaluate the impact before you’ve shipped your v1 model. Frame it as a learning opportunity, with clear definitions of success and failure.

For some people, machine learning is a math problem; in my opinion, it’s a behavioural problem. Understanding human behaviours, emotions and decisions is never simple and takes time but, in return, you get an opportunity to build something truly unique. From designing objects and functions, we moved on to designing experiences, and now, with ML, we are able to design the relationship between the user and the product. So don’t be afraid of the commitment: in the end, it’s the best path towards a happy and long product life.

Here are a couple of resources if you’d like to learn more about Machine Learning:

Courses:

  1. AI for Everyone (Coursera)
  2. Machine Learning (Stanford; Coursera)

Books:

  1. Algorithms to Live By
  2. The Master Algorithm
  3. Deep Learning

Newsletter:

Data Elixir


Anna Buldakova

AI/ML Product manager at Facebook (ex-Intercom, ex-Yandex).