Cassie Kozyrkov, Head of Decision Intelligence at Google, has a great piece on 12 Steps to Applied AI. As usual she’s got lots of great tips. I don’t have anything to add to her more technical steps but I want to add some commentary on Step 0 and Step 1.
Let’s start with her Step 0 thoughts:
Check that you actually need ML/AI. Can you identify many small decisions you need help with? Has the non-ML/AI approach already been shown to be worthless? Do you have data to learn from? Do you have access to hardware? If not, don’t pass GO.
Great focus here on operational decisions (small, transaction-level decisions you make many times), not on big, one-off decisions. It’s also good to make sure you don’t have a non-ML/AI approach that will work. I would say it differently – “Has the non-ML/AI approach already been shown to be sub-optimal?” (rather than “worthless”) – as we see a lot of clients where adding some ML/AI boosts results without the need to replace the old approach completely.
…leaders who try to shove AI approaches where they don’t belong usually end up with solutions which are too costly to maintain in production. … If you can do it without AI, so much the better. ML/AI is for those situations where the other approaches don’t get you the performance you need.
Yup. Nothing to add.
The right first step is to focus on outputs and objectives.
Imagine that this ML/AI system is already operating perfectly. Ask yourself what you would like it to produce when it does the next task. Don’t worry how it does it. Imagine that it works already and it is solving some need your business has.
We have a game we play to do this. We call it the “if only” game. We ask business owners to fill in the blank in the following sentence: “If only we knew BLANK we would decide differently.” This lets them imagine that some ML/AI algorithm can magically produce the insight they need. This is exactly what Cassie suggests: focusing on your objectives and on what outputs from ML/AI would help.
Moving on to Step 1: Define your objectives, she says:
Clearly express what success means for your project. …
How promising does it need to be in order to be worth productionizing? What’s the minimum acceptable performance for it to be worth launching? Pro tip: make sure this part is done by whoever knows the business best and has the sharpest decision-making skills, not the best equation nerdery. Skipping this step or doing it out of sequence is the leading cause of data science project failure. Don’t. Even. Think. About. Skipping. It.
Critical to this step for us is building a decision model. With a model of the decision making – the current approach for sure and perhaps also a model of your intentions – it is much easier to identify specific ML/AI opportunities, to define how promising it has to be – how predictive – and to capture the minimum acceptable performance. We have had some real-world problems where very low levels of accuracy were good enough (“better than 50/50”) and others where it had to be pretty accurate (“if it’s not better than 95% accurate we won’t use it at all”). Don’t guess, build a model and know.
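To make that idea concrete, here is a minimal sketch of what capturing that business-defined floor explicitly might look like in code. The LaunchGate class, the metric choices, and the threshold numbers are all hypothetical illustrations (echoing the two examples above), not anything from a real client engagement:

```python
# Hypothetical launch gate: the person who knows the business best,
# not the data scientist, sets the minimum acceptable performance
# for each operational decision, and the model is only productionized
# if it clears that floor.
from dataclasses import dataclass


@dataclass
class LaunchGate:
    decision: str   # the operational decision being supported
    metric: str     # how performance is measured for this decision
    minimum: float  # business-defined floor, captured in the decision model

    def worth_launching(self, measured: float) -> bool:
        """Return True only if measured performance clears the business floor."""
        return measured >= self.minimum


# For one decision, "better than 50/50" was good enough...
triage_gate = LaunchGate("claim triage", "accuracy", 0.50)
# ...while another wouldn't be used at all below 95% accuracy.
approval_gate = LaunchGate("loan approval", "accuracy", 0.95)

print(triage_gate.worth_launching(0.62))    # True: worth productionizing
print(approval_gate.worth_launching(0.91))  # False: don't launch
```

The point of writing the gate down like this is that the threshold comes from the decision model, not from whatever accuracy the data science team happens to achieve.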
And how do we build these decision models? Well, we ask the person who “knows the business best” how they decide. And, as Cassie says, not doing this is the leading cause of failures, so don’t skip it! I wrote this post on analytic failures – some are acceptable, even inevitable given the nature of analytics, but many are avoidable, just as Cassie says.
For more on this, why not check out our brief on Succeeding with AI?