I was reading the article “Have you tested your strategy lately?” in the McKinsey Quarterly and I was struck by test 10: Have you translated your strategy into an action plan?
This struck me because one of the most persistent problems I see in corporate strategy is what I call the “make it so” problem. Senior executives hash out a new strategy – increasing customer retention or satisfaction, focusing on cost reduction, targeting the subprime mortgage market or whatever. They “run the numbers” at a macro level and then tell their teams to “make it so”. The problem is that the strategic decision they are making requires hundreds of tactical management decisions and thousands or millions of operational, customer-centric decisions. And the strategy will only happen if all these decisions are made in a way that is aligned with it. This hierarchy of decision types (more fully described here and in the book Smart (Enough) Systems) and the critical importance of the operational layer is at the heart of Decision Management.
Take one of my favorite examples, customer retention. How many times have you heard an exec on a quarterly conference call say that part of the company’s strategy is to improve customer retention rates in the next quarter? Someone has run the numbers, figured out that retaining more customers at a macro level would boost the bottom line by $X, and so this is the new strategy. A new Key Performance Indicator like “customer retention rate improvement” is created and added to the executive dashboard. And as far as many executives are concerned, that’s it – all that remains is to yell at people if the KPI does not move in the right direction (because they have a dashboard that simply raises their blood pressure). But if you have read books like Execution by Larry Bossidy and Ram Charan, you will know this is not enough – a strategy must be operationalized to be effective: “unless you translate big thoughts into concrete steps for action, they’re pointless.”
You need to understand the strategic decision you are making, the tactical decisions that implement that strategy day to day and week to week, and the way those strategic and tactical decisions change the parameters of the core operational decisions that impact your customers. In customer retention it means mapping the big strategic decisions on customer retention targets, priorities and budget to the tactical decisions about retention campaigns, retention offer design and so on. Even more importantly, it means understanding all the operational decisions that impact customers and customer retention – the decision as to which offer to make to retain this customer when they call to cancel, the decision as to which customers to call and proactively retain today with what offer, the decision to recommend an upsell or downsell to a customer in their first month to align their plan with their needs, and so on. Each of these operational decisions is about a single customer and is personalized, customized to that customer – it is a micro decision. And it is the cumulative effect of these micro decisions that determines whether your strategy is going to be implemented, whether you are going to move the dial on the new KPI. If you don’t improve the accuracy of the decisions made by a specific call center representative for a specific customer, then you won’t improve customer retention rates. If you don’t know how you are going to make such a change, or if it takes you too long to make it, then your strategy is just “sound and fury, signifying nothing”.
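To make the mapping concrete, here is a minimal sketch of how strategic parameters can flow down into a per-customer micro decision. Every name, threshold and number below is hypothetical, invented for illustration; real decision services would use predictive churn models and far richer offer logic:

```python
from dataclasses import dataclass

@dataclass
class RetentionStrategy:
    """Parameters set at the strategic/tactical level, e.g. reviewed quarterly."""
    max_offer_cost: float        # budget allowed per save attempt
    churn_risk_threshold: float  # how aggressively to intervene

def retention_offer(customer_value: float, churn_risk: float,
                    strategy: RetentionStrategy) -> str:
    """Operational micro decision: one customer, one call, one offer."""
    if churn_risk < strategy.churn_risk_threshold:
        return "no offer"                 # below the intervention threshold
    if customer_value * churn_risk > strategy.max_offer_cost * 4:
        return "premium retention offer"  # large expected loss justifies spend
    return "standard discount"

# The same customer is treated differently when the strategy changes:
cautious = RetentionStrategy(max_offer_cost=25.0, churn_risk_threshold=0.6)
aggressive = RetentionStrategy(max_offer_cost=75.0, churn_risk_threshold=0.3)

print(retention_offer(900.0, 0.4, cautious))    # -> no offer
print(retention_offer(900.0, 0.4, aggressive))  # -> premium retention offer
```

The point of the sketch is the shape, not the rules: the strategic decision changes only the parameters, and the operational layer then makes millions of aligned micro decisions without anyone having to “yell at the dashboard”.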
Of course, testing a strategy without testing the operational decisions that underpin it is also a fundamentally flawed approach. I call this “simulating aggregations” when you should be “aggregating simulations”. In other words, strategic tests are run against aggregated data – estimating the impact of a change in pricing on the whole customer base, for instance – when what is required is operational-level testing: estimating the impact of the change on each customer and then aggregating the results.
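A toy example shows why the two approaches diverge. All customers, revenues and the churn-response function here are made up for illustration; the point is only that when churn sensitivity is correlated with customer value, running the model once on averaged inputs gives a different answer than simulating each customer and summing:

```python
# Three hypothetical customers: revenue and price sensitivity are correlated.
customers = [
    {"revenue": 100.0, "sensitivity": 0.2},
    {"revenue": 500.0, "sensitivity": 0.9},  # high value AND price sensitive
    {"revenue": 80.0,  "sensitivity": 0.1},
]

price_increase = 0.10  # a 10% across-the-board price rise

def churn_prob(sensitivity: float, increase: float) -> float:
    """Illustrative churn response to a price increase (not a real model)."""
    return min(1.0, sensitivity * increase * 5)

# "Simulating aggregations": run the model once on averaged/aggregate inputs.
avg_sens = sum(c["sensitivity"] for c in customers) / len(customers)
total_rev = sum(c["revenue"] for c in customers)
aggregate_estimate = (total_rev * (1 + price_increase)
                      * (1 - churn_prob(avg_sens, price_increase)))

# "Aggregating simulations": simulate each customer, then sum the results.
per_customer_estimate = sum(
    c["revenue"] * (1 + price_increase)
    * (1 - churn_prob(c["sensitivity"], price_increase))
    for c in customers
)

print(f"aggregate-level estimate:  {aggregate_estimate:.1f}")   # 598.4
print(f"per-customer, aggregated:  {per_customer_estimate:.1f}")  # 485.1
```

The aggregate test overstates post-change revenue because the churn it predicts falls evenly across the base, while in the per-customer simulation it lands disproportionately on the high-value, price-sensitive customer. That gap is exactly what operational-level testing is meant to expose.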