There is a great article from Bain and Company from 2013 that Elena Makurochkina (@elenamdata) pointed me to today – Infobesity: The enemy of good decisions. This is not only a fabulous phrase – infobesity feels viscerally correct as soon as you see it – but a great article too. Some quotes:
Companies have overindulged in information. Some are finding it more difficult than ever to decide and deliver…
Useful information creates opportunity and makes for better decisions. Infobesity does not.
These are great. More information – infobesity – will not improve decision-making. Simply overloading decision-makers or decision-making systems with ever more data will not get it done. This has been true for a long time – how long have we been talking about “drowning in data” after all?
Big Data makes the problem even greater, making it ever easier to drop more data on a problem and declare victory. We can mitigate this somewhat by using analytics to summarize and increase the value of our data but we run a real risk even so of overwhelming decision-makers.
And frankly, decision-makers themselves are part of the problem. Ask them what data they need to make a decision (or that a system would need to make it) and they will rattle off a long list.
So what can we do about it? Well, the folks at Bain suggest four things:
- Focus clearly on the data you need
- Standardize data where you can
- Watch the timing: which data is needed, and when
- Manage the quantity and source of your data, especially Big Data, to make sure it is relevant to decisions
But how to do this in an analytic project? We have found that decision modeling, especially decision modeling with the Decision Model and Notation standard, is a great tool. A decision model identifies the (repeatable) business decision at issue, decomposes it into its component sub-decisions, identifies the data that must be input to each piece of the decision-making and shows where the know-how to make (or automate) the decision can be found. Plus it gathers business, application and organizational context for the decision. Experience with these models at real customers shows just how these models can tackle infobesity:
- One decision model showed that the data a group of medical experts had requested included lots of data that, while interesting, was not going to actually impact their decision about a patient.
- The same model showed that all the data in the system could not replace one particular piece of data that had to be gathered “live” by the decision-maker.
- Another showed that a large amount of claims history data did not need to be shown to claims adjusters if analytics could be used to produce a believable fraud marker for a provider.
- A third model showed that adding more data, and even analytics, to a decision could not result in business savings because it happened too late in a process and all the costs had already been incurred. A cheaper decision meant an earlier decision, one that would have to be made with less data and less accurate analytics.
- A model showed the mismatch between how management wanted the decision made – their objectives – and how the staff that made the decision were actually incented to make it.
and much more. In every case the clear focus on decisions delivered by the use of decision modeling cured actual or impending infobesity. For more on how you can model decisions for analytics projects, check out this white paper on framing analytic requirements with decision modeling.
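To make the structure concrete, here is a minimal sketch in Python of the idea behind a decision model: a decision decomposed into sub-decisions, each tied only to the data it actually consumes and the know-how behind it. All the names are made up for illustration, and this ad-hoc structure stands in for a real DMN Decision Requirements Diagram, which is a modeling notation rather than code:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A (repeatable) business decision, decomposed into sub-decisions."""
    name: str
    input_data: set[str] = field(default_factory=set)   # data this decision actually consumes
    knowledge: set[str] = field(default_factory=set)    # where the know-how to make it lives
    sub_decisions: list["Decision"] = field(default_factory=list)

    def required_data(self) -> set[str]:
        """All data the decision genuinely needs, across every sub-decision."""
        data = set(self.input_data)
        for sub in self.sub_decisions:
            data |= sub.required_data()
        return data

# A toy claims example (hypothetical decisions and data items):
fraud_risk = Decision("Assess fraud risk",
                      input_data={"provider fraud marker"},
                      knowledge={"analytics team"})
validity = Decision("Check claim validity",
                    input_data={"claim form", "policy terms"},
                    knowledge={"claims manual"})
approve = Decision("Approve claim", sub_decisions=[fraud_risk, validity])

needed = approve.required_data()
available = {"claim form", "policy terms", "provider fraud marker",
             "full claims history", "provider marketing data"}
print(sorted(available - needed))  # data nobody's decision actually needs
```

The point of the traversal is the infobesity cure itself: any data item that is not reachable from some decision in the model is, by construction, data you can stop collecting or stop showing to the decision-maker.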
I’ll leave the last word to the folks at Bain:
At root, a company’s performance is simply the sum of the decisions it makes and the actions it takes every day. The better its decisions and its execution, the better its results.