
The dangers of scores in decision making


Last week I responded to some concerns raised about the dark side of analytics, and this prompted a very thoughtful comment from Will Dwinnell, who said:

My fear is that much of the nuance about what a predictive model is really saying about airline passenger THX1138 is lost, and the security guard at the gate just sees that the poor passenger has been rated as “83” (out of 100) by “the system”. Non-technical people tend to simplify things like this.

And this made me think of an example I came across just recently – the BMI, or Body Mass Index. For those of you who are overweight, you will have been told, I am sure, that your BMI is “too high”. Yet this article on NPR (found via Evidence Soup and @merigruber) points out that the BMI is a completely bogus measure for an individual. Designed (though this is a generous way to describe the hacking it took) as a measure for a population, it has limited meaning for an individual – someone who is obese is very likely to have a high BMI, but someone with a high BMI may or may not be obese. Despite these obvious and clearly described flaws, the BMI has become institutionalized by insurance companies, government agencies and even doctors.
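
For reference, the index itself is just weight in kilograms divided by the square of height in metres. A toy calculation (the numbers here are made up for illustration) shows why it says so little about any one person – the formula is blind to body composition, so very different bodies can get the same score:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# Identical score whether the 90 kg is muscle or fat: BMI ignores composition.
print(round(bmi(90, 1.80), 1))  # 27.8 -- "overweight" by the usual 25-30 band
```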

So Will’s concern is a very real one – a “score”, no matter how well designed or well intentioned, can and will be misused by those who don’t understand it. Equally, of course, decisions that don’t use analytics have problems too. People’s snap judgments can be inaccurate, with irrelevant cues like how someone dresses or the color of their skin overriding more valuable information. A score does not suffer from these problems – indeed, way back when, FICO ran an ad campaign for credit scoring under the title “Good credit does not always wear a suit and tie”.

So, as with most things, the art is in finding a balance. I also feel strongly that this is a reason for automating the decision, not just the score. Then, instead of the security guard making a potentially invalid use of the score in Will’s scenario, she gets a decision (to search or not search someone) recommended to her based on the score and on rules carefully designed to use the score correctly. And while bias and error can still make it into the rules, those rules are documented, auditable and probably the result of several people collaborating, so problems are less likely.
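
To make this concrete, here is a minimal sketch of what “automating the decision, not just the score” might look like. The threshold, the watch-list input and the rule names are all illustrative assumptions, not anything from the post – the point is only that the rules consume the score and hand the guard a decision plus an audit trail:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str  # "search" or "no search"
    rule: str    # which rule fired -- this is what makes the decision auditable

def recommend(score: int, on_watch_list: bool) -> Recommendation:
    # Illustrative rules: the guard never has to interpret the raw score herself.
    if on_watch_list:
        return Recommendation("search", "R1: watch-list match overrides score")
    if score >= 90:
        return Recommendation("search", "R2: score at or above search threshold")
    return Recommendation("no search", "R3: default, score below threshold")

print(recommend(83, on_watch_list=False))
# Recommendation(action='no search', rule='R3: default, score below threshold')
```

The “83 out of 100” passenger from Will’s scenario gets a documented “no search” recommendation here, rather than a number the guard has to guess about.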


Comments on this entry are closed.

  • Sébastien Derivaux July 27, 2009, 1:39 pm

The problem is having a human at the end of the decision process. The next problem is that this human should use the predicted score for what it is, a hint only, and not rely on it too much.

    Nevertheless, sometimes it’s hard (or costly) to go against prior probabilities and get more knowledge to make a clever decision.

  • Mark Eastwood July 28, 2009, 6:14 am

    James,

I think you make a valid point here. The score, by itself, is a piece of data/information; the score has never been, itself, the decision. A score is intended to be used in a context such as granting credit or the airline passenger context you mention. I think you are right on the nose here: incorporating a score into a decision system helps eliminate both incorrect use and the potential for bias. Both the “rules” and the score work together to make the decision in a systematic way. I’ve seen instances of organizations implementing scoring who didn’t understand how the decision strategy (aka rules) and the score worked together and therefore received poor results or a significantly reduced benefit.
