Eric Charpentier has a nice introduction to scorecards over on his blog. He does a nice job of describing an additive scorecard, that is, a scorecard designed to represent a predictive analytic algorithm (not to be confused with a dashboard-like scorecard). He does not talk much about reason codes – the ability of a scorecard to explain a score in terms of the elements that made the greatest difference to it – but otherwise he hits the key points.
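To make the additive idea concrete, here is a minimal sketch in Python. The characteristics, bins, and point values are invented for illustration (real scorecards derive them from a fitted model, often a logistic regression): each characteristic contributes the points for whichever bin the applicant falls into, the score is the sum, and reason codes fall out naturally as the characteristics where the applicant lost the most points relative to the best available bin.

```python
# Minimal additive scorecard sketch. The characteristics, bins, and
# point values below are hypothetical -- real scorecards derive them
# from a fitted statistical model.

SCORECARD = {
    "age": [
        # (condition on the value, points awarded)
        (lambda v: v < 25, 10),
        (lambda v: 25 <= v < 40, 25),
        (lambda v: v >= 40, 40),
    ],
    "income": [
        (lambda v: v < 30_000, 5),
        (lambda v: 30_000 <= v < 60_000, 20),
        (lambda v: v >= 60_000, 35),
    ],
    "years_at_address": [
        (lambda v: v < 2, 5),
        (lambda v: v >= 2, 15),
    ],
}

def score(applicant, num_reasons=2):
    """Return (total score, reason codes).

    Reason codes are the characteristics where the applicant lost
    the most points relative to the best bin for that characteristic.
    """
    total = 0
    shortfalls = {}
    for characteristic, bins in SCORECARD.items():
        value = applicant[characteristic]
        points = next(p for cond, p in bins if cond(value))
        total += points
        shortfalls[characteristic] = max(p for _, p in bins) - points
    reasons = sorted(shortfalls, key=shortfalls.get, reverse=True)[:num_reasons]
    return total, reasons

total, reasons = score({"age": 23, "income": 28_000, "years_at_address": 5})
print(total)    # 30
print(reasons)  # ['age', 'income'] -- the biggest drags on the score
```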
The great thing about scorecards from an analytic perspective is that they allow non-technical people to easily review and understand how a predictive score is being generated. Unlike, say, a Neural Network (which is pretty opaque as to why it generates the score it does), a scorecard is highly explainable. This makes scorecards popular in regulated industries (consumer credit, for instance) where, after all, saying “you have been turned down because the machine told us to” doesn’t sit well with regulators.
From a decision management/business rules perspective, scorecards are also attractive. Not only is it easy to write a rule that uses a score (true of any predictive score, of course), it is also easy to describe a scorecard in terms of rules that must be executed to calculate the score. This means that a scorecard metaphor (like those provided by IBM or FICO) allows you to view and edit the scorecard elements while executing a ruleset under the covers. Calculating the score is then handled by the rule engine as part of the decision-making process. This allows the integration and deployment capabilities of the rule engine to be applied to the analytic model, helping ensure that the model will actually get used (something that is not true, sadly, of many models).
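Here is a sketch of that rules view, again with invented rules and point values rather than the actual IBM or FICO metaphor: the scorecard becomes a flat ruleset, each rule tests one condition and adds its points when it fires, and the score is simply the sum over the fired rules.

```python
# The same additive scorecard expressed as a flat ruleset -- a sketch
# of how a scorecard metaphor can compile down to rules that a rule
# engine executes. Rules and point values are hypothetical.

RULES = [
    # (rule name, condition on the applicant, points to add)
    ("young applicant",   lambda a: a["age"] < 25,                  10),
    ("mid-age applicant", lambda a: 25 <= a["age"] < 40,            25),
    ("older applicant",   lambda a: a["age"] >= 40,                 40),
    ("low income",        lambda a: a["income"] < 30_000,            5),
    ("mid income",        lambda a: 30_000 <= a["income"] < 60_000, 20),
    ("high income",       lambda a: a["income"] >= 60_000,          35),
]

def execute_ruleset(applicant):
    """Fire every rule whose condition matches; the score is the sum."""
    score = 0
    fired = []
    for name, condition, points in RULES:
        if condition(applicant):
            score += points
            fired.append(name)
    return score, fired

score, fired = execute_ruleset({"age": 45, "income": 28_000})
print(score)  # 45 (40 for age, 5 for income)
print(fired)  # ['older applicant', 'low income']
```

Because the scorecard is now just a ruleset, it deploys, versions, and integrates exactly like any other ruleset the engine manages – which is the point of the paragraph above.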
He wrote the introduction because he is about to review the IBM/ILOG Scorecard Metaphor. As you can imagine, I blog about scorecards reasonably often – check out the scorecard tag.