A reader sent me an interesting question after watching the ILOG seminar on scorecards and rules in which I participated earlier this week (a recording of this rules and scorecards seminar is available). Here’s a summary of what he said:
One immediate comment I would have is that scorecarding seems to insert an extra unnecessary step. Rather than have an extra level of modeling and human intervention, you can directly include data and knowledge and the framework basically generates the model for you in such a way that it guarantees that the information content of the model is equal to the information content of the inputs. Scorecarding represents an opportunity for either loss of useful information, or addition of artificial information. Depending on how you assign attribute score values and how that is then mapped to probabilities, the scorecard would almost certainly have a different information content from the original inputs. That’s important, because the value of decisions is a function of this information about the future. If your model of the future is bogus, so is the value, and you certainly stand to lose value one way or another.
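The reader’s point about how attribute scores get mapped to probabilities can be made concrete with a toy sketch. Everything below is hypothetical, not from the seminar: the attribute bins, point values, and the offset/factor parameters are made up, and the score-to-probability mapping shown is the common log-odds convention used in credit scorecards, where odds double every fixed number of points.

```python
import math

# Hypothetical attribute score tables: each attribute value maps to points.
SCORES = {
    "age": {"18-25": 10, "26-40": 25, "41+": 35},
    "income": {"low": 5, "medium": 20, "high": 30},
}

def total_score(applicant):
    """Sum the point values for each of the applicant's attribute bins."""
    return sum(SCORES[attr][value] for attr, value in applicant.items())

def probability_of_good(score, offset=20.0, factor=10.0):
    """Map a point total to P(good) via log-odds.

    With this convention the odds of a good outcome double every
    `factor` points above `offset`; both parameters are illustrative.
    """
    odds = math.exp((score - offset) * math.log(2) / factor)
    return odds / (1 + odds)

applicant = {"age": "26-40", "income": "medium"}
score = total_score(applicant)        # 25 + 20 = 45 points
prob = probability_of_good(score)     # roughly 0.85 with these parameters
```

The reader’s worry lives precisely in these two steps: the binning and point assignment in `SCORES` can throw away information the raw data carried, and the choice of `offset` and `factor` can inject a probability shape the data never supported.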
There’s more on this topic here: Scorecards and Shareholder Value.
I think Dave makes some good points in his post, though the ability of scorecards and decision rules to be validated by regulators should not be underestimated. Hopefully some of my readers will post comments here or there and we can get a debate going.
Disclosure: I am an advisor to Provisdom.