Syndicated from International Institute for Analytics
Tom Davenport pointed me to an interesting article recently – Judgment Call or Automated Decision—Or Both? – by Jan Abrams. As I read it I found myself going back and forth between agreeing and disagreeing with Jan.
First, let me say that using automated decisioning systems and credit scores as a whipping boy for the recent financial crisis is overblown. Plenty of manual decisions were made, and the bigger culprits were bonuses and metrics that did not incorporate risk, combined with bank management’s unwillingness to accept that loan volume could not be sustained without compromising on risk. The automated systems, at least, were not motivated to bend the rules to hit a bonus target!
That said, Jan makes some good points in the article about the balance between judgment and automation. It is clear from the article, however, that too many people equate automated decisioning with “just use the credit score”. That is not now, and never has been, best practice. Good automated systems use “age of the customer, their demographics, what state they’re in” and much more, in addition to the credit score. There is no reason a judgmental decision is required just because you want to do more than apply a credit score. This is why I always encourage my clients to adopt business rules AND predictive analytics, so they can wrap this kind of “should always be applied” judgment around the score in the decision. This is the subject of my research; some briefs are already done and more are coming soon.
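To make the “rules wrapped around a score” idea concrete, here is a minimal sketch in Python. The field names, thresholds, and the restricted-state policy are all hypothetical assumptions of mine, not anything from the article or a real lending system:

```python
# A minimal sketch of wrapping business rules around a predictive score.
# Field names (age, state, credit_score) and thresholds are illustrative.

RESTRICTED_STATES = {"XX"}  # hypothetical policy: no lending in these states

def decide(applicant: dict) -> str:
    """Combine hard policy rules with a model score for a credit decision."""
    # Business rules encode the "should always be applied" judgment.
    if applicant["age"] < 18:
        return "decline"  # legal eligibility rule
    if applicant["state"] in RESTRICTED_STATES:
        return "decline"  # licensing/policy rule
    # Only after the rules pass does the predictive score come into play.
    if applicant["credit_score"] >= 700:
        return "approve"
    if applicant["credit_score"] >= 620:
        return "refer"    # gray zone, discussed below
    return "decline"
```

The point of the structure is that the policy rules run first and unconditionally, so the score never overrides a rule that must always apply.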
I really liked the author’s point about approve/decline decisions being too limiting. I often talk about what I call “yes and” or “no but” decisions – explicitly building in gray-area responses rather than just yes/no, approve/decline outcomes. Again, this is a role for rules and, increasingly, for optimization. The author talks about this gray area and is correct that this is often where decisions should be referred to a manual process.
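Here is one way a “yes and” / “no but” decision might look in code. The score bands and the specific conditions are my own illustrative assumptions:

```python
# Sketch of "yes and" / "no but" outcomes: the decision carries conditions
# rather than a bare approve/decline. Thresholds and conditions are made up.

def graded_decision(score: int, requested_amount: float) -> dict:
    if score >= 740:
        return {"decision": "approve", "conditions": []}
    if score >= 680:
        # "Yes, and": approve, but at a reduced limit with adjusted pricing.
        return {"decision": "approve",
                "conditions": [f"limit amount to {0.6 * requested_amount:.2f}",
                               "apply risk-based pricing"]}
    if score >= 620:
        # Gray zone: refer to a person rather than forcing yes/no.
        return {"decision": "refer", "conditions": ["manual review required"]}
    # "No, but": decline the request yet offer an alternative product.
    return {"decision": "decline",
            "conditions": ["offer secured card instead"]}
```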
I don’t think, however, that we do ourselves any favors when we hand off gray-zone decisions for a manual override. As Assurant found, analytic engines often significantly outperform human decision makers, so it seems rash to “dump” the system just because the decision is not completely clear. Instead, I think the best practice is to refer the decision with an explanation of why it cannot be made. The person can then supply the additional judgment needed to complete the picture – essentially adding their input to the (incomplete) automated decision. Otherwise you are relying on the person to review the same data and the same models and to apply all the same regulations and policies, which seems unnecessary given that the system has already done this!
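A sketch of what “refer with an explanation” might look like: the referral carries the automated result plus the specific open questions, so the reviewer only supplies the missing judgment instead of redoing the whole decision. The reason codes here are invented for illustration:

```python
# Sketch of referring a decision *with* an explanation. The reviewer gets
# the automated work so far plus the specific reasons it could not finish.

def refer_with_reasons(applicant: dict, score: int) -> dict:
    reasons = []
    if 620 <= score < 680:
        reasons.append("score in gray zone (620-679)")
    if applicant.get("employment_months", 0) < 6:
        reasons.append("employment history too short to verify income")
    return {
        "status": "referred",
        "automated_result": {"score": score, "rules_passed": True},
        "open_questions": reasons,  # what the human still needs to judge
    }
```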
Of course you will always need ways to make manual overrides of decisions, along with processes for reviewing them and feeding the results back into your models so that the system gets better over time. But don’t fall into the trap of assuming that a judgmental decision is automatically better than an automated one, or that the only reason to automate is to reduce cost. Automated decisioning can and does significantly improve decision quality too, especially for operational decisions.
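Capturing those overrides need not be elaborate. Here is a minimal sketch; the CSV storage and record schema are assumptions of mine, and a real system would feed these records into periodic model retraining:

```python
# Sketch of logging manual overrides so they can feed back into the models.
# Storage format and fields are illustrative assumptions.

import csv
import datetime

def log_override(path: str, decision_id: str,
                 automated: str, final: str, reviewer_note: str) -> None:
    """Append an override record; a periodic job can later use these
    rows as labeled examples when refreshing the model."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
            decision_id, automated, final, reviewer_note,
        ])
```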