
But do experts REALLY do better?


Rob Meredith corrected my post on BI, where I had mistaken him for Curt Monash rather than part of the Monash University team down under! He made a couple of great comments too. While I loved his distinction around business intelligence (that it is about collecting information, not necessarily about making better decisions), I do have to take issue with something he said:

No system can make decisions as well as an informed, competent human decision-maker.

In fact I think the data is pretty compelling the other way. Read Super Crunchers or Competing on Analytics (or indeed Smart (Enough) Systems) and you will find plenty of stories where the exact opposite is true: credit scores do better than loan officers, data mining can do better than doctors, an equation does better than wine experts, and on and on. This was also the topic of a marketing post this week – Paul Barsch had an interesting post titled Glorifying The Gut.

I would agree that there are MANY decisions where Rob’s statement is true, but it is clearly not true where there is a lot of relevant historical data to draw on, or where the decision is subject to human errors of judgment. I would also point out that it is simply not relevant for many decisions – there is no one to make a decision when you interact with a kiosk, ATM, or website, and no time for a human decision in many real-time transactions. Another part of Rob’s comment did resonate, however:

There’s also ethical issues associated with accountability and moral responsibility for individual decisions if no human was involved.

Rightly or wrongly people don’t like the idea of machines taking decisions. Anyone applying decision management needs to remember that.
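The middle ground between full automation and human decision-making can be sketched in a few lines. This is a purely illustrative example (the function name, field names, and score thresholds are all hypothetical, not from any real credit system): clear-cut cases are decided automatically, while borderline ones are referred to a human decision-maker.

```python
def decide_credit(score: int,
                  auto_approve: int = 720,
                  auto_decline: int = 580) -> str:
    """Return 'approve', 'decline', or 'refer' for human review.

    Thresholds are illustrative only. Scores at or above auto_approve
    and below auto_decline are handled automatically; everything in
    between is routed to a human reviewer.
    """
    if score >= auto_approve:
        return "approve"   # clear-cut: automate
    if score < auto_decline:
        return "decline"   # clear-cut: automate
    return "refer"         # ambiguous: send to a human decision-maker


print(decide_credit(750))  # approve
print(decide_credit(650))  # refer
print(decide_credit(500))  # decline
```

The point of the "refer" band is exactly the oversight Rob argues for: the system takes the high-volume, unambiguous decisions, and people handle the cases where judgment is needed.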



Comments on this entry are closed.

  • Rob Meredith October 30, 2007, 10:30 pm

    Hi James,

    I understand your point, but would still argue my own. Yes, machines are good at making certain predictions or classifications, such as credit assessments. However, I draw a distinction between analysis and actually making a decision. Even in these relatively simple cases (little ambiguity, clear-cut decision rules, and plenty of historical data), assuming the same information was given to a human decision-maker (i.e. the classification results, and so on), the ability to apply common sense trumps the machine’s actual decision-making abilities every time. In most cases, a competent decision-maker would commit to the same course of action recommended by a computer. In a few cases, though, it should be possible to override the recommendation, and there need to be processes in place to allow for this.

    Indeed, this is exactly what happens in banks: at least in Australia, credit approval is always ultimately authorised by some person, even with the use of credit application analysis systems. This is required by legislation (I assume) as well as for sound business reasons.

    I guess my position is that I have no problem with systems providing recommendations to decision-makers. However, I do have a problem with full automation of a decision-process (even in ATMs – my card’s been eaten enough times to justify that position!) without recourse to human oversight and approval or review of decisions. Full automation is risky, and as alluded to before, morally questionable. The question isn’t ‘can decision processes be fully automated?’, but rather ‘should they be?’ My answer to the first is yes, but only for very limited, simple decision problems. My answer to the second is probably not.

    Interesting stuff, though!