Syndicated from BeyeNetwork
Oz Analytics – The Darker Side Of Analytics was an interesting little post discussing the risk of using analytics, in this case, to profile potential criminals based on past behavior. The use of analytics to predict crime and criminals is certainly growing and, as Steve said in his post, you have to wonder how many times their ‘digital techniques’ will create false positives and (presumably) false information being sent out?
In my opinion, the question is not whether analytics carry such risk – clearly they do – but whether the use of analytics increases or decreases the risk of a false positive.
Without analytics, this kind of proactive policing (essential not just to stopping pedophile tourists but also to catching terrorists, for instance) relies on human judgment. Humans, unlike analytics, are prone to prejudices and personal biases. They judge people too much by how they look (stopping the Indian man with a beard, for instance) and not enough by their behavior (say, stopping the white guy nervously fiddling with his shoes). They also tend to be driven by recent results to the exclusion of older ones, among other traps (see this post on decision making traps).

If we bring analytics to bear on a problem, the question should be whether it eliminates more bias and bad decision making than the new false positives it creates. Over and over again, studies show analytics do better in this regard (check out some great examples in Super Crunchers). So, personally, I think analytics are ethically neutral: the risk of something going “to the dark side” comes from the people involved, with or without analytics.
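To make the false-positive risk concrete, here is a quick base-rate calculation. All the numbers are illustrative assumptions of mine (the post cites no figures): it shows why any screening approach, human or analytic, will flag mostly innocent people when the behavior being screened for is rare – and why the question that matters is which approach has the lower false-positive rate.

```python
# Hypothetical screening scenario: every number below is an illustrative
# assumption, not a figure from the post.
prevalence = 0.0001         # 1 in 10,000 travelers actually poses a risk
sensitivity = 0.99          # the screen flags 99% of true positives
false_positive_rate = 0.01  # the screen wrongly flags 1% of innocent travelers

# Probability that a randomly chosen traveler gets flagged at all
p_flagged = sensitivity * prevalence + false_positive_rate * (1 - prevalence)

# Bayes' rule: probability a flagged traveler is actually a true positive
precision = (sensitivity * prevalence) / p_flagged

print(f"Share of flagged travelers who are true positives: {precision:.1%}")
```

Under these assumed numbers, only about 1% of flagged travelers are true positives – so the base rate dominates either way, and the comparison that matters ethically is whether the model's false-positive rate beats the human one.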
This post was found through Smart Data Collective syndication.
Comments on this entry are closed.
I’d have to agree – at the end of the day, it’s people who design analytic models, assign weightings, assess the results, and decide whether to take action and what action to take. We’ve aimed our DataRush technology at delivering speed and power in analytics, ideally supporting more rapid iterations and making it possible to examine more data over a longer period of time – hopefully avoiding some of the decision-making traps you reference.
Though I am someone who constructs quantitative models for a living, and who is a strong proponent of their use, I will note the following danger: there is frequently a significant knowledge gap between the people who construct these predictive models and the people who consume their output. My fear is that much of the nuance about what a predictive model is really saying about airline passenger THX1138 is lost, and the security guard at the gate just sees that the poor passenger has been rated “83” (out of 100) by “the system”. Non-technical people tend to simplify things like this, and I think it would be a shame if the “83” was, by default, given more weight than the word of a citizen.