Table of contents for Predictive Analytics World 2009
- 5 ways to reduce cost with predictive analytics
- SAS and the art and science of better
- The High ROI of Data Mining for Innovative Organizations
- High-Performance Scoring of Healthcare Data
- Completing the visitor targeting cycle
- New Challenges for creating predictive analytic models
- Predictive modeling and today’s growing data challenges
- The unrealized power of data
- Expert Panel on Challenges and Solutions
- Predictive Modeling for E-Mail Marketing
- Analyzing and predicting user satisfaction with sponsored search
- Some thoughts after attending Predictive Analytics World
Syndicated from Smart Data Collective
I am blogging live from Predictive Analytics World on behalf of SmartData Collective. Hopefully there will also be some podcasts. First up is Eric Siegel, the event's program chair and President of Prediction Impact.
Eric defines predictive analytics as “business intelligence” technology that produces a predictive score for each customer or prospect. A predictive model uses the data you have to create a prediction – you learn from the collective experience your organization has. You can do this in strategic decisions, tactical decisions or in low individual impact but high volume operational decisions (as Neil and I discussed in Smart (Enough) Systems). Predictive analytics can help with response modeling, customer retention, recommendations, credit scoring, ad quality and more. And these solutions can be found in industries from banking to healthcare, consumer services to insurance and retail. The core predictive analytic techniques work consistently across these industries, making it possible to use what one industry learns in another.
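To make "a predictive score for each customer" concrete, here is a minimal sketch of what a deployed scoring model looks like. The features, weights, and logistic form are my own illustration, not anything Eric presented – a real model would learn its coefficients from your organization's historical data:

```python
import math

# Hypothetical weights "learned" from historical customer data
# (feature names and coefficients are illustrative only).
WEIGHTS = {"recent_purchases": 0.8, "months_as_customer": 0.05, "support_tickets": -0.4}
BIAS = -1.2

def predictive_score(customer):
    """Return a 0-1 score, e.g. the probability this customer responds to an offer."""
    z = BIAS + sum(w * customer.get(f, 0.0) for f, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to a probability

# Score every customer or prospect, then rank and target the top of the list.
customers = [
    {"id": 1, "recent_purchases": 3, "months_as_customer": 24, "support_tickets": 0},
    {"id": 2, "recent_purchases": 0, "months_as_customer": 2, "support_tickets": 5},
]
ranked = sorted(customers, key=predictive_score, reverse=True)
```

The key operational point is the ranking step: once every customer has a score, decisions (mail or don't, call or don't) fall out of sorting and thresholding that list.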
Everyone is focused on cutting costs without decreasing business. Eric identified five ways to do this, each with an associated cost-cutting takeaway:
- Response modeling
Focusing your targeting or marketing on those likely to respond creates “lift”, or better results, by eliminating the cost of mailing or calling those who are unlikely to respond. You can target fewer customers and still get the same response.
Don’t target those who won’t respond.
- Response uplift modeling
But what about the people who responded but who would have responded anyway, even if you had not contacted them? Handling them means having two prediction goals and is sometimes called net lift modeling or incremental modeling. The analytical method is to model on the residual. You model four conceptual segments: those who will respond whether you contact them or not, those who will buy only if they get an offer, those who won’t buy either way, and those who will buy unless you contact them.
Don’t contact those who would respond anyway.
- Churn modeling
A customer saved is a customer earned. It is much less expensive to keep or reactivate a customer than to find a new one. So find out who is likely to stay a customer and don’t spend retention money on them.
Don’t waste expensive retention offers on those who will stay anyway.
- Churn uplift modeling
Just like #2, uplift modeling can be used to improve churn modeling. In particular, there are people who will stay UNLESS they hear from you – making a retention offer will trigger them to think about their contract.
Don’t trigger those who would otherwise stay.
- Risk modeling
Don’t charge too little for high-risk applicants or give credit to someone likely to default. Personally I like the old saying “there’s no such thing as a bad risk, only a bad price”.
Don’t acquire “loss customers.”
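The uplift ideas behind #2 and #4 are often implemented with a two-model approach: fit one response model on contacted customers, another on a held-out control group, and act on the difference between the two predicted probabilities. The sketch below maps those two probabilities onto the four conceptual segments Eric described; the segment labels and the 0.5 threshold are my own illustration, not his method:

```python
def uplift_score(p_if_contacted, p_if_not_contacted):
    """Estimated incremental effect of contacting this customer."""
    return p_if_contacted - p_if_not_contacted

def segment(p_treat, p_control):
    """Map two predicted response probabilities onto the four conceptual segments."""
    if p_treat > 0.5 and p_control > 0.5:
        return "will respond anyway"   # don't spend contact budget here
    if p_treat > 0.5:
        return "persuadable"           # positive uplift: contact these
    if p_control <= 0.5:
        return "won't buy anyway"      # no point contacting
    return "do not disturb"            # contact backfires (#4's churn trigger)

# p_treat / p_control would come from two models trained on contacted
# vs. control customers; the probabilities below are made up.
print(segment(0.8, 0.7))  # will respond anyway
print(segment(0.7, 0.2))  # persuadable
print(segment(0.2, 0.6))  # do not disturb
```

The "persuadable" segment is the only one where contact money earns its keep, which is exactly the point of the "don't contact those who would respond anyway" and "don't trigger those who would otherwise stay" takeaways.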
In fact Eric had more than 5 examples and went on to talk about scoring leads to focus sales resources, detecting fraud and preventing theft, avoiding hiring those who won’t stay, and more. He also made the point that while all of these are ways to reduce costs, you can and should also focus predictive analytics on boosting revenue. And once you have deployed predictive analytics, any improvement in your model can make a further difference. For instance, creating an ensemble model using multiple techniques can boost results over a single technique.
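Eric's ensemble point can be sketched as simple score averaging across several models. The component "models" below are placeholder functions standing in for whatever techniques you combine (trees, regressions, neural nets); they are not anything specific from the talk:

```python
def tree_score(customer):
    """Stand-in for a decision-tree model's score."""
    return 0.6 if customer["recent_purchases"] > 1 else 0.3

def regression_score(customer):
    """Stand-in for a regression model's score."""
    return min(1.0, 0.1 + 0.2 * customer["recent_purchases"])

def neural_score(customer):
    """Stand-in for a neural-network model's score."""
    return 0.5

def ensemble_score(customer, models=(tree_score, regression_score, neural_score)):
    """Average the individual scores; ensembles often beat any single model."""
    return sum(m(customer) for m in models) / len(models)

score = ensemble_score({"recent_purchases": 3})
```

Averaging is the simplest combination scheme; weighted averages or a second-level model trained on the component outputs are common refinements.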
Eric made a side point that keeping a control set is critical – keep treating some customers the old way. This helps prove the value you get from the model, helps you monitor that value over time, and lets you perform uplift modeling to improve results. Of these it is particularly important to prove the value, a sentiment with which I could not agree more. Experience shows that unless you can prove the value of an analytic deployment, it won’t, in the end, matter.
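Proving the model's value against that control set comes down to comparing response rates between the model-targeted group and the hold-out treated the old way. A minimal sketch, with invented outcomes:

```python
def response_rate(outcomes):
    """Fraction of customers who responded (1 = responded, 0 = did not)."""
    return sum(outcomes) / len(outcomes)

# Outcomes for customers targeted by the model vs. a control set
# still treated the old way (both samples here are made up).
model_targeted = [1, 0, 1, 1, 0, 1, 0, 1]   # 5 of 8 responded
control        = [0, 0, 1, 0, 0, 1, 0, 0]   # 2 of 8 responded

lift = response_rate(model_targeted) / response_rate(control)
print(f"lift over control: {lift:.1f}x")  # prints "lift over control: 2.5x"
```

In practice you would also want the groups to be randomly assigned and large enough for the difference to be statistically meaningful, which is exactly why the control set has to be kept from the start.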