David McMichael of MetLife came next. MetLife Auto and Home is a mid-size P&C insurance arm within the overall MetLife group. David runs the Quantitative Research and Modeling team within the actuarial department. The team has grown over the last few years with a diverse set of people, something they find very valuable, and has expanded its scope from a focus on risk modeling to a broad range of issues where advanced analytics can be combined with business know-how to drive results.
The problem being addressed is claims fraud – hundreds of thousands of homeowner claims are adjusted every year. A depressing number of people think it is OK to defraud insurance companies and would not report people they knew who did. This makes for a complex problem, and Special Investigation Unit (SIU) resources are limited. When the SIU investigates someone it is a big deal – very intrusive, with visits, lots of reports ordered and so on. As a result it is important to put the most suspicious claims at the top of the investigators' list. This involves combining expert opinion, identity search, business rules and predictive models. The basic process for the SIU involves a claim being referred (manually or automatically), followed by an assessment to see if an investigation is called for; this must be approved by a manager before an investigation starts.
Building the model involves some complexity:
- It is essential to keep the decision in mind – always being aware of how the model will be used.
- Building a model involves high-volume, multi-dimensional relational data (a customer may have multiple products, multiple claims, etc.)
- Select the target variable and be sure you know what you are predicting – in this case they decided to focus on whether an investigation had previously been initiated (not whether fraud was found)
- Pick the right model form – balancing predictive power and explicability, for instance. Sometimes there are regulatory requirements to file models, which drives the choice of model. In fraud, however, the models can be powerful but opaque and that's OK.
- Ensemble and hybrid models should also be considered
- They have to deal with very rare events, which limits model choices
- Certain claim types were under-represented, which led to sub-models for each piece of the puzzle (see the sketch after this list)
- Poorly documented reason codes and other bad data (text was originally stored only as an aide-memoire for a particular agent) meant that sometimes they had to go with what the business thought rather than what the data appeared to show.
- There was no prior history of using text mining, so they asked the business for 5-10 aspects of a claim (that might be described in the claim) that increased or decreased risk as a starting point.
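To make the sub-model and rare-event points concrete, here is a minimal sketch of fitting a separate classifier per claim type with class weighting for the rare positive outcome. scikit-learn, the column names and the model choice are assumptions for illustration; the session did not describe the implementation at this level.

```python
# Minimal sketch: one sub-model per claim type, with class weighting to
# handle the rare positive class (an investigation being initiated).
# Column names ("claim_type", "investigated") are assumptions, not
# MetLife's actual implementation.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def fit_sub_models(claims: pd.DataFrame, feature_cols):
    """Fit a separate classifier for each claim type."""
    sub_models = {}
    for claim_type, subset in claims.groupby("claim_type"):
        model = LogisticRegression(
            class_weight="balanced",  # compensate for very rare positives
            max_iter=1000,
        )
        model.fit(subset[feature_cols], subset["investigated"])
        sub_models[claim_type] = model
    return sub_models

def score_claim(claim_row: pd.Series, sub_models, feature_cols):
    """Route a single claim to the sub-model for its type."""
    model = sub_models[claim_row["claim_type"]]
    features = pd.DataFrame([claim_row[feature_cols]])
    return model.predict_proba(features)[0, 1]
```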
The fraud risk score was normalized to a percentile so that the SIU could select the percentage of claims they wanted to deal with. This score was then adjusted using positive or negative terms found in the claim text – a claim containing a suspicious phrase was pushed up far enough to make the list, while one containing a positive (fraud-reducing) phrase was pushed to the bottom of the list (though not removed from it completely). The model combines a decision tree with a logistic regression before applying this text adjustment to produce the final ranking. Each claim ran through the decision tree, and each terminal node of the tree became a 1/0 variable that was fed into the logistic regression alongside variables from the claim itself. This gave a score that was then adjusted using text mining so that, for instance, a claim with a phrase indicating likely fraud would have its score raised to a certain minimum.
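A minimal sketch of that hybrid structure, assuming scikit-learn (the talk does not name the tooling): tree leaves become indicator variables for the regression, and the resulting score is percentile-ranked and then floored or capped by the text flags. The tree depth and the 90/10 floor and cap values are invented for illustration.

```python
# Sketch of the hybrid model as described: decision-tree terminal nodes
# become 1/0 indicators feeding a logistic regression, and the score is
# converted to a percentile and adjusted by text-mined phrase flags.
# X_train / y_train are assumed to be prepared numeric arrays.
import numpy as np
from scipy.stats import rankdata
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

tree = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

# Each claim's terminal node becomes a one-hot 1/0 variable...
leaf_encoder = OneHotEncoder(handle_unknown="ignore")
leaf_flags = leaf_encoder.fit_transform(tree.apply(X_train).reshape(-1, 1))

# ...fed into the logistic regression alongside the claim variables.
X_hybrid = np.hstack([X_train, leaf_flags.toarray()])
logit = LogisticRegression(max_iter=1000).fit(X_hybrid, y_train)

def adjusted_percentile(scores, suspicious_phrase, positive_phrase):
    """Rank raw scores to percentiles, then apply the text adjustment."""
    pct = rankdata(scores) / len(scores) * 100.0
    # A suspicious phrase floors the claim high enough to make the list;
    pct = np.where(suspicious_phrase, np.maximum(pct, 90.0), pct)
    # a fraud-reducing phrase pushes it to the bottom, but not off the list.
    pct = np.where(positive_phrase, np.minimum(pct, 10.0), pct)
    return pct
```

One appeal of this design: the tree captures non-linear interactions between claim variables, while the regression keeps the final score additive and relatively stable.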
To deploy the model they use a batch scoring process – new claims and claims with new information are scored overnight. The SIU then has a UI that lets them see the list of claims by score as well as the results of the rules and identity search. The deployment takes the model and its data transformations and deploys them as a set.
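One plausible reading of "deployed as a set", sketched with scikit-learn pipelines and joblib; the step names, preprocessing choices and file name are all assumptions, as the talk names no tools:

```python
# Hedged sketch: bundle preprocessing and model in one serialized
# pipeline that the overnight batch job loads and applies.
import joblib
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

scoring_pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
scoring_pipeline.fit(X_train, y_train)
joblib.dump(scoring_pipeline, "siu_scoring_pipeline.joblib")

# Overnight job: score new claims and claims with new information.
pipeline = joblib.load("siu_scoring_pipeline.joblib")
nightly_scores = pipeline.predict_proba(new_or_updated_claims)[:, 1]
```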
When the models were being assessed, MetLife found that standard statistical measures could be helpful but that the best metric usually varies case to case. It is important therefore to keep the decision in mind and find the model that will result in the best business outcome. You also have a lot of data for these kinds of problems, so use it – use validation and cross-validation as well as a true holdout data set (something never used to build the model). The models showed good results – the top 10% of scores captured 60% of the investigations and the top 40% captured 90%. The 10% figure was critical: about 10% of claims get investigated, so detecting the maximum amount of fraud from reviewing that 10% really matters.
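For reference, a small sketch of the gains calculation behind those figures, computed on a true holdout set; the variable names are hypothetical:

```python
# Gains calculation for "top 10% of scores captures 60% of
# investigations". holdout_scores and holdout_investigated (0/1 numpy
# arrays) are invented names for illustration.
import numpy as np

def capture_rate(scores, investigated, top_fraction):
    """Share of all investigations falling in the top-scored fraction."""
    cutoff = np.quantile(scores, 1.0 - top_fraction)
    in_top = scores >= cutoff
    return investigated[in_top].sum() / investigated.sum()

for frac in (0.10, 0.40):
    rate = capture_rate(holdout_scores, holdout_investigated, frac)
    print(f"Top {frac:.0%} of scores captures {rate:.0%} of investigations")
```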
The use of text mining identified more than 100 claims per year that should be investigated above and beyond those flagged by the structured data alone. Actual investigations are launched for claims flagged by text mining at more than 10 times the rate of those referred in other ways.
In the future, David sees MPP making a big difference because it allows bigger data – more external data can be included and more sub-models developed for different time horizons (new claims, after 1 week, etc.). He also sees that in-database modeling, new algorithms and combining established techniques in new ways will make a difference. Big Data, Big Math.
He also referenced the discussion I had with Tom Davenport about the industrialization of analytics, the impact of this on agile analytics and the potential for agile, industrialized analytics.
Lessons learned:
- Diverse backgrounds for modelers helps
- Keep the decision in mind
- Hybrid and submodels can be effective
- Evaluate competing models carefully
- Don’t wait for the perfect model
- Don’t overestimate the effectiveness of models
- Get started with text analytics
Disclosure: MetLife is a client of Decision Management Solutions