Next up was a presentation from one of InfoCentricity's customers on the use of analytics in business decisions. In many organizations modelers are busy building predictive models that they then throw over the wall to a business analyst. To bridge this gap you need a collaboration platform that allows modelers to do their thing while allowing business analysts to do theirs. Xeno supports this kind of collaboration. In general this collaboration makes a difference in a number of areas:
- The population of interest
Scorecards tend to exclude accounts with little history, accounts with pre-defined treatments, or segments where the history is unreliable. On the business side you tend to have hard policy rules that affect treatments. Collaboration on the "big picture" of the population can affect scorecard design, explain data anomalies and allow both sides to re-examine the population with respect to business goals.
- Performance outcomes
Understanding how long a model / strategy will be in production and what it is going to be used for helps build the right scorecard. If a scorecard is going to be used for a long time while the strategy changes often, then a more generic scorecard may be more useful, for instance.
- Predictors or decision keys
How do you decide if something is a segmentation variable or a predictor? Modelers think about statistical measures, whereas business analysts think about policies, customer service impact and business performance. It is worth some back and forth between the policies, data sources and scorecard: apply the "hard" rules first, then the softer policies, then refine based on analytics, for instance. Knowing how sophisticated the strategy is likely to be will also prevent over-development of scorecards. A rough sketch of how hard rules, exclusions and segments might sit around a scorecard follows this list.
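To make the segment-variable-versus-predictor discussion concrete, here is a minimal sketch in pandas of how hard policy rules, exclusions and a segmentation key might be applied before and around a scorecard. The data and column names are hypothetical illustrations, not from the presentation.

```python
import pandas as pd

# Hypothetical application data; the column names are illustrative only.
apps = pd.DataFrame({
    "months_on_book": [2, 48, 60, 12, 36],
    "bankruptcy_flag": [0, 0, 1, 0, 0],
    "channel": ["online", "branch", "online", "branch", "online"],
    "utilization": [0.10, 0.85, 0.40, 0.55, 0.20],
})

# 1. Hard policy rules first: these accounts never reach the scorecard.
hard_decline = apps["bankruptcy_flag"] == 1
scored_pop = apps[~hard_decline]

# 2. Exclude thin-file accounts whose history is unreliable; they get a
#    pre-defined treatment instead of a score.
thin_file = scored_pop["months_on_book"] < 6
dev_pop = scored_pop[~thin_file]

# 3. Treat "channel" as a segmentation key (one scorecard per segment) and
#    keep "utilization" as a candidate predictor within each scorecard.
for segment, seg_df in dev_pop.groupby("channel"):
    print(segment, len(seg_df), "accounts available for scorecard development")
```

The point of the sketch is simply the ordering: hard rules and exclusions shape the population before any statistical work, which is exactly where modelers and business analysts need to agree.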
The customer came next and presented on the specifics of how to review policies analytically to streamline the loan review process in originations. Policies come from bad loan reviews (a "what went wrong" review that generates rules for next time), history and domain expertise. Modelers, meanwhile, can use standard application data, credit variables and policy variables. They also needed to infer the performance of rejected and unused accounts (reject inference) so that this could be used in the model. They mimicked the application flow as a tree and then got modelers and business people to work very interactively to see what existing policies did, try different scenarios and test other variables to see what might be worth including in policies (a rough sketch of this kind of review follows the list below). This helped them find:
- Unproductive policies
Why manually review applications where the rule already produces a low acceptance rate with a high bad rate, or a high acceptance rate with a low bad rate? In either case the outcome is largely predetermined and the review adds little.
- Policies that should be added
Segments with medium bad rates might respond to further splitting to divide high bad rates from low ones.
- Places where more precision would be helpful
Existing splits whose branches end up with similar bad rates, for instance, could be replaced with splits that separate them better.
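To illustrate the kind of policy review described above, here is a rough pandas sketch that computes acceptance rates and bad rates per existing policy rule and then tests a candidate split on one rule. The data, rule names and the DTI variable are hypothetical, and in practice the "bad" outcome for rejected accounts would come from reject inference as noted earlier.

```python
import pandas as pd

# Hypothetical historical application outcomes; rule names and variables are
# illustrative only.
hist = pd.DataFrame({
    "policy_rule": ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "accepted":    [1,   1,   0,   1,   1,   1,   0,   1,   0,   0],
    # 1 = went bad; for rejected accounts this would be an inferred outcome.
    "bad":         [0,   0,   0,   1,   0,   1,   0,   0,   0,   1],
    "dti":         [0.2, 0.3, 0.5, 0.6, 0.4, 0.7, 0.8, 0.3, 0.5, 0.6],
})

# Acceptance rate per policy rule, and bad rate among the accepted accounts.
accepted = hist[hist["accepted"] == 1]
summary = pd.DataFrame({
    "acceptance_rate": hist.groupby("policy_rule")["accepted"].mean(),
    "bad_rate_accepted": accepted.groupby("policy_rule")["bad"].mean(),
})
print(summary)  # rules that are extreme in both columns are review candidates

# For a medium-bad-rate rule, try a further split (here on DTI) to see whether
# it separates high from low bad rates.
rule_b = accepted[accepted["policy_rule"] == "B"]
print(rule_b.groupby(rule_b["dti"] > 0.5)["bad"].mean())
```

The same two summaries, run interactively over the real policy tree, are what lets modelers and business people spot unproductive policies, candidate new splits and places where an existing split is not earning its keep.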
In general this approach led to many small changes that had a sizable impact while also increasing confidence in the automated decision making. Origination takes more than scores; it takes policy rules too. Reviewing these rules analytically makes for better efficiency and more validated changes.