I blogged last week about IBM’s AI approach, but one piece was still under NDA: new capabilities around trust and transparency. These capabilities were announced today.
As part of trying to address the challenges of AI, IBM has added a trust and transparency layer to its ladder of AI capabilities (described here). IBM sees five primary personas around AI capabilities: business process owners, data scientists, AI ops, application developers, and the CIO/CTO. The CIO/CTO is generally the persona most responsible, and the one who sees the challenges with trust. To use AI, companies need to understand the outcomes, the decisions: are they fair and legitimate?
The new trust and transparency capability is focused on detecting fairness/bias and providing mitigation, traceability, and auditability. It’s about showing the accuracy of models/algorithms in the context of a business application.
Take claims as an example. A claims process is often staffed largely by knowledge workers. If an AI algorithm is developed to approve or decline a claim, then the knowledge workers will only rely on it if they can trust it and understand how it decided.
These new capabilities show the model’s accuracy in terms defined by the users, the people consuming the algorithms. The accuracy can be drilled into to see how it is varying. For each model, a series of factors can be identified for tracking (gender, policy type, geography, etc.), and how the model varies against these factors can be reviewed and tracked, as sketched below.
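To make this concrete, here is a minimal sketch of what tracking accuracy by factor might look like. This is my own illustration, not IBM’s implementation: the column names (prediction, actual) and the sample data are hypothetical.

```python
import pandas as pd

def accuracy_by_factor(scored: pd.DataFrame, factor: str) -> pd.Series:
    """Model accuracy broken down by one tracked factor (e.g. gender)."""
    correct = scored["prediction"] == scored["actual"]
    return correct.groupby(scored[factor]).mean()

# Hypothetical scored claims: the model's decision vs. the eventual outcome.
scored = pd.DataFrame({
    "prediction":  [1, 0, 1, 1, 0, 1],
    "actual":      [1, 0, 0, 1, 0, 0],
    "gender":      ["F", "M", "F", "M", "F", "M"],
    "policy_type": ["auto", "home", "auto", "auto", "home", "home"],
})

print(accuracy_by_factor(scored, "gender"))       # accuracy per gender
print(accuracy_by_factor(scored, "policy_type"))  # accuracy per policy type
```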
The new capabilities can be applied to any model: an AI algorithm, a machine learning model, or an opaque predictive analytic model such as a neural network. IBM uses data to probe and experiment against any model in order to propose a plausible explanation for its result, building on libraries such as LIME to surface the factors that explain the model’s result. The accuracy of the model is also tracked against these factors and the user can see how they are varying. The system can also suggest possible mitigation strategies and allows drill-down into specific transactions. All of this is available through an API so it can be integrated into a runtime environment. Indeed, this is primarily about runtime support.
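For a sense of the style of explanation LIME produces, here is a small sketch using the open-source LIME library directly against a hypothetical approve/decline model. The model, the synthetic data, and the feature names are all illustrative assumptions, not part of IBM’s offering.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque stand-in model on synthetic claims data.
rng = np.random.RandomState(0)
X = rng.rand(500, 4)
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)  # 1 = approve, 0 = decline
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

feature_names = ["claim_amount", "policy_age", "prior_claims", "severity"]
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["decline", "approve"]
)

# Perturb the data around one claim and fit a local surrogate model,
# yielding a plausible explanation of this particular decision.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```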
These new capabilities are focused on fairness: how well the model matches expectations and plan. This is about more than just societal bias; it’s about making sure the model does not have built-in issues that prevent it from being fair and behaving as the business would want.
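One common way to quantify this kind of fairness is a disparate impact ratio: the rate of favorable outcomes for each group relative to the best-treated group. This sketch is my own illustration, assuming a prediction column where 1 is the favorable outcome; the 0.8 threshold is the conventional four-fifths rule, not something IBM’s announcement specifies.

```python
import pandas as pd

def disparate_impact(scored: pd.DataFrame, factor: str, favorable: int = 1) -> pd.Series:
    """Favorable-outcome rate per group, relative to the best-treated group."""
    rates = (scored["prediction"] == favorable).groupby(scored[factor]).mean()
    return rates / rates.max()  # 1.0 for the best-treated group

# Hypothetical scored claims grouped by gender.
scored = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 1, 0, 1, 1],
    "gender":     ["F", "M", "F", "M", "F", "M", "F", "M"],
})

ratios = disparate_impact(scored, "gender")
print(ratios)
print(ratios[ratios < 0.8])  # groups falling below the four-fifths rule
```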
It’s great to see these capabilities being developed. We find that while our clients need to understand their models, they also need to focus those models on just part of the decision if they are actually going to deploy something (see this discussion on not biting off more AI than you can trust).
This capability is now available as a freemium offering.