There is an AI trust gap among enterprise leaders today: leaders often do not understand the basis on which AI applications and agents make decisions. This deters businesses from scaling AI applications, especially Generative AI (Gen AI) apps, which pose risks such as hallucinations and bias. In McKinsey's The State of AI in 2024 survey, 40 percent of respondents identified a lack of explainability as a key risk in adopting Gen AI. We reviewed research from McKinsey and Harvard Business Review, and we talked to our customers as well as our technical experts about this topic. Here is what we learned:
Importance of AI Explainability
- Trust is fundamental for AI adoption
- Without trust, customers and employees won’t use AI systems
- AI explainability helps stakeholders understand AI systems’ inner workings and monitor output accuracy
- It’s essential for scaling AI from early use cases to enterprise-wide adoption
The Benefits of AI Explainability
- User Adoption
  - Improve user satisfaction and drive adoption.
  - Foster innovation and change management.
- Risk Mitigation
  - Identify and mitigate bias and inaccuracy.
  - Reduce operational failures and reputational damage.
- Continuous Improvement
  - Refine AI models based on insights from explainability.
  - Enhance model performance and align with user expectations.
- Regulatory Compliance
  - Ensure adherence to industry regulations.
  - Minimize the risk of non-compliance penalties.
Persona-Based Explanations
Explanations should be delivered at varying levels of detail, with AI explainability techniques mapped to the needs of different personas and their use cases. For example, the CTO of a mortgage lending company may want to understand the inner workings of a machine learning algorithm to see how decisions are made, while a loan officer does not need all the technical details; a plain-language summary of the factors behind a decision is more useful. The sketch below illustrates this persona-based rendering.
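As a minimal sketch, the snippet below assumes we already have per-feature attributions for a single loan decision (for example, from a tool such as SHAP or LIME; the feature names and values here are hypothetical, not real model output) and renders them two ways: a full technical view for the CTO and a top-reasons summary for the loan officer.

```python
# Minimal sketch: render one set of (hypothetical) feature attributions
# for two personas. The values below stand in for output from a tool
# such as SHAP or LIME; they are illustrative, not real model output.

attributions = {
    "debt_to_income_ratio": -0.42,  # pushed the decision toward "deny"
    "credit_score":          0.31,  # pushed the decision toward "approve"
    "loan_to_value_ratio":  -0.18,
    "employment_years":      0.07,
}

# Technical view (e.g., for a CTO): every feature, sorted by impact.
print("Technical view: signed contribution of each feature")
for name, value in sorted(attributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:<24} {value:+.2f}")

# Plain-language view (e.g., for a loan officer): top reasons only.
top = sorted(attributions.items(), key=lambda kv: -abs(kv[1]))[:2]
print("\nSummary for the loan officer:")
for name, value in top:
    direction = "supported approval" if value > 0 else "counted against approval"
    print(f"  - {name.replace('_', ' ')} {direction}")
```

The underlying attributions are identical in both views; only the presentation changes to fit the persona.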
How to Start with AI Explainability
To get started, consider the following: build a cross-functional team, establish a human-centered mindset, measure objectives and iterate, and leverage explainability tools. Widely used tools include open-source algorithms such as LIME and SHAP, as well as IBM's AI Explainability 360 toolkit, among others. The sketch below shows one such tool in action.
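Here is a minimal sketch of using the open-source SHAP package to explain a single prediction. The synthetic data and the choice of a scikit-learn gradient boosting model are assumptions for illustration; in practice you would point the explainer at your own trained model and data.

```python
# Minimal sketch of SHAP on a toy model (synthetic data, for illustration).
# Requires: pip install shap scikit-learn
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Stand-in for real training data and a real production model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes per-feature attributions for tree-based models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain the first prediction

# Print each feature's signed contribution to this one prediction.
for i, value in enumerate(shap_values[0]):
    print(f"feature_{i}: {value:+.3f}")
```

The per-feature values produced here are the raw material for the persona-based views described above.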
In conclusion, AI and LLM models are hard to understand, and explaining them will go a long way toward people being willing to trust and use AI apps and agents at scale. While there are tools that can help explain AI models, we believe that much more needs to be done in this area.