
How to stop your chatbot from getting you (bad) press


You may have noticed recent articles about a chatbot that got a little out of line – an airline’s chatbot misstated the rules for a fare class (see The Guardian‘s article or The Washington Post‘s). The airline has, of course, been held accountable for its chatbot – just as it would have been for an employee. Two key lessons can be learned from this outcome:

  • You are responsible for everything your chatbots say, even their hallucinations and errors.
    I would have thought this was obvious but apparently the airline’s lawyers thought that blaming the chatbot might work!
  • You don’t want your chatbot making decisions about things like eligibility, pricing, discounts – decisions that are regulated, based on complex and published policies, and that impact customers.

The airline’s intent here was a good one, I think – use a chatbot to make it easier for people to get answers to questions about the notoriously complex topic of fares. The power of Large Language Models (LLMs) and Generative AI (GenAI) to drive more interactive chatbots is real and is going to change how consumers use your website and understand your intent. They can dramatically improve explicability, making your website and systems easier to use, easier to understand and fundamentally less technical to access.

But there are issues. What AI chatbots say is not always reproducible. They may hallucinate – sometimes spectacularly and with references! How they work is largely inexplicable – especially to regulators. Even bad answers look like good ones. And, as this story shows, you’re going to be held accountable for them.

The solution is not to dump LLMs/GenAI from your roadmap but to recognize that this technology has no sense of the truth or facts and simply generates the most likely content – it’s not prescriptive. You need to add that prescription yourself, precisely defining what the chatbot should do in which circumstances, based on ground truth and factual content. While LLMs and GenAI are great for interacting with customers and explaining results, they can’t be trusted to prescriptively make regulated or policy-based decisions.

Adding decisioning based on business rules – explicit decision logic – grounds their behavior in facts and rules. Modern decisioning platforms are great for transparency and consistency, especially when decision modeling is used to manage the logic. Using a decisioning platform to automate decisions like eligibility (for a fare, product, service or benefit), dynamic or complex pricing, or risk assessment gives you precise business control over those decisions. Unlike a chatbot, the logic is explicit, explainable and managed.
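To make the contrast concrete, here is a minimal sketch of what explicit, explainable decision logic looks like – every outcome traces back to a named rule rather than generated text. The fare fields, thresholds and rule names are illustrative assumptions, not any airline’s actual policy or a specific decisioning product.

```python
# Sketch of explicit decision logic: each rule is a named, auditable condition.
# All policy details (fields, 90-day window, rule IDs) are hypothetical examples.
from dataclasses import dataclass


@dataclass
class RefundRequest:
    fare_class: str               # e.g. "economy-flex"
    refundable_fare: bool         # was a refundable fare purchased?
    days_since_travel: int        # days between travel and the refund claim


def fare_refund_decision(req: RefundRequest) -> dict:
    """Return an eligibility decision plus the specific rule that fired."""
    if not req.refundable_fare:
        return {"eligible": False, "rule": "R1: fare class is non-refundable"}
    if req.days_since_travel > 90:
        return {"eligible": False, "rule": "R2: claim filed more than 90 days after travel"}
    return {"eligible": True, "rule": "R3: all refund conditions met"}
```

Because every answer carries the rule that produced it, the decision is reproducible and can be explained to a customer – or a regulator – without guesswork.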

So why not JUST use decisioning? Decisioning platforms deliver APIs aimed at internal systems. The decisions are compliant, precise, transparent – but not accessible to a customer. Typically, you have to put all the data needed for the decision into forms and processes before you can get an answer. Adding LLMs/GenAI to handle the interaction provides a customer-friendly interface to the decisioning APIs and delivers both a great interaction and reliable, compliant decisions.
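The division of labour described above can be sketched very simply: the LLM gathers inputs from the conversation and phrases the answer, while the regulated decision itself comes from the decisioning service’s API. The endpoint URL, payload fields and the llm_explain helper below are illustrative assumptions, not a particular product’s API.

```python
# Sketch: LLM handles the interaction, the decisioning API makes the decision.
import requests

DECISION_API = "https://decisions.example.com/v1/fare-refund"  # hypothetical endpoint


def answer_refund_question(extracted_fields: dict, llm_explain) -> str:
    # 1. The chatbot (LLM) has already extracted structured fields from the chat,
    #    e.g. {"fare_class": "economy-flex", "days_since_travel": 10}.
    # 2. The regulated decision is made by the decisioning platform, not the LLM.
    decision = requests.post(DECISION_API, json=extracted_fields, timeout=10).json()

    # 3. The LLM's only job is to phrase the already-made decision for the customer.
    prompt = (
        "Explain this refund decision to the customer in plain language. "
        f"Decision: {decision}. Do not change or reinterpret the decision."
    )
    return llm_explain(prompt)
```

The key design choice is that the LLM never decides – it translates between the customer’s language and the decisioning API’s structured inputs and outputs.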

This was a topic of a webinar we did with IBM recently – How to achieve more trustworthy Generative AI with Decision Automation [free registration required]. See also this post about using AI to improve interactions and this one on using ML/AI to improve the operational decisioning itself.

If you are interested in learning more about how you can combine AI-driven decisioning with chatbots, drop us a line. Or, if you are based in the NYC area, register for our upcoming event April 10: Unlocking the Power of Automated Decisions: Harnessing the Power of AI/ML for Intelligent Rules
