
You may have noticed articles about a chatbot recently that got a little out of line – an airline’s chatbot misstated the rules for a fare class (see The Guardian’s article or The Washington Post’s). The airline has, of course, been held accountable for its chatbot – just as it would have been for an employee. Two key lessons can be learned from this outcome:

  • You are responsible for everything your chatbots say, even their hallucinations and errors.
    I would have thought this was obvious but apparently the airline’s lawyers thought that blaming the chatbot might work!
  • You don’t want your chatbot making decisions about things like eligibility, pricing, discounts – decisions that are regulated, based on complex and published policies, and that impact customers.

The airline’s intent here was a good one, I think – use a chatbot to make it easier for people to get answers to questions about the notoriously complex topic of fares. The ability of Large Language Models (LLMs) and Generative AI (GenAI) to power more interactive chatbots is real and is going to change how consumers use your website and understand your intent. They can dramatically improve explicability, making your website and systems easier to use, easier to understand and fundamentally less technical to access.

But there are issues. What AI chatbots say is not always reproducible. They may hallucinate – sometimes spectacularly and with references! How they work is largely inexplicable – especially to regulators. Even bad answers look like good ones. And, as this story shows, you’re going to be held accountable for them.

The solution is not to dump LLMs/GenAI from your roadmap but to recognize that this technology has no sense of truth or facts – it simply generates the most likely content. It’s not prescriptive. You need to add that prescription yourself, precisely defining what the chatbot should do in which circumstances, grounded in factual content. While LLMs and GenAI are great for interacting with customers and explaining results, they can’t be trusted to prescriptively make regulated or policy-based decisions.

Adding decisioning based on business rules – explicit decision logic – grounds their behavior in facts and rules. Modern decisioning platforms are great for transparency and consistency, especially when decision modeling is used to manage the logic. Using a decisioning platform to automate decisions like eligibility (for a fare, product, service or benefit), dynamic or complex pricing, or risk assessment gives you precise business control over your decisions. Unlike a chatbot, the logic is explicit, explainable and managed.

So why not JUST use decisioning? Decisioning platforms deliver APIs aimed at internal systems. The decisions are compliant, precise, transparent – but not accessible to a customer. Typically, you have to put all the data needed for the decision into forms and processes before you can get an answer. Adding LLMs/GenAI to handle the interaction provides a customer-friendly interface to the decisioning APIs and delivers both a great interaction and reliable, compliant decisions.
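One way to picture this division of labor is a chatbot loop in which the LLM only extracts the request and phrases the answer, while an explicit rule set makes the actual eligibility decision. Every name and rule below is invented for illustration – a real system would call an actual LLM API and a deployed decision service:

```python
# Sketch of "LLM for the conversation, explicit rules for the decision".
# All names and rules here are hypothetical.

def extract_request(utterance: str) -> dict:
    """Stand-in for an LLM prompted to turn free text into structured fields."""
    return {"fare_class": "basic", "days_before_departure": 2}

def decide_refund(request: dict) -> dict:
    """Explicit, auditable decision logic -- the ground truth the LLM lacks."""
    if request["fare_class"] == "refundable":
        return {"eligible": True, "reason": "refundable fare"}
    if request["days_before_departure"] >= 7:
        return {"eligible": True, "reason": "cancelled 7+ days before departure"}
    return {"eligible": False, "reason": "non-refundable fare cancelled within 7 days"}

def respond(utterance: str) -> str:
    """The LLM explains the decision; it never makes it."""
    decision = decide_refund(extract_request(utterance))
    status = "is" if decision["eligible"] else "is not"
    return f"Your ticket {status} eligible for a refund ({decision['reason']})."

print(respond("Can I get my money back for tomorrow's flight?"))
```

However the customer phrases the question, the answer always comes from the same managed logic – which is exactly what a regulator (or a judge) will want to see.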

This was a topic of a webinar we did with IBM recently – How to achieve more trustworthy Generative AI with Decision Automation [free registration required]. See also this post about using AI to improve interactions and this one on using ML/AI to improve the operational decisioning itself.

If you are interested in learning more about how you can combine AI-driven decisioning with chatbots, drop us a line. Or, if you are based in the NYC area, register for our upcoming event April 10: Unlocking the Power of Automated Decisions: Harnessing the Power of AI/ML for Intelligent Rules

There’s huge potential in AI, yet far too many AI projects fail to deliver meaningful business value. The technology works, the team has skills, the data is available – and yet it never comes together. To address this critical issue, Eric Siegel has just published The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. In The AI Playbook, Eric has distilled years of experience and a ton of great advice into an easy-to-follow roadmap for success.

Even before its release, the book hit the #1 slot on Amazon’s top 100 Hot New Releases in Technology. Fast Company said “An antidote to overheated rhetoric of all-powerful AI… helpfully lays out the key steps to deploying the technology we’re now all obsessed with,” while The Forecast said it “Separates AI fact from AI fantasy.”

Eric’s approach – BizML – puts deployment and business value at the heart of machine learning and artificial intelligence projects. Like Eric, I see a focus on the business problem – what I call the decision and Eric calls the deployment goal – as the essential first step. We’re also both big believers in making sure you deploy the model into business operations and focus on continuous improvement. Eric’s book does a great job of outlining where these steps fit and illustrates the whole with some compelling stories. If you’re interested, there’s more information, including a nice cheat sheet for the approach, at http://www.bizML.com but you should really just buy the book.

Eric Siegel, author of Predictive Analytics and the Chair of Machine Learning Week, had a great article in Harvard Business Review recently – The AI Hype Cycle Is Distracting Companies. You should read it, as he makes a lot of great points about AI hype and its dangers. One comment, in particular, stood out for me:

Most practical use cases of ML — designed to improve the efficiencies of existing business operations — innovate in fairly straightforward ways. Don’t let the glare emanating from this glitzy technology obscure the simplicity of its fundamental duty: the purpose of ML is to issue actionable predictions

This focus on improving existing business operations in a straightforward way is critical. We see a lot of companies spending a lot of money on ML and AI. Much of it is wasted because the ML/AI team, keen to show how smart they are and to justify the investment, insists on putting all their effort into “transformational” projects or “new businesses”. The potential for ML to improve their current business in meaningful but boring ways is ignored. These ML/AI teams are often more focused on using the coolest technology, so they will be hired by bigger companies and be given bigger budgets, than they are on delivering business value NOW.

In contrast, successful ML/AI teams are ruthlessly focused on incremental improvements – taking well understood problems in the business and using machine learning to improve results in each area in a very focused way. Often the improvement is small at a per-transaction level but the team focuses on high-volume problems, multiplying that small improvement by very large numbers of customers, products or transactions.
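That multiplication is worth making concrete. With invented numbers, even a modest per-transaction gain compounds into real money at operational volumes:

```python
# Back-of-the-envelope math for incremental decisioning improvements.
# All numbers are invented for illustration.
gain_per_decision = 0.25       # a "small" $0.25 improvement per transaction
annual_decisions = 12_000_000  # a high-volume operational decision
annual_value = gain_per_decision * annual_decisions
print(f"${annual_value:,.0f} per year")  # $3,000,000 per year
```

A quarter per transaction would never make a keynote slide, but $3M a year in a known, boring business process is exactly the kind of value the hype distracts teams from.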

Not only does AI hype tend to distract from these very practical problems, it tends to result in a model-first or technology-first mindset. It becomes more important that the project uses AI than that it generates results. As Eric goes on to say:

This exacerbates a significant problem with ML projects: They often lack a keen focus on their value — exactly how ML will render business processes more effective. As a result, most ML projects fail to deliver value.

Our experience is that you really need:

  • A clear understanding of what decision needs to be made differently to generate a result
  • A detailed awareness of how exactly your ML model will influence that decision
  • A sense of what organizational change will need to happen to get from the current decision-making approach to the new one.

We ensure this using decision modeling and our DecisionsFirst™ approach to projects. This means we always know how the decision is being made, and can automate most of it, before we start applying ML to improve it.

If this is a topic that interests you, why not come to Machine Learning Week? I’m speaking on the Tuesday to kick off the business track (Step 1: Setting Machine Learning and AI Projects Up for Success) and giving a workshop (Machine Learning Operationalized for Business: Ensuring ML Deployment Delivers Value) on the Monday. Or drop us a line at Decision Management Solutions and learn how we can help you directly.

I got a chance to listen to Mike Gualtieri of Forrester talk about his recent Wave report on AI Decisioning Platforms. This focuses in on a core set of vendors and compares them in detail as a follow-up to his earlier AI Decisioning Landscape report (which included Decision Management Solutions with our DecisionsFirst Modeler product).

AI Decisioning Platforms are a superset of ML/AI Platforms (which Mike also covers) and this Wave represents an evolution – it started as a review of Business Rules Management Platforms, evolved to cover Digital Decisioning Platforms and now focuses on AI Decisioning Platforms to emphasize the value of these platforms to those deploying and exploiting AI.

Mike pointed out that making decisions is the best possible use case for AI – especially as you should consider making a recommendation to be a decision. He emphasized that enterprises rise or fall based on the collective efficacy of their decisions. And, while some of those decisions are big, strategic decisions, many more are rapid, transactional and operational. He also pointed out that insights are perishable, real-time insights especially so, meaning that decisioning really matters to the effective use of real-time insights. And as the time to decide shrinks, enterprises need to do more real-time decisioning.

Legacy architectures are not geared to this kind of data provisioning, while legacy development approaches – writing code – are not going to keep human experts in control. Mike thinks a focus on “human-governed AI” is essential, and this means using an AI decisioning platform that combines a broad set of technologies, supports rapid learning loops, and offers industry accelerators.

Before getting into the details of the platform, Mike reminded the audience to begin “Decisions First”, pointing out that before using one of these platforms you need a decision model that combines several elements – rules, ML, AI, optimization. Our experience tracks strongly with Mike’s – you NEED a model first, ideally one built using the Decision Model and Notation (DMN) standard and a top-down, business-centric approach.

He then identified the 9 most important criteria used in the Wave:

  1. Data
    An ability to connect to sources, manage features and pipelines, support data annotation and cleansing.
  2. Provide a range of intelligence technologies
    • Statistics and queryable analytics
    • Pure math
    • Constraint based optimization / Operations Research / Mixed Integer Programming
    • Machine Learning
    • Human decision logic as rules, policies, knowledge and processes. Capturing this business expertise is, he said, an essential feature of an AI Decisioning Platform.
  3. Low/no code.
    Tools for business experts e.g. decision modeling, abstraction as well as productivity tools for data engineers, data scientists and developers.
  4. Composability and reuse to drive enterprise decision agility, plus strong collaboration tools
  5. Trust, understanding and transparency. Business simulation is a critical element.
  6. Management (several layers up on top of Kubernetes)
  7. Model Ops – not just MLOps but a more holistic Ops function that deploys your whole decision model
  8. Multiple deployment options
  9. Scalability

Mike wrapped by pointing out that AI Decisioning can’t deliver itself – business users need to define the strategy criteria. Business experts MUST decide what and how to decide!

You can get reprints of the report directly from Forrester (if you are a subscriber) or from vendors like FICO who might offer it for free. If you want help selecting an AI Decisioning Platform or maximizing the value of one, drop us a line – that’s what we do.

In March 2023, three U.S. banks failed. This triggered a sharp decline in global bank stock prices and a swift response by regulators to prevent potential global contagion. Banks across the US scrambled to respond to the crisis.

Join me on May 24th for a discussion on how you can protect your business and be prepared for the next crisis. You’ll learn how to achieve the flexibility and agility you need to navigate rapid market change. You’ll understand the tools you need for dynamic impact analysis.

During the webinar, we will discuss:

  • How to recession-proof your business by ensuring your infrastructure is prepared for rapid change in market conditions and regulatory environment.

  • How to simulate the impact of market, interest rate and regulatory changes for loss forecasting and stress testing.

  • Managing and automating loan origination to maximize value, reduce defaults and safely stay in the market.

It’s time to modernize your platform with digital decisioning. Even if you already own the right infrastructure, it’s time to make it more nimble and responsive.

Register Now

Our friends at IBM ran a webinar on May 9 that is a great opportunity to see decision automation in action:

As expectations grow for faster and more personalized digital experiences, business decisions are increasingly important – and often more complex. Intelligent decisions that fuse predictions and policies can deliver more effective decisions that help every element of your business run more smoothly. Join us for an insightful webinar that explores this new domain:

  • Why intelligent decisions
  • Anatomy of intelligent decisions
  • Examples of intelligent decisions by industry
  • How you can build intelligent decisions

Register to join here and get your calendar invite

Watch the On-Demand Webinar Today

The slides are available to download below and please share any of your questions here.

I worked on a paper for IBM called “Operationalizing AI: Beyond AI Pilots with Digital Decisioning” on the same topic. This is very much top of mind with our customers these days – how to use a rules-based decision platform to effectively operationalize advanced machine learning and AI models.

Check out the paper and the webinar!

Our CTO Ryan Trollip is presenting with Scott Horwitz from FICO in a great webinar coming up on April 20th:

Insurance claims management is a complex business. Customers want their claims processed and approved quickly. Insurance providers need to manage risk, improve scalability, retain institutional knowledge when staff changes, reduce overhead costs of management, and comply with myriad government regulatory requirements.

To optimize operations and meet all of their compliance requirements, many insurance companies have a goal to increase the percentage of claims that can be processed and adjudicated with no human decision-making involved. In other words, increasing their rate of straight-through processing.

Drawing from lessons learned from our customers, Decision Management Solutions and FICO will share how organizations can:

  • Simplify the management of the decision-making capability by automating it
  • Provide visual clarity and elicit valuable knowledge by working with decision models
  • Clarify the What and Why when streamlining and automating the decision making

Register here.

We also have a great white paper on Next Generation Claims Systems that you can download here.

It’s that time again – time to take the Rexer Analytics Data Science Survey.

Rexer Analytics has been conducting this survey since 2007! Each survey explores the analytic behaviors, views, and preferences of data scientists and analytic professionals. This year Karl is working with Eric Siegel and Machine Learning Week to design, promote, and analyze the survey.

Summary reports from previous surveys are available FREE to download from the Rexer Analytics website – and are fascinating! Karl Rexer and Eric Siegel will present highlights of the 2023 survey at the Machine Learning Week conference in Las Vegas in June 2023 (I’m speaking and teaching there too) and a full summary report will be available for download from the Rexer Analytics website later in 2023.

It’s completely confidential and not being conducted for any vendor. The survey should take approximately 10 minutes to complete. So go take it…

https://s-9ed913-i.sgizmo.com/s3/i-Paem5JH2y2xvCPNGxn-5888579/?sguid=Paem5JH2y2xvCPNGxn

I got my hands on a copy of Krishna Pera’s new book, Big Data for Big Decisions, recently. I met Krishna several years ago when he published some articles on being decision-driven not data-driven and on why it’s essential to prioritize decisions for your analytic efforts. He’d found some of my articles on being decision-centric and we connected. Now, one pandemic later, I’m delighted to be able to review the book that resulted from his experience with this topic.

The book’s subtitle is “Building a Data-Driven Organization” and it covers how to begin the journey, how to focus on the right (“big”) decisions, the challenges in getting value from analytics, data strategy and much more. His focus throughout is on building a robust roadmap and an enterprise-level plan. Crucially, he wants those establishing data-driven organizations to focus on the decisions that add the most value to the organization. This specific focus on decisions and on selecting the right decisions is key to the book and, indeed, to succeeding at becoming data-driven.

He encourages an assessment of your current state and the development of a coherent roadmap. A new operating model is going to be required to become insights-driven and understanding this new model will give you a sense of where to make strategic investments. He focuses immediately on explicitly assessing and improving decision-making (rather than asserting that better data will lead inexorably to better decisions as so many do). Furthermore, he emphasizes tying improvements in this decision-making to concrete business value.

His prescription for creating the organization begins, as it should, with a discussion of decisions and the importance of beginning “with the decision in mind” when considering data and analytics. He dives right in, pointing out that most organizations lack any clear understanding of their decisions, except perhaps for purchasing and investment decisions. They don’t really know what their decisions are, who has what role in those decisions, how those decisions are made or how they could be improved. To address this, he recommends an immediate investment in understanding these decisions by modeling them (ideally using the Decision Model and Notation or DMN standard).

His chapter on finding the “big decisions” of the title is particularly worthwhile. He provides some good insights on how to prioritize decisions – comparing high impact but rare decisions with those that offer low value per decision but very high volumes for instance. The key, he says, is to find a core set of decisions that offer your organization the most value. If you can make those decisions data-driven, you’ll realize most of the analytic value available to you.

He goes on to discuss the potentially elusive value of analytics in decision making. To address this, he encourages a focus on incremental improvement to known problems rather than pure research-oriented analytic projects. Then you can prioritize decisions based on both the potential for analytic improvement and the likely cost and complexity of data-driven improvement.

With the value of analytic improvements in decisions clearly identified, he transitions to discussing data challenges and an IT strategy to support data driven decision-making. There are many elements to such a strategy, and he does a nice job of outlining how these elements come together to support a data-driven organization. I particularly liked the discussion of an information supply chain and his ideas around mapping system and IT maturity to the data and analytic needs identified. He wraps up with solid chapters on data strategy, data governance and on data-driven marketing as an example.

One of the great things about Krishna’s book is that he cites a huge number of books, papers and articles, giving you a rich set of information to drill into. He also leverages established ways of documenting business, data and IT plans and strategy. He shows how these established techniques can be used to benchmark, analyze and re-design an organization to become data-driven and apply (big) data and analytics to critical decisions.

If you are trying to make your organization data-driven and striving to use analytics, machine learning and AI to improve your business decisions, this book should be on your bookshelf.

Here at Decision Management Solutions, we’re helping our clients become truly data-driven through decision automation. Automating and improving the most common decisions in your organization creates immediate business value because these operational decisions literally run the business. While some of our clients have already identified the best use cases for decision automation, we often help clients with an assessment to prioritize their investments in decision automation and data infrastructure. Krishna outlines an approach very similar to the one we use – so I am confident it works!

Buy Krishna’s book to help you put a plan together and get in touch if you need some help!

We do a lot of work in insurance and we see many companies spending heavily on automating their claims process. Their intent is to improve their loss ratio – both by reducing losses due to fraud or bad claims and by reducing processing costs. But it often doesn’t make much difference because they are focused on handling a claim efficiently, not effectively. Digitizing claims documentation and the claims workflow might help pay a claim more cheaply, but it does little to ensure that the right claims are paid or that the right approach is taken. And if most claims still need to be manually reviewed, it won’t do that much to reduce processing costs either.

In contrast, our customers invest in digital decisioning to make sure that their claims handling decisions – which claims to pay, when to investigate, when to fast-track – are digitized. This focus reduces fraud and waste, assigns the right people at the right time and reduces manual work for a really significant bottom-line impact on loss ratios.

To illustrate what this means, I wrote three blog posts over on the company blog about real customer stories.

Check them out. You can also watch a webinar on claims automation that our CTO, Ryan Trollip, recently gave with our partner Red Hat.

If you’re struggling with your claims loss ratio, drop us a line.

An old friend, Guilhem Molines, has been working with some colleagues on a new book – Intelligent Automation with IBM Cloud Pak for Business Automation – and I got a chance to read it recently. The book covers all the components of IBM’s Cloud Pak for Business Automation. Decision Management Solutions is an IBM Business Partner and we regularly help clients adopt and use Cloud Pak for Business Automation. IBM Cloud Pak for Business Automation is an expansive set of software components used by large enterprises to automate their day-to-day operations. Here’s how IBM describes it:

IBM Cloud Pak for Business Automation is a modular set of integrated software components, built for any hybrid cloud, designed to automate work and accelerate business growth. This end-to-end automation platform helps you analyze workflows, design AI-infused apps with low-code tooling, assign tasks to bots and track performance.

Each component can do a lot, so describing how to use the whole Cloud Pak effectively is a significant challenge – but one that this book meets.

The book itself has three parts.

  • Part 1 has an overview of the Cloud Pak and a brief discussion of each element’s key components.
  • Part 2 discusses a set of use cases and associated best practices – task automation and chatbots with RPA, workflow automation, decision automation, content management, document processing, business applications and workforce insights.
  • Part 3 covers some deployment considerations.

The book begins with a simple but well described scenario showing how the pieces fit together and then drills into each of the core technologies. A series of brisk but thorough overviews of each technology cover key UIs and architectural patterns. These overview chapters include some good tips on approach – unusual for a technical book – that help put things in context and provide some best practices. The book strikes a nice balance between different styles of development – process modeling and process mining are contrasted along with a description of how to use both in conjunction with task mining for instance. Where necessary, as in the content management and document processing chapter, some history is shared to show readers how we got to where we are and put newer capabilities into context.

The use case and best practices drill into 8 topics in more detail. Each has a reasonably detailed walkthrough of configuring and programming the example, with some embedded best practices and observations to help you learn the software. While not every feature is described, and some descriptions are quite cursory, the chapters give a good sense of how functionality could be developed and delivered. Source code for the examples is also available so you can work on them and extend them yourself.

The book concludes with some good notes on the various installation and operation options and topologies for the Cloud Pak and a discussion of CI/CD options.

Despite having multiple authors with their own focus areas, the book is well-leveled, with a similar level of detail on each piece. There’s no way even a relatively long book like this one could cover all the functionality in the Cloud Pak, but the team does a great job of outlining the core functionality, showing you how to develop modern systems using it, and providing a nice set of best practices. Highly recommended.

You can buy the book here


Notes:

The book refers to standards like Business Process Model and Notation (BPMN) and Decision Model and Notation (DMN). I wouldn’t read the specs as a way to find out more about these standards. The OMG groups write the specs for implementers of software products that support the standards – not for those intending to use them to build information systems. If you want to use BPMN and DMN, OMG recommends you buy books or pay for classes. There are many great books on BPMN and I wrote one on DMN with Jan Purchase – Real World Decision Modeling with DMN.

Because of the way IBM organizes its product portfolio, Cloud Pak for Business Automation does not include IBM’s machine learning tools. Any serious attempt to automate business operations today is going to consider how best to develop and integrate machine learning models. The detailed sections of the book do show how you can integrate machine learning with rules-based decisioning through decision models. Overall, though, discussion of machine learning is a little limited because of the focus on the specific functionality available in Cloud Pak for Business Automation.

The folks at CNET posted “Please Get Me a Live Human: Automated Phone Menus Are the Absolute Worst” – just the latest article I have seen on this topic. They make some good points (a standard way to get a human, stop talking about changed menu options, don’t suggest the website, callbacks, etc.) but they fail to address the number one thing: these automated phone menus are just poorly built systems. They almost always have three problems.

  1. They assume everyone is the same.
    The menu is always the same, even though different customers are likely to have different options. For instance, a customer with a home policy but no auto policy doesn’t need a menu option to discuss their auto policy. Build the menu as if the current customer were the only customer.
  2. They prioritize standardization over targeting.
    Related to #1, the focus is on standardizing the menu, not on making it responsive to the current user. Menus might be ordered from most to least likely, but across the whole population rather than for the current customer. This means an option can be buried even when it’s obviously the most likely one for this customer. The choices they have made so far, and what else you know about them, could be used to predict what’s likely on their mind. Target your customers.
  3. They don’t really let you DO anything (any more than most chatbots do).
    All the important things your customers want to do require an approval or calculation – a decision we would say. If a customer wants to return something or get a discount or know the price of something or check on a claim then your system will have to make a decision if it is going to help them. Often the automated phone tree or chatbot can tell the customer what the policy is, but not how that policy is applied to this customer’s concern. An effective automated system needs the power to act on behalf of your customers.

If you own an automated phone system or IVR, you should be using digital decisioning to decide what menu options to present and how to proceed as the call progresses. You should be backing it up with decisioning systems that let these same automated systems (and your chatbot and your call center staff) act on your customers’ behalf by approving and calculating what matters to them in real-time.
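The decisioning need not be sophisticated to fix problems #1 and #2 – it just has to look at the caller. As a sketch, with all the customer attributes (policies held, open claims, overdue payments) invented for illustration:

```python
# Hypothetical sketch: build the IVR menu per caller instead of one fixed menu.

def menu_for(customer: dict) -> list:
    options = []
    # Problem #1: only offer options for products this customer actually holds.
    if "auto" in customer["policies"]:
        options.append("discuss your auto policy")
    if "home" in customer["policies"]:
        options.append("discuss your home policy")
    # Problem #2: promote the likeliest reason for *this* call to the top.
    if customer.get("open_claim"):
        options.insert(0, "check on your claim")
    if customer.get("payment_overdue"):
        options.insert(0, "make a payment")
    return options

# A home-only customer with an open claim hears the claim option first
# and never hears an auto-policy option.
print(menu_for({"policies": ["home"], "open_claim": True}))
```

A real system would replace the hand-written promotion rules with a predictive model of call intent, but even this much logic beats a one-size-fits-all menu.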

Should you make it easier for customer-people to talk to company-people? Yes. Should you make your automated systems smarter? Definitely.

Drop us a line any time to chat.

I was talking to a customer the other day about a particular decisioning problem they have. There’s an operational decision that they take several thousand times a year. Not a transactional one but a pretty high volume one. Sometimes these decisions have a large financial impact but often they have a smaller one. Today the focus is on the instances with a large financial impact with most of the smaller ones being made somewhat abruptly with little analysis. They are working to improve the accuracy of this decision and thinking about how data and analytics can be used to improve it. Naturally enough, they planned to start by looking at the instances of this decision with the highest financial impact.

I made a contrarian suggestion:

Start by modeling and then automating the decisions with the smallest impact

Specifically, I suggested that they build a decision model that would handle most if not all of their small value decisions. This could then be automated, allowing these decisions to be made using a best practice approach (captured in the model) for even their lowest value decisions – the ones that are rushed through today.

So why start with the lowest value decisions? Well, four reasons:

  1. The focus on the lowest impact decisions would make it easier to get approval of the decision model and easier for the business team to get behind automation. This reduces the time to value.
  2. Automation of these decisions would take them off everyone’s list, allowing the staff to focus on the important ones without distraction while knowing that these decisions would not be neglected.
  3. The decision model and the automation would generate insight about how decisions were really made – what really made a difference – and this insight could be used to improve the decision-making approach.
  4. Over time, the business will get more confident in the decision automation allowing pieces of it to be used for more impactful decisions, reducing the effort of making those decisions and improving their accuracy.

This start-simple-and-improve mindset has served us well in many projects. It replaces a “the computer is going to replace me” mindset with one focused on how automation can help human decision makers do their job better and spend more time on higher value activities.

You can have a strategic impact without starting with your most strategic decisions.

I recently bought a Specialized e-bike and it’s great. Every time I ride it I like it more.

However, every time I get an email from Specialized, I get more irritated and less likely to recommend them to someone.

Why? Well, let’s consider this week’s email newsletter. It’s clearly aimed only at e-bike riders and I am sure the marketing team think that means it’s “personalized” and showing “customer excellence” in some way. However, this particular email newsletter focuses on two things – a great new software feature for the bike (which I REALLY want) and the fact that I can get this feature in an Over-The-Air update. Fabulous.

Except MY bike can’t run that feature and MY bike doesn’t support OTA updates. #fail

And Specialized knows this. It knows because I only get the email newsletter because I registered my bike – so the very account they use to get permission to market to me tells them these messages are going to be unpopular.

If they were focused on deciding what should go in each newsletter based on what they know about that customer, they would never have sent me the newsletter. They would have decided which features to highlight based, in part, on my bike and would have encouraged me to go to my dealer to get those features installed. And as I am a recent purchaser, they might have reminded me that it’s a good idea to get a first service/check-up at 100 miles. They would have applied their domain expertise, their business rules, and perhaps some simple analytics about me to make a personalization decision. But they didn’t. They just put me in a bucket of “e-bike owners” and spammed me.
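
A decisions-first version of that newsletter logic might look like the sketch below: content is chosen per customer from what their registered bike actually supports. All the model names, capability fields, and rules here are invented for illustration, not Specialized’s actual data:

```python
# Hypothetical sketch of a decisions-first newsletter: pick content per
# customer based on what their registered product actually supports.

# Assumed product-capability data (invented for illustration)
BIKE_CAPABILITIES = {
    "ebike-model-a": {"ota_updates": True, "supports_new_feature": True},
    "ebike-model-b": {"ota_updates": False, "supports_new_feature": False},
}


def newsletter_items(customer: dict) -> list:
    """Apply simple business rules to choose newsletter content."""
    caps = BIKE_CAPABILITIES[customer["bike_model"]]
    items = []
    if caps["supports_new_feature"] and caps["ota_updates"]:
        items.append("New software feature available via OTA update")
    elif caps["supports_new_feature"]:
        items.append("Visit your dealer to install the new software feature")
    if customer["recent_purchase"] and customer["miles_ridden"] < 100:
        items.append("Reminder: schedule your first service at 100 miles")
    return items


# A recent purchaser whose bike supports neither the feature nor OTA
me = {"bike_model": "ebike-model-b", "miles_ridden": 60, "recent_purchase": True}
print(newsletter_items(me))  # ['Reminder: schedule your first service at 100 miles']
```

Even rules this simple would have avoided the irritating email: the OTA announcement goes only to customers whose bikes can use it, and recent purchasers get the service reminder instead.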

So don’t be like Specialized: deliver customer excellence (and personalization) in every channel. #decisionsfirst.

Learn how in this week’s webinar on Delivering Customer Excellence in a Complex, Multi-Channel World – Tuesday 2pm Eastern. Just 30 minutes. See you there.

Over the years, Decision Management Solutions has helped many leading insurance businesses modernize. We’ve helped them radically improve their claims handling, driving high rates of straight through processing with less fraud and less waste. We’ve helped them improve their top line sales numbers with automated cross-sell/up-sell, Next Best Offer or Next Best Action systems. We’ve worked with their agents and agency management teams to improve the productivity and effectiveness of their agents, both tied and independent. Plus a whole variety of other projects from pricing and underwriting to maturity, prospecting, digital channel support and more.

In every case we’ve helped insurers with their #digitaltransformation by applying business rules, machine learning, predictive analytics and other artificial intelligence technologies using our DecisionsFirstTM approach.

We have our opinion about which of these is the best place to apply digital decisioning technology but I’d really like to know what you think too. Where do you see digital decisioning and digital transformation making the biggest impact on your business? Let us know by voting on our LinkedIn Poll.

Thanks! And, as always, drop me a line if you have questions about how you can succeed with digital decisioning.

Like many of you, I am sure, I am a fan of xkcd. After all, any site that is both humorous and has a wiki to explain WHY it’s humorous (explainxkcd.com) must be good. A recent one struck a chord:

We do a lot of work with companies that have been investing heavily in digitizing their business processes and so the humor behind this one – that much process latency can be explained by one particular task – really resonated!

However, in our experience, the issue is no longer that someone has to copy and paste data from one thing into another thing – this has largely been solved with tools like RPA (Robotic Process Automation) and the prevalence of decent APIs and business process management (BPM) suites. Where latency comes from NOW is mostly decision-making.

A process spends, to riff on the xkcd cartoon, 800ms on a bunch of automated steps to assemble data, then spends many minutes (or hours or days) waiting while a human is asked to make a choice or a decision using that data, and then spends another 200ms pushing the outcome of that decision into several downstream systems.

The moral of this is that you should invest some time thinking about how to automate those decisions, not just wrap automation around them. A digital process or system using digital data is a great foundation. A digital process that can use a digital decision is likely to be a much more effective, more efficient and faster one.
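
The payoff is easy to see in numbers. Sticking with the (illustrative) timings above, replacing the human wait with a call to an automated decision service collapses the end-to-end latency from hours to about a second:

```python
# Illustrative latency sketch: the human decision step dominates an
# otherwise-digital process; automating it removes almost all the wait.
# All timings are made up for illustration.

manual_process = {
    "assemble data (automated)": 0.8,        # seconds
    "wait for human decision": 2 * 60 * 60,  # two hours - often far longer
    "push results downstream (automated)": 0.2,
}

automated_process = {
    "assemble data (automated)": 0.8,
    "call decision service (automated)": 0.05,
    "push results downstream (automated)": 0.2,
}

for name, process in [("manual", manual_process), ("automated", automated_process)]:
    total = sum(process.values())
    print(f"{name}: {total:.2f}s end to end")
```

No amount of tuning the 800ms and 200ms automated steps gets close to the improvement from automating the decision itself.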

We’ve helped dozens of companies automate thousands of decisions to eliminate the latency in their processes. We could help you too! Get in touch.

I am super-excited to announce that an article I have been working on with Michael Ross has just been published on Harvard Business Review – Managing AI Decision-Making Tools

The nature of micro-decisions requires some level of automation, particularly for real-time and higher-volume decisions. Automation is enabled by algorithms (the rules, predictions, constraints, and logic that determine how a micro-decision is made). And these decision-making algorithms are often described as artificial intelligence (AI). The critical question is, how do human managers manage these types of algorithm-powered systems? An autonomous system is conceptually very easy. Imagine a driverless car without a steering wheel. The driver simply tells the car where to go and hopes for the best. But the moment there’s a steering wheel, you have a problem. You must inform the driver when they might want to intervene, how they can intervene, and how much notice you will give them when the need to intervene arises. You must think carefully about the information you will present to the driver to help them make an appropriate intervention.

The core of the article is to discuss the different ways people and automated decision-making can interact – is the human in the loop, on the loop or out of the loop?

We build a lot of decisioning solutions for clients and I’ve been working in this space a long time. Our DecisionsFirstTM approach emphasizes continuous improvement, and how the human managers of the domain interact with the system, to ensure deep and ongoing business enablement. We have found that making choices about the best management options is key to success with automating these kinds of micro decisions and to the use of artificial intelligence (AI) and machine learning (ML) more generally.

Enjoy the article. Drop me a line or connect with me on LinkedIn if you have questions!

It’s been a while since I did a product review on the blog, but I recently caught up with the team at Zoot and thought a blog post was in order.

Zoot, for those of you who don’t know them, deliver capabilities and services for automated decisioning across the customer credit lifecycle. They’ve been at this a while – 31 years and counting – and focus on delivering reliable, scalable and secure transactions in everything from customer acquisition, to fraud detection, credit origination, collections and recovery. They have some very large financial institutions as clients as well as some much smaller ones and a number of innovative fintech types.

Zoot’s customers all run their systems on Zoot infrastructure. Zoot has 5 data centers (2 in the US, 2 in the EU and a new one in Australia) for regional support and redundancy – though each is designed to be resilient independently and is regularly reviewed to make sure it can support 10x the average daily volume.  These data centers run the Zoot solution framework – tools and services supporting a variety of capabilities including data access, user interfaces, decisioning and more.

The core of the Zoot platform is the combination of the WebRules® Live execution service and the WebRules® Builder configuration tools. These cover everything from designing, developing and deploying workflow and user interfaces to decisioning, attributes and source data mapping. Zoot’s focus is on making these tools and services modular, on test-driven development, and on reusability through a capabilities library. The same tools are used by Zoot to develop standard capabilities and custom components for customers and by customers to extend these and develop new capabilities themselves. Most clients begin with pre-built functionality and extend or customize it, though some are starting to use Zoot in a Platform as a Service way, building the whole application from scratch to run on the Zoot infrastructure.

Zoot’s library consists of hundreds of capability-based microservices across 7 broad areas:

  • Access Framework, to function as a client gateway and make it easy to bring real-time data into the environment and manage it.
  • User interface, to define responsive, mobile-friendly UIs that create web-based pages for customer service and other staff.
  • System automation, to handle background and management tasks.
  • Data and Service Acquisition, to integrate third party data into the decisioning from a wide range of providers and internal client systems.
  • Decisioning, to apply rules to the data and make decisions throughout the customer credit lifecycle.
  • Data Management, to manage the data created and tracked through the workflow, store it if necessary and deliver it to the customer’s environment.
  • Extensions, to fulfill the unique needs of clients, such as machine learning and AI models.

One of the key differentiators for the Zoot platform is the enormous range of data sources they provide components for. Any data source a customer might reasonably want to access to support their decisioning is integrated, allowing data from that source to be rapidly pulled into decisions without coding. Even when clients come up with new ones, Zoot says they can quickly and easily add new sources to the library.

WebRules Builder is a single environment for configuring and building all kinds of components. A set of dockable views can be used to manage the layout and users can flag specific components as favorites, use search to find things across the repository and navigate between elements that reference each other.

A flow chart metaphor is widely used to define the flow of data and logic. Components can be easily reused as sub-flows and the user can drill down into more detail when needed. Data is managed throughout the flows and simple point and click mapping makes it easy to show how external data is mapped into the decisioning. Flows can be wrapped around inbound adaptors to handle errors, alternative data sources etc. Libraries exist, and custom versions can be created with a collection of fields, flows, reports and other elements. These can be imported into specific projects, making the collection of assets available in a single action.

Within these flows the user can specify logic as either rules or decision tables. Decision tables are increasingly common in Zoot’s customers, as in ours. A partner region allows for external code to be integrated into the client’s processes – for instance a machine learning model or external decisioning capability. An increasing number of clients are using this to integrate machine learning with their decisioning – though some of this is parallel running to see how these more opaque models compare to the established approaches already approved by regulators. Debugging tools show the path through the flows for a transaction and all the data about the flow of transactions – which branch was taken, which rules fired – can be recorded for later analysis outside the platform.
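
Decision tables like those mentioned above are easy to picture: each row pairs conditions with an outcome, and under a first-hit policy the first matching row fires. The sketch below is a generic illustration of the idea, not Zoot’s implementation, and the credit rules in it are invented:

```python
# Generic decision-table sketch (not Zoot's implementation): rows are
# (condition, outcome) pairs evaluated top to bottom against a transaction.

decision_table = [
    # (predicate over the transaction, outcome) - invented example rules
    (lambda t: t["credit_score"] < 580, "decline"),
    (lambda t: t["credit_score"] < 670 and t["income"] < 40_000, "refer"),
    (lambda t: True, "approve"),  # default row
]


def evaluate(transaction: dict) -> str:
    """First-hit policy: return the outcome of the first matching row."""
    for condition, outcome in decision_table:
        if condition(transaction):
            return outcome
    raise ValueError("no matching row")


print(evaluate({"credit_score": 550, "income": 80_000}))  # decline
print(evaluate({"credit_score": 650, "income": 35_000}))  # refer
print(evaluate({"credit_score": 720, "income": 35_000}))  # approve
```

Part of the appeal for business users is exactly this tabular shape: the conditions and outcomes can be read, reviewed and changed row by row without touching procedural code.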

Sample data for testing can be easily brought in, and Zoot provides sample data from their third-party data interfaces as well to streamline this process. APIs and interfaces can be tested inside the design tools, with data entered being run through the logic and responses displayed in situ. Unit tests can be defined, managed and executed in the environment. Clients can handle their production data entirely outside Zoot, passing it in for processing, but a significant minority of clients use database capabilities to store data temporarily on the Zoot infrastructure. System scripts are used to make sure that all the data ends up back in the client’s systems of record and data lake when processing is complete.

Zoot occupies an interesting middle ground among decisioning providers. Everything is hosted on their infrastructure – clients don’t have the option to run it on their own infrastructure – and Zoot has invested heavily in providing a robust infrastructure to support this. Yet Zoot is not trying to “own” the customer data or do multi-customer analysis, as many SaaS and PaaS companies are – their customers own their own data. Indeed, Zoot makes a point of noting that all the data gets pushed out to the client nightly or weekly. This gives clients a managed infrastructure without losing control of their data, an interesting combination for many I suspect.

More on the Zoot platform at https://zootsolutions.com

Today’s businesses are heavily digitized and must be structured so that, as customers move between human and digital touchpoints, the experience is handled seamlessly and effectively across digital channels. As a result, the line between business and technology is gradually blurring. In addition, most organizations still face many challenges with the agility, flexibility, and customer-driven focus of business systems and processes needed to support new digital business models.

By fusing business and technology in the digital age, the automation of digitized decision-making, or Digital Decisioning, goes beyond a purely data-driven approach to combine data with predictive analytics and support for AI applications. The book describes the approach with examples that make it easy for business professionals to understand.

As someone who has worked in the field of rule-based AI for many years, I can say that James Taylor’s approach combines easy-to-understand rule-based AI, integration with analytic methods, the sophistication added by machine learning AI, and a continuous improvement loop. Please read it and use it as a reference book for system planning, development, and use.

eBook available at: http://www.contendo.jp/dd

Download the book summary flyer in Japanese here.

The single most critical and most neglected aspect of artificial intelligence (AI) projects is problem definition. All too often, teams start with data, determine what kind of machine learning (ML)/AI insights they can generate, and then go off to find someone in the business who can benefit from it. The result? Lots of successful AI pilots that can’t make it into production and never deliver viable, positive business outcomes.

It’s estimated that 97% of enterprises have invested in AI, but is it really serving the business?1

Gartner’s 2019 CIO survey points to the fact that, although 86% of respondents indicate that they either have AI on their radar or have initiated projects, only 4% of projects have actually been deployed.2

Susan Athey, Economics of Technology Professor at Stanford Graduate School of Business, calls out the gap between ambition and execution when it comes to AI projects: “Only one in 20 companies has extensively incorporated AI in offerings or processes. Across all organizations, only 14% of respondents believe that AI is currently having a large effect on their organization’s offerings.”3

So what’s the problem? For one thing, many AI projects are technology-led, focusing on algorithms or tools that teams are familiar with. Others start with whatever data the team happens to have available. But data is frequently siloed and difficult to access, so is it the right and relevant data? While it’s true that data, tools, and algorithms are vital for the success of AI projects, putting the focus on the technical aspects is risky. Combining readily available data with known tools and algorithms is certainly likely to produce an AI-driven result more quickly—but there’s no guarantee it will have business value.

There’s a better way. Though it may sound counter-intuitive, AI teams need to work backwards to get their projects into business production. In other words, they need to pinpoint where they want to end up and then figure out how to get there. For a more practical and rewarding payoff, they need to focus on decision-making and on what a better decision looks like. By collaborating with business units to define the decision-making that needs to be improved, identifying the kinds of ML/AI that would really help, and only then going to look for data, AI project teams will drive true business value.

So how does your team step out of its comfort zone and learn to work backwards? Aakanksha Joshi, Advisory Data Scientist at IBM, and James Taylor, CEO of Decision Management Solutions, will show you how to achieve success with your next AI project. They will be offering five lightning rounds at the IBM Digital Developer Conference, where you’ll gain data and AI skills from IBM experts, partners, and the worldwide community. You’ll have the opportunity to participate in hands-on experiences, hear IBM client stories, learn best practices, and more.

Data & AI 2021

June 8, 2021 | 24-hour conference begins: 10:00 am AEST

Free and on demand

Register today

We look forward to seeing you there!

1 “Building the AI-Powered Organization,” Harvard Business Review, July 2019
2 Gartner 2019 CIO Survey: “CIOs Have Awoken to the Importance of AI”
3 MIT Sloan Management Review, September 6, 2017