
It’s that time again – time to take the Rexer Analytics Data Science Survey

Rexer Analytics has been conducting this survey since 2007! Each survey explores the analytic behaviors, views, and preferences of data scientists and analytic professionals. This year Karl is working with Eric Siegel and Machine Learning Week to design, promote, and analyze the survey.

Summary reports from previous surveys are available FREE to download from the Rexer Analytics website – and are fascinating! Karl Rexer and Eric Siegel will present highlights of the 2023 survey at the Machine Learning Week conference in Las Vegas in June 2023 (I’m speaking and teaching there too) and a full summary report will be available for download from the Rexer Analytics website later in 2023.

It’s completely confidential and not being conducted for any vendor. The survey should take approximately 10 minutes to complete. So go take it…

https://s-9ed913-i.sgizmo.com/s3/i-Paem5JH2y2xvCPNGxn-5888579/?sguid=Paem5JH2y2xvCPNGxn

I got my hands on a copy of Krishna Pera’s new book, Big Data for Big Decisions, recently. I met Krishna several years ago when he published some articles on being decision-driven not data-driven and on why it’s essential to prioritize decisions for your analytic efforts. He’d found some of my articles on being decision-centric and we connected. Now, one pandemic later, I’m delighted to be able to review the book that resulted from his experience with this topic.

The book’s subtitle is “Building a Data-Driven Organization” and it covers how to begin the journey, how to focus on the right (“big”) decisions, the challenges in getting value from analytics, data strategy and much more. His focus throughout is on building a robust roadmap and an enterprise-level plan. Crucially, he wants those establishing data-driven organizations to focus on the decisions that add the most value to the organization. This specific focus on decisions and on selecting the right decisions is key to the book and, indeed, to succeeding at becoming data-driven.

He encourages an assessment of your current state and the development of a coherent roadmap. A new operating model is going to be required to become insights-driven and understanding this new model will give you a sense of where to make strategic investments. He focuses immediately on explicitly assessing and improving decision-making (rather than asserting that better data will lead inexorably to better decisions as so many do). Furthermore, he emphasizes tying improvements in this decision-making to concrete business value.

His prescription for creating the organization begins, as it should, with a discussion of decisions and the importance of beginning “with the decision in mind” when considering data and analytics. He dives right in, pointing out that most organizations lack any clear understanding of their decisions, except perhaps for purchasing and investment decisions. They don’t really know what their decisions are, who has what role in those decisions, how those decisions are made or how they could be improved. To address this, he recommends an immediate investment in understanding these decisions by modeling them (ideally using the Decision Model and Notation or DMN standard).

His chapter on finding the “big decisions” of the title is particularly worthwhile. He provides some good insights on how to prioritize decisions – comparing high impact but rare decisions with those that offer low value per decision but very high volumes for instance. The key, he says, is to find a core set of decisions that offer your organization the most value. If you can make those decisions data-driven, you’ll realize most of the analytic value available to you.
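
To make the prioritization arithmetic concrete, here is a minimal sketch in Python with entirely invented decisions and numbers (not examples from the book). Annual value at stake is value per decision times annual volume, which is how a low-value, high-volume decision can outrank a rare, high-impact one:

```python
# Hypothetical decision inventory -- names and numbers are invented
# for illustration, not taken from the book.
decisions = [
    {"name": "Approve supplier contract", "value_per_decision": 50_000, "annual_volume": 40},
    {"name": "Price insurance renewal",   "value_per_decision": 25,     "annual_volume": 400_000},
    {"name": "Flag claim for review",     "value_per_decision": 120,    "annual_volume": 75_000},
]

for d in decisions:
    # Total annual value at stake = value per decision x annual volume.
    d["annual_value"] = d["value_per_decision"] * d["annual_volume"]

# The "big decisions" are the ones with the most total value at stake.
for d in sorted(decisions, key=lambda d: d["annual_value"], reverse=True):
    print(f'{d["name"]}: ${d["annual_value"]:,}')
# The high-volume renewal decision ($10M at stake) outranks the rare,
# high-impact contract decision ($2M at stake).
```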

He goes on to discuss the potentially elusive value of analytics in decision making. To address this, he encourages a focus on incremental improvement to known problems rather than pure research-oriented analytic projects. Then you can prioritize decisions based on both the potential for analytic improvement and the likely cost and complexity of data-driven improvement.

With the value of analytic improvements in decisions clearly identified, he transitions to discussing data challenges and an IT strategy to support data-driven decision-making. There are many elements to such a strategy, and he does a nice job of outlining how these elements come together to support a data-driven organization. I particularly liked the discussion of an information supply chain and his ideas around mapping system and IT maturity to the data and analytic needs identified. He wraps up with solid chapters on data strategy, data governance and data-driven marketing as an example.

One of the great things about Krishna’s book is that he cites a huge number of books, papers and articles, giving you a rich set of information to drill into. He also leverages established ways of documenting business, data and IT plans and strategy. He shows how these established techniques can be used to benchmark, analyze and re-design an organization to become data-driven and apply (big) data and analytics to critical decisions.

If you are trying to make your organization data-driven and striving to use analytics, machine learning and AI to improve your business decisions, this book should be on your bookshelf.

Here at Decision Management Solutions, we’re helping our clients become truly data-driven through decision automation. Automating and improving the most common decisions in your organization creates immediate business value because these operational decisions literally run the business. While some of our clients have already identified their best use cases for decision automation, we often begin with an assessment that helps clients prioritize their investments in decision automation and data infrastructure. Krishna outlines an approach very similar to the one we use – so I am confident it works!

Buy Krishna’s book to help you put a plan together and get in touch if you need some help!

We do a lot of work in insurance and we see many companies spending heavily on automating their claims process. Their intent is to improve their loss ratio – both by reducing losses due to fraud or bad claims and by reducing processing costs. But it often doesn’t make much difference, because they are focused on handling a claim efficiently, not effectively. Digitizing claims documentation and the claims workflow might help pay a claim more cheaply, but it does little to ensure that the right claims are paid or that the right approach is taken. And if most claims still need to be manually reviewed, it won’t do that much to reduce processing costs either.

In contrast, our customers invest in digital decisioning to make sure that their claims handling decisions – which claims to pay, when to investigate, when to fast-track – are digitized. This focus reduces fraud and waste, assigns the right people at the right time, and reduces manual work for a really significant bottom-line impact on loss ratios.

To illustrate what this means, I wrote three blog posts over on the company blog about real customer stories:

Check them out. You can also watch a webinar our CTO, Ryan Trollip, recently gave on claims automation with our partner Red Hat.

If you’re struggling with your claims loss ratio, drop us a line.

An old friend, Guilhem Molines, has been working with some colleagues on a new book – Intelligent Automation with IBM Cloud Pak for Business Automation – and I got a chance to read it recently. The book covers all the components of IBM’s Cloud Pak for Business Automation. Decision Management Solutions is an IBM Business Partner and we regularly help clients adopt and use Cloud Pak for Business Automation. IBM Cloud Pak for Business Automation is an expansive set of software components used by large enterprises to automate their day-to-day operations. Here’s how IBM describes it:

IBM Cloud Pak for Business Automation is a modular set of integrated software components, built for any hybrid cloud, designed to automate work and accelerate business growth. This end-to-end automation platform helps you analyze workflows, design AI-infused apps with low-code tooling, assign tasks to bots and track performance.

Each component can do a lot, so describing how to use the whole Cloud Pak effectively is a significant challenge – but one that this book meets.

The book itself has three parts.

  • Part 1 has an overview of the Cloud Pak and a brief discussion of each element’s key components.
  • Part 2 discusses a set of use cases and associated best practices – task automation and chatbots with RPA, workflow automation, decision automation, content management, document processing, business applications and workforce insights.
  • Part 3 covers some deployment considerations.

The book begins with a simple but well described scenario showing how the pieces fit together and then drills into each of the core technologies. A series of brisk but thorough overviews of each technology cover key UIs and architectural patterns. These overview chapters include some good tips on approach – unusual for a technical book – that help put things in context and provide some best practices. The book strikes a nice balance between different styles of development – process modeling and process mining are contrasted along with a description of how to use both in conjunction with task mining for instance. Where necessary, as in the content management and document processing chapter, some history is shared to show readers how we got to where we are and put newer capabilities into context.

The use case and best practices drill into 8 topics in more detail. Each has a reasonably detailed walkthrough of configuring and programming the example, with some embedded best practices and observations to help you learn the software. While not every feature is described, and some descriptions are quite cursory, the chapters give a good sense of how functionality could be developed and delivered. Source code for the examples is also available so you can work on them and extend them yourself.

The book concludes with some good notes on the various installation and operation options and topologies for the Cloud Pak, and a discussion of CI/CD options.

Despite having multiple authors with their own focus areas, the book is well-leveled, with a similar level of detail on each piece. There’s no way even a relatively long book like this one could cover all the functionality in the Cloud Pak, but the team does a great job of outlining the core functionality, showing you how to develop modern systems using it, and providing a nice set of best practices. Highly recommended.

You can buy the book here


Notes:

The book refers to standards like Business Process Model and Notation (BPMN) and Decision Model and Notation (DMN). I wouldn’t read the specs as a way to find out more about these standards. OMG working groups write the specs for implementers of software products that support the standards – not for those intending to use the standards to build information systems. If you want to use BPMN and DMN, OMG recommends you buy books or pay for classes. There are many great books on BPMN and I wrote one on DMN with Jan Purchase – Real World Decision Modeling with DMN.

Because of the way IBM organizes its product portfolio, Cloud Pak for Business Automation does not include IBM’s machine learning tools. Any serious attempt to automate business operations today is going to consider how best to develop and integrate machine learning models. The detailed sections of the book do show how you can integrate machine learning with rules-based decisioning through decision models. Overall, though, discussion of machine learning is a little limited because of the focus on the specific functionality available in Cloud Pak for Business Automation.

The folks at CNET posted “Please Get Me a Live Human: Automated Phone Menus Are the Absolute Worst” – just the latest article I have seen on this topic. They make some good points (standard way to get a human, stop talking about changed menu options, don’t suggest the website, call backs etc) but they fail to address the number one thing. These automated phone menus are just poorly built systems. They almost always have three problems.

  1. They assume everyone is the same.
    The menu is always the same, even though different customers are likely to have different options. For instance, a customer with a home policy but no auto policy doesn’t need a menu option to discuss their auto policy. Build the menu as if the current customer is the only customer.
  2. They prioritize standardization over targeting.
    Related to #1, the focus is on standardizing the menu, not on making it responsive to the current user. Menus might be ordered from most to least likely, but this is across the whole population rather than focused on the current customer. This means an option can be buried even when it’s obviously the most likely one for this customer. The choices they have made so far, and what else you know about them, could be used to predict what’s likely on their mind. Target your customers (see the sketch after this list).
  3. They don’t really let you DO anything (any more than most chatbots do).
    All the important things your customers want to do require an approval or calculation – a decision we would say. If a customer wants to return something or get a discount or know the price of something or check on a claim then your system will have to make a decision if it is going to help them. Often the automated phone tree or chatbot can tell the customer what the policy is, but not how that policy is applied to this customer’s concern. An effective automated system needs the power to act on behalf of your customers.
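
To make points 1 and 2 concrete, here is a minimal sketch with a hypothetical caller profile and invented likelihood scores, building the menu for the current caller rather than for the average caller:

```python
# Hypothetical caller profile -- the fields are invented for illustration.
caller = {"has_home_policy": True, "has_auto_policy": False, "open_claim": True}

# Candidate options: an eligibility rule plus an invented likelihood score
# for THIS caller (in practice this could come from a predictive model over
# the caller's history and the choices they have made so far).
options = [
    ("check on your claim",      lambda c: c["open_claim"],      0.7),
    ("discuss your auto policy", lambda c: c["has_auto_policy"], 0.4),
    ("discuss your home policy", lambda c: c["has_home_policy"], 0.2),
    ("make a payment",           lambda c: True,                 0.1),
]

# Point 1: drop options that don't apply to this caller at all.
# Point 2: order what's left by how likely THIS caller is to want it.
menu = sorted((o for o in options if o[1](caller)), key=lambda o: o[2], reverse=True)
for i, (label, _rule, _score) in enumerate(menu, start=1):
    print(f"Press {i} to {label}")
```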

If you own an automated phone system or IVR, you should be using digital decisioning to decide what menu options to present and how to proceed with the call as it progresses. You should be backing it up with decisioning systems that let these same automated systems (and your chatbot and your call center staff) act on your customers’ behalf by approving and calculating what matters to them in real time.

Should you make it easier for customer-people to talk to company-people? Yes. Should you make your automated systems smarter? Definitely.

Drop us a line any time to chat.

I was talking to a customer the other day about a particular decisioning problem they have. There’s an operational decision that they take several thousand times a year. Not a transactional one, but a pretty high volume one. Sometimes these decisions have a large financial impact but often they have a smaller one. Today the focus is on the instances with a large financial impact; most of the smaller ones are decided somewhat abruptly, with little analysis. They are working to improve the accuracy of this decision and thinking about how data and analytics can be used to improve it. Naturally enough, they planned to start by looking at the instances of this decision with the highest financial impact.

I made a contrarian suggestion:

Start by modeling and then automating the decisions with the smallest impact

Specifically, I suggested that they build a decision model that would handle most if not all of their small value decisions. This could then be automated, allowing these decisions to be made using a best practice approach (captured in the model) for even their lowest value decisions – the ones that are rushed through today.
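
Here is a minimal sketch of the routing this implies, with an invented threshold and invented rules standing in for the real decision model:

```python
# Invented threshold and rules for illustration -- the real decision model
# would be built with the business team and captured in a standard like DMN.
SMALL_VALUE_LIMIT = 5_000  # instances below this get the automated treatment

def decide_small_value(case: dict) -> str:
    """Best-practice logic for low-impact instances, captured from the experts."""
    if case["risk_score"] > 0.8:
        return "refer"  # even small cases with high risk get a human look
    if case["amount"] <= 1_000 or case["history_clean"]:
        return "approve"
    return "refer"

def route(case: dict) -> str:
    if case["amount"] >= SMALL_VALUE_LIMIT:
        return "manual review"  # high-impact instances keep human attention
    return decide_small_value(case)

print(route({"amount": 800,    "risk_score": 0.1, "history_clean": True}))   # approve
print(route({"amount": 3_500,  "risk_score": 0.9, "history_clean": True}))   # refer
print(route({"amount": 50_000, "risk_score": 0.2, "history_clean": True}))   # manual review
```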

So why start with the lowest value decisions? Well, four reasons:

  1. The focus on the lowest impact decisions would make it easier to get approval of the decision model and easier for the business team to get behind automation. This reduces the time to value.
  2. Automation of these decisions would take them off everyone’s list, allowing the staff to focus on the important ones without distraction while knowing that these decisions would not be neglected.
  3. The decision model and the automation would generate insight about how decisions were really made – what really made a difference – and this insight could be used to improve the decision-making approach.
  4. Over time, the business will get more confident in the decision automation allowing pieces of it to be used for more impactful decisions, reducing the effort of making those decisions and improving their accuracy.

This start-simple-and-improve mindset has served us well in many projects. It replaces a “the computer is going to replace me” mindset with one focused on how automation can help human decision makers do their job better and spend more time on higher value activities.

You can have a strategic impact without starting with your most strategic decisions.

I recently bought a Specialized e-bike and it’s great. Every time I ride it I like it more.

However, every time I get an email from Specialized, I get more irritated and less likely to recommend them to someone.

Why? Well, let’s consider this week’s email newsletter. It’s clearly aimed only at e-bike riders and I am sure the marketing team think that means it’s “personalized” and showing “customer excellence” in some way. However, this particular email newsletter focuses on two things – a great new software feature for the bike (which I REALLY want) and the fact that I can get this feature in an Over-The-Air update. Fabulous.

Except MY bike can’t run that feature and MY bike doesn’t support OTA updates. #fail

And Specialized knows this. I only get the email newsletter because I registered my bike, so the very account they use to get permission to market to me also tells them these messages are going to be unpopular.

If they were focused on deciding what should go in each newsletter based on what they know about that customer, they would never have sent me this one. They would have decided which features to highlight based, in part, on my bike, and would have encouraged me to go to my dealer to get those features installed. And as I am a recent purchaser, they might have reminded me that it’s a good idea to get a first service/check-up at 100 miles. They would have applied their domain expertise, their business rules, and perhaps some simple analytics about me to make a personalization decision. But they didn’t. They just put me in a bucket of “e-bike owners” and spammed me.
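
What would deciding the newsletter content actually look like? Here is a minimal sketch, with hypothetical registration fields standing in for whatever Specialized really stores:

```python
# Hypothetical registration record -- the fields are invented for illustration.
customer = {
    "bike_supports_new_feature": False,
    "bike_supports_ota": False,
    "miles_since_purchase": 60,
    "had_first_service": False,
}

def newsletter_blocks(c: dict) -> list[str]:
    """Pick content blocks this customer can actually act on."""
    blocks = []
    if c["bike_supports_new_feature"]:
        blocks.append("New software feature announcement")
        blocks.append("Get it over-the-air" if c["bike_supports_ota"]
                      else "Visit your dealer to install it")
    if not c["had_first_service"] and c["miles_since_purchase"] < 100:
        blocks.append("Reminder: book your first 100-mile service/check-up")
    return blocks or ["General riding tips"]  # never an irrelevant pitch

print(newsletter_blocks(customer))
# ['Reminder: book your first 100-mile service/check-up'] -- no OTA spam
```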

So don’t be like Specialized: deliver customer excellence (and personalization) in every channel. #decisionsfirst.

Learn how in this week’s webinar on Delivering Customer Excellence in a Complex, Multi-Channel World – Tuesday 2pm Eastern. Just 30 minutes. See you there.

Over the years, Decision Management Solutions has helped many leading insurance businesses modernize. We’ve helped them radically improve their claims handling, driving high rates of straight through processing with less fraud and less waste. We’ve helped them improve their top line sales numbers with automated cross-sell/up-sell, Next Best Offer or Next Best Action systems. We’ve worked with their agents and agency management teams to improve the productivity and effectiveness of their agents, both tied and independent. Plus a whole variety of other projects from pricing and underwriting to maturity, prospecting, digital channel support and more.

In every case we’ve helped insurers with their #digitaltransformation by applying business rules, machine learning, predictive analytics and other artificial intelligence technologies using our DecisionsFirst™ approach.

We have our opinion about which of these is the best place to apply digital decisioning technology but I’d really like to know what you think too. Where do you see digital decisioning and digital transformation making the biggest impact on your business? Let us know by voting on our LinkedIn Poll.

Thanks! And, as always, drop me a line if you have questions about how you can succeed with digital decisioning.

Like many of you, I am sure, I am a fan of xkcd. After all, any site that is both humorous and has a wiki to explain WHY it’s humorous (explainxkcd.com) must be good. A recent one struck a chord:

We do a lot of work with companies that have been investing heavily in digitizing their business processes and so the humor behind this one – that much process latency can be explained by one particular task – really resonated!

However, in our experience, the issue is no longer that someone has to copy and paste data from one thing into another thing – this has largely been solved with tools like RPA (Robotic Process Automation) and the prevalence of decent APIs and business process management (BPM) suites. Where latency comes from NOW is mostly decision-making.

A process spends, to riff on the xkcd cartoon, 800ms on a bunch of automated steps to assemble data, then spends many minutes (or hours or days) waiting while a human is asked to make a choice or a decision using that data, and then spends another 200ms pushing the outcome of that decision into several downstream systems.

The moral of this is that you should invest some time thinking about how to automate those decisions, not just wrap automation around them. A digital process or system using digital data is a great foundation. A digital process that can use a digital decision is likely to be a much more effective, more efficient and faster one.

We’ve helped dozens of companies automate thousands of decisions to eliminate the latency in their processes. We could help you too! Get in touch.

I am super-excited to announce that an article I have been working on with Michael Ross has just been published by Harvard Business Review – Managing AI Decision-Making Tools.

The nature of micro-decisions requires some level of automation, particularly for real-time and higher-volume decisions. Automation is enabled by algorithms (the rules, predictions, constraints, and logic that determine how a micro-decision is made), and these decision-making algorithms are often described as artificial intelligence (AI). The critical question is, how do human managers manage these types of algorithm-powered systems? An autonomous system is conceptually very easy. Imagine a driverless car without a steering wheel. The driver simply tells the car where to go and hopes for the best. But the moment there’s a steering wheel, you have a problem. You must inform the driver when they might want to intervene, how they can intervene, and how much notice you will give them when the need to intervene arises. You must think carefully about the information you will present to the driver to help them make an appropriate intervention.

The core of the article is to discuss the different ways people and automated decision-making can interact – is the human in the loop, on the loop or out of the loop?

We build a lot of decisioning solutions for clients and I’ve been working in this space a long time. Our DecisionsFirst™ approach emphasizes continuous improvement, and how the human managers of the domain interact with the system, to ensure deep and ongoing business enablement. We have found that making choices about the best management options is key to success with automating these kinds of micro-decisions and to the use of artificial intelligence (AI) and machine learning (ML) more generally.

Enjoy the article. Drop me a line or connect with me on LinkedIn if you have questions!

It’s been a while since I did a product review on the blog, but I recently caught up with the team at Zoot and thought a blog post was in order.

Zoot, for those of you who don’t know them, deliver capabilities and services for automated decisioning across the customer credit lifecycle. They’ve been at this a while – 31 years and counting – and focus on delivering reliable, scalable and secure transactions in everything from customer acquisition to fraud detection, credit origination, collections and recovery. They have some very large financial institutions as clients as well as some much smaller ones and a number of innovative fintech types.

Zoot’s customers all run their systems on Zoot infrastructure. Zoot has 5 data centers (2 in the US, 2 in the EU and a new one in Australia) for regional support and redundancy – though each is designed to be resilient independently and is regularly reviewed to make sure it can support 10x the average daily volume. These data centers run the Zoot solution framework – tools and services supporting a variety of capabilities including data access, user interfaces, decisioning and more.

The core of the Zoot platform is the combination of the WebRules® Live execution service and the WebRules® Builder configuration tools. These cover everything from designing, developing and deploying workflow and user interfaces to decisioning, attributes and source data mapping. Zoot’s focus is on making these tools and services modular, on test-driven development, and on reusability through a capabilities library. The same tools are used by Zoot to develop standard capabilities and custom components for customers and by customers to extend these and develop new capabilities themselves. Most clients begin with pre-built functionality and extend or customize it, though some are starting to use Zoot in a Platform as a Service way, building the whole application from scratch to run on the Zoot infrastructure.

Zoot’s library consists of hundreds of capability-based microservices across 7 broad areas:

  • Access Framework, to function as a client gateway, making it easy to bring real-time data into the environment and manage it.
  • User interface, to define responsive, mobile-friendly UIs that create web-based pages for customer service and other staff.
  • System automation, to handle background and management tasks.
  • Data and Service Acquisition, to integrate third party data into the decisioning from a wide range of providers and internal client systems.
  • Decisioning, to apply rules to the data and make decisions throughout the customer credit lifecycle.
  • Data Management, to manage the data created and tracked through the workflow, store it if necessary and deliver it to the customer’s environment.
  • Extensions, to fulfill clients’ unique needs, such as machine learning and AI models.

One of the key differentiators for the Zoot platform is the enormous range of data sources they provide components for. Any data source a customer might reasonably want to access to support their decisioning is integrated, allowing data from that source to be rapidly pulled into decisions without coding. Even when clients come up with new ones, Zoot says they can quickly and easily add new sources to the library.

WebRules Builder is a single environment for configuring and building all kinds of components. A set of dockable views can be used to manage the layout and users can flag specific components as favorites, use search to find things across the repository and navigate between elements that reference each other.

A flow chart metaphor is widely used to define the flow of data and logic. Components can be easily reused as sub-flows and the user can drill down into more detail when needed. Data is managed throughout the flows and simple point and click mapping makes it easy to show how external data is mapped into the decisioning. Flows can be wrapped around inbound adaptors to handle errors, alternative data sources etc. Libraries exist, and custom versions can be created with a collection of fields, flows, reports and other elements. These can be imported into specific projects, making the collection of assets available in a single action.

Within these flows the user can specify logic as either rules or decision tables. Decision tables are increasingly common in Zoot’s customers, as in ours. A partner region allows for external code to be integrated into the client’s processes – for instance a machine learning model or external decisioning capability. An increasing number of clients are using this to integrate machine learning with their decisioning – though some of this is parallel running to see how these more opaque models compare to the established approaches already approved by regulators. Debugging tools show the path through the flows for a transaction and all the data about the flow of transactions – which branch was taken, which rules fired – can be recorded for later analysis outside the platform.
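
As a generic illustration of the decision-table style (a sketch of the idea only, not Zoot’s WebRules syntax), a table is a set of condition rows with outcomes, evaluated here with a first-hit policy:

```python
# A generic first-hit decision table sketched in Python for illustration --
# not Zoot's actual tooling. Each row pairs a condition with an outcome.
credit_line_table = [
    (lambda c: c["score"] < 580,                    "decline"),
    (lambda c: c["score"] < 660 and c["dti"] > 0.4, "refer"),
    (lambda c: c["dti"] > 0.5,                      "refer"),
    (lambda c: True,                                "approve"),  # default row
]

def evaluate(table, case):
    for condition, outcome in table:
        if condition(case):  # first-hit policy: the first matching row wins
            return outcome

print(evaluate(credit_line_table, {"score": 700, "dti": 0.30}))  # approve
print(evaluate(credit_line_table, {"score": 640, "dti": 0.45}))  # refer
```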

Sample data for testing can be easily brought in, and Zoot also provides sample data from their third-party data interfaces to streamline this process. APIs and interfaces can be tested inside the design tools, with data entered being run through the logic and responses displayed in situ. Unit tests can be defined, managed and executed in the environment. Clients can handle their production data entirely outside Zoot, passing it in for processing, but a significant minority of clients use database capabilities to store data temporarily on the Zoot infrastructure. System scripts are used to make sure that all the data ends up back in the client’s systems of record and data lake when processing is complete.

Zoot occupies an interesting middle ground among decisioning providers. Everything is hosted on their infrastructure – clients don’t have the option to run it on their own – and Zoot has invested heavily in making that infrastructure robust. Yet Zoot is not trying to “own” the customer data or do multi-customer analysis, as many SaaS and PaaS companies are – their customers own their own data. Indeed, Zoot makes a point of noting that all the data gets pushed out to the client nightly or weekly. This gives clients a managed infrastructure without losing control of their data, an interesting combination for many, I suspect.

More on the Zoot platform at https://zootsolutions.com

Today’s businesses are heavily digitized and must be structured so that customer interactions flow seamlessly and effectively across human and digital touchpoints. As a result, the line between business and technology is gradually blurring. In addition, most organizations still face many challenges with the agility, flexibility, and customer-driven focus of the business systems and processes needed to support new digital business models.

Digital Decisioning – the automation of digitized decision-making – fuses business and technology in the digital age, going beyond being merely data-driven to integrate data with predictive analytics and support for AI applications. The book describes the approach with examples that business professionals can easily understand.

As someone who has worked in the field of rule-based AI for many years, I can say that James Taylor’s approach combines easy-to-understand rule-based AI, integration with analytical methods, the sophistication added by machine learning, and a continuous improvement loop. Please read it and use it as a reference book for system planning, development, and use.

eBook available at: http://www.contendo.jp/dd

Download the book summary flyer in Japanese here.

The single most critical and most neglected aspect of artificial intelligence (AI) projects is problem definition. All too often, teams start with data, determine what kind of machine learning (ML)/AI insights they can generate, and then go off to find someone in the business who can benefit from it. The result? Lots of successful AI pilots that never make it into production and never deliver positive business outcomes.

It’s estimated that 97% of enterprises have invested in AI, but is it really serving the business?1

Gartner’s 2019 CIO survey points to the fact that, although 86% of respondents indicate that they either have AI on their radar or have initiated projects, only 4% of projects have actually been deployed.2

Susan Athey, Economics of Technology Professor at Stanford Graduate School of Business, calls out the gap between ambition and execution when it comes to AI projects: “Only one in 20 companies has extensively incorporated AI in offerings or processes. Across all organizations, only 14% of respondents believe that AI is currently having a large effect on their organization’s offerings.”3

So what’s the problem? For one thing, many AI projects are technology-led, focusing on algorithms or tools that teams are familiar with. Others start with whatever data the team happens to have available. But data is frequently siloed and difficult to access, so is it the right and relevant data? While it’s true that data, tools, and algorithms are vital for the success of AI projects, putting the focus on the technical aspects is risky. Combining readily available data with known tools and algorithms is certainly likely to produce an AI-driven result more quickly—but there’s no guarantee it will have business value.

There’s a better way. Though it may sound counter-intuitive, AI teams need to work backwards to get their projects into business production. In other words, they need to pinpoint where they want to end up and then figure out how to get there. For a more practical and rewarding payoff, they need to focus on decision-making and on what a better decision looks like. By collaborating with business units to define the decision-making that needs to be improved, identifying the kinds of ML/AI that would really help, and only then going to look for data, AI project teams will drive true business value.

So how does your team step out of its comfort zone and learn to work backwards? IBM Advisory Data Scientist Aakanksha Joshi and Decision Management Solutions CEO James Taylor will show you how to achieve success with your next AI project. They will be offering five lightning rounds at the IBM Digital Developer Conference, where you’ll gain data and AI skills from IBM experts, partners, and the worldwide community. You’ll have the opportunity to participate in hands-on experiences, hear IBM client stories, learn best practices, and more.

Data & AI 2021

June 8, 2021 | 24-hour conference begins: 10:00 am AEST

Free and on demand

Register today

We look forward to seeing you there!

1 “Building the AI-Powered Organization,” Harvard Business Review, July–August 2019
2 Gartner, “2019 CIO Survey: CIOs Have Awoken to the Importance of AI”
3 MIT Sloan Management Review, September 6, 2017

Don’t jail your logic in code

Our friends at Data Decisioning forwarded an article from The Register recently – Inflexible prison software says inmates due for release should be kept locked up behind bars.

The basics of the story: a module that calculates release dates for prisoners was clearly implemented exactly the same way the rest of the system was coded.

This is what we would call a “worst practice” because decisions are not the same as the rest of your system and should be implemented differently. Deciding things about customers, about transactions or, in this case, about prisoners is not the same as workflow or data management. Decision-making should be implemented as stateless, side-effect-free decision services using decisioning technology (decision models, business rules). Not code.

Why? Well, decisions are different.

  • They are rich in business understanding (in this example, legal understanding).
  • They are prone to regular change (in this case because the political and social environment is changing).
  • And they are often required to be transparent (in this case to demonstrate that all the correct laws and regulations have been followed).

Code is impossible for business experts to verify, time consuming and expensive to change, and opaque. So writing code to implement decisions is a terrible idea. It’s like taking your business know-how and locking it in jail!
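
To see the difference, here is a minimal sketch with entirely invented, drastically simplified rules (real sentence-calculation law is far more complex): a stateless, side-effect-free decision service whose key rule values sit where a rule author can find and change them, rather than being buried in control flow.

```python
from datetime import date, timedelta

# Invented, drastically simplified rule value for illustration only.
# The point: when the regulation changes, a rule author changes this value --
# no 2,000-hour programming project required.
GOOD_CONDUCT_CREDIT_DAYS_PER_MONTH = 3

def release_date(sentence_start: date, sentence_days: int,
                 months_good_conduct: int) -> date:
    """Stateless, side-effect-free: same inputs, same answer, nothing stored."""
    credit = months_good_conduct * GOOD_CONDUCT_CREDIT_DAYS_PER_MONTH
    return sentence_start + timedelta(days=max(sentence_days - credit, 0))

print(release_date(date(2020, 1, 1), sentence_days=730, months_good_conduct=20))
```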

Back to our example. In this case the code of this module “hasn’t been able to adapt” to new regulations even though it has been nearly two years! And regulations are generally signaled well in advance, so they’ve really had more than two years. If they had recognized that decision-making should be implemented using decisioning technology, this would have been easy to fix.

The most revealing part of the story is the attitude of those who wrote the software. They dispute the use of the term “bug” to describe the system’s “lack of adaptability”. I guess being hard to change is a “feature”? This, of course, is nonsense. No one is writing code for a completely rigid, defined, unchanging world. Lacking necessary adaptability is therefore definitely a bug.

What’s worse is that they knew all along that they would have to make changes! They say

It is not uncommon for new legislation to dictate changes to software systems. This translates to a change request and not a bug in the system.

No, wrong. It’s not a change request, it’s normal operations. New regulations are normal. Therefore change to regulatory rules is normal. Therefore being able to change the rules is normal too, and not a change request. The idea that this kind of change might require 2,000 hours of programming is nonsense. Leaving aside the apparently outrageous rates, this is terrible design and shows either a total disregard for their customer or a total lack of awareness of best practices (or perhaps both).

What this company did was take business domain know-how, business logic, and lock it away in opaque, hard to change, hard to manage code. And then when the inevitable happened and the rules changed, they failed their customer who now has to add workarounds and fixes outside the system – I can see the yellow stickies all over the terminals in my mind’s eye….

So, don’t be like those bozos. Identify the decisions in your systems. Model them so you understand them. And then implement them in a Business Rules Management System so your business partners can make their own changes when the regulations change, the market changes, customers change, their business changes or the world changes. Because it will.

Keep your logic out of code jail.

Bart de Langhe and Stefano Puntoni recently published a great article in the MIT Sloan Management Review called “Leading With Decision-Driven Data Analytics.” In contrast to so much of the literature that focuses first on data, they focus on decision-making. In fact they go so far as to say that:

“Leaders need to make sure that data analytics is decision-driven.”

They describe how focusing on data and on insights can lead companies down blind alleys and is not really a way to become “data-driven” at all. We like to say that companies should do analytics backwards. The authors focus on the purpose of data:

“Instead of finding a purpose for data, find data for a purpose. We call this approach decision-driven data analytics.”

They contrast this decision-centric approach to traditional data-centric ones very nicely:

“Data-driven decision-making anchors on available data. This often leads decision makers to focus on the wrong question. Decision-driven data analytics starts from a proper definition of the decision that needs to be made and the data that is needed to make that decision.”

This has been our experience as well. Companies that focus on the decision they want to improve before doing their analytic work are much more likely to succeed in operationalizing an analytic or data-driven approach. Bart and Stefano are focused on management decisions where we focus on operational ones, but the conclusions are the same.

They identify three steps to success:

  1. Identify the alternative courses of action.
  2. Determine what data is needed in order to rank alternative courses of action.
  3. Select the best course of action.

I would add to this only that building a decision model is a critical step between 1 and 2, especially for decisions you are going to make more than once. Defining a decision as a question and possible (alternative) actions is the right first step. To get from that to the data and analytics you need often involves breaking down the decision into sub-decisions and considering each of them independently. This is what a decision model is particularly good at. Applying their steps 2 and 3 to each sub-decision naturally leads “up” the model to a successful step 3 for the main decision.
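
Here is a minimal sketch of that decomposition, with an invented retention decision standing in for a real model (in the spirit of DMN, not the full notation): each sub-decision gets its own question, alternatives and data needs, and their steps 2 and 3 can then be applied to each one.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    question: str
    alternatives: list[str]
    data_needed: list[str] = field(default_factory=list)
    sub_decisions: list["Decision"] = field(default_factory=list)

# Invented example: a retention decision broken into sub-decisions, each of
# which can be given its own data (their step 2) and best action (step 3).
retention = Decision(
    question="What retention offer should we make?",
    alternatives=["no offer", "discount", "upgrade"],
    sub_decisions=[
        Decision("How likely is this customer to churn?",
                 ["low", "medium", "high"], ["usage history", "complaints"]),
        Decision("How valuable is this customer?",
                 ["low", "high"], ["revenue", "tenure"]),
    ],
)
```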

It’s a great paper and you should definitely read it. You might also enjoy these papers on decision modeling and on framing analytic requirements using decision modeling.

We help a lot of clients select, install and adopt a Business Rules Management System (BRMS). These clients are looking to automate decision-making with transparency, deliver business control of their critical decision-making logic, and establish an ability to drive continuous improvement through simulation and impact analysis. Adopted correctly, these benefits ensure that a BRMS delivers a great ROI.

To maximize this ROI our clients are looking to get the benefits of their BRMS faster, spend less on implementing their BRMS and increase the size of the benefit they see. Here are some tips based on our experience:

Faster

The best way to get benefit from a BRMS faster is to get to a working decision service faster. More than anything, our experience shows this means capturing the requirements for that service faster.

For this we use decision modeling and the Decision Model and Notation (DMN) standard, as well as our decision modeling software, DecisionsFirst™ Modeler. Our experience is that this can reduce the time to get your decision requirements and business rules right by 5-10x, getting you to an ROI months earlier than traditional rules-first analysis approaches.

Cheaper

Decision modeling also dramatically reduces the amount of re-work by getting the requirements right the first time. This lowers cost too. More importantly, it creates the kinds of rules that business users can maintain themselves, reducing IT costs by eliminating the need for projects to make rule changes. It lets you take more advantage of the simulation tools in your BRMS, reducing the need for and cost of testing.

Small, regular changes also cost less than waiting until there are enough changes to justify a project. And these updates are themselves much cheaper because a decision model makes it easier to tell what change is needed.

Bigger

Bigger ROI comes from using the BRMS on a larger scope, something that getting faster, cheaper projects will help ensure. But it also comes from creating an environment in which the business can truly take advantage of rapid business rules updates – something a BRMS is really good at but that goes unused all too often. The role of a decision model in creating an environment where this kind of rapid iteration is the norm really can’t be overstated – it’s the key.

So, if you want a bigger, faster, cheaper ROI from your BRMS, don’t forget to add decision modeling. Check out our recently updated white paper Maximizing the Value of Business Rules for more. If you’d like our help with selecting or adopting a BRMS, drop us a line.

POSITION FILLED

Decision Management Solutions is growing and looking for a Delivery Manager for its projects.

The Delivery Manager will be an experienced hybrid agile project manager, responsible for managing several concurrent, discipline-based, high-visibility projects using agile and fixed-milestone methods in a fast-paced environment that may cross multiple internal business divisions and services engagements.

Goals

  • Deliver agile projects that provide exceptional business value to users
  • Achieve a high level of performance and quality, and
  • Further the delivery discipline and supporting methodologies

The team at Machine Learning Week/Predictive Analytics World has announced the schedule for 2021 (virtual conference, May 24-28, 2021) and issued their call for speakers. This is a great conference and will be a great opportunity to present. As always those with case studies and real experience will be particularly welcome!

I will once again be chairing a business-oriented track focused on operationalization of models, business management of machine learning and best practices for extracting real business value from machine learning, AI and predictive analytics. So if you’d like to talk about THOSE issues, I’d really like you to apply! Feel free to reach out to me directly with questions, but do apply.

Topics you might think about presenting on:

  • Success stories on how you built analytic models that added real business value
  • Horror stories on how to build models that don’t add value
  • Project management approaches to engage the business and IT in analytic projects
  • What other technology you use besides your favorite ML/analytic workbench and why it helps you get to production
  • What you’ve learned about hiring, developing and training analytic talent
  • How your company learns and improves when it comes to machine learning and analytics – communities, wikis etc.
  • Rollout best (and worst) practices
  • Experience with ML Ops and other operationalization steps

Plus of course anything around best practices and experience actually building the models is always welcome!

Deadline is November 6, 2020! Sign up here.

Eric Siegel and I had a great discussion about doing Machine Learning BACKWARDS recently – you can watch the recording below or on our YouTube Channel. Eric, if you don’t know, is the founder of Predictive Analytics World, a leading consultant, and the author of “Predictive Analytics”. You can also check out Eric’s new Coursera class.

This discussion was prompted by Eric and me talking about the rate of failure in machine learning projects. For instance, one survey said that 85% or more of machine learning projects fail to add business value – and that number has gone up, not down, in recent years. Our premise is that the best way to avoid these failures is to do machine learning backwards: begin with the outcome you want, an improved decision, and work back to the models you need and the data that will let you build them.

At the end we took some questions and one of the questions we got was:

How do you recommend getting senior executives engaged?

First, we said, you need to focus the discussion on the value of deploying a solution, not on the core technology. This means you might want to avoid using the words “model” or “predictive model” or “machine learning model”. Instead, focus on exactly which decisions within which large-scale operations are going to be improved, and by how much they could potentially be improved. Then you can start to talk about probabilities, such as which customers are much more likely to cancel, and how these probabilities help you make decisions more profitably. After all, you can place customers into at least two very different groups based on those probabilities and treat them accordingly, generating differentiation.

I discussed one useful exercise we have done with executives. We start by asking them how they are measured – how they measure their own personal success, which metrics they care about because those metrics drive their bonus. Then we’ll ask them to identify the decisions that get made in the organization that have an impact on those metrics. The first few are always big strategic decisions that the executive team make.

If you keep pushing on it, though, gradually they realize that there are decisions made by all sorts of people in the organization and indeed by bits of software infrastructure that matter to the metric also. And while they trust their own judgment – they don’t need analytics – they are much less sure about the judgment further down the organization or in the IT department. Once they realize that machine learning is not about improving their personal decisions but about improving the quality of decision making at the operational frontline they get much more excited.

Machine learning teams often feel pressure to make a strategic difference to the company. They mistakenly assume that the way to do this is to have machine learning influence the company’s executives and executive-level decisions. This is a mistake. Better, instead, to work with executives to find the high volume, repeatable decisions that make a difference and use machine learning to improve them. Because these decisions are made so often, even small improvements multiply to give you a strategic impact.
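
The arithmetic behind that multiplication is worth spelling out (invented numbers):

```python
# Invented numbers for illustration: a small, repeatable improvement can be
# worth more in aggregate than a big improvement to a one-off decision.
operational_value = 1_000_000 * 5    # 1M decisions/year, $5 better each
strategic_value   = 1 * 2_000_000    # one big decision, $2M better
print(operational_value, strategic_value)  # 5000000 2000000
```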

Lots more good tips in the video. If you are interested in how we approach this why not read our white paper on Framing Analytic Requirements.

A few months back, Scott Adams posted a great Dilbert that I have been meaning to write about for a while (click on the image to see the original).

In the strip, Dilbert says “You don’t go to war with the data you need. You go to war with the data you have.”

Now Scott Adams was being funny but in fact there is a kernel of truth here. We come across many companies that are failing to apply data to their decision-making, delaying building predictive analytic models or postponing their adoption of machine learning because they don’t have the data they “need”. It’s not integrated enough, clean enough, precise enough or just not as good as it “will be soon”. This is a mistake. You should do as Dilbert advises, and “go to war with the data you have“.

The trick is to start with the decision you want to improve, rather than with the data. Understand the decision, model how you think you make that decision today, work with those who make the decision every day to capture your current approach. This decision making is possible with the data you have – it must be, as this is how you decide right now.

Now you can ask some interesting questions like:

  • What would help you make this decision more accurately?
  • Which pieces of the decision give you the most trouble?
  • Where do you spend your time in this decision?
  • Is the data you need to make this decision presented the way you use it in this decision?
  • Which pieces of this decision are data analysis – places where you decide something about the data so you can base some other decision on that analysis?

Sometimes the answer to these questions will lead you to new data or identify that your data needs to be improved. If it does, at least you can show exactly WHY you need that new data and so calculate an ROI. But often it reveals that you need to use the data you have in different ways.

The biggest benefit comes from identifying possible predictive models. Because you know how the decision is made, you will be able to see how accurate a predictive model must be to be useful. Often this is a lot lower than you think. We have had clients realize they only needed a model that was a little better than a coin flip, and others who only needed 70-80% accuracy. You might need 99.99%, but you probably don’t.

Until you know, you can’t say whether your data is good enough or not. Without a business-driven target for accuracy, your data team will assume a model must be really accurate to be useful, and they could easily overshoot. Plus, many predictive models cope with missing and bad data quite well, or can at least degrade gracefully, allowing reasonable predictions even when the data is less good.
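
Here is a minimal sketch of how a business-driven accuracy target falls out of the decision itself, with invented payoffs:

```python
# Invented payoffs for illustration: acting on a true positive (e.g. saving a
# customer who really would churn) is worth $100; acting on a false positive
# wastes $20.
def expected_value_of_acting(p_correct: float) -> float:
    return p_correct * 100 - (1 - p_correct) * 20

# Acting beats doing nothing as soon as the expected value is positive:
# p * 100 > (1 - p) * 20, i.e. p > 20/120, about 17% -- far from 99.99%.
for p in (0.17, 0.50, 0.80):
    print(p, round(expected_value_of_acting(p), 2))
```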

So, don’t wait for the data you think you need, start improving decisions with the data you have. It’s noble, it’s heroic and it works.