
Register today!
Wednesday, September 25, 2019 10:00:00 AM PDT – 11:00:00 AM PDT

McKinsey recently reported that “Most carriers are struggling to meet their cost of capital, and productivity has barely moved over the past decade” and that “The insurance industry is facing a serious structural challenge … the majority of carriers are not making their cost of capital.” Growing productivity by improving operations is an essential ingredient in insurance carriers’ business strategy. Digital technologies in insurance have been focused on digitizing data and processes. They have made little impact on productivity because insurance is a decision-centric industry. Without digital decisions, productivity will remain flat.

Hear insurance industry expert Craig Bedell and Decision Management expert James Taylor discuss the importance of digital decisioning to improving insurance productivity. Learn how digital decisioning integrates your existing technology investments with machine learning and AI to drive increased productivity in everything from underwriting to claims handling, and pricing to next best offer.

Register today! This is a free webinar. Attend to ask your questions live or, if you would like to attend but can’t make the webinar time, register to receive a copy of the presentation and a link to the recording.

In January of this year, Wired Magazine published an article about a collaboration between The Department of Veterans Affairs (VA)  and Google parent Alphabet’s DeepMind unit to create software powered by artificial intelligence that attempts to predict which patients in the intensive care unit (ICU) are likely to develop acute kidney injury (AKI). The article stated that more than 50% of adults admitted to an ICU end up getting AKI, which is life-threatening.

According to the article, the Department of Veterans Affairs contributed 700,000 medical records to the project. The goal of the project was to test whether artificial intelligence could be developed to help doctors better predict which patients were at risk for developing AKI so preventative measures could be taken sooner.

Fast forward to the August issue of the journal Nature and the article “A clinically applicable approach to continuous prediction of future acute kidney injury”. It looks like artificial intelligence may in fact be able to help doctors identify which patients in the ICU are at serious risk of AKI. This study shows that artificial intelligence can predict kidney failure up to two days before it occurs. During the research study, the software was able to predict nearly 56% of all serious kidney problems and approximately 90% of those problems serious enough to require dialysis.

The work is still in the early stages — there were two false positives for every true positive — but it certainly advances what’s known about how deep learning may be helpful in clinical healthcare practice.

According to Dr. Dominic King, DeepMind’s health lead and coauthor of the research paper, kidney issues are particularly tricky to identify in advance. Today, doctors and nurses are alerted to acute kidney injury via a patient’s blood test, but by the time that information comes through, the organ may already be damaged. That is why he is hopeful about the long-term value of these types of predictive solutions. The team hopes a similar model can be developed to identify other major causes of disease and deterioration, including sepsis, a life-threatening infection.

I live in Palo Alto – within walking distance of Google and the home of the VA hospital that an article in the Financial Times identified as the planned location for a clinical trial of this algorithm. I also have a particular interest in how to develop effective clinical decisioning systems. At Decision Management Solutions we have done a couple of interesting prototypes and recently written a paper for the Department of Health and Human Services on this topic.

The value of a prediction like this is that it could help medical professionals better triage patients and get those who require intervention on a treatment plan right away. Doing so could potentially save hundreds of thousands of lives each year and lessen the need for invasive, uncomfortable treatments such as dialysis or kidney transplant. The prediction itself cannot do this – it’s just a prediction – but acting on the prediction can. Making the prediction readily available to medical professionals MIGHT change their behavior. If they understand the prediction, if the prediction is clearly explained, if there is nothing about the patient that triggers their own personal experience, if their first few cases aren’t false positives… if, if, if.

To take advantage of this kind of prediction, it needs to be embedded into a clinical decision support framework. Working with clinicians, you can develop a model of how they do triage for patients today and how they select appropriate treatments for a patient. This model will be different in each clinical setting – the VA is likely to do this differently from your local hospital network, for instance. The availability of facilities, distance to them, organization of specialties and much more go into this decision. And if the decision about triage requires medical judgment that can’t be automated, this too can be input to the system.

With a clear understanding of the decision, you can improve it using the prediction. The medical professionals can see how they would change the decision given the prediction, its accuracy and its false positives. Instead of simply showing them the prediction and hoping they will change their decision, the system can change its recommendation in alignment with their approach.
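
As a rough sketch of what that looks like in practice, the prediction becomes one input to a decision service alongside other data and non-automatable clinical judgment. All field names, thresholds and actions below are hypothetical – they are not taken from the DeepMind/VA study or any real clinical protocol.

```python
# Hypothetical sketch only – fields, thresholds and actions are invented for
# illustration, not drawn from the DeepMind/VA work or any clinical guideline.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    aki_risk_48h: float        # model's predicted probability of AKI within 48 hours
    latest_creatinine: float   # most recent blood test result (mg/dL)
    nephrology_on_site: bool   # availability of specialist facilities
    clinician_override: Optional[str] = None  # medical judgment that can't be automated

def triage_recommendation(p: Patient) -> str:
    """Turn the prediction into a recommended action, in line with how clinicians decide."""
    if p.clinician_override:
        return p.clinician_override  # human judgment wins
    if p.aki_risk_48h >= 0.8 or p.latest_creatinine >= 3.0:
        return "nephrology consult now" if p.nephrology_on_site else "arrange dialysis assessment"
    if p.aki_risk_48h >= 0.5:
        return "repeat bloods in 6 hours and review fluids and medications"
    return "routine monitoring"

print(triage_recommendation(Patient(aki_risk_48h=0.62, latest_creatinine=1.4, nephrology_on_site=True)))
```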

Remember, AI, Machine Learning and predictive models don’t DO anything – they just make predictions. If you want to save lives (or engage customers, prevent fraud, manage risk or anything else), then you have to make decisions with those predictions.

Maybe I should drop by the VA and tell them…

In May, 2019 the analytics community lost a true pioneer. I met Robert Hecht-Nielsen in 2001 when the company he co-founded, HNC Software, purchased the Blaze Advisor business unit of Brokat Technology, where I worked. The following year, HNC Software was acquired by the company now known as FICO. He was well known to all for his red suspenders, bald head, and beaming smile.

Robert was a pioneer in the field of neural networks. He wrote the first textbook on the subject, Neurocomputing, in 1989. In 1986, he co-founded HNC Software, a neural networking startup in San Diego, based on his breakthrough work in predictive algorithms. HNC was perhaps the first start-up in the neural network/machine learning space to be significantly financially successful. The company’s flagship product, Falcon®, is used all over the world to detect fraudulent debit and credit card transactions. Falcon was, I think, the earliest example of automated decisioning as a service for fraud detection using neural networks and in-memory profiles. I believe it was first commercially available in 1994 and, based on its success, the company went public in 1995.

In July, many of my former colleagues attended a celebration of Robert’s life. I was unable to attend, due to a trip to Asia, but was touched by some of the stories and wanted to share a summary here. In addition to his many years at HNC, Robert was also an adjunct professor of electrical engineering at the University of California San Diego. The memorial was a mix of family, HNC alumni, and former graduate student advisees. All of the stories, from Robert’s oldest granddaughter to his students and his former employees, shared these common words of advice:

  1. Study a hard science like math or physics; you’ll learn business when you’re doing it
  2. Learn how things work; don’t just put gas in your car, learn how thermodynamics works
  3. Read The Economist (he quizzed his students on it regularly)
  4. If you have the option to take the easy path or the hard path, take the hard path because you’ll learn so much more.

Everyone spoke about how generous Robert was with his time. There was one story in particular that summed Robert up for me. A gentleman got up to speak at the celebration. Here’s what he said.

Hi. My name is Quinton. Most of you don’t know me. I met Robert in 1998 when I was an air conditioning technician at the HNC office. I had just gotten out of the Marines and I was thinking about going back to school. I was up in the vents when Robert came back into his office. I shouted down an apology that I was just finishing up. When I got down out of the vent, he invited me to sit down. We talked for two hours (everyone in attendance laughed). He passed along all of the advice everyone has mentioned today. He literally changed the trajectory of my life. I didn’t speak to him again until 2012. I saw an article about him and sent him an email at his UCSD address to thank him for that day in his office. He emailed me back and invited me to his home in Del Mar. Since that day in Del Mar we met at least once a quarter for the rest of his life. I am now an entrepreneur, something else that Robert was very passionate about. I own a medical device company. I have no idea what my life would look like today if I hadn’t met Robert.

One of the key benefits of decision management is its focus on operational decisions. Diving into these operational decisions further, you can typically identify several micro-decisions that, when improved, will dramatically affect business performance.

“Micro-decisions” are the decisions made transaction by transaction, customer by customer. Companies often fail to recognize these as individual decisions, instead lumping them into bigger ones. For instance, instead of identifying that each price offered to each customer is a micro-decision, they consider “pricing” as one big, strategic decision. Failing to consider these micro-decisions as separate decision-making opportunities means you are unable to fully leverage the personalization or targeting of these micro-decisions to individual customers. This in turn means you cannot take advantage of all that lovely data you have about customers, or reward customers based on loyalty metrics, or….

Even if you don’t want to completely automate a micro-decision, it would be wise to think about using Decision Management to reduce the range of options available to a human decision-maker, or at least rank the available options by likely effectiveness. A decision service can be used to either fully automate a micro-decision or provide support for human decision-making, as described. It is also important to match these micro-decisions to objectives and to measure them so they can be improved over time.

Micro-decisions are made often, typically thousands of times a day. This means that any improvement in the effectiveness of these decisions has an outsized impact. Even a small increase in profitability, customer retention or net promoter score resulting from a micro-decision scales up – its value is multiplied by the thousands of such decisions you make. The total impact often exceeds that of much more “strategic” decisions.

Decision Management and micro-decisions are a great way to gain a macro impact.

Like many of you I am awash in digital photos and have been trying to find a good way to manage them. For various reasons I picked Amazon Photos. One of the key benefits was the family vault – a way to let several people upload photos to a shared space. Of course, another reason is to take advantage of Amazon’s facial recognition AI.

So the good news is that both of these seem to work. The shared files are managed nicely on AWS (no surprise that Amazon can do this) and the AI does a pretty good job of recognizing faces and letting you edit this when it makes mistakes. So far so good.

But the facial recognition is account by account and there’s no way to use it in the family vault:

I can look at my photos and identify faces in them. The other members of my family can look at their photos and do the same.

But we can’t say that these two people are the same. I can’t even use the faces they have identified when looking at a family member’s photos – only they can.

Plus there’s no way to see all the photos the family has of a particular person. Which was kind of the point of the family vault.

So why do AI programmers smart enough to do facial recognition fail to realize that they should make this work across a family vault? I suspect because, like far too many AI programmers, they are focused on the technology. Their success criterion is to make the technology solve a technical problem – identifying faces – not a business problem – managing a photo collection.

If you want your AI programmers to focus on the business problem and not the technical problem, check out this paper on our DecisionsFirst™ approach for success with AI.

I noticed the other day that my phone, which used to have a relatively sophisticated (if horribly complicated) “do not disturb” approach had reverted to only allowing a simple version. Online, many users complain that now they can’t really use it because it’s too “dumb” regarding when it rings and when it doesn’t. How did it come to this? Why did the feature get removed? Because business rules broke it.

The old approach let you specify business rules. You started by specifying rules about being asleep or in meetings and then you started adding exceptions. But this gets complicated quickly. How does a phone do all the things real people need it to do?

For example, don’t ring:

  • When I am in a meeting, unless the caller is my significant other and it’s the second time they have called in the last 5 minutes
  • Unless it’s one of my kids and this is the third time they have called, unless I am driving because then I can’t switch to the incoming call from the dial-in anyway
  • When I am at home, unless it’s someone that I call regularly or a favorite, unless I am asleep
  • If I am driving and on my headset
  • Unless I’m not in a meeting; but if someone important calls, answer, but otherwise don’t
  • Unless it’s my emergency contact calling for the third time in five minutes, then ring even if I am not using my headset

You get the picture. Too hard to manage effectively in a phone interface. Trying to use rules to solve this problem broke the feature.

But what if you thought about decisions instead of rules? Check out this decision model I came up with:

  • The phone needs to decide what to do (ring or not)
  • To do that it needs to decide where you are, what’s happening, who’s calling and how persistently they are calling
  • It can do all of these by itself using things it knows (sensors), basic settings (sleep hours, calendars), contact info that you already have and the call log
  • All you would have to do is specify a simple decision table that says, for a given set of conditions, whether you want it to ring or not

Of course there’s some UI design to work out how best to capture this decision logic (a table, reasons to mute then reasons to ring, or something else) but the logic would be simple because of the sub-decisions.
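
To see how much simpler the logic becomes once the sub-decisions are separated out, here is a rough sketch in Python. The sub-decisions, categories and table entries are all invented for illustration – this is not a real phone feature specification.

```python
# Illustrative only – the sub-decisions, categories and rules are made up.

def where_am_i(sensors):                 # sub-decision: context from sensors and calendar
    if sensors["in_car"]:
        return "driving"
    return "meeting" if sensors["calendar_busy"] else "home"

def who_is_calling(call, contacts):      # sub-decision: caller category from contact info
    if call["number"] in contacts["favorites"]:
        return "favorite"
    return "known" if call["number"] in contacts["all"] else "unknown"

def how_persistent(call, call_log):      # sub-decision: repeat calls in the last 5 minutes
    repeats = sum(1 for c in call_log
                  if c["number"] == call["number"] and call["time"] - c["time"] < 300)
    return "repeated" if repeats >= 2 else "single"

# The top-level decision is then just a small decision table:
# (context, caller, persistence) -> ring?
RING_TABLE = {
    ("meeting", "favorite", "repeated"): True,
    ("driving", "favorite", "repeated"): True,
    ("home", "known", "single"): True,
    ("home", "favorite", "single"): True,
}

def should_ring(context, caller, persistence):
    return RING_TABLE.get((context, caller, persistence), False)  # default: stay silent

print(should_ring(where_am_i({"in_car": False, "calendar_busy": True}),
                  "favorite", "repeated"))   # True
```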

To learn more about Creating Agility and Operational Efficiency with Decision Modeling, read this whitepaper.

Ron Ross recently posted a question “What Happens When Behavioral Business Rules and Decision Logic Collide?” in which he asks whether a behavioral rule or decision logic should “win” when they disagree. The problem is that this is the wrong question.

Take his example about a city charging for its facilities. The behavioral rule is “A senior citizen must not be charged a recreational fee for use of facilities.” The decision logic is a table shown in this image:

But this table is clearly wrong – if Senior Citizens should not be charged recreational fees then this logic is incomplete/inaccurate. Asking if this should “win” relative to the behavioral rule is a pointless question – the logic is broken. This table is just the default or generic calculation, and the point of a decision is to decide for any SPECIFIC transaction, not for a GENERIC one.

The right question is to ask how the city decides on recreational fees. One of the decisions in this model would be to calculate the base hourly fee. Others would be to identify exclusions (I bet Senior Citizens are not the only exception) and discounts. Each decision would be described by appropriate decision logic. The overall decision model would then decide on the fee correctly in all circumstances.
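
To illustrate (with made-up fees and categories, not the actual logic from Ron’s example), such a decision model might break the fee decision into a base fee sub-decision, an exclusions sub-decision and a discounts sub-decision:

```python
# Illustrative only – the fees, facilities and exclusion categories are invented,
# not the actual logic from Ron Ross's example.
def base_hourly_fee(facility):
    return {"pool": 12.0, "gym": 8.0, "field": 5.0}.get(facility, 10.0)

def is_excluded(person):
    # Senior citizens are one exclusion; there may well be others.
    return person["age"] >= 65 or person.get("veteran", False)

def discount_rate(person):
    return 0.5 if person.get("resident", False) else 0.0

def recreation_fee(person, facility, hours):
    if is_excluded(person):
        return 0.0
    return base_hourly_fee(facility) * hours * (1 - discount_rate(person))

print(recreation_fee({"age": 70}, "pool", 2))                    # 0.0 – excluded
print(recreation_fee({"age": 30, "resident": True}, "gym", 2))   # 8.0
```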

Ron argues that adding exceptions to decision tables complicates them. Well it can, especially if you do it wrong or build one great big decision table. A decision model handles them just fine.

So, my answer to Ron is simple. Build the right decision logic and then you won’t have to answer the question.

I recently wrote a blog over on Decision Management Solutions’ blog called 5 Reasons to fire your rules consultant to highlight the worst offences of business rules consultants. Here’s the list:

  1. They want to do rule harvesting
  2. They call it a rule engine
  3. They put off business user enablement to phase 2
  4. They smush process and rules in one project
  5. They want to use ORs, ELSEs, ELSEIFs…Oh My!

Read the full post over on the Decision Management Solutions blog and subscribe to our newsletter for more articles like this in the future.

Ryan Trollip, CTO of Decision Management Solutions, and Charlotte DeKeyrel, one of our experienced decision modeling consultants, and I are all going to be at IBM THINK February 11th – February 14th, 2019 in San Francisco. Ryan and I are speaking on Thursday and you’ll find us at events involving Decision Management and IBM’s decisioning products like ODM, Watson Studio etc.

If you are going to be there and would like to talk Decision Management and digital decisioning, get in touch – info@decisionmanagementsolutions.com. We’re doing some great stuff with the IBM product stack and would love to share our DecisionsFirst™ approach to see if it’s a fit for your problems.

I am speaking with Ryan Trollip, CTO of Decision Management Solutions, and Stephane Mery of IBM on Delivering Excellent Customer Experiences with Analytics and Automation at IBM THINK.

Many customers are struggling to deliver consistent digital experiences across customer channels and touch points. Adopting a decision-first approach is a step in the right direction to provide this consistency and decisioning support over the entire customer journey. Operational decisions are mixing business rules, analytics and machine learning. This talk will share some use cases and illustrate how decision modeling can be used as a framework to inject AI into your business operations.

Come hear us discuss a proven approach to delivering digital decisioning that operationalizes predictive analytics and positions you for success with ML and AI.

I have known Tom for many years and enjoyed his books. He recently sent me a copy of his latest one – The AI Advantage: How to Put the Artificial Intelligence Revolution to Work (Management on the Cutting Edge).

Tom does his usual excellent job of introducing a technical topic – AI and machine learning – and focusing on what business leaders need to know about it. While he has a chapter on the various approaches to adopting AI technology, the book’s key message is that a technology-first approach to AI is a bad idea. Instead of technology-led “moonshots”, enterprises should use AI to solve practical, immediate, operational problems.

The book begins with a discussion of the role of AI in the enterprise and surveys what companies are doing today – both successes and failures. It was particularly refreshing to see failures discussed and Tom does a nice job of using some of these failures to illustrate how best to approach AI. He believes AI is going to transform companies, albeit perhaps more slowly than some believe, and encourages companies to identify a coherent AI strategy. He provides some good material on the elements of an AI strategy and outlines how different companies might take different approaches.

The biggest takeaway from the book is that success will not come from “moonshots” but from a more practical approach. As Tom says:

There are relatively few examples of radical transformation with cognitive technologies actually succeeding, and many examples of “low hanging fruit” being successfully picked

In our experience this is critical. Taking a big “we’ll just use AI” approach rarely works. Developing a comprehensive approach to a decision that mixes and matches AI with other technologies like descriptive analytics, predictive analytics and business rules works much better. Tom recommends that companies develop a series of less ambitious applications in the same area that in combination have a substantial impact. Each is less risky than a moonshot and you will have time to adapt to each piece. But, when combined, the overall impact is high.

We like to do this by building a decision model to break down a specific collection of closely related business decisions into their component sub-decisions. Some of these sub-decisions will be best done by people, some can be codified as business rules and some will need ML or AI. This lets you identify a set of smaller, less ambitious AI decisions and shows you how they will contribute to an overall more effective decision.

As Tom says:

Given all the media and vendor hype in the cognitive technology space, companies often feel pressure …to take on a cognitive project. It’s much better for a company to try and see beyond marketing blandishments about AI and to create the best fit with the organization’s strategy, business model and capabilities.

That requires that you really understand how you make decisions and how (and where) AI can help. Our experience is that a design thinking approach to decisions – DecisionsFirst Design Thinking as we call it – lets you redesign the decision-making and take advantage of AI. Too many companies have used AI to “pave the cow path” by automating existing work processes (particularly true with RPA technology). Really taking advantage of AI will require structured and controlled re-thinking of your decision-making.

Usefully for business executives, he provides a very accessible survey of the capabilities of AI:

  • Create highly granular predictive and classification models
  • Perform structured digital tasks (RPA)
  • Manipulate information (OCR and data integration)
  • Understand human speech and text
  • Plan and optimize operations
  • Perceive and recognize images
  • Move purposefully and autonomously around the world
  • Assess human emotions

For each of these he provides a concrete discussion of how they might reasonably be used, not in the future but now – discussing in passing how work is likely to need to be redesigned to take full advantage of these technologies.

He also discusses how, while ML and AI represent an extension of the world of advanced analytics, they also differ from traditional data mining and predictive analytic approaches in 3 ways:

  • They are usually more data-intensive and detailed, which limits their application to scenarios where there is a lot of data
  • They therefore need to be trained on a subset of the data because there is so much available.
  • They can learn continuously as data is fed through the resulting algorithm, rather than waiting for the next formal update.

AI and ML can be used in many of the same circumstances that predictive analytics and data mining can be used. Think of them as both an extension of these techniques and as something distinctly different.

In later chapters he talks about jobs and skills in a world where AI is increasingly pervasive and about some of the social and ethical issues such as transparency and bias. As he says:

As cognitive technologies are developed, organizations should think through how work will be done with a given new application, focusing specifically on the division of labor

He has a good discussion of transparency and our experience has been that considering AI as one part of the solution, mixing more opaque AI with more transparent business rules for instance, really helps. In addition, new technologies such as LIME and AI OpenScale help explain AI model results in a way that can be combined with the explanations produced by these other, more transparent technologies.

One of the themes in the book is the challenge of getting AI to really affect an organization’s core business operations. In a Deloitte survey of cognitive-aware executives, for instance, 47% said it was “difficult to integrate cognitive projects with existing processes and systems”. As Tom points out, this integration is essential if you want to make a real impact. This matches a recent McKinsey survey that found analytic “leaders” investing much more heavily in this “last mile” integration than others. A Decision Management platform – a digital decisioning platform, as some call it – is a key ingredient in tying advanced analytics, ML and AI into your day-to-day operations.

Finally, from a Decision Management perspective, Tom has some great illustrations of the value of a decision-centric approach and of how AI is integrated into an overall approach. Early in the book he quotes Jeff Bezos’ Letter to Amazon Shareholders from 2017 (my emphasis added):

But much of what we do with machine learning happens beneath the surface. Machine learning drives our algorithms for demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations, and much more. Though less visible, much of the impact of machine learning will be of this type – quietly but meaningfully improving core operations.

“Quietly but meaningfully improving core operations since 2002” could be a motto for Decision Management! That’s the focus of Decision Management and always has been. In that sense, machine learning (ML) and AI represent powerful new tools for doing what we have always done rather than something requiring a radically different approach. Tom even makes this point, identifying how rules-based systems have been addressing many of the scenarios for which AI is being considered. Like Tom, we see potential in combining these rules based and machine learning approaches to produce adaptive systems.

I will end with one of Tom’s quotes from early in the book:

The businesses and organizations that succeed with AI will be those that invest steadily, rise above the hype, make a good match between their business problems and the capabilities of AI, and take the long view.

This is not a book that is going to add to your technical knowledge about AI but it’s a great book for business executives and for those who want to think more deeply about how AI will change their business. You can buy it here The AI Advantage: How to Put the Artificial Intelligence Revolution to Work.


We’ve blogged recently about some of the challenges in analytics and AI – How More Companies Can Maximize the Potential of Analytics and 80% of insurance carriers aren’t delivering high impact analytics – building on some great McKinsey research. They recently published another article, this time targeted at Chief Analytics Officers – Rebooting analytics leadership: Time to move beyond the math.

This was a great article. First, I loved that it made it clear Chief Analytics Officers should think about AI and Analytics together. This is key as it focuses AI on decision-making (where analytics is already focused), not just conversational AI. It’s not that chatbots aren’t useful – they’re just REALLY different from decision-making AI and are better thought of as user experience technology.

I was also glad to see them identify the things you can’t rely on to drive analytics/AI success like being “born digital”, having an analytical CEO or being in an existential crisis that drives people to act. You need to be able to succeed even when these things are NOT true.

The whole paper for me was summarized by this one great customer quote:

just getting the math right doesn’t drive the change

So true! McKinsey had some great suggestions that align with what we have seen work with our customers:

  • Build a coalition of equals across the business/operations, analytics and IT.
    • This means you need ways to talk about the problem everyone understands – not math but decision models.
  • Put business value front and center and align analytics opportunities and innovation with the business unit’s vision and priorities.
    • Identify the business’ key metrics and find the decisions that, if improved, will move those metrics. Focus on those decisions.
  • Focus on IT as a strategic partner rather than simply as an execution arm.
    • Engage them early to think about the last mile deployment of analytics.
  • Heavily invest in integrating advanced analytics into the workflow.
    • We use decision-centric design thinking to get the decision model the business needs, build analytics to support that, and then use the decision model and a modern Business Rules Management System to push that decision into systems and processes.
  • Be a change agent and advise boards and executives on what’s possible.
    • Don’t let those executives throw up their hands and say “I don’t understand AI so do what you want”. Make them work for it!

Our DecisionsFirst approach is exactly what you need to succeed as a Catalytic CAO so if you are a Chief Analytics Officer, or want to become one, check out our Chief Analytics Officer resource page or contact us to chat.

Recently we have posted on the Decision Management Solutions blog about a couple of interesting pieces of McKinsey research that discuss the unfortunate truth – most companies are NOT succeeding with advanced analytics.

First, there’s this general research How More Companies Can Maximize the Potential of Analytics:

“Senior executives tell [McKinsey] that their companies are struggling to capture real value. The reason: while they’re eking out small gains from a few use cases, they’re failing to embed analytics into all areas of the organization.”

McKinsey identifies three key challenges:

  1. Aligning on strategy
  2. Building the right foundations of data, technologies, and people
  3. Conquering the last mile by embedding analytics into decision making

Second is a piece specific to insurance, but likely typical of companies in other industries, that identifies that 80% of insurance carriers aren’t delivering high impact analytics (and I suspect most of the others are only doing so very narrowly).

Why aren’t they succeeding? 38% of those surveyed cited a failure to integrate analytics into the frontline – more than cited poor data quality or a lack of strategic support.

What these research reports have in common is the identification of the importance of not just developing analytic insight, but actually embedding it in front-line workflows and systems – by using it to drive better operational day-to-day decision-making.

We have found that a straightforward, three step approach addresses this:

  1. Begin with decisions, not with data
  2. Begin with operational decisions, not strategic ones
  3. Begin with an agile analytic deployment platform, not with visualization

Check out the blog posts and the underlying research. And when you’re ready to succeed with advanced analytics, contact us.

Frontline Solvers has been in business for over 25 years and focused on democratizing analytics for the last five years. They identify themselves as an alternative to analytic complexity with a focus on leveraging broadly held Excel skills and a large base of trained students. They offer several products for predictive and prescriptive analytics and have sold these products to over 9,000 organizations over the years. Their customers are both commercial and academic with hundreds of thousands of students using the tool and 500,000 cloud analytics users. Their commercial customers include many very large companies, though generally they sell to several distinct business units rather than at a corporate level.

Frontline began with their work in solvers (prescriptive analytics) and have worked “backward” into predictive analytics. Their approach is very focused on avoiding analytic complexity:

  • Start small and keep it simple, with a focus on rapid ROI.
  • Recognize that companies have more expertise than they think – Excel and programming skills for instance plus all the students who have used the software in MBA classes.
  • “Big Data” and more complex ML/AI technologies are not essential for success – ordinary database data is often enough.

Frontline is focused on decision support today but rapidly moving into decision automation – decision management systems.

Their core products are modeling systems and solvers for optimization and simulation, used to build a prescriptive model, rather than the analysis of lots of data (though they have data mining routines too). These kinds of optimization models are often called prescriptive analytics because they recommend – prescribe – specific actions for each transaction. Prescriptive analytics can, of course, also be developed by combining predictive analytics and business rules – driving to a recommended action using the combination. Frontline recognizes this and envisions supporting business rules in their software.
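
For readers less familiar with the solver side, here is a tiny, invented example of a solver-based prescriptive model, written with SciPy’s linear programming routine purely for illustration – this is not Frontline’s product or the RASON language. The objective, constraints and numbers are made up.

```python
# A minimal solver-based prescriptive model (illustrative, not Frontline/RASON):
# choose production quantities to maximize profit subject to capacity constraints.
from scipy.optimize import linprog

# Maximize 20*x1 + 30*x2 (linprog minimizes, so negate the objective).
c = [-20, -30]
# Constraints: 1*x1 + 2*x2 <= 100 (machine hours), 3*x1 + 1*x2 <= 90 (labor hours).
A_ub = [[1, 2], [3, 1]]
b_ub = [100, 90]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(result.x, -result.fun)  # prescribed quantities and the resulting profit
```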

Solver-based prescriptive analytic solutions generally focus on many transactions in a set, not a single transaction – what Frontline calls coordinated decisions. Sometimes these decisions also have no data, no history, so a human-built model is going to be required, not one based on data analysis. Indeed, any kind of prescriptive analytic approach to decision-making is going to require human-built models – either decision models to coordinate rules and analytics or a solver model (or both, as we have seen in some client projects).

Frontline argues Excel is the obvious place to start because Excel is so familiar. Their RASON language allows you to develop models in Excel and then deploy them to REST APIs. They aim to make it easy for business domain experts to learn analytic modeling and methods, to provide easy to use tools and then make it easy to deploy. Working in Excel, they provide a lot of learning aids in the product that pop up to help users. They also have an online learning platform (solver.academy) with classes and there are over 700 university MBA courses using Frontline’s software to introduce analytics methods.

The core products are:

  • Analytic Solver – a point and click model builder in Excel, including the cloud-based Excel version which they have been supporting since 2013.
  • RASON – modeling language that can be generated from the Excel-based product or edited directly.
  • SDK – supporting models written in code, developed in RASON and/or Excel and deployed as REST APIs.

Their base solver is built into the desktop Excel (OEMed by Microsoft). As the cloud Excel does not have this, they have built online apps for optimization, simulation and statistics that work across Excel Online and Google Sheets. The latest version of Excel Online is now ALMOST able to support the full Analytic Solver Suite and this is expected to be complete in Q1 2019, allowing them to unify the product across desktop and online Excel.

To bring data into the solver, they use the Common Data Service as well as standard Data Sources for data access. This makes it easy to connect to data sources. They also use the Office Workbook model management tools (discovery, governance, audit) which are surprisingly robust for those with corporate licenses to Excel.

The engine has four main capabilities:

  • Data mining and forecasting algorithms.
  • Conventional optimization and solver.
  • Monte Carlo Simulation and decision trees.
  • Stochastic and robust optimization.

For very large datasets (such as those used in data mining), the software can pull a statistically valid sample from, say, a big data store. The data can be cleaned, partitioned into training and validation sets and various routines applied. The results are displayed in Excel and PMML (the standard Predictive Model Markup Language) is used to persist the result. Obviously the PMML is executable both in third party platforms and in their own RASON language.

RASON is a high-level modeling language that allows the definition of data mining models, constraints and objectives for optimization, and distributions and correlations for simulation. A web presence at rason.com allows this to be written in an online editor and executed through their REST API. RASON is JavaScript-like and can embed Excel formulas too. RASON can be executed by passing the whole script to the API using a JavaScript call. An on-premise version is available too for those who wish to keep execution inside the firewall.

The Solver SDK has long supported coding of models. Since 2010 the SDK has been able to load and run the Excel solver models. The RASON service came in 2015 and in 2017 they added integration with Tableau and Power BI, and this year to Microsoft Flow. These integration steps involve generating apps from inside the Excel model using simple menu commands. Behind the scenes they generate the RASON code and package that up in a JavaScript version for consumption.

You can get more information on Frontline Solvers here.

Cassie Kozyrkov, the Chief Decision Intelligence Engineer at Google, wrote an article recently titled Is your AI project a nonstarter in which she identified 22 checklist items for a candidate AI project. It’s a great article and you should definitely read it. In particular you should note the quote at the top:

Don’t waste your time on AI for AI’s sake. Be motivated by what it will do for you, not by how sci-fi it sounds.

And what it will do for you is often help your organization make better decisions.

We always begin customer projects by building a decision model. Working directly with the business owners, we elicit a model of how they want to decide and represent it using a Decision Model and Notation (DMN) standard decision requirements model. This shows the decision(s) they want to make and the requirements of those decisions – the sub-decisions (and sub-sub-decisions), the input data and the knowledge sources (policies, regulations, best practices and analytic insights) that describe their preferred approach.
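
As a rough sketch of what such a decision requirements model captures (the decision names here are invented, not from a client project), think of each decision as a node with its required sub-decisions, input data and knowledge sources:

```python
# A minimal sketch of capturing a DMN-style decision requirements model as data.
# The decision names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    name: str
    requires_decisions: List[str] = field(default_factory=list)  # sub-decisions
    requires_data: List[str] = field(default_factory=list)       # input data
    knowledge_sources: List[str] = field(default_factory=list)   # policies, regulations, analytics

model = [
    Decision("Approve Claim",
             requires_decisions=["Claim Validity", "Fraud Risk"],
             knowledge_sources=["Claims Policy Manual"]),
    Decision("Claim Validity",
             requires_data=["Claim", "Policy"],
             knowledge_sources=["Eligibility Regulations"]),
    Decision("Fraud Risk",
             requires_data=["Claim", "Claim History"],
             knowledge_sources=["Fraud Scoring Model (ML)"]),
]
```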

These models address several of Cassie’s early points (1. Correct delegation and 2. Output-focused ideation) by focusing on the business and on business decision-making. We also link this decision model to the business metrics that are influenced by how those decisions are made, which addresses a couple of her key points on metrics (18. Metric creation and 19. Metric review).

This decision model is often a source of analytic inspiration, as business owners say “if only…” – “if only we knew which emails were complaints”, “if only we knew who had an undisclosed medical condition”, “if only we knew if this text document described the condition being claimed for”…. These are the analytic and AI opportunities in this decision. Like Cassie, we often find that existing data mining and descriptive analytics projects can be used to see how a decision could be improved with AI/ML (3. Source of inspiration).

Now the decision model has sub-decisions in it that are either going to be made by a person or by an AI algorithm. Because you know what a better decision looks like (thanks to the link to business metrics), you can make sure an AI algorithm is likely to help (20. Metric-loss comparison) and you can consider if the specific decision you identified is a good target for AI (4. Appropriate task for ML/AI). Critically we find that often the whole decision is not suitable (there are too many regulations or constraints) but critical sub-decisions ARE suitable.

When it comes to putting the resulting AI algorithm or ML model into production, the decision model makes it clear how it will be plugged in and how it will be used in the context of the business decision (5. UX perspective and, to some extent, 8. Possible in production). Keeping the end – the decision – in mind in this way means that project teams are much more focused on how they will operationalize the result of the algorithm than they would be otherwise.

If you automate the decision model, as we do, using a BRMS then you will also be able to simulate the decision against historical data (17. Simulation). The decision model means you can simulate the decision with and without your AI/ML components to prove the ROI too.

Finally, this focus on decision-making means you know when the AI/ML model will be used (other sub-decisions are likely to address eligibility and validity of the transaction, for instance, narrowing the circumstances in which the AI must work) and you can see what accuracy is required. This is often much lower than AI/ML teams think because the decision model provides such a strong frame for the algorithm. (21. Population and 22. Minimum performance).

Decision models are a really powerful way to begin, scope, frame and manage AI and ML projects. Of course, they don’t address all of Cassie’s 22 points, and the others (6. Ethical development, 7. Reasonable expectations, 9. Data to learn from, 10. Enough examples, 11. Computers, 12. Team, 13. Ground truth, 14. Logging sanity, 15. Logging quality, 16. Indifference curves) will need to be considered, decision model or not. But using a decision model will help you frame analytic requirements and succeed with AI.

IBM has been developing Decision Composer since 2017 and is releasing it as part of its core Business Rules Management System, Operational Decision Manager, in December 2018. Decision Composer is a browser-based tool, currently available on the IBM cloud, that uses a decision model metaphor to design decision logic and deploy it as a decision service. You can think of it as a free, streamlined (and simplified) version of ODM. Decision Composer allows you to define multiple projects, each containing a decision model for a particular decision service. It has components for modeling data, modeling decisions, specifying decision logic (business rules) and testing.

  • The design page shows a decision model for the project. Using a subset of the Decision Model and Notation (DMN) format, the design includes Decisions and Input Data shapes that can be linked using Information Requirements or dependencies. The diagram supports the normal kinds of features and layout is automated with a limited number of ways to rearrange things. The result of each decision is defined, using a standard datatype or a type defined for the project.
  • From each decision you can access the Decision Logic Editor. This allows you to either edit a decision table or write business rules. The decision logic is edited in the same language as ODM uses, with a natural language-like option (Business Action Language) as well as a tabular decision table. Multiple rules can be written for a single decision or a decision table can be defined. The conditions in the rule or table match the requirements drawn in the decision model. Decision tables support merged fields, calculated columns, column reordering etc. and provide a description of the rule implied by each row in the table. Overlap and gap checking are provided also. Rules and tables can be controlled through interaction policies, similar to DMN’s hit policies. Supported interactions include sequential, first rule, smallest or greatest value, sum and collect. For example, collect gathers the results of applicable rules as a list (see the sketch after this list).
  • Each Input Data represents a Type. The types can be imported from existing data definitions or defined on the fly in the design environment. Types can use the normal data types, can be lists, can have structure, allowed values and restrictions as you would expect. When complex types with multiple fields are defined, the user can determine which specific fields are used by a decision when writing the decision logic.
  • A simple test interface is provided to make it easy to submit sample data and confirm results. Multiple test datasets can be defined and saved. Executed rules and their respective input and output values are visible to help understand how the logic was processed.
  • Decision projects can be shared with others. The different parties can then collaborate on the same decisions, with simple version control.
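
To make the interaction policies concrete, here is a small sketch of how “first rule” and “collect” change what a rule set returns. This is illustrative Python, not Decision Composer’s implementation or syntax, and the rules themselves are invented.

```python
# A sketch of interaction (hit) policies – illustrative only, not Decision Composer.
RULES = [
    (lambda order: order["total"] > 500, "manager approval"),
    (lambda order: order["customer"] == "new", "credit check"),
    (lambda order: order["total"] > 100, "standard review"),
]

def evaluate(order, policy="first"):
    results = [outcome for condition, outcome in RULES if condition(order)]
    if policy == "first":
        return results[0] if results else None   # first applicable rule wins
    if policy == "collect":
        return results                            # gather all applicable outcomes as a list
    raise ValueError(f"unsupported policy: {policy}")

order = {"total": 750, "customer": "new"}
print(evaluate(order, "first"))    # 'manager approval'
print(evaluate(order, "collect"))  # ['manager approval', 'credit check', 'standard review']
```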

New versions can be saved, undo is supported etc.

Decision services can be deployed as REST services in the IBM cloud using the ODM infrastructure, and the deployed services have well-defined signatures. Lazy deployment to IBM Cloud allows you to put the service immediately into production – the code will be deployed the first time someone tries to use the service.

Decision Composer also allows you to import sample projects for a quick start tutorial experience. You can also import an XSD, JSON file, swagger URL (for an existing service) or a Watson Conversation Workspace to create an initial data model, allowing you to rapidly develop decision logic against a known data set.

For those wondering about the differences between ODM and Decision Composer, ODM offers much more scalable execution, more extensive governance and permission management, more robust testing and powerful simulation tools.

You can try Decision Composer at ibm.biz/DecisionComposer and the IBM team have a blog post about the new version of Decision Composer on ODMDev. You can also check out this post on how DecisionsFirst Modeler, Decision Composer and IBM ODM can all be used together – Decision Modeling and DMN for IBM Customers – New Brief.


We are running Jan Vanthienen’s highly reviewed Decision Table Modeling with DMN training again December 11-13, 9:30am-11:30am Pacific each day.

When representing and analyzing business decisions in real business situations and processes, decision tables have always proven a powerful approach. Decision table methodology, however, is more than putting some rules in a few tables. Hear all about proper methodology, table types, notations, best practices and the Decision Model and Notation (DMN) standard, based on years of experience in modeling decisions.

This 3-part online live training class taught by leading decision table expert Jan Vanthienen of the University of Leuven will prepare you to model decision logic and business rules using decision tables.

You will learn the concepts, objectives and application areas of decision tables for business analysis and business process management. You will see how to model and normalize decision table models and how they can simplify business processes. Most importantly, you will get many lessons, drawn from long experience, on how to build, analyze, verify and optimize decision table models according to simple guidelines.

Prerequisite: Decision Modeling with DMN or experience with decision modeling.

Each session is an interactive online event and is recorded so you can view it again or catch up on things you miss. More details here and you can register here. Early bird pricing and team discounts are available.

We are running our regularly scheduled, and highly reviewed, Decision Modeling with DMN training again December 4-6, 9:30am-11:30am Pacific each day.

Decision modeling with the new Decision Model and Notation (DMN) standard is fast becoming the definitive approach for building more effective processes and for specifying requirements for business rules and predictive analytic projects. With decision modeling, you can:

  • Re-use, evolve, and manage business rules
  • Effectively frame the requirements for analytic projects
  • Streamline reporting requests
  • Define analytically driven performance dashboards
  • Optimize and simplify business processes

This 3-part online live training class taught by leading expert James Taylor, CEO of Decision Management Solutions, will prepare you to be immediately effective in a modern, collaborative, and standards-based approach to decision modeling. You will learn how to identify and prioritize the decisions that drive your success, see how to analyze and model these decisions, and understand the role these decisions play in delivering more powerful information systems.

Each step is supported by interactive decision modeling work sessions focused on problems that reinforce key points. All the decision modeling and notation in the class is based on the DMN standard, future-proofing your investment in decision modeling.

Each session is an interactive online event and is recorded so you can view it again or catch up on things you miss. More details here and you can register here. Early bird pricing and team discounts are available.

EIS OpenL Tablets is a product of EIS Group focused on using a spreadsheet paradigm to manage business rules. I last spoke to OpenL Tablets in 2013 and recently got an update on the product. EIS OpenL Tablets is available as open source and in a commercial version. EIS Group is an insurance innovation company founded in 2008 with over 800 employees worldwide.

EIS OpenL Tablets has the usual Business Rules Management System components – a web studio, an engine and a web services deployment framework. It also has a plug-in for Maven, templates and support for logging to a NoSQL database. It uses a spreadsheet paradigm for rule management and this is key to engaging business users. They find that 90% of the business rules that business people need to write can be represented in spreadsheet-like decision tables (Decision Management Solutions finds the same on our consulting projects). As a result they focus on creating all the rules needed in Excel and then provide a single web-based environment (demonstration available here) for validation and life cycle management.

The web studio allows a user to work with multiple projects and allows various groups of users to be given different privileges and access to various projects. Each project contains Excel files, deployment details and other details. Each Excel file can have multiple tabs and each tab can have multiple decision tables as well as simple look up tables, formulas, algorithms and configuration tables. All these can be viewed and edited in the web studio (where there are various wizards and automatic error checking capabilities) or can be opened in Excel and edited there.

Decision tables can have formulas in actions and conditions, supporting a complex set of internal logic. The user can also define a sequence of calculations using the tables as well as datatypes and a vocabulary (allowed values for inputs), sample data/test cases etc. Test cases can be run directly in the web interface and the rules can be traced to see which ones executed in a specific test case. There is an optional element to store a detailed execution log to a NoSQL database in production also.
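
As a generic illustration of the idea (not OpenL Tablets syntax), a decision table with formulas in its conditions and actions can be run against a test case while recording which rules fired – the kind of trace described above. Everything here is invented for the example.

```python
# Illustrative only – not OpenL Tablets syntax. A decision table whose conditions
# and actions contain formulas, run against a test case with an execution trace.
TABLE = [
    # (rule name, condition formula, action formula)
    ("young driver surcharge", lambda q: q["age"] < 25, lambda q: q["premium"] * 1.25),
    ("multi-car discount", lambda q: q["vehicles"] >= 2, lambda q: q["premium"] * 0.95),
    ("garage discount", lambda q: q.get("garaged", False), lambda q: q["premium"] - 50),
]

def run(quote):
    trace = []                                    # which rules executed, as in a test-run trace
    for name, condition, action in TABLE:
        if condition(quote):
            quote = {**quote, "premium": action(quote)}
            trace.append(name)
    return quote["premium"], trace

print(run({"age": 22, "vehicles": 2, "premium": 1000.0}))
# (1187.5, ['young driver surcharge', 'multi-car discount'])
```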

Ongoing changes are tracked for an audit trail and changed versions can be compared in the web studio. A revision can be saved to create a snapshot at which point all the small changes are removed from the log and the snapshot is stored. These larger snapshots can also be compared.

Projects can contain definitions of the rule services to be deployed. Various deployment options are supported, and the user can specify which rules are exposed as entry points. A swagger UI is generated for exposed service calls.

The commercial version of OpenL Tablets supports a dynamic web client, integration with Apache Spark and analytics/advanced modeling. The Apache Spark integration allows very large numbers of transactions to be run through the rules for impact simulation and what-if analysis.

More details on OpenL Tablets are available at http://openl-tablets.org/

I blogged last week about IBM’s AI approach and one piece was still under NDA – new capabilities around trust and transparency. These capabilities were announced today.

As part of trying to address the challenges of AI, IBM has added a trust and transparency layer to its ladder of AI capabilities (described here). They see five primary personas around AI capabilities – business process owners, data scientists, AI ops, application developers and the CIO/CTO. The CIO/CTO is generally the persona who is most responsible, and they are the ones who see the challenges with trust. To use AI, companies need to understand the outcomes – the decisions. Are they fair and legitimate?

The new trust and transparency capability is focused on detecting fairness / bias and providing mitigation, traceability and auditability. It’s about showing the accuracy of models/algorithms in the context of a business application.

Take claims as an example. A claims process is often highly populated with knowledge workers. If an AI algorithm is developed to either approve or decline a claim then the knowledge workers will only rely on it if they can trust and understand how it decided.

These new capabilities show the model’s accuracy in terms defined by the users – the people consuming the algorithms. The accuracy can be drilled into, to see how it is varying. For each model a series of factors can be identified for tracking – gender, policy type, geography etc. How the model varies against these factors can be reviewed and tracked.

The new capabilities can be applied to any model – an AI algorithm, a machine learning model or an opaque predictive analytic model such as a neural network.  IBM is using data to probe and experiment against any model to propose a plausible explanation for its result – building on libraries such as LIME to provide the factors that explain the model result. The accuracy of the model is also tracked against these factors and the user can see how they are varying. The system can also suggest possible mitigation strategies and allows drill down into specific transactions. All this is available through an API so it can be integrated into a run time environment. Indeed this is primarily about runtime support.
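
To show the kind of explanation that libraries like LIME produce, here is a minimal sketch using the open-source lime package against a toy model. This illustrates the general technique only – it is not IBM’s implementation, and the feature names and data are invented.

```python
# Minimal sketch of explaining one prediction with the open-source LIME library.
# The model, features and data are toy examples, not IBM's product or real claims data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy training data: three claim features and an approve/decline label.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=["claim_amount", "policy_age", "prior_claims"],
    class_names=["decline", "approve"], mode="classification")

# Explain one transaction: which factors pushed the score up or down?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())
```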

These new capabilities are focused on fairness – how well the model matches to expectations/plan. This is about more than just societal bias but about making sure the model does not have inbuilt issues that prevent it from being fair and behaving as the business would want.

It’s great to see these capabilities being developed. We find that while our clients need to understand their models, they also need to focus those models on just part of the decision if they are actually going to deploy something – see this discussion on not biting off more AI than you can trust.

This capability is now available as a freemium offering.