
Customers, IBM says, are moving to the cloud but are transitioning through hybrid solutions. IBM is investing heavily in its cloud – in partnerships, technology, patents, volume, data centers and more – and announced two new partnerships this week: Cloudflare and New Relic.

The One Cloud architecture is particularly focused on AI and analytics enablement – cloud infrastructure that assumes you want to use the data on the cloud to drive analytics and AI. It’s also very API-centric and designed to be managed programmatically. Plus the Watson APIs are fully integrated along with the various data capabilities IBM has been developing for its cloud.

IBM Cloud Private is IBM’s key platform for modernizing applications. They are adding capabilities around application transformation, developer tools and the data cloud. Integration across multiple clouds and deployment automation/management are also focus areas.

Transformation Advisor scans existing applications to assess the complexity of migrating them to a container-based environment and, where possible, automates the transformation to containers. Once an application is containerized, the IBM Cloud Private catalog allows it (and standard applications) to be deployed to multiple instances and monitors it once deployed. Applications can also be pushed to public clouds and monitored there. And of course there’s a command line interface for all of this.

All good stuff. Of course you should also think about replacing all that hard-wired code with decision-centric business rules too….


The theme: building a data-driven culture, where evidence-based decisions support bottom-line business objectives and AI is embedded into workflows across your organization. Ensure data is secure and accessible, wherever it lives, and get insights from data and turn them into competitive advantage. Use the entire spectrum of data science, artificial intelligence and machine learning to lay a foundation for a fast-approaching future where AI isn’t just an advantage, it’s essential.

Rob Thomas, GM Analytics for IBM, kicked off a session on putting data to work with AI. Rob began by talking about the impact standard shipping containers had on the shipping industry and how a similar move is required in data – something that will make it easy to combine and analyze data in a standard way – and argued that only this kind of data landscape can support the systematic application of analytics and AI.

ING came on stage to talk about their information architecture – one that addresses regulatory issues but also makes it possible for everyone to access, understand and use data for better decisions. They pulled all their data into a data lake architecture and then mapped the core of this to a standard set of corporate data models/vocabulary based on industry models. Onto this they layered governance and more. This also supports the application of AI, both to improve the data and its metadata AND to improve decision-making.

IBM has a new solution offering – IBM Cloud Private for Data. This is designed to provide an out-of-the-box environment for managing an organization’s data and supporting its broad and deep application of AI and analytics. It makes it easy to bring together on-premises and cloud data, tracks machine learning models running against the data and provides integrated search and preview across the metadata for all this data.

Beth Smith came on stage to add Watson and AI into this mix. Lots of organizations lack the AI skills they need so IBM is launching IBM Watson Studio to help AI teams collaborate around the data an organization has, working easily with the new IBM Cloud Private for Data. It’s open, supporting open source as well as IBM-specific AI capabilities like the pre-trained Watson APIs. It’s underpinned by a catalog that combines data and any analytics you have built against it. It also supports and automates many of the experimentation and training runs that good ML and AI models require – helping reduce the manual load on data scientists – while providing a rich visual interface for much of the work. It’s designed to make it easier to build, easier to run and easier to share the tasks needed for AI.

IBM has also been investing in the services support that companies need, launching the Data Science Elite Team to deliver free initial workshops that help companies get over the hump and get started with more sophisticated analytics and AI.

Nice to see the investment in making AI and analytics easier. Wish IBM would include its Business Rules Management System, Operational Decision Manager, as part of this stack – it would make operationalizing the result much easier.

Ginni Rometty kicks off the main event with her opening keynote focusing on putting smart to work. Her premise is that everything could be changing now because business and technology architectures are changing at the same time – something that does not happen very often. The opportunity is for exponential change across all businesses thanks to the combination of data and AI. And she further argues that the fact that so much of the data that is needed is INSIDE companies makes it possible for established companies to compete – to disrupt and not just be disrupted.

Digital Platforms are the key to this. She emphasized that multiple platforms are going to matter – no-one is going to use just one. These platforms will allow you to embed intelligence in every process across the organization. She feels that AI is going to be used most effectively in combination with people. And she encourages companies to go on offense – to use this intelligence to not just fix things but to really grow exponentially. Plus IBM’s business model is not to monetize their clients’ data but to help their clients do so.

Social disruption is possible too – everyone needs to focus on trust, jobs/skills and inclusion. If AI is a complement to human intelligence then IBM thinks that all jobs will be disrupted – some will be eliminated, some will be created, all will be changed.

Lots of announcements are coming, she says: around cloud, especially making it easier to integrate private clouds with public ones; around strategic partners; and around Watson, especially making it easier to use Watson and embed it in work.

Customers were up next to help reinforce Ginni’s points. Verizon’s CEO was first, talking about 5G and about strategic partners in the API economy. In particular they want to build better ecosystems around their core transmission capability. He also emphasized the importance of data management and trust, especially for a network. Key point – building a platform but partnering to build things on top of it.

Maersk – one of the world’s largest container shipping companies – came up next to talk about how they worked with IBM to use blockchain to disrupt the way shipping works. Shipping companies are coming on board to digitize the way they share information about containers and vessels/vehicles, using blockchain to make it easier to share and update information in a trusted way. And the organizations participating include government agencies, insurers, ports and much more. A good example of the value of building an open platform, not just a company one.

RBC – Royal Bank of Canada – came up next. One of the biggest changes, the CEO says, is the way people look for the financial services they need – they go online where before they would have come to a bank branch. Mobile and internet payment platforms mean that people don’t see the brand any more – they set the card up in an app once. And mobile is changing the way they run their back office systems. All of this puts pressure on their ability to develop everything – especially AI – so they are partnering and moving to cloud. And of course, because it’s money, you can’t just push something out there and see if it works – people want it fast but they want it secure and reliable too. In particular, RBC sees using AI to really improve customer service and customer engagement.

Ginni came back to re-emphasize that this is an inflection point, as simultaneous business architecture and technology architecture changes create a once-in-a-lifetime opportunity to become “an incumbent disrupter”.

DecisionCAMP 2018 is in Europe – Luxembourg, to be precise – September 17-19. This is a great event and well worth your time if you are interested in the nuts and bolts of decisioning technology, Decision Management or decision modeling. Last year’s event in London was great, with a wide range of presentations and lots of great content. Plus you get to meet a bunch of folks really committed to decision-making approaches and technologies.

Anyway, it’s time to submit papers – the Call for Papers is here. If you have something to say about decision modeling, the use of business rules and analytic or AI technology for decision automation, optimization, how decision management and blockchain can deliver smart contracts, or really anything else interesting and decision-centric, please go ahead and send a proposal. Like the rest of the committee, I am looking forward to seeing some great topics again this year.

Get those submissions in by March 25 if you can – or at least let us know you plan to!

The Decision Management Community is trying to establish a most influential people list for Decision Management specifically. The plan is to have members of the community vote based on nominations provided here. So if you have someone you think has demonstrated leadership, engagement and innovation in the Decision Management community, why not go ahead and nominate them? And if you are not already a member of the DM Community, why not register so you can vote and stay in touch with the articles and news the site collates.

AI is a decision-making technology. A focus on decisions, not a separate AI initiative, delivers business value and a strong ROI.

A recent HBS survey of executives adopting Artificial Intelligence (AI) provides critical context for companies considering how best to invest in AI:

  • Few companies have made much progress to date — most are experimenting. You still have time to consider how best to invest in AI.
  • AI works best in companies that have already invested in digitizing their business as it enhances digital channels, digital decisions and digital processes.
  • While there is plenty of hype, AI works when it is implemented correctly.

As companies invest in AI technologies, it is clear a technology-led approach does not work. To get business value from AI, companies should focus AI efforts on improving business decisions. We have just published a new brief that lays out a clear, straightforward approach to succeeding with AI by leading with business decisions. It can be applied whether you have not yet begun or need to focus and reset efforts that aren’t making the progress you desire.

Get the brief here.

OneClick.ai is a company taking advantage of the fact that many AI problems use similar approaches to reduce the time and cost of individual AI projects. It was founded and received its initial funding in 2017, and launched the product last year. The company has a core team of 8 in the US and China with 40 active enterprise accounts supporting over 20,000 models.

OneClick.ai uses AI to build AI, helping companies get into AI more quickly and more cheaply. The intent is to get them fault-tolerant, scalable APIs for custom-built AI solutions in days or even hours instead of weeks and months. They aim to automate the end-to-end development of AI solutions based on deep learning, using meta-learning to design and evaluate millions of deep learning models and find the best ones. They are also working on capabilities to explain how those models work, addressing one of the concerns with deep learning: its lack of interpretability.

The product is aimed at non-technical users, with a chatbot interface to allow experts to interact with the trained models. Users can choose from public cloud, private cloud or hosted versions, and software vendors have access to an OEM version to integrate the technology into customized solutions. A wide range of AI use cases are supported, from classic predictions (weekly and monthly sales, equipment failure) to image recognition (recognizing brands in shelf images to see how much shelf space they have), classification (putting complaint emails into existing categories and identifying new problems) and semantic search (finding the most helpful supporting material for a fault). Several of their existing customers were already trying to use AI and have found OneClick.ai significantly quicker at getting to an accurate model.

The tool is browser-based and supports multiple projects. Each project has a chatbot that can answer data science questions. Data is provided by uploading flat files that contain a learning data set – numeric, categorical, date/time, text or images. Raw data is enough, but users can add domain-specific features if they have domain knowledge suggesting a feature will likely be helpful. Users can develop classification, regression, time-series forecasting, recommendation or clustering models and target various performance measures depending on the type of model – accuracy, mean absolute error etc.

The engine builds many models and presents the best, from which the user can select the one they prefer (based on their preferred metric and the latency of the deployed model, which is calculated for each model). The engine automatically keeps 20% of the data out for testing and uses the other 80% for training. Under the covers, the engine keeps refining the techniques it uses based on previous training results. Once models are built, the chatbot can answer various questions about them, such as usage tips and model comparisons. Users can deploy the models as an API for real-time access with a few clicks. A future update will also allow model updates and deployment through an SDK.
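OneClick.ai’s engine is proprietary and deep-learning based, but the general pattern described above – hold out 20% of the data for testing, train several candidate models, and pick the winner by the user’s chosen metric – can be sketched in a few lines with scikit-learn. This is purely illustrative, not the product’s actual approach:

```python
# Minimal sketch of automated model search: 80/20 split, several candidates,
# pick the winner by a chosen metric. Illustrative only -- not OneClick.ai's engine.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # keep 20% out for testing

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural_net": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                                random_state=0),
}

results = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)                                     # train on the 80%
    results[name] = accuracy_score(y_test, model.predict(X_test))   # score on the 20%

best = max(results, key=results.get)
print(f"Best model by accuracy: {best} ({results[best]:.3f})")
```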

You can find out more here.

March 27-29 I am teaching a 3-part online live training class that will prepare you to be immediately effective in a modern, collaborative and DMN standards-based approach to decision modeling.

You’ll learn how to identify and prioritize the decisions that drive your business, see how to analyze and model these decisions, and understand the role these decisions play in delivering more powerful information systems.

Each step in the class is supported by interactive decision modeling work sessions focused on problems that reinforce key points. All the decision modeling and notation in the class conforms to the DMN standard, future-proofing your investment in decision modeling. DMN-based decision modeling works for business rules projects using a BRMS, predictive analytic or data science projects, manual or automated decisions and even AI.

Click here for more information and registration. Early bird pricing is available through March 1, 2018 so book now!

I have written before on how a decisions-first approach is ideal for success with AI. After reading David Roe‘s article 11 Questions Organizations need to Ask Before Buying into AI I thought a few more comments were in order:

If you focus on decisions first and on how you must/could/want to make the decision, you can rapidly tell if you really need AI at all. Often business rules and simpler analytics are enough – but you need to know what decision you are trying to make before you can tell. Similarly if you don’t know what else, besides the AI, is going into the decision then you won’t be able to tell how much impact AI is going to have. It’s easy to have a compliance or policy constraint undermine the “lift” you get from AI.

The business case for most AI is “better decisions”. If you don’t know which decisions, and what counts as better, then your AI is just a gimmick. Know what decisions you are trying to improve and how before you begin to ensure your AI has a real business case.

Decision models are great for showing you what else goes into a decision besides AI. This lets you see how exposed you are when the AI gets it wrong, how good your predictions need to be to be helpful, and much more. Understand the context first and it’s easier to manage, and get support for, your AI plans.

Lastly, integrate AI into your decisioning stack – make sure your business rules, predictive analytics, machine learning and AI can be integrated to deliver a single, better decision (based on a decision model).

If you want to learn more about decision modeling, contact us or come to our live online decision modeling with DMN training in March.


Back in November I posted a humorous Thanksgiving guest decision model to LinkedIn. I just repeated the exercise with a decision model to help you assess a New Year’s Resolution.

While these are just for fun, I thought it might be worth sharing how I built this one. Normally we like to work top-down talking to business experts but in this case I did not have any to work with so I had to start bottom-up with research.

  1. I started with some articles – found using Google – and each became a Knowledge Source in the diagram.
  2. I looked over each and identified the things it implied you should decide about a New Year’s resolution to help you decide if it was a good one or not.
  3. As I added these Decisions to my model, I connected them to the Knowledge Sources that related to them (some Decisions recurred in several articles, of course).
  4. One set of Decisions – deciding if a resolution met the five criteria to be SMART – could be grouped as sub-decisions of a higher-level Decision.
  5. Others were grouped based on thematic elements – a common approach where there is not a specific structure driven by regulation or similar.
  6. This gave me a structure – the Decision I was trying to model, some high-level sub-decisions and logically grouped sub-decisions.
  7. Cleaning up the diagram required putting copies of the Knowledge Sources on the diagram (though these point to the same instance in the underlying repository).

In this case I didn’t deal with Input Data as the model seemed useful with just Decisions and Knowledge Sources. To finish it, we would need to identify data elements and write decision logic (or develop predictive models) for each element.
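Just to make the structure concrete for the more technically minded, here is a toy representation of what this bottom-up process produces – decisions, their sub-decisions and the knowledge sources that support them. The names are hypothetical stand-ins rather than the actual model, and a real DMN tool keeps all this in a repository rather than a Python dict:

```python
# Toy representation of a decision model built bottom-up: each decision lists its
# sub-decisions and the knowledge sources (articles) that support it. Names are invented.
decision_model = {
    "Is this a good New Year's resolution?": {
        "sub_decisions": ["Is the resolution SMART?", "Does it fit my life?"],
        "knowledge_sources": ["Article A", "Article B"],
    },
    "Is the resolution SMART?": {
        "sub_decisions": ["Specific?", "Measurable?", "Achievable?",
                          "Relevant?", "Time-bound?"],
        "knowledge_sources": ["Article A"],
    },
    "Does it fit my life?": {
        "sub_decisions": [],
        "knowledge_sources": ["Article B", "Article C"],
    },
}

def knowledge_sources_for(decision, model, seen=None):
    """Collect the knowledge sources a decision depends on, directly or via sub-decisions."""
    seen = seen or set()
    sources = set(model.get(decision, {}).get("knowledge_sources", []))
    for sub in model.get(decision, {}).get("sub_decisions", []):
        if sub not in seen:
            seen.add(sub)
            sources |= knowledge_sources_for(sub, model, seen)
    return sources

print(knowledge_sources_for("Is this a good New Year's resolution?", decision_model))
```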

If you are interested in decision modeling, why not register for our upcoming live online Decision Modeling with DMN class in March.

Happy New Year.

SAS Decision Manager is SAS’ platform for decision automation and is getting a significant update in December 2017. I wrote a product review of SAS Decision Manager in 2014 and a number of things have changed in the new release, which is on the new SAS Platform and leverages new SAS Viya technologies.

SAS Decision Manager is aimed at an analytics ecosystem that is a moving target these days with cloud-enabled analytics that are more open and API-driven,  more people doing data science, and different kinds of data coming to the fore. Meanwhile IoT is adding new data streams and demanding decision-making at the edge while machine learning and AI are hot trends and offer real possibilities.

“If analytics does not lead to more informed decisions and more effective actions, then why do it at all?”
Mike Gualtieri, Forrester.

This quote embodies the need to operationalize these analytics and enable faster decision-making. SAS believes, as we do, that one must put analytics into action – operationalize your analytics – to get value. You need to go from data to discovery to deployment. In this context, SAS Decision Management is a portfolio of products to create and manage decisions:

Overall architectural view

  • SAS Model Manager – import and govern models, monitor and retrain models, deploy models. And increasingly any kind of models including R, Python…
  • SAS Decision Manager – build business rules, build decisions that use analytics and rules in a decision flow, deploy as decision services. The SAS Business Rules Manager product has been subsumed into the new SAS Decision Manager product to create a single environment.
  • SAS Event Stream Processing Studio – now part of the SAS Decision Management portfolio so that decisions can be injected into the streaming data environment – in real time as microservices but also directly into streams.
  • Execution – covers Cloud Analytic Services (Viya) for testing and deployment as well as model training, the Micro Analytic Service for REST, ESP for streaming data, and in-database or in-Hadoop execution.
  • Plus, open APIs to allow REST, Python, Lua, Java and CLIs to access the platform. R and PMML can be brought into the modeling tools too.

SAS Decision Manager wraps business rules, analytic models, flow logic (and soon Python) into services while linking to Model Manager to access the models being used. These models are developed in the new SAS VDMML Model Studio. The new release of SAS Decision Manager is built on the new SAS Platform, which brings benefits around cloud readiness, multi-tenancy etc. This release also folds the Business Rules Management offering into SAS Decision Manager.

Key elements overall include:

  • Visual Decision Modeling – decision simulation and path tracing, model and business rule integration and streamlined business rules management
  • Unified publishing to ESP, Cloud Analytic or Micro Analytic services, in-database or in-Hadoop
  • Model Manager integration to make it easier to share models and support for more kinds of models as well as managing publishing of models to multiple end points (e.g. in IoT) and automating updates etc.
  • Open APIs from Viya, workflow etc.

Some specific improvements for SAS Model Manager:

  • Common Model Repository with GUI and REST interfaces to manage content and search to find the right models
  • Can register models from SAS VDMML Model Studio and import models from PMML, Python, Zip files, etc.
  • Model publishing to various defined targets, from in-database and in-Hadoop to SAS, streaming or real-time with the SAS Micro Analytic Service.
  • Model comparison in terms of statistics and plots, as well as the definition of champion and challenger models.
  • Version control with revert, tracking, creation of new versions

Specific improvements for SAS Decision Manager:

  • Decision inventory in a common repository with access to the models in the model repository as well as to the rules available. All these elements are versioned.
  • New graphical decision flow editor that brings analytic models from model manager, rules and specific branching logic.
  • The testing environment shows how data flows through the decision flow to show which paths were most heavily used. Data can be brought in dynamically or from existing data sets.
  • A new editor allows direct access to the models or rules from the flow and gives access to repository information as the diagram is edited. Rules are managed directly in the same repository.
  • Can create temporary information items on the fly for use in rules
  • Can bring in lookup tables from the SAS data environment
  • Ruleset editor allows data to be pulled in as the vocabulary (copying from another or accessing the data layer) and then rules can be written.

Test data results showing which elements of the decision flow have the most transactions.

In addition to the December release, the plan is to update the product more regularly, on a 6-month cycle, with new algorithms, more integrations, more use of the Viya APIs etc.

You can get more information on SAS Decision Manager here.

All industry standards offer interchange. Successful standards offer skills interchange not just a technical interchange format.

The Decision Model and Notation (DMN) decision modeling standard has a published XML interchange format, of course, and several of the committee’s members are working really hard to iron out the remaining issues and make the XML interchange more robust. The ability to interchange decision models between vendors is a valuable one but the opportunity that DMN offers for skills interchange is, if anything, even more valuable.

DMN offers two critical kinds of skills interchange – it offers those working with business rules or decision logic a way to transfer their skills between products and it offers business analysts a way to transfer skills between different kinds of decisioning projects.

The vast majority of the business logic in a decisioning system can be defined using the two core DMN components:

  • Decision Requirements Diagrams structure decision problems, breaking them into coherent pieces. They show where data is used and what knowledge assets (policies, regulations, best practices) are involved.
  • Decision tables specify the logic for most of the decisions on the diagram using simple constructs.

You don’t get 100% of the execution defined using these two elements – you need to add “glue” of various kinds – but almost 100% of the business content is defined using them. This means someone who knows DMN can transfer these skills between DMN tools. But it also means they can transfer these skills to business rules products too, as the approach of decomposing a decision problem into a Decision Requirements Diagram before writing logic is totally transferable and, frankly, most decision tables look and work the same even if they don’t support DMN yet.
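To make that concrete, here is a minimal, hand-rolled sketch of how a DMN-style decision table behaves – rows of conditions evaluated top-down with a “first” hit policy – independent of any particular product. The discount rules are invented for illustration:

```python
# Minimal sketch of a DMN-style decision table with a "first" hit policy.
# The discount rules below are invented for illustration.
RULES = [
    # customer segment, minimum order total, output discount
    {"customer": "Gold",   "min_order": 1000, "discount": 0.15},
    {"customer": "Gold",   "min_order": 0,    "discount": 0.10},
    {"customer": "Silver", "min_order": 1000, "discount": 0.05},
    {"customer": None,     "min_order": 0,    "discount": 0.00},  # catch-all row
]

def decide_discount(customer: str, order_total: float) -> float:
    """Evaluate the table top-down and return the first matching rule's output."""
    for rule in RULES:
        customer_ok = rule["customer"] is None or rule["customer"] == customer
        if customer_ok and order_total >= rule["min_order"]:
            return rule["discount"]
    raise ValueError("No rule matched")  # a complete table should never get here

print(decide_discount("Gold", 1200))   # 0.15
print(decide_discount("Silver", 200))  # 0.0
```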

The second kind of skills interchange comes because decision modeling works for lots of different kinds of projects. We have used decision modeling and DMN to:

  • Define business rules / decision logic for automation
  • Frame requirements for predictive analytics and machine learning projects
  • Orchestrate a mix of packaged and custom decisioning components including business rules, predictive analytics, AI and optimization
  • Model manual decision-making for consistency, mixing manual and automated decision making
  • And more – see Decision Modeling has value across many projects

This means that business analysts who learn decision modeling can apply that skill across lots of projects.

Learn decision modeling and learn DMN. It’s a great skill that lets you express business decision problems and one that is transferable – interchangeable – across projects and products.

BPMInstitute.org would like to get your insights on how you’re using Digital Decisioning and Analytics in your organization. Your feedback will help shape articles and focus at BPM Institute for 2018.

Digital Decisioning and Analytics survey

  • Are you using analytics and reporting to innovate business functions and models?
  • What is the state of your analytics efforts as they relate to processes?

Share your insights with BPMInstitute.org and you’ll be entered into a random drawing to win one free OnDemand course of your choice.


AI is a hot topic and we get asked a lot by clients how they can succeed with AI or cognitive technology. There’s often a sense of panic – “everyone is doing AI and we’re not!” – and a sense that they have to start a completely separate initiative, throw money at it and hope for the best. In fact, we tell them, they have some time – they need to keep calm and focus on decisions.

The folks over at HBR had a good article about adopting AI based on a survey of executives. This is well worth a read and makes a couple of critical points.

  1. AI really does work, if you use it right. There’s plenty of hype but also plenty of evidence that it works. But like all technologies it works when it works, it’s not a silver bullet.
  2. Not everyone is using AI – in fact hardly anyone is doing very much with it. Most regular companies are experimenting with it, trying it out in one small area. Despite what you read there’s still time to figure out how to use AI effectively in your organization. Stay Calm.
  3. AI works better if you have already digitized your business. Of course AI is a decision-making technology, so what matters here is that you have digitized decision-making.  Focus AI on digital decision-making.

To succeed with AI we have a concrete set of suggestions we give to customers, many of which overlap with the HBR recommendations as you would expect:

  • Get management support
    The best way to do this is to know which decisions you are targeting and show your executives how these decisions impact business results. Being able to describe how improving a particular decision will help an executive meet their objectives and exceed their metrics will get their attention.
  • DON’T put technologists in charge
    Like data analytics, mixed teams work best for AI. Make sure the team has business, operations, technology and analytics professionals from day 1. For maximum effectiveness, use decision modeling with DMN to describe the decision-making you plan to improve as this gives everyone a shared vision of the project expressed in non-technical terms.
  • Focus on the decision not AI
    You will want to mix and match AI with other analytic approaches, explicit rules-based approaches and people-based approaches to making decisions. Most business decisions involve a mix:

    • Rules express the regulatory and policy-based parts of your decision
    • Data analytics turn (mostly) structured data into probabilities and classifications to improve the accuracy of your decisions
    • People make the decisions that involve interaction with the real world, as well as those that are poorly scoped or defined
    • And AI handles natural language, image processing, really complicated pattern matching etc.
  • Make sure you focus on change management
    Change is always a big deal in Decision Management projects – as soon as you start changing how decisions are made and how much automation there is you need to plan for and manage change. AI is no exception – it will change roles and responsibilities and change management will be essential for actual deployment (distinct from a fun experiment).

AI is a decision-making technology. As such it is a powerful complement to Decision Management – something to be considered alongside business rules and analytics, and integrated into a coherent decision model. Here’s one example, for a company that needed to automate the assignment of emails. This depended on who each email was from, what it was about and how urgent it was (the sketch after the list shows this orchestration in code):

  • Deciding which client an email was from involved rules run against the sender and sender’s domain.
  • Deciding on the subject of an email involved rules about senders (some automated emails always use the same sender for the same subject) and rules about subject lines (some are fixed format).
  • This left too many unclassified, however, so the subject and body of the text were analyzed using text analytics to see which products were mentioned in the email to identify them (analytically) as the subject of the email.
  • Urgency was hard too. Historical data about the client was analyzed to build a customer retention model. This analytic score was used to increase the urgency of any email from a client who was a retention risk.
  • Finally AI was used to see what the tone of the email was – was the email a complaint or a problem or just a description? The more likely it was to be a problem or complaint, the higher the urgency.
  • Each of these sub-decisions used a different technology, but all were orchestrated in a single decision model to decide how to assign the email.
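Pulling the list above together, here is loosely what that orchestration looks like in code. The helper functions are hypothetical stand-ins for the rules engine, text analytics, retention model and tone analysis – the point is that each sub-decision can use a different technology while a single decision function orchestrates them:

```python
# Sketch of orchestrating rules, analytics and AI sub-decisions into one
# email-assignment decision. All helper functions are hypothetical stand-ins.

def client_from_sender(sender: str) -> str:
    """Rules on the sender and sender's domain (stand-in)."""
    return "Acme Corp" if sender.endswith("@acme.example") else "Unknown"

def subject_from_rules_or_text(sender: str, subject: str, body: str) -> str:
    """Rules on senders and subject lines first; fall back to text analytics (stand-in)."""
    if "invoice" in subject.lower():
        return "Billing"
    return "General"  # a real system would call a text-analytics service here

def urgency(client: str, body: str) -> int:
    """Combine a retention-risk score with the tone of the email (stand-ins)."""
    retention_risk = 0.8 if client == "Acme Corp" else 0.2                 # analytic score
    complaint_likelihood = 0.9 if "unacceptable" in body.lower() else 0.1  # AI tone analysis
    return round(10 * max(retention_risk, complaint_likelihood))

def assign_email(sender: str, subject: str, body: str) -> dict:
    """Top-level decision: orchestrate the sub-decisions into an assignment."""
    client = client_from_sender(sender)
    topic = subject_from_rules_or_text(sender, subject, body)
    score = urgency(client, body)
    queue = f"{topic}-priority" if score >= 7 else topic
    return {"client": client, "topic": topic, "urgency": score, "queue": queue}

print(assign_email("ops@acme.example", "Invoice problem", "This is unacceptable."))
```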

AI is certainly new and different, but success with it requires the same focus on decisions and decision-making. Put decisions first.

My friends at Actico recently had me record some videos on Decision Management and decision modeling with DMN. Here’s the first – 3 reasons why financial and insurance companies should adopt Decision Management.

Enjoy.

As part of the build up to Building Business Capability 2017 I gave an interview on transforming the business. Check it out.

If you want to come to BBC 2017, there’s still time to register with code SPKDMS for a 10% discount.

If you are coming to BBC 2017, don’t forget to register for my tutorial Decision-Centric Business Transformation: Decision Modeling. See you there.

I have been working on Decision Management since we first started using the phrase back in 2002 – I’m probably the guilty party behind the phrase – and Decision Management Solutions (the company I run) does nothing but Decision Management. This gives us a unique perspective on new technologies and approaches that show up. One of the most interesting developments in Decision Management recently has been the use of decision modeling – especially the use of decision modeling with the Decision Model and Notation (DMN) standard.

In DMN a decision is the act of determining an output or selecting an option from inputs. In this context we mean a repeatable decision, allowing us to define a decision model for our decision-making approach. If we use DMN then we:

  • Document a decision and its sub-decisions, the components of decision-making
  • Capture specific questions and allowed answers for each decision and sub-decision
  • Identify the data required and how it is used by these decisions
  • Document sources of knowledge about decision-making
  • Define relationships between decisions, metrics, organizations and processes

Our experience with DMN is both broad and deep – we have trained nearly 1,000 people and used decision modeling on dozens of real-world projects. We have seen how valuable it is on these projects and we particularly notice how many different kinds of projects it is valuable for.

Unlike some DMN proponents, we don’t think that defining executable decision models is the only reason for using DMN. Here are some other reasons you might use decision modeling and DMN on your projects:

  • Eliciting decision-making requirements using DMN decision models is more productive, more fun, and much more rapid than traditional approaches to business rules or requirements. This is true no matter how you plan to implement the requirements – as an executable DMN model, as business rules in a BRMS, as analytics or as a decision-support environment.
  • DMN decision models  make impact analysis and traceability really work at a business level. You can answer questions like “who needs to know if I change these rules” or “who has to believe this analytic” and see how changes will impact your business results.
  • DMN decision models let you mix and match analytics, AI, rules, manual decision-making, machine learning, optimization and all other decision-making approaches in a single model. As the world moves beyond explicit logic to data-driven decision-making, this is critical future-proofing for your business.
  • DMN decision models let data science teams see what the requirements are for their analytic models, helping focus their efforts and ensure that the results will have a clear path to implementation and business impact.


Specific projects we have used decision modeling on have shown us that decision modeling with DMN is:

  • More Rapid
    It’s much faster than traditional approaches with customers telling us that in 1 hour we developed an understanding of the problem “that would have taken 10”
  • More engaging
    Business SMEs participate more fully in decision modeling and our experience is that they get so into it that they start freeing up their schedule so they can participate more!
  • Really enlightening
    Decision modeling clarifies the real requirements for data and logic in a decision. So much so that we regularly hear from experienced SMEs that they learn something from building the model. Some training departments in our customers have taken to using the decision model to train people….
  • Much more open
    Because human and automated decisions can be modeled together you can use DMN decision modeling when dealing with any kind of data, any kind of decision. That makes it easy to adopt as part of your standard approach.
  • Better at finding reuse
    Because business users can clearly express their problem and share this with others you can get agreement and discussion about reuse and common/shared decision-making long before you get to implementation

We have had great success with decision modeling and are helping many organizations adopt it right now by delivering a business value pilot that goes from a business need to a working pilot in a few weeks. Get in touch if we can help you.

Last week we completed the main phase of a proof of concept project at a client based in Jakarta, Indonesia. After the report out, I tweeted:

Loving watching a Dr at one of my clients explain (in Bahasa) the decision model for claims handling they built #dmn #decisionmgt

Tweets are great for this kind of quick heartfelt observation but I thought a blog post to explain why this is such a breakthrough was called for.

First, some background. This project was to develop a proof of concept decision service for claims handling. The client was already processing claims and integrating a Business Rules Management System. Our project was to build on that, introduce decision modeling, and show them how this decision-centric approach could help them maximize the value of their BRMS investment by engaging business owners, clarifying requirements, and enabling continuous improvement of the decision. Two things made this project exciting for me:

  • By this point in the project, the business SMEs have not only worked with us to create the model, they have also been using the model to discuss their own business. What makes a claim complex or risky? How do we want to handle this type of claim from this type of customer? This has brought clarity to their decision-making, making it possible for us to automate it but also enabling them to improve the manual aspects of the decision-making and the role of the decision-making in the overall claims process.
    In our experience, decision models provide a unique vehicle for this kind of debate and engagement among business SMEs. They give SMEs a way to deeply understand and build agreement about the way they want to make decisions.
  • Having worked with this decision model, these SMEs can now stand up in front of a crowd of their peers and colleagues and explain the decision model, discussing exactly what it means for how claims will be processed. This is transformative: this is not a group of business users who understand their requirements but a group that understands the system, the algorithm that will drive their decision-making.
    The sense of ownership, the clarity, the degree of detail with which they understand this new system component are amazing. This is not a black box to them. They understand and can explain how they will monitor and improve it. This understanding – and their willingness to assign one of their number to monitor and improve the rules in this decision service every week – is truly inspiring.

I am looking forward to seeing this in production, and looking forward further to working with this very impressive group moving forward. Now if only I spoke Bahasa…

Drop us a line if this is something you would like to know more about.

Some other notes on the project for the more geeky among you:

  • We worked directly with the subject matter experts to build a decision model using the Decision Model and Notation (DMN) standard and our DecisionsFirst Modeler and methodology. These models are very effective at eliciting requirements for decision-making systems, helping the subject matter experts clarify their approach
  • We (partially) implemented this model in a commercial BRMS to show how this would work. All the artifacts created were linked back to the decision requirements we modeled. This enables the business users to easily find the rules they want to change, take advantage of the impact analysis of a decision model and still use the governance, version control and security features of a modern BRMS.
  • The decision model identified clear opportunities for predictive analytic models, modeling the decision-making that would leverage those predictions. This means that the analytics team has a clear goal as it analyzes data and knows exactly how to operationalize models they build.
  • A dashboard was then designed to show how the data produced by this could be used to monitor and continuously improve the decision and the rules that automate it. The data created by executing the decision model – all the sub-decision outcomes – is exactly the data needed to analyze the overall decision-making approach when it fails to deliver the business outcome desired. No additional design work is required for the dashboard – all the work has been done by designing the decision (a tiny sketch of this kind of analysis follows the list).
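As a tiny, hypothetical illustration of that last point: because the decision service logs every sub-decision outcome, those logs can be sliced directly against the business outcome – for example with pandas – with no additional dashboard design work. The column names and data here are invented:

```python
# Toy illustration: sub-decision outcomes logged by the decision service, analyzed
# against the business outcome. Column names and data are hypothetical.
import pandas as pd

log = pd.DataFrame([
    {"claim_id": 1, "complexity": "High", "risk": "Low",  "routing": "Manual", "outcome_ok": False},
    {"claim_id": 2, "complexity": "Low",  "risk": "Low",  "routing": "Auto",   "outcome_ok": True},
    {"claim_id": 3, "complexity": "High", "risk": "High", "routing": "Manual", "outcome_ok": True},
    {"claim_id": 4, "complexity": "Low",  "risk": "High", "routing": "Auto",   "outcome_ok": False},
])

# Which sub-decision outcomes correlate with poor business outcomes?
print(log.groupby(["complexity", "risk"])["outcome_ok"].mean())
print(log.groupby("routing")["outcome_ok"].mean())
```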

Neuro-ID™ started about 7 years ago with the premise that by monitoring how people use their keyboards and mice one could identify the confidence level of the person filling out a form – and that this could be done without any personally identifiable information. They developed a Neuro Confidence Score™ (Neuro-CS™), which they have patented. Their focus is on the questions companies ask of their customers. Many of these questions are risk-related – they are asked to help the business establish how risky someone or something is – and companies lack confidence in the answers they get. Neuro-ID likes to say “Smarter Questions, Better Bottom Line™”.

There is an inherent tension in how organizations design surveys or online forms. Making an online form “frictionless” is a good objective as it makes it more likely people will fill it out. But it is hard to do this if one is also concerned about compliance. A focus on compliance can lead organizations to ask for too much detail and so create friction while a frictionless experience can easily fail to check on someone.

Neuro-ID’s technology delivers prescriptive analytics that score someone’s behavior in terms of the confidence with which those questions are answered (as well as some supporting attributes). As an example, consider declarations in financial applications. The technology monitors the session to see how people answer, how they move their mouse, what options they pick, which things they change. A baseline is created for each person as they interact and subsequent actions are compared to assess their confidence. The confidence of their movements reflects whether they are concealing something or don’t understand the question or are just not sure what the right answer is.
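This is not Neuro-ID’s patented Neuro-CS, but the general baseline-and-compare idea can be illustrated with a toy sketch: measure each answer’s behavior (here just response time and edit count) against the same user’s own baseline, so language skills or eyesight don’t skew the result. All numbers and thresholds below are invented:

```python
# Toy sketch of baseline-and-compare behavioral scoring (NOT Neuro-ID's Neuro-CS).
# Each question's behavior is compared to the same user's baseline via a z-score.
from statistics import mean, stdev

baseline_response_times = [3.1, 2.8, 3.4, 2.9, 3.2]   # seconds on baseline questions
mu, sigma = mean(baseline_response_times), stdev(baseline_response_times)

def confidence_flag(response_time: float, edits: int, z_threshold: float = 2.0) -> str:
    """Flag answers whose behavior deviates strongly from the user's own baseline."""
    z = (response_time - mu) / sigma
    if z > z_threshold or edits > 1:
        return "low-confidence"   # hesitation or repeated edits relative to baseline
    return "typical"

print(confidence_flag(response_time=9.5, edits=2))  # low-confidence
print(confidence_flag(response_time=3.0, edits=0))  # typical
```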

The technology sits behind existing forms and does not collect any personal information or PII. Because it compares the user to themselves, a lack of language skills or poor eyesight does not impact the score. Forms can add baseline questions before asking risk-related questions or can treat all questions as both baseline and risk relevant questions. It also detects meaningful edits, allowing it to ask questions like “did you overstate your income”. Experience is that this often triggers better behavior.

The technology generates a confidence level on each question. It has an interactive mode allowing loan officers or others to replay the interaction, and everything is available programmatically through an API. A decision ID is used at the Neuro-ID end, which the client company matches to a particular applicant, allowing the technology to store detailed records without knowing who someone is. While mouse and keyboard are the most common environment, the technology also handles touch screens by assessing hesitations and changes.

Neuro-ID can be used for risk mitigation, fraud prevention or user experience design depending on the situation. An initial target is traditional banks and FIs working in the Prime segment. These organizations need a clean, quick online onboarding process that is self-directed yet does not expose them to unnecessary risk. It’s also effective with credit-invisible customers, as it can send additional questions to check for third-party verification when confidence is low.

A really interesting technology in my opinion. You can find more at www.neuro-id.com.

Silvie Spreeuwenberg of LibRT came up after lunch to talk about a rules-based approach to traffic management. Traffic management is a rapidly changing area, of course, thanks to IoT and self-driving cars among other things.

When one is considering traffic, there are many stakeholders: not just the road user, but also businesses reliant on road transport, safety authorities etc. The authorities have a set of traffic priorities (where they need good flow), they have delays and they have restrictions for safety or legal reasons. They manage all this today and expect to keep doing so, even as technology evolves.

To manage this they create lots of books about traffic management, incidents and other topics for each road. Each contains flow charts and instructions. This creates a lot of overlap, so it’s important to separate problem definition from problem solution and to be specific – differentiating between things you must or may not do and those that are or are not actually possible.

The solution involves:

  • Policy about priority and traffic management norms
  • Identifying decision points, flow control points and segments in the road
  • Standard services – increase flow, decrease flow, reroute on a link in the network
  • Decisions to decide what the problem is, determine the right thing to do, see if there’s anything conflicting, and execute

The logic is all represented in decision tables. And applying the approach has successfully moved traffic to lower priority roads. Plus it fits very well with the way people work and connects changes in policies very directly to changes in behavior.

Marcia Gottgtroy from the New Zealand tax authority presented on their lessons learned and planned developments in decision management. They are moving from risk management to a business strategy supported by analytical decision management. The initial focus was on building a decision management capability in the department, starting with GST (sales tax), and it went very well – quickly producing a decision service with proof of straight-through processing and operational efficiency. The service also had a learning loop based on its instrumentation. They automated some of this (where the data was good) but did manual analysis elsewhere – not trying to over-automate nor wait for something perfect.

After this initial success, the next step is to focus on business strategy and get to decision management at an enterprise level: hybrid and integrated solutions supported by a modern analytical culture driven by the overall strategy. They need to define a strategy, a data science framework and a methodology – all in the context of an experimental enterprise. They began to use decision modeling with DMN – using decision requirements models to frame the problem improved clarity, understanding and communication, and it documented this decision-making for the first time.

But then they had to stop as the success had caused the department to engage in a business transformation to replace and innovate everything! This has created a lot of uncertainty but also an opportunity to focus on their advanced analytic platform and the management of uncertainty. The next big shift is from decision management to decision optimization. Technology must be integrated, different approaches and an ability to experiment are key.

Nigel Crowther of IBM came up next to talk about business rules and Big Data. His interest is in combining Big Data platforms and AI with the transparency, agility and governance of business rules. Big Data teams tend to write scripts and code that are opaque – something business rules could really help with. Use cases for the combination include massive batches of decisions, simulations on large datasets and detecting patterns in data lakes.

The combination uses a BRMS to manage the business rules, deploys a decision service and then runs a Map job to fetch this and run it in parallel on a very large data set – distributing the rules to many nodes and distributing the data across these nodes so the rules can be run against it in parallel and very fast. The Hadoop dataset is stored on distributed nodes, each of which is then run through the rules in its own Map job before being reduced down to a single result set – bringing the rules to the data. This particular example uses flat data about passengers on flights and uses rules to identify the tiny number of “bad actors” among them. At 20M passengers per day, it’s a real needle-in-a-haystack problem. The batch process is used to simulate and back-test the rules, and then the same rules are pushed into a live feed to make transactional decisions about specific passengers. So, for instance, a serious setup with 30 nodes could scan 7B records (a year’s worth) in an hour and a half – about 1.2M/second.
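For flavor, here is a minimal PySpark sketch of the same “bring the rules to the data” pattern – a stand-in rule function shipped to each partition and run in parallel. This is illustrative only, not IBM’s actual ODM-on-Hadoop integration, and it assumes a Spark environment is available:

```python
# Illustrative PySpark sketch of "bring the rules to the data": the rule function is
# shipped to each partition and run in parallel. Not IBM's ODM/Hadoop integration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rules-on-big-data").getOrCreate()

passengers = spark.sparkContext.parallelize([
    {"name": "A", "no_show_count": 0, "paid_cash": False},
    {"name": "B", "no_show_count": 4, "paid_cash": True},
    # in reality, millions of records read from HDFS rather than a small list
])

def apply_rules(partition):
    """Stand-in for a deployed decision service, executed once per partition."""
    for p in partition:
        if p["no_show_count"] >= 3 and p["paid_cash"]:
            yield (p["name"], "flag-for-review")

flagged = passengers.mapPartitions(apply_rules).collect()
print(flagged)   # the tiny number of "bad actors" found in the haystack

spark.stop()
```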

It’s also possible to use Big Data and analytic tools to analyze rules. Customers want, for instance, to simulate the impact of rule changes on large portfolios of customers. The rule logs of rules executed in a year, say, can also be analyzed quickly and effectively using a Big Data infrastructure.

Vijay Bandekar of InteliOps came up to talk about the digital economy and decision models to help companies face the challenges this economy creates. The digital economy is driven by the explosion of data and the parallel explosion in IoT devices. While this data is increasingly being stored, little if any of it is being effectively used. We need applications that can manage this data and take advantage of it, because it’s just not possible for even the best human staff to cope – autonomous, learning, real-time decision-making systems are required. These systems require inferencing, reasoning and deductive decision models. While the algorithms work, it can be cumbersome to manage large rule bases; and while machine learning approaches can come up with the rules, integrating these manually can be time consuming.

Architecturally, he says, most organizations focus on stateless decisioning with a database rather than a stateful working memory. Yet the stateful approach offers advantages in the era of fast moving, streaming data while also taking advantage of the rapidly increasing availability of massive amounts of cheap RAM. This requires agenda control and transparency, as well as effective caching and redundancy/restoration.
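A toy contrast of the two architectures: the stateful approach below keeps working memory in RAM across streaming events instead of re-querying a database for every decision. Entirely illustrative, with an invented rule and thresholds:

```python
# Toy contrast: stateful decisioning keeps working memory in RAM across events,
# instead of hitting a database on every decision. Entirely illustrative.
class StatefulDecisionAgent:
    def __init__(self, alert_threshold: int = 3):
        self.working_memory = {}          # facts retained between events
        self.alert_threshold = alert_threshold

    def on_event(self, device_id: str, reading: float) -> str:
        """Invented rule: alert when a device exceeds 100 on several consecutive readings."""
        count = self.working_memory.get(device_id, 0)
        count = count + 1 if reading > 100 else 0
        self.working_memory[device_id] = count
        return "ALERT" if count >= self.alert_threshold else "ok"

agent = StatefulDecisionAgent()
for reading in [105, 110, 120, 90]:           # streaming readings for one device
    print(agent.on_event("pump-7", reading))  # ok, ok, ALERT, ok
```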

It’s also important to add learning models with both supervised and unsupervised learning engines to handle the increasing volumes of data. These learning models need to be injected into the streams of data, he argues, to make decisions as it arrives rather than being pointed at stored data. In addition, combinations of algorithms – ensembles – are increasingly essential given the variety of data and the value of different approaches in different scenarios.

The combination delivers an adaptive decision framework for real-time decisions. It uses stateful decision agents based on business rules and continuous learning using ensembles of analytic approaches on streaming data.

Last up is Tim Stephenson of Omny Link. His recent focus is on smaller companies, and one of the key things about the new digital economy is the way it allows such companies to punch above their weight. Small companies really need to drive leads to conclusion and manage customers effectively. CRM systems, even if they start free, can be complex and expensive to use. To unlock the value and respond appropriately and faster to serve more customers, you need to do a set of things well:

  • Have a consistent, published domain model to make data widely available across channels. For small businesses, this means a simple but extensible customer domain model e.g. contact, account etc.
  • Use APIs to support a wide variety of interfaces – contracts. This supports lots of UIs including future ones.
  • Workflow or process to seamlessly drive data through the organization and its partners
  • Consistent decision-making that enforces policy and ensures compliance with regulations

He walked through how these elements allow you to deal with core scenarios, like initial lead handling, so the company can manage leads and customers well. You need to use APIs to record well understood data, decide what to do and make sure you do what you decided to do.

The value of DMN (especially decision tables) is that it lets business people define how they want to handle leads, how they want to make decisions. They can’t change the structure of the decisions, in his case, but they can tweak thresholds and categories, allowing them to focus and respond to changing conditions. And these decisions are deployed consistently across different workflows and different UIs – the same decision is made everywhere, presenting the standard answer to users no matter where they are working (a key value of separating decisions out formally as their own component). Using Decision Requirements Models to orchestrate the decision tables keeps them simpler and makes the whole thing more pluggable.
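A small, invented sketch of that pattern: the decision structure is fixed, but the thresholds and categories sit in data that business people could tweak, and the same decision function is called from every workflow and UI:

```python
# Invented sketch: fixed decision structure, business-editable thresholds/categories,
# one decision function reused by every workflow and UI.
LEAD_THRESHOLDS = {"hot": 80, "warm": 50}     # business users tweak these values
ROUTING = {"hot": "call-today", "warm": "nurture-campaign", "cold": "newsletter"}

def categorize_lead(score: int) -> str:
    """Fixed decision structure; only the threshold values above change."""
    if score >= LEAD_THRESHOLDS["hot"]:
        return "hot"
    if score >= LEAD_THRESHOLDS["warm"]:
        return "warm"
    return "cold"

def handle_lead(score: int) -> str:
    """The same decision, whichever channel or workflow calls it."""
    return ROUTING[categorize_lead(score)]

print(handle_lead(85))  # call-today
print(handle_lead(60))  # nurture-campaign
print(handle_lead(20))  # newsletter
```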

The payback for this has been clear. One user found that the time saved was about 75% but in addition, the improvement in response time ALSO means the company closes more work. Even small businesses can get an advantage from this kind of composable, consistent, repeatable, auditable, transparent decision automation.

And that’s a wrap. Next year’s DecisionCAMP will probably be in Luxembourg in September – and don’t forget all the slides are available on the DecisionCAMP Schedule page.