
OneClick.ai is a company taking advantage of the fact that many AI problems use similar approaches to reduce the time and cost of individual AI projects. It was founded and received its initial funding in 2017, and launched the product last year. The company has a core team of 8 in the US and China with 40 active enterprise accounts supporting over 20,000 models.

OneClick.ai uses AI to build AI, helping companies get into AI more quickly and more cheaply. The intent is to give them fault-tolerant, scalable APIs for custom-built AI solutions in days or even hours instead of weeks or months. They aim to automate the end-to-end development of AI solutions based on deep learning, using meta-learning to design and evaluate millions of deep learning models to find the best ones. They are also working on capabilities to explain how those models work, to address one of the concerns about deep learning, its lack of interpretability.

The product is aimed at non-technical users, with a chatbot interface to allow experts to interact with the trained models. Users can choose from public cloud, private cloud or hosted versions, and software vendors have access to an OEM version to integrate the technology into customized solutions. A wide range of AI use cases is supported, ranging from classic predictions (weekly and monthly sales or equipment failure) to image recognition (recognizing brands in shelf images to see how much shelf space they have), classification (putting complaint emails into existing categories and identifying new problems) and semantic search (finding the most helpful supporting material for a fault). Several of their existing customers were already trying to use AI and have found OneClick.ai significantly quicker to get to an accurate model.

The tool is browser-based and supports multiple projects. Each project has a chatbot that can answer data science questions. Data is provided by uploading flat files that contain a learning data set – numeric, categorical, date/time, text or images. Raw data is enough but users can add domain-specific features if they have domain knowledge that a feature will likely be helpful. Users can develop classification, regression, time-series forecasts, recommendations or clustering models and target various measures of precision depending on the type of model – accuracy, mean absolute error etc.

The engine builds many models and presents the best, from which the user can select the one they prefer (based on their preferred metric and the latency of the deployed model, which is calculated for each model). The engine automatically holds out 20% of the data for testing and uses the other 80% for training. Under the covers, the engine keeps refining the techniques it uses based on previous training results. Once models are built, the chatbot can answer various questions about them, such as usage tips and model comparison. Users can deploy the models as an API for real-time access with a few clicks. A future update will also allow model updates and deployment through an SDK.
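That automatic 80/20 holdout is the same split you would otherwise make by hand before training; a minimal sketch of the idea (the function name and fixed seed are illustrative, not OneClick.ai's API):

```python
import random

def holdout_split(rows, test_fraction=0.2, seed=42):
    """Randomly hold out a fraction of rows for testing; train on the rest."""
    rng = random.Random(seed)
    shuffled = rows[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

train, test = holdout_split(list(range(100)))
```

With 100 rows this yields 80 training rows and 20 test rows, and every row lands in exactly one of the two sets.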

You can find out more here.

March 27-29 I am teaching a 3-part online live training class that will prepare you to be immediately effective in a modern, collaborative and DMN standards-based approach to decision modeling.

You’ll learn how to identify and prioritize the decisions that drive your business, see how to analyze and model these decisions, and understand the role these decisions play in delivering more powerful information systems.

Each step in the class is supported by interactive decision modeling work sessions focused on problems that reinforce key points. All the decision modeling and notation in the class conforms to the DMN standard, future-proofing your investment in decision modeling. DMN-based decision modeling works for business rules projects using a BRMS, predictive analytic or data science projects, manual or automated decisions and even AI.

Click here for more information and registration. Early bird pricing is available through March 1, 2018 so book now!

I have written before on how a decisions-first approach is ideal for success with AI. After reading David Roe‘s article 11 Questions Organizations need to Ask Before Buying into AI I thought a few more comments were in order:

If you focus on decisions first and on how you must/could/want to make the decision, you can rapidly tell if you really need AI at all. Often business rules and simpler analytics are enough – but you need to know what decision you are trying to make before you can tell. Similarly if you don’t know what else, besides the AI, is going into the decision then you won’t be able to tell how much impact AI is going to have. It’s easy to have a compliance or policy constraint undermine the “lift” you get from AI.

The business case for most AI is “better decisions”. If you don’t know which decisions, and what counts as better, then your AI is just a gimmick. Know what decisions you are trying to improve and how before you begin to ensure your AI has a real business case.

Decision models are great for showing you what else goes into a decision besides AI. This lets you see how exposed you are when the AI gets it wrong, how good your predictions need to be to be helpful, and much more. Understand the context first and it’s easier to manage, and get support for, your AI plans.

Lastly, integrate AI into your decisioning stack – make sure your business rules, predictive analytics, machine learning and AI can be integrated to deliver a single, better decision (based on a decision model).

If you want to learn more about decision modeling, contact us or come to our live online decision modeling with DMN training in March.


Back in November I posted a humorous Thanksgiving guest decision model to LinkedIn. I just repeated the exercise with a decision model to help you assess a New Year’s Resolution.

While these are just for fun, I thought it might be worth sharing how I built this one. Normally we like to work top-down talking to business experts but in this case I did not have any to work with so I had to start bottom-up with research.

  1. I started with some articles – found using Google – and each became a Knowledge Source in the diagram.
  2. I looked over each and identified the things it implied you should decide about a New Year’s resolution to help you decide if it was a good one or not.
  3. As I added these Decisions to my model, I connected them to the Knowledge Sources that related to them (some Decisions recurred in several articles, of course).
  4. One set of Decisions – deciding if a resolution met the five criteria to be SMART – could be grouped as sub-decisions of a higher-level Decision.
  5. Others were grouped based on thematic elements – a common approach where there is not a specific structure driven by regulation or similar.
  6. This gave me a structure – the Decision I was trying to model, some high-level sub-decisions and logically grouped sub-decisions.
  7. Cleaning up the diagram required putting copies of the Knowledge Sources on the diagram (though these point to the same instance in the underlying repository).

In this case I didn’t deal with Input Data as the model seemed useful with just Decisions and Knowledge Sources. To finish it, we would need to identify data elements and write decision logic (or develop predictive models) for each element.

If you are interested in decision modeling, why not register for our upcoming live online Decision Modeling with DMN class in March.

Happy New Year.

SAS Decision Manager is SAS’ platform for decision automation and is getting a significant update in December 2017. I wrote a product review of SAS Decision Manager in 2014 and a number of things have changed in the new release, which is on the new SAS Platform and leverages new SAS Viya technologies.

SAS Decision Manager is aimed at an analytics ecosystem that is a moving target these days with cloud-enabled analytics that are more open and API-driven,  more people doing data science, and different kinds of data coming to the fore. Meanwhile IoT is adding new data streams and demanding decision-making at the edge while machine learning and AI are hot trends and offer real possibilities.

“If analytics does not lead to more informed decisions and more effective actions, then why do it at all?”
Mike Gualtieri, Forrester.

This quote embodies the need to operationalize these analytics and enable faster decision-making. SAS believes, as we do, that one must put analytics into action – operationalize your analytics – to get value. You need to go from data to discovery to deployment. In this context, SAS Decision Management is a portfolio to create and manage decisions:

Overall architectural view

  • SAS Model Manager – import and govern models, monitor and retrain models, deploy models. And increasingly any kind of models including R, Python…
  • SAS Decision Manager – build business rules, build decisions that use analytics and rules in a decision flow, deploy as decision services. The SAS Business Rules Manager product has been subsumed into the new SAS Decision Manager product to create a single environment.
  • SAS Event Stream Processing Studio – SAS Event Stream Processing Studio is now in the SAS Decision Management portfolio so that decisions can be injected into the streaming data environment – real time as micro services but also into streams.
  • Execution – covers Cloud Analytic Service (Viya) for testing and deployment as well as model training, Micro Analytic Service for REST, ESP for streaming data, and in-database or in-Hadoop.
  • Plus, open APIs to allow REST, Python, Lua, Java and CLIs to access the platform. R and PMML can be brought into the modeling tools too.

SAS Decision Manager wraps business rules, analytic models, flow logic (and soon Python) into services while linking to Model Manager to access the models being used. These models are developed in the new SAS VDMML Model Studio. The new release of SAS Decision Manager is built on the new SAS Platform, bringing that platform’s benefits around cloud readiness, multi-tenancy etc. This release also folds the Business Rules Management offering into SAS Decision Manager.

Key elements overall include:

  • Visual Decision Modeling – decision simulation and path tracing, model and business rule integration and streamlined business rules management
  • Unified publishing to ESP, Cloud Analytic or Micro Analytic services, in-database or in-Hadoop
  • Model Manager integration to make it easier to share models and support for more kinds of models as well as managing publishing of models to multiple end points (e.g. in IoT) and automating updates etc.
  • Open APIs from Viya, workflow etc.

Some specific improvements for SAS Model Manager

  • Common Model Repository with GUI and REST interfaces to manage content and search to find the right models
  • Can register models from SAS VDMML Model Studio and import models from PMML, Python, Zip files, etc.
  • Model publishing to various defined targets: in-database, in-Hadoop, SAS, streaming or real-time with the SAS Micro Analytic Service.
  • Model compare in terms of statistics and plots as well as the definition of champion and challenger.
  • Version control with revert, tracking, creation of new versions

SAS Decision Manager

  • Decision inventory in a common repository with access to the models in the model repository as well as to the rules available. All these elements are versioned.
  • New graphical decision flow editor that brings analytic models from model manager, rules and specific branching logic.
  • The testing environment shows how data flows through the decision flow to show which paths were most heavily used. Data can be brought in dynamically or from existing data sets.
  • The new editor allows direct access to the model or rules from the flow and gives access to repository information as the diagram is edited. Rules are managed directly in the same repository.
  • Can create temporary information items on the fly for use in rules
  • Can bring in lookup tables from the SAS data environment
  • Ruleset editor allows data to be pulled in as the vocabulary (copying from another or accessing the data layer) and then rules can be written.

Test data results showing which elements of the decision flow have the most transactions.

Beyond the December release, the plan is to update the product more regularly, on a 6-month cycle, with new algorithms, more integrations, more use of the Viya APIs etc.

You can get more information on SAS Decision Manager here.

All industry standards offer interchange. Successful standards offer skills interchange not just a technical interchange format.

The Decision Model and Notation (DMN) decision modeling standard has a published XML interchange format, of course, and several of the committee’s members are working really hard to iron out the remaining issues and make the XML interchange more robust. The ability to interchange decision models between vendors is a valuable one but the opportunity that DMN offers for skills interchange is, if anything, even more valuable.

DMN offers two critical kinds of skills interchange – it offers those working with business rules or decision logic a way to transfer their skills between products and it offers business analysts a way to transfer skills between different kinds of decisioning projects.

The vast majority of the business logic in a decisioning system can be defined using the two core DMN components:

  • Decision Requirements Diagrams structure decision problems, break them into coherent pieces. They show where data is used and what knowledge assets (policies, regulations, best practices) are involved.
  • Decision tables specify the logic for most of the decisions on the diagram using simple constructs.

You don’t get 100% of the execution defined using these two elements – you need to add “glue” of various kinds – but almost 100% of the business content is. This means someone who knows DMN can transfer these skills between DMN tools. But it also means they can transfer these skills between business rules products too, as the approach of decomposing a decision problem into a Decision Requirements Diagram before writing logic is totally transferable, and frankly most decision tables look and work the same even if they don’t support DMN yet.
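To make that last point concrete, here is a minimal, tool-agnostic sketch of what a decision table amounts to – ordered rows of conditions mapped to outputs, evaluated here with a first-hit policy. The table contents are invented for illustration.

```python
def evaluate(table, inputs):
    """Return the output of the first rule whose conditions all match
    (equivalent to DMN's 'first' hit policy)."""
    for conditions, output in table:
        if all(inputs.get(k) == v for k, v in conditions.items()):
            return output
    return None  # no rule matched

# Each row: (conditions, output) – more specific rows listed first
discount_table = [
    ({"customer": "gold", "order_size": "large"}, 0.15),
    ({"customer": "gold"},                        0.10),
    ({"customer": "standard"},                    0.00),
]

rate = evaluate(discount_table, {"customer": "gold", "order_size": "small"})
```

Whether the table lives in a DMN tool or a BRMS, the skill being exercised is the same: structure the conditions, order or police the rows, and keep each row readable by a business user.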

The second kind of skills interchange comes because decision modeling works for lots of different kinds of projects. We have used decision modeling and DMN to:

  • Define business rules / decision logic for automation
  • Frame requirements for predictive analytics and machine learning projects
  • Orchestrate a mix of packaged and custom decisioning components including business rules,  predictive analytics, AI and optimization
  • Model manual decision-making for consistency, mixing manual and automated decision making
  • And more – see Decision Modeling has value across many projects

This means that business analysts who learn decision modeling can apply that skill across lots of projects.

Learn decision modeling and learn DMN. It’s a great skill that lets you express business decision problems and one that is transferable – interchangeable – across projects and products.

BPMInstitute.org would like to get your insights on how you’re using Digital Decisioning and Analytics in your organization. Your feedback will help shape articles and focus at BPM Institute for 2018.

Digital Decisioning and Analytics survey

  • Are you using analytics and reporting to innovate business functions and models?
  • What is the state of your analytics efforts as they relate to processes?

Share your insights with BPMInstitute.org and you’ll be entered into a random drawing to win one free OnDemand course of your choice.



AI is a hot topic and we get asked a lot by clients how they can succeed with AI or cognitive technology. There’s often a sense of panic – “everyone is doing AI and we’re not!” – and a sense that they have to start a completely separate initiative, throw money at it and hope for the best. In fact, we tell them, they have some time – they need to keep calm and focus on decisions.

The folks over at HBR had a good article about adopting AI based on a survey of executives. This is well worth a read and makes a couple of critical points.

  1. AI really does work, if you use it right. There’s plenty of hype but also plenty of evidence that it works. But like all technologies it works when it works, it’s not a silver bullet.
  2. Not everyone is using AI – in fact hardly anyone is doing very much with it. Most regular companies are experimenting with it, trying it out in one small area. Despite what you read there’s still time to figure out how to use AI effectively in your organization. Stay Calm.
  3. AI works better if you have already digitized your business. Of course AI is a decision-making technology, so what matters here is that you have digitized decision-making.  Focus AI on digital decision-making.

To succeed with AI we have a concrete set of suggestions we give to customers, many of which overlap with the HBR recommendations as you would expect:

  • Get management support
    The best way to do this is to know which decisions you are targeting and show your executives how these decisions impact business results. Being able to describe how improving a particular decision will help an executive meet their objectives and exceed their metrics will get their attention.
  • DON’T put technologists in charge
    Like data analytics, mixed teams work best for AI. Make sure the team has business, operations, technology and analytics professionals from day 1. For maximum effectiveness, use decision modeling with DMN to describe the decision-making you plan to improve as this gives everyone a shared vision of the project expressed in non-technical terms.
  • Focus on the decision not AI
    You will want to mix and match AI with other analytic approaches, explicit rules-based approaches and people-based approaches to making decisions. Most business decisions involve a mix:

    • Rules express the regulatory and policy-based parts of your decision
    • Data analytics turn (mostly) structured data into probabilities and classifications to improve the accuracy of your decisions
    • People make the decisions that involve interaction with the real world and poorly scoped or defined ones
    • And AI handles natural language, image processing, really complicated pattern matching etc.
  • Make sure you focus on change management
    Change is always a big deal in Decision Management projects – as soon as you start changing how decisions are made and how much automation there is you need to plan for and manage change. AI is no exception – it will change roles and responsibilities and change management will be essential for actual deployment (distinct from a fun experiment).

AI is a decision-making technology. As such it is a powerful complement to Decision Management – something to be considered alongside business rules and analytics, and integrated into a coherent decision model. Here’s one example, for a company that needed to automate assignment of emails. This depended on who it was from, what it was about and how urgent it was:

  • Deciding which client an email was from involved rules run against the sender and sender’s domain.
  • Deciding on the subject of an email involved rules about senders (some automated emails always use the same sender for the same subject) and rules about subject lines (some are fixed format).
  • This left too many unclassified, however, so the subject and body of the text were analyzed using text analytics to see which products were mentioned in the email to identify them (analytically) as the subject of the email.
  • Urgency was hard too. Historical data about the client was analyzed to build a customer retention model. This analytic score was used to increase the urgency of any email from a client who was a retention risk.
  • Finally AI was used to see what the tone of the email was – was the email a complaint or a problem or just a description? The more likely it was to be a problem or complaint, the higher the urgency.
  • Each of these sub-decisions used different technologies but were orchestrated in a single decision model to decide how to assign the email.
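That orchestration of rule-based, analytic and AI sub-decisions into one top-level decision can be sketched in code. Everything below is a deliberately simplified, hypothetical stand-in for the project described – the function names, rules and thresholds are invented.

```python
def classify_client(sender):
    # Rule-based: match the sender's domain against known clients (simplified)
    return {"acme.com": "Acme Corp"}.get(sender.split("@")[-1], "unknown")

def classify_subject(subject, body):
    # Rules first (fixed-format automated emails), then a stand-in for the
    # text-analytics fallback that looks for product mentions in the body
    if subject.startswith("[AUTO] Invoice"):
        return "billing"
    return "product question" if "widget" in body.lower() else "unclassified"

def score_urgency(retention_risk, tone_is_complaint):
    # Analytic retention score raises urgency; so does a complaint-like tone (AI)
    urgency = 1
    if retention_risk > 0.7:
        urgency += 1
    if tone_is_complaint:
        urgency += 1
    return urgency

def assign_email(sender, subject, body, retention_risk, tone_is_complaint):
    """Top-level decision: combine the sub-decisions into one assignment."""
    return {
        "client": classify_client(sender),
        "subject": classify_subject(subject, body),
        "urgency": score_urgency(retention_risk, tone_is_complaint),
    }

result = assign_email("jo@acme.com", "Broken widget", "My widget stopped working",
                      retention_risk=0.9, tone_is_complaint=True)
```

The point of the decision model is exactly this shape: each sub-decision can be implemented with whatever technology suits it, while the top-level function stays a simple, explainable composition.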

AI is certainly new and different, but success with it requires the same focus on decisions and decision-making. Put decisions first.

My friends at Actico recently had me record some videos on Decision Management and decision modeling with DMN. Here’s the first – 3 reasons why financial and insurance companies should adopt Decision Management.


As part of the build up to Building Business Capability 2017 I gave an interview on transforming the business. Check it out.

If you want to come to BBC 2017, there’s still time to register with code SPKDMS for a 10% discount.

If you are coming to BBC 2017, don’t forget to register for my tutorial Decision-Centric Business Transformation: Decision Modeling. See you there.

I have been working on Decision Management since we first started using the phrase back in 2002 – I’m probably the guilty party behind the phrase – and Decision Management Solutions (the company I run) does nothing but Decision Management. This gives us a unique perspective on new technologies and approaches that show up. One of the most interesting developments in Decision Management recently has been the use of decision modeling – especially the use of decision modeling with the Decision Model and Notation (DMN) standard.

In DMN a decision is the act of determining an output or selecting an option from inputs. In this context we mean a repeatable decision, allowing us to define a decision model for our decision-making approach. If we use DMN then we:

  • Document a decision and its sub-decisions, the components of decision-making
  • Capture specific questions and allowed answers for each decision and sub-decision
  • Identify the data required and how it is used by these decisions
  • Document sources of knowledge about decision-making
  • Define relationships between decisions, metrics, organizations and processes

Our experience with DMN is both broad and deep – we have trained nearly 1,000 people and used decision modeling on dozens of real-world projects. We have seen how valuable it is on these projects and we particularly notice how many different kinds of projects it is valuable for.

Unlike some DMN proponents, we don’t think that defining executable decision models is the only reason for using DMN. Here are some other reasons you might use decision modeling and DMN on your projects:

  • Eliciting decision-making requirements using DMN decision models is more productive, more fun, and much more rapid than traditional approaches to business rules or requirements. This is true no matter how you plan to implement the requirements – as an executable DMN model, as business rules in a BRMS, as analytics or as a decision-support environment.
  • DMN decision models make impact analysis and traceability really work at a business level. You can answer questions like “who needs to know if I change these rules” or “who has to believe this analytic” and see how changes will impact your business results.
  • DMN decision models let you mix and match analytics, AI, rules, manual decision-making, machine learning, optimization and all other decision-making approaches in a single model. As the world moves beyond explicit logic to data-driven decision-making, this is critical future-proofing for your business.
  • DMN decision models let data science teams see what the requirements are for their analytic models, helping focus their efforts and ensure that the results will have a clear path to implementation and business impact.


Specific projects we have used decision modeling on have shown us that decision modeling with DMN is:

  • More Rapid
    It’s much faster than traditional approaches with customers telling us that in 1 hour we developed an understanding of the problem “that would have taken 10”
  • More engaging
    Business SMEs participate more fully in decision modeling and our experience is that they get so into it that they start freeing up their schedule so they can participate more!
  • Really enlightening
    Decision modeling clarifies the real requirements for data and logic in a decision. So much so that we regularly hear from experienced SMEs that they learn something from building the model. Some training departments at our customers have taken to using the decision model to train people.
  • Much more open
    Because human and automated decisions can be modeled together you can use DMN decision modeling when dealing with any kind of data, any kind of decision. That makes it easy to adopt as part of your standard approach.
  • Better at finding reuse
    Because business users can clearly express their problem and share this with others you can get agreement and discussion about reuse and common/shared decision-making long before you get to implementation

We have had great success with decision modeling and are helping many organizations adopt it right now by delivering a business value pilot that goes from a business need to a working pilot in a few weeks. Get in touch if we can help you.

Last week we completed the main phase of a proof of concept project at a client based in Jakarta, Indonesia. After the report out, I tweeted:

Loving watching a Dr at one of my clients explain (in Bahasa) the decision model for claims handling they built #dmn #decisionmgt

Tweets are great for this kind of quick heartfelt observation but I thought a blog post to explain why this is such a breakthrough was called for.

First, some background. This project was to develop a proof of concept decision service for claims handling. The client was already processing claims and integrating a Business Rules Management System. Our project was to build on that, introduce decision modeling, and show them how this decision-centric approach could help them maximize the value of their BRMS investment by engaging business owners, clarifying requirements, and enabling continuous improvement of the decision. Two things made this project exciting for me:

  • By this point in the project, the business SMEs have not only worked with us to create the model, they have also been using the model to discuss their own business. What makes a claim complex or risky? How do we want to handle this type of claim from this type of customer? This has brought clarity to their decision-making, making it possible for us to automate it but also enabling them to improve the manual aspects of the decision-making and the role of the decision-making in the overall claims process.
    In our experience, decision models provide a unique vehicle for this kind of debate and engagement among business SMEs. They give SMEs a way to deeply understand and build agreement about the way they want to make decisions.
  • Having worked with this decision model, these SMEs can now stand up in front of a crowd of their peers and colleagues and explain the decision model, discussing exactly what it means for how claims are going to be processed. This is transformative: this is not a group of business users who merely understand their requirements but a group that understands the system, the algorithm that will drive their decision-making.
    The sense of ownership, the clarity, the degree of detail with which they understand this new system component are amazing. This is not a black box to them. They understand and can explain how they will monitor and improve it. This understanding – and their willingness to assign one of their number to monitor and improve the rules in this decision service every week – is truly inspiring.

I am looking forward to seeing this in production, and looking forward further to working with this very impressive group moving forward. Now if only I spoke Bahasa…

Drop us a line if this is something you would like to know more about.

Some other notes on the project for the more geeky among you:

  • We worked directly with the subject matter experts to build a decision model using the Decision Model and Notation (DMN) standard and our DecisionsFirst Modeler and methodology. These models are very effective at eliciting requirements for decision-making systems, helping the subject matter experts clarify their approach.
  • We (partially) implemented this model in a commercial BRMS to show how this would work. All the artifacts created were linked back to the decision requirements we modeled. This enables the business users to easily find the rules they want to change, take advantage of the impact analysis of a decision model and still use the governance, version control and security features of a modern BRMS.
  • The decision model identified clear opportunities for predictive analytic models, modeling the decision-making that would leverage those predictions. This means that the analytics team has a clear goal as it analyzes data and knows exactly how to operationalize models they build.
  • A dashboard was then designed to show how the data produced by this could be used to monitor and continuously improve the decision and the rules that automate it. The data created by executing the decision model – all the sub-decision outcomes – is exactly the data needed to analyze the overall decision-making approach when it fails to deliver the desired business outcome. No additional design work is required for the dashboard – all the work has been done by designing the decision.

Neuro-ID™ started about 7 years ago with the premise that by monitoring how people use their keyboards and mice one could identify the confidence level of the person filling out a form – and that this could be done without any personally identifiable information. They developed a Neuro Confidence Score™ (Neuro-CS™), which they have patented. Their focus is on the questions companies ask of their customers. Many of these questions are risk-related – they are asked to help the business establish how risky someone or something is – and companies lack confidence in the answers they get. Neuro-ID likes to say “Smarter Questions, Better Bottom Line™”.

There is an inherent tension in how organizations design surveys or online forms. Making an online form “frictionless” is a good objective as it makes it more likely people will fill it out. But it is hard to do this if one is also concerned about compliance. A focus on compliance can lead organizations to ask for too much detail and so create friction while a frictionless experience can easily fail to check on someone.

Neuro-ID’s technology delivers prescriptive analytics that score someone’s behavior in terms of the confidence with which those questions are answered (as well as some supporting attributes). As an example, consider declarations in financial applications. The technology monitors the session to see how people answer, how they move their mouse, what options they pick, which things they change. A baseline is created for each person as they interact and subsequent actions are compared to assess their confidence. The confidence of their movements reflects whether they are concealing something or don’t understand the question or are just not sure what the right answer is.

The technology sits behind existing forms and does not collect any personal information or PII. Because it compares the user to themselves, a lack of language skills or poor eyesight does not impact the score. Forms can add baseline questions before asking risk-related questions or can treat all questions as both baseline and risk relevant questions. It also detects meaningful edits, allowing it to ask questions like “did you overstate your income”. Experience is that this often triggers better behavior.
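Neuro-ID’s actual, patented scoring is not public, but the idea of comparing a user’s later behavior to their own baseline can be illustrated with a toy calculation. The formula and scaling below are invented purely for illustration.

```python
from statistics import mean, stdev

def confidence_score(baseline_times, answer_time):
    """Score how unusual a response time is versus the user's OWN baseline.
    More hesitation relative to that user's norm -> lower confidence.
    (Illustrative only: maps a one-sided z-score onto a 0..1 range.)"""
    mu, sigma = mean(baseline_times), stdev(baseline_times)
    z = (answer_time - mu) / sigma if sigma else 0.0
    return max(0.0, 1.0 - max(z, 0.0) / 3.0)

baseline = [1.1, 0.9, 1.0, 1.2, 0.8]      # seconds spent on baseline questions
quick = confidence_score(baseline, 1.0)    # typical speed -> high confidence
slow = confidence_score(baseline, 3.0)     # long hesitation -> low confidence
```

Because the comparison is against the user’s own history, a slow typist and a fast typist are each judged only against themselves – which is exactly why, as noted above, language skills or eyesight don’t distort the score.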

The technology generates a confidence level on each question. It has an interactive mode allowing loan officers or others to replay the interaction, and everything is available programmatically through an API. A decision ID is used at the Neuro-ID end that the company has to match to a particular applicant, allowing the technology to store detailed records without knowing who someone is. While mouse and keyboard are the most common environment, the technology also handles touch screens by assessing hesitations and changes.

Neuro-ID can be used for risk mitigation, fraud prevention or user experience design depending on the situation. An initial target is traditional banks and FIs working in the Prime segment. These organizations need a clean, quick online onboarding process that is self-directed yet does not expose them to unnecessary risk. It’s also effective with credit-invisible customers as it can send additional questions to check for third-party verification when confidence is low.

A really interesting technology in my opinion. You can find more at www.neuro-id.com.

Silvie Spreeuwenburg of LibRT came up after lunch to talk about a rules-based approach to traffic management. Traffic management is a rapidly changing area, of course, thanks to IoT and self-driving cars among others.

When one is considering traffic, there are many stakeholders. Not just the road user, also businesses reliant on road transport, safety authorities etc. The authorities have a set of traffic priorities (where they need good flow), they have delays and they have restrictions for safety or legal issues. They manage this today and expect to keep doing so, even as technology evolves.

To manage this they create lots of books about traffic management, incidents and other topics for each road. Each contains flow charts and instructions. This creates a lot of overlap, so it’s important to separate problem definition from problem solution and to be specific – differentiate between things you must or may not do and those that are or are not actually possible.

The solution involves:

  • Policy about priority and traffic management norms
  • Identifying decision points, flow control points and segments in the road
  • Standard services – increase flow, decrease flow, reroute on a link in the network
  • Decisions to decide what the problem is, determine the right thing to do, see if there’s anything conflicting, and execute

The logic is all represented in decision tables. And applying the approach has successfully moved traffic to lower priority roads. Plus it fits very well with the way people work and connects changes in policies very directly to changes in behavior.
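
As a sketch of how such logic reads once it is in decision-table form, here is a minimal first-hit decision table interpreter for picking a standard service. The conditions, actions and "-" wildcard convention are illustrative, not LibRT's actual format:

```python
# One row per condition combination; "-" means "any value".
# First matching row wins (a first-hit policy).
TABLE = [
    # (road_priority, flow_state)  -> standard service
    (("high", "congested"), "increase flow"),
    (("high", "-"),         "no action"),
    (("low",  "congested"), "reroute"),
    (("-",    "-"),         "no action"),
]

def decide(road_priority, flow_state):
    """Return the standard service for the first matching table row."""
    for (p, f), action in TABLE:
        if p in ("-", road_priority) and f in ("-", flow_state):
            return action
    return None

action = decide("high", "congested")  # → "increase flow"
```

The appeal for traffic management is exactly what the talk described: the policy (the table rows) is separated from the mechanism (the interpreter), so a change in priorities is a change to a row, not to code.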

Marcia Gottgtroy from New Zealand tax presented on their lessons learned and planned development in decision management. They are moving from risk management to a business strategy, supported by analytical decision management. The initial focus was on building a decision management capability in the department, starting with GST (sales tax); this went very well, quickly producing a decision service that demonstrated straight-through processing (STP) and operational efficiency. The service also had a learning loop based on its instrumentation. They automated some of this (where the data was good) but did manual analysis elsewhere – trying neither to over-automate nor to wait for something perfect.

After this initial success, the next step is to focus on business strategy and get to decision management at an enterprise level. Hybrid and integrated solutions supported by a modern analytical culture driven by the overall strategy. They need to define a strategy, a data science framework, a methodology – all in the context of an experimental enterprise. They began to use decision modeling with DMN – using decision requirements models to frame the problem improved clarity, understanding and communication. And it documented this decision-making for the first time.

But then they had to stop as the success had caused the department to engage in a business transformation to replace and innovate everything! This has created a lot of uncertainty but also an opportunity to focus on their advanced analytic platform and the management of uncertainty. The next big shift is from decision management to decision optimization. Technology must be integrated, different approaches and an ability to experiment are key.

Nigel Crowther of IBM came up next to talk about business rules and Big Data. His interest is in combining Big Data platforms and AI with the transparency, agility and governance of business rules. Big Data teams tend to write scripts and code that are opaque, something business rules could really help with. Use cases for the combination include massive batches of decisions, simulations on large datasets and detecting patterns in data lakes.

The combination uses a BRMS to manage the business rules, deploys a decision service and then runs a map job to fetch this and run it in parallel on a very large data set – distributing the rules to many nodes and distributing the data across those nodes so the rules can be run against them in parallel and very fast. The Hadoop dataset is stored on distributed nodes, each of which is then run through the rules in its own map job before being reduced down to a single result set – bringing the rules to the data. This particular example uses flat data, about passengers on flights, and uses rules to identify the tiny number of “bad actors” among them. At 20M passengers per day it’s a real needle-in-a-haystack problem. The batch process is used to simulate and back-test the rules and then the same rules are pushed into a live feed to make transactional decisions about specific passengers. So, for instance, a serious setup with 30 nodes could scan 7B records (a year’s worth) in an hour and a half – about 1.2M records/second.
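
The "bring the rules to the data" shape can be sketched in a few lines: the same rule set is applied independently to each data partition (the map step) and the per-partition hits are merged (the reduce step). The rule conditions and field names below are invented for illustration:

```python
def bad_actor_rules(passenger):
    # Stand-in for a decision service fetched from the BRMS
    return passenger["watchlist_hits"] > 0 or passenger["doc_mismatches"] >= 2

def map_partition(partition):
    # Each partition is scanned independently – this is what runs on a node
    return [p["id"] for p in partition if bad_actor_rules(p)]

def run_batch(partitions):
    # In Hadoop each map job runs on its own node in parallel; here we map
    # sequentially just to show the shape of the computation.
    result = []
    for hits in map(map_partition, partitions):
        result.extend(hits)  # the reduce step: merge per-partition results
    return result

partitions = [
    [{"id": 1, "watchlist_hits": 0, "doc_mismatches": 0},
     {"id": 2, "watchlist_hits": 1, "doc_mismatches": 0}],
    [{"id": 3, "watchlist_hits": 0, "doc_mismatches": 2}],
]
flagged = run_batch(partitions)  # → [2, 3]
```

The key property is that `bad_actor_rules` is the same artifact in batch back-testing and in the live transactional feed – only the surrounding plumbing changes.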

It’s also possible to use Big Data and analytic tools to analyze rules. Customers want, for instance, to simulate the impact of rule changes on large portfolios of customers. The rule logs of rules executed in a year, say, can also be analyzed quickly and effectively using a Big Data infrastructure.

Vijay Bandekar of InteliOps came up to talk about the digital economy and decision models to help companies face the challenges this economy creates. The digital economy is driven by the explosion of data and the parallel explosion in IoT devices. While this data is increasingly being stored, little if any of it is being used effectively. We need applications that can manage this data and take advantage of it because it’s just not possible for even the best human staff to cope – autonomous, learning, real-time decision-making systems are required. These systems require inferencing, reasoning and deductive decision models. While the algorithms work, it can be cumbersome to manage large rule bases. And while machine learning approaches can come up with the rules, integrating these manually can be time consuming.

Architecturally, he says, most organizations focus on stateless decisioning with a database rather than a stateful working memory. Yet the stateful approach offers advantages in the era of fast moving, streaming data while also taking advantage of the rapidly increasing availability of massive amounts of cheap RAM. This requires agenda control and transparency, as well as effective caching and redundancy/restoration.
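
To make the stateless/stateful contrast concrete, here is a minimal sketch (names and thresholds are my own, not InteliOps’) of a stateful decision agent that keeps running totals in in-memory working memory across a stream, rather than re-querying a database on every event:

```python
class StatefulAgent:
    """A toy stateful decision agent: working memory lives in RAM and
    accumulates across the event stream, so each decision needs no
    database round trip."""

    def __init__(self, limit):
        self.limit = limit
        self.totals = {}  # working memory: running total per account

    def on_event(self, account, amount):
        self.totals[account] = self.totals.get(account, 0) + amount
        return "alert" if self.totals[account] > self.limit else "ok"

agent = StatefulAgent(limit=100)
decisions = [agent.on_event("acct-1", a) for a in (40, 50, 30)]
# → ["ok", "ok", "alert"]  (the third event pushes the total past 100)
```

A stateless equivalent would have to read and write the running total to a store on every event – fine for slow-moving data, but a bottleneck for fast streams, which is the advantage being argued for here (along with the caching and redundancy/restoration concerns it brings).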

It’s also important to add learning models with both supervised and unsupervised learning engines to handle the increasing volumes of data. These learning models need to be injected into the streams of data, he argues, to make decisions as it arrives rather than being pointed at stored data. In addition, combinations of algorithms – ensembles – are increasingly essential given the variety of data and the value of different approaches in different scenarios.

The combination delivers an adaptive decision framework for real-time decisions. It uses stateful decision agents based on business rules and continuous learning using ensembles of analytic approaches on streaming data.

Last up is Tim Stephenson of Omny Link. His recent focus is on smaller companies, and one of the key things about the new digital economy is the way in which it allows companies to punch above their weight. Small companies really need to drive leads to conclusion and manage customers effectively. CRM systems, even if they start free, can be complex and expensive to use. To unlock their value and respond appropriately – and faster – to serve more customers, you need to do a set of things well:

  • Have a consistent, published domain model to make data widely available across channels. For small businesses, this means a simple but extensible customer domain model e.g. contact, account etc.
  • Use APIs to support a wide variety of interfaces – contracts. This supports lots of UIs including future ones.
  • Workflow or process to seamlessly drive data through the organization and its partners
  • Consistent decision-making that enforces policy and ensures compliance with regulations

He walked through how these elements allow you to deal with core scenarios, like initial lead handling, so the company can manage leads and customers well. You need to use APIs to record well understood data, decide what to do and make sure you do what you decided to do.

The value of DMN (especially decision tables) is that it allows business people to define how they want to handle leads, how they want to make decisions. They can’t change the structure of the decisions, in his case, but they can tweak thresholds and categories, allowing them to focus and respond to changing conditions. And these decisions are deployed consistently across different workflows and different UIs – the same decision is made everywhere, presenting the standard answer to users no matter where they are working (a key value of separating decisions out formally as their own component). Using Decision Requirements Models to orchestrate the decision tables keeps them simpler and makes the whole thing more pluggable.
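
The "fixed structure, tweakable thresholds" split might look like this in code – the rule structure is frozen, while the values business users may edit sit in one place. The categories and numbers are invented for illustration:

```python
# Values business users can tweak; the decision structure below is fixed.
THRESHOLDS = {"hot_budget": 10_000, "warm_budget": 1_000}

def route_lead(budget, responded_before):
    """Unique-hit-style lead routing: exactly one outcome per input."""
    if budget >= THRESHOLDS["hot_budget"]:
        return "call today"
    if budget >= THRESHOLDS["warm_budget"] and responded_before:
        return "call this week"
    return "nurture email"

decision = route_lead(budget=12_000, responded_before=False)  # → "call today"
```

Because every workflow and UI calls the same `route_lead`, changing `THRESHOLDS` changes the decision everywhere at once – the consistency benefit of separating the decision out as its own component.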

The payback for this has been clear. One user found that the time saved was about 75% but in addition, the improvement in response time ALSO means the company closes more work. Even small businesses can get an advantage from this kind of composable, consistent, repeatable, auditable, transparent decision automation.

And that’s a wrap. Next year’s Decision CAMP will probably be in Luxembourg in September, and don’t forget all the slides are available on the Decision CAMP Schedule page.

A little bit of a late start for me, so I am starting with Geoffrey De Smet from Red Hat talking about constraint planning. He points out that some decisions cannot be easily solved with rules-based approaches – they can be described as decisions (and as a DMN decision model, in our experience) but not readily made with rules and decision tables only. His key point is that different decision problems require different technology:

  • Eligibility is a rules problem
  • License plate recognition is a neural network (analytic) problem
  • Roster scheduling is a constraint planning problem

And our experience is that you can do this decision by decision in a decision model too, making it easy to identify the right technology and to combine them.

He went into some detail on the difference between hard and soft constraints and on the interesting way in which the Red Hat planner leverages the Red Hat rules format and engine to handle constraint definition, score solutions etc. They support various approaches to planning too, allowing you to mix and match rules-based constraints and various algorithms for searching for a solution. The integration also allows for some incremental work, taking advantage of the standard rule engine features.
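
The hard/soft distinction is easy to show with a toy scoring function: hard constraints are violations a valid roster must not have, soft constraints are preferences the solver minimizes, and scores compare lexicographically (hard first). This is an illustration of the idea, not the Red Hat planner's actual API:

```python
def score_roster(shifts, max_per_person, preferences):
    """Score a candidate roster as (hard, soft); both counts are negative
    penalties, and (hard, soft) tuples compare lexicographically so any
    hard violation outweighs every soft one."""
    hard, soft = 0, 0
    counts = {}
    for person, shift in shifts:
        counts[person] = counts.get(person, 0) + 1
        if shift not in preferences.get(person, {shift}):
            soft -= 1                    # unwanted shift: soft penalty
    for person, n in counts.items():
        if n > max_per_person:
            hard -= (n - max_per_person)  # overworked: hard violation
    return hard, soft

score = score_roster(
    shifts=[("ann", "night"), ("ann", "day"), ("bob", "day")],
    max_per_person=1,
    preferences={"ann": {"day"}, "bob": {"day"}},
)  # → (-1, -1): ann is double-booked (hard) and dislikes nights (soft)
```

The interesting twist in the talk is that the constraint definitions themselves are written as rules, so the rule engine incrementally re-scores candidate solutions as the search algorithms move through the solution space.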

I wrote about some of the early work around Drools Planner back in 2011.

I went next, presenting on the role of decision models in analytic excellence.

If this resonates with you, check out this International Institute for Analytics case study.

I’ll be back with the rest of the day to wrap up.

Bastian Steinart of Signavio came up after the break. Like Jan and me, he focused on their experience with DMN on decision management projects and the need for additional concepts. Better support for handling lists and sets, and for iteration and multiplicity, is something they find essential. They have developed some extensions to support these things and are actively working with the committee – to show their suggestions and to make sure they end up supporting the agreed 1.2 standard.

They have also done a lot of work turning decision models in DMN into Drools DRL – the executable rule syntax of Drools. This implies, of course, that DMN models can be turned into any rules-based language, and we would strongly agree that DMN and business rules (and Business Rules Management Systems) are very compatible. From the point of view of a code generator like Signavio, however, the ability to consume DMN XML generated from a model is probably preferable. With support for DMN execution in Drools this becomes practical.

Denis Gagne introduced how elements of DMN can, and perhaps should, be applied in some other standards. He (like us) has seen organizations gradually pull things out of their systems because they have separate lifecycles – data, process, decision-making etc. Extracting these helps with the disjoint change cycles and also engages business users in the evolution of operations and systems. Simpler, more agile, smarter operations.

In particular, Denis has been working with BPMN (Business Process Model and Notation), CMMN (Case Management Model and Notation) and DMN (Decision Model and Notation). All these standards help business and IT to collaborate, facilitate analysis and reuse, drive agreement and support a clear, unambiguous definition. BPMN and CMMN support different kinds of work context (from structured to unstructured) and DMN is relevant everywhere because good decisions are important at every level in an organization.

Trisotech wants to integrate these approaches – they want to make sure DMN can be used to define decisions in BPMN and CMMN, add FEEL as an expression language to BPMN and CMMN, harmonize information items across the standards and manage contexts.

The three standards complement each other and have defined, easy-to-use, arms-length integration (a process task invokes a decision or a case, for example). Trisotech is working to allow expressions in BPMN and CMMN to be defined in FEEL, allowing them to be executable and allowing reuse of their FEEL editor. Simple expressions can then be written this way while more complex ones can be modeled in DMN and linked. Aligning the information models matters too, so it is clear which data element in the BPMN model is which data element in DMN, etc. All of this helps with execution but also helps align the standards by using a common expression language – BPMN and CMMN skipped this so reusing the DMN one is clearly a good idea.

Denis has done a lot of good thinking around the overlap of these standards and how to use them together without being too focused on unifying them. Harmonizing and finding integration patterns, yes, unifying no.

Alan Fish took us up to lunch by introducing Business Knowledge Models. Business Knowledge Models, BKMs, are for reuse of decision logic. Many people (including me) focus on BKMs for reuse and for reuse in implementation in particular. This implies BKMs are only useful for the decision logic level. Alan disagrees with this approach.

Alan’s original book (which started a lot of the discussion of decision modeling with requirements models) introduced knowledge areas and these became BKMs in DMN. In his mind BKMs allow reuse and implementation, but this is not what they are for – they are for modeling business knowledge.

Businesses, he argues, are very well aware of their existing knowledge assets. They need to see how these fit in their decision-making, especially in a new decision-making system. Decision Requirements Models in DMN are great at showing people where specific knowledge is used in decision-making. But Alan wants to encapsulate existing knowledge in BKMs and then link BKMs into these models. He argues you can show the functional scope of a decision using BKMs, itemizing and categorizing them.

Each BKM in this approach is a ruleset, table, score model or calculation. The complexity of these can be assessed and estimates/tracking managed. This is indeed how we do estimates too – we just use the decisions not BKMs in this way. He also sees BKMs as a natural unit of deployment. Again, we use decisions for this, though like Alan we use the decision requirements diagram to navigate to deployed and maintainable assets. He thinks that user access and intent do not align perfectly with decisions. He also makes the great point that BKMs are a way for companies to market their knowledge – to build and package their knowledge so that other folks can consume them.

The key difference is that he sees most decisions having multiple BKMs while we generally regard these as separate decisions not as separate BKMs supporting a single decision.

Jan Vanthienen came up after lunch – not to talk about decision tables for once, but to talk about process and decision integration. In particular, how one can ensure consistency and prevent clashes. Testing, verification and validation are all good, but the best way to obtain correct models is to AVOID incorrect ones! One way to do this, for instance, is to avoid inconsistency, e.g. by using Unique decision tables in DMN.

Jan introduced a continuum of decision-process integrations:

  1. No decisions therefore no inconsistency
  2. Decisions embedded in BPMN – bad, but no inconsistency
  3. Process-decision as a local concern – a simple task call to the decision – this limits inconsistencies to data passing and to unhandled decision outcomes
  4. A more real integration – several decisions in the DMN decision model are invoked by different tasks. This creates more opportunities for inconsistencies – for instance, a task might invoke a decision before the tasks that invoke its sub-decisions.
  5. Plus, of course, no process, only a decision – which also has no consistency issues

In scenario 4 particularly there are some potential mismatches:

  • You can embed part of your decision in your process with gateways creating inconsistency
  • You can fail to handle all the outcomes if the idea is to act on the outcomes with a gateway
  • If you need one of the DMN intermediate results in the process, you need to make sure it is calculated from the same DMN model
  • Putting sub-decisions in the process just to calculate them creates an inconsistency with the process model
  • Process models may invoke decisions in an order or a way that creates potential inconsistency with the declarative nature of the decision model. Decisions can be recalculated, but many people will assume they are not, creating issues

Last session before my panel today was Gil Ronen talking about patterns in decision logic in modern technical architectures, specifically those that are going to be automated. His premise is that technical architectures need to be reimagined to include decision management and business logic as a first-class component.

The established use case is one in which policy or regulations drive business logic that is packaged up and deployed as a business rules component. Traditional analytic approaches focused on driving insight into human decision-making. But today big data and machine learning are driving more real-time analytics – even streaming analytics – plus the API economy is changing the boundaries of decision-making.

Many technical architectures for these new technologies refer to business logic, though some do not. In general, though, they don’t treat logic and decision-making as a manageable asset. For instance:

  • In streaming analytic architectures, logic might be shown only as functions
  • In Big Data architectures, it may appear as questions that the data can answer, or as operators
  • In APIs, there’s no distinction between APIs that just provide data and those that deliver decision outcomes

They all vary, but they consistently fail to explicitly identify and describe the decision-making in the architecture. This lowers visibility, allows IT to pretend it does not need to manage decisions and fails to connect the decision-making of the business to the decision logic in the architecture. He proposed a common pattern or approach to representation, and a set of core features, to make the case to IT to include decision-making in architectures:

  • Make deployed decisions (as code) a thing in either a simple architecture or perhaps within all the nodes in a distributed (Big Data) architecture
  • Identify Decision Services that run on decision execution server (perhaps)
  • Identify Decision Agents that run in streams
  • He also identified a container with a self-contained API but I think this is just a decision service

All correct problems and things that would help. This is clearly a challenge, and has been for a decade. Hopefully DMN will change this.

After yesterday’s pre-conference day on DMN, the main program started today. All the slide decks are available on the DecisionCAMP site.

Edson Tirelli started things off with a session to demystify the DMN specification. DMN does not invent anything, he says, but takes existing concepts and defines a common language to express them. To take advantage of it we need to implement it, develop interchange for it and drive adoption.

Edson developed a runtime for DMN that takes DMN XML and executes it on the Drools engine. This takes interchange files from tools and executes the logic from those files. This drives his focus – he’s thinking about execution. He has a set of lessons learned from this:

  1. You need level 3 conformance to generate code. He walked through the conformance levels – all have Decision Requirements Diagrams, but the decision tables go from natural language (level 1), to simple expressions only (level 2), to the full expression language (level 3). He pointed out that level 3 is not much more than level 2.
  2. Conformance levels do not reflect reality, in that vendors do not comply neatly with the levels, nor is there an outside way to verify conformance. To help with this, Edson (and others) are working on a set of tests that are publicly available to help folks test their ability to develop and consume DMN XML.
  3. Spaces in variable names are a challenge but necessary because users really want them in their object names. This is not as hard as people think and is really important.
  4. The DMN type system is not complete: there are some types, like lists or ranges, that cannot be defined even though they are allowed in the expression language.
  5. Some bugs in the spec, but the 1.2 revision is working on these
  6. Get involved with the community – the specification is a technical document, and subject matter experts and others in the community are very helpful
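
The spaces-in-names lesson (point 3) is easy to illustrate: given the set of declared names, a parser can greedily prefer the longest declared name at each position. This is a simplification of what FEEL implementations actually do; the expression and names are invented:

```python
def tokenize(expr, names):
    """Greedy longest-match tokenizer: declared multi-word names win
    over naive splitting on spaces (a simplification of the FEEL approach)."""
    words = expr.split()
    tokens, i = [], 0
    while i < len(words):
        # Try the longest span of words starting at i that is a declared name
        for j in range(len(words), i, -1):
            candidate = " ".join(words[i:j])
            if candidate in names:
                tokens.append(("name", candidate))
                i = j
                break
        else:
            tokens.append(("word", words[i]))
            i += 1
    return tokens

tokens = tokenize("annual income + signing bonus",
                  names={"annual income", "signing bonus"})
# → [("name", "annual income"), ("word", "+"), ("name", "signing bonus")]
```

This shows why the problem is "not as hard as people think": with the declared names available, the ambiguity spaces introduce is resolvable, and business users get to keep readable names.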

Jan Purchase and I spoke next, discussing three gaps we see in the specification. Here’s our presentation, and you can get more on our thinking in our book, Real-World Decision Modeling with DMN:

Bruce Silver came next to discuss the analysis of decision tables. DMN allows many things to be put into decision tables that are “bad” – not best practices – because the specification cannot contain methodology, because there are sometimes corner cases and because there are some disagreements, forcing the specification to allow multiple approaches.

Bruce generally likes the standard’s restrictions on what can be in a decision table and has developed some code to check DMN tables to see how complete they might be. While these restrictions are limiting, they also allow for static analysis. He checks for completeness (gaps in logic, for instance), compares hit policy with the rules to make sure they match, and spots problems like masked rules (rules that look valid but will never execute due to the hit policy). It recommends collapsing rules that could be combined and makes other suggestions to improve clarity.
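
Two of these checks can be sketched for the simplest case – a one-input table whose conditions are numeric `[low, high)` ranges. This is purely illustrative of the kind of static analysis described, not Bruce's actual code:

```python
def find_gaps(rules, domain):
    """Completeness check: return uncovered [low, high) spans of `domain`,
    where `rules` is a list of (low, high) condition ranges."""
    lo, hi = domain
    gaps, cursor = [], lo
    for rlo, rhi in sorted(rules):
        if rlo > cursor:
            gaps.append((cursor, rlo))
        cursor = max(cursor, rhi)
    if cursor < hi:
        gaps.append((cursor, hi))
    return gaps

def masked_rules(rules):
    """Under a first-hit policy, a rule whose range sits entirely inside
    an earlier rule's range can never fire. Return indexes of such rules."""
    masked = []
    for i, (lo, hi) in enumerate(rules):
        if any(plo <= lo and hi <= phi for plo, phi in rules[:i]):
            masked.append(i)
    return masked

gaps = find_gaps([(0, 10), (20, 30)], domain=(0, 40))  # → [(10, 20), (30, 40)]
masked = masked_rules([(0, 100), (10, 20)])            # → [1]: rule 1 is masked
```

Real tables have many inputs and richer condition types, so production tools do this analysis over condition lattices rather than simple intervals, but the idea is the same.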

It also applies “normalization” based on the work of both Jan Vanthienen and some of the later work done for The Decision Model by von Halle and Goldberg. These are applied somewhat selectively as there are some that are very restrictive.

A clear approach to validating decision tables based on DMN – very similar to what BRMS vendors have been doing for years, but nice to see it for DMN.

A break here so I’ll post this.

The first day at Decision CAMP 2017 is focused explicitly on the Decision Model and Notation (DMN) standard.

Alan Fish introduced the ongoing work on 1.2. He quickly summarized the new features in 1.1 – such as text annotations and a formal definition of a decision service. Then he went through the new features, starting with those that are agreed:

  • Annotation columns can be added to a decision table
  • Restricted labels to force the use of an object name as a minimum
  • Fixed some bugs around empty sets, XML interchange etc.

In addition, several key topics are being worked on. These three issues have not been voted on yet but we are tracking to get these done:

  • Diagram Interchange based on the OMG standard approach for this so that every diagram can be interchanged precisely as well as the underlying model. Given how important multiple views are, this is a really important feature.
  • Context free grammar is under discussion as it has been hard to make names with spaces, operators etc. parsable. Likely to use quotes around names with spaces and escaping quotes in names etc.
  • Invocable Decision Services to allow a decision to be linked to a decision service instead of a BKM. This allows a more complex reusable package of logic. Using Decision Services as the definition allows packages to be defined and reused without forcing encapsulation. This creates several difficult issues to be resolved, but we (the committee) are making progress.

Bruce Silver then facilitated a discussion on what people liked and disliked about DMN. Likes included:


  • Eliciting requirements using decision modeling is more productive, more fun, more rapid than traditional approaches to eliciting requirements.
  • The use of explicit requirements helps with impact analysis, traceability at a business level.
  • Really brings together an industry across multiple vendors in a positive way – it’s great to have a standard, customers like the idea that vendors are compliant.
  • Ability to mix and match analytics, AI, rules, manual decision-making, machine learning, optimization and all other decision-making approaches in a single model.
  • FEEL has supporters and has some great features – especially for certain activities like defining decision tables, possible for non-technical users to be very precise.
  • Ability for business users to clearly express their problem and share this with others to get agreement, prompt discussion – and to do this at several levels.


Dislikes included:

  • Perhaps too many details too soon, too much of a focus on the meta model and the internals v what a user can see.
  • Sometimes the standard is too precise or limiting – not allowing decision tables and output to have different names, for instance.
  • Dislike some of the corner case features because they can get people into trouble.
  • Not really any good ways to show which kinds of technology or approach can be used in decision modeling – perhaps some icons.
  • FEEL has people who don’t like it too, but this is partly because it’s new and a change, and because it perhaps lacks some of the patterns and capabilities needed. More examples would be good too.

Last week, Silicon Valley research firm Aragon Research cited Decision Management Solutions as a visual and business-friendly extension to digital business platforms and named us a 2017 Hot Vendor in Digital Business Platforms. We’re delighted about this and feel pretty strongly that this validates our vision of a federated digital decisioning platform as an essential ingredient in a company’s digital business strategy.

The report’s author, Jim Sinur, said:

Digital Business Platforms combine five major technical tributaries to create a cornerstone technology base that supports the changing nature of business, as well as the work that supports digital. Enterprises that are looking to manage a complex or rapidly changing set of rules that empower outcomes would benefit from decision management as offered by Decision Management Solutions, especially when combined with predictive or real-time analytics.

The report says that what makes us unique is that business people can represent their decisions in a friendly, visual and industry-standard model while managing the logic and analytics for these decisions across many implementation platforms. We’re working with clients to create “virtual decision hubs” that map the complexities of enterprise decision-making to the underlying technologies that deliver the decision logic, business rules, advanced analytics and AI needed to operationalize this decision-making across channels.

Click here to view the report.

Open Data Group is an analytic deployment company. The company was started over 10 years ago and has transitioned from consulting to a product company, applying their expertise in Data Science and IT to create an analytic engine, FastScore.

Successful analytics require organizational alignment (specifically between Data Science and IT) to coordinate systems and collaborate on business problems. In addition to understanding analytics, companies are trying to leverage new technologies and modernize their analytic approach. To address some of these challenges, Open Data Group has developed FastScore.

FastScore is designed to address various analytic deployment challenges that get in the way of monetizing analytic outcomes, including:

  • Manual recoding and other complexity
  • Too slow to deploy analytic models (largely as a result)
  • Too many languages being used
  • IT and analytic teams are not on the same journey – analytic/data science teams care about iteration and exploration while IT cares about stable systems and control.

FastScore provides a repeatable, scalable process for deploying analytic workflows. Open Data Group sees the model itself as the asset and emphasizes that a model needs to be language and data neutral, as well as deployed using micro-services (FastScore runs as a Docker container), to be a valuable, and future-proofed, asset.

FastScore is an analytic deployment environment that connects a wide range of analytic design environments to a wide range of business applications. It has several elements, all within a Docker container. It also includes a model abstraction (input and output AVRO schemas, an initialization and the math action) that allows models to be ingested from a wide variety of formats (including Python, R, C, SAS and PFA) and a stream abstraction (input and output, AVRO schema in JSON, AVRO binary or text) to consume and produce a wide range of data (from streaming to traditional databases) using a standard lightweight contract for data exchange.

The FastScore Engine is a Docker container into which customers can load models for push button deployment. Input streams are then connected to provide data to the model and output streams to push results to the required business applications or downstream environment. Multiple models can be connected into an analytic pipeline within FastScore. Models can be predictive analytic models, feature generators or any other element of an analytic decision. Everything can be accessed through a REST endpoint, with model execution being handled automatically (selecting between runners for R, Python, Java, C for instance). Within the container is the stream processor that will enforce the input and output schemas and a set of sensors that allow model performance to be monitored, tested and debugged.
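
The model abstraction described above – declared input/output schemas, an init step, an action function, with the engine enforcing the schemas on the stream – can be sketched like this. This is a hypothetical mini-engine to show the idea, not FastScore's actual conformance format or API:

```python
# A "model" declares its data contract plus init and action functions;
# the engine validates every record against the schemas before and after
# scoring, so the model stays language- and data-neutral.

INPUT_SCHEMA = {"x": float}
OUTPUT_SCHEMA = {"score": float}

def init():
    global coef
    coef = 2.0  # in a real engine this might be loaded from an attachment

def action(record):
    return {"score": coef * record["x"]}

def run_engine(records):
    init()
    out = []
    for rec in records:
        # Stream processor role: enforce the input contract...
        assert all(isinstance(rec[k], t) for k, t in INPUT_SCHEMA.items())
        result = action(rec)
        # ...and the output contract
        assert all(isinstance(result[k], t) for k, t in OUTPUT_SCHEMA.items())
        out.append(result)
    return out

scores = run_engine([{"x": 1.5}, {"x": 3.0}])
# → [{"score": 3.0}, {"score": 6.0}]
```

Because the contract lives outside the model's language, the same engine shape can host R, Python, C or PFA models interchangeably – which is the point of treating the model, not the code, as the asset.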

Besides the core engine, additional features include:

  • Model Deploy
    A plugin for Jupyter that integrates the engine with the Jupyter data science tool. Allows a data scientist using Jupyter to develop models and then check that they will be able to deploy them, generate the various files etc.
  • Model Manage
    Docker container that hooks into running instances of FastScore and provides a way to address and manage the schemas, streams and models that are deployed. Can be integrated with version control and configuration management tools.
  • Model Compare
    New in the 1.6 release, allows models to be identified as A/B or Champion/Challenger pairs and manage the run time execution of the models. Logs this data along with the rest of the data created.
  • Dashboard
    Shows running engines and Model Manage abstractions, changes and manages the run time bindings and abstractions, provides some charting of data including that generated by Model Compare etc. Uses the REST API so all of this could be done in another product, too.

Plus Command Line Interface and REST APIs for everything.

Because all of this is done within a Docker container, the product integrates with the Docker ecosystem for components such as systems monitoring and tuning. The Docker container allows easy deployment to a variety of cloud and on-premise platforms and supports micro-services orchestration.

FastScore allows an organization to create a reliable, systematic, scalable process for deploying and using all the analytic models developed by their analytic and data science teams – what might be called AnalyticOps, a “function” created to provide a centralized place to manage, monitor and manipulate enterprise analytics assets.

More information on FastScore.