
DecisionCAMP 2017: Day 1 Part 2


Bastian Steinert of Signavio came up after the break. Like Jan and me, he focused on their experience with DMN on Decision Management projects and the need for additional concepts. Better support for handling lists and sets, and for iteration and multiplicity, is something they find essential. They have developed extensions to support these things and are actively working with the committee – to present their suggestions and to make sure they end up supporting the agreed 1.2 standard.

They have also done a lot of work translating decision models in DMN into Drools DRL – the executable rule syntax of Drools. This implies, of course, that DMN models can be turned into any rules-based language, and we would strongly agree that DMN and business rules (and Business Rules Management Systems) are very compatible. From the point of view of a code generator like Signavio's, however, the ability to consume DMN XML generated from a model is probably preferable. With support for DMN execution in Drools this becomes practical.
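To make the translation idea concrete, here is a minimal sketch of how one row-per-rule decision table might look once rendered as if-then rules – analogous in spirit to generating DRL, though the table, inputs and thresholds here are invented for illustration:

```python
# Hypothetical "Eligibility" decision table rendered as if-then rules.
# Each branch corresponds to one decision-table rule; the logic is
# invented for illustration, not taken from Signavio's generator.

def eligibility(age: int, income: float) -> str:
    if age >= 18 and income >= 20000:   # rule 1
        return "ELIGIBLE"
    if age >= 18 and income < 20000:    # rule 2
        return "REVIEW"
    return "INELIGIBLE"                 # rule 3: age < 18

print(eligibility(30, 50000))  # ELIGIBLE
```

A real generator would emit one DRL rule per table row rather than nested conditionals, but the mapping from rows to rules is the same.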

Denis Gagne introduced how elements of DMN can, and perhaps should, be applied in some other standards. He (like us) has seen organizations gradually pull things out of their systems because they have separate lifecycles – data, process, decision-making etc. Extracting these elements helps with the disjoint change cycles and also with engaging business users in the evolution of operations and systems. Simpler, more agile, smarter operations.

In particular, Denis has been working with BPMN (Business Process Model and Notation), CMMN (Case Management Model and Notation) and DMN (Decision Model and Notation). All these standards help business and IT to collaborate, facilitate analysis and reuse, drive agreement and support a clear, unambiguous definition. BPMN and CMMN support different kinds of work context (from structured to unstructured) and DMN is relevant everywhere because good decisions are important at every level in an organization.

Trisotech wants to integrate these approaches – they want to make sure DMN can be used to define decisions in BPMN and CMMN, add FEEL as an expression language to BPMN and CMMN, harmonize information items across the standards and manage contexts.

The three standards complement each other and have defined, easy-to-use, arms-length integration (a process task invokes a decision or a case, for example). Trisotech is working to allow expressions in BPMN and CMMN to be defined in FEEL, making them executable and allowing reuse of its FEEL editor. Simple expressions can then be written this way while more complex ones can be modeled in DMN and linked. Aligning the information models matters too, so it is clear which data element in the BPMN model corresponds to which data element in DMN, etc. All of this helps with execution but also helps align the standards through a common expression language – BPMN and CMMN skipped this, so reusing the DMN one is clearly a good idea.
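The information-model alignment Denis describes can be pictured with a trivial check: any data item a process expression references should also be declared in the decision model, so there is no ambiguity about which element is which. This is a sketch of the idea only; the item names are invented:

```python
# Hedged sketch of harmonizing information items across BPMN and DMN.
# Data-object and input-data names below are invented for illustration.

bpmn_data_objects = {"applicantAge", "annualIncome", "riskScore"}
dmn_input_data = {"applicantAge", "annualIncome"}

# Items the process uses that the decision model does not declare:
unmatched = bpmn_data_objects - dmn_input_data
print(sorted(unmatched))  # ['riskScore']
```

In a tool like Trisotech's this correspondence would be maintained through shared item definitions rather than name matching, but the goal – no orphaned data elements – is the same.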

Denis has done a lot of good thinking around the overlap of these standards and how to use them together without being too focused on unifying them. Harmonizing and finding integration patterns, yes, unifying no.

Alan Fish took us up to lunch by introducing Business Knowledge Models. Business Knowledge Models, or BKMs, support reuse of decision logic. Many people (including me) focus on BKMs for reuse, and for reuse in implementation in particular – implying that BKMs are only useful at the decision logic level. Alan disagrees with this approach.

Alan’s original book (which started much of the discussion of decision modeling with requirements models) introduced knowledge areas, and these became BKMs in DMN. In his view, BKMs allow reuse and implementation, but that is not what they are for – they are for modeling business knowledge.

Businesses, he argues, are very well aware of their existing knowledge assets. They need to see how these fit into their decision-making, especially in a new decision-making system. Decision Requirements Models in DMN are great at showing people where specific knowledge is used in decision-making. But Alan wants to encapsulate existing knowledge in BKMs and then link those BKMs into these models. He argues you can show the functional scope of a decision using BKMs, and that itemizing and categorizing these BKMs helps scope the work.

Each BKM in this approach is a ruleset, table, score model or calculation. The complexity of these can be assessed and estimates/tracking managed. This is indeed how we do estimates too – we just use decisions, not BKMs, in this way. He also sees BKMs as a natural unit of deployment. Again, we use decisions for this, though like Alan we use the decision requirements diagram to navigate to deployed and maintainable assets. He thinks that user access and intent do not align perfectly with decisions. He also makes the great point that BKMs are a way for companies to market their knowledge – to build and package it so that others can consume it.

The key difference is that he sees most decisions having multiple BKMs while we generally regard these as separate decisions not as separate BKMs supporting a single decision.
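The structural relationship under discussion – a decision invoking a reusable BKM, rather than the knowledge living inside the decision itself – can be sketched as a function invoked by another function. The scoring logic and names below are invented for illustration:

```python
# Sketch of the DMN pattern: a BKM is a reusable, parameterized piece of
# business knowledge; a decision invokes it with its own inputs and then
# applies its own logic. All values here are invented.

def credit_score_bkm(payment_history: int, utilization: float) -> int:
    """BKM: a reusable score model, usable by many decisions."""
    return payment_history * 10 - int(utilization * 100)

def approve_loan_decision(applicant: dict) -> bool:
    """Decision: invokes the BKM, then decides on the result."""
    score = credit_score_bkm(applicant["payment_history"],
                             applicant["utilization"])
    return score >= 50

print(approve_loan_decision({"payment_history": 8, "utilization": 0.2}))
```

On Alan's reading a single decision might invoke several such BKMs; on ours, each reusable unit would more likely be modeled as a separate decision.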

Jan Vanthienen came up after lunch – not to talk about decision tables for once, but to talk about process and decision integration. In particular, how we can ensure consistency and prevent clashes. Testing, verification and validation are all good, but the best way to obtain correct models is to AVOID incorrect ones! One way to do this, for instance, is to avoid inconsistency, e.g. by using Unique hit-policy decision tables in DMN.
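The Unique hit policy means no two rules in a table may match the same input – a property that can be checked mechanically. A minimal sketch for a single numeric input, with an invented table, might look like this:

```python
# Sketch of a Unique ("U") hit-policy check: no two rules' input
# conditions may overlap. Rules here are half-open intervals [lo, hi)
# on one numeric input; the table is invented for illustration.

rules = [
    ("rule1", 0, 18),    # age in [0, 18)
    ("rule2", 18, 65),   # age in [18, 65)
    ("rule3", 60, 120),  # age in [60, 120) -- overlaps rule2
]

def overlapping_pairs(rules):
    pairs = []
    for i, (n1, lo1, hi1) in enumerate(rules):
        for n2, lo2, hi2 in rules[i + 1:]:
            if lo1 < hi2 and lo2 < hi1:  # intervals intersect
                pairs.append((n1, n2))
    return pairs

print(overlapping_pairs(rules))  # [('rule2', 'rule3')]
```

Real verification tools handle multiple inputs and richer condition types, but the principle – detect overlap before deployment rather than test for it afterwards – is exactly Jan's "avoid incorrect models" point.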

Jan introduced a continuum of decision-process integrations:

  1. No decisions therefore no inconsistency
  2. Decisions embedded in BPMN – bad, but no inconsistency
  3. Process-decision as a local concern – a simple task call to the decision – this limits inconsistencies to data passing and to unhandled decision outcomes
  4. A more realistic integration – several decisions in the DMN decision model are invoked by different tasks. This creates more opportunities for inconsistency – a task might invoke a decision before the tasks that invoke its sub-decisions have run, for instance.
  5. Plus, of course, no process at all, only a decision – which also has no consistency issues

In scenario 4 particularly there are some potential mismatches:

  • You can embed part of your decision in your process using gateways, creating inconsistency
  • You can fail to handle all the outcomes if the idea is to act on them with a gateway
  • If you need one of the DMN intermediate results in the process, you need to make sure it is calculated from the same DMN model
  • Putting sub-decisions in the process just to calculate them creates an inconsistency between the process and the decision model
  • Process models can invoke decisions in an order or a way that creates potential inconsistency with the declarative nature of the decision model – decisions can be recalculated, but many people will assume they are not, creating issues
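The second mismatch – a gateway that does not handle every outcome the decision can produce – is easy to illustrate. Outcome and branch names below are invented:

```python
# Sketch of an outcome-coverage check between a DMN decision and the
# BPMN gateway that acts on it. All names are invented for illustration.

decision_outcomes = {"APPROVE", "REFER", "DECLINE"}  # what the table can return
gateway_branches = {"APPROVE", "DECLINE"}            # branches the process models

unhandled = decision_outcomes - gateway_branches
print(sorted(unhandled))  # ['REFER']
```

A case reaching the unhandled outcome would stall or take a default path silently – exactly the kind of process-decision clash Jan argues should be prevented by construction.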

The last session before my panel today was Gil Ronen talking about patterns in decision logic in modern technical architectures, specifically those where decisions are going to be automated. His premise is that technical architectures need to be reimagined to include decision management and business logic as first-class components.

The established use case is one in which policy or regulations drive business logic that is packaged up and deployed as a business rules component. Traditional analytic approaches focused on driving insight into human decision-making. But today big data and machine learning are driving more real-time analytics – even streaming analytics – plus the API economy is changing the boundaries of decision-making.

Many technical architectures for these new technologies refer to business logic, though some do not. In general, though, they don’t treat logic and decision-making as a manageable asset. For instance:

  • In streaming analytics architectures, decision-making might be shown only as functions
  • In Big Data architectures, it may appear as questions the data can answer or as operators
  • In APIs, there’s no distinction between APIs that just provide data and those that deliver decision outcomes

They all vary, but they consistently fail to explicitly identify and describe the decision-making in the architecture. This lowers visibility, allows IT to pretend it does not need to manage decisions, and fails to connect the decision-making of the business to the decision logic in the architecture. Gil proposed a common pattern or approach to representation, and a set of core features, to make the case to IT to include it in architectures:

  • Make deployed decisions (as code) an explicit element, either in a simple architecture or perhaps within all the nodes in a distributed (Big Data) architecture
  • Identify Decision Services that run on a decision execution server (perhaps)
  • Identify Decision Agents that run in streams
  • He also identified a container with a self-contained API, but I think this is just a decision service
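Making a decision an explicit, first-class component mostly means giving it a declared contract rather than burying it in stream functions or generic APIs. A hedged sketch, with all names and logic invented for illustration:

```python
# Sketch of a decision as a first-class architectural component: a
# decision service with an explicit request/response contract, so it is
# identifiable, versionable and deployable on its own. Names invented.

from dataclasses import dataclass

@dataclass
class DecisionRequest:
    customer_id: str
    order_total: float

@dataclass
class DecisionResponse:
    decision: str
    reason: str

def discount_decision_service(req: DecisionRequest) -> DecisionResponse:
    """The decision logic lives behind one named, managed interface."""
    if req.order_total >= 100:
        return DecisionResponse("DISCOUNT", "order total over threshold")
    return DecisionResponse("NO_DISCOUNT", "below threshold")

print(discount_decision_service(DecisionRequest("c1", 150.0)).decision)
```

The same contract could back a Decision Service on an execution server or a Decision Agent embedded in a stream – the point is that the decision is visible in the architecture either way.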

These are all real problems, and addressing them would help. This is clearly a challenge, and has been for a decade. Hopefully DMN will change this.

