Bruce Silver wrote a couple of interesting posts on this topic – Integrating Process and Rules – Part 1 and Part 2. Reading Bruce’s posts, and thinking back on the various posts I have written about business process and business decision management (Risks of pursuing BPM without decisioning, Adding decisioning to your BPM initiative or this webinar recording), it seems to me that there is an emerging consensus on how to bring these things together. This is important because, as Bruce points out:
in decision-intensive processes like lending or claims, process and decision modeling are separate large-scale activities performed concurrently, usually by independent teams.
Bruce goes on to say that
Integrating BPMN and decision modeling ultimately comes down to properly representing decisions and their constituent rule families in the context of BPMN subprocesses and tasks.
To understand this you need to know a little about The Decision Model – an approach for analyzing and modeling the business rules in your decisions (check out my blog post here and this webinar by Barb von Halle for more detail). Each Decision Model describes the rules that derive a single business conclusion in a purely declarative way – it contains no procedural logic. Rules and rule families make up a Decision Model, and there is a notation and modeling approach to ensure these are complete and accurate. If you want to think about this in terms of a Business Rules Management System, each Decision Model specifies the logic requirement for a single Rule Set (see this post for more on rules, rule sets and decisions).
While some, perhaps many, operational business decisions require only a single rule set to execute and a single Decision Model to describe them, many are more complex. Because each Decision Model is a single declarative “space” and maps to a rule set, you need to be able to group multiple Decision Models into an operational business decision definition. Such a business decision definition is going to be a collection of steps, most of which will correspond to a rule set (implementation) or Decision Model (specification). The overall decision should be stateless and should have no side effects, allowing it to be implemented as a Decision Service that can be both part of a process and available to other processes and systems.
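To make this grouping concrete, here is a minimal sketch in Python. All names and rules are hypothetical – the point is the shape: each Decision Model maps to one rule set (a pure function deriving a single conclusion), and a stateless, side-effect-free decision definition groups several of them.

```python
# Hypothetical example: two Decision Models, each implemented as one rule set,
# grouped into a single operational business decision definition.
# The overall decision is stateless and side-effect free: it derives its
# conclusions from the inputs alone and writes nothing back.

def eligibility_rules(application: dict) -> dict:
    """Rule set for one Decision Model: derives a single eligibility conclusion."""
    eligible = application["age"] >= 18 and application["income"] > 0
    return {"eligible": eligible}

def pricing_rules(application: dict) -> dict:
    """Rule set for a second Decision Model: derives a single rate conclusion."""
    rate = 0.05 if application["income"] > 50_000 else 0.09
    return {"rate": rate}

def loan_decision_service(application: dict) -> dict:
    """The business decision definition: a collection of steps, each
    corresponding to a rule set, exposed as one Decision Service."""
    result = {}
    result.update(eligibility_rules(application))
    if result["eligible"]:
        result.update(pricing_rules(application))
    return result

print(loan_decision_service({"age": 30, "income": 60_000}))
# {'eligible': True, 'rate': 0.05}
```

Because the service takes data in and returns conclusions without touching any external state, the same function could sit behind a process task or be called directly by other systems.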
A number of business rules pure-plays support something called a Rule Flow (I prefer Decision Flow) to do this – FICO, IBM/ILOG, Corticon and InRule, for instance, all have such an artifact. This defines a series of steps and simple branches where each step could be a call to a function but is most likely to be the invocation of a rule set. A number of the newer players are heading down the same path, with SAP already offering a flow in its SAP Netweaver BRM product and Savvion discussing it. What is interesting is that the representation and capabilities of these flows are converging on a common set of elements:
- Using a subset of BPMN notation to describe them
- Mapping tasks to a single rule set or decision table
- Supporting most of the common branching artifacts
- Allowing for simple looping e.g. to invoke rules for each order item on an order
- Allowing for calls to sub flows
- Accessing additional data besides that being passed in from the calling process
- NOT handling most of the timeout and exception handling, which should be done by the calling process
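The elements above can be sketched as plain code – a hypothetical Decision Flow, not any vendor’s API. It shows tasks mapped to rule sets, a simple branch, and a simple loop over order items; note that it does no timeout or exception handling, which stays with the calling process.

```python
# Hypothetical Decision Flow sketch. Each "task" invokes one rule set;
# the flow itself only sequences, branches and loops.

def item_discount_rules(item: dict) -> float:
    """Rule set invoked per order item (the simple-loop case)."""
    return 0.10 if item["qty"] >= 10 else 0.0

def customer_tier_rules(order: dict) -> str:
    """Rule set mapped to a single task in the flow."""
    return "gold" if order["total"] > 1000 else "standard"

def order_discount_flow(order: dict) -> dict:
    """The flow: a series of steps with a branch and a loop.
    No timeout/exception handling here -- the calling process owns that."""
    tier = customer_tier_rules(order)                             # task -> rule set
    discounts = [item_discount_rules(i) for i in order["items"]]  # loop over items
    extra = 0.05 if tier == "gold" else 0.0                       # simple branch
    return {"tier": tier, "item_discounts": discounts, "extra": extra}

print(order_discount_flow({"total": 1500, "items": [{"qty": 12}, {"qty": 2}]}))
# {'tier': 'gold', 'item_discounts': [0.1, 0.0], 'extra': 0.05}
```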
While I like Decision Flow as a name for this, I am beginning to think that Decision Process is a better one – essentially acknowledging that it is a kind of process. Unlike Bruce, I don’t think it is useful for the internal structure of a Decision Model or of an executable rule set to be shown in the process diagram. Instead I would like to see these declarative objects represented as Tasks, with the Decision Process flow collecting them into a single, stateless, side-effect-free sub-process that can be called and used whenever necessary. Such a Decision Process could be deployed as a stateless, side-effect-free Decision Service for use beyond processes as well as re-used across multiple processes. Rules, rule families, rule sets and Decision Models could all be reused across Decision Processes too, allowing for granular re-use; other rule analysis techniques (such as those described by Ron Ross or by the various BRMS vendors) could also be applied.
What do you all think?
James, you are on a great topic here, aligning the terminology around decision modeling. Idiom published a definition for a decision model in 2007 in the BRJ [http://www.brcommunity.com/b326a.php?] that draws some parallels between a decision model and a data model. That is, a single decision is related to a decision model as an entity is related to a data model. Each must have context within the model, and it is the context that makes the decision or entity meaningful. Similarly you can equate rules to attributes – each may be reused across decisions and entities respectively.
The alternative, and I see parallels with this in the above discussion, is ‘binary’ modeling. This was once promoted as a concept in data modeling but died a natural death when it became apparent that each data element needed to carry the full weight of its context as metadata if it was to be useful. The same will be true in decisioning – the model (which I believe is the thing you are looking for a name for) is more valuable than any single decision because the business needs the whole model to derive value.
This concept of decision model still doesn’t appear to have this level of paramountcy in either the discussion or the links above. A single decision implemented within a decision service is very rare – a database with a single table is rare for the same reason – the business world does not deal in unitary concepts! It is relationships that are important, because only related things can create value (how can you create value out of a single thing unrelated to anything else?). So decisions (and data models) must have greater complexity to deal with multiple perspectives if they are to describe value creation. Ultimately, you need a complete model of inter-related decisions to achieve value change. Assigning the term decision model to the equivalent of a table (a unitary concept) in data terms is going to put Business Rules terminology out of step with its peer data terminology – this would be unsustainable in my view.
We need to align decision and data concepts and their terminology because you cannot describe one without the other – data describes the business domain at rest; decisions describe the transformations in the same business domain that generate value. These are a static and a dynamic view of the same thing that collectively describe that thing – and the ‘thing’ is business value. Processes do neither – they are merely enablers, plumbing if you like. If you change the data and/or the decisioning you must change the definition of the business value proposition itself. But any process can be used to service the domain problem – the process is not required to define anything. For this reason, commoditized generic processes are the way of the future. So I think that Bruce’s comment – “Integrating BPMN and decision modeling ultimately comes down to properly representing decisions and their constituent rule families in the context of BPMN subprocesses and tasks.” – should be read as “Integrating BPMN and decision modeling ultimately comes down to determining the decision models that are required to create business value, and then ensuring that appropriately configured subprocesses and tasks are available to service them.” Additional discussion on this subject can be found here: bit.ly/7S9RSx
I tend to agree that for rules of any interesting complexity you don’t necessarily want to expose those to the process as a set of BPMN-style artifacts, but rather reduce them to a series of (somewhat) black box components that you can leverage in the process.
I can’t agree, however, with the first comment on this thread. Mark, if you can’t agree that processes add value, I’m sure you would agree that bad process destroys value. Process is not just some valueless plumbing that doesn’t matter. Moreover, you seem to be arguing for a use of rules that increases complexity, reduces encapsulation, and makes useful abstractions difficult – I’m not sure that’s what you intended to argue, but that’s how I read comments like “So decisions (and data models) must have greater complexity to deal with multiple perspectives if they are to describe value creation. Ultimately, you need a complete model of inter-related decisions to achieve value change.”
I understand the hammer-nail problem of working in rules – everything starts to look like a rule. And that problem isn’t specific to rules; it happens with all manner of useful, generally applicable technologies. But the ability to solve a particular problem with a particular technology doesn’t necessarily mean that it is the best-fit technology for that problem (otherwise we could have just stopped with assembly language and left it at that…).
Scott, I would like to try and clarify some of my comments because I certainly did not intend that they be interpreted as per some of your remarks.
Firstly, while I very much agree that processes have value, my assertion is that any particular process is not core to the value proposition that the business is offering. The business value proposition can be derived by reducing policy statements to one or more decisions (which are in turn a collection of ‘rules’) that define a ‘state change’ on something of importance to the business – it is this state change that creates value for the business. If I then change the decision-making logic as defined by this policy, I must by definition change the business value proposition itself. But if I enable the decision-making with process (a) rather than process (b), have I changed the value proposition? We have customers who face exactly this scenario, for instance an algorithm that calculates a cost for hospital patient episodes. This algorithm uses the decisioning knowledge of the algorithm vendor within a process supplied by the hospital. Given that the actual process is unknown when the decisioning is defined, this is evidence that the process is an independent service provided to the decision module. The process is required to deliver the data to the decision module, and to respond to the decisions made, and this act of itself does have value. But there is no discretion available to the process to drive or determine the decision-making, at least in a healthy system – it must deliver exactly the right data, and it must respond correctly to the decisions made, otherwise it is not a valid process for that decision module. The alternative of creating a process and then forcing the decision logic to comply is not tenable – it is the process that is discretionary for the decisioning, not the other way around.
I understand that there is a body of practice that recommends working from the bottom up, using ‘use cases’ to identify and then describe processes, with decision making identified as part of the process metadata. By way of contrast, using a decision-centric approach, I can ask (for instance) an insurance underwriting expert how they ‘decide’ to approve an insurance application. This reliably leads to the identification of both the requisite data and the underwriting decisions that are derived from that data. And when I know the data and decisioning, I have also prescribed the process requirement to a very high degree. This does not work in reverse – I cannot infer the decision making from the process.
How would this work if we start with the process? The process will already assume some data (or not, in which case we can proceed no further) – how did the process designer determine what data was relevant? Is this data then given to the underwriter in the above example, and the underwriter told to make his underwriting decisions based on it?
Secondly, I argue that the decision-centric approach reduces rather than increases complexity. The decision logic is a single unit of work, a ‘black box’ from a system perspective. The process is built to support the black box. Neither needs to know the internal structure of the other. The complexity of the black box is exactly as the business value proposition requires – no more, no less. This clear separation of responsibility between the owners of business policy and of business process is the key to simplifying systems – to the point where we have many customers who provide one or the other as a service to implement reuse on a commercial scale. The decision makers sell expertise, and the process providers sell applied infrastructure. If the decision model is complex and requires thousands of steps and tens of thousands of variables to calculate a meaningful result (like the patient costing), so be it. This decisioning complexity is bounded by the business definition of the value transaction. Such a transaction cannot be processed in parts, unless the business itself sees value in the parts. For instance, if I have one decision model for the inherent vice of an insurance risk, and another for the geographic risk, does the business get any value if either of these ‘transactions’ is executed separately? Our experience suggests not – no value is derived until the insurance application is approved or rejected, and because both risk assessments must derive from the same state of the data, they must both occur within the same ‘transaction’. From my perspective the boundary of the business decisioning transaction is clear. As is the process that must support it.
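The insurance example above can be sketched in a few lines of Python. All names, scores and thresholds here are hypothetical – the point is that the two risk assessments only yield business value when combined inside one decisioning transaction, which the process sees as a single black box.

```python
# Hypothetical sketch of the underwriting example: two decision models
# (inherent vice risk and geographic risk) whose conclusions have no
# stand-alone value, composed into one black-box decision.

def inherent_vice_risk(risk: dict) -> int:
    """Decision model 1: score the inherent vice of the insured cargo."""
    return 3 if risk["cargo"] == "perishable" else 1

def geographic_risk(risk: dict) -> int:
    """Decision model 2: score the geographic exposure."""
    return 3 if risk["region"] == "flood_zone" else 1

def underwriting_decision(risk: dict) -> str:
    """Single unit of work: both assessments run against the same state of
    the data, and only the combined approve/reject conclusion has value."""
    score = inherent_vice_risk(risk) + geographic_risk(risk)
    return "approve" if score <= 4 else "reject"

print(underwriting_decision({"cargo": "perishable", "region": "inland"}))
# approve  (score 3 + 1 = 4)
```

The calling process never inspects the two inner models; it delivers the data and responds to the single conclusion, which is exactly the separation of responsibility described above.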