Silvie Spreeuwenburg of LibRT came up after lunch to talk about a rules-based approach to traffic management. Traffic management is a rapidly changing area, of course, thanks to IoT and self-driving cars among other things.
When one is considering traffic there are many stakeholders: not just road users but also businesses reliant on road transport, safety authorities and so on. The authorities have a set of traffic priorities (routes where they need good flow), they have delays and they have restrictions driven by safety or legal issues. They manage this today and expect to keep doing so, even as technology evolves.
To manage this they create lots of books about traffic management, incidents and other topics for each road, each containing flow charts and instructions. This creates a lot of overlap, so it's important to separate problem definition from problem solution and to be specific – differentiate between things you must or may not do and those that are or are not actually possible.
The solution involves:
- Policy about priority and traffic management norms
- Identifying decision points, flow control points and segments in the road
- Standard services – increase flow, decrease flow, reroute on a link in the network
- Decisions to determine what the problem is, decide the right thing to do, check whether anything conflicts, and execute
The logic is all represented in decision tables. And applying the approach has successfully moved traffic to lower priority roads. Plus it fits very well with the way people work and connects changes in policies very directly to changes in behavior.
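To make the decision-table idea concrete, here is a minimal first-hit sketch (my own illustration, not from the talk) that maps a segment's priority, measured congestion and any active restriction to one of the standard services; the attributes, thresholds and service names are all hypothetical.

```python
# Hypothetical sketch of a traffic-management decision table: segment priority and
# measured congestion map to one of the standard services described above.
# Segment attributes, thresholds and service names are illustrative, not from the talk.

DECISION_TABLE = [
    # (priority, congestion_at_least, has_restriction, action)
    ("high",   0.80, False, "reroute_to_lower_priority_link"),
    ("high",   0.60, False, "increase_flow"),          # e.g. open extra lane, retime signals
    ("normal", 0.80, False, "decrease_inbound_flow"),  # meter traffic entering the segment
    ("normal", 0.00, True,  "escalate_to_operator"),   # restrictions conflict with automated action
]

def decide(priority: str, congestion: float, restricted: bool) -> str:
    """Return the first matching standard service for a road segment (first-hit policy)."""
    for row_priority, threshold, needs_restriction, action in DECISION_TABLE:
        if priority == row_priority and congestion >= threshold and restricted == needs_restriction:
            return action
    return "no_action"

print(decide("high", 0.85, False))   # -> reroute_to_lower_priority_link
```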
Marcia Gottgtroy from the New Zealand tax department presented on their lessons learned and planned development in decision management. They are moving from risk management to a business strategy, supported by analytical decision management. The initial focus was on building a decision management capability in the department, starting with GST (sales tax). This went very well, producing a decision service that very quickly showed proof of straight-through processing (STP) and operational efficiency. The service also had a learning loop based on the instrumentation of the service. They automated some of this (where the data was good) but did manual analysis elsewhere – not trying to over-automate nor waiting for something perfect.
After this initial success, the next step is to focus on business strategy and get to decision management at an enterprise level – hybrid and integrated solutions supported by a modern analytical culture driven by the overall strategy. They need to define a strategy, a data science framework and a methodology – all in the context of an experimental enterprise. They also began to use decision modeling with DMN – using decision requirements models to frame the problem improved clarity, understanding and communication, and it documented this decision-making for the first time.
But then they had to stop, as the success had caused the department to engage in a business transformation to replace and innovate everything! This has created a lot of uncertainty but also an opportunity to focus on their advanced analytics platform and the management of uncertainty. The next big shift is from decision management to decision optimization. Technology must be integrated, and different approaches and an ability to experiment are key.
Nigel Crowther of IBM came up next to talk about business rules and Big Data. His interest is in combining Big Data platforms and AI with the transparency, agility and governance of business rules. Big Data teams tend to write scripts and code that are opaque, something business rules could really help with. Use cases for the combination include running massive batches of decisions, simulations on large datasets and detecting patterns in data lakes.
The combination uses a BRMS to manage the business rules, deploys a decision service and then runs a map job to fetch this and run it in parallel on a very large data set – distributing the rules to many nodes along with the data so the rules can be run against it in parallel and very fast. The Hadoop dataset is stored on distributed nodes, each of which runs the rules over its slice of the data in its own map job before the results are reduced down to a single result set – bringing the rules to the data. This particular example uses flat data about passengers on flights and uses rules to identify the tiny number of “bad actors” among them – at 20M passengers per day it’s a real needle-in-a-haystack problem. The batch process is used to simulate and back-test the rules, and then the same rules are pushed into a live feed to make transactional decisions about specific passengers. A serious setup with 30 nodes, for instance, could scan 7B records (a year’s worth) in an hour and a half – around 1.2M records per second.
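As a rough illustration of the “bring the rules to the data” pattern (not IBM's actual implementation), here is a PySpark-style sketch in which a stand-in rules function is evaluated in parallel across the partitions of a large passenger dataset; the file path, field names and rule logic are all invented.

```python
# Illustrative sketch of running rules in parallel over a distributed dataset.
# Assumes PySpark is available; the dataset path, fields and rule are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rules-on-big-data").getOrCreate()

def flag_bad_actor(passenger: dict) -> bool:
    """Stand-in for invoking the deployed decision service on one record."""
    return passenger.get("watchlist_hits", 0) > 0 and passenger.get("cash_ticket", False)

passengers = spark.read.parquet("passengers/")   # hypothetical path to the flat dataset

def run_rules_on_partition(rows):
    # Each partition is pushed through the rules locally on its node (the map step).
    for row in rows:
        passenger = row.asDict()
        if flag_bad_actor(passenger):
            yield passenger

# Only the tiny set of flagged records comes back to the driver (the reduce step).
flagged = passengers.rdd.mapPartitions(run_rules_on_partition).collect()
print(f"{len(flagged)} passengers flagged for review")
```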
It’s also possible to use Big Data and analytic tools to analyze the rules themselves. Customers want, for instance, to simulate the impact of rule changes on large portfolios of customers. The logs of the rules executed over a year, say, can also be analyzed quickly and effectively using a Big Data infrastructure.
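A trivially small sketch of the rule-log analysis idea – counting how often each rule fired across a year of execution logs. In practice this would run on the Big Data stack; the log file name and record fields here are assumptions.

```python
# Count rule firings from a year of decision logs (assumed to be JSON lines,
# each with a "rules_fired" list). File name and fields are hypothetical.
import json
from collections import Counter

fired = Counter()
with open("decision_log_2016.jsonl") as log:
    for line in log:
        record = json.loads(line)
        fired.update(record.get("rules_fired", []))

for rule, count in fired.most_common(10):
    print(f"{rule}: fired {count} times")
```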
Vijay Bandekar of InteliOps came up to talk about the digital economy and decision models to help companies face the challenges this economy creates. The digital economy is driven by the explosion of data and the parallel explosion in IoT devices. This data is increasingly being stored, but little if any of it is being effectively used. We need applications that can manage this data and take advantage of it, because it’s just not possible for even the best human staff to cope – autonomous, learning, real-time decision-making systems are required. These systems require inferencing, reasoning and deductive decision models. While the algorithms work, it can be cumbersome to manage large rule bases, and while machine learning approaches can come up with the rules, integrating these manually can be time-consuming.
Architecturally, he says, most organizations focus on stateless decisioning with a database rather than a stateful working memory. Yet the stateful approach offers advantages in the era of fast moving, streaming data while also taking advantage of the rapidly increasing availability of massive amounts of cheap RAM. This requires agenda control and transparency, as well as effective caching and redundancy/restoration.
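Here is a tiny sketch of my own (with invented event types and rules) of what the stateful approach looks like compared with stateless decisioning: working memory is kept in RAM per customer, and each new event is evaluated against the accumulated facts rather than a fresh database lookup.

```python
# Minimal stateful decision agent: facts accumulate in an in-memory working memory
# across streaming events. Event types, window size and the rule are all invented.
from collections import defaultdict, deque

class StatefulDecisionAgent:
    def __init__(self, window: int = 50):
        # Working memory: the most recent events per customer, kept in RAM.
        self.working_memory = defaultdict(lambda: deque(maxlen=window))

    def on_event(self, event: dict) -> str:
        facts = self.working_memory[event["customer_id"]]
        facts.append(event)
        # Rule over accumulated state: three failed payments in the window -> act now.
        failures = sum(1 for f in facts if f.get("type") == "payment_failed")
        return "open_collections_case" if failures >= 3 else "no_action"

agent = StatefulDecisionAgent()
for e in [{"customer_id": 1, "type": "payment_failed"}] * 3:
    decision = agent.on_event(e)
print(decision)   # -> open_collections_case
```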
It’s also important to add learning models, with both supervised and unsupervised learning engines, to handle the increasing volumes of data. These learning models need to be injected into the streams of data, he argues, to make decisions as the data arrives rather than being pointed at stored data. In addition, combinations of algorithms – ensembles – are increasingly essential given the variety of data and the value of different approaches in different scenarios.
The combination delivers an adaptive decision framework for real-time decisions. It uses stateful decision agents based on business rules and continuous learning using ensembles of analytic approaches on streaming data.
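And a similarly minimal sketch of the ensemble idea on streaming events – several toy models vote on each record and the combined score drives the decision. Real ensembles would use trained supervised and unsupervised models; all the fields and thresholds here are made up.

```python
# Toy ensemble on a streaming record: each "model" votes, votes are combined into a
# score, and the score drives the decision. Models, fields and cutoffs are invented.
def model_a(record): return 1 if record["amount"] > 900 else 0
def model_b(record): return 1 if record["country"] not in record["usual_countries"] else 0
def model_c(record): return 1 if record["hour"] in range(0, 5) else 0

ENSEMBLE = [model_a, model_b, model_c]

def score(record: dict) -> float:
    """Majority-style vote across the ensemble, returned as a risk score 0..1."""
    return sum(m(record) for m in ENSEMBLE) / len(ENSEMBLE)

def decide(record: dict) -> str:
    return "hold_for_review" if score(record) >= 2 / 3 else "approve"

event = {"amount": 1200, "country": "BR", "usual_countries": {"NZ"}, "hour": 3}
print(decide(event))   # -> hold_for_review
```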
Last up is Tim Stephenson of Omny Link. His recent focus is on smaller companies, and one of the key things about the new digital economy is the way it allows companies to punch above their weight. Small companies really need to drive leads to conclusion and manage customers effectively. CRM systems, even if they start free, can be complex and expensive to use. To unlock the value and respond appropriately – and faster – so you can serve more customers, you need to do a set of things well:
- Have a consistent, published domain model to make data widely available across channels. For small businesses, this means a simple but extensible customer domain model e.g. contact, account etc.
- Use APIs to support a wide variety of interfaces – contracts. This supports lots of UIs including future ones.
- Workflow or process to seamlessly drive data through the organization and its partners
- Consistent decision-making that enforces policy and ensures compliance with regulations
He walked through how these elements allow you to deal with core scenarios, like initial lead handling, so the company can manage leads and customers well. You need to use APIs to record well-understood data, decide what to do and make sure you do what you decided to do.
The value of DMN (especially decision tables) is that it allows the business people to define how they want to handle leads and how they want to make decisions. They can’t change the structure of the decisions, in his case, but they can tweak thresholds and categories, allowing them to focus and respond to changing conditions. And these decisions are deployed consistently across different workflows and different UIs – the same decision is made everywhere, presenting the standard answer to users no matter where they are working (a key value of separating decisions out formally as their own component). Using Decision Requirements Models to orchestrate the decision tables keeps them simpler and makes the whole thing more pluggable.
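To illustrate the point (with invented values, not Tim's actual tables): the structure of a lead-routing decision stays fixed while the thresholds business people tweak live in a small table alongside it, and every workflow and UI calls the same decision.

```python
# Toy lead-routing decision: the logic's structure is fixed, the thresholds are a
# table business users can edit. Values and outcome names are hypothetical.
THRESHOLDS = {"hot": 10_000, "warm": 1_000}   # editable without touching the logic

def route_lead(estimated_value: float, existing_customer: bool) -> str:
    if existing_customer and estimated_value >= THRESHOLDS["hot"]:
        return "assign_account_manager"
    if estimated_value >= THRESHOLDS["hot"]:
        return "call_within_one_hour"
    if estimated_value >= THRESHOLDS["warm"]:
        return "add_to_nurture_campaign"
    return "send_standard_reply"

# The same decision is called from every workflow and UI, so the answer is consistent.
print(route_lead(12_000, existing_customer=False))   # -> call_within_one_hour
```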
The payback for this has been clear. One user found that the time saved was about 75%, but in addition, the improvement in response time ALSO means the company closes more work. Even small businesses can get an advantage from this kind of composable, consistent, repeatable, auditable, transparent decision automation.
–
And that’s a wrap. Next year’s Decision CAMP will probably be in Luxembourg in September, and don’t forget that all the slides are available on the Decision CAMP Schedule page.