
All industry standards offer interchange. Successful standards offer skills interchange not just a technical interchange format.

The Decision Model and Notation (DMN) decision modeling standard has a published XML interchange format, of course, and several of the committee’s members are working really hard to iron out the remaining issues and make the XML interchange more robust. The ability to interchange decision models between vendors is a valuable one but the opportunity that DMN offers for skills interchange is, if anything, even more valuable.

DMN offers two critical kinds of skills interchange – it offers those working with business rules or decision logic a way to transfer their skills between products and it offers business analysts a way to transfer skills between different kinds of decisioning projects.

The vast majority of the business logic in a decisioning system can be defined using the two core DMN components:

  • Decision Requirements Diagrams structure decision problems, break them into coherent pieces. They show where data is used and what knowledge assets (policies, regulations, best practices) are involved.
  • Decision tables specify the logic for most of the decisions on the diagram using simple constructs.

You don’t get 100% of the execution defined using these two elements – you need to add “glue” of various kinds – but almost 100% of the business content is. This means someone who knows DMN can transfer these skills between DMN tools. It also means they can transfer these skills to business rules products, because the approach of decomposing a decision problem into a Decision Requirements Diagram before writing logic is completely transferable and, frankly, most decision tables look and work the same even if the product doesn’t support DMN yet.
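
To make the skills-transfer point concrete, here is a minimal sketch – plain Python rather than any vendor’s product or DMN’s FEEL, with invented rule conditions and thresholds – of the rules-plus-hit-policy shape that most decision tables share:

```python
# A minimal, illustrative decision-table evaluator (not DMN/FEEL; names and
# thresholds are invented). Most decision tables boil down to the same idea:
# ordered rules, each with input conditions and an outcome, plus a hit policy.

RULES = [
    # (condition on the inputs, outcome if the condition matches)
    (lambda a: a["age"] < 18,                  "Refer"),
    (lambda a: a["credit_score"] >= 700,       "Accept"),
    (lambda a: 600 <= a["credit_score"] < 700, "Manual review"),
    (lambda a: True,                           "Decline"),  # catch-all row
]

def decide(applicant, rules=RULES):
    """'First' hit policy: return the outcome of the first matching rule."""
    for condition, outcome in rules:
        if condition(applicant):
            return outcome
    return None  # unreachable here because of the catch-all row

if __name__ == "__main__":
    print(decide({"age": 35, "credit_score": 650}))  # -> "Manual review"
```

Swap the hit policy or the condition syntax and you have, roughly, what most business rules products offer – which is why the modeling skill transfers so easily.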

The second kind of skills interchange comes because decision modeling works for lots of different kinds of projects. We have used decision modeling and DMN to:

  • Define business rules / decision logic for automation
  • Frame requirements for predictive analytics and machine learning projects
  • Orchestrate a mix of packaged and custom decisioning components including business rules, predictive analytics, AI and optimization
  • Model manual decision-making for consistency, mixing manual and automated decision making
  • And more – see Decision Modeling has value across many projects

This means that business analysts who learn decision modeling can apply that skill across lots of projects.

Learn decision modeling and learn DMN. It’s a great skill that lets you express business decision problems, and one that is transferable – interchangeable – across projects and products.

BPMInstitute.org would like to get your insights on how you’re using Digital Decisioning and Analytics in your organization. Your feedback will help shape articles and focus at BPM Institute for 2018.

Digital Decisioning and Analytics survey

  • Are you using analytics and reporting to innovate business functions and models?
  • What is the state of your analytics efforts as they relate to processes?

Share your insights with BPMInstitute.org and you’ll be entered into a random drawing to win one free OnDemand course of your choice


AI is a hot topic and we get asked a lot by clients how they can succeed with AI or cognitive technology. There’s often a sense of panic – “everyone is doing AI and we’re not!” – and a sense that they have to start a completely separate initiative, throw money at it and hope for the best. In fact, we tell them, they have some time – they need to keep calm and focus on decisions.

The folks over at HBR had a good article about adopting AI based on a survey of executives. This is well worth a read and makes a couple of critical points.

  1. AI really does work, if you use it right. There’s plenty of hype but also plenty of evidence that it works. But like all technologies it works when it works, it’s not a silver bullet.
  2. Not everyone is using AI – in fact hardly anyone is doing very much with it. Most regular companies are experimenting with it, trying it out in one small area. Despite what you read there’s still time to figure out how to use AI effectively in your organization. Stay Calm.
  3. AI works better if you have already digitized your business. Of course AI is a decision-making technology, so what matters here is that you have digitized decision-making.  Focus AI on digital decision-making.

To succeed with AI we have a concrete set of suggestions we give to customers, many of which overlap with the HBR recommendations as you would expect:

  • Get management support
    The best way to do this is to know which decisions you are targeting and show your executives how these decisions impact business results. Being able to describe how improving a particular decision will help an executive meet their objectives and exceed their metrics will get their attention.
  • DON’T put technologists in charge
    Like data analytics, mixed teams work best for AI. Make sure the team has business, operations, technology and analytics professionals from day 1. For maximum effectiveness, use decision modeling with DMN to describe the decision-making you plan to improve as this gives everyone a shared vision of the project expressed in non-technical terms.
  • Focus on the decision not AI
    You will want to mix and match AI with other analytic approaches, explicit rules-based approaches and people-based approaches to making decisions. Most business decisions involve a mix:

    • Rules express the regulatory and policy-based parts of your decision
    • Data analytics turn (mostly) structured data into probabilities and classifications to improve the accuracy of your decisions
    • People make the decisions that involve interaction with the real world, as well as those that are poorly scoped or defined
    • And AI handles natural language, image processing, really complicated pattern matching etc.
  • Make sure you focus on change management
    Change is always a big deal in Decision Management projects – as soon as you start changing how decisions are made and how much automation there is you need to plan for and manage change. AI is no exception – it will change roles and responsibilities and change management will be essential for actual deployment (distinct from a fun experiment).

AI is a decision-making technology. As such it is a powerful complement to Decision Management – something to be considered alongside business rules and analytics, and integrated into a coherent decision model. Here’s one example, for a company that needed to automate assignment of emails. This depended on who it was from, what it was about and how urgent it was:

  • Deciding which client an email was from involved rules run against the sender and sender’s domain.
  • Deciding on the subject of an email involved rules about senders (some automated emails always use the same sender for the same subject) and rules about subject lines (some are fixed format).
  • This left too many unclassified, however, so the subject and body of the text were analyzed using text analytics to see which products were mentioned in the email to identify them (analytically) as the subject of the email.
  • Urgency was hard too. Historical data about the client was analyzed to build a customer retention model. This analytic score was used to increase the urgency of any email from a client who was a retention risk.
  • Finally AI was used to see what the tone of the email was – was the email a complaint or a problem or just a description? The more likely it was to be a problem or complaint, the higher the urgency.
  • Each of these sub-decisions used a different technology, but all were orchestrated in a single decision model to decide how to assign the email. A rough sketch of this kind of orchestration follows below.
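
The sketch below is purely illustrative – the function names, scores and thresholds are invented, and the real project used a decision model and commercial components rather than hand-written Python – but it shows the shape: separate sub-decisions (rules, text analytics, a predictive score, tone analysis) combined by one top-level assignment decision.

```python
# Illustrative only: invented names, scores and thresholds.

def decide_client(email):
    """Rules on the sender and the sender's domain (stubbed)."""
    if email["sender"].endswith("@bigclient.com"):
        return "BigClient"
    return "Unknown"

def extract_product_mentions(text):
    """Stand-in for text analytics over the subject and body."""
    for product in ("Widget", "Gadget"):
        if product.lower() in text.lower():
            return product
    return "General enquiry"

def decide_subject(email):
    """Rules on sender/subject line, falling back to text analytics."""
    if email["subject"].startswith("AUTO:"):
        return "Automated notice"
    return extract_product_mentions(email["subject"] + " " + email["body"])

def decide_urgency(retention_risk, complaint_likelihood):
    """Combine an analytic retention score and an AI tone score (both stubbed)."""
    urgency = 1
    if retention_risk > 0.7:
        urgency += 1
    if complaint_likelihood > 0.5:
        urgency += 1
    return urgency

def assign_email(email, retention_risk=0.8, complaint_likelihood=0.6):
    """Top-level decision orchestrating the sub-decisions."""
    return {
        "client": decide_client(email),
        "subject": decide_subject(email),
        "urgency": decide_urgency(retention_risk, complaint_likelihood),
    }

if __name__ == "__main__":
    email = {"sender": "jane@bigclient.com",
             "subject": "Problem with my Widget",
             "body": "My Widget stopped working and I am quite unhappy."}
    print(assign_email(email))  # -> {'client': 'BigClient', 'subject': 'Widget', 'urgency': 3}
```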

AI is certainly new and different, but success with it requires the same focus on decisions and decision-making. Put decisions first.

My friends at Actico recently had me record some videos on Decision Management and decision modeling with DMN. Here’s the first – 3 reasons why financial and insurance companies should adopt Decision Management.

Enjoy.

As part of the build up to Building Business Capability 2017 I gave an interview on transforming the business. Check it out.

If you want to come to BBC 2017, there’s still time to register with code SPKDMS for a 10% discount.

If you are coming to BBC 2017, don’t forget to register for my tutorial Decision-Centric Business Transformation: Decision Modeling. See you there.

I have been working on Decision Management since we first started using the phrase back in 2002 – I’m probably the guilty party behind the phrase – and Decision Management Solutions (the company I run) does nothing but Decision Management. This gives us a unique perspective on new technologies and approaches that show up. One of the most interesting developments in Decision Management recently has been the use of decision modeling – especially the use of decision modeling with the Decision Model and Notation (DMN) standard.

In DMN a decision is the act of determining an output or selecting an option from inputs. In this context we mean a repeatable decision, allowing us to define a decision model for our decision-making approach. If we use DMN then we:

  • Document a decision and its sub-decisions, the components of decision-making
  • Capture specific questions and allowed answers for each decision and sub-decision
  • Identify the data required and how it is used by these decisions
  • Document sources of knowledge about decision-making
  • Define relationships between decisions, metrics, organizations and processes
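
For readers who like to see structure as code, here is a minimal sketch of the kind of information the list above describes, using illustrative Python dataclasses with invented names – not the DMN metamodel itself:

```python
# Illustrative data structures only – invented names, not the DMN metamodel.
from dataclasses import dataclass, field
from typing import List

@dataclass
class KnowledgeSource:
    name: str                      # e.g. a policy document or regulation

@dataclass
class InputData:
    name: str                      # data required by one or more decisions

@dataclass
class Decision:
    name: str
    question: str                  # the specific question this decision answers
    allowed_answers: List[str]     # the permitted answers
    requires_decisions: List["Decision"] = field(default_factory=list)
    requires_data: List[InputData] = field(default_factory=list)
    knowledge_sources: List[KnowledgeSource] = field(default_factory=list)

eligibility = Decision(
    name="Determine Eligibility",
    question="Is this applicant eligible for the product?",
    allowed_answers=["Eligible", "Ineligible", "Refer"],
    requires_data=[InputData("Applicant details")],
    knowledge_sources=[KnowledgeSource("Eligibility policy v3")],
)
print(eligibility.question, "->", eligibility.allowed_answers)
```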

Our experience with DMN is both broad and deep – we have trained nearly 1,000 people and used decision modeling on dozens of real-world projects. We have seen how valuable it is on these projects and we particularly notice how many different kinds of projects it is valuable for.

Unlike some DMN proponents, we don’t think that defining executable decision models is the only reason for using DMN. Here are some other reasons you might use decision modeling and DMN on your projects:

  • Eliciting decision-making requirements using DMN decision models is more productive, more fun, and much more rapid than traditional approaches to business rules or requirements. This is true no matter how you plan to implement the requirements – as an executable DMN model, as business rules in a BRMS, as analytics or as a decision-support environment.
  • DMN decision models  make impact analysis and traceability really work at a business level. You can answer questions like “who needs to know if I change these rules” or “who has to believe this analytic” and see how changes will impact your business results.
  • DMN decision models let you mix and match analytics, AI, rules, manual decision-making, machine learning, optimization and all other decision-making approaches in a single model. As the world moves beyond explicit logic to data-driven decision-making, this is critical future-proofing for your business.
  • DMN decision models let data science teams see what the requirements are for their analytic models, helping focus their efforts and ensure that the results will have a clear path to implementation and business impact.


Specific projects we have used decision modeling on have shown us that decision modeling with DMN is:

  • More Rapid
    It’s much faster than traditional approaches with customers telling us that in 1 hour we developed an understanding of the problem “that would have taken 10”
  • More engaging
    Business SMEs participate more fully in decision modeling and our experience is that they get so into it that they start freeing up their schedule so they can participate more!
  • Really enlightening
    Decision modeling clarifies the real requirements for data and logic in a decision. So much so that we regularly hear from experienced SMEs that they learn something from building the model. Some of our customers’ training departments have even taken to using the decision model to train people.
  • Much more open
    Because human and automated decisions can be modeled together you can use DMN decision modeling when dealing with any kind of data, any kind of decision. That makes it easy to adopt as part of your standard approach.
  • Better at finding reuse
    Because business users can clearly express their problem and share this with others you can get agreement and discussion about reuse and common/shared decision-making long before you get to implementation

We have had great success with decision modeling and are helping many organizations adopt it right now by delivering a business value pilot that goes from a business need to a working pilot in a few weeks. Get in touch if we can help you.

Last week we completed the main phase of a proof of concept project at a client based in Jakarta, Indonesia. After the report-out, I tweeted:

Loving watching a Dr at one of my clients explain (in Bahasa) the decision model for claims handling they built #dmn #decisionmgt

Tweets are great for this kind of quick heartfelt observation but I thought a blog post to explain why this is such a breakthrough was called for.

First, some background. This project was to develop a proof of concept decision service for claims handling. The client was already processing claims and integrating a Business Rules Management System. Our project was to build on that, introduce decision modeling, and show them how this decision-centric approach could help them maximize the value of their BRMS investment by engaging business owners, clarifying requirements, and enabling continuous improvement of the decision. Two things made this project exciting for me:

  • By this point in the project, the business SMEs have not only worked with us to create the model, they have also been using the model to discuss their own business. What makes a claim complex or risky? How do we want to handle this type of claim from this type of customer? This has brought clarity to their decision-making, making it possible for us to automate it but also enabling them to improve the manual aspects of the decision-making and the role of the decision-making in the overall claims process.
    In our experience, decision models provide a unique vehicle for this kind of debate and engagement among business SMEs. They give SMEs a way to deeply understand and build agreement about the way they want to make decisions.
  • Having worked with this decision model, these SMEs can now stand up in front of a crowd of their peers and colleagues and explain the decision model, discussing exactly what it means for how claims are going to be processed. This is transformative: this is not a group of business users who understand their requirements but a group that understands the system, the algorithm that will drive their decision-making.
    The sense of ownership, the clarity, the degree of detail with which they understand this new system component are amazing. This is not a black box to them. They understand and can explain how they will monitor and improve it. This understanding – and their willingness to assign one of their number to monitor and improve the rules in this decision service every week – is truly inspiring.

I am looking forward to seeing this in production, and looking forward further to working with this very impressive group moving forward. Now if only I spoke Bahasa…

Drop us a line if this is something you would like to know more about.

Some other notes on the project for the more geeky among you:

  • We worked directly with the subject matter experts to build a decision model using the Decision Model and Notation (DMN) standard and our DecisionsFirst Modeler and methodology. These models are very effective at eliciting requirements for decision-making systems, helping the subject matter experts clarify their approach
  • We (partially) implemented this model in a commercial BRMS to show how this would work. All the artifacts created were linked back to the decision requirements we modeled. This enables the business users to easily find the rules they want to change, take advantage of the impact analysis of a decision model and still use the governance, version control and security features of a modern BRMS.
  • The decision model identified clear opportunities for predictive analytic models, modeling the decision-making that would leverage those predictions. This means that the analytics team has a clear goal as it analyzes data and knows exactly how to operationalize models they build.
  • A dashboard was then designed to show how the data produced by this could be used to monitor and continuously improve the decision and the rules that automate it. The data created by executing the decision model – all the sub-decision outcomes – is exactly the data needed to analyze the overall decision-making approach when it fails to deliver the business outcome desired. No additional design work is required for the dashboard – all the work has been done by designing the decision. A small illustrative sketch of this kind of decision-execution logging follows below.
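
As a hedged illustration of that last point – invented record layout, not the client’s implementation – here is what logging sub-decision outcomes for later dashboard analysis might look like:

```python
# Illustrative only: logging every sub-decision outcome alongside the final
# outcome gives a dashboard exactly the data it needs to analyze where the
# decision-making goes wrong.
import io
import json
import time

def log_decision(transaction_id, sub_outcomes, final_outcome, sink):
    record = {
        "transaction_id": transaction_id,
        "timestamp": time.time(),
        "sub_decisions": sub_outcomes,    # e.g. {"Assess Complexity": "High", ...}
        "final_outcome": final_outcome,   # e.g. "Route to senior adjuster"
    }
    sink.write(json.dumps(record) + "\n")  # one line per decision execution

sink = io.StringIO()                       # stand-in for a log file or event stream
log_decision("CLM-001",
             {"Assess Complexity": "High", "Assess Risk": "Low"},
             "Route to senior adjuster",
             sink)
print(sink.getvalue().strip())
```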

Neuro-ID™ started about 7 years ago with the premise that by monitoring how people use their keyboards and mice one could identify the confidence level of the person filling out a form – and that this could be done without any personally identifiable information. They developed a Neuro Confidence Score™ (Neuro-CS™) which they have patented. Their focus is on the questions companies ask of their customers. Many of these questions are risk-related – they are asked to help the business establish how risky someone or something is – and companies lack confidence in the answers they get. Neuro-ID likes to say “Smarter Questions, Better Bottom Line™”.

There is an inherent tension in how organizations design surveys or online forms. Making an online form “frictionless” is a good objective as it makes it more likely people will fill it out. But it is hard to do this if one is also concerned about compliance. A focus on compliance can lead organizations to ask for too much detail and so create friction while a frictionless experience can easily fail to check on someone.

Neuro-ID’s technology delivers prescriptive analytics that score someone’s behavior in terms of the confidence with which those questions are answered (as well as some supporting attributes). As an example, consider declarations in financial applications. The technology monitors the session to see how people answer, how they move their mouse, what options they pick, which things they change. A baseline is created for each person as they interact and subsequent actions are compared to assess their confidence. The confidence of their movements reflects whether they are concealing something or don’t understand the question or are just not sure what the right answer is.

The technology sits behind existing forms and does not collect any personal information or PII. Because it compares the user to themselves, a lack of language skills or poor eyesight does not impact the score. Forms can add baseline questions before asking risk-related questions or can treat all questions as both baseline and risk relevant questions. It also detects meaningful edits, allowing it to ask questions like “did you overstate your income”. Experience is that this often triggers better behavior.
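
Purely to make the idea of baseline comparison concrete, here is an invented toy calculation. It is emphatically not Neuro-ID’s patented scoring – just a sketch of comparing hesitation and edits on a question against a baseline built from the same user’s earlier behavior:

```python
# Invented toy example – NOT Neuro-ID's algorithm. It only illustrates comparing
# a user's behaviour on a question against a baseline built from their own
# earlier answers.
from statistics import mean, pstdev

def confidence_score(baseline_times, question_time, edits):
    """Lower score = less confident (more hesitation and more edits)."""
    mu, sigma = mean(baseline_times), pstdev(baseline_times) or 1.0
    hesitation = max(0.0, (question_time - mu) / sigma)   # how unusual the delay is
    return max(0.0, 1.0 - 0.1 * hesitation - 0.1 * edits)

# Baseline from earlier questions, then a risk-related question answered slowly
# and edited twice.
print(confidence_score([2.1, 1.8, 2.4], question_time=3.5, edits=2))  # ~0.23
```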

The technology generates a confidence level on each question. It has an interactive mode allowing loan officers or others to replay the interaction, and everything is available programmatically through an API. A decision id is used at the Neuro-ID end that the company has to match to a particular applicant, allowing the technology to store detailed records without knowing who someone is. While mouse and keyboard are the most common environment, the technology also handles touch screens by assessing hesitations and changes.

Neuro-ID can be used for risk mitigation, fraud prevention or user experience design depending on the situation. An initial target is traditional banks and financial institutions working in the Prime segment. These organizations need a clean, quick online onboarding process that is self-directed yet does not expose them to unnecessary risk. It’s also effective with credit-invisible customers as it can send additional questions to check for third-party verification when confidence is low.

A really interesting technology in my opinion. You can find more at www.neuro-id.com.

Silvie Spreeuwenburg of LibRT came up after lunch to talk about a rules-based approach to traffic management. Traffic management is a rapidly changing area, of course, thanks to IoT and self-driving cars among others.

When one is considering traffic, there are many stakeholders: not just the road user, but also businesses reliant on road transport, safety authorities, etc. The authorities have a set of traffic priorities (where they need good flow), they have delays and they have restrictions for safety or legal issues. They manage this today and expect to keep doing so, even as technology evolves.

To manage this they create lots of books about traffic management, incidents and other topics for each road. Each contains flow charts and instructions. This creates a lot of overlap, so it’s important to separate problem definition from problem solution and to be specific – differentiate between things you must or may not do and those that are or are not actually possible.

The solution involves:

  • Policy about priority and traffic management norms
  • Identifying decision points, flow control points and segments in the road
  • Standard services – increase flow, decrease flow, reroute on a link in the network
  • Decisions to decide what the problem is, determine the right thing to do, see if there’s anything conflicting, and execute

The logic is all represented in decision tables. And applying the approach has successfully moved traffic to lower priority roads. Plus it fits very well with the way people work and connects changes in policies very directly to changes in behavior.

Marcia Gottgtroy from New Zealand’s tax authority presented on their lessons learned and planned developments in decision management. They are moving from risk management to a business strategy, supported by analytical decision management. The initial focus was on building a decision management capability in the department, starting with GST (sales tax), and it went very well, very quickly producing a decision service with proof of straight-through processing (STP) and operational efficiency. The service also had a learning loop based on the instrumentation of the service. They automated some of this (where the data was good) but did manual analysis elsewhere – not trying to over-automate nor wait for something perfect.

After this initial success, the next step is to focus on business strategy and get to decision management at an enterprise level: hybrid and integrated solutions supported by a modern analytical culture driven by the overall strategy. They need to define a strategy, a data science framework and a methodology – all in the context of an experimental enterprise. They began to use decision modeling with DMN – using decision requirements models to frame the problem improved clarity, understanding and communication. And it documented this decision-making for the first time.

But then they had to stop as the success had caused the department to engage in a business transformation to replace and innovate everything! This has created a lot of uncertainty but also an opportunity to focus on their advanced analytic platform and the management of uncertainty. The next big shift is from decision management to decision optimization. Technology must be integrated, different approaches and an ability to experiment are key.

Nigel Crowther of IBM came up next to talk about business rules and Big Data. His interest is in combining Big Data platforms and AI with the transparency, agility and governance of business rules. Big Data teams tend to write scripts and code that are opaque, something business rules could really help with. Use cases for the combination include massive batches of decisions, simulations on large datasets and detecting patterns in data lakes.

The combination uses a BRMS to manage the business rules, deploys a decision service and then runs a Map job to fetch this and run it in parallel on a very large data set – distributing the rules to many nodes and distributing the data across these nodes so the rules can be run against them in parallel and very fast. The Hadoop dataset is stored on distributed nodes, each of which is then run through the rules in its own Map job before being reduced down to a single result set – bringing the rules to the data. This particular example uses flat data about passengers on flights and uses rules to identify the tiny number of “bad actors” among them. At 20M passengers per day it’s a real needle-in-a-haystack problem. The batch process is used to simulate and back-test the rules and then the same rules are pushed into a live feed to make transactional decisions about specific passengers. A serious setup with 30 nodes, for instance, could scan 7B records (a year’s worth) in an hour and a half – about 1.2M records per second.
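
The mechanics are easier to see in a toy sketch. This is not IBM’s implementation or a real Hadoop job – just an illustrative Python map-reduce over partitions of passenger records, with an invented rule standing in for a deployed decision service:

```python
# Toy illustration of "bringing the rules to the data": the same rule function
# is applied to each data partition in parallel and the per-partition hits are
# reduced to one result set. Invented names throughout.
from multiprocessing import Pool

def is_bad_actor(passenger):
    """Stand-in for a deployed decision service (the rules would live in a BRMS)."""
    return passenger["watchlist_hits"] > 0 and passenger["ticket_paid_cash"]

def run_rules_on_partition(partition):
    """The 'map' step: run the rules against every record in one partition."""
    return [p["id"] for p in partition if is_bad_actor(p)]

if __name__ == "__main__":
    partitions = [
        [{"id": 1, "watchlist_hits": 0, "ticket_paid_cash": True},
         {"id": 2, "watchlist_hits": 1, "ticket_paid_cash": True}],
        [{"id": 3, "watchlist_hits": 2, "ticket_paid_cash": False}],
    ]
    with Pool(processes=2) as pool:
        mapped = pool.map(run_rules_on_partition, partitions)   # map
    flagged = [pid for part in mapped for pid in part]          # reduce
    print(flagged)  # -> [2]
```

In the real architecture a Hadoop or Spark job replaces the process pool, but the pattern is the same: ship the (small) rules to the (huge) data rather than the other way round.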

It’s also possible to use Big Data and analytic tools to analyze rules. Customers want, for instance, to simulate the impact of rule changes on large portfolios of customers. The rule logs of rules executed in a year, say, can also be analyzed quickly and effectively using a Big Data infrastructure.

Vijay Bandekar of InteliOps came up to talk about the digital economy and decision models that help companies face the challenges this economy creates. The digital economy is driven by the explosion of data and the parallel explosion in IoT devices. While this data is increasingly being stored, little if any of it is being effectively used. We need applications that can manage this data and take advantage of it, because it’s just not possible for even the best human staff to cope – autonomous, learning, real-time decision-making systems are required. These systems require inferencing, reasoning and deductive decision models. While the algorithms work, it can be cumbersome to manage large rule bases. And while machine learning approaches can come up with the rules, integrating these manually can be time consuming.

Architecturally, he says, most organizations focus on stateless decisioning with a database rather than a stateful working memory. Yet the stateful approach offers advantages in the era of fast moving, streaming data while also taking advantage of the rapidly increasing availability of massive amounts of cheap RAM. This requires agenda control and transparency, as well as effective caching and redundancy/restoration.

It’s also important to add learning models with both supervised and unsupervised learning engines to handle the increasing volumes of data. These learning models need to be injected into the streams of data, he argues, to make decisions as it arrives rather than being pointed at stored data. In addition, combinations of algorithms – ensembles – are increasingly essential given the variety of data and the value of different approaches in different scenarios.

The combination delivers an adaptive decision framework for real-time decisions. It uses stateful decision agents based on business rules and continuous learning using ensembles of analytic approaches on streaming data.

Last up is Tim Stephenson of Omny Link. His recent focus is on smaller companies, and one of the key things about the new digital economy is the way in which it allows companies to punch above their weight. Small companies really need to drive leads to conclusion and manage customers effectively. CRM systems, even if they start free, can be complex and expensive to use. To unlock the value, and respond faster and more appropriately so you can serve more customers, you need to do a set of things well:

  • Have a consistent, published domain model to make data widely available across channels. For small businesses, this means a simple but extensible customer domain model e.g. contact, account etc.
  • Use APIs to support a wide variety of interfaces – contracts. This supports lots of UIs including future ones.
  • Workflow or process to seamlessly drive data through the organization and its partners
  • Consistent decision-making that enforces policy and ensures compliance with regulations

He walked through how these elements allow you to deal with core scenarios, like initial lead handling, so the company can manage leads and customers well. You need to use APIs to record well understood data, decide what to do and make sure you do what you decided to do.

The value of DMN (especially decision tables) is that it allows you to get the business people to define how they want to handle leads and how they want to make decisions. They can’t change the structure of the decisions, in his case, but they can tweak thresholds and categories, allowing them to focus and respond to changing conditions. And these decisions are deployed consistently across different workflows and different UIs – the same decision is made everywhere, presenting the standard answer to users no matter where they are working (a key value of separating decisions out formally as their own component). Using Decision Requirements Models to orchestrate the decision tables keeps them simpler and makes the whole thing more pluggable.
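
A minimal sketch of that idea, with invented categories and thresholds (not Omny Link’s actual logic): the decision structure is fixed, business users tweak the thresholds, and every workflow or UI calls the same decision:

```python
# Illustrative only – invented categories and thresholds. The structure of the
# decision is fixed; the thresholds are the business-editable part; and every
# channel calls the same classify_lead decision.

THRESHOLDS = {"hot_budget": 10_000, "warm_budget": 2_000}   # business-editable

def classify_lead(lead, t=THRESHOLDS):
    """The same decision invoked from every workflow and UI."""
    if lead["budget"] >= t["hot_budget"] and lead["timeline_days"] <= 30:
        return "Hot - call today"
    if lead["budget"] >= t["warm_budget"]:
        return "Warm - schedule follow-up"
    return "Nurture - add to mailing list"

print(classify_lead({"budget": 12_000, "timeline_days": 14}))  # -> "Hot - call today"
```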

The payback for this has been clear. One user found that the time saved was about 75% but in addition, the improvement in response time ALSO means the company closes more work. Even small businesses can get an advantage from this kind of composable, consistent, repeatable, auditable, transparent decision automation.

And that’s a wrap. Next year’s Decision CAMP will probably be in Luxembourg in September, and don’t forget that all the slides are available on the Decision CAMP Schedule page.

A little bit of a late start for me, so I am starting with Geoffrey De Smet from Red Hat talking about constraint planning. He points out that some decisions cannot be easily solved with rules-based approaches – they can be described as decisions (and as a DMN decision model, in our experience) but not readily made with rules and decision tables alone. His key point is that different decision problems require different technology:

  • Eligibility is a rules problem
  • License plate recognition is a neural network (analytic) problem
  • Roster scheduling is a constraint planning problem

And our experience is that you can do this decision by decision in a decision model too, making it easy to identify the right technology and to combine them.

He went into some detail on the difference between hard and soft constraints and on the interesting way in which the Red Hat planner leverages the Red Hat rules format and engine to handle constraint definition, score solutions etc. They support various approaches to planning too, allowing you to mix and match rules-based constraints and various algorithms for searching for a solution. The integration also allows for some incremental work, taking advantage of the standard rule engine features.
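
To illustrate the hard/soft constraint idea – this is not the Red Hat planner’s API, just an invented toy roster scorer – hard constraints must never be broken, while soft constraints are preferences the search tries to minimize:

```python
# Illustrative hard/soft constraint scoring for a tiny roster. Invented rules:
# hard = no nurse works two shifts on the same day; soft = spread the work.

def score_roster(assignments, max_shifts_per_nurse=2):
    hard, soft = 0, 0
    # Hard constraint: no nurse may work two shifts on the same day.
    seen = set()
    for (day, _shift), nurse in assignments.items():
        slot = (day, nurse)
        if slot in seen:
            hard -= 1
        seen.add(slot)
    # Soft constraint: penalise nurses assigned more than max_shifts_per_nurse.
    per_nurse = {}
    for nurse in assignments.values():
        per_nurse[nurse] = per_nurse.get(nurse, 0) + 1
    soft -= sum(max(0, n - max_shifts_per_nurse) for n in per_nurse.values())
    return hard, soft   # compare lexicographically: hard score first

roster = {("Mon", "early"): "Ann", ("Mon", "late"): "Ann", ("Tue", "early"): "Bob"}
print(score_roster(roster))   # -> (-1, 0): Ann is double-booked on Monday
```

A planner then searches over candidate rosters for the best (hard, soft) score – and in the Red Hat approach the constraints themselves can be written as rules.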

I wrote about some of the early work around Drools Planner back in 2011.

I went next, presenting on the role of decision models in analytic excellence

If this resonates with you, check out this International Institute for Analytics Case Study
I’ll be back with the rest of the day to wrap up.

Bastian Steinart of Signavio came up after the break. Like Jan and me, he focused on their experience with DMN on Decision Management projects and the need for additional concepts. Better support for handling lists and sets, and for iteration and multiplicity, for instance, is something they find essential. They have developed some extensions to support these things and are actively working with the committee – to show their suggestions and to make sure they end up supporting the agreed 1.2 standard.

They have also done a lot of work turning decision models in DMN into Drools DRL – the executable rule syntax of Drools. This implies, of course, that DMN models can be turned into any rules-based language, and we would strongly agree that DMN and business rules (and Business Rules Management Systems) are very compatible. From the point of view of a code generator like Signavio’s, however, the ability to consume DMN XML generated from a model is probably preferable. With support for DMN execution in Drools this becomes practical.
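
As a hedged sketch of what consuming DMN interchange XML involves – not Signavio’s or Drools’ code generator, just listing decisions and their requirements from a file whose name is invented here – something like this works against the standard tag names:

```python
# A minimal sketch of reading DMN interchange XML. It only lists decisions and
# the requirement elements beneath them; the tag names come from the DMN spec,
# the file name is invented, and the namespace is stripped so the sketch works
# across DMN versions.
import xml.etree.ElementTree as ET

def local(tag):
    """Strip the XML namespace from a tag name."""
    return tag.split("}", 1)[-1]

def list_decisions(path):
    root = ET.parse(path).getroot()
    decisions = []
    for elem in root.iter():
        if local(elem.tag) == "decision":
            required = [local(child.tag) for child in elem.iter()
                        if local(child.tag).startswith("required")]
            decisions.append((elem.get("name"), required))
    return decisions

# e.g. list_decisions("claims_handling.dmn") might return
#   [("Determine Claim Routing", ["requiredDecision", "requiredInput"]), ...]
```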

Denis Gagne introduced how elements of DMN can, and perhaps should, be applied in some other standards. He (like us) has seen organizations gradually pull things out of their systems because they have separate lifecycles – data, process, decision-making etc. Extracting these helps with the disjoint change cycles and also helps engage business users in the evolution of operations and systems: simpler, more agile, smarter operations.

In particular, Denis has been working with BPMN (Business Process Model and Notation), CMMN (Case Management Model and Notation) and DMN (Decision Model and Notation). All these standards help business and IT to collaborate, facilitate analysis and reuse, drive agreement and support a clear, unambiguous definition. BPMN and CMMN support different kinds of work context (from structured to unstructured) and DMN is relevant everywhere because good decisions are important at every level in an organization.

Trisotech wants to integrate these approaches – they want to make sure DMN can be used to define decisions in BPMN and CMMN, add FEEL as an expression language to BPMN and CMMN, harmonize information items across the standards and manage contexts.

The three standards complement each other and have defined, easy to use, arms-length integration (a process task invokes a decision or a case, for example). Trisotech is working to allow expressions in BPMN and CMMN to be defined in FEEL, allowing them to be executable and allowing reuse of their FEEL editor. Simple expressions can then be written this way while more complex ones can be modeled in DMN and linked. Aligning the information models matters too, so it is clear which data element in the BPMN model is which data element in DMN, etc. All of this helps with execution but also helps align the standards by using a common expression language – BPMN and CMMN skipped this, so reusing the DMN one is clearly a good idea.

Denis has done a lot of good thinking around the overlap of these standards and how to use them together without being too focused on unifying them. Harmonizing and finding integration patterns, yes, unifying no.

Alan Fish took us up to lunch by introducing Business Knowledge Models. Business Knowledge Models, BKMs, are for reuse of decision logic. Many people (including me) focus on BKMs for reuse and for reuse in implementation in particular. This implies BKMs are only useful for the decision logic level. Alan disagrees with this approach.

Alan’s original book (which started a lot of the discussion of decision modeling with requirements models) introduced knowledge areas, and these became BKMs in DMN. In his mind BKMs allow reuse and implementation, but that is not what they are for – they are for modeling business knowledge.

Businesses, he argues, are very well aware of their existing knowledge assets. They need to see how these fit into their decision-making, especially in a new decision-making system. Decision Requirements Models in DMN are great at showing people where specific knowledge is used in decision-making. But Alan wants to encapsulate existing knowledge in BKMs and then link the BKMs into these models. He argues that you can show the functional scope of a decision using BKMs, and that itemizing and categorizing these BKMs helps scope the work.

Each BKM in this approach is a ruleset, table, score model or calculation. The complexity of these can be assessed and estimates/tracking managed. This is indeed how we do estimates too – we just use the decisions not BKMs in this way. He also sees BKMs as a natural unit of deployment. Again, we use decisions for this, though like Alan we use the decision requirements diagram to navigate to deployed and maintainable assets. He thinks that user access and intent do not align perfectly with decisions. He also makes the great point that BKMs are a way for companies to market their knowledge – to build and package their knowledge so that other folks can consume them.

The key difference is that he sees most decisions having multiple BKMs while we generally regard these as separate decisions not as separate BKMs supporting a single decision.

Jan Vanthienen came up after lunch – not to talk about decision tables for once, but to talk about process and decision integration. In particular, how can we ensure consistency and prevent clashes? Testing, verification and validation are all good, but the best way to obtain correct models is to AVOID incorrect ones! One way to do this, for instance, is to avoid inconsistency, e.g. by using Unique decision tables in DMN.

Jan introduced a continuum of decision-process integrations:

  1. No decisions therefore no inconsistency
  2. Decisions embedded in BPMN – bad, but no inconsistency
  3. Process-decision as a local concern – a simple task call to the decision – this limits inconsistencies to data passing and to unhandled decision outcomes
  4. A more real integration – several decisions in the DMN decision model are invoked by different tasks. This creates more opportunities for inconsistencies – a task might invoke a decision before tasks that invoke sub-decisions or that decision.
  5. Plus of course, no process only a decision – which also has no consistency issues

In scenario 4 particularly there are some potential mismatches:

  • You can embed part of your decision in your process with gateways creating inconsistency
  • You can fail to handle all the outcomes if the idea is to act on the outcomes with a gateway
  • If you need one of the DMN intermediate results in the process, you need to make sure it is calculated from the same DMN model
  • Putting sub-decisions in the process just to calculate them creates an inconsistency with the process model
  • Process models could invoke decisions in an order or in a way that conflicts with the declarative nature of the decision model. Decisions can be recalculated, but many people will assume they are not, creating issues

Last session before my panel today was Gil Ronen talking about patterns in decision logic in modern technical architectures, specifically those going to be automated. His premise is that technical architectures need to be reimagined to include decision management and business logic as a first class component.

The established use case is one in which policy or regulations drive business logic that is packaged up and deployed as a business rules component. Traditional analytic approaches focused on driving insight into human decision-making. But today big data and machine learning are driving more real-time analytics – even streaming analytics – plus the API economy is changing the boundaries of decision-making.

Many technical architectures for these new technologies refer to business logic, though some do not. In general, though, they don’t treat logic and decision-making as a manageable asset. For instance:

  • In streaming analytic architectures, it might be shown only as functions
  • In Big Data architectures, it may appear as questions that the data can answer or as operators
  • In APIs, there’s no distinction between APIs that just provide data and those that deliver decision outcomes

They all vary but they consistently fail to explicitly identify and describe the decision-making in the architecture. This lowers visibility, allows IT to pretend it does not need to manage decisions and fails to connect the decision-making of the business to the decision logic in the architecture. He proposed a common pattern or approach to representation, and a set of core features, to make the case to IT to include decision management in its architectures:

  • Make deployed decisions (as code) a thing in either a simple architecture or perhaps within all the nodes in a distributed (Big Data) architecture
  • Identify Decision Services that run on decision execution server (perhaps)
  • Identify Decision Agents that run in streams
  • He also identified a container with a self-contained API but I think this is just a decision service

These are all real problems and things that would help. This is clearly a challenge and has been for a decade. Hopefully DMN will change this.

After yesterday’s pre-conference day on DMN, the main program started today. All the slide decks are available on the DecisionCAMP site.

Edson Tirelli started things off with a session to demystify the DMN specification. DMN does not invent anything, he says; it takes existing concepts and defines a common language to express them. To take advantage of it we need to implement it, develop interchange for it and drive adoption.

Edson developed a runtime for DMN that takes DMN XML and executes it on the Drools engine. This takes interchange files from tools and executes the logic in those files. This drives his focus – he’s thinking about execution. He has a set of lessons learned from this:

  1. You need level 3 conformance to generate code. He walked through the conformance levels – all have Decision Requirements Diagrams but the decision tables go from natural language (level 1), to simple expressions only (level 2), to the full expression language (level 3). He pointed out that level 3 is not much more than level 2.
  2. Conformance levels do not reflect reality, in that vendors do not comply neatly with the levels nor is there an outside way to verify conformance. To help with this, Edson (and others) are working on a set of tests that are publicly available to help folks check their ability to produce and consume the XML.
  3. Spaces in variable names are a challenge but necessary because users really want them in their object names. This is not as hard as people think and is really important.
  4. The DMN type system is not complete – there are some types, like lists and ranges, that cannot be defined even though they are allowed in the expression language.
  5. Some bugs in the spec, but the 1.2 revision is working on these
  6. Get involved with the community – the specification is a technical document and subject matter experts and others in the community are very helpful

Jan Purchase and I spoke next, discussing three gaps that we see in the specification. Here’s our presentation, and you can get more on our thinking in our book, Real-World Decision Modeling with DMN:

Bruce Silver came next to discuss the analysis of decision tables. DMN allows many things to be put into decision tables that are “bad” – not best practices – because the specification cannot contain methodology, because there are sometimes corner cases, and because there were some disagreements, forcing the specification to allow more than one approach.

Bruce generally likes the standard’s restrictions on what can be in a decision table and has developed some code to check DMN tables to see how complete they might be. While these restrictions are limiting, they also allow for static analysis. His code checks for completeness (gaps in logic, for instance), compares the hit policy with the rules to make sure they match, and spots problems like masked rules (rules that look valid but will never execute due to the hit policy). It recommends collapsing rules that could be combined and makes other suggestions to improve clarity.
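
A toy version of that kind of static analysis, assuming a one-input table with numeric range conditions – this is not Bruce’s code, and real tables have many inputs and richer conditions:

```python
# Toy completeness/overlap check for a one-input decision table whose
# conditions are numeric ranges [low, high). Invented rules and domain.
def check_table(rules, domain=(0, 100)):
    """rules: list of (low, high, outcome). Reports gaps and overlapping rules."""
    issues = []
    spans = sorted((low, high) for low, high, _ in rules)
    # Completeness: does the union of the ranges cover the whole domain?
    cursor = domain[0]
    for low, high in spans:
        if low > cursor:
            issues.append(f"gap: no rule covers [{cursor}, {low})")
        cursor = max(cursor, high)
    if cursor < domain[1]:
        issues.append(f"gap: no rule covers [{cursor}, {domain[1]})")
    # Overlaps: with a Unique hit policy these would be errors (or masked rules).
    for (l1, h1), (l2, h2) in zip(spans, spans[1:]):
        if l2 < h1:
            issues.append(f"overlap: [{l1}, {h1}) and [{l2}, {h2})")
    return issues or ["no gaps or overlaps found"]

rules = [(0, 40, "Decline"), (40, 70, "Refer"), (75, 100, "Accept")]
print(check_table(rules))   # -> ["gap: no rule covers [70, 75)"]
```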

It also applies “normalization” based on the work of both Jan Vanthienen and some of the later work done for The Decision Model by von Halle and Goldberg. These are applied somewhat selectively as there are some that are very restrictive.

A clear approach to validating decision tables based on DMN – very similar to what BRMS vendors have been doing for years, but nice to see it for DMN.

A break here so I’ll post this.

The first day at Decision CAMP 2017 is focused explicitly on the Decision Model and Notation (DMN) standard.

Alan Fish introduced the ongoing work on 1.2. He quickly summarized the new features in 1.1 – such as text annotations and a formal definition of a decision service. Then he went through the new features planned for 1.2, starting with those that are agreed:

  • Annotation columns can be added to a decision table
  • Restricted labels to force the use of an object name as a minimum
  • Fixed some bugs around empty sets, XML interchange etc.

In addition, several key topics are being worked on. These three issues have not been voted on yet, but we are on track to get them done:

  • Diagram Interchange based on the OMG standard approach for this so that every diagram can be interchanged precisely as well as the underlying model. Given how important multiple views are, this is a really important feature.
  • Context free grammar is under discussion as it has been hard to make names with spaces, operators etc. parsable. Likely to use quotes around names with spaces and escaping quotes in names etc.
  • Invocable Decision Services to allow a decision to be linked to a decision service instead of a BKM. This allows a more complex reusable package of logic. Using Decision Services as the definition allows packages to be defined and reused without forcing encapsulation. This creates several difficult issues to be resolved but we (the committee) are making progress.

Bruce Silver then facilitated a discussion on what people liked and disliked about DMN.

Likes

  • Eliciting requirements using decision modeling is more productive, more fun, more rapid than traditional approaches to eliciting requirements.
  • The use of explicit requirements helps with impact analysis, traceability at a business level.
  • Really brings together an industry across multiple vendors in a positive way – it’s great to have a standard, customers like the idea that vendors are compliant.
  • Ability to mix and match analytics, AI, rules, manual decision-making, machine learning, optimization and all other decision-making approaches in a single model.
  • FEEL has supporters and has some great features – especially for certain activities like defining decision tables, possible for non-technical users to be very precise.
  • Ability for business users to clearly express their problem and share this with others to get agreement, prompt discussion – and to do this at several levels.

Dislikes

  • Perhaps too many details too soon, too much of a focus on the meta model and the internals versus what a user can see.
  • Sometimes the standard is too precise or limiting – not allowing a decision table and its output to have different names, for instance.
  • Dislike some of the corner case features because they can get people into trouble.
  • Not really any good ways to show which kinds of technology or approach can be used in decision modeling – perhaps some icons.
  • FEEL has people who don’t like it too, but this is partly because it’s new and a change, and because it perhaps lacks some of the patterns and capabilities needed. More examples would be good too.

Last week, Silicon Valley research firm Aragon Research cited Decision Management Solutions as a visual and business-friendly extension to digital business platforms and named us a 2017 Hot Vendor in Digital Business Platforms. We’re delighted about this and feel pretty strongly that this validates our vision of a federated digital decisioning platform as an essential ingredient in a company’s digital business strategy.

The report’s author, Jim Sinur, said:

Digital Business Platforms combine five major technical tributaries to create a cornerstone technology base that supports the changing nature of business, as well as the work that supports digital. Enterprises that are looking to manage a complex or rapidly changing set of rules that empower outcomes would benefit from decision management as offered by Decision Management Solutions, especially when combined with predictive or real-time analytics.

The report says that what makes us unique is that business people can represent their decisions in a friendly, visual and industry-standard model while managing the logic and analytics for these decisions across many implementation platforms. We’re working with clients to create “virtual decision hubs” that map the complexities of enterprise decision-making to the underlying technologies that deliver the decision logic, business rules, advanced analytics and AI needed to operationalize this decision-making across channels.

Click here to view the report.

Open Data Group is an analytic deployment company. The company was started over 10 years ago and has transitioned from consulting to a product company, applying their expertise in Data Science and IT to create an analytic engine, FastScore.

Successful analytics require organizational alignment (specifically between Data Science and IT) so that systems can be coordinated and business problems worked on collaboratively. In addition to understanding analytics, companies are trying to leverage new technologies and modernize their analytic approach. To address some of these challenges, Open Data Group have developed FastScore.

FastScore is designed to address various analytic deployment challenges to monetize analytic outcomes including:

  • Manual recoding and other complexity
  • Too slow to deploy analytic models (largely as a result)
  • Too many languages being used
  • IT and analytic teams are not on the same journey – analytic/data science teams care about iteration and exploration while IT cares about stable systems and control.

FastScore provides a repeatable, scalable process for deploying analytic workflows. Open Data Group see the model itself as the asset and emphasize that a model needs to be language and data neutral, as well as deployed using microservices (FastScore engines are Docker containers), to be a valuable, future-proofed asset.

FastScore is an analytic deployment environment that connects a wide range of analytic design environments to a wide range of business applications. It has several elements, all within a Docker container. It also includes a model abstraction (input and output AVRO schemas, an initialization and the math action) that allows models to be ingested from a wide variety of formats (including, Python, R, C, SAS, PFA) and a stream abstraction (input and output, AVRO schema in JSON, AVRO binary or text) to consume and produce a wide range of data (from streaming to traditional databases) using a standard lightweight contract for data exchange.
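
Here is a hedged sketch of what a model conforming to that abstraction might look like. The exact FastScore conventions (annotation comments, function names, whether results are yielded or returned) may differ, so treat this as illustrative rather than as the product’s API:

```python
# Sketch of the model abstraction described above: an initialization step, a
# "math action" applied to each record, and input/output schemas. Illustrative
# only – coefficients, schema names and conventions are invented.

# fastscore.input:  applicant      (AVRO schema name - illustrative)
# fastscore.output: score          (AVRO schema name - illustrative)

COEFFICIENTS = None

def begin():
    """Initialization: load coefficients, lookup tables, a pickled model, etc."""
    global COEFFICIENTS
    COEFFICIENTS = {"intercept": -2.0, "income": 0.00005, "age": 0.01}

def action(datum):
    """The math action: one input record in, one scored record out."""
    score = (COEFFICIENTS["intercept"]
             + COEFFICIENTS["income"] * datum["income"]
             + COEFFICIENTS["age"] * datum["age"])
    return {"id": datum["id"], "score": score}

if __name__ == "__main__":          # local smoke test outside any engine
    begin()
    print(action({"id": 42, "income": 55_000, "age": 31}))
```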

The FastScore Engine is a Docker container into which customers can load models for push button deployment. Input streams are then connected to provide data to the model and output streams to push results to the required business applications or downstream environment. Multiple models can be connected into an analytic pipeline within FastScore. Models can be predictive analytic models, feature generators or any other element of an analytic decision. Everything can be accessed through a REST endpoint, with model execution being handled automatically (selecting between runners for R, Python, Java, C for instance). Within the container is the stream processor that will enforce the input and output schemas and a set of sensors that allow model performance to be monitored, tested and debugged.

Besides the core engine, additional features include:

  • Model Deploy
    A plugin for Jupyter that integrates the engine with the Jupyter data science tool. Allows a data scientist using Jupyter to develop models and then check that they will be able to deploy them, generate the various files etc.
  • Model Manage
    Docker container that hooks into running instances of FastScore and provides a way to address and manage the schemas, streams and models that are deployed. Can be integrated with version control and configuration management tools.
  • Model Compare
    New in the 1.6 release, allows models to be identified as A/B or Champion/Challenger pairs and manage the run time execution of the models. Logs this data along with the rest of the data created.
  • Dashboard
    Shows running engines and Model Manage abstractions, changes and manages the run time bindings and abstractions, provides some charting of data including that generated by Model Compare etc. Uses the REST API so all of this could be done in another product, too.

Plus Command Line Interface and REST APIs for everything.

Because all of this is done within a Docker container, the product integrates with the Docker ecosystem for components such as systems monitoring and tuning. The Docker container allows easy deployment to a variety of cloud and on-premise platforms and supports microservices orchestration.

FastScore allows an organization to create a reliable, systematic, scalable process for deploying and using all the analytic models developed by their analytic and data science teams – what might be called AnalyticOps, a “function” created to provide a centralized place to manage, monitor and manipulate enterprise analytics assets.

More information on FastScore.


Avola Decision is a decision model-based decisioning platform migrating from supporting the proprietary TDM (The Decision Model) approach to supporting the DMN (Decision Model and Notation) open standard. I reviewed the previous product and since then the team has been working on a new product. The new Avola Decision is .NET-based on the backend and is available on-premise or on Azure for the SaaS version (public or private clouds). The UI has been rewritten and is completely HTML and browser-based.

Customers begin at a landing page – a dashboard where shortcuts and regularly used information such as notifications or tasks can be displayed. Customers can have many projects within their environment and projects can be created by non-technical users. Projects are within a domain (and a domain may have many projects) and individual projects can be linked to dependent domains to bring in shared content if the owner of that content allows it. Multiple Decision Services can be defined for a project and different members can be added with different roles. A separate identity server supports two-factor authentication and allows custom security approaches for specific customers.

Domains contain business concepts (sets of data elements) and projects. Users work within a project and its related domain(s). They have instant free-text search across the project that shows hits for the search as it is typed. Explicit tags are coming soon, allowing objects to be tagged and managed using these tags.

Data elements can be defined based on a set of allowed types. Specific data types can be constrained to a specific set of values (value lists) or precision. These can be used as a glossary for multiple data elements with a where-used capability to see which data elements are using the definition and which decisions use that data. Documents such as policies can be uploaded to create Knowledge Sources and additional ones can be created that point to websites etc.

The decisions in a model can be viewed, either just those exposed as decision services or all decisions. As before, the diagram is generated from the logic being defined behind the decisions. Plans exist to allow editing of the diagram directly but for now it is based on the executable logic behind it. The diagram is DMN-like, using input data nodes as well as the boxed list of attributes (combining both styles of data presentation). In addition, the decision nodes are divided up to see conditions, operands, and metadata from both data and sub-decisions. Users can zoom in and out, restrict the number of levels being viewed, see the layer “above” a decision – the decisions that require it etc. Future versions will allow the user to hide the data elements, knowledge sources etc.

Behind each decision is decision logic, currently only as a decision table. Other DMN representations will be coming soon as will multiple action columns but they plan to continue to use the TDM layout for decision tables as well as some of the decision table features in TDM but not yet defined in DMN. The decision table editor has been upgraded to support row and column movement, in-line editing and change highlighting. Rules can be cloned and edited and some checking is built in such as type conformance, range overlap/underlap. A formula builder is used for calculations and users can click through to follow inputs to their sources. Importing and exporting FEEL that defines decision logic is a future possibility also but they don’t plan to expose it as a standard way to edit logic in the tool.

Testing can be done at any point. Test data collections can be defined and used to test the development version or any deployed version. One or many test collections can be run, and the status of each collection and test is shown, making a quick visual check for success easy. An Excel template can be downloaded to bulk-create test cases, or they can be entered and edited individually. Tests can be viewed in terms of the rows that executed in the various decision tables, and a table can be opened for editing from there. Impact analysis – seeing the impact of a proposed change in terms of overall results – is also planned.
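
As a generic illustration of how such test collections work – this is a sketch, not Avola’s format or API, and the decision, fields and expected outcomes are invented – a collection pairs input data with expected outcomes and reports a pass/fail status per collection:

```python
# Minimal sketch of running a test collection against a decision,
# assuming the deployed decision service is callable as a function.
# Names and fields here are hypothetical.

def decide_discount(case):
    # Stand-in for the decision service under test
    return "Gold" if case["annual_spend"] >= 10_000 else "Standard"

test_collection = {
    "name": "Discount tier regression tests",
    "cases": [
        {"inputs": {"annual_spend": 12_000}, "expected": "Gold"},
        {"inputs": {"annual_spend": 3_000},  "expected": "Standard"},
    ],
}

results = [
    {"expected": c["expected"], "actual": decide_discount(c["inputs"])}
    for c in test_collection["cases"]
]
passed = sum(r["expected"] == r["actual"] for r in results)
print(f"{test_collection['name']}: {passed}/{len(results)} passed")
```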

Once logic is tested and confirmed, there is an approval cycle and deployment support, as you would expect. Once this process starts, the whole service is packaged up and encapsulated so it cannot be impacted by subsequent changes, e.g. to a shared value list.

Each execution stores the version of the model used, the outcome of the decision invoked, the results of each sub-decision, and the data used to produce them. This information is available for reporting and analysis.

Future add-ons will allow the defined data elements to be used to generate web forms or surveys that capture the data a decision needs.

More information at Avola Decision.

When the famous nerd webcomic XKCD pokes fun at how you use technology, it’s probably time to try a different approach. Its author recently posted on Machine Learning and took a swipe at the mindless way some people approach it. The comic’s characters discuss a “machine learning system” that involves pouring data into a big pile of linear algebra and stirring until you get the answers you want. Humorous though this is, it also represents a definite school of thought when it comes to advanced analytics – that more data and better algorithms are all you need. With enough data and algorithmic power there’s no need to think about the problem, no need to talk to the people who need the output, no need to do anything except “let the data speak”.

In our experience this approach has a number of problems:

  • Just because the data can tell you something, there’s no guarantee that the business cares about it, can use it or will use it for decision-making.
  • What the business needs – the analytic insight that they can use to make better decisions – is often not what your machine learning system/data science team think is most meaningful or important.
  • How accurate your analytics need to be, and how they need to be operationalized, in order to improve business decision-making cannot be determined from the data – only the people responsible for the decision can tell you this.
  • Most business decisions require policies and regulations to be applied too, not just the analytic insight from your data. Simply pushing data through machine learning or other analytic algorithms won’t tell you this.

In the end there is no substitute for knowing what the business problem is. In machine learning (predictive analytics, data mining, data science), this means:

  • Knowing what the business metrics are that show if you have succeeded.
  • Knowing what decision (or decisions) the business needs to make more accurately to achieve this.
  • Knowing how that decision is made today, what constrains or guides that decision, and how analytics might be used to improve the results.
  • Being able to place the analytic algorithms you develop into this decision-making context so they can be effectively used once you are done.

We have found on multiple projects that this is the biggest single problem – get the problem (decision) definition right and the odds of successful analytic projects (ones that actually improve business results) go way up. Decision discovery and modeling, especially using the Decision Model and Notation (DMN) standard, is tremendously effective at doing this. So much so that we do this as standard now on all our analytic projects.

But don’t just take my word for it – AllAnalytics identified this as the greatest problem in analytic projects, and research by the Economist Intelligence Unit talked about the Broken Links In The Analytics Value Chain (you can find some posts on this over on our company blog – How To Fix The Broken Links In The Analytics Value Chain and Framing Analytics with Decision Modeling).

If you want to learn more, we have a case study on bringing clarity to data science projects as well as two briefs – Analytics Teams: 5 Things You Need to Know Before You Deploy Your Model and Analytics Teams: 6 Questions to Ask Your Business Partner Before You Model – that show how a focus on decisions, and on decision modeling, can really help. Or contact us and we can chat.

DataRobot is focused on automated machine learning and on helping customers build an AI-driven business, especially by focusing on decisions that can be automated using machine learning and other AI technologies. DataRobot was founded in 2012 and currently has nearly 300 staff, including 150+ data scientists. Since it was founded, well over 200M models have been built on the DataRobot cloud.

DataRobot’s core value proposition is that they can speed the time to build and deploy custom machine learning models, deliver great accuracy “out of the box” and provide a simple UI for business analysts so they can leverage machine learning without being a data scientist. The technology can be used to make data scientists more productive as well as to increase the range of people who can solve data science problems.

DataRobot runs either on AWS or on a customer’s own hardware. Modeling-ready datasets can be loaded from ODBC databases, Hadoop, URLs or local files – partnerships with companies like Alteryx support data preparation, blending etc. The software then automatically performs the kinds of data transformations needed to make machine learning work – data cleansing plus the feature engineering the various algorithms require, such as scaling and converting data to match each algorithm. It does not currently generate domain-specific potential features/characteristics from raw data, instead making it easy for data and business analysts to create them and feed them into the modeling environment. Once data is loaded, some basic descriptive statistics are calculated and the tool recommends a measurement approach (used to select between algorithms) based on the kind of data and target.
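
For readers less familiar with this kind of preparation, the snippet below shows, in generic scikit-learn terms, the sort of transformations being described – scaling numeric features and encoding categorical ones. It is purely illustrative and says nothing about how DataRobot implements these steps internally:

```python
# Generic illustration (not DataRobot's internals) of the kinds of
# transformations described: scale numeric features and one-hot encode
# categoricals so they suit a wider range of algorithms.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

df = pd.DataFrame({
    "income": [48_000, 72_000, 31_000],
    "region": ["north", "south", "north"],
})

prep = ColumnTransformer([
    ("scale_numeric", StandardScaler(), ["income"]),
    ("encode_categorical", OneHotEncoder(handle_unknown="ignore"), ["region"]),
])
X = prep.fit_transform(df)
print(X)
```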

DataRobot can apply a wide variety of machine learning algorithms to these datasets – for now almost exclusively supervised learning techniques, where a specific target is selected by the user. Multiple algorithms are run: DataRobot partitions data automatically to keep holdout data for validation (to prevent overfitting), applies smart downsampling to improve the accuracy of algorithms, and allows some other advanced parameters to be configured for specific kinds of data. Once started, DataRobot looks at the target variable, the dataset, its characteristics and combinations of characteristics, and selects a set of machine learning algorithms/configurations (blueprints) to run. These then get trained, and more “workers” can be configured to speed the time to complete, essentially spinning up more capacity for a specific job.
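
The holdout idea is worth illustrating. The sketch below shows a basic stratified holdout split on synthetic data; DataRobot’s actual partitioning and smart downsampling are more sophisticated, so treat this only as a picture of the underlying principle:

```python
# Generic sketch of holdout partitioning to guard against overfitting.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))                          # synthetic features
y = (X[:, 0] + rng.normal(size=1_000) > 0).astype(int)   # synthetic binary target

# Hold out 20% of rows; candidate models compete on the training portion and
# the holdout is scored only at the end to confirm the winner generalizes.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
print(X_train.shape, X_holdout.shape)
```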

As the algorithms complete, the results are displayed on a leaderboard based on the measurement approach selected. DataRobot speeds this process by initially running the blueprints against only a subset of the data and then running the top ones against the full dataset. Users who are data scientists can investigate the blueprints and see exactly the approach taken for each one in terms of algorithm configuration, data transformations etc. Key drivers – the features that make the most difference – are identified, and a set of reason codes is generated for each entry in the dataset. Several other descriptive elements, such as word clouds for text analytics, are also generated to allow models to be investigated.

The tool also has a UI for non-technical users. This skips the display of the leaderboard and internal status information and displays just a summary of the best model with its confusion matrix, lift and key drivers. A word cloud for text fields and a point-and-click deployment of a scoring UI (for batch scoring of a data file or scoring a single hand-entered record) complete the process. More advanced users can interact with the same projects, allowing the full range of deployment and reuse of projects created this way.

Once a model is done, the best way to deploy it is to use the DataRobot API. A REST API endpoint is generated for each model and can be used to score a record. All the fields used in the sample are used to create the REST API, and the results come back with the generated reason codes. Everything to do with modeling is also available through an API, allowing customers to build applications that re-build and monitor models. Users can also generate code for models, but this is discouraged.
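
Calling such an endpoint looks roughly like the sketch below. The URL, authentication header and field names are hypothetical placeholders rather than DataRobot’s documented API, but the shape of the interaction – POST a record, get back predictions with reason codes – is what is described above:

```python
# Sketch of scoring a record against a generated REST endpoint.
# Endpoint URL, auth header and field names are hypothetical placeholders.
import requests

ENDPOINT = "https://example-deployment.example.com/predApi/v1.0/MODEL_ID/predict"
record = [{"income": 48_000, "region": "north", "tenure_months": 18}]

response = requests.post(
    ENDPOINT,
    json=record,
    headers={"Authorization": "Token YOUR_API_TOKEN"},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # predictions plus reason codes, as the review describes
```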

You can get more information on DataRobot at http://datarobot.com

The Rexer Data Science survey is one of the best and longest running polls of data mining, analytic and data science professionals. I regularly refer to it and blog about it. It’s time to take this year’s survey – and the survey is aimed at all analytic people, no matter whether they consider themselves to be Data Analysts, Predictive Modelers, Data Scientists, Data Miners, Statisticians, Machine Learning Specialists or any other type of analytic person. Highlights of the 2017 survey results will be unveiled at Predictive Analytics World – NY in October, 2017 and the full 2017 Survey summary report will be available for free download from the Rexer Analytics website near the end of 2017.

The survey should take approximately 20 minutes to complete. Your responses are completely confidential.

Direct Link to Start the Survey – Access Code:  M4JY4
Karl tells me it is OK to share this Access Code with your friends and colleagues as it can be used by multiple people. You can also get more survey information & FREE downloads of the 2007-2015 Survey Summary Reports from Rexer Analytics.

SAP BusinessObjects Predictive Analytics 3.1 is the current release of the SAP predictive analytics suite. Like most vendors in the analytics space, SAP sees its clients struggling to make use of massive amounts of newly available data while facing ever-increasing business expectations, faster business decision cycles and an analytical skills gap. SAP is therefore focused on predictive analytic capabilities that:

  • Produce accurate results in days not weeks
  • Deliver operationalization for machine learning at scale
  • Embed predictive analytics in business processes and applications

The predictive analytics suite consists then of four elements:

  • Data Manager for integrating, creating and reusing (potentially very wide) datasets
  • Automated Modeler, a wizard-driven modeling tool for predictive analytics
  • Predictive Composer, a more custom pipeline/workflow development tool for maximum control
  • Predictive Factory to operationalize all of this

These can access data from SAP HANA, SAP VORA, Hadoop/Spark, 3rd party databases and SAP HANA Cloud. And they can be embedded into SAP applications and other custom applications.

Four offerings package this up:

  • SAP BusinessObjects Predictive Analytics Suite for on-premise and for on cloud
  • SCP Predictive Services on cloud for embedding machine learning
  • SAP BusinessObjects Predictive Analytics for OEM/SAP application embedding

SAP is focused on speed, building models fast, but also on automating techniques. The assumption is that organizations need to manage hundreds or thousands of models and very wide data sets. Plus, for many SAP customers, SAP integration is obviously important. Finally, the suite is designed to support the whole analytic lifecycle.

The tools are moving to a new UI environment, replacing desktop tools with a browser-based environment. Predictive Factory was the first of these, and more and more of the suite’s capabilities are being integrated, allowing Predictive Factory to be a single point of entry into the suite. As part of this integration and simplification, everything is being built to be effective with both SAP HANA and Hadoop. There is also an increasing focus on massive automation, e.g. segmented modeling.

One of the most interesting features of the SAP BusinessObjects Predictive Analytics Suite is that there are two integrated perspectives – Automated Modeler and Predictive Composer. This allows data scientists and analytics professionals to build very custom models while also allowing less technical teams, or those with more projects to complete, to use the automation. All the models are stored and managed in Predictive Factory, and Predictive Composer can be used to configure nodes for use in Automated Modeler. Predictive Factory also lets you create multiple projects across multiple servers etc. Existing models can be imported from previous tool versions or from PMML; new tasks (such as checking for data deviation or retraining models) can be created and scheduled to run asynchronously. Tasks can be monitored and managed, allowing large numbers of models to be created, supervised and updated.
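
One common way to implement the kind of data-deviation check mentioned here is a Population Stability Index (PSI) comparing the score distribution at model-build time with what is being seen in production. The sketch below is a generic illustration of that technique, not SAP’s implementation, and the 0.2 threshold is just a widely used rule of thumb:

```python
# Generic data-deviation check via Population Stability Index (PSI),
# not SAP's implementation.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between two samples of the same variable; higher = more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) / division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5_000)   # scores at model-build time
current  = rng.normal(0.3, 1.2, 5_000)   # scores seen in production
print(f"PSI = {psi(baseline, current):.3f}")   # > 0.2 is often treated as a retrain trigger
```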

The same automated algorithms can be accessed from the SAP BusinessObjects Cloud. Users can select a dataset, identify something they are interested in and run automated modeling algorithms to see, for instance, what influences the data element of interest. This requires some understanding of the power and limitations of predictive analytics but no skill with the analytic technique itself. Data is presented along with some explanation and supporting text. The results can easily be integrated into stories being developed in the BI environment or applied to datasets. Over time, this capability will include all the capabilities of the on-premise solution.

Predictive Analytics Integrator allows these capabilities to be brought into SAP applications such as SAP Fraud Manager. Because SAP applications all sit on SAP HANA, the Predictive Analytics Integrator is designed to make it easy to bring advanced analytics into those applications. Each application can develop a UI and use terminology that works for its users while accessing all the underlying automation from the suite.

Predictive Analytics 3.2, coming in July, will be the first release in which the suite’s components are integrated into the browser environment and the Predictive Composer name is used. This release will not have 100% equivalence with the desktop install but will support the building and deployment of models using both the data scientist and automated tools.

You can get more information on the SAP BusinessObjects Predictive Analytics Suite here.