
One of the most persistent problems with decision modeling, in my experience, is the tendency to treat it as a one-time requirements effort. Many teams are convinced that building a decision model using the Decision Model and Notation (DMN) standard (white paper here) is going to help them with their business rules or analytic projects. Some of these teams, however, think that decision modeling is only valuable at the beginning – as a way to initially frame and structure their requirements. Our experience, on dozens of projects around the world, is that this significantly limits the value that projects, and especially organizations, could get from decision modeling.

Decision modeling has a best practice life cycle:

  • Build the initial model to drive requirements, structuring and framing business rules and analytic efforts
  • Use this model to decide on the automation boundary – what gets automated, what gets left to people – recognizing that decision modeling is a great way to specify requirements for automation AND to specify how people should make a decision.
  • Use the model to understand which parts of the decision might be best automated with business rules, which will benefit from analytics, and even where optimization might be useful.
  • Keep the model alive to ensure traceability from the original business-centric requirements to the detailed technical implementation
  • Update the model as business needs change to support ongoing orchestration of your decisioning technology deployments.

To make this work you need to ensure that the decision models you build can be integrated with each other into a shared repository, and that this is a living repository everyone can access. DecisionsFirst Modeler, the tool we have developed based on our experience in decision modeling, does this. Not only is everything stored in a shared repository, everything in that repository is available through an API. We have used that API to develop a read-only viewer of the repository, displaying all the content in a mobile-friendly HTML5 app so that models are not limited to those who are building them but can be widely shared across the organization. This makes it easier to share the models developed in DecisionsFirst Modeler: everyone in the organization can see the models, and links to specific objects or diagrams can be emailed around to engage reviewers, show the impact of changes to business rules, assess the impact of decisions on the business as a whole, share knowledge and decision-making best practices, and much more.

To date this has been a very popular feature of our Enterprise Edition. Today we announced that it will now be available for users of the free basic edition also. You can read the announcement here and the button below will take you to a recorded demo.
DecisionsFirst Modeler Reader Demonstration

The founders of Avola began working together in 2012 as an incubator called Bizzotope, intended as a venue for starting a number of innovative ventures. The first company was Bizzomate – a business and IT consulting firm focused on low-code platforms like Mendix and agile/Scrum approaches. Platforms like these generally lack process and decisioning capabilities so the company adopted The Decision Model and ultimately decided to build a decision modeling/execution platform to support it. Avola Decision is the result and was released to the market in April 2014. Avola Decision has customers across various industries, but with a strong focus on banking, insurance and professional/legal services. Today these are concentrated in the Netherlands, Belgium and the UK, where Avola has offices.

The basic premise of Avola Decision is to develop decision-aware applications so that they can support decision-aware processes. Today the platform remains focused on TDM but later this year it will support the Decision Model and Notation (DMN) standard. Avola Decision consists of a SaaS platform for building decision models and an execution engine that can execute these decision models in a public or private cloud, or on premise.

The modeler itself is web-based, allowing easy access to decisions, rule families and the supporting glossary/fact models. Each decision can be expressed as a decision table or rule family supported by a set of facts and a glossary. Decisions and rule families are linked through the shared fact model.

Rule families are shown and edited using the classic tabular rules-as-rows decision table format and a TDM decision model diagram is inferred from the underlying rule families or decision tables. Each of these can be opened and edited while the model for that rule family – the sub-decisions – can be viewed and used to navigate to underlying sub-decisions. Users can also navigate to the associated fact model as needed. The rule editor is data type aware so that conditions match the data and calculated data can be specified for use in the decision. As users edit the rule family by adding conditions they can specify that a new or existing fact type is required and can identify that this is being derived by a new rule family, adding that to the hierarchy.
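To make the rules-as-rows format concrete, here is a minimal, hypothetical sketch of how such a rule family might be represented and evaluated – the fact types, values and code are illustrative only and are not Avola's syntax or API:

```python
# Hypothetical rules-as-rows rule family: each row is a rule with
# conditions on fact types and a conclusion. Illustrative only.
RISK_RULE_FAMILY = [
    {"age": lambda a: a < 25, "income": lambda i: i < 30000, "risk_category": "High"},
    {"age": lambda a: a < 25, "income": lambda i: i >= 30000, "risk_category": "Medium"},
    {"age": lambda a: a >= 25, "income": lambda i: i >= 30000, "risk_category": "Low"},
]

def evaluate_rule_family(rules, facts):
    """Return the conclusion of the first rule whose conditions all match the facts."""
    for rule in rules:
        conditions = {k: v for k, v in rule.items() if callable(v)}
        if all(test(facts[fact]) for fact, test in conditions.items()):
            return rule["risk_category"]
    return None  # no rule matched - a completeness gap worth flagging

print(evaluate_rule_family(RISK_RULE_FAMILY, {"age": 22, "income": 45000}))  # Medium
```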

Fact types are managed in groups called business concepts, and value lists can be defined in an underlying glossary and reused across multiple facts. Business concepts are associated with business domains and these domains are the core management artifact, with roles/security defined at this level. Users can navigate from the fact model to the rule families and decision models that use or derive them and vice versa.

Testing and validation are supported in the modeling tool and are linked to the decision models. Each revision of the model is stored and tests can be run against any revision. Users can do a single test using a form generated by the system from the inputs required by that decision or can download an Excel template to create a batch of tests. The template has value lists and so on built in and, once filled in, can be uploaded to create a set of tests. Results of tests can be reviewed online or downloaded in a result spreadsheet. When reviewing online, users can drill down into the intermediate generated values and see how the rules fired at each level.

Versioning is also supported along with an approval cycle. This approval cycle manages the departments that own different concepts to ensure that all the right people are involved in approving models that potentially cut across business concepts. Separate approval users can be defined in addition to those who can actually edit and manage the models. Reports can also be published for decision models as documentation.

Once the model is complete it can be deployed to public or private cloud (Azure) or to an on-premise Windows server. The deployed service is accessible as a REST API and is executed in memory when deployed. Decisions are logged in terms of the version of data provided, decision-making version used, interim conclusions drawn and so on.
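To give a rough sense of what invoking a deployed decision service looks like, here is a hedged sketch of a REST call – the endpoint, payload and response fields are hypothetical and not Avola's actual API:

```python
import requests

# Hypothetical endpoint and payload - illustrative only, not Avola's real API.
DECISION_SERVICE_URL = "https://decisions.example.com/api/decisions/credit-check/v3"

payload = {"applicant": {"age": 34, "income": 52000, "existingCustomer": True}}
response = requests.post(DECISION_SERVICE_URL, json=payload, timeout=10)
response.raise_for_status()

result = response.json()
# A logged decision would typically echo back the decision-making version used
# and the interim conclusions drawn, along the lines described above.
print(result.get("conclusion"))
print(result.get("modelVersion"))
print(result.get("interimConclusions"))
```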

Pricing is annual with a low price for modelers and approvers and a separate execution pricing based on the number of decisions being made with extra fees for private cloud or on-premise deployment.

More information on Avola Decision is available here and Avola is a vendor in our Decision Management Systems Platform Technology Report.

Jan Purchase and I are working away on our book, Real-World Decision Modeling with DMN, and ahead of the publication we have made a series of short videos about decisions, decision modeling and DMN. These have been posted over on our company blog but here they are for your reference. Check them out.

  1. What is Decision Modeling – a video
  2. What’s the Motivation for Decision Modeling? – a video
  3. Why Model Decisions with DMN – a video
  4. Why Write a Book When There’s a Standard? – a video
  5. The differences between decisions and business rules – a video
  6. Real world examples of how modeling decisions with DMN has helped – a video

If you are interested in the book, you can sign up here for updates and release date information.

Sparkling Logic is focused on enabling business and data analysts to manage and automate decisions better and faster – what they call analytics-driven Decision Management. Sparkling Logic was founded in 2010 and I have blogged a few times about their decisioning platform (most recently here). Customers include Equifax, PayPal, FirstRate, Accela, Northrop Grumman and others across a wide range of solution areas with a strong focus on enterprise customers. These enterprise customers are very focused on multiple projects, enterprise integration and supporting both on-premise and cloud deployments.

The product portfolio now includes Pencil, a decision modeling and requirements tool, and SMARTS, their full lifecycle decision management platform supporting predictive models, data analysis, and expertise/business rules. SMARTS is available on premise, on cloud or embedded in another product and runs across Java and .NET deployments. SMARTS includes form-centric rule authoring, rule induction from data, strong navigation between the various components and integrated collaboration.

Pencil was added to the product portfolio recently to add support for decision modeling with the Decision Model and Notation (DMN) standard along with a glossary. Pencil provides graphical or Excel-based interfaces that share the same repository as SMARTS and can be used stand-alone or to generate artifacts from the decision model for use in SMARTS. This shared infrastructure means a decision model can be included in a SMARTS project. A decision model consists of DMN diagrams and a glossary, which can be shared across projects. As the decision model is built the glossary is either referenced or constructed automatically to support the model. Elements in the glossary can be categorized for management and the use of data in processing is highlighted. Computed elements can be defined as none (for casual models) or in SparkL (a subset of the language used in SMARTS, allowing execution). Pencil guides business analysts in decomposing their decision according to the DMN standard, specifying inputs and outputs as they go. Each node in the diagram can be specified using a tabular layout, text or a list of rules. SparkL used in these is verified and has type-ahead access to the glossary. Models can be versioned and compared with a nice graphical comparison.

SMARTS, like Pencil, is browser-based. It continues to support a collaborative environment with fluid rule authoring that allows rules to be switched between decision tables, decision trees, decision graphs and text as necessary, combining the right mix of metaphors for each project. The ability to see how the current rules affect a set of data (in real time as the rules are edited) is still central to the product experience and data, analytics and rules are integrated with each other throughout. New features since 2013 include:

  • Cascading or inherited decisions allow a decision to be extended or overridden as necessary to deliver a specific version. For instance, a decision on underwriting might be defined and then specialized for California. Rules can be added or removed and values changed. SMARTS remembers the original, allowing for detailed comparison. Multiple levels can be defined and SMARTS ensures that elements that have not been overridden can be changed and these changes are inherited.
  • Native PMML support has been added and 10 PMML 4.2 model types can be executed with no reprogramming – just bind the interface into SMARTS. This is combined with the “Blue Pen” capability to use analytic techniques like rule induction to find rules in data. Many analytic algorithms can be applied to data inside SMARTS. PMML models can be dragged and dropped into the project either as black box models (just binding the data for the model to the project) or exploded into a set of SMARTS artifacts.
  • Lookup models are likewise treated as black box models that are bound to a task in a decision flow. These tables can be defined in the tool, imported from spreadsheets, or managed externally and the query is defined using the SparkL language. They can return one or many answers that are simple or complex data objects. This can all be managed in releases and captured in the tracing etc. This allows potentially very large lookup tables to be managed and versioned without having to convert them to business rules. The fully indexed table model engine guarantees fast execution.
  • Built in support for champion/challenger or A/B testing means that experiments can be defined for any step in the decision. Any number of alternatives can be defined for a given step. A set of experiments can be defined for the overall decision that defines which of these alternatives should be used for each of the tasks that have alternatives. Random or non-random selection criteria can be defined for allocation (see the sketch after this list). Experiments can be set up to use all, none or some of these strategies in a particular simulation run or deployment. Alternatives can be defined explicitly or the Blue Pen feature can be used to use machine learning to find alternative rules in a training set. As these rules are identified they are injected into the alternative decision task approach so that they can be used in the experiment. Experiments can be run as simulations with test data or in production.
  • Starting with the Quebec version, a new graphic investigation feature provides a graph of the rules fired for a specific transaction that led to a specific conclusion. This includes the data and can be used to understand the execution of a specific transaction or a group of transactions.
  • SMARTS supports multi-project enterprises with extensive lifecycle management and task automation. Users can define a flow with tasks for everything from machine learning and simulation to release management and testing. New versions of rules or models or data can be picked up and included automatically, allowing for instance a modeling team to provide new versions of models without requiring manual configuration. Projects and libraries can be imported and synchronized, deployments managed and so on.
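Champion/challenger allocation comes up in several of the capabilities above, so here is a minimal, generic sketch of randomly allocating transactions to alternative strategies – the strategy names and weights are made up and this is not SMARTS code:

```python
import random

# Hypothetical experiment definition: the champion gets 80% of traffic,
# two challengers split the rest. Illustrative only.
EXPERIMENT = [
    ("champion_underwriting", 0.80),
    ("challenger_tighter_dti", 0.10),
    ("challenger_ml_rules", 0.10),
]

def allocate() -> str:
    """Randomly assign a transaction to one of the alternative strategies."""
    strategies, weights = zip(*EXPERIMENT)
    return random.choices(strategies, weights=weights, k=1)[0]

# Each transaction records which alternative handled it so that outcomes
# can later be compared across the experiment arms.
for txn in ["t-1001", "t-1002", "t-1003"]:
    print(txn, allocate())
```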

In addition, the product is localized into English, Japanese, Chinese and Spanish. For OEM customers the user interface and documentation can be branded to allow the product to be deeply integrated into a commercial offering.

More information on Sparkling Logic SMARTS can be found here and Sparkling Logic is one of the vendors in our Decision Management Systems Platform Technology Report.

This was my first briefing on Statistica since I reviewed v11 in 2012. StatSoft was established in 1984 and acquired by Dell in March 2014. The product has been continuously developed to the point where it has over 16,000 integrated functions. Since acquisition, Dell has been focused on integrating it into Dell systems and processes and accelerating its product development and release cadence. Dell Statistica 13.1 was recently announced and Version 13 (released September 2015) was the first Dell version:

  • The user interface got a major upgrade with a more modern look and feel – still a Windows application but significantly improved.
  • Data discovery and visualization was a big focus in V13 with interactive charts, maps, dashboarding and an entirely new visualization engine.
  • New connectors support Hive, SQL and more.
  • New data blending and improved real-time scoring.

The overall environment supports:

  • Text analytics along with other analytic techniques and embedded business rules in the core environment
  • Enterprise Server with embedded business rules, validated data and metadata management.
  • Automated model monitoring, collaboration & process control
  • Real time model scoring
  • Regulatory compliance & change management
  • NLP, Entity Extraction, & in-Hadoop model deployment
  • Single or multi-site deployment: license and usage management

Version 13.1 added some specific new capabilities:

  • Making access easier is a key theme and data preparation is a key area, with 80 pre-built functions including a node for data health checks (sparse data, missing data, outliers, etc.). Semi-structured data from documents or XML files is supported along with redaction of confidential information.
  • Data can be exported directly to Tableau and Qlik and new visualization capabilities were added.
  • Reuse of data prep and analytic workflows built by an expert is also supported, allowing an expert to create a complex flow that can be templatized and passed to someone with less expertise so they can embed it in their own work.
  • Dell also added a Native Distributed Analytics Architecture. Dell Statistica runs on premise and any analytic package built in it can be exported as Java, PMML, SQL and so on. This can be pushed into various deployment environments like SQL Server, Hadoop, Dell Boomi, clouds (private and public) or embedded in edge systems like devices.
  • R and Python are supported – R packages can be brought in as nodes and scripts can be written in Python within the environment.
  • In-database analytics are supported through SQL and Spark push back. Correlation, sorts, frequency, classification, regression and other elements are supported with more to come.

A new web-based reporting UI has been added and V14 is expected to be browser based and more cloud-centric.

More information on Dell Statistica is available here and Dell is a vendor in our Decision Management Systems Platform Technology Report.

Modelshop was founded by Tom Tobin, someone who has spent his career building analytic applications in credit origination, portfolio optimization, risk management and fraud detection for major financial institutions. Tom’s vision for Modelshop is to make the technologies that enable these types of solutions available to all organizations so they can make their analytics actionable. Modelshop got a round of seed funding about a year ago and has been building its team and product since. Now in beta with the platform, Modelshop has a couple of lending customers focused on disruptive lending solutions as well as some others focused on portfolio management. The vision for Modelshop is to be a platform that allows companies to quickly create analytic applications that can automate sophisticated data-driven decisions in real-time to support mission critical operations, and to optimize these applications (and the decisions they make) over time. All of this runs in public or private clouds.

The Modelshop platform allows you to model decisions, simulate these for expected outcomes, deploy them as real-time analytic applications with APIs, and predict future behavior and performance to optimize results over time. The platform is underlain by a computational engine that makes sophisticated real-time logic approachable and transparent. The decision models themselves are built by business analysts with data scientists focused on optimizing and improving decisioning over time. Deployment capabilities allow for integration with online applications, LOS etc.

A typical Modelshop application is built up as follows:

  • A Modelshop model starts with a set of business objects that can be defined. These have a set of properties and can be populated from connectors (to databases, APIs or XML/JSON). Objects can be related to each other by simply linking fields and additional calculated properties can be defined.
  • This model and all the associated metadata can be viewed and edited in the tool. All this is defined in the business language – Groovy – and an interactive editor allows things to be manipulated using it, with auto-complete and more.
  • Calculations are strongly associated with these objects – they are defined as object properties. Calculations can share functions to allow similar calculations on several objects and inheritance is supported to allow these to be specialized, for instance to allow different classes of products to have different approaches to a calculation.
  • All the objects and their relationships can be explored in a visual model graph editor. Additional links can be defined diagrammatically. Any specific instance of an object can be drilled into and the links for this instance can be navigated and calculations viewed in that context.
  • Rule bases can be associated with an object and fire as soon as they need to – any data change causes rule firings. Rules use the same syntax as the calculation engine and build on the calculations defined for objects.
  • R integration allows R to be executed in situ – data scientists can pick the objects to be integrated and an automatic data frame is created that can be processed using R. Model training, testing and scoring are handled in situ by the engine. Scores are presented as object properties.
  • The Modelshop calculation engine ensures that all values (calculations, cubes, rule results and model scores) are updated continuously using a proprietary dependency technology. Any change to data propagates events and immediately updates end results. For example, changing an address to a new zip code could trigger a cost of living model to increase a debt assumption that ultimately causes a credit decline decision. The update could be delivered back to a Modelshop user or via an API to an external application or website, all in real-time.
  • Finally, Modelshop provides for multiple dashboard views with charts, data lists, document widgets and real-time cubes showing overall business performance. Like the rest of the environment this is HTML5, interactive and updated in real-time.

Modelshop is designed to integrate both a portfolio and real-time perspective while continuously updating all the calculations and providing a single environment for all the logic required to manage a complete analytic application.
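To illustrate the kind of continuous, dependency-driven recalculation described above, here is a small generic sketch – the zip code to cost-of-living to debt-assumption to decline chain follows the example in the post, but the code is illustrative only and is not Modelshop's engine or its Groovy-based language:

```python
# Minimal dependency-propagation sketch: changing an input re-evaluates
# every calculation that depends on it, mimicking the zip-code example above.
COST_OF_LIVING_BY_ZIP = {"10001": 1.4, "73301": 1.0}  # made-up factors

class Applicant:
    def __init__(self, zip_code, income, base_debt):
        self.income = income
        self.base_debt = base_debt
        self.zip_code = zip_code  # setting this triggers the first recalculation

    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        if name == "zip_code":
            self._recalculate()

    def _recalculate(self):
        # Cost-of-living model -> debt assumption -> credit decision
        factor = COST_OF_LIVING_BY_ZIP.get(self.zip_code, 1.0)
        self.debt_assumption = self.base_debt * factor
        self.decision = "decline" if self.debt_assumption > 0.4 * self.income else "approve"

applicant = Applicant(zip_code="73301", income=60000, base_debt=20000)
print(applicant.decision)          # approve (20000 <= 24000)
applicant.zip_code = "10001"       # address change propagates automatically
print(applicant.decision)          # decline (28000 > 24000)
```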

More information on Modelshop is available here and anyone interested should contact them for access as they are moving into a public release. Modelshop is one of the vendors in our Decision Management Systems Platform Technology Report.

As Jan and I work on our book, Real-World Decision Modeling with DMN, we have been discussing some of the common misconceptions about decision modeling that we’ve encountered among adopters. This is going to be one of the chapters in the book, in which we analyze misguided applications of decision modeling and their consequences, but I thought I would take a minute to discuss some of them. Decision modeling is a powerful, potentially transformative approach for organizations that approach it correctly. These misconceptions can really get in your way:

Misconception: Decision modeling only works for business rules
This is one of the most common and dangerous misconceptions— that the only reason to build a decision model is to manage decision logic expressed as business rules and, often, specifically the business rules automated in a Business Rules Management System. While decision models are very good at structuring and managing business rules, a singular focus on this use case means that organizations miss the benefit of using decision models for other purposes, for example to:

  • Coordinate structure and improve the rigor of large decisions
  • Train personnel in the execution of manual decisions
  • Frame analytic project requirements
  • Discover gaps in data availability
  • Reveal inconsistencies and conflicts of responsibility in organization support for making pivotal decisions
  • Promote consistency in decision making across multiple business channels
  • Clarify the automation boundary of decision making

Misconception: Decision models are only complete if you can generate code from them
Decision models are complete when they are fit for purpose. This might be when you can generate code but might also be when you can fully describe a decision for training purposes, build an effective analytic, define decision making responsibilities across an organization, plan the incremental advancement of an automation boundary or structure the logic you are managing in a BRMS. The purpose for which you are using a decision model will dictate the extent to which its decision logic needs to be complete.

Misconception: Decision logic is what matters in DMN

A focus on execution can sometimes lead to the misconception that only the decision logic layer really matters in DMN. After all, the majority of the DMN specification is devoted to documenting this. The value of decision requirements modeling is downplayed, other than perhaps as a thinking tool. This misunderstanding can lead to the bad practice of adorning a process model with DMN decision tables to describe each piece of logic in the process, skipping decision requirements modeling altogether.

This is a dangerous misconception: it misses the value gained by thinking about the decision as a whole – its structure and dependencies. It also embeds historical, inflexible and unnecessary sequences in the decision model while mixing elements of process and decision making that are better kept separated. Perhaps most importantly it focuses on the trees at the expense of the forest. We discuss this flawed approach to decision modeling in more detail in our book.

Misconception: Decision models are handed off to developers as a technical asset

Decision models are neither an implementation-centric IT asset nor a way to eliminate IT by allowing business analysts to build and maintain executable models in isolation. They are, in fact, a vehicle for effective business/IT collaboration—a jointly owned, evolving consensus of what decisions are to be made, how and why. Decision making expertise is a corporate asset that should not be buried in IT systems; it requires explicit capture, innovation and maintenance. Handing decision models off to developers for a ‘one-shot’ translation into code or trying to pretend that a sufficiently detailed, executable decision model can mean you don’t need IT are both equally dangerous and impractical propositions. Decision models are a precise vehicle for representing the requirements, structure and mechanism of decision making which can be maintained and innovated by business analysts, directly inform systems implementation and demonstrate compliance to a third party.

Misconception: Capture business rules first, decisions will emerge from this

In addition, people sometimes think that business rules should be captured first, independently of a decision model. They shouldn’t be, as this is analogous to building a new house by starting with the brick walls and doors. Part of the value of a decision model is that it’s a foundation: it structures, supports and scopes the capture and analysis of business rules. Trying to list all the rules without a decision model is just going to waste time and energy.

Decision modeling is a powerful concept—don’t be misled by these common misconceptions.

To learn more about the book and to sign up to be notified when it is published, visit http://www.mkpress.com/DMN/.

I got a briefing from a new predictive analytics vendor, DMWay. DMWay was founded in 2014 based on significant research over the previous decades. DMWay had its first VC round in 2015 and has 20+ employees and dozens of customers. DMWay is focused on democratizing data science and automating machine learning so that those with business domain expertise can leverage data science – without also having to be hard-core data scientists. Plus the pricing is designed to be very aggressive to remove that barrier too.

As is well documented, predictive analytics in many organizations is expensive, time consuming and not scalable. The need for strong technical skills means that those involved often lack the business domain expertise to understand the data.

DMWay has a very simple UI, designed to be very accessible to non-technical folks. It lays out a simple process – define the input data, document the metadata, then model.

  • The input data step points at existing flat-file data and the tool automatically holds back a validation set.
  • The structure of this data is then analyzed to build metadata that the business user can extend or correct. The tool expects flattened but not analyzed or enriched data.
  • The final step is to pick a modeling approach (from a set which, they claim, have undergone a rigorous selection process and been proven to create highly stable and accurate models) and run the process.
  • The time from data input to kicking off a run is less than a minute.

The tool analyzes the data to find the right approach – binning, categorical variable identification, missing value handling and so on. The engine identifies suppressor variables to eliminate correlation problems, guards against overfitting, uses the validation data and more – all automatically. The models created by DMWay are open for inspection by the user. The model is deployed either using their own scoring engine with a REST API or by generating R, Java or SQL code for either batch or real-time scoring.
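The kinds of automated preparation steps described here – missing-value handling, binning and holding out validation data – can be sketched generically in a few lines; this is purely illustrative and says nothing about DMWay's actual algorithms (the file and column names are made up):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Illustrative flat file - the file name and column names are hypothetical.
df = pd.read_csv("applications.csv")

# Missing-value handling: fill numeric gaps with the median.
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Simple equal-frequency binning of a continuous variable.
df["income_bin"] = pd.qcut(df["income"], q=5, duplicates="drop")

# Automatically hold back a validation set, as the tool does behind the scenes.
train, validation = train_test_split(df, test_size=0.2, random_state=42)
print(len(train), len(validation))
```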

Models can be grouped into projects for management and a simple interface provides basic information about last run and last modified.

DMWay runs on premise on desktops or servers, as does the scoring engine.

You can get more information on DMWay here. DMWay is a vendor in our Decision Management Systems Platform Technologies report.

Final set of announcements at FICO World relates to Cybersecurity or Cyber Threat Analytics. Obviously cyber attacks are increasing and driving a lot of fraud and abuse – phishing, malware, personal record theft and more. The big challenge in all this is the timespan. 60% of data theft happens within minutes or hours of a breach but breach detection typically takes weeks or months, with nearly half taking months to find – and many times this is found by someone else. Many problems slip past the current detection mechanisms because they change rapidly. The alerts are also too numerous and undifferentiated.

FICO has been investing in taking its experience in fraud detection and applying the analytics and expertise to cyber threats. Many of its approaches, refined for credit card fraud detection, work for general cyber security too, such as profiling, behavioral archetypes, behavior sorting to prioritize risks and reduce false positives, and self-calibrating and learning models – all combined with a consortium model to share data.

This work has been combined with iboss cybersecurity, a cloud-based platform for mobile and on-premise end point security. FICO’s profiling and analytics are linked to the iboss cloud solution to drive better detection and blocking as well as improved investigation.

Continuing with product announcements at FICO World, next up are the new products for fraud prevention and compliance. Financial crime is on the minds of consumers right now, with two-thirds worrying about having their credit card data compromised or having their accounts used fraudulently. FICO has launched a new FICO Consumer Fraud Control product. This allows specific card management scenarios like controlling how much a card can be used and on what, making sure it is not used inappropriately and so on. The new product has features like:

  • 2 factor or 2 element controls for specific transaction types
  • Support for recurring transactions
  • Temporary suspension of controls to allow a transaction
  • Integration so issuers can use the input of a consumer in their fraud detection

All of this is running on mobile devices, increasingly the preferred banking environment for consumers. This focus on mobile devices also means that understanding these devices can help in an overall fraud solution. FICO has launched new Mobile Device Security Analytics that combine contextual and device data as well as streaming data to identify where devices are and what they are doing. This behavioral profile can be fed into fraud detection for cards linked to those devices. It can also be used to improve the profiling and grouping already being used to detect fraud – Falcon Fraud Manager does a lot of work to identify what kinds of transactions might be normal for someone like you and this mobile data can become part of this analysis.

Other products include:

  • New Customer Communication Services designer makes it easy to build customer engagement applications quickly with automatic validation, text to voice and other elements.
  • The Identity Resolution Engine is designed to integrate disparate and incomplete data, organize it into people, places and things, and use these to resolve identities – supporting visualization of relationships for fraud-ring investigations, driving graph analytics to automate social network analysis and creating a single view of the customer.
  • FICO recently acquired Tonbeller for Know Your Customer, Anti-Money Laundering and Case Management. These solutions are appealing to FICO because they scale across organizations and throughout the lifecycle while also supporting non-banking customers in insurance or for general corporate compliance and business partner management. Plans for Tonbeller involve best practice sharing, using pooled data to prioritize investigations and seeing where behavioral profiles differ from the KYC data.

All this is part of a move toward Enterprise Fraud Management with a central fraud hub that detects and prevents fraud across multiple channels and provides fraud-aware communication and integration for other solutions.

One final fraud and abuse product is FICO Falcon Assurance Navigator, a new product designed to detect and manage fraud in the procure-to-pay process. Driven by new federal regulations, this product leverages FICO’s fraud technology and expertise from Stanford to track 100% of transactions and prioritize those that need to be investigated, applying analytics and rules based on different compliance regimes (federal grants vs. other university funds, for instance). It can check POs in advance as well as invoices later, check time and expenses and integrate with procurement.

FICO made a series of announcements today at FICO World 2016. The event kicked off with a fun retrospective of the 60 year history of FICO. Bill Fair and Earl Isaac founded the company in 1956 to use data and analytics to improve decision-making. This focus has not really changed in all the years since – FICO is still focused on analytical decision-making.

The products being launched fall into three categories – FICO Decision Management Suite 2.0, fraud detection and prevention, and cybersecurity. All these products, of course, are focused on high speed, high volume decision-making and on using analytics to improve decisions.

The capabilities being developed are designed to solve three classes of business problem:

  • Deal with complexity
  • Develop a sustainable competitive advantage
  • Defend against criminals

FICO’s vision for its Decision Management Suite is designed to support the whole decision-making sequence – gather and manage data, analyze it, make decisions based on this analysis and take appropriate actions. Good organizations do this in a thoughtful, coherent way and the suite is designed to support a lifecycle for this (authoring, managing, governing, executing and improving) and make this all accessible. The suite is designed to be both a general purpose platform for customers to use and a basis for the product solutions FICO develops itself.

Lessons from the 1.0 Decision Management Platform led to 5 business drivers for the new release:

  1. Capturing subject matter expertise
    Most organizations don’t capture business expertise well or systematically and they need to do so to prioritize decision and management improvement efforts
  2. Intelligent solution creation
    Despite investments in rapid application development there was still work to do making it easy to build solutions
  3. Faster insight to execution
    Time to market, time to using analytics, is critical.
  4. Building institutional memory
    Organizations are increasingly focused on how to build institutional memory, in a more leveragable way, especially as expertise is being embedded into systems.
  5. Greater analytic accessibility
    Organizations need to have more people using analytics and to have analytics be more pervasive.

Major upgrades and new capabilities to address these issues drove the Decision Management Suite 2.0 designation. New and significantly improved capabilities then include:

FICO DMN Modeler

Decision making is hard because there is a lack of timely, relevant data; because some of this data is contradictory or opinion based; there’s not enough planning; and communication is poor. To address this there has been a real effort to define decision-making formally and separately from business process or data – the Decision Model and Notation (DMN) standard. Part of the new suite is a modeler for building decision models based on the new standard. I blog about the standard a lot so here’s a link to other posts on decision modeling.

FICO Optimization Solutions

FICO has been doing optimization a long time and made a number of acquisitions to build out its product portfolio. Their focus has been not just on developing innovative algorithms but also on rapid operationalization of these models based on templates and on coping with poor data. New features in the 2.0 suite include new templates around pricing and collections problems, an improved business analyst interface, improved collaboration for those working on optimization models as well as improved performance.

FICO Decision Modeler

FICO Decision Modeler is the evolution of Blaze Advisor, FICO’s established Business Rules Management System, on the cloud. The cloud focus makes it easier to engage business users, extending testing and validation in particular. Faster deployment and operationalization is also critical. All the decision rule metaphors have been redesigned and built as native HTML editors. Rapid deployment of SAS, PMML and other models without recoding allows analytics to be combined with the rules built using these metaphors.

FICO Text Analyzer

New tools for extracting structured, usable insight from unstructured text through entity analysis and related algorithms. FICO’s particular focus is to make unstructured data available for predictive analytics cost effectively.

FICO Strategy Director

Strategy Director is based on long experience with managing decision-making strategies in the account and customer management space (FICO’s TRIAD product). It is designed to provide a common environment that supports collaboration, shifts the balance to the business from IT and means teams are not starting from scratch each time. This is particularly good at managing groups, scoring them, doing champion/challenger (A/B) testing, segmenting customers and then reporting on all this. The new Strategy Director is available for configurations beyond account/customer management – using configuration packages based on data fields, variables, scores and defined decision areas. These can be defined by FICO, customers or partners and can be updated based on what is learned. New configurations are coming for pricing, deposits and in industries beyond banking.

FICO Decision Central

This is the evolution of FICO Model Central (reviewed here). This now records how the whole decision was made, not just what the analytic scores were. All the decisioning assets used in the decisions are recorded, all the outcomes and performance data are pulled together, and so is all the logged information (scores calculated, rules fired etc). It is a tool for reviewing how decisions are being made, improving them and capturing institutional memory.

FICO Decision Management Platform 2.0

All of these capabilities need to be deployable and manageable, and the platform has to be scalable, resilient, easy to integrate and all the rest. The platform includes the DMN Modeler, a new data modeling environment for business vocabulary/business terms, Decision Modeler for logic and one-click execution on Apache Spark – turning a DMN model into a Spark graph for native execution. Plus platform management capabilities and visualization, all running on AWS (with FICO cloud and on-premise to come).
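For a rough sense of what turning decision logic into a Spark graph can mean in practice, here is a generic PySpark sketch that evaluates a simple eligibility decision natively over a DataFrame – an illustration under assumed data and thresholds, not FICO's implementation:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("decision-sketch").getOrCreate()

# Hypothetical input data for an eligibility decision.
df = spark.createDataFrame(
    [("a-1", 720, 0.32), ("a-2", 580, 0.55)],
    ["application_id", "credit_score", "debt_to_income"],
)

# Each decision in the model becomes a column expression evaluated natively by Spark.
decided = df.withColumn(
    "eligible",
    F.when((F.col("credit_score") >= 620) & (F.col("debt_to_income") <= 0.43), True).otherwise(False),
)
decided.show()
```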

FICO Decision Management Platform Streaming

The final piece is one to handle streaming data and make very low-latency decisions by embedding decisioning in the stream. Not just handling the data stream, but using rules and predictive analytics to make in-stream decisions. This platform is designed to allow drag and drop assembly of steps (rules, models, connectors) into stateful models that are agnostic of the data source. And execute them very fast with very low latency.

FICO uses the new platform itself to develop solutions such as its FICO Originations Manager – now built and executed 100% on the new platform. The new platform will be available on the FICO Analytic Cloud, with much of it available already and the rest soon – with free trials and some free usage.

Back when I was attending the SAS Inside Intelligence analyst day they briefed us under embargo about their new Viya platform. This was announced today from SAS Global Forum. Ultimately all the SAS products will be moving to the SAS Viya platform and the platform is designed to ensure that all SAS products have some common characteristics:

  • All HTML5, responsive user interfaces supporting both point-and-click/drag and drop interactions and an interactive programming interface across the products on the platform. This is intended to allow some to program and some to work more visually while sharing key underlying components. SAS has also invested in making the interfaces more usable in terms of providing improved visual feedback and interactive suggestions as users work with data for example.
  • Support for Python, Lua and Java not just SAS’ own programming language. In addition REST APIs will be available for the services delivered by the products on the platform so that these can be integrated more easily and accessed from a wide variety of environments. This is based on a micro-services architecture designed to make it easy to take small pieces of functionality and leverage them.
  • Multi-platform and cloud-centric to try and remove some of the impedance created as companies mix and match different computing platforms. This is true especially of the deployment capabilities with a much greater focus on SDKs, APIs and deployment more generally. Viya products will provide support for deployment in-Hadoop, in-database, in-stream and in-memory.
  • SAS is committed to delivering a wide range of new analytic and machine learning (and cognitive) algorithms on this platform as well as making it easier to integrate their algorithms with others’. Many of the new algorithms should be available as services in the cloud, allowing them to be easily integrated not just leveraged inside SAS tools.

More to come on this but I think this is a good direction for SAS. The years of development behind SAS products give them some heft but can also make them “lumpy” and result in layers of technology added on top of each other. Viya will let them re-set a robust and powerful set of capabilities on a modern and more open platform.

Lisa Kart and Roy Schulte recently published a new research report Develop Good Decision Models to Succeed at Decision Management (subscription required). This is the first piece of formal research published by Gartner on decision modeling. Their introduction text says

The industry trends toward algorithmic business and decision automation are driving wider adoption of the decision management discipline. To succeed at decision management, data and analytics leaders need to understand which decisions need to be modeled and how to model them.

I really like this phrase “algorithmic business.” I was just co-hosting the TDWI Solution Summit on Big Data and Advanced Analytics with Philip Russom and we discussed what “advanced analytics” meant. We concluded that it was the focus on an algorithm, not just human interpretation, that was key. This phrase of Gartner’s builds on this and I think it is clear that advanced analytics – data mining, predictive analytics, data science – is central to an algorithmic business. But it’s not enough, as they also make clear – you need decision management wrapped around those algorithms to deliver the business value. After all, as an old friend once said, “predictive analytics just make predictions, they don’t DO anything.” It is this focus on action, on doing, that drives the need to manage (and model) decisions.

Lisa and Roy make three core recommendations:

  • Use Analytic Decision Models to Ensure the “Best” Solution in Light of Constraints
  • Use Business-Logic Decision Models to Implement Repeatable Decision-Making Processes
  • Build Explicit Analytic and Business-Logic Decision Models at Conceptual, Logical and Physical Level

All good advice. The first bullet point relates to the kind of decision models that are prevalent in operations research. These are a powerful tool for analytical work and should definitely be on the radar of anyone doing serious analytic work.

The second point discusses Business-Logic Decision Models, the kind of model defined in the Decision Model and Notation standard. These decision models are focused on defining what decision-making approach (both explicit logic and analytic results) should be used to make a decision. While using these to structure business rules is the more known use case, these kinds of models are equally useful for predictive analytics as Roy and Lisa note in their paper. Business logic models can embed analytics functions such as scoring to show exactly where in the decision-making the analytic will be applied. More importantly we know from our clients using this kind of decision modeling in their advanced analytics groups that the model provides a clear statement of the business problem, focusing the analytic team on business value and providing requirements that mesh seamlessly with the predictive model development process.

As for the third point, we see clients gaining tremendous value from conceptual models that cover decision requirements as well as more detailed models linked to actual business logic or analytic models to fully define a decision. Any repeatable decision, but especially high volume operational decisions, really repays an investment in decision modeling.

Roy and Lisa also address one of the key challenges with decision modeling when they say that “many data and analytics leaders are unfamiliar with decision models.” This is indeed a key challenge. Hopefully the growing number of vendors supporting it, the case studies being presented at conferences, books and the general uptick in awareness that comes from consultants and others suggesting it to projects will start to address this.

They list some great additional Gartner research but my additional reading list looks like this:

One of the interesting and useful things about the Decision Model and Notation (DMN) standard for decision models is how it handles the data required by a decision. Simply put, a Decision in DMN may have any number of Information Requirements and these define its data dependencies – the Decision requires this information to be available if the decision is to be made. These Information Requirements may be to Input Data (what DMN calls raw data available “outside” the decision) or to other Decisions. Because making a decision creates data – you can think of making a decision as answering a question, producing the answer – Input Data and Decision outcomes are interchangeable in Information Requirements. This has lots of benefits in terms of simpler execution and isolation of changes as models are developed and extended.

Recently a group I belong to was asked (indirectly) if an Information Requirement can be met by a Decision in one situation and an Input Data in another. The context was that a specific decision was being deployed into two different decision services. In one case the information this decision needed was supplied directly and in the other it was derived by a sub-decision within the decision service. For instance, a calculated monthly payment is required by an eligibility decision that is deployed in two decision services – in one it is calculated from raw data input to the service and in the other it is calculated and stored before the service is invoked and passed in to the service.

This question illustrates a critical best practice in decision modeling with DMN:

If the information is ever the result of a decision then it is always the result of a decision.

The fact that it is calculated inside one decision service (because the decision is inside the execution boundary) and outside the other (the decision is outside the execution boundary and supplies information for a decision that is inside it) does not change this. This should be shown in every diagram as a decision. If it must be decided then it’s always a Decision and if it is just input as-is to the decision-making then it’s Input Data. The fact that its value is sometimes stored in a database or passed in as an XML structure and then consumed as though it was a piece of data does not change its nature.

Why does this matter? Why not let people use Input Data when it is “looked up” and a Decision when it is “calculated”? Several reasons:

  1. Having two representations makes impact analysis and traceability harder – you will have to remember/record that these are the same.
  2. More importantly, there is an issue of timeliness. Showing it as a piece of data obscures this and would imply it is just “known” instead of actually being the result of decision-making.
    For instance we had a situation like this with a Decision “is US entity”. This is a decision taken early in an onboarding process and then stored as a database field. This should always be shown as a Decision in a decision model, though, as this makes people think about WHEN the decision was made – how recently. Perhaps it does not matter to them how long ago this decision was made but perhaps it does.
  3. DMN has a way to show the situation described. A decision service boundary can be defined for each decision service to show if it is “passed in” or calculated.
    I would never show this level of technical detail to a business user or business analyst but it matters to IT. The business modeler should always see it as a Decision (which it is) and simply note that they have another Decision that requires it. They can treat it as a black box from a requirements point of view so it does not make the diagram any more complex – it just means it’s a rectangle (Decision) not an oval (Input Data).
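To make the monthly-payment example above concrete, here is a hedged sketch of the two decision services – the function names, formula and thresholds are hypothetical, but they show how the same sub-decision sits inside the execution boundary of one service and is supplied as pre-computed input to the other:

```python
# Hypothetical sketch: "monthly payment" is always a Decision in the model,
# even though only one of the two deployed services computes it internally.

def decide_monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Sub-decision: derive the monthly payment from raw Input Data."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def eligibility_service_a(principal, annual_rate, months, income):
    """Service A: the monthly-payment Decision is inside the execution boundary."""
    payment = decide_monthly_payment(principal, annual_rate, months)
    return decide_eligibility(payment, income)

def eligibility_service_b(precomputed_monthly_payment, income):
    """Service B: the same Decision was made earlier and its result is passed in."""
    return decide_eligibility(precomputed_monthly_payment, income)

def decide_eligibility(monthly_payment, income):
    """Top-level decision: payment must stay under a third of monthly income."""
    return monthly_payment <= income / 12 / 3

print(eligibility_service_a(200000, 0.05, 360, 90000))  # True
print(eligibility_service_b(1073.64, 90000))            # True
```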

This is the kind of tip Jan and I are working to bring to our new book, Real-World Decision Modeling with DMN, which will be available from MK Press in print and Kindle versions in the coming months. To learn more and to sign up to be notified when it is published, visit http://www.mkpress.com/DMN/.

I am a faculty member of the International Institute for Analytics and Bob Morison has recently published some great research (to which I made a very modest contribution) on Field Experience In Embedded Analytics – a topic that includes Decision Management. If you want access to the full research you will need to become an Enterprise Research client but the three big ideas, with my comments, are:

  • Embedding analytics in business processes makes them part of the regular workflows and decision-making apparatus of information workers, thus promoting consistent and disciplined use
    We find that focusing analytics, especially predictive analytics, on repeatable decisions that are embedded in day to day processes is the most effective and highest ROI use for analytics. Understanding where the analytics will be used is critical and we really emphasize decision modeling as a way to frame analytic requirements and build this understanding.
  • Unless decisions and actions are totally automated, organizations face the challenges of adjusting the mix of responsibilities between automated analytics and human decision makers
    Again decision modeling can really help, especially in drawing the automation boundary. Of course when decisions are wholly or partly automated you need to embed the analytics you build into your operational systems using Decision Management and PMML for instance.
  • When embedding analytics to assist smart decision makers, you’ve got to make them easy to understand and use – and difficult to argue with and ignore
    As our research into analytic capabilities pointed out, the need for visual v numeric output from analytics was one of the key elements in picking the right analytic capability to solve a problem.

Enterprise Research clients can get the report here and if you are interested in analytics you should seriously consider becoming an Enterprise Research client.

One of the fun things going on over at the Decision Management Community is a series of challenges based on various real or real-ish problems. For each the site encourages folks to develop and submit a decision model to show how the problem described could be solved. This month there was one on Port Clearance Rules.

We are looking for a decision model capable to decide if a ship can enter a Dutch port on a certain date. The rules for this challenge are inspired by the international Ship and Port Facility Security Code. They were originally developed for The Game Of Rules, a publication of the Business Rules Platform Netherlands. The authors: Silvie Spreeuwenberg, LibRT; Charlotte Bouvy, Oelan; Martijn Zoet, Zuyd University of Applied Sciences.

For fun I worked this up in DecisionsFirst Modeler (with a little help from Jan Purchase, my co-author on Real-World Decision Modeling with DMN). I deliberately did not develop the decision tables for this as I wanted to show the power of a decision requirements model. Looking at the model, and asking only the questions that are immediately apparent as you develop the model, was revealing and to me showed the value of a decision model:

  • It’s clear what the structure of the problem is
  • It’s clear what’s missing
  • It’s much easier to see the whole problem than it is to get the gist from a list of rules

I have done this kind of exercise many times – building an initial model from a set of rules or documents – and it never fails to be useful.

The full report generated from DecisionsFirst Modeler is in the solution set.

Jan Purchase of LuxMagi and I are working away on our new book, Real-World Decision Modeling with DMN, and one of the questions we have been asking ourselves is who we are aiming the book at – who builds decision models? Jan had a great post on Who Models Business Decisions? recently to address this question and I wanted to point you to it and make two quick observations as it relates to analytic projects (both “hard” data science projects and “soft” business analytics projects) and subject matter experts.

We have done a number of analytic projects using decision models to frame analytic requirements. We work with data scientists using decision models to make sure that the data mining and predictive analytic efforts they are engaged in will connect to improved business results (better decisions) and that the models built can be deployed into production. Decision models put the analytic into context and ensure the analytic team stays focused on the business. We also work with data analysts building dashboards or visualizations. These are sometimes just focused on monitoring the business but increasingly are designed to help someone make decisions. By focusing on the decision making first and then designing a dashboard to help, data analysts avoid letting the structure of the data or the availability of “neat” widgets drive the design. Decision models keep you focused on the business problem – improving decision-making. We have a neat white paper on this too – decision modeling for dashboard projects. Don’t only use decision models on rules projects – use them for analytics too.

We are also increasingly bullish on the ability of subject matter experts, business owners, to participate in decision modeling. Our experience is that the simplicity of the DMN palette combined with the logical structure of a DMN decision model makes it easy for SMEs to participate actively in developing a model. They do have to watch their tendency to describe decision-making sequentially but rapidly pick up the requirements/dependency approach critical to DMN models. Don’t limit DMN decision modeling to modelers and analysts – bring in the business!

Don’t forget to sign up for updates on the book so you know when it is published at http://www.mkpress.com/DMN/.

The Call for Speakers for DecisionCAMP 2016 is open through April 1st, 2016. This year DecisionCAMP will be hosted by the International Web Rule Symposium (RuleML) on July 7, 2016 at Stony Brook University, New York, USA. The event will aim to summarize the current state of Decision Management, with a particular focus on the use of the Decision Model and Notation (DMN) standard. As always, it will show how people are building solutions to real-world business problems using various Decision Management tools and capabilities.

We are currently seeking speakers on a variety of topics such as:

  • Decision Modeling
  • Business Rules Management Systems
  • Predictive Analytics and Data Mining
  • Decision Optimization
  • Decision Management Use Cases in Different Industries
  • Best Practices for using DMN and Decision Management Technologies
We are looking for great presentations, so if you want to present at this event please submit the abstract of your presentation using EasyChair.

If you don’t feel you have something to share then at least make sure you put it on your calendar. Take advantage of the opportunity to share your unique insights to empower industry with the latest advances in decision management – apply to speak here by April 1st.

And don’t forget there are still a couple of days to apply to speak at Building Business Capability 2016 too.

I got an update from a new player in the decision management market today – ACTICO. They aren’t really new, though, as they are using the Bosch SI business rules management system, Visual Rules (last reviewed by me in 2010). The Visual Rules business has been split with Bosch SI focusing on IoT and manufacturing and ACTICO focusing on the financial industry (banks and insurance).

Visual Rules has been in the business a while now (Innovations Software Technology was founded in 1997) and has 100+ customers across 30 countries, concentrated in Europe. The product is currently called Visual Rules for Finance (reflecting its historical strength in banking and insurance), with Bosch SI continuing to use the Visual Rules name.

The product has the same components you are familiar with:

  • A visual modeling studio
  • A Team Server for collaboration and a Builder for testing and deployment
  • Execution tools including a runtime, execution server and batch execution
  • Integration capabilities for databases and other data sources, as well as Identity Management (with multi-tenancy support)

Plus there is the Dynamic Application Framework for building rules-based UIs.

Visual Rules continues to support flow rules (something a little like a decision tree), decision tables and state flows (classic state transition diagrams). These rule editors are integrated with the Dynamic Application Framework, which combines the rules with user interfaces, data designs and integrations, as well as processes defined using the state flows, to build complete rules-based applications.
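
For readers less familiar with the rule types, a decision table at its simplest is a set of condition rows mapped to outcomes under a hit policy such as “first match wins”. The sketch below is product-agnostic Python – it is not Visual Rules or ACTICO syntax – and is only meant to illustrate the idea.

```python
# Minimal, product-agnostic sketch of a decision table: each row pairs a
# condition with an outcome, and the first matching row wins ("first hit").
# Illustrative only; the fields and outcomes are hypothetical.

decision_table = [
    # (predicate over the case data, outcome)
    (lambda c: c["risk_score"] >= 80,                         "Refer"),
    (lambda c: c["risk_score"] >= 50 and c["amount"] > 10000, "Manual review"),
    (lambda c: True,                                          "Approve"),  # default row
]


def evaluate(case):
    """Return the outcome of the first row whose condition matches the case."""
    for condition, outcome in decision_table:
        if condition(case):
            return outcome


print(evaluate({"risk_score": 55, "amount": 25000}))  # -> Manual review
```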

Decision Management is the core focus for Visual Rules for Finance going forward. The suite is now focused on building dynamic business applications that drive business decision-making using rules, analytics, processes and user interfaces. Users can take data, apply business rules and analytics to make decisions, and deliver the results to decision-makers in a process or workflow framework with a dynamic user interface, while tracking all the documents and supporting information that can be used to drive new analytics and new rules.

Later this year the company is going to re-brand the product around the new ACTICO brand – the ACTICO Platform will have Modeler, Team Server, Execution Server and Workplace (a new name for the Dynamic Application Framework). The ACTICO platform will support business rule management, analytics, process management and Rapid Application Development.

Besides the re-branding, a number of enhancements are planned:

  • Support will be added for importing PMML Decision Trees and converting them into flow rules. This is a nice feature as it allows the creation of flow rules that execute an analytic decision tree as part of an overall decision (see the sketch after this list). They are working with a couple of analytic tools to make sure this works with several analytic vendors.
  • A new repository is planned to sit between the collaboration/team server and the execution server. This will handle code generation, test/simulation, approval and deployment. Separating this from the collaboration and execution environments will improve approval processes, traceability and testing.
  • The UI modeling and rendering in the Dynamic Application Framework will be formalized with dedicated editors for UI models. The rendering will be replaced with a modern Angular JS and REST environment so it can support mobile, responsive UIs.
  • Monitoring and execution statistics are being extended to include process state, data values and decision outcomes – information that is known to the Dynamic Application Framework wrapped around a rule execution. All this information will be integrated with the current rule execution statistics (which are nicely displayed in the modeler).
  • Finally, there is a plan to do a cloud offering – putting the server products and the workplace (and the products based on this) into the cloud.
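
To make the planned PMML import a little more concrete, here is a hedged sketch of the effect being described – an analytic decision tree (of the kind a PMML tree-model export represents) executed as one step inside a broader, rule-driven decision. The tree, field names and override rule are all hypothetical; this is neither ACTICO’s implementation nor the actual PMML format.

```python
# Hedged sketch only: not ACTICO's import code or the PMML schema.
# It illustrates the idea of an analytic decision tree running as one step
# inside a larger, rule-based decision.

# A simplified decision tree, such as a PMML export might describe:
# each node either carries a score (leaf) or splits on one field.
credit_tree = {
    "field": "income",
    "threshold": 40000,
    "below": {"score": "decline"},
    "above": {
        "field": "debt_ratio",
        "threshold": 0.35,
        "below": {"score": "approve"},
        "above": {"score": "refer"},
    },
}


def run_tree(node, case):
    """Walk the tree until a leaf score is reached."""
    while "score" not in node:
        branch = "below" if case[node["field"]] < node["threshold"] else "above"
        node = node[branch]
    return node["score"]


def overall_decision(case):
    """The tree result is just one input to the wider, rule-based decision."""
    analytic_outcome = run_tree(credit_tree, case)
    if analytic_outcome == "refer" and case.get("existing_customer"):
        return "approve"  # a hypothetical business rule overrides the referral
    return analytic_outcome


print(overall_decision({"income": 55000, "debt_ratio": 0.4, "existing_customer": True}))
# -> approve
```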

Beyond that, there is consideration of more support for simulation and monitoring of deployed services, as well as some support for DMN. In addition, various customers are using advanced analytics in fraud, money laundering and abuse detection systems. These projects are driving additional research into analytics: the team is focused on making sure the data produced by the platform is easy for analytic teams to consume, and on how to integrate analytics into the platform.

You can get more information on ACTICO here and they are one of the vendors in our Decision Management Systems Platform Technologies Report.

[Minor edits for clarity 3/18/16]

The last session of the day is a freeform executive Q&A, so I will just list bullet points as they come up:

  • Open Source R is obviously a hot topic in analytics. SAS’ focus on more open, broadly accessible APIs and its renewed focus on academic partnerships are designed to “leave R in the dust”.
  • The SAS team recognizes a need to get their brand in front of new audiences – small/medium enterprises, developers, etc. – and this is a key marketing challenge this year and one of the reasons for an increasing focus on partners.
  • The move to a more API-centric view is going to create opportunities for new and different pricing models especially with OEMs and other analytic service providers.
  • Open source is something SAS is happy to work with – Hadoop is widely used and integrated, and most of their installations run on Linux.
  • There is clearly an expectation that future sales will be more consumption-based, given the way the platform is evolving at SAS and the growth of cloud.
  • In particular, the evolution of industry clouds, and of industry-specific functionality built on SAS and delivered through those clouds, will be key.
  • SAS clearly sees opportunities for lowering entry barriers, especially price, so that new customers can explore the ROI of capabilities.
  • Competitive pressures have changed in the last few years: SAS increasingly faces very large competitors offering broad analytic portfolios as well as niche startups. SAS is focusing on its core analytic strength and history while recognizing that it must keep changing in response to changing competitors.
  • SAS sees simplicity in analytics, power in visualization and machine learning all as part of how analytics continues to expand in organizations.
  • Unlike many vendors in the analytic and data infrastructure space, SAS overwhelmingly sells to the Line of Business with a business value proposition and does not see this changing. At the same time they need to make sure IT is behind their technology choices and understands the architecture.
  • The expansion to smaller enterprises involves driving their solutions through inside sales and partners – new pricing and positioning, new sales enablement but not really new products. Plus more OEM and Managed Analytic Service Providers.

And that’s a wrap – lots of open and direct responses from the executive team as always.