
I participated in a webinar, Building Outstanding Customer Relationships: Delivering Relevant Next Best Actions for Retail Bank Customers, with Steven Noels of NGDATA.

We discussed next best action marketing, where each customer becomes a “segment of one” versus a “segment of many,” improving marketing action precision and relevancy. Implementing next best action marketing requires the right technology to give you a complete understanding of each and every customer so you can decide on the right actions to take, at the right time, in the right channel. The webinar covered:

  • Key concepts of next best action marketing
  • The importance of understanding your customers in an omni-channel environment
  • How to get your organization aligned around the strategy
  • How to get on the road to success with the right technology in place

You can watch the recording here.

We are kicking off a number of business rules projects this month – some new, some part of existing programs – and we are going to be applying decision modeling in all of them. Why? Because decision modeling with DMN (the Decision Model and Notation standard) really works for business rules projects. When we work with business rules architects to use decision modeling as part of their business rules management system (BRMS) implementations we see three key benefits:

If you want to learn more about the role of Decision Modeling in BRMS implementations, check out our upcoming webinars:

Enova Decisions was launched in January of 2016 as an outgrowth of Enova International’s existing technology and analytics capabilities, which are used to offer online consumer and small business loans through 11 brands in six countries, including NetCredit and Headway Capital. Launched in 2004 as an online lender, Enova does all its own analytics for credit risk, fraud, operations and marketing, and has 1,100 employees and nearly 5M customers around the world. Enova’s core business relies on easy application, rapid online underwriting and multi-channel service. This requires real-time decisions based on analytics around risk, fraud, marketing etc. To deliver this, Enova developed its own platform for deploying analytics and wrapping these analytics with rules for decisioning. The original platform became limiting, so the Colossus platform was developed both to support the internal brands and for sale to clients through the Enova Decisions analytics-as-a-service brand.

The Colossus platform separated out analytics in a service-oriented architecture. Colossus can run a wide variety of algorithms from regression to machine learning. It deploys models built in SAS, R and Python and is integrated with a wide range of third-party data providers. This platform supports the whole Enova business and is the basis for Enova Decisions.

Enova Decisions, then, is a platform for real-time predictive analytics and on-demand decision-making. Enova Decisions supports both a decisioning interface and a reporting interface. The decisioning API allows requests for decisions to be made that are then processed using rules in Enova Decisions and analytics processed using the Colossus analytic platform. Data and distributions about decisions, scores and outcomes are stored and available through a performance dashboard on the reporting side. Thresholds can be set to trigger alerts, model updates and so on.
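This pattern – rules wrapping an analytic score, with every decision logged for the reporting side – can be sketched in a few lines. Everything here (field names, thresholds, the shape of the rules) is a made-up illustration, not Enova's actual API:

```python
# Hypothetical decisioning sketch: rules wrap an analytic score, and each
# decision outcome is recorded for the reporting/alerting side.
def risk_score(applicant):
    # Stand-in for a model scored on the analytic platform.
    return 0.9 if applicant["missed_payments"] > 2 else 0.2

decision_log = []  # feeds the performance dashboard and threshold alerts

def decide(applicant):
    score = risk_score(applicant)
    # Rules turn the raw score into an action.
    action = "decline" if score > 0.7 else "approve"
    decision_log.append({"score": score, "action": action})
    return action

print(decide({"missed_payments": 0}))  # approve
print(decide({"missed_payments": 3}))  # decline
```

The log is what makes the reporting interface possible: distributions of scores and actions can be computed from it, and alerts triggered when those distributions drift past a threshold.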

Enova Decisions is positioned as analytics as a service. Initially Enova Decisions is focused on customer experience across customer acquisition, fraud and alerting and customer operations/growth (retention, debiting and collections). These are the areas where Enova has expertise — Enova has a 50+ person analytics team with experience in these areas — but the platform itself is completely agnostic and can deploy and integrate models developed by customers too. Because the platform was built to support the Enova business, it supports complex decision-making with multiple models, A/B or champion/challenger testing and many rules as a single API call.

The platform runs on AWS and is typically set up and integrated before being billed based on usage. Enova Decisions does an initial set-up engagement around integration, third-party data integration and data analysis, as well as (generally) an initial set of analytic models. Once it is set up, Enova Decisions provides operations support and may develop additional or updated analytics. Enova Decisions tries to keep the up-front integration and analytics development costs low so that customers can focus on the ongoing service cost.

The platform is evolving to support PMML to allow models to be integrated from a wider array of analytic tools, and business user rule management capabilities are under development to allow customers to manage rules directly rather than by working with the Enova Decisions team.

You can get more information on the Enova Decisions website, and they will be included in our Decision Management Systems Platform Technology Report.

The folks over at ZS Associates sponsored a study by the Economist Intelligence Unit on analytics titled “Broken Links: Why analytics investments have yet to pay off”. This report showed the classic challenge of analytics – 70% think analytics is very or extremely important but only 2% say their analytics efforts have a broad, positive impact. In response I recently wrote a series of blog posts – How To Fix The Broken Links In The Analytics Value Chain – over on our company blog. You can find the posts here:

  • How To Fix The Broken Links In The Analytics Value Chain
    The first step is to understand what is broken. The study showed two areas where analytic adoption fails – in problem framing/solution approach and in taking action/managing change. Analytic technology works but the analytic value chain is broken at the start and at the finish.
  • Framing Analytics with Decision Modeling
    Fixing the first broken link means accurately framing your analytic problem. What CRISP-DM calls Business Understanding is critical for analytic success yet most analytic teams jump straight from identifying a metric to building analytic models. Framing the problem in terms of the decision-making that must be improved is critical and decision modeling is the right way to do this.
  • Operationalizing Analytics with Decision Modeling
    Fixing the first link and applying analytics to the right problem is necessary but not sufficient – you still need to actually change organizational behavior and take action. Operationalizing your analytics so that the decision-making you identified is actually changed, doing this fast enough and tracking the effectiveness of this change are all critical. Decision modeling is key here too.

Decision modeling, specifically decision modeling using the Decision Model and Notation (DMN) standard, can fix the broken links in your analytic value chain. To learn more, check out these briefs:

I recently got a chance to catch up with the IBM SPSS team for an update. Analytics, in IBM’s view and mine, are increasingly necessary as digitization increases the scale of business data and digital disruptors increase the difficulty of making good decisions. For those being disrupted, analytics offers a powerful way to fight back. The CEOs who are outperforming in this difficult environment are focusing increasingly on predictive analytics (not just analytics) and streaming/operationalized solutions, not just visualization. In this environment IBM wants to offer a comprehensive platform for analytics with data connectors for all kinds of data, data preparation, analytics at scale, and insight to action with deployment. The full suite includes:

  • IBM SPSS Predictive Analytics (Last review here)
    • Statistics
    • Modeler
    • Analytic Server
  • IBM Prescriptive Analytics
    • CPLEX Optimization Studio
    • Decision Optimization Center
    • Decision Optimization on Cloud (Reviewed here)

Plus there’s pre-configured and configurable content on Customer Analytics, Operational Analytics and Threat/Fraud Analytics. All of these – SPSS Modeler, the decision management capabilities and the optimization engine – are part of IBM SPSS Modeler Gold.

The Predictive Analytics stack is focused on creating value faster by offering a mix of long-standing and new capabilities:

  • Simplified, scalable, code-free deployment
  • Advanced Model Management including Champion/Challenger
  • In-database and In-Hadoop modeling
  • Batch/Real Time/Streaming deployment
  • Analytic Decision Management deployment

One of the key areas of focus is scaling these capabilities on big data systems because customers overwhelmingly intend to deploy to Hadoop, Spark, cloud and streaming environments. Customers really want to move to this environment and this has to be reflected in the way the products work. IBM SPSS has two approaches for this scale:

  • Parallelism with support for Hadoop, Spark and streaming
  • In-database across a wide range of database technologies

Spark is clearly a critical element with IBM making a large commitment to Spark. IBM SPSS allows users to deploy models on Spark clusters for instance. Recently IBM has made more of the algorithms in SPSS massively parallel so that they scale up to support Big Data volumes without the need for Analytic Server. New algorithms have been added in the area of geospatial analytics.

IBM is also focused on involving developers, data scientists and business analysts in the predictive analytic process. This means allowing the Watson Analytics smart data discovery environment to collaborate with those using more advanced predictive analytics tools like SPSS Modeler. Some of the same underlying technology is used in Watson Analytics, albeit with a different UI, but the intent is to allow users of Watson Analytics to access models developed using the more robust workflow management in SPSS Modeler.

Open Source is, of course, a big deal in analytics, so SPSS has been supporting R, Python and Spark. These can be scripted directly, but data scientists can also encapsulate this code behind a simple UI and make it available as a node in SPSS Modeler. An increasing array of these extensions is available in the IBM SPSS Predictive Analytics Gallery. Several of these also use the Watson APIs. Various Python and other extensions can also be loaded into the Modeler environment to make it easier to use a wider range of open source algorithms and scripting approaches in the workflows being managed in SPSS Modeler.

From a deployment perspective, Predictive Analytics on Bluemix allows models to be easily deployed to and then used in the cloud. The developer just needs to have access to the model project and they can create a scoring service in the cloud.

IBM has also recently launched the Data Science Experience leveraging RStudio, Notebooks and more. This is a web-based community environment for programmers and “hackers”, with a strong focus on downloadable examples, tutorials and community. All of the open source tooling can be integrated into the notebook metaphor and apps can be created using Shiny. This is primarily focused on exploratory data science; deployment today means taking the scripts and loading them up into SPSS – the different environments have a shared understanding of open source scripting languages. IBM sees this as complementary to SPSS Modeler and expects more integration and overlap in the future.

You can get details on recent adds to SPSS Modeler here.

We have a set of online training coming up in July:

  • Introduction to Decision Management to introduce the key concepts and terminology of Decision Management and provide a framework for successful business rules and analytic projects
  • Decision Modeling with DMN to learn how to model requirements using the Decision Model and Notation standard, the cornerstone technique for specifying requirements for these powerful technologies
  • Decision Table Modeling with DMN to take your decision modeling to the detail you need for execution with decision table expert Jan Vanthienen.

These online training classes will help you position your projects and programs for ongoing success. Decision Management and decision modeling can help you improve risk management, become more customer-centric, deliver increased business agility, and dramatically improve your business processes. We offer the most complete, standards-based and vendor-neutral Decision Management and Decision Modeling training curriculum. Proven with 800+ students and delivered live online in multiple short sessions, our training is highly rated by students. Plus, the early bird and multiple attendee/multiple class discounts make it really affordable.

Ready to sign up?  Click on the link for more information and registration.
Upcoming Training

Our training is great value and our live online delivery makes it easy to attend without disrupting project schedules. Even so, getting approval to pay for training can be tricky. To help you, we have developed an outline proposal to get support from your organization.
Convince your Boss

I recently participated in an International Institute for Analytics webinar, Prescribing The Right Decision With Prescriptive Analytics (I am a faculty member of IIA). In the webinar we did some surveys that had some interesting results.

First off we asked the audience what they were using analytics for. This was interesting to me as it overlapped with a question I asked as part of the Analytics Capability Landscape research we did last year. I took a subset of the answers and corrected for it being a multi-select answer this time to come up with the graph below that shows the degree to which analytics are focused on:

  • Reporting on data
  • Monitoring business performance
  • Improving decision-making

I ended up with three sets of answers – one from the Analytic Capability Landscape (ACL) survey focused on what folks were doing then, one from the same survey focused on where they expected their focus to be in 12-24 months and the IIA results from this week.
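The correction itself is simple arithmetic: for a multi-select question each option's count is divided by the total number of selections rather than the number of respondents, so the shares are comparable to a single-select survey. A sketch with made-up counts:

```python
# Made-up counts from a multi-select question (each respondent could pick several).
counts = {
    "Reporting on data": 120,
    "Monitoring business performance": 90,
    "Improving decision-making": 150,
}

# Normalize by total selections so the shares sum to ~100%,
# making them comparable with single-select survey results.
total = sum(counts.values())
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(shares)
```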

IIA Why Use Analytics

You can see that when we surveyed before we got a strong focus on analytics being about reporting and monitoring, with a lot less on decision-making. The IIA results, on the other hand, showed a bigger focus on decision-making while the survey that asked what people expected to be their focus in the future showed a clear trend – away from reporting, away from monitoring and increasingly focused on decision-making. This is why we like to say “Begin with the decision in mind” – stay focused on the decision to maximize the value of analytics.

IIA Prescribing Actions

The second survey asked about prescription – to what extent were companies using analytics to prescribe action, not simply to present insight. Here you see that over half the respondents are doing analytics but not driving to prescribed actions while another 40% are only prescribing action sometimes. This is a missed opportunity – using Decision Management to drive actions from predictive analytics is key to getting value from them.

IIA Deployment Results

The final survey asked about deployment – time to deploy analytics and see results. As usual, well under half the respondents said they were able to get their analytics into deployment in weeks or less – most took months or never really managed it. Focusing on how the analytic can drive action is one way to improve this – it focuses deployment efforts – but the other is to ensure that deploying and integrating the analytic is part of the same project as developing the analytic. As one client likes to say, “minimize the white space between analytic success and business success”.

We strongly recommend decision modeling for analytic clients to address all these issues. Using decision modeling:

  • Focuses everyone on the decision-making to be improved
  • Makes sure that the actions that are being guided or prescribed by the analytic are clear
  • Puts the analytic into a deployment and usage context right from the start

If you want to see the webinar, check out the recording here. If you want to learn more about decision modeling, check out this white paper on framing analytic requirements with decision modeling.

We work with a lot of business rules architects and we see that more and more of them are using decision modeling as part of their business rules management system (BRMS) implementations. I recently wrote a series of blog posts – 3 Reasons Rules Architects Are Adopting Decision Modeling – over on our company blog. You can find the posts here:

  • #1 Business Engagement
    Rules architects find that building a decision model (especially one using the Decision Model and Notation (DMN) standard) immediately engages their business partners – business analysts, subject matter experts and business owners – because decision modeling lets everyone see the forest, not just the trees.
  • #2 Expanded Traceability and Impact Analysis
    Decision models link the business context to the business rules, enabling traceability all the way to the knowledge sources that drive rules and impact analysis all the way to the business objectives.
  • #3 Using Agile Not Waterfall to Write the Rules
    Unlike traditional rules-first approaches, decision models lend themselves to iterative development and agile project approaches.

Of course you need a decision modeling tool to make this work, one that is integrated with your BRMS. If you are interested, you can see how we have done this with our DecisionsFirst Modeler – BRMS integrations in action in these demonstrations:

One of my students (from the UCI Extension Predictive Analytics Certificate in which I teach Business Goals for Predictive Analytics) sent me this article on Toyota Financial Services and its use of data science and predictive analytics in collections. It’s a great example of how to use analytics to improve your business outcomes and well worth a read. Three key points leap out at me:

  1. What Toyota Financial Services did is a classic example of what I call Micro Decisions (a phrase Neil Raden and I came up with for Smart (Enough) Systems). Instead of treating everyone the same – using a “broad brush” as the article puts it – analytics are used to drive a decision for each specific customer: what will help keep this customer in their car while lowering overall delinquencies?
  2. Solving this kind of problem – a Decision Management problem – often involves a mix of technologies and you need to be solution-focused, not technology-focused, as a result. As the article says, “the whole is greater than the sum of its parts”.
  3. It’s essential to keep the analytics team focused on the business problem, not just on the data or the analytic itself. The team co-located and kept its eyes on the decision-making they were trying to improve – “This is a team effort, not just the department, and you have many players that all have to cooperate”.

There’s a great quote in the article:

“Analytics is all about making decisions. Focus on what decisions you have to make and what actions you have to take, rather than starting with data or systems. Understand the business process. Involve the statisticians, and fit the analytics to the corporate culture.”

It’s well worth the read and if you like the article, check out this white paper on framing predictive analytics projects – something that will help you do what Toyota Financial Services did.

There is a great article from Bain and Company from 2013 that Elena Makurochkina (@elenamdata) pointed me to today – Infobesity: The enemy of good decisions. This is not only a fabulous phrase – infobesity feels viscerally correct as soon as you see it – but a great article too. Some quotes:

Companies have overindulged in information. Some are finding it more difficult than ever to decide and deliver…
Useful information creates opportunity and makes for better decisions. Infobesity does not.


These are great. More information – infobesity – will not improve decision-making. Simply overloading decision-makers or decision-making systems with ever more data will not get it done. This has been true for a long time – how long have we been talking about “drowning in data”, after all?

Big Data makes the problem even greater, making it ever easier to drop more data on a problem and declare victory. We can mitigate this somewhat by using analytics to summarize and increase the value of our data but we run a real risk even so of overwhelming decision-makers.

And frankly decision-makers themselves are part of the problem. Ask them what data they need to make a decision (or that a system would need to make it) and they will rattle off a long list.

So what can we do about it? Well the folks at Bain suggest four things:

  • Focus clearly on the data you need
  • Standardize data where you can
  • Watch the timing of which data, when
  • Manage the quantity and source of your data, especially Big Data, to make sure it is relevant to decisions

But how to do this in an analytic project? We have found that decision modeling, especially decision modeling with the Decision Model and Notation standard, is a great tool. A decision model identifies the (repeatable) business decision at issue, decomposes it into its component sub-decisions, identifies the data that must be input to each piece of the decision-making and shows where the know-how to make (or automate) the decision can be found. Plus it gathers business, application and organizational context for the decision. Experience with these models at real customers shows just how these models can tackle infobesity:

  • One decision model showed that the data a group of medical experts had requested included lots of data that, while interesting, was not going to actually impact their decision about a patient.
  • The same one showed that all the data in the system could not replace one particular piece of data that had to be gathered “live” by the decision-maker.
  • Another showed that a large amount of claims history data did not need to be shown to claims adjusters if analytics could be used to produce a believable fraud marker for a provider.
  • A third model showed that adding more data, and even analytics, to a decision could not result in business savings because it happened too late in a process and all the costs had already been incurred. A cheaper decision meant an earlier decision, one that would have to be made with less data and less accurate analytics.
  • A model showed the mismatch between how management wanted the decision made – their objectives – and how the staff that made the decision were actually incented to make it.

and much more. In every case the clear focus on decisions delivered by the use of decision modeling cured actual or impending infobesity. For more on how you can model decisions for analytics projects, check out this white paper on framing analytic requirements with decision modeling.
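A decision model exposes this kind of infobesity because every piece of input data must trace to a decision that actually uses it. A minimal sketch of the idea (an illustrative structure, not the DMN interchange format), echoing the medical-experts example above:

```python
# Minimal decision-requirements sketch: each decision lists the
# sub-decisions and input data it requires.
decisions = {
    "Assess patient risk": {"uses": ["lab results", "age"]},
    "Choose treatment": {"uses": ["Assess patient risk", "allergies"]},
}

# Everything the team asked to have collected.
available_data = ["lab results", "age", "allergies", "full visit history"]

# Any available data not required by some decision is infobesity.
used = {d for info in decisions.values() for d in info["uses"]}
unused = [d for d in available_data if d not in used]
print(unused)  # data collected but never feeding a decision
```

Walking the requirements graph this way is exactly what the models in the examples above made visible: interesting-but-irrelevant data falls out as "unused".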

I’ll leave the last word to the folks at Bain:

At root, a company’s performance is simply the sum of the decisions it makes and the actions it takes every day. The better its decisions and its execution, the better its results.


One of the most persistent problems with decision modeling in my experience is the tendency of people to think of decision modeling as a one-time requirements effort. Many teams are convinced that building a decision model using the Decision Model and Notation (DMN) standard (white paper here) is going to help them with their business rules or analytic projects. Some of these teams, however, think that decision modeling is only valuable at the beginning – as a way to initially frame and structure their requirements. Our experience, on dozens of projects around the world, is that this significantly limits the value that projects, and especially organizations, could get from decision modeling.

Decision modeling has a best practice life cycle:

  • Build the initial model to drive requirements, structuring and framing business rules and analytic efforts
  • Use this model to decide on the automation boundary – what gets automated, what gets left to people – recognizing that decision modeling is a great way to specify requirements for automation AND to specify how people should make a decision.
  • Use the model to understand which parts of the decision might be best automated with business rules, which will benefit from analytics, even where optimization might be useful.
  • Keep the model alive to ensure traceability from the original business-centric requirements to the detailed technical implementation
  • Update the model as business needs change to support ongoing orchestration of your decisioning technology deployments.

To make this work you need to ensure that the decision models you build can be integrated with each other into a shared repository and that this is a living repository that everyone can access. DecisionsFirst Modeler, the tool we have developed based on our experience in decision modeling, does this. Not only is everything stored in a shared repository, everything in that repository is available through an API. We have used that API to develop a read-only viewer of the repository, displaying all the content in a mobile-friendly HTML5 app, so that models are not limited to those who are building them but can be widely shared across the organization. This makes it easier for folks to share the models they develop in DecisionsFirst Modeler. Everyone in the organization can see the models, and links to specific objects or diagrams can be emailed around to engage reviewers, show the impact of changes to business rules, assess the impact of decisions on the business as a whole, share knowledge and decision-making best practices and much more.

To date this has been a very popular feature of our Enterprise Edition. Today we announced that it will now be available for users of the free basic edition also. You can read the announcement here and the button below will take you to a recorded demo.
DecisionsFirst Modeler Reader Demonstration

The founders of Avola began working together in 2012 as an incubator called Bizzotope, intended as a venue for starting a number of innovative ventures. The first company was Bizzomate – a business and IT consulting firm focused on low-code platforms like Mendix and agile/Scrum approaches. Platforms like these generally lack process and decisioning capabilities, so the company adopted The Decision Model and ultimately decided to build a decision modeling/execution platform to support it. Avola Decision is the result and was released to the market in April 2014. Avola Decision has customers across various industries, with a strong focus on banking, insurance and professional/legal services. Today these are concentrated in the Netherlands, Belgium and the UK, where Avola has offices.

The basic premise of Avola Decision is to develop decision-aware applications so that they can support decision-aware processes. Today the platform remains focused on TDM but later this year it will support the Decision Model and Notation (DMN) standard. Avola Decision consists of a SaaS platform for building decision models and an execution engine that can execute these decision models in a public or private cloud, or on premise.

The modeler itself is web-based, allowing easy access to decisions, rule families and the supporting glossary/fact models. Each decision can be expressed as a decision table or rule family supported by a set of facts and a glossary. Decisions and rule families are linked through the shared fact model.

Rule families are shown and edited using the classic tabular rules-as-rows decision table format and a TDM decision model diagram is inferred from the underlying rule families or decision tables. Each of these can be opened and edited while the model for that rule family – the sub-decisions – can be viewed and used to navigate to underlying sub-decisions. Users can also navigate to the associated fact model as needed. The rule editor is data type aware so that conditions match the data and calculated data can be specified for use in the decision. As users edit the rule family by adding conditions they can specify that a new or existing fact type is required and can identify that this is being derived by a new rule family, adding that to the hierarchy.
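The rules-as-rows format can be sketched as a list of rows, each pairing condition tests with a conclusion. This is a generic illustration of the format, not Avola's actual engine, and the first-match hit policy used here is just one common choice:

```python
# Rules-as-rows decision table: each row maps condition tests (one per fact)
# to a conclusion. First matching row wins; a row with no conditions is a
# catch-all default.
rules = [
    ({"age": lambda v: v < 18}, "decline"),
    ({"age": lambda v: v >= 18, "income": lambda v: v >= 30000}, "approve"),
    ({}, "refer"),  # default row: no conditions, always matches
]

def evaluate(facts):
    for conditions, conclusion in rules:
        # A row matches when every one of its condition tests passes.
        if all(test(facts[fact]) for fact, test in conditions.items()):
            return conclusion

print(evaluate({"age": 17, "income": 0}))      # decline
print(evaluate({"age": 40, "income": 50000}))  # approve
print(evaluate({"age": 40, "income": 10000}))  # refer
```

The data-type awareness described above corresponds to the condition tests being typed against the fact model, so a rule editor can offer only valid comparisons for each column.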

Fact Types are managed grouped in business concepts and value lists can be defined and used for multiple facts through an underlying glossary. Business concepts are associated with business domains and these domains are the core management artifact with roles/security defined at this level. Users can navigate from the fact model to the rule families and decision models that use or derive them and vice versa.

Testing and validation are supported in the modeling tool and linked to the decision models. Each revision of the model is stored and tests can be run against any revision. Users can do a single test using a form generated by the system from the inputs required by that decision, or can download an Excel template to create a batch of tests. The template has value lists and so on built in and, once filled in, can be uploaded to create a set of tests. Results of tests can be reviewed online or downloaded in a result spreadsheet. When reviewing online, users can drill down into the intermediate generated values and see how the rules fired at each level.
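The batch-test workflow – a template of inputs and expected results run against a decision – can be sketched with the standard library. The decision logic and column names here are assumptions for illustration, not Avola's template format:

```python
import csv
import io

# Stand-in decision; in a tool like this it would be the decision model itself.
def decide(facts):
    return "approve" if int(facts["income"]) >= 30000 else "decline"

# A batch of test cases, as might be uploaded from a filled-in template.
batch = io.StringIO(
    "income,expected\n"
    "50000,approve\n"
    "10000,decline\n"
    "40000,decline\n"
)

# Run every case and compare actual against expected outcomes.
results = []
for row in csv.DictReader(batch):
    actual = decide(row)
    results.append({"expected": row["expected"], "actual": actual,
                    "pass": actual == row["expected"]})

failures = [r for r in results if not r["pass"]]
print(f"{len(results) - len(failures)}/{len(results)} passed")
```

The third case fails on purpose: reviewing such mismatches (and drilling into intermediate values) is exactly the review step the paragraph above describes.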

Versioning is also supported along with an approval cycle. This approval cycle manages the departments who own different concepts to ensure that all the right people are involved approving models that potentially cut-across business concepts. Separate approval users can be defined in addition to those who can actually edit and manage the models. Reports can also be published for decision models as documentation.

Once the model is complete it can be deployed to public or private cloud (Azure) or to an on-premise Windows server. The deployed service is accessible as a Rest API and is executed in memory when deployed. Decisions are logged in terms of the version of data provided, decision-making version used, interim conclusions drawn etc.

Pricing is annual with a low price for modelers and approvers and a separate execution pricing based on the number of decisions being made with extra fees for private cloud or on-premise deployment.

More information on Avola Decision is available here, and Avola is a vendor in our Decision Management Systems Platform Technology Report.

Jan Purchase and I are working away on our book, Real-World Decision Modeling with DMN, and ahead of publication we have made a series of short videos about decisions, decision modeling and DMN. These have been posted over on our company blog but here they are for your reference. Check them out.

  1. What is Decision Modeling – a video
  2. What’s the Motivation for Decision Modeling? – a video
  3. Why Model Decisions with DMN – a video
  4. Why Write a Book When There’s a Standard? – a video
  5. The differences between decisions and business rules – a video
  6. Real-world examples of how modeling decisions with DMN has helped – a video

If you are interested in the book, you can sign up here for updates and release date information.

Sparkling Logic is focused on enabling business and data analysts to manage and automate decisions better and faster – what they call Analytics driven Decision Management. Sparkling Logic was founded in 2010 and I have blogged a few times about their decisioning platform (most recently here). Customers include Equifax, Paypal, FirstRate, Accela, Northrop Grumman and others across a wide range of solution areas with a strong focus on enterprise customers. These enterprise customers are very focused on multiple projects, enterprise integration and supporting both on-premise and cloud deployments.

The product portfolio now includes Pencil, a decision modeling and requirements tool, and SMARTS, their full-lifecycle decision management platform supporting predictive models, data analysis, and expertise/business rules. SMARTS is available on-premise, in the cloud or embedded in another product, and runs across Java and .NET deployments. SMARTS includes form-centric rule authoring, rule induction from data, strong navigation between the various components and integrated collaboration.

Pencil was added to the product portfolio recently to support decision modeling with the Decision Model and Notation (DMN) standard, along with a glossary. Pencil provides graphical and Excel-based interfaces that share the same repository as SMARTS, and it can be used stand-alone or to generate artifacts from the decision model for use in SMARTS. This shared infrastructure means a decision model can be included in a SMARTS project. A decision model consists of DMN diagrams and a glossary, which can be shared across projects. As the decision model is built, the glossary is either referenced or constructed automatically to support the model. Elements in the glossary can be categorized for management, and the use of data in processing is highlighted. Computed elements can be given no logic (for casual models) or defined in SparkL (a subset of the language used in SMARTS, allowing execution). Pencil guides business analysts in decomposing their decision according to the DMN standard, specifying inputs and outputs as they go. Each node in the diagram can be specified using a tabular layout, text or a list of rules. SparkL entered here is verified and has type-ahead access to the glossary. Models can be versioned and compared with a nice graphical comparison.
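To make the decomposition concrete, here is a toy Python sketch of a decision requirements model broken down per DMN, with a glossary of input data derived from it. The decision names and the data structure are illustrative, not Pencil's actual representation.

```python
# A toy decision requirements model: each decision lists the sub-decisions
# and input data it requires, per the DMN standard. All names are
# hypothetical examples.
DRD = {
    "Determine Eligibility": {
        "requires_decisions": ["Assess Affordability", "Assess Risk"],
        "requires_inputs": ["Application"],
    },
    "Assess Affordability": {
        "requires_decisions": [],
        "requires_inputs": ["Income", "Expenses"],
    },
    "Assess Risk": {
        "requires_decisions": [],
        "requires_inputs": ["Credit History"],
    },
}

def glossary(drd):
    """Collect every input datum referenced anywhere in the model --
    analogous to the glossary Pencil constructs automatically as the
    model is built."""
    terms = set()
    for node in drd.values():
        terms.update(node["requires_inputs"])
    return sorted(terms)
```

Building the glossary from the model (rather than separately) is what keeps the two consistent as the decomposition evolves.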

SMARTS, like Pencil, is browser-based. It continues to support a collaborative environment with fluid rule authoring that allows rules to be changed between decision tables, decision trees, decision graphs and text as necessary, combining the right mix of metaphors for each project. The ability to see how the current rules affect a set of data (in real time as the rules are edited) is still central to the product experience and data, analytics and rules are integrated with each other throughout. New features since 2013 include:

  • Cascading or inherited decisions allow a decision to be extended or overridden as necessary to deliver a specific version. For instance, a decision on underwriting might be defined and then specialized for California. Rules can be added or removed and values changed. SMARTS remembers the original, allowing for detailed comparison. Multiple levels can be defined, and SMARTS ensures that elements that have not been overridden can still be changed, with those changes inherited.
  • Native PMML support has been added and 10 PMML 4.2 model types can be executed with no reprogramming – just bind the interface into SMARTS. This is combined with the “Blue Pen” capability to use analytic techniques like rule induction to find rules in data. Many analytic algorithms can be applied to data inside SMARTS. PMML models can be dragged and dropped into the project either as black box models (just binding the data for the model to the project) or exploded into a set of SMARTS artifacts.
  • Lookup models are likewise treated as black box models that are bound to a task in a decision flow. These tables can be defined in the tool, imported from spreadsheets, or managed externally, and the query is defined using the SparkL language. They can return one or many answers that are simple or complex data objects. This can all be managed in releases, captured in the tracing etc. This allows potentially very large lookup tables to be managed and versioned without having to convert them to business rules. The fully indexed table model engine guarantees fast execution.
  • Built-in support for champion/challenger or A/B testing means that experiments can be defined for any step in the decision. Any number of alternatives can be defined for a given step. A set of experiments can be defined for the overall decision, specifying which of these alternatives should be used for each task that has alternatives. Random or non-random selection criteria can be defined for allocation. Experiments can be set up to use all, none or some of these strategies in a particular simulation run or deployment. Alternatives can be defined explicitly, or the Blue Pen feature can use machine learning to find alternative rules in a training set. As these rules are identified they are injected into the alternative decision task so that they can be used in the experiment. Experiments can be run as simulations with test data or in production.
  • Starting with the Quebec version, a new graphic investigation feature provides a graph of the rules fired for a specific transaction that led to a specific conclusion. This includes the data and can be used to understand the execution of a specific transaction or a group of transactions.
  • SMARTS supports multi-project enterprises with extensive lifecycle management and task automation. Users can define a flow with tasks for everything from machine learning and simulation to release management and testing. New versions of rules, models or data can be picked up and included automatically, allowing, for instance, a modeling team to provide new versions of models without requiring manual configuration. Projects and libraries can be imported and synchronized, deployments managed etc.
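The champion/challenger capability described above boils down to allocating each transaction to one of several decision alternatives and tracking which strategy decided it. A minimal sketch of weighted random allocation, assuming nothing about the SMARTS API:

```python
import random

def run_experiment(transactions, alternatives, weights, seed=42):
    """Assign each transaction to one alternative according to the
    experiment's allocation weights, so that outcomes can later be
    compared across the champion and its challengers. Names and the
    seeded-allocation approach are illustrative assumptions."""
    rng = random.Random(seed)
    return {tx: rng.choices(alternatives, weights=weights, k=1)[0]
            for tx in transactions}

# A 90/10 split between the current champion strategy and one challenger
assignments = run_experiment(range(1000), ["champion", "challenger"], [90, 10])
```

The recorded assignment per transaction is what lets the results of a simulation or production run be broken out by strategy afterwards.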

In addition, the product is localized into English, Japanese, Chinese and Spanish. For OEM customers the user interface and documentation can be branded to allow the product to be deeply integrated into a commercial offering.

More information on Sparkling Logic SMARTS can be found here and Sparkling Logic is one of the vendors in our Decision Management Systems Platform Technology Report.

This was my first briefing on Statistica since I reviewed v11 in 2012. StatSoft was established in 1984 and acquired by Dell in March 2014. The product has been continuously developed to the point where it has over 16,000 integrated functions. Since the acquisition, Dell has been focused on integrating it into Dell systems and processes and accelerating its product development and release cadence. Dell Statistica 13.1 was recently announced; Version 13 (released September 2015) was the first Dell version:

  • The user interface got a major upgrade with a more modern look and feel – still a Windows application, but significantly upgraded.
  • Data discovery and visualization was a big focus in V13, with interactive charts, maps, dashboarding and an entirely new visualization engine.
  • New connectors support Hive, SQL etc.
  • New data blending and improved real-time scoring.

The overall environment supports:

  • Text analytics along with other analytic techniques and embedded business rules in the core environment
  • Enterprise Server with embedded business rules, validated data and metadata management.
  • Automated model monitoring, collaboration & process control
  • Real time model scoring
  • Regulatory compliance & change management
  • NLP, Entity Extraction, & in-Hadoop model deployment
  • Single or multi-site deployment: license and usage management

Version 13.1 added some specific new capabilities:

  • Making access easier is a key theme, with data preparation a particular focus: there are 80 pre-built functions, including a node for data health checks (sparse data, missing data, outliers etc.). Semi-structured data from documents or XML files is supported, along with redaction of confidential information.
  • Data can be exported directly to Tableau and Qlik and new visualization capabilities were added.
  • Reuse of data prep and analytic workflows built by an expert is also supported, allowing an expert to create a complex flow that can be templatized and passed to someone with less expertise so they can embed it in their own work.
  • Dell also added a Native Distributed Analytics Architecture. Dell Statistica runs on premise and any analytic package built in it can be exported as Java, PMML, SQL etc. This can be pushed into various deployment environments like SQL Server, Hadoop, Dell Boomi, clouds (private and public) or embedded in edge systems like devices.
  • R and Python are supported – R packages can be brought in as nodes and scripts can be written in Python within the environment.
  • In-database analytics are supported through SQL and Spark push back. Correlation, sorts, frequency, classification, regression and other elements are supported with more to come.
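As a hedged illustration of the in-database deployment idea above, the sketch below renders a simple linear scoring formula as SQL so the model can execute where the data lives. The generator, coefficients and column names are hypothetical, not Statistica's actual export format.

```python
def score_to_sql(intercept, coefficients, table):
    """Render score = intercept + sum(coef * column) as a SQL query,
    pushing model execution into the database. Purely illustrative."""
    terms = [f"{coef} * {col}" for col, coef in coefficients.items()]
    expr = " + ".join([str(intercept)] + terms)
    return f"SELECT *, {expr} AS score FROM {table}"

# Hypothetical model: score = 0.5 + 0.02*age + 0.0001*balance
sql = score_to_sql(0.5, {"age": 0.02, "balance": 0.0001}, "customers")
```

Generating SQL (or Java, or PMML) from one authoritative model definition is what keeps batch and real-time scoring consistent across deployment targets.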

A new web-based reporting UI has been added, and V14 is expected to be browser-based and more cloud-centric.

More information on Dell Statistica is available here and Dell is a vendor in our Decision Management Systems Platform Technology Report.

Modelshop was founded by Tom Tobin, who has spent his career building analytic applications in credit origination, portfolio optimization, risk management and fraud detection for major financial institutions. Tom’s vision for Modelshop is to make the technologies that enable these types of solutions available to all organizations, so they can make their analytics actionable. Modelshop got a round of seed funding about a year ago and has been building its team and product since. Now in beta with the platform, Modelshop has a couple of lending customers focused on disruptive lending solutions as well as some others focused on portfolio management. The vision for Modelshop is to be a platform that allows companies to quickly create analytic applications that can automate sophisticated data-driven decisions in real time to support mission-critical operations, and to optimize these applications (and the decisions they make) over time – all running in public or private clouds.

The Modelshop platform allows you to model decisions, simulate these for expected outcomes, deploy them as real-time analytic applications with APIs, and predict future behavior and performance to optimize results over time. The platform is underlain by a computational engine that makes sophisticated real-time logic approachable and transparent. The decision models themselves are built by business analysts, with data scientists focused on optimizing and improving decisioning over time. Deployment capabilities allow for integration with online applications, loan origination systems (LOS) etc.

A typical Modelshop application is built up as follows:

  • A Modelshop model starts with a set of user-defined business objects. These have a set of properties and can be populated from connectors (to databases, APIs or XML/JSON). Objects can be related to each other simply by linking fields, and additional calculated properties can be defined.
  • This model and all the associated metadata can be viewed and edited in the tool. All of this is defined in the business language – Groovy – and an interactive editor allows things to be manipulated using it, with auto-complete etc.
  • Calculations are strongly associated with these objects – they are defined as object properties. Calculations can share functions to allow similar calculations on several objects and inheritance is supported to allow these to be specialized, for instance to allow different classes of products to have different approaches to a calculation.
  • All the objects and their relationships can be explored in a model graph visual editor. Additional links can be defined diagrammatically. Any specific instance of an object can be drilled into, the links for that instance navigated and its calculations viewed in context.
  • Rule bases can be associated with an object and fire as soon as they need to – any data change causes rule firings. Rules use the same syntax as the calculation engine and build on the calculations defined for objects.
  • R integration allows R to be executed in situ – data scientists can pick the objects to be integrated and an automatic data frame is created that can be processed using R. Model training, testing and scoring is handled in-situ by the engine. Scores are presented as object properties.
  • The Modelshop calculation engine ensures that all values (calculations, cubes, rule results and model scores) are updated continuously using a proprietary dependency technology. Any change to data propagates events and immediately updates end results. For example, changing an address to a new zip code could trigger a cost of living model to increase a debt assumption that ultimately causes a credit decline decision. The update could be delivered back to a Modelshop user or via an API to an external application or website, all in real-time.
  • Finally, Modelshop provides for multiple dashboard views with charts, data lists, document widgets and real-time cubes showing overall business performance. Like the rest of the environment this is HTML5, interactive and updated in real-time.
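The continuous-update behavior described in the list above can be sketched as a toy dependency graph: when an input changes, every calculation that depends on it (directly or transitively) is recomputed at once. This is purely illustrative; Modelshop's actual dependency technology is proprietary, and the lending example is hypothetical.

```python
class ReactiveModel:
    """A toy dependency-driven calculation engine: setting an input
    immediately recomputes every calculation that depends on it."""

    def __init__(self):
        self.values = {}
        self.calcs = {}       # calc name -> (function, dependency names)
        self.dependents = {}  # value name -> calc names that use it

    def define(self, name, deps, fn):
        """Register a calculation and index its dependencies."""
        self.calcs[name] = (fn, deps)
        for d in deps:
            self.dependents.setdefault(d, []).append(name)

    def set_input(self, name, value):
        """Change an input and propagate the change to dependents."""
        self.values[name] = value
        for calc in self.dependents.get(name, []):
            self._recompute(calc)

    def _recompute(self, name):
        fn, deps = self.calcs[name]
        if not all(d in self.values for d in deps):
            return  # wait until all inputs are available
        self.values[name] = fn(*(self.values[d] for d in deps))
        for calc in self.dependents.get(name, []):
            self._recompute(calc)  # cascade to downstream calculations

m = ReactiveModel()
m.define("debt_ratio", ["debt", "income"], lambda d, i: d / i)
m.define("decision", ["debt_ratio"],
         lambda r: "decline" if r > 0.4 else "approve")
m.set_input("income", 50000)
m.set_input("debt", 30000)  # debt_ratio and decision update automatically
```

Lowering the debt with a further `m.set_input("debt", 15000)` would flip the decision without any explicit recalculation step, which is the essence of the behavior described above.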

Modelshop is designed to integrate both a portfolio and real-time perspective while continuously updating all the calculations and providing a single environment for all the logic required to manage a complete analytic application.

More information on Modelshop is available here and anyone interested should contact them for access as they are moving into a public release. Modelshop is one of the vendors in our Decision Management Systems Platform Technology Report.

As Jan and I work on our book, Real-World Decision Modeling with DMN, we have been discussing some of the common misconceptions about decision modeling that we’ve encountered among adopters. This will be one of the chapters in the book, in which we analyze misguided applications of decision modeling and their consequences, but I thought I would take a minute to discuss some of them here. Decision modeling is a powerful, potentially transformative approach for organizations that approach it correctly. These misconceptions can really get in your way:

Misconception: Decision modeling only works for business rules
This is one of the most common and dangerous misconceptions: that the only reason to build a decision model is to manage decision logic expressed as business rules and, often, specifically the business rules automated in a Business Rules Management System. While decision models are very good at structuring and managing business rules, a singular focus on this use case means that organizations miss the benefit of using decision models for other purposes, for example to:

  • Coordinate structure and improve the rigor of large decisions
  • Train personnel in the execution of manual decisions
  • Frame analytic project requirements
  • Discover gaps in data availability
  • Reveal inconsistencies and conflicts of responsibility in organization support for making pivotal decisions
  • Promote consistency in decision making across multiple business channels
  • Clarify the automation boundary of decision making

Misconception: Decision models are only complete if you can generate code from them
Decision models are complete when they are fit for purpose. This might be when you can generate code but might also be when you can fully describe a decision for training purposes, build an effective analytic, define decision making responsibilities across an organization, plan the incremental advancement of an automation boundary or structure the logic you are managing in a BRMS. The purpose for which you are using a decision model will dictate the extent to which its decision logic needs to be complete.

Misconception: Decision logic is what matters in DMN

A focus on execution can sometimes lead to the misconception that only the decision logic layer really matters in DMN. After all, the majority of the DMN specification is devoted to documenting this. The value of decision requirements modeling is downplayed, other than perhaps as a thinking tool. This misunderstanding can lead to the bad practice of adorning a process model with DMN decision tables to describe each piece of logic in the process, skipping decision requirements modeling altogether.

This is a dangerous misconception, as it misses the value gained by thinking about the decision as a whole – its structure and dependencies. It also embeds historical, inflexible and unnecessary sequences in the decision model, while mixing elements of process and decision making that are better kept separate. Perhaps most importantly, it focuses on the trees at the expense of the forest. We discuss this flawed approach to decision modeling in more detail in our book.

Misconception: Decision models are handed off to developers as a technical asset

Decision models are neither an implementation-centric IT asset nor a way to eliminate IT by allowing business analysts to build and maintain executable models in isolation. They are, in fact, a vehicle for effective business/IT collaboration—a jointly owned, evolving consensus of what decisions are to be made, how and why. Decision making expertise is a corporate asset that should not be buried in IT systems; it requires explicit capture, innovation and maintenance. Handing decision models off to developers for a ‘one-shot’ translation into code or trying to pretend that a sufficiently detailed, executable decision model can mean you don’t need IT are both equally dangerous and impractical propositions. Decision models are a precise vehicle for representing the requirements, structure and mechanism of decision making which can be maintained and innovated by business analysts, directly inform systems implementation and demonstrate compliance to a third party.

Misconception: Capture business rules first, decisions will emerge from this

In addition, people sometimes think that business rules should be captured first, independently of a decision model. They shouldn’t be, as this is analogous to building a new house by starting with the brick walls and doors. Part of the value of a decision model is that it’s a foundation: it structures, supports and scopes the capture and analysis of business rules. Trying to list all the rules without a decision model is just going to waste time and energy.

Decision modeling is a powerful concept—don’t be misled by these common misconceptions.

To learn more about the book and to sign up to be notified when it is published, click here.

I got a briefing from a new predictive analytics vendor, DMWay. DMWay was founded in 2014, based on significant research over the previous decades. DMWay had a first VC round in 2015, has 20+ employees and dozens of customers. DMWay is focused on democratizing data science and automating machine learning so that those with business domain expertise can leverage data science without also having to be hard-core data scientists. Plus, the pricing is designed to be very aggressive, to remove that barrier as well.

As is well documented, predictive analytics in many organizations is expensive, time-consuming and not scalable. The need for strong technical skills means that those involved often lack the business domain expertise to understand the data.

DMWay has a very simple UI, designed to be very accessible to non-technical folks. It lays out a simple process – define the input data, document the metadata, then model.

  • The input step points at some existing flat-file data, and the tool automatically holds out a validation set etc.
  • The structure of this data is then analyzed to build metadata that the business user can extend or correct. The tool expects flattened, but not analyzed or enriched, data.
  • The final step is to pick a modeling approach (from a set which, they claim, have undergone a rigorous selection process and been proven to create highly stable and accurate models) and run the process.
  • Going from data input to initiating a run takes less than a minute.

The tool analyzes the data to find the right approach – binning, categorical variable identification, missing value handling etc. The engine identifies suppressing variables to eliminate correlation problems and overfitting, uses the validation data and so on – all automatically. The models created by DMWay are open for inspection by the user. The model is deployed either using their own scoring engine with a REST API or by generating R, Java or SQL code for either batch or real-time scoring.
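As an illustration of the kind of automated data health check described here (missing values, outliers), below is a minimal Python sketch using a robust median-based outlier rule. The heuristics are assumptions for illustration, not DMWay's actual algorithms.

```python
import statistics

def health_check(column):
    """Report the missing-value rate for one numeric column and flag
    outliers using a median/MAD rule, which stays robust because the
    outliers themselves barely move the median."""
    present = sorted(v for v in column if v is not None)
    missing_rate = (len(column) - len(present)) / len(column)
    med = statistics.median(present)
    mad = statistics.median(abs(v - med) for v in present)  # median absolute deviation
    outliers = [v for v in present if abs(v - med) > 5 * mad]
    return {"missing_rate": missing_rate, "outliers": outliers}

# A toy column with two missing values and one obvious outlier
report = health_check([12, 15, None, 14, 13, 500, 16, 15, None, 14])
```

A mean/standard-deviation rule would miss the 500 here, because the outlier inflates the standard deviation; robust statistics are one standard way automated tools sidestep that.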

Models can be grouped into projects for management, and a simple interface provides basic information about the last run and last modification.

DMWay runs on-premise on desktops or servers, as does the scoring engine.

You can get more information on DMWay here. DMWay is a vendor in our Decision Management Systems Platform Technology Report.

The final set of announcements at FICO World relates to cybersecurity, or cyber threat analytics. Obviously cyber attacks are increasing and driving a lot of fraud and abuse – phishing, malware, personal record theft etc. The big challenge in all this is the timespan: 60% of data theft happens within minutes or hours of a breach, but breach detection typically takes weeks or months, with nearly half taking months to find – and many times the breach is found by someone else. Many problems slip past current detection mechanisms because they change rapidly. The alerts are also too numerous and undifferentiated.

FICO has been investing in taking its experience in fraud detection and applying the analytics and expertise to cyber threats. Many of its approaches, refined for credit card fraud detection, work for general cyber security too: profiling, behavioral archetypes, behavior sorting to prioritize risks and reduce false positives, and self-calibrating, learning models – all combined with a consortium model to share data.

This work has been combined with iboss cybersecurity, a cloud-based platform for mobile and on-premise end point security. FICO’s profiling and analytics are linked to the iboss cloud solution to drive better detection and blocking as well as improved investigation.

Continuing with product announcements at FICO World, next up are the new products for fraud prevention and compliance. Financial crime is on the minds of consumers right now, with two-thirds worrying about having their credit card data compromised or having their accounts used fraudulently. FICO has launched a new FICO Consumer Fraud Control product. This allows specific card management scenarios, like controlling how much a card can be used and on what, and making sure it is not used inappropriately. The new product has features like:

  • Two-factor or two-element controls for specific transaction types
  • Support for recurring transactions
  • Temporary suspension of controls to allow a transaction
  • Integration so issuers can use the input of a consumer in their fraud detection

All of this is running on mobile devices, increasingly the preferred banking environment for consumers. This focus on mobile devices also means that understanding these devices can help in an overall fraud solution. FICO has launched new Mobile Device Security Analytics that combine contextual and device data as well as streaming data to identify where devices are and what they are doing. This behavioral profile can be fed into fraud detection for cards linked to those devices. It can also be used to improve the profiling and grouping already being used to detect fraud – Falcon Fraud Manager does a lot of work to identify what kinds of transactions might be normal for someone like you and this mobile data can become part of this analysis.

Other products include:

  • A new Customer Communication Services designer makes it easy to build customer engagement applications quickly, with automatic validation, text-to-voice and other elements.
  • The Identity Resolution Engine is designed to integrate disparate and incomplete data, organize it into people, places and things, and use these to resolve identities – supporting visualization of relationships in fraud-ring investigations, driving graph analytics to automate social network analysis and creating a single view of the customer.
  • FICO recently acquired Tonbeller for Know Your Customer, Anti-Money Laundering and Case Management. These solutions are appealing to FICO because they scale across organizations and throughout the lifecycle while also supporting non-banking customers in insurance or for general corporate compliance and business partner management. Plans for Tonbeller involve best-practice sharing, using pooled data to prioritize investigations and seeing where behavioral profiles differ from the KYC data.

All this is part of a move toward Enterprise Fraud Management with a central fraud hub that detects and prevents fraud across multiple channels and provides fraud-aware communication and integration for other solutions.

One final fraud and abuse product, FICO Falcon Assurance Navigator, is a new product designed to detect and manage fraud in the procure-to-pay process. Driven by new federal regulations, this product leverages FICO’s fraud technology and expertise from Stanford to track 100% of transactions and prioritize those that need to be investigated, applying analytics and rules based on different compliance regimes (federal grants vs. other university funds, for instance). It can check POs in advance as well as invoices later, check time and expenses, and integrate with procurement.