
Continuing with product announcements at FICO World, next up are the new products for fraud prevention and compliance. Financial crime is on the minds of consumers right now, with two-thirds worrying about having their credit card data compromised or their accounts used fraudulently. FICO has launched a new FICO Consumer Fraud Control product. This lets consumers manage specific card scenarios, such as controlling how much a card can be used and on what, and making sure it is not used inappropriately. The new product has features like:

  • 2-factor or 2-element controls for specific transaction types
  • Support for recurring transactions
  • Temporary suspension of controls to allow a transaction
  • Integration so issuers can use consumer input in their fraud detection

All of this is running on mobile devices, increasingly the preferred banking environment for consumers. This focus on mobile devices also means that understanding these devices can help in an overall fraud solution. FICO has launched new Mobile Device Security Analytics that combine contextual and device data as well as streaming data to identify where devices are and what they are doing. This behavioral profile can be fed into fraud detection for cards linked to those devices. It can also be used to improve the profiling and grouping already being used to detect fraud – Falcon Fraud Manager does a lot of work to identify what kinds of transactions might be normal for someone like you and this mobile data can become part of this analysis.

Other products include:

  • A new Customer Communication Services designer makes it easy to build customer engagement applications quickly, with automatic validation, text-to-voice and other elements.
  • The Identity Resolution Engine is designed to integrate disparate and incomplete data and organize it into people, places and things. It uses these to resolve identities, supporting visualization of relationships for fraud-ring investigations, driving graph analytics to automate social network analysis, and creating a single view of the customer.
  • FICO recently acquired Tonbeller for Know Your Customer, Anti-Money Laundering and Case Management. These solutions are appealing to FICO because they scale across organizations and throughout the lifecycle while also supporting non-banking customers in insurance or for general corporate compliance and business partner management. Plans for Tonbeller involve best practice sharing, using pooled data to prioritize investigations and seeing where behavioral profiles differ from the KYC data.

All this is part of a move toward Enterprise Fraud Management with a central fraud hub that detects and prevents fraud across multiple channels and provides fraud-aware communication and integration for other solutions.

One final fraud and abuse product is FICO Falcon Assurance Navigator, a new product designed to detect and manage fraud in the procure-to-pay process. Driven by new federal regulations, this product leverages FICO’s fraud technology and expertise from Stanford to track 100% of transactions and prioritize those that need to be investigated, applying analytics and rules based on different compliance regimes (federal grants v other university funds for instance). It can check POs in advance as well as invoices later, check time and expenses, and integrate with procurement.

FICO made a series of announcements today at FICO World 2016. The event kicked off with a fun retrospective of the 60-year history of FICO. Bill Fair and Earl Isaac founded the company in 1956 to use data and analytics to improve decision-making. This focus has not really changed in all the years since – FICO is still focused on analytical decision-making.

The products being launched fall into three categories – FICO Decision Management Suite 2.0, fraud detection and prevention, and cybersecurity. All these products, of course, are focused on high speed, high volume decision-making and on using analytics to improve decisions.

The capabilities being developed are designed to solve three classes of business problem

  • Deal with complexity
  • Develop a sustainable competitive advantage
  • Defend against criminals

FICO’s vision for its Decision Management Suite is designed to support the whole decision-making sequence – gather and manage data, analyze it, make decisions based on this analysis and take appropriate actions. Good organizations do this in a thoughtful, coherent way and the suite is designed to support a lifecycle for this (authoring, managing, governing, executing and improving) and make this all accessible. The suite is designed to be both a general purpose platform for customers to use and a basis for the product solutions FICO develops itself.

Lessons from the 1.0 Decision Management Platform led to 5 business drivers for the new release:

  1. Capturing subject matter expertise
    Most organizations don’t capture business expertise well or systematically and they need to do so to prioritize decision and management improvement efforts
  2. Intelligent solution creation
    Despite investments in rapid application development there was still work to do making it easy to build solutions
  3. Faster insight to execution
    Time to market, time to using analytics, is critical.
  4. Building institutional memory
    Organizations are increasingly focused on how to build institutional memory in a way that can be leveraged, especially as expertise is being embedded into systems.
  5. Greater analytic accessibility
    Organizations need to have more people using analytics and to have analytics be more pervasive.

Major upgrades and new capabilities to address these issues drove the Decision Management Suite 2.0 designation. New and significantly improved capabilities include:

FICO DMN Modeler

Decision making is hard because there is a lack of timely, relevant data; because some of this data is contradictory or opinion-based; because there is not enough planning; and because communication is poor. To address this there has been a real effort to define decision-making formally, separately from business process or data: the Decision Model and Notation (DMN) standard. Part of the new suite is a modeler for building decision models based on the new standard. I blog about the standard a lot, so here’s a link to other posts on decision modeling.

FICO Optimization Solutions

FICO has been doing optimization a long time and made a number of acquisitions to build out its product portfolio. Their focus has been not just on developing innovative algorithms but also on rapid operationalization of these models, based on templates and on coping with poor data. New features in the 2.0 suite include new templates around pricing and collections problems, an improved business analyst interface, improved collaboration for those working on optimization models, and improved performance.

FICO Decision Modeler

FICO Decision Modeler is the evolution of Blaze Advisor, FICO’s established Business Rules Management System, on the cloud. The cloud focus makes it easier to engage business users, extending testing and validation in particular. Faster deployment and operationalization is also critical. All the decision rule metaphors have been redesigned and built as native HTML editors. Rapid deployment of SAS, PMML and other models without recoding allows analytics to be combined with the rules built using these metaphors.

FICO Text Analyzer

New tools for extracting structured, usable insight from unstructured text through entity analysis and related algorithms. FICO’s particular focus is to make unstructured data available for predictive analytics cost effectively.

FICO Strategy Director

Strategy Director is based on long experience with managing decision-making strategies in the account and customer management space (FICO’s TRIAD product). It is designed to provide a common environment that supports collaboration, shifts the balance from IT to the business, and means teams are not starting from scratch each time. It is particularly good at managing groups, scoring them, doing champion/challenger (A/B) testing, segmenting customers and then reporting on all this. The new Strategy Director is available for configurations beyond account/customer management – using configuration packages based on data fields, variables, scores and defined decision areas. These can be defined by FICO, customers or partners and can be updated based on what is learned. New configurations are coming for pricing, deposits and in industries beyond banking.

FICO Decision Central

This is the evolution of FICO Model Central (reviewed here). It now records how the whole decision was made, not just what the analytic scores were. All the decisioning assets used in the decisions are recorded, all the outcomes and performance data are pulled together, and all the logged information (scores calculated, rules fired, etc.) is captured. It is a tool for reviewing how decisions are being made, improving them and capturing institutional memory.

FICO Decision Management Platform 2.0

All of these capabilities need to be deployable and manageable. The platform has to be scalable, resilient, easy to integrate and all the rest. It includes the DMN Modeler, a new data modeling environment for business vocabulary/business terms, Decision Modeler for logic, and one-click execution on Apache Spark – turning a DMN model into a Spark graph for native execution. Plus platform management capabilities and visualization, all running on AWS (with FICO cloud and on-premise to come).

FICO Decision Management Platform Streaming

The final piece handles streaming data and makes very low-latency decisions by embedding decisioning in the stream: not just handling the data stream, but using rules and predictive analytics to make in-stream decisions. This platform is designed to allow drag-and-drop assembly of steps (rules, models, connectors) into stateful models that are agnostic of the data source, and to execute them very fast with very low latency.
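
To make the idea of composing stateful, source-agnostic steps concrete, here is a minimal sketch in Python. It is not FICO's platform or API; the step functions, field names and thresholds are all invented for illustration, but the shape – connector, model and rules steps applied in order to each event while sharing state – is the point.

```python
# Illustrative only: a conceptual sketch of composing decisioning steps
# (enrich, score, decide) over a generic event stream with shared state.
from typing import Callable, Dict, Iterable, List

Step = Callable[[dict, dict], dict]  # (event, state) -> event

def enrich(event: dict, state: dict) -> dict:
    # Connector step: add running context, e.g. a count of events per card.
    counts = state.setdefault("txn_counts", {})
    counts[event["card"]] = counts.get(event["card"], 0) + 1
    event["txn_count"] = counts[event["card"]]
    return event

def score(event: dict, state: dict) -> dict:
    # Model step: a stand-in for a predictive risk score.
    event["risk_score"] = min(1.0, event["amount"] / 1000 + 0.1 * event["txn_count"])
    return event

def decide(event: dict, state: dict) -> dict:
    # Rules step: an in-stream decision based on the score.
    event["action"] = "refer" if event["risk_score"] > 0.8 else "approve"
    return event

def run(stream: Iterable[dict], steps: List[Step]) -> Iterable[dict]:
    state: Dict[str, dict] = {}  # state kept across events, regardless of source
    for event in stream:
        for step in steps:
            event = step(event, state)
        yield event

if __name__ == "__main__":
    events = [{"card": "1234", "amount": 250.0}, {"card": "1234", "amount": 950.0}]
    for result in run(events, [enrich, score, decide]):
        print(result["card"], result["risk_score"], result["action"])
```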

FICO uses the new platform itself to develop solutions such as its FICO Originations Manager – now built and executed 100% on the new platform. The new platform will be available on the FICO Analytic Cloud, with much of it available already and the rest soon – with free trials and some free usage.

Back when I was attending the SAS Inside Intelligence analyst day they briefed us under embargo about their new Viya platform. This was announced today from SAS Global Forum. Ultimately all the SAS products will be moving to the SAS Viya platform and the platform is designed to ensure that all SAS products have some common characteristics:

  • All HTML5, responsive user interfaces supporting both point-and-click/drag and drop interactions and an interactive programming interface across the products on the platform. This is intended to allow some to program and some to work more visually while sharing key underlying components. SAS has also invested in making the interfaces more usable in terms of providing improved visual feedback and interactive suggestions as users work with data for example.
  • Support for Python, Lua and Java not just SAS’ own programming language. In addition REST APIs will be available for the services delivered by the products on the platform so that these can be integrated more easily and accessed from a wide variety of environments. This is based on a micro-services architecture designed to make it easy to take small pieces of functionality and leverage them.
  • Multi-platform and cloud-centric to try and remove some of the impedance created as companies mix and match different computing platforms. This is true especially of the deployment capabilities with a much greater focus on SDKs, APIs and deployment more generally. Viya products will provide support for deployment in-Hadoop, in-database, in-stream and in-memory.
  • SAS is committed to delivering a wide range of new analytic and machine learning (and cognitive) algorithms on this platform as well as making it easier to integrate their algorithms with others’. Many of the new algorithms should be available as services in the cloud, allowing them to be easily integrated not just leveraged inside SAS tools.

More to come on this but I think this is a good direction for SAS. The years of development behind SAS products give them some heft but can also make them “lumpy” and result in layers of technology added on top of each other. Viya will let them re-set a robust and powerful set of capabilities on a modern and more open platform.

Lisa Kart and Roy Schulte recently published a new research report Develop Good Decision Models to Succeed at Decision Management (subscription required). This is the first piece of formal research published by Gartner on decision modeling. Their introduction text says

The industry trends toward algorithmic business and decision automation are driving wider adoption of the decision management discipline. To succeed at decision management, data and analytics leaders need to understand which decisions need to be modeled and how to model them.

I really like this phrase “algorithmic business.” I was just co-hosting the TDWI Solution Summit on Big Data and Advanced Analytics with Philip Russom and we discussed what “advanced analytics” meant. We concluded that it was the focus on an algorithm, not just human interpretation, that was key. This phrase of Gartner’s builds on this and I think it is clear that advanced analytics – data mining, predictive analytics, data science – is central to an algorithmic business. But it’s not enough, as they also make clear – you need decision management wrapped around those algorithms to deliver the business value. After all, as an old friend once said, “predictive analytics just make predictions, they don’t DO anything.” It is this focus on action, on doing, that drives the need to manage (and model) decisions.

Lisa and Roy make three core recommendations:

  • Use Analytic Decision Models to Ensure the “Best” Solution in Light of Constraints
  • Use Business-Logic Decision Models to Implement Repeatable Decision-Making Processes
  • Build Explicit Analytic and Business-Logic Decision Models at Conceptual, Logical and Physical Level

All good advice. The first bullet point relates to the kind of decision models that are prevalent in operations research. These are a powerful tool for analytical work and should definitely be on the radar of anyone doing serious analytic work.

The second point discusses Business-Logic Decision Models, the kind of model defined in the Decision Model and Notation standard. These decision models are focused on defining what decision-making approach (both explicit logic and analytic results) should be used to make a decision. While using these to structure business rules is the better-known use case, these kinds of models are equally useful for predictive analytics, as Roy and Lisa note in their paper. Business logic models can embed analytic functions such as scoring to show exactly where in the decision-making the analytic will be applied. More importantly, we know from our clients using this kind of decision modeling in their advanced analytics groups that the model provides a clear statement of the business problem, focusing the analytic team on business value and providing requirements that mesh seamlessly with the predictive model development process.

As for the third point, we see clients gaining tremendous value from conceptual models that cover decision requirements as well as more detailed models linked to actual business logic or analytic models to fully define a decision. Any repeatable decision, but especially high volume operational decisions, really repays an investment in decision modeling.

Roy and Lisa also address one of the key challenges with decision modeling when they say that “many data and analytics leaders are unfamiliar with decision models.” This is indeed a key challenge. Hopefully the growing number of vendors supporting it, the case studies being presented at conferences, books and the general uptick in awareness that comes from consultants and others suggesting it to projects will start to address this.

They list some great additional Gartner research but my additional reading list looks like this:

One of the interesting and useful things about the Decision Model and Notation (DMN) standard for decision models is how it handles the data required by a decision. Simply put, a Decision in DMN may have any number of Information Requirements and these define its data dependencies – the Decision requires this information to be available if the decision is to be made. These Information Requirements may be to Input Data (what DMN calls raw data available “outside” the decision) or to other Decisions. Because making a decision creates data – you can think of making a decision as answering a question, producing the answer – Input Data and Decision outcomes are interchangeable in Information Requirements. This has lots of benefits in terms of simpler execution and isolation of changes as models are developed and extended.
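
As a small illustration of this interchangeability, here is a Python sketch (not DMN's XML interchange format; the node names and logic are invented). A Decision's Information Requirements are resolved the same way whether they point at Input Data or at another Decision whose outcome is computed on demand.

```python
# Minimal sketch: Decisions and Input Data as nodes, Information Requirements
# as edges. A requiring decision consumes a sub-decision's outcome exactly
# as it would a piece of raw Input Data.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class InputData:
    name: str

@dataclass
class Decision:
    name: str
    requires: List[object]            # Information Requirements: InputData or Decision
    logic: Callable[[Dict], object]   # decision logic over the required values

def evaluate(node, context: Dict[str, object]):
    """Look up Input Data in the context, or recursively make sub-decisions."""
    if isinstance(node, InputData):
        return context[node.name]
    values = {req.name: evaluate(req, context) for req in node.requires}
    return node.logic(values)

# Example: an eligibility decision that requires a calculated monthly payment.
income = InputData("income")
loan_amount = InputData("loan amount")
monthly_payment = Decision("monthly payment", [loan_amount],
                           lambda v: v["loan amount"] / 60)
eligibility = Decision("eligibility", [income, monthly_payment],
                       lambda v: v["monthly payment"] < 0.3 * v["income"])

print(evaluate(eligibility, {"income": 4000, "loan amount": 30000}))  # True
```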

Recently a group I belong to was asked (indirectly) if an Information Requirement can be met by a Decision in one situation and an Input Data in another. The context was that a specific decision was being deployed into two different decision services. In one case the information this decision needed was supplied directly and in the other it was derived by a sub-decision within the decision service. For instance, a calculated monthly payment is required by an eligibility decision that is deployed in two decision services – in one it is calculated from raw data input to the service and in the other it is calculated and stored before the service is invoked and passed in to the service.

This question illustrates a critical best practice in decision modeling with DMN:

If the information is ever the result of a decision then it is always the result of a decision.

The fact that it is calculated inside one decision service (because the decision is inside the execution boundary) and outside the other (the decision is outside the execution boundary and supplies information for a decision that is inside it) does not change this. This should be shown in every diagram as a decision. If it must be decided then it is always a Decision, and if it is just input as-is to the decision-making then it is Input Data. The fact that its value is sometimes stored in a database or passed in as an XML structure and then consumed as though it were a piece of data does not change its nature.

Why does this matter? Why not let people use Input Data when it is “looked up” and a Decision when it is “calculated”? Several reasons:

  1. Having two representations makes impact analysis and traceability harder – you will have to remember/record that these are the same.
  2. More importantly, there is an issue of timeliness. Showing it as a piece of data obscures this and would imply it is just “known” instead of actually being the result of decision-making.
    For instance we had a situation like this with a Decision “is US entity”. This is a decision taken early in an onboarding process and then stored as a database field. This should always be shown as a Decision in a decision model, though, as this makes people think about WHEN the decision was made – how recently. Perhaps it does not matter to them how long ago this decision was made but perhaps it does.
  3. DMN has a way to show the situation described. A decision service boundary can be defined for each decision service to show if it is “passed in” or calculated.
    I would never show this level of technical detail to a business user or business analyst but it matters to IT. The business modeler should always see it as a Decision (which it is) and simply notes that they have another Decision that requires it. They can treat it as a black box from a requirements point of view so it does not make the diagram any more complex – it just means it’s a rectangle (Decision) not an oval (Input Data).

This is the kind of tip Jan and I are working to bring to our new book, Real-World Decision Modeling with DMN, which will be available from MK Press in print and Kindle versions in the coming months. To learn more and to sign up to be notified when it is published, click here.

I am a faculty member of the International Institute for Analytics and Bob Morison has recently published some great research (to which I made a very modest contribution) on Field Experience In Embedded Analytics – a topic that includes Decision Management. If you want access to the full research you will need to become an Enterprise Research client but the three big ideas, with my comments, are:

  • Embedding analytics in business processes makes them part of the regular workflows and decision-making apparatus of information workers, thus promoting consistent and disciplined use
    We find that focusing analytics, especially predictive analytics, on repeatable decisions that are embedded in day to day processes is the most effective and highest ROI use for analytics. Understanding where the analytics will be used is critical and we really emphasize decision modeling as a way to frame analytic requirements and build this understanding.
  • Unless decisions and actions are totally automated, organizations face the challenges of adjusting the mix of responsibilities between automated analytics and human decision makers
    Again decision modeling can really help, especially in drawing the automation boundary. Of course when decisions are wholly or partly automated you need to embed the analytics you build into your operational systems using Decision Management and PMML for instance.
  • When embedding analytics to assist smart decision makers, you’ve got to make them easy to understand and use – and difficult to argue with and ignore
    As our research into analytic capabilities pointed out, the need for visual v numeric output from analytics was one of the key elements in picking the right analytic capability to solve a problem.

Enterprise Research clients can get the report here and if you are interested in analytics you should seriously consider becoming an Enterprise Research client.

One of the fun things going on over at the Decision Management Community is a series of challenges based on various real or real-ish problems. For each, the site encourages folks to develop and submit a decision model to show how the problem described could be solved. This month there was one on Port Clearance Rules.

We are looking for a decision model capable to decide if a ship can enter a Dutch port on a certain date. The rules for this challenge are inspired by the international Ship and Port Facility Security Code. They were originally developed for The Game Of Rules, a publication of the Business Rules Platform Netherlands. The authors: Silvie Spreeuwenberg, LibRT; Charlotte Bouvy, Oelan; Martijn Zoet, Zuyd University of Applied Sciences.

For fun I worked this up in DecisionsFirst Modeler (with a little help from Jan Purchase, my co-author on Real-World Decision Modeling with DMN). I deliberately did not develop the decision tables for this as I wanted to show the power of a decision requirements model. Looking at the model, and asking only the questions that are immediately apparent as you develop the model, was revealing and to me showed the value of a decision model:

  • It’s clear what the structure of the problem is
  • It’s clear what’s missing
  • It’s much easier to see the whole problem than it is to get the gist from a list of rules

I have done this kind of exercise many times – building an initial model from a set of rules or documents – and it never fails to be useful.

The full report generated from DecisionsFirst Modeler is in the solution set.

Jan Purchase of LuxMagi and I are working away on our new book, Real-World Decision Modeling with DMN, and one of the questions we have been asking ourselves is who we are aiming the book at – who builds decision models? Jan had a great post on Who Models Business Decisions? recently to address this question and I wanted to point you to it and make two quick observations as it relates to analytic projects (both “hard” data science projects and “soft” business analytics projects) and subject matter experts.

We have done a number of analytic projects using decision models to frame analytic requirements. We work with data scientists using decision models to make sure that the data mining and predictive analytic efforts they are engaged in will connect to improved business results (better decisions) and that the models built can be deployed into production. Decision models put the analytic into context and ensure the analytic team stays focused on the business. We also work with data analysts building dashboards or visualizations. These are sometimes just focused on monitoring the business but increasingly are designed to help someone make decisions. By focusing on the decision making first and then designing a dashboard to help, data analysts avoid letting the structure of the data or the availability of “neat” widgets drive the design. Decision models keep you focused on the business problem – improving decision-making. We have a neat white paper on this too – decision modeling for dashboard projects. Don’t only use decision models on rules projects – use them for analytics too.

We are also increasingly bullish on the ability of subject matter experts, business owners, to participate in decision modeling. Our experience is that the simplicity of the DMN palette combined with the logical structure of a DMN decision model makes it easy for SMEs to participate actively in developing a model. They do have to watch their tendency to describe decision-making sequentially but rapidly pick up the requirements/dependency approach critical to DMN models. Don’t limit DMN decision modeling to modelers and analysts – bring in the business!

Don’t forget to sign up for updates on the book so you know when it is published here.

The Call for Speakers is open for DecisionCAMP 2016 until April 1st. This year DecisionCAMP will be hosted by the International Web Rule Symposium (RuleML) on July 7, 2016 at Stony Brook University, New York, USA. This event will aim to summarize the current state in Decision Management with a particular focus on the use of the Decision Model and Notation (DMN) standard. As always it will show how people are building solutions to real-world business problems using various Decision Management tools and capabilities. If you are interested in speaking at this event, the call for speakers is open through April 1st, 2016.

We are currently seeking speakers on a variety of topics such as:

  • Decision Modeling
  • Business Rules Management Systems
  • Predictive Analytics and Data Mining
  • Decision Optimization
  • Decision Management Use Cases in Different Industries
  • Best Practices for using DMN and Decision Management Technologies
Right now we are looking for some great presentations so if you want to present at this event please submit the abstract of your presentation using EasyChair.

If you don’t feel you have something to share then at least make sure you put it on your calendar. Take advantage of the opportunity to share your unique insights to empower industry with the latest advances in decision management – apply to speak here by April 1st.

And don’t forget there are still a couple of days to apply to speak at Building Business Capability 2016 too.

I got an update from a new player in the decision management market today – ACTICO. They aren’t really new, though, as they are using the Bosch SI business rules management system, Visual Rules (last reviewed by me in 2010). The Visual Rules business has been split with Bosch SI focusing on IoT and manufacturing and ACTICO focusing on the financial industry (banks and insurance).

Visual Rules has been in the business a while now (Innovations Software Technology was founded in 1997) and has 100+ customers across 30 countries with a concentration of customers in Europe. The product is currently called Visual Rules for Finance (reflecting the historical strength of the product in banking and insurance) with Bosch SI continuing to use the Visual Rules name.

The product has the same components you are familiar with:

  • A visual modeling studio
  • A Team Server for collaboration and a Builder for testing and deployment
  • Execution tools including a runtime, execution server and batch execution
  • Integration capabilities for database and other data sources as well as Identity Management (supporting multi-tenant)

Plus there is the Dynamic Application Framework for building rules-based UIs.

Visual Rules continues to support flow rules (something a little like a decision tree), decision tables and state flows (classic state transition diagrams). These rule editors are integrated with the Dynamic Application Framework. This framework leverages the rules and combines them with user interface, data designs and integration as well as processes defined using the state flows, to design complete rules-based applications.

Decision Management is the core focus for Visual Rules for Finance moving forward. The suite is now focused on building dynamic business applications to drive business decision-making using rules, analytics, processes and user interfaces. Users can take data, apply business rules and analytics to make decisions, and deliver this in a process or workflow framework for decision-makers with a dynamic user interface while tracking all the documents and supporting information that can be used to drive new analytics and new rules.

Later this year the company is going to re-brand the product around the new ACTICO brand – the ACTICO Platform will have Modeler, Team Server, Execution Server and Workplace (a new name for the Dynamic Application Framework). The ACTICO platform will support business rule management, analytics, process management and Rapid Application Development.

Besides the re-branding, a number of enhancements are planned:

  • Support will be added to allow the import of PMML Decision Trees and the conversion of these into a flow rule. This is a nice feature as it allows the creation of flow rules that can execute an analytic decision tree as part of an overall decision (see the sketch after this list). They are working with a couple of analytic tools to make sure this works with several analytic vendors.
  • A new repository is planned to sit between the collaboration/team server and the execution server. This will handle code generation, test/simulation, approval and deployment. Separating this from the collaboration and execution environments will improve approval processes, traceability and testing.
  • The UI modeling and rendering in the Dynamic Application Framework will be formalized with dedicated editors for UI models. The rendering will be replaced with a modern Angular JS and REST environment so it can support mobile, responsive UIs.
  • Monitoring and execution statistics are being extended to include process state, data values and decision outcomes – information that is known to the Dynamic Application Framework wrapped around a rule execution. All this information will be integrated with the current rule execution statistics (which are nicely displayed in the modeler).
  • Finally, there is a plan to do a cloud offering – putting the server products and the workplace (and the products based on this) into the cloud.
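
The PMML import mentioned in the first bullet is worth a sketch. What follows is my own illustration, not ACTICO's implementation: a simplified PMML TreeModel fragment (a real file would also carry a DataDictionary and MiningSchema) and a few lines of Python that walk the tree and emit the equivalent nested rule logic, which is essentially what a conversion into a flow rule has to do.

```python
# Hedged sketch: read a simplified PMML TreeModel fragment and print the
# equivalent nested if/then rule logic. Field names and values are invented.
import xml.etree.ElementTree as ET

PMML_FRAGMENT = """
<PMML version="4.3">
  <TreeModel functionName="classification">
    <Node score="review">
      <True/>
      <Node score="decline">
        <SimplePredicate field="risk_score" operator="greaterThan" value="700"/>
      </Node>
      <Node score="approve">
        <SimplePredicate field="risk_score" operator="lessOrEqual" value="700"/>
      </Node>
    </Node>
  </TreeModel>
</PMML>
"""

OPS = {"greaterThan": ">", "lessThan": "<", "greaterOrEqual": ">=",
       "lessOrEqual": "<=", "equal": "=="}

def predicate_text(node):
    pred = node.find("SimplePredicate")
    if pred is None:  # the root node typically carries a <True/> predicate
        return "always"
    return f'{pred.get("field")} {OPS[pred.get("operator")]} {pred.get("value")}'

def emit_rules(node, depth=0):
    indent = "  " * depth
    children = node.findall("Node")
    print(f"{indent}if {predicate_text(node)}:")
    if children:
        for child in children:
            emit_rules(child, depth + 1)
    else:
        print(f'{indent}  outcome = "{node.get("score")}"')

root_node = ET.fromstring(PMML_FRAGMENT).find("TreeModel/Node")
emit_rules(root_node)
```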

Beyond that there is consideration of more support for simulation and monitoring of deployed services as well as some support for DMN. In addition, various customers are using advanced analytics in fraud, money laundering and abuse detection systems. These projects are driving additional research into analytics and the team is very focused on how to make sure the data being produced by the platform is easy to consume by analytic teams as well as working on how to integrate analytics into the platform.

You can get more information on ACTICO here and they are one of the vendors in our Decision Management Systems Platform Technologies Report.

[Minor edits for clarity 3/18/16]

Last session of the day is a freeform executive Q&A so I will just list bullet points as they come up:

  • Open Source R is obviously a hot topic in analytics and SAS’ focus on more open APIs that are broadly accessible and their renewed focus on academic partnerships are designed to “leave R in the dust”
  •  The SAS team recognize a need to get their brand in front of new audiences – Small/Medium Enterprises, developers etc – and this is a key marketing challenge this year and one of the reasons for an increasing focus on partners.
  • The move to a more API-centric view is going to create opportunities for new and different pricing models especially with OEMs and other analytic service providers.
  • Open Source is something SAS is happy to work on – so Hadoop is widely used and integrated, most of their installations are running on Linux etc.
  • Clearly an expectation that future sales will be more consumption based given the way the platforms are evolving at SAS and the growth of cloud.
  • In particular the evolution of industry clouds and industry-specific functionality built on SAS available through those clouds will be key.
  • SAS clearly sees opportunities for lowering entry barriers, especially price, so that new customers can explore the ROI of capabilities.
  • Competitive pressures have changed in the last few years with very large competitors increasingly offering broad analytic portfolios while also having to compete with niche startups. SAS is focusing on its core analytic strength and history while also recognizing that it must keep changing in response to changing competitors.
  • SAS sees simplicity in analytics, power in visualization and machine learning all as part of how analytics continues to expand in organizations.
  • Unlike many vendors in the analytic and data infrastructure space, SAS overwhelmingly sells to the Line of Business with a business value proposition and does not see this changing. At the same time they need to make sure IT is behind their technology choices and understands the architecture.
  • The expansion to smaller enterprises involves driving their solutions through inside sales and partners – new pricing and positioning, new sales enablement but not really new products. Plus more OEM and Managed Analytic Service Providers.

And that’s a wrap – lots of open and direct responses from the executive team as always.

Another product/solution focused session, this time on Cybersecurity. This is a relatively new product for SAS but they have production customers and an aggressive development plan. The core focus is detecting attackers who are already on a network before they execute their attacks. For instance, in the Sony hack the attackers were probably on the network for 90 days downloading data, and for more days before that doing reconnaissance. The challenge in doing this comes from a set of issues:

  • Detection avoidance by criminals
  • Limits of signatures and rules that are time consuming and complex to manage
  • Economics of data scale given the amount of data involved
  • Analyst fatigue caused by false positives

NIST talks about five steps

  • Identify
  • Protect and Detect
    Lots of technology focused here like firewalls, identify management etc.
  • Respond
    More technology here focused on generating alerts and having analysts prioritize and focus on the most serious
  • Recover

The key problem is that this still focuses on a “chase” mindset where everything is analyzed post-fact.

SAS Cybersecurity ingests network traffic data in real time and enriches it with business context such as that from a configuration management database (location, owner, etc.). This is used to identify peer groups. In-memory behavioral analytics are applied and presented through the investigation UI so analysts can focus on the most serious problems.

Critical to this is identifying the normal baseline (so you can see anomalies) when the number of devices is in the thousands and all the devices could be communicating with each other. A network of 10,000 devices might produce nearly 100,000,000 relationships, for instance. With this baseline you can detect anomalies. Machine learning can be used to learn what is causing these anomalies before driving analytically-driven triage so that analysts target the most serious problems.
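
To see where that relationship count comes from, treat the baseline as covering every possible directed pair of devices; a quick back-of-the-envelope calculation (in Python, purely for the arithmetic):

```python
# Potential directed communication pairs among n devices
# (excluding a device talking to itself).
n = 10_000
directed_pairs = n * (n - 1)
print(directed_pairs)  # 99990000, i.e. nearly 100,000,000 relationships
```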

Customer Intelligence is a core focus for SAS. Over the last year, real-time next best action, optimization and marketing efficiency have been driving investments in Customer Intelligence in the SAS customer base. More organizations have initiatives focused on improving the customer experience, integrating digital silos for a digital experience, and big data.

The Customer Intelligence product is designed to handle inbound and outbound channels through a common customer decision hub that handles the rules, actions and insight (analytics) for customer treatment. The current product has a strong history in banking and financial services but also has customers in retail, communications and insurance.

Four big themes are driving Customer Intelligence:

  • Optimizing marketing
    Differentiated analytics and optimized marketing processes
  • Customer Journey
    Engage customers the way companies and their customers want across channels and devices
  • Unify customer data
    Even as the range of data increases and specifically across multiple channels
  • Digital ecosystem
    Support the huge array of marketing players – means being cloud, API-driven etc.

This leads to an extension of the customer intelligence suite deeper into additional channels – mobile and web content, email and advertising that are customized, analytic and learn from new data.

A detailed walkthrough of how a marketing analyst might refine their targeting of their customers showed some nice task management, designing flows for offer targeting, analytic tools for comparing A/B test results, integrating anonymous behavior across multiple channels with profiles to drive interactions and much more.

SAS is taking its existing customer data hub and marketing operations investments and extending them deeper into newer digital channels and adding more sophisticated and yet more accessible analytics. Integrating with commercial marketing systems and content delivery systems in a more open way is a critical component so that the intelligence can be embedded into a typical heterogeneous marketing environment.

Next up at the SAS Inside Intelligence event are some technology highlights, each based around a day in the life of a particular role. Much of this is under NDA of course.

Ryan Schmiedl kicked off with a quick recap of last year’s technology – 150 significant releases across the SAS focus areas. In analytics for instance Factory Miner was rolled out (review here), Hadoop was a big focus in the data management area while Visual Analytics and Visual Statistics delivered new visualization capabilities and much more.  Customers, he says, are asking for simplicity with governance, new methods, real-time analytics, solutions that work for big and small problems and new use cases. They want a single integrated, interactive, modern and scalable environment. And that’s what SAS is planning to deliver. With that he introduced the first day in the life presentation – Data Scientist.

SAS loves Data Scientists, they say, and Data Scientists need three things:

  • The right analytic methods – a broad and deep array of these – that are scalable and readily available on premise or in the cloud.
  • A good user experience so they can exploit these methods. Organizations need this to work also for both experienced data scientists and new entrants.
  • Access to these methods in the programming language they prefer. They also need to be able to mix visual and interactive tools with this programming plus they need to be able to automate tasks – to scale themselves.

Business Analysts are the second role to be considered. SAS Visual Analytics is SAS’ primary tool for business analysts with BI, discovery and analytics capabilities in an easy to use UI. As was noted earlier, new visual interfaces for data wrangling as well as new data visualization capabilities are coming in the product, along with suggestions to help analysts when they get stuck. Mobile interfaces are popular with users for consuming analysis, so making it easy for business analysts to deliver reports or visuals that work on every UI matters. Meanwhile the Visual Analytics UI is being simplified.

Next up is a new one – Intelligence Analyst. These folks sit between data scientists and business analysts and are increasingly found in fraud and security prevention, where an automated environment uses analytics to flag items for investigation and the investigators also need to be able to do analytics interactively as part of their investigation. Providing a combined interface for these analysts is a key capability for the new fraud and security environment. This handles text analytics, search, network analysis and a bunch of other SAS capabilities in a nice integrated and configurable environment.

Final role-based demo is for IT Analysts. IT are focused on how fast they can fix problems, making sure problems stay fixed and on keeping costs under control. New tools for managing the SAS environment and the events generated by it are designed to make it easy to find out about problems, program automated responses and do investigation of persistent problems.

A bonus demo focused on IoT – the Internet of Things. IoT has use cases in everything from connected cars to manufacturing, from retail to healthcare. IoT requires analytics – to turn all that data into something actionable – and it requires real-time, streaming analytics. IoT means access to data from devices, filtering and transformation of this data at source before transmitting it, analytics applied to the streaming data, storing and persisting the right data, and actively monitoring and managing deployed models as data changes. And then you need to be able to do ad-hoc analysis to see what changes you need to make moving forward.

There was a lot of new stuff demonstrated but it was not 100% clear what was under NDA and what was not so I was pretty conservative about what I blogged.

I am at the SAS Inside Intelligence event in Steamboat getting the annual update on all things SAS. First session of the day is the Executive Viewpoint. Jim Goodnight and Randy Guard kicked things off.

Creating a single global organization was a big part of last year with legal, finance, sales, marketing and more becoming global efforts. Marketing and sales in particular were rather too country-local. Marketing and sales globalization focused on GTM alignment, sales enablement, regional and global services and brand/creative direction. SAS has refocused its marketing in particular away from a channel-specific approach to a more customer-journey focused one (using, of course, SAS software). Each product line has been integrated into this approach to create a more coherent, global GTM plan. Added to this has been a set of industry templates, sales plays and use cases designed to make SAS and its partners more able to sell the capabilities it has by focusing on particular use cases. More advertising, new and expanded events also driving this message harder into the market.

Sales in 2015 were strong – over $3.0B – with an 8-17% growth rate worldwide for first year licenses, with increasing deal sizes and the number of large deals both up as well. Overall revenue grew a little slower – 5-11% – but was also pretty strong. Risk and data management were particularly strong, with business visualization also showing good results. Modernizing existing customers to the latest and greatest was also a focus and apparently went well. Partner supported revenue grew 57%, showing an increase in partner engagement. Not much change from a sales perspective for 2016.

This modernization program is, I think, really important. Getting customers off the old versions of software and the old servers they run on is critical to sustaining these customers. Modernizing means customers are using the latest, scalable technology (like grid and the high performance analytics) as well as the newer tools like Enterprise Miner and Visual Analytics. There were good stories from some customers seeing dramatic increases in performance, especially thanks to SAS Grid.

The history of the R&D program to improve performance runs back to 2009 and includes the core High Performance Architecture, the SAS LASR server and SAS CAS – Cloud Analytics Server – a massively parallel architecture developed in 2013 that balances in-memory and cloud. This new server has load balancing, failover, easy install and a highly scalable architecture to deliver elastic compute, as well as support for managing datasets that won’t fit in memory. This will ship in Q2 and then add REST, Java, Python and Lua interfaces in the fall so that it can be integrated into a modern IT environment.

SAS is also planning to fight back against the growth of R in universities. SAS Analytics University Edition is free and complete for academics, has had 500,000+ downloads, and is provided in on-demand (40,000+ users) and AWS (3,000 users) versions. SAS has also partnered with 37 Masters of Analytics programs and over 30 new joint certificate programs were added in 2015.

SAS continues to grow internationally with offices opening in Dublin and Paris as well as some new offices in the US (like Detroit). Plus the Cary campus is getting another new building. It continues to rank well on the great places to work surveys and to have local offices and presence.

A few additional facts on the business

  • 49% in Americas, 38% in Europe and 13% in Asia Pacific
  • 26% in Banking, 15% in Government, 10% in Insurance. Interestingly only 5% in retail.
    • Banking growth being driven a lot by risk with the expansion of stress testing and regulatory requirements. Fraud also drove growth in Banking and Government.
    • Life Sciences at 7% started to include more sales and marketing not just R&D and this growth also came with a willingness to use the cloud.
    • Manufacturing is at 6% and IoT is an increasingly big deal for SAS in this area as manufacturers start to instrument their products.
  • SAS consistently invests heavily in R&D – 25% of revenue v an industry average of 12.5%.
  • Partnering is an increasing focus:
    • They wanted in 2015 to become the analytics partner of choice.
    • Their target is to have partners participate in 35% of new revenue by the end of 2018 while driving 25% of new deals.
    • For 2015 they hit 30% partner participation in new sales and 18% led by the partner, so good progress.
    • Partner resell revenue grew 3x with 200 resellers, 2 new OEM agreements and 9 Analytic Service Provider agreements.
  • SAS is investing more in its brand this year, building on the confidence people have in SAS products and adding a focus on clarity and compassion.

Driving forces for the SAS business are pretty obvious:

  • Data growth, new sources, new types
  • Analytics – consumable by everyone from data scientists to business user/application users
  • Self service and discovery and the enthusiasm for this in companies – expanding from visualization and into data wrangling/data blending.
  • Connected everything but so what…

And this results in a set of 6 focus areas for SAS

  • Analytics
  • Data Management
  • Business Visualization
  • Customer Intelligence
  • Risk Management
  • Fraud and Security Intelligence

Plus some emerging areas including Cybersecurity and the Internet of Things.

Enabling Technologies for all this include

  • Data + Processing Power + Hadoop – put processing close to the data
  • Event Stream Processing as more data is “in flight”
  • In-memory Processing
  • Visualization

All of this being brought together with a strong focus on common user experiences and integration across products.

Lots of interesting additional news and some good choices by SAS presented under NDA. More on the technology later in the day.

Predictive analytics is a powerful tool for managing risk, reducing fraud and maximizing customer value. Those already succeeding with predictive analytics are looking for ways to scale and speed up their programs and make predictive analytics pervasive. But they know there is often a huge gap between having analytic insight and deriving business value from it – predictive analytic models need to be added to existing enterprise transaction systems or integrated with operational data infrastructure to impact day-to-day operations.

Meanwhile the analytic environment is getting increasingly complex with more data types, more algorithms and more tools including, of course, the explosion of interest in and use of R for data mining and predictive analytics. Getting value from all this increasingly means executing analytics in real-time to support straight through processing and event-based system responses.

There is also increasing pressure to scale predictive analytics cost-effectively. A streamlined, repeatable, and reliable approach to deploying predictive analytics is critical to getting value from predictive analytics at scale. This must handle the increasingly complex IT environment that contains everything from the latest big data infrastructure like Hadoop / Storm / Hive / Spark to transactional mainframe systems like IBM zSystems.

PMML – the Predictive Model Markup Language – is the XML standard for interchanging the definitions of predictive models and a key interoperability enabler, allowing organizations to develop models in multiple tools, integrate the work of multiple teams and deploy the results into a wide range of systems.
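
To show how little is needed to consume a PMML definition, here is a deliberately simplified sketch: an illustrative regression fragment (field names invented, and a real PMML file would also include a data dictionary and mining schema) scored with nothing but the Python standard library. Any compliant tool reading the same XML recovers the same coefficients and reproduces the same score, which is the interoperability point.

```python
# Illustrative, simplified PMML fragment plus a tiny standard-library scorer.
import xml.etree.ElementTree as ET

PMML_FRAGMENT = """
<PMML version="4.3">
  <RegressionModel functionName="regression">
    <RegressionTable intercept="0.5">
      <NumericPredictor name="income" coefficient="0.0002"/>
      <NumericPredictor name="debt_ratio" coefficient="-1.2"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
"""

def score(pmml_text: str, record: dict) -> float:
    # Pull the intercept and coefficients straight out of the XML and apply them.
    table = ET.fromstring(pmml_text).find("RegressionModel/RegressionTable")
    result = float(table.get("intercept"))
    for predictor in table.findall("NumericPredictor"):
        result += float(predictor.get("coefficient")) * record[predictor.get("name")]
    return result

print(score(PMML_FRAGMENT, {"income": 5000, "debt_ratio": 0.3}))  # ~1.14
```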

I am working on a new paper on this and if you are interested you can get a copy by signing up for our forthcoming webinar – Predictive Analytics Deployment to Mainframe or Hadoop – on March 3 at 11am Pacific where I will be joined by Michael Zeller of Zementis.

 

Jan Purchase of LuxMagi and I are working away on our new book, Real-World Decision Modeling with DMN, which will be available from MK Press in print and Kindle versions in the coming months, and I wanted to take a moment to talk about why Jan and I are the right people to be writing it. Our aim is to provide a comprehensive book that explains decision modeling as well as the DMN standard and that gives practical advice based on real projects.

Regular readers of the blog will have a perspective on me and why I am in a position to write such a book but for those without the history, here are some highlights:

  • I have been working in Decision Management since we first started using the phrase back in 2002 – there is at least one witness who claims I came up with the phrase – and have really done nothing else since then. Throughout this period one of the key challenges has been how best to structure, manage and document a decision so it can be effectively managed. We tried rule-centric approaches, process-centric approaches and document-centric approaches but until we started using decision modeling none of them was really satisfactory. This context makes me really value decision modeling and gives me a wide range of counter-examples!
  • As I got interested in decision modeling, I wrote a chapter for Larry and Barb’s book on their decision modeling approach, wrote the foreword to Alan Fish’s book on decision modeling and included the basic outline of what would become the DMN standard in my book Decision Management Systems.
  • Decision Management Solutions was one of the original submitters when the OMG requested proposals for a decision modeling standard and I have been an active participant in every step of the DMN process, both the original and subsequent revisions.
  • Our work with clients has involved building decision models for rules projects and for predictive analytics projects as well as for manual decision-making and dashboard efforts. We have built models in financial services, insurance, healthcare, telco, manufacturing and travel. We have also taught hundreds of people across dozens of companies how to do decision modeling.
  • My personal work with decision management technology vendors has exposed me to their clients too, providing a huge pool of experiences with decisioning technology on which to draw.
  • Plus I have written and co-written several books, including most recently working with Tom Debevoise to write the first book to introduce DMN – The MicroGuide to Process and Decision Modeling in BPMN/DMN.

So why Jan? Jan too has a depth of experience that makes him a great choice for this book:

  • Jan has spent the last 13 years working with business rules and business rules technology. While structuring and managing business rules is not the only use case for decision modeling, it is a very powerful one and the one that is the primary focus of this book. Jan’s long time focus on business rules gives him a huge array of examples and experiences on which to draw.
  • Part of this experience is with lots of different Business Rules Management Systems. Like me, Jan has seen the industry evolve and used multiple products giving him a breadth of perspective when it comes to how business rules can be presented to business owners, how SMEs can be engaged in managing business rules and much more.
  • Jan’s experience is intensely practical, working to develop the business rules directly as well as mentoring others who are developing business rules, providing training in decision modeling and business rules best practices and acting as a reviewer and advisor.
  • Jan has spent 19 years working in finance and has worked with 8 of the top 15 investment banks, for instance. He has worked on everything from liquidity to compliance, accounting to loans and asset classification – he has tremendous experience in one of the most complex, heavily regulated industries out there. Decision modeling has a critical role to play in a modern regulatory architecture so this experience is invaluable.
  • Before working with DMN on projects Jan worked with The Decision Model extensively giving him a perspective on decision modeling influenced by the two major approaches out there.

Between the two of us we have a depth of experience we believe can make Real-world Decision Modeling with DMN not just a book about the notation and how to use it but a genuine best practices guide to decision modeling.

To learn more and to sign up to be notified when it is published, click here.

This year DecisionCAMP will be hosted by the International Web Rule Symposium (RuleML) on July 7, 2016 at Stony Brook University, New York, USA. This year’s event will aim to summarize the current state in Decision Management with a particular focus on the use of the Decision Model and Notation (DMN) standard. As always it will show how people are building solutions to real-world business problems using various Decision Management tools and capabilities – everything from Business Rules Management Systems to data mining tools, optimization and constraint-based environments to machine learning and prescriptive analytics. DecisionCAMP gives you a chance to:

  • Learn about new trends in Decision Management technologies, and how they can be used to address your business problems
  • Share practical results of using various decision management technologies in business settings
  • Exchange best practices for using DMN and decision management technologies.

Right now we are looking for some great presentations so if you want to present at this event please submit the abstract of your presentation using EasyChair.

If you don’t feel you have something to share then at least make sure you put it on your calendar. See you there.

My friends at TransUnion have an interesting job opening for someone with a background in decision management and consulting. They are looking for a Sr. Director, Decision Services CoE.

The Sr. Director, Decision Services Center of Excellence (CoE) will be responsible for developing and driving strategy to grow the Decision Services business for TransUnion’s International business. The Sr. Director will leverage a direct and matrixed group of business and product professionals to meet business goals and drive superior internal and external customer satisfaction.

The job is based in Atlanta (I think) but focused on their international business and specifically on their use of their DecisionEdge decision management platform to deliver solutions around the world. TransUnion, for those who might think of them only as a credit bureau, is “dedicated to finding innovative ways information can be used to help people make better and smarter decisions. As a trusted provider of global information solutions, our mission is to help people around the world access the opportunities that lead to a higher quality of life, by helping organizations optimize their risk-based decisions and enabling consumers to understand and manage their personal information.”

Details here – apply direct with them.

Decision Management Solutions joined the OneDecision.io consortium back in September and we have been working with them ever since, both within the Decision Model and Notation (DMN) standards process and to provide some integration between the OneDecision.io Java-based reference implementation for DMN execution (which supports basic decision tables, JSON data types, and the standardized DMN XML interchange format) and DecisionsFirst Modeler, our decision requirements modeling platform.

We believe that the best way to integrate execution-oriented environments like OneDecision.io (or IBM Operational Decision Manager and other commercial Business Rules Management Systems) is by linking the decision requirements diagrams you build to the matching implementation in your target environment. We have now completed the initial prototype for the integration of DecisionsFirst Modeler Enterprise Edition with OneDecision.io and you can see the results in the video.
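
For readers who have not seen one executed, here is a rough Python sketch of the kind of simple, single-hit decision table OneDecision.io runs against JSON data. This is not OneDecision.io's API (the reference implementation is Java) and the table contents are invented; it just shows the decision-table-over-JSON idea that the integration links back to the decision requirements diagram.

```python
# Sketch of a single-hit decision table evaluated against JSON-like input data.
import json

# Each rule: a set of input tests plus the output returned on a match.
DISCOUNT_TABLE = [
    {"when": {"customer_type": "business", "order_size": lambda n: n >= 10}, "then": 0.15},
    {"when": {"customer_type": "business", "order_size": lambda n: n < 10},  "then": 0.10},
    {"when": {"customer_type": "private"},                                   "then": 0.05},
]

def matches(test, value):
    # A test is either a literal to compare against or a predicate function.
    return test(value) if callable(test) else test == value

def decide(table, data):
    for rule in table:
        if all(matches(test, data[field]) for field, test in rule["when"].items()):
            return rule["then"]
    return None  # no rule matched

record = json.loads('{"customer_type": "business", "order_size": 12}')
print(decide(DISCOUNT_TABLE, record))  # 0.15
```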

If you are interested in this integration, or any others, please get in touch – info@decisionsfirst.com.