It’s opening keynote time at IBM’s World of Watson 2016 and we kicked off with a video history of Watson from Jeopardy to today. Dr John Kelly of IBM got us started, emphasizing how rapidly interest in Watson has grown over the last year or two. In August 2007, he says, a small team of researchers proposed to build an AI/Cognitive system – they felt they had the key techniques and technology to succeed even though many (indeed all) previous attempts had failed. Five years later Watson won its Jeopardy appearance.

Now, he says, Watson is starting to fundamentally change the way decision-making is done across many industries. In a few years, he says, things are going to really change – people making complex decisions will all want to consult a cognitive system to help them make a better one, whether they are discussing mergers, considering a difficult cancer patient or something completely different. And beyond that, he thinks, Watson will begin to predict, not just assist. But his focus remains on how Watson can and will help people make better decisions.

Tom Friedman, the author of The World is Flat, joined Dr Kelly on stage. He has been working on a new book called “Thank You For Being Late” – all about the value of giving people the time to pause in an era of acceleration. He told a great story about meeting a guy at a parking garage and talking with him about how to write a column. He likes to provoke or illuminate with his columns, which requires understanding one’s position in the world, how one thinks the world works and what one makes of that.

Right now he sees the way the world “machine” works changing – the digital globalization of the market, Moore’s Law and technology, and the rapid change of nature due to climate change and population growth. All three are hockey-stick graphs and all three interact with each other. As he was looking at this he saw that 2007 was an important year – the iPhone, Facebook, Google buying YouTube, the prices of sequencing a genome and of solar power falling off a cliff, Intel moving off silicon and much more – and IBM’s researchers started Watson. He thinks 2007 was a technology inflection point – and we missed it because of the crash of 2008. In addition, the political impact of this was that much of the social and legal framework needed to cope with this change did not get built, so there is a major disconnect.

All of this technology change gets lumped into the cloud but he finds this too “soft” a word as it is really a supernova of change. Storage, compute power and connectivity all came together in 2007 to deliver an invisible technology platform. This changes four things:

  • Power of one person to build or break is greater than ever
  • Power of many, of groups, to change the world is greater
  • Power of flow as ideas flow around the world
  • Power of machines

And the power of machines was demonstrated by Watson winning Jeopardy in 2011. And the world has never been the same since…

Politics, geopolitics, the workplace, ethics and community are all being dramatically changed by these trends of market, nature, and technology. The challenge is how to reimagine them. He talked briefly about three of them:

  • The workplace, for instance, has to figure out how machine technology and AI are going to change things – how to use intelligent assistance to change people’s jobs. This means new skills, continuous learning by employees and much more.
  • Politics is being blown up as things change – the parties were structured around old problems, not around these challenges of climate, technology and globalization. He used nature as an example of coping with change – sustainable, experimental, niche-filling, patient, willing to kill failures and so on. Politics, he says, is going to be overrun by the pace of change and only parties that can adapt to this new world will survive…
  • Ethics is also going to change. As everything we do becomes digital – friendships, relationships, work and much more – we need to rethink value systems to work in this connected but not hierarchical environment. The power of one person in this environment is completely different – to make or to break everything. This means that how people think and act – their ethical view – really matters. We have to scale the golden rule – do unto others as you would have others do to you – to include everyone. Family, values, teaching and ethics all really matter.

A great speaker and a great speech. David Kenny, GM of Watson, drew the short straw and had to follow Tom.

David reiterated the focus on augmentation – that AI is augmented intelligence rather than artificial intelligence. We have a long history of using technology to augment our cognitive capabilities. As the world becomes awash in data the need to apply analytics and cognitive to make better decisions becomes even more important. David reiterated the four elements that IBM sees supporting this:

  • Cloud – the IBM cloud and hybrid cloud particularly
  • Content – managing both structured data and unstructured text and content
  • Compute – algorithms and services to understand and extract value from this content
  • Conversation –  human ways to work with these elements in conversational applications

Watson must be able to understand language, reason at scale, interact naturally – this last includes some new announcements for Apple iOS applications that can be connected to Watson. David used a few customer stories to illustrate Watson:

  • Bradesco, a South American bank, recently used Watson to support mobile applications and connected it to their legacy applications so employees could support customers more effectively.
  • Staples illustrated their Watson powered “easy button” – offering a chat bot or text or audio to help their staff and their customers. It handles more and more transactions, freeing customer service staff to work on more complex problems.
  • GSK – Theraflu – uses Watson to power an interactive tool to help people find out how over-the-counter medicine might help or if they need something more. By answering questions they hope to help consumers but also build brand loyalty.
  • KPMG uses Watson in its audit practice using it to help auditors find very detailed information about loan portfolios for instance and presenting it in a traditional format and with explanation/justification.
  • OmniEarth discussed how they work with municipalities to conserve water, especially outdoor watering. Watson processes satellite and other image data, classifying surfaces to see how much water is being used.

Pearson (publisher of both Smart (Enough) Systems and Decision Management Systems) came up next to announce a partnership with IBM. As the number of students explodes, the challenge is making sure these students get a great teacher – how can higher education be scaled? The partnership is about using Watson to help teachers and help students be better prepared.

Sebastian Thrun of Udacity wrapped up the session. He began by talking about teaching AI at Stanford when it was still a niche topic and the shift, 5 years ago, when the class went online and 100,000 students took it. He uses self-driving cars to illustrate a critical point about cognitive technology – that everyone benefits when it learns from a mistake. People find it nearly impossible to learn from the mistakes of others, but self-driving cars and other cognitive systems can. Anything repetitive can therefore be improved more rapidly by cognitive systems, and this ability is going to help cognitive systems accelerate past people in many tasks at an ever increasing rate. Udacity is partnering with IBM to deliver a nanodegree program on artificial intelligence.

Continuing in the analyst program at IBM’s World of Watson event with Beth Smith, GM Offerings and Technology for IBM Watson, introducing some Watson elements for Conversation – one of the four C’s of Watson (Cloud, Content, Compute and Conversation).

Watson, at its core, is about finding knowledge in noisy data at enormous scale. Watson listens to signals, uses machine learning and deep learning to find patterns and then makes recommendations that it can explain. For instance, in the Conversation piece, there is Watson Conversation for developers (designed to be used with other content and services) and a configurable app – Watson Virtual Agent for customer service (built on top of the developer service).

Both kinds of products – developer services and configurable apps – are delivered continuously as cloud solutions with new capabilities being added. This development is increasingly informed by the interactions the deployed services are managing.

Watson Conversation is a service that is free for developers to engage with. It comes with the tone analysis service integrated. The service has four main pillars:

  • The intents
    Each intent can have lots of strings representing different ways to express it. The system uses these as a baseline set of ways to identify the intent but will also learn other ways to identify it.
  • The entities
    Developers can define new entities and can use system entities like date, time, percentage etc.
  • The dialog
    A flow can be defined for a dialog using steps and links.
  • Improvement
    Interactions can be viewed by intent, entity etc. so it is easy to see what would make the system better.

It’s worth noting that there is no additional training step – as you add things to the definitions they are part of the system’s behavior. At any point the developer can use a try panel to see how a particular string is handled.
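To make the developer experience concrete, here is a minimal sketch of sending a user utterance to a Conversation workspace over its REST API and reading back the detected intents and entities. The endpoint, version date, credential style and response fields are illustrative assumptions based on how the service was documented at the time – substitute the details of your own workspace.

```python
import requests

# Illustrative values only – substitute your own service credentials and workspace ID.
CONVERSATION_URL = "https://gateway.watsonplatform.net/conversation/api/v1"  # assumed base URL
WORKSPACE_ID = "YOUR_WORKSPACE_ID"
AUTH = ("YOUR_SERVICE_USERNAME", "YOUR_SERVICE_PASSWORD")
VERSION = "2016-09-20"  # assumed API version date

def send_message(text, context=None):
    """Send one user utterance and return the service's JSON response."""
    response = requests.post(
        "{}/workspaces/{}/message".format(CONVERSATION_URL, WORKSPACE_ID),
        params={"version": VERSION},
        auth=AUTH,
        json={"input": {"text": text}, "context": context or {}},
    )
    response.raise_for_status()
    return response.json()

result = send_message("I need to reset my password")
print([i["intent"] for i in result.get("intents", [])])   # matched intents, e.g. ['reset_password']
print([e["entity"] for e in result.get("entities", [])])  # system or custom entities found
print(result.get("output", {}).get("text"))               # the dialog's response text
```

The context returned on each call would be passed back on the next one so a multi-turn dialog can progress.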

Watson Virtual Agent is preconfigured on top of the Conversation service with pre-defined intents, entities etc. The service is configured using a more business-oriented UI based on tiles. There are various handlers for each tile – redirect, invoke your own workspace, escalate to a human agent, give a text response. This allows the business user to configure one of the 90+ predefined cross-industry intents (there are also a number of Telco-specific ones) with the option to link to a custom solution.

The app has some lightweight metrics and reporting built in around intents and entities – for instance what are the intents that lead to human interaction most often. In addition all the data can be exported for analysis elsewhere.

Another key piece of tooling is the Watson Knowledge Studio, designed to support exploration and discovery. It moves beyond these kinds of structured conversations to a more general understanding of a domain, allowing a Watson service to apply a more domain-specific view of the content. Examples are reviewed and mapped to defined entities. Relations can be defined simply by linking entities graphically – allowing the organization that manufactures a product to be shown, for instance. The engine will then use patterns in the examples to find similar patterns and so identify additional entities. It also uses the patterns to avoid over-classifying things based on simple format or data type. Domains defined in this way can then be applied using the Watson Alchemy Language Service or Watson Explorer.
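As a rough illustration of the kind of information such a domain definition captures – this is just a sketch of the shape of a type system and an annotated example, not the actual Knowledge Studio artifact format:

```python
# A hypothetical, simplified domain type system – illustrative only.
type_system = {
    "entity_types": ["Organization", "Product", "Material"],
    "relation_types": [
        {"name": "manufactures", "from": "Organization", "to": "Product"},
        {"name": "made_of",      "from": "Product",      "to": "Material"},
    ],
}

# Human-annotated examples like this teach the engine the patterns it then
# generalizes to find new entities and relations in unseen text.
annotated_example = {
    "text": "Acme Corp ships the RoadRunner 3000 trap, machined from titanium.",
    "mentions": [
        {"span": "Acme Corp",       "type": "Organization"},
        {"span": "RoadRunner 3000", "type": "Product"},
        {"span": "titanium",        "type": "Material"},
    ],
    "relations": [
        {"type": "manufactures", "from": "Acme Corp",       "to": "RoadRunner 3000"},
        {"type": "made_of",      "from": "RoadRunner 3000", "to": "titanium"},
    ],
}
```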

In summary, the ability to merge and layer these services, using a rich set of APIs, as well as the ability to customize their behavior and apply customer domain knowledge, are critical to scaling Watson services.

Watson also processes in a wide variety of languages and more are being added – these are not translations but learning done in a different language. In addition, there are starter kits, demos, sample code etc on the Watson Developer Cloud.

I am attending IBM’s World of Watson and will be blogging as much as I can. First up is a two-part session on Advanced Analytics and how you can put advanced analytics at the heart of a Cognitive strategy. Paul Zikopoulos kicked things off talking about the potential for data to transform business and the huge amount of data being generated 24×7. Yet, he points out, 24×7 real-time decisioning not so much. The IoT, of course, will explode even the current levels of data and that, he says, leads you to need Cognitive capabilities.

Self service, he says, has largely failed in most organizations. People are not really self-serving, they are still using other people’s creations – the new tools made it easier to build things but did not really change the paradigm. Self service though requires data that can be trusted, tools that allow business and IT to collaborate, ways to check and manage bias and much more. This complexity is what led IBM to create Watson Analytics. And at the end of the day it’s all about improving the quality of decision-making.

Mark Altshuller joined him to discuss the vision for analytics at IBM:

  • Expand the user base to include citizen analysts
  • Rethink the UI around IBM’s design thinking principles across products
  • Make it easier to connect and use data
  • Smarter self service

He presented the data to insight lifecycle – Operational Reports to Data Discovery/Predictive Analytics to Enhance/Operationalize to Smarter Decisions and repeat. This is going to frame the discussion in the session, he says. I like the focus on decision-making but I prefer to start with the decision and work back to the data 🙂

A short video of the new capabilities as part of the continuous delivery of IBM’s analytics platform followed with lots of new UI and functions. The new UI is also becoming more embeddable and customizable with new visualization capabilities (shared between Cognos and Watson Analytics), geospatial mapping and data management capabilities.

The new geospatial mapping capabilities are focused on a wide range of geospatial problems, up to and including displaying real-time data streams on maps (demonstrated by Mapbox, one of the new partners) and analyzing tweets to see who is a tourist and how they move relative to locals. Indeed, processing some data, like phone operating systems, can effectively draw the map without the map, showing divides like the gentrification of a city based on Apple v Android usage. More accurate processing allows the lanes on roads to be identified and analyzed separately.

Data discovery was introduced in Watson Analytics and the usage of this original interface was instrumented and analyzed, allowing IBM to simplify and streamline the UI in more recent versions. This usage data also identified some very common and strong use cases, which could then be made much easier. Most recently, Cognos data packages can be included in Watson Analytics. Moving forward, Watson Analytics is focused on adding new algorithms and new uses for Cognitive.

For operationalization, Ritika Gunar came up to discuss IBM’s commitment to Apache Spark as the “Analytics Operating System”. This original commitment has led IBM to become a major Apache contributor. The new analytics IDE – the Data Science Experience – was next; it has been widely extended with partners. This IDE is focused on three things:

  • Built-in learning because things are changing fast
  • Create using open source or commercial add-ins
  • Collaborate around the data and analytics

A very familiar modeling workflow UI has been used to make it easy for people to use the machine learning capabilities in the new environment. The environment also has versioning, collaboration tools, notebooks that support multiple environments and job scheduling. SPSS models can be integrated and new models can be built in a drag and drop canvas – all on Apache Spark. This can be integrated with Watson ML for deployment.

The CIO of Ameritas came up to tell a customer story. Their focus was on self-service (no data scientists – business and actuarial people), cloud for agility and a vendor with longevity. They use Watson Analytics, SPSS Modeler on the Cloud and Cognos Analytics on the Cloud.  And they are watching Cognitive a lot as they see the application of Cognitive to analytics as a critical next step.

Alistair Rennie introduced some of IBM’s planning and forecasting capabilities. A Cognitive business, he says, is a thinking and agile business. It’s not just about improving decision-making; being able to share and extend that decision-making drives agility. In particular, he says, this really changes the performance management and planning environment. The platform increasingly connects sales, operations and finance, delivering planning and forecasting capabilities that are more integrated.

Bill Guilmart introduced some clients. Zions Bancorporation came up to discuss their use of the integrated planning and compensation management tools as well as Watson Analytics. They particularly liked the more integrated, more real-time/on-demand approach as well as better explanations. GCI Corporation (Alaskan telecommunications) also came up and talked about more integrated capital investment management, project tracking and more. Bill gave a quick run-through of the new collaboration and visualization capabilities in the planning and management products.


I spoke at the IBM Process Transformation Summit today on Transforming Business Operations One Decision At A Time. I began with some examples of operational excellence, showing how four pillars really matter – data, metrics, processes and decisions. Of these, it is decisions and processes that offer opportunity to transform business operations.
Organizations that transform themselves in this way:

  • Focused on a decision – a decision about a single interaction, a single customer, a single transaction
  • Automated that decision
  • Leveraged this decision automation to drive process transformation

Many organizations don’t automate decisions because:

  • Decisions are rich in business meaning, making them hard to code
  • Decisions need business and IT collaboration and business people don’t read code
  • Decisions are constantly changing but code is hard to change

To automate decisions, organizations need to adopt a Business Rules Management System to address these issues and deliver the transparency, collaboration and safe agility they need. Business rules are the baseline decision automation technology but being a decision automation hero is going to take more – technologies that can answer more complex questions are required.
Analytic and Cognitive capabilities can be added on top of a business rules platform to expand the range of decisions being automated, improve the quality and precision of decisions and drive more value.

These technologies can be added inside decision services, allowing ever increasing sophistication of decision-making without process change. When these technologies are applied to automate decisions, processes can be transformed in four ways (a minimal sketch of such a decision service follows the list):

  • Cheaper by eliminating manual steps, waste
  • Sooner by driving straight through processing
  • Better by targeting precisely and uniquely
  • More flexibly by providing decision agility
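Here is the promised sketch: a hypothetical decision service in which business rules are layered over a predictive score, returning an action plus reasons behind a single service call so the surrounding process never has to change. The names and thresholds are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    reasons: list

def risk_score(applicant):
    """Stand-in for a call to a deployed predictive model."""
    return 0.12 if applicant["years_as_customer"] > 3 else 0.45

def decide_credit_limit_increase(applicant):
    """Business rules layered on top of the analytic score.

    The calling process just asks for a decision; swapping in a better
    model or new rules later does not change the process at all.
    """
    score = risk_score(applicant)
    if applicant["missed_payments"] > 2:
        return Decision("refer_to_agent", ["too many missed payments"])
    if score < 0.2:
        return Decision("approve", ["low predicted risk ({:.2f})".format(score)])
    if score < 0.4:
        return Decision("approve_with_lower_limit", ["moderate predicted risk"])
    return Decision("decline", ["high predicted risk ({:.2f})".format(score)])

print(decide_credit_limit_increase({"years_as_customer": 5, "missed_payments": 0}))
```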

I closed with three pieces of advice:
  • Treat decisions and processes as peers
  • Adopt a Business Rules Management System to automate decisions
  • Add Analytics and Cognitive to improve decisions

Jerome Boyer of IBM presented a methodology – Best Practices for Managing a Cognitive Business Operations (CBO) Journey Proven Method. Cognitive, in this context, is about extracting intent from unstructured text using natural language and thus improving process execution. By adding Watson services to a BPM/ODM installation, organizations can improve interactions, improve assignments, improve advice and deliver faster reactions – for instance, using Cognitive to triage emails and then decide how to route them to the right process. The challenge is to do this kind of thing quickly, so IBM uses a design-thinking approach and creates a “garage” feel in which a small team works incrementally.
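A minimal sketch of that email-triage pattern, purely illustrative: classify_intent stands in for whichever Watson natural language service is used, and the routing rules stand in for the decision logic.

```python
# Illustrative email triage: a cognitive service extracts intent from
# unstructured text, then simple decision logic routes the work item
# to the right business process.

ROUTING_RULES = {
    "cancel_policy":  "retention_process",
    "report_claim":   "claims_intake_process",
    "change_address": "customer_update_process",
}

def classify_intent(email_body):
    """Stand-in for a call to an NLU/intent classification service."""
    text = email_body.lower()
    if "cancel" in text:
        return "cancel_policy", 0.92
    if "accident" in text or "claim" in text:
        return "report_claim", 0.88
    return "unknown", 0.30

def triage(email_body):
    intent, confidence = classify_intent(email_body)
    if confidence < 0.5 or intent not in ROUTING_RULES:
        return "manual_review_queue"   # low confidence: let a person decide
    return ROUTING_RULES[intent]       # start the matching process

print(triage("I was in an accident yesterday and need to file a claim"))
# -> claims_intake_process
```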

The Bluemix Garage Method is based on a lean startup method focused on strong collaboration, iteration, design thinking etc:

  • Ignite with an innovation workshop
  • Design with a design-thinking workshop to create a quick win/minimum viable product (MVP)
    Move directly to a broader design
  • Realize with a prescriptive project
  • Scale with a lean program

He focused on the early steps in this process. First, the initial Innovation Workshop, which is designed to deliver:

  • Business Goals
  • Persona
  • Business Entities – nouns or objects
  • Business Decisions including mapping to data types (unstructured or structured)
  • Data Science Feasibility to see how feasible it will be to acquire, process and use the data

A good CBO opportunity will almost always have a process context. Business drivers for CBO include engagement, scaling expertise or processing new “dark” data – especially unstructured data. Common Watson services for these opportunities include natural language and speech handling, visual recognition and sentiment/tone/personality identification.

Second, the Design Thinking workshop is focused on user outcomes and on very rapid iteration, accepting failures but measuring and assessing progress – even using A/B testing to formalize this. The workshop is short and goes through understanding the problem, exploring the area and defining the solution. Example deliverables:

  • Understanding: Empathy map for each persona in terms of what they see, hear, feel, say and do. This identifies potential problems that can be addressed, ways to support the persona or eliminate something that is painful for them.
  • Understanding: Add some detail to how decisions are made with the identified data, specifically the cognitive services and logic needed to make each of the decisions (I would use a decision model).
  • Exploration: A model of the to-be process
  • Exploration: Feasibility matrix for unstructured data
  • Exploration: MVP goals and validation for those – Hills in design thinking terms – with no more than three of these, each saying who will be able to do what when the solution is done
  • Realization: MVP work based on two-week efforts: many iterations, rapid feedback and assessment against the goals identified. Learn, Build, Measure, repeat. Each component has its own cross-functional squad.

The end result generally involves a coordinated set of cognitive and decisioning services on Bluemix, connected to the organization’s business processes. Several of these services will be based on machine learning technologies that will also need to be trained with data in the usual analytic way – whether unstructured, semi-structured or structured data. The detailed methodology is available from IBM.

Neil Ward-Dutton of MWD Advisors kicked off the day at IBM’s Process Summit talking about digital transformation and business processes.  Neil began with a key point – that “Digital Transformation” is more than just hype but that it is also a complicated and multi-faceted concept.

Companies are being digitally disrupted because the internet and mobile technology allow very small companies to compete on something of a level playing field. This disruption generally hits in employee engagement, external (customer) engagement, operations and strategy/products. Different groups see these different areas as critically important – HR think about employees, marketing about customer engagement, COOs about operations and IoT etc. But these areas are increasingly blurred and interconnected because it is all about the more efficient, more effective coordination of resources. Technology allows these perspectives to be linked and integrated. And this is key as none of these pieces can be delivered separately – customer engagement depends on the engagement of the employees the customer interacts with, the operational environment and much more.

Many companies facing this disruption focus on a future based on cloud, social, IoT etc. But the key is actually how to get to this future – how to instrument products and services (so you can understand what is going on) and create agility in your processes and business models (so you can respond and adapt as you learn).

Organizations need to “weave a digital thread” that connects an increasingly distributed value and supply chain to a cross-channel, customer-centric experience. This creates a need for true process transformation, creating processes that tie all these pieces together. Good process platforms, he says, create digital threads. They allow knowledge sharing and work coordination; make it easy to change behavior and policy; and track/manage business performance.

This distributed approach to value chains shows up in how large companies are valued – organizations own fewer tangible assets and intangible assets like brand now dominate. The knowledge of how to coordinate all this becomes a critical asset and this knowledge is concentrated at the front edge of a company. But this is where turnover and outsourcing have the greatest impact – and turnover is higher than ever. A digital platform must therefore do more than just ensure efficiency – it must make those people effective.

This is where the changes in how systems work, the use of cognitive and analytic technology, start to matter. In particular they allow systems to be developed that adapt themselves to the way people work rather than the reverse. Systems are shifting from explicitly programmed to learning and adaptive systems. Becoming more predictive, more able to make recommendations. This change in how decisions get made is matched with a change in how process platforms work – supporting more flexible, more dynamic workstyles too. Bring processes to where the work is by decoupling the process from the user experience so process can be embedded in different user interfaces.

A new platform is emerging that combines mobile, cloud, social and analytics to drive recommended decisions, tasks and processes while supporting collaboration, bringing customers into the process and extending the process out to the field. A new set of choices has to be made about how best to design the work of an organization. You need a digital platform that can support this – model-driven for agility, instrumented for measurement, open for varied deployment choices, and designed for broad collaboration across business and IT.


Available now! Real-World Decision Modeling with DMN is done! Jan and I have put the book to bed and we are just working through the remaining steps to get it fully available. But if you are coming to Building Business Capability 2016 you can get the book there.

First, if you sign up for my workshop Decision Modeling with DMN on Monday October 31, you will get a free copy of the book! Plus you get a half-day introduction to decision modeling and the Decision Model and Notation (DMN) standard.

Second you can buy the book at the bookstore at the show. The bookstore is open during the exhibit hours on November 2nd and 3rd. I will be signing books at 4:20 on Thursday November 3rd, right after David Herring and I speak on our work with decision modeling at Kaiser Permanente. The book is going to retail at $49.95 but will be available at a special show discount price of $39.95.

Decision modeling with DMN is a great way to manage business rules requirements and analysis, to frame analytic requirements and to define decision-making, whether manual or automated. If you are focused on business rules, on analytics or on decision management then decision modeling and DMN should be in your toolbox of techniques.

If you are coming to the event and have questions about decision modeling just grab one of us and ask – several Decision Management Solutions folks will be there including me and Gagan Saxena, who is speaking on business architecture with Andrew Ray of Goldman Sachs.

Many of you already own Decision Management Systems: A Practical Guide to using Business Rules and Predictive Analytics – we’ve sold over 7,000 copies after all – but if you have been looking for the book recently on amazon you may have noticed something odd: the Kindle version is gone… This is not because the Kindle version does not exist – you can still buy it at IBM Press, on InformIT, or on Google Play/Apple iBookstore – but due to a strange pricing algorithm at Amazon. Essentially once someone sells the book at below the Kindle price, Amazon no longer sells the Kindle version. Seems illogical but there it is.

So, if you don’t yet have the Kindle version, or if you want a PDF version, check out the  IBM Press or InformIT sites.

I recently participated in a webinar on Modernizing Your Legacy Platform to Deliver an Optimal Customer Experience with Benjamin Baer of FICO.

We discussed how, with today’s consumers expecting a compelling digital experience, organizations trying to develop modern, mobile and secure UIs are struggling with the limited capabilities of their legacy systems. An ‘optimal digital experience’ requires real-time transactions, seamless connectivity to any channel and ease of use. It’s hard to deliver the best digital customer experience when doing a ‘rip and replace’ of your legacy systems is not possible. The webinar covered:

  • The critical characteristics of legacy systems that affect the customer experience
  • The key elements of delivering an effective, digital customer experience
  • Decision management technology – its capabilities and benefits
  • How a decision management platform can deliver both digital innovation and legacy modernization

Watch the recording here.

I was recently listed as one of the top Big Data/Data Science leaders on LinkedIn by KDnuggets. I was delighted to be on the list and to share it with some great thinkers and writers.

My interest, as regular readers of the blog will know, is not really in how you build analytic models or apply data science techniques to Big Data (or small data for that matter). My focus is on how you make sure that the analytic models you build, the data science you do, impacts the business. How do you make sure that your data science focuses on the right business problem and will be fit for purpose when you are done? And how do you make sure you can make the organizational and system changes necessary to adopt it? Recently I wrote this post on the broken links in the analytics value chain to set out the problem.

Our experience is that decision modeling, especially decision modeling using the new Decision Model and Notation (DMN) standard, is an essential tool for data science/data mining/predictive analytic projects. If your problem is at the beginning of the value chain, specifying business understanding correctly, check out this post on framing the problem using decision modeling or download this brief, 6 Questions To Ask Your Business Partner Before You Model. If your problem is not getting the business problem framed correctly but actually getting it deployed, then check out this post on operationalizing analytics or download the brief 5 Things You Need to Know Before You Deploy Your Model.

Regardless I hope you will follow me on LinkedIn and reach out if you have questions or if we can help.

I participated in a webinar Building Outstanding Customer Relationships: Delivering Relevant Next Best Actions for Retail Bank Customers with Steven Noels of NGDATA.

We discussed next best action marketing, where each customer becomes a “segment of one” versus a “segment of many,” improving marketing action precision and relevancy. Implementing next best action marketing requires the right technology to give you a complete understanding of each and every customer so you can decide on the right actions to take, at the right time, in the right channel. The webinar covered:

  • Key concepts of next best action marketing
  • The importance of understanding your customers in an omni-channel environment
  • How to get your organization aligned around the strategy
  • How to get on the road to success with the right technology in place

You can watch the recording here.

We are kicking off a number of business rules projects this month – some new, some part of existing programs – and we are going to be applying decision modeling in all of them. Why? Because decision modeling with DMN (the Decision Model and Notation standard) really works for business rules projects. When we work with Business Rules Architects to use decision modeling as part of their business rules management system (BRMS) implementations we see three key benefits:

If you want to learn more about the role of Decision Modeling in BRMS implementations, check out our webinar recordings:

Enova Decisions was launched in January of 2016 as an outgrowth of Enova International’s existing technology and analytics capabilities, which are used to offer online consumer and small business loans through 11 brands in six countries, including NetCredit and Headway Capital. Launched in 2004 as an online lender, Enova does all its own analytics for credit risk, fraud, operations and marketing and has 1,100 employees and nearly 5M customers around the world. Enova’s core business relies on easy application, rapid online underwriting and multi-channel service. This requires real-time decisions based on analytics around risk, fraud, marketing etc. To deliver this, Enova developed its own platform for deploying analytics and wrapping these analytics with rules for decisioning. The original platform became limiting so the Colossus platform was developed both to support the internal brands and for sale to clients through the Enova Decisions analytics-as-a-service brand.

The Colossus platform separated out analytics in a service oriented architecture. Colossus can run a wide variety of algorithms from regression to machine learning. It deploys models built in SAS, R, Python and is integrated with a wide range of third-party data providers. This platform supports all the Enova business and is the basis for Enova Decisions.

Enova Decisions, then, is a platform for real-time predictive analytics and on-demand decision-making. It supports both a decisioning interface and a reporting interface. The decisioning API allows requests for decisions that are then processed using rules in Enova Decisions and analytics run on the Colossus analytic platform. Data and distributions about decisions, scores and outcomes are stored and available through a performance dashboard on the reporting side. Thresholds can be set to trigger alerts, model updates etc.

Enova Decisions is positioned as analytics as a service. Initially Enova Decisions is focused on customer experience across customer acquisition, fraud and alerting and customer operations/growth (retention, debiting and collections). These are the areas where Enova has expertise — Enova has a 50+ person analytics team with experience in these areas — but the platform itself is completely agnostic and can deploy and integrate models developed by customers too. Because the platform was built to support the Enova business, it supports complex decision-making with multiple models, A/B or champion/challenger testing and many rules as a single API call.
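To illustrate the champion/challenger idea in that single-API-call style – a generic sketch, not Enova’s actual implementation – a small share of decisions can be routed to a challenger model and logged for later comparison:

```python
import random

def champion_model(application):
    """Current production scoring logic (stand-in)."""
    return 0.30

def challenger_model(application):
    """Candidate replacement being tested on a slice of traffic (stand-in)."""
    return 0.25

def decide(application, challenger_share=0.1):
    """Route a fraction of decisions to the challenger and record which path was used."""
    variant = "challenger" if random.random() < challenger_share else "champion"
    score = challenger_model(application) if variant == "challenger" else champion_model(application)
    decision = "approve" if score < 0.35 else "decline"
    # In a real platform the variant, score, decision and eventual outcome
    # would be logged so the reporting dashboard can compare performance over time.
    return {"variant": variant, "score": score, "decision": decision}

print(decide({"amount": 2500}))
```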

The platform runs on AWS and is typically set up and integrated before being billed based on usage. Enova Decisions does an initial set-up engagement around integration, third-party data integration and data analysis as well as (generally) an initial set of analytic models. Once it is set up, Enova Decisions provides operations support and may develop additional or updated analytics. Enova Decisions tries to keep the up-front integration and analytics development costs low so that customers can focus on the ongoing service cost.

The platform is evolving to support PMML to allow models to be integrated from a wider array of analytic tools, and business user rule management capabilities are under development to allow customers to manage rules directly rather than by working with the Enova Decisions team.

You can get more information on the Enova Decisions website, and they will be included in our Decision Management Systems Platform Technology Report.

The folks over at ZS Associates sponsored a study by the Economist Intelligence Unit on analytics titled “Broken Links: Why analytics investments have yet to pay off”. This report showed the classic challenge of analytics – 70% think analytics is very or extremely important but only 2% say their analytics efforts have a broad, positive impact. In response I recently wrote a series of blog posts – How To Fix The Broken Links In The Analytics Value Chain – over on our company blog. You can find the posts here:

  • How To Fix The Broken Links In The Analytics Value Chain
    The first step is to understand what is broken. The study showed two areas where analytic adoption fails – in problem framing/solution approach and in taking action/managing change. Analytic technology works but the analytic value chain is broken at the start and at the finish.
  • Framing Analytics with Decision Modeling
    Fixing the first broken link means accurately framing your analytic problem. What CRISP-DM calls Business Understanding is critical for analytic success yet most analytic teams jump straight from identifying a metric to building analytic models. Framing the problem in terms of the decision-making that must be improved is critical and decision modeling is the right way to do this.
  • Operationalizing Analytics with Decision Modeling
    Fixing the first link and applying analytics to the right problem is necessary but not sufficient – you still need to actually change organizational behavior and take action. Operationalizing your analytics so that the decision-making you identified is actually changed, doing this fast enough and tracking the effectiveness of this change are all critical. Decision modeling is key here too.

Decision modeling, specifically decision modeling using the Decision Model and Notation (DMN) standard,  can fix the broken links in your analytic value chain. To learn more, check out these briefs:

I recently got a chance to catch up with the IBM SPSS team for an update. Analytics, in IBM’s view and mine, are increasingly necessary as digitization increases the scale of business data and digital disruptors increase the difficulty of making good decisions. For those being disrupted, analytics offers a powerful way to fight back. Those CEOs that are outperforming in this difficult environment are focusing increasingly on predictive analytics (not just analytics) and streaming/operationalized solutions, not just visualization. In this environment IBM wants to offer a comprehensive platform for analytics with data connectors for all kinds of data, data preparation, analytics at scale, and insight to action with deployment. The full suite includes:

  • IBM SPSS Predictive Analytics (Last review here)
    • Statistics
    • Modeler
    • Analytic Server
  • IBM Prescriptive Analytics
    • CPLEX Optimization Studio
    • Decision Optimization Center
    • Decision Optimization on Cloud (Reviewed here)

Plus there’s pre-configured and configurable content on Customer Analytics, Operational Analytics and Threat/Fraud Analytics. All of these – SPSS Modeler, the decision management capabilities and the optimization engine – are part of IBM SPSS Modeler Gold.

The Predictive Analytics stack is focused on creating value faster by offering a mix of long-standing and new capabilities:

  • Simplified, scalable, code-free deployment
  • Advanced Model Management including Champion/Challenger
  • In-database and In-Hadoop modeling
  • Batch/Real Time/Streaming deployment
  • Analytic Decision Management deployment

One of the key areas of focus is scaling these capabilities on big data systems because customers overwhelmingly intend to deploy to Hadoop, Spark, cloud and streaming environments. Customers really want to move to this environment and this has to be reflected in the way the products work. IBM SPSS has two approaches for this scale:

  • Parallelism with support for Hadoop, Spark and streaming
  • In-database across a wide range of database technologies

Spark is clearly a critical element with IBM making a large commitment to Spark. IBM SPSS allows users to deploy models on Spark clusters for instance. Recently IBM has made more of the algorithms in SPSS massively parallel so that they scale up to support Big Data volumes without the need for Analytic Server. New algorithms have been added in the area of geospatial analytics.
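As a generic illustration of this kind of in-cluster modeling – plain Spark ML rather than the SPSS-specific integration, with made-up column names and paths – a model can be trained and scored where the data already lives:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml import Pipeline

spark = SparkSession.builder.appName("parallel-scoring-sketch").getOrCreate()

# Load a large dataset straight from the cluster (path is illustrative).
df = spark.read.parquet("hdfs:///data/transactions.parquet")

# Assemble features and fit a model; the work is distributed across the cluster,
# which is the point of pushing the algorithm to the data rather than sampling it.
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["amount", "tenure_days", "prior_claims"],
                    outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="fraud_flag"),
])
model = pipeline.fit(df)

# Score records in place and write the results back for downstream decisioning.
model.transform(df).select("id", "prediction", "probability") \
     .write.mode("overwrite").parquet("hdfs:///data/transactions_scored.parquet")
```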

IBM is also focused on involving developers, data scientists and business analysts in the predictive analytic process. This means allowing the Watson Analytics smart data discovery environment to collaborate with those using more advanced predictive analytics tools like SPSS Modeler. Some of the same underlying technology is used in Watson Analytics albeit with a different UI but the intent is to allow users of Watson Analytics to access models developed using the more robust workflow management in SPSS Modeler.

Open Source is, of course, a big deal in analytics, so SPSS has been supporting R, Python and Spark. These can be scripted directly but data scientists can also encapsulate this code behind a simple UI and make it available as a node in SPSS Modeler. An increasing array of these extensions is available in the IBM SPSS Predictive Analytics Gallery. Several of these also use the Watson APIs. Various Python and other extensions can also be loaded into the Modeler environment to make it easier to use a wider range of open source algorithms and scripting approaches in the workflows being managed in SPSS Modeler.

From a deployment perspective, Predictive Analytics on Bluemix allows models to be easily deployed to and then used in the cloud. The developer just needs to have access to the model project and they can create a scoring service in the cloud.
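The call pattern for such a cloud scoring service is simple enough to sketch. The endpoint, credentials and payload shape below are hypothetical placeholders, not the documented Bluemix API:

```python
import requests

# Hypothetical scoring endpoint for a model deployed to the cloud –
# substitute the URL and credentials your deployed service actually provides.
SCORING_URL = "https://example-analytics.mybluemix.net/score/churn_model"
API_KEY = "YOUR_API_KEY"

record = {"tenure_months": 42, "monthly_charges": 81.5, "support_calls": 3}

response = requests.post(
    SCORING_URL,
    headers={"Authorization": "Bearer {}".format(API_KEY)},
    json={"fields": list(record.keys()), "values": [list(record.values())]},
)
response.raise_for_status()
print(response.json())  # e.g. a prediction and probability for the scored record
```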

IBM has also recently launched the Data Science Experience leveraging RStudio, Notebooks and more. This is focused on a community environment for programmers and “hackers” and is web based, very focused on downloading examples, tutorials, community etc. All of the open source tooling can be integrated into the notebook metaphor and apps can be created using Shiny. This is primarily focused on exploratory data science and deployment today means taking the scripts and loading them into SPSS – the different environments have a shared understanding of open source scripting languages. IBM sees this as complementary to SPSS Modeler and sees more integration and overlap in the future.

You can get details on recent adds to SPSS Modeler here.

We have a set of online training coming up in July:

  • Introduction to Decision Management to introduce the key concepts and terminology of Decision Management and provide a framework for successful business rules and analytic projects
  • Decision Modeling with DMN to learn how to model requirements using the Decision Model and Notation standard, the cornerstone technique for specifying requirements for these powerful technologies
  • Decision Table Modeling with DMN to take your decision modeling to the detail you need for execution with decision table expert Jan Vanthienen.

These online training classes will help you position your projects and programs for ongoing success. Decision Management and decision modeling can help you improve risk management, become more customer-centric, deliver increased business agility, and dramatically improve your business processes. We offer the most complete, standards-based and vendor-neutral, Decision Management and Decision Modeling training curriculum. Proven with 800+ students and delivered live online in multiple, short sessions, our training is highly rated by students. Plus the early bird and multiple attendee/multiple class discounts make it really affordable.

Ready to sign up?  Click on the link for more information and registration.
Upcoming Training

Our training is great value and our live online delivery makes it easy to attend without disrupting project schedules. Even so, getting approval to pay for training can be tricky. To help you, we have developed an outline proposal to get support from your organization.
Convince your Boss

I recently participated in an International Institute for Analytics webinar,  Prescribing The Right Decision With Prescriptive Analytics (I am a faculty member of IIA). In the webinar we did some surveys that had some interesting results.

First off we asked the audience what they were using analytics for. This was interesting to me as it overlapped with a question I asked as part of the Analytics Capability Landscape research we did last year.  I took a subset of the answers and corrected for it being a multi-select answer this time to come up with the graph below that shows the degree to which analytics are focused on:

  • Reporting on data
  • Monitoring business performance
  • Improving decision-making

I ended up with three sets of answers – one from the Analytic Capability Landscape (ACL) survey focused on what folks were doing then, one from the same survey focused on where they expected their focus to be in 12-24 months and the IIA results from this week.

[Chart: why use analytics – ACL survey (current), ACL survey (12-24 months out) and IIA webinar responses]

You can see that when we surveyed before we got a strong focus on analytics being about reporting and monitoring, with a lot less on decision-making. The IIA results, on the other hand, showed a bigger focus on decision-making while the survey that asked what people expected to be their focus in the future showed a clear trend – away from reporting, away from monitoring and increasingly focused on decision-making. This is why we like to say “Begin with the decision in mind” – stay focused on the decision to maximize the value of analytics.

The second survey asked about prescription – to what extent are companies using analytics to prescribe action, not simply as a way to present insight. Here you see that over half the respondents are doing analytics but not driving to prescribed actions while another 40% are only prescribing action sometimes. This is a missed opportunity – using Decision Management to drive actions from predictive analytics is key to getting value from them.

The final survey asked about deployment – time to deploy analytics and see results. As usual, well under half the respondents said they were able to get their analytics into deployment in weeks or less – most took months or never really managed it. Focusing on how the analytic can drive action is one way to improve this – it focuses deployment efforts – but the other is to ensure that deploying and integrating the analytic is part of the same project as developing the analytic. As one client likes to say, “minimize the white space between analytic success and business success”.

We strongly recommend decision modeling for analytic clients to address all these issues. Using decision modeling:

  • Focuses everyone on the decision-making to be improved
  • Makes sure that the actions that are being guided or prescribed by the analytic are clear
  • Puts the analytic into a deployment and usage context right from the start

If you want to see the webinar, check out the recording here. If you want to learn more about decision modeling, check out this white paper on framing analytic requirements with decision modeling.

We work with a lot of business rules architects and we see that more and more of them are using decision modeling as part of their business rules management system (BRMS) implementations. I recently wrote a series of blog posts – 3 Reasons Rules Architects Are Adopting Decision Modeling – over on our company blog. You can find the posts here:

  • #1 Business Engagement
    Rules architects find that building a decision model (especially one using the Decision Model and Notation (DMN) standard immediately engages their business partners – business analysts, subject matter experts and business owners – because decision modeling let’s everyone see the forest not just the trees.
  • #2 Expanded Traceability and Impact Analysis
    Decision models link the business context to the business rules, enabling traceability all the way to the knowledge sources that drive rules and impact analysis all the way to the business objectives.
  • #3 Using Agile Not Waterfall to Write the Rules
    Unlike traditional rules-first approaches, decision models lend themselves to iterative development and agile project approaches.

Of course you need a decision modeling tool to make this work, one that is integrated with your BRMS. If you are interested, you can see how we have done this with our DecisionsFirst Modeler – BRMS integrations in action in these demonstrations:

One of my students (from the UCI Extension Predictive Analytics Certificate in which I teach Business Goals for Predictive Analytics) sent me this article on Toyota Financial Services and its use of data science, predictive analytics, in collections. It’s a great example of how to use analytics to improve your business outcomes and well worth a read. Three key points leap out at me:

  1. What Toyota Financial Services did is a classic example of what I call Micro Decisions (a phrase Neil Raden and I came up with for Smart (Enough) Systems). Instead of treating everyone the same – using a “broad brush” as the article puts it – analytics are used to drive a decision for each specific customer: what will help keep this customer in their car while lowering overall delinquencies?
  2. Solving this kind of problem – a Decision Management problem – often involves a mix of technologies and you need to be solution-focused not technology-focused as a result. As the article says, “the whole is greater than the sum of its parts”.
  3. It’s essential to keep the analytics team focused on the business problem, not just on the data or the analytic itself. The team co-located and kept its eyes on the decision-making they were trying to improve – “This is a team effort, not just the department, and you have many players that all have to cooperate”.

There’s a great quote in the article:

“Analytics is all about making decisions. Focus on what decisions you have to make and what actions you have to take, rather than starting with data or systems. Understand the business process. Involve the statisticians, and fit the analytics to the corporate culture.”

It’s well worth the read and if you like the article, check out this white paper on framing predictive analytics projects – something that will help you do what Toyota Financial Services did.

There is a great article from Bain and Company from 2013 that Elena Makurochkina (@elenamdata) pointed me to today – Infobesity: The enemy of good decisions. This is not only a fabulous phrase – infobesity feels viscerally correct as soon as you see it – but a great article too. Some quotes:

Companies have overindulged in information. Some are finding it more difficult than ever to decide and deliver…
Useful information creates opportunity and makes for better decisions. Infobesity does not.


These are great. More information – infobesity – will not improve decision-making. Simply overloading decision-makers or decision-making systems with ever more data will not get it done. This has been true for a long time – how long have we been talking about “drowning in data” after all?

Big Data makes the problem even greater, making it ever easier to drop more data on a problem and declare victory. We can mitigate this somewhat by using analytics to summarize and increase the value of our data but we run a real risk even so of overwhelming decision-makers.

And frankly decision-makers themselves are part of the problem. Ask them what data they need to make a decision (or that a system would need to make it) and they will rattle off a long list.

So what can we do about it? Well the folks at Bain suggest four things:

  • Focus clearly on the data you need
  • Standardize data where you can
  • Watch the timing of which data, when
  • Manage the quantity and source of your data, especially Big Data, to make sure it is relevant to decisions

But how to do this in an analytic project? We have found that decision modeling, especially decision modeling with the Decision Model and Notation standard, is a great tool. A decision model identifies the (repeatable) business decision at issue, decomposes it into its component sub-decisions, identifies the data that must be input to each piece of the decision-making and shows where the know-how to make (or automate) the decision can be found. Plus it gathers business, application and organizational context for the decision. Experience with these models at real customers shows just how these models can tackle infobesity:

  • One decision model showed that the data a group of medical experts had requested included lots of data that, while interesting, was not going to actually impact their decision about a patient.
  • The same one showed that all the data in the system could not replace one particular piece of data that had to be gathered “live” by the decision-maker.
  • Another showed that a large amount of claims history data did not need to be shown to claims adjusters if analytics could be used to produce a believable fraud marker for a provider.
  • A third model showed that adding more data, and even analytics, to a decision could not result in business savings because it happened too late in a process and all the costs had already been incurred. A cheaper decision meant an earlier decision, one that would have to be made with less data and less accurate analytics.
  • A model showed the mismatch between how management wanted the decision made – their objectives – and how the staff that made the decision were actually incented to make it.

and much more. In every case the clear focus on decisions delivered by the use of decision modeling cured actual or impending infobesity. For more on how you can model decisions for analytics projects, check out this white paper on framing analytic requirements with decision modeling.
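A decision model like the ones described above can be sketched as a simple structure (illustrative only, not a DMN serialization). Walking it yields exactly the input data the decision-making depends on – which is the antidote to infobesity:

```python
# A tiny, hypothetical decision requirements model: a decision, its
# sub-decisions, the input data each requires and its knowledge sources.
model = {
    "decision": "Approve claim payment",
    "knowledge_sources": ["Claims policy manual", "Regulatory guidance"],
    "requires": [
        {"decision": "Assess claim validity",
         "input_data": ["Claim form", "Policy record"]},
        {"decision": "Assess fraud risk",
         "input_data": ["Provider history", "Fraud score"]},
    ],
}

def required_inputs(node):
    """Collect only the data the decision-making actually depends on."""
    inputs = set(node.get("input_data", []))
    for sub in node.get("requires", []):
        inputs |= required_inputs(sub)
    return inputs

print(sorted(required_inputs(model)))
# Anything not in this list may be interesting, but it will not change the decision.
```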

I’ll leave the last word to the folks at Bain:

At root, a company’s performance is simply the sum of the decisions it makes and the actions it takes every day. The better its decisions and its execution, the better its results.