
As part of the Building Business Capability conference I gave a workshop on Business Analysis and Architecture with Decision Modeling. It’s hard to blog my own sessions but here are the takeaways:

  • Focus on decisions
    • Transactional, operational decisions
    • Decisions that control your processes
    • Manual or automated, rules-based or analytical
  • Model Decisions
    • Decision Requirements Models clarify requirements
    • Decision Requirements Models decompose decision-making
    • Decision Requirements Models scope rules and analytics
  • Work Top-Down
    • Top-down models reveal context quickly
    • Decisions first, rules second
    • Iteratively extend the model, write rules, develop analytics

If you want to know more about decision modeling with DMN why not download our white paper on DMN or sign up for our next online DMN training class in November?

Gagan Saxena, also of Decision Management Solutions, kicked off the first day of sessions for me at this year’s Building Business Capability show in Las Vegas. Gagan has been working on a large financial services client’s regulatory initiative and presented on the role of decisions and DMN-based decision models in regulatory compliance. A business architecture with decisions at its center, he says, is increasingly central to the way you can manage regulations and laws.

The problem at issue is how to ensure compliance and traceability with respect to large, complex regulations. These regulations represent knowledge that is meant to influence our decision-making, yet in most organizations there has been no attempt to understand and formally describe these decisions. This is made more complex by the number of players – layers of interpretation get created on top of the core regulation. All this verbiage is condensed into requirements and then implemented, creating potential confusion and a lack of traceability and explicability.

This organization tried to use business rules in this process but it remained very complex: it is easy to create many rules without it being clear how to manage and structure them – by line of business, geography, or something else? All choices seemed equally bad due to overlap and reuse between the rules. In addition, the different players were adding their own layers of rules, policies, guidelines and interpretations, adding to the complexity.

All this led the organization to refocus on decisions. It became clear to some folks in the organization that the decisions in their operational processes, the day-to-day decisions that are meant to be constrained by the regulation, were the persistent and most applicable element in their solution. Decisions, they saw, could be used as a structuring element for all these different rules, grouping them into the decisions that must be made regardless of the source of the rules. This decision-centric approach is a way to deliver on the promise of the knowledge economy, building knowledge-driven processes by focusing explicitly on the decisions involved.

Decisions are real and tangible: they can be identified, described and managed. Once identified, decisions allow you to inject your knowledge (and your regulations) into your processes effectively. This knowledge might be rules-based but it could also be analytic or mathematical knowledge. An understanding of the decision-making also allows you to identify the information that is needed. This is particularly critical when thinking about organizational decision-making rather than just individual decision-making, especially in a large organization where there are many players.

The moment of clarity for this client was the recognition that decisions break down into pieces – into other decisions that are required before the overall decision can be made. This allowed them to re-assess how much decision-making can be automated – where does the automation boundary fall today and tomorrow? It also let them see how the various decision-making technologies (business rules, data mining, predictive analytics, optimization…) fit together and how they can be used in combination.

To model these decisions the client picked the new Decision Model and Notation standard. DMN allows you to describe the question you have to answer to make the decision and then model the information and knowledge needed to make this decision as well as the other, smaller decisions that must be made first. And this needs to be put into an organizational and business context – who is involved, who cares, who has to approve and facilitate. All of this can be captured in a decision requirements model and mapped into the business context.
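
To make the modeling idea concrete, here is a minimal Python sketch of a decision requirements structure – a decision, its question, the information and knowledge it needs, and the sub-decisions that feed it. The names and regulation sections are purely hypothetical, not taken from the client’s model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    """One node in a decision requirements model (DMN-style)."""
    name: str
    question: str
    input_data: List[str] = field(default_factory=list)         # information requirements
    knowledge_sources: List[str] = field(default_factory=list)  # e.g. regulation sections
    sub_decisions: List["Decision"] = field(default_factory=list)

# Hypothetical regulatory example, purely for illustration
eligibility = Decision(
    name="Determine Customer Eligibility",
    question="Is this customer eligible under the regulation?",
    input_data=["Customer", "Account"],
    knowledge_sources=["Regulation Section 4.2"],
    sub_decisions=[
        Decision(
            name="Assess Risk Category",
            question="Which risk category does this customer fall into?",
            input_data=["Customer History"],
            knowledge_sources=["Regulation Section 4.3", "Internal Risk Policy"],
        )
    ],
)

def print_requirements(decision: Decision, indent: int = 0) -> None:
    """Walk the model top-down: the overall decision first, sub-decisions after."""
    print("  " * indent + f"{decision.name}: {decision.question}")
    for sub in decision.sub_decisions:
        print_requirements(sub, indent + 1)

print_requirements(eligibility)
```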

Critically for this client, these decisions are made in the context of the processes the client uses to run their business. They also had to be linked to the other elements of this client’s business architecture – its data architecture, systems architecture and so on.

To develop the Decision Architecture, this client began with the regulation itself, identifying the decisions it is intended to influence. These need to be modeled, using a decision requirements model, and put into the context of a process model. These decision requirements models can then be extended with a specific information model and decision logic in decision tables to fully define how the decision should be made.

The regulations themselves are generally logical and mathematical, laying out rules about how you are supposed to make decisions. One can find the decisions in these regulations by framing the questions that the rules are meant to impact. Threshold values, definitions and more can be turned into knowledge while the data being considered can be mapped to the information in the decision. Critically the client found that a decision requirements model built from the regulations could be used to engage all the various groups involved. Legal teams could add their interpretation into the model directly, business owners could see the implications of the regulations and more. And overlapping pictures from different sections of the regulation could be combined to see reuse, common decision-making, standard information etc. This is where this model started to become an architecture with shared data, shared decisions, multiple elements referencing the same specific sections of the regulation.

The decision models built are widely usable and understood, supporting a wide range of analysis techniques – prototyping, agile development, workshops, executive support and sponsorship. Many of these models were developed collaboratively by groups from very different silos – from legal to IT.

Challenges included shifting data and systems architectures. In particular, some old systems contained logic that had to be replaced with this decision model, and this kind of legacy modernization is tricky.

Moving forward this client is looking at rules beyond those driven by regulations. A focus on the business concepts and a more decision-centric culture is also helping simplify business processes by externalizing and managing decisions. There is also some interest in adding big data analytics into these decisions, using the models to see how machine learning and data mining could drive the decision-making. Finally, could this approach be used to specify how dashboards and visualizations need to support decision-making consistently and accurately?

I got an update from CRIF recently. CRIF is headquartered in Italy with $450M in revenue and 1,884 employees. They have customers in over 50 countries, offer credit bureau services in over 15 countries and credit decisioning solutions worldwide. Most customers are financial institutions though they have customers in other industries such as telco, utilities, energy and media. They offer the technology, data and management consulting around credit decisions. The core of their offering is the CRIF Credit Framework which includes the CRIF Credit Management Platform that has configurable products and pre-configured applications.

The CRIF Credit Management Platform’s configurable products contain business user configuration tools, web portals, reporting, auditing, compliance and security. Multi-country support is built in, with support for multiple languages and currencies, different data privacy rules and data sources as well as various lending instruments. The platform is available on premise, in the cloud and as a BPO offering. The configurable products comprise three core elements:

  • CreditFlow – BPM for credit processes with workflow, document management and authentication as well as user interface elements for multiple channels
  • StrategyOne – A Decision Management Engine
  • CreditBility – Additional reporting and ETL capabilities

Focusing in on the decisioning component, the core of StrategyOne is a decision flow, consisting of calculations, rules, models and outcomes. StrategyOne’s business user configuration tool runs as a Windows application and is aimed at non-technical users in the credit management space. Elements include:

  • Decision Trees
    An expandable, table-like layout showing the conditions/thresholds as well as the distribution of records between branches for the current simulation. KPIs can also be defined as calculations resulting in true/false, and how each branch performs relative to these KPIs is displayed too.
  • Scoring Models
    Predictive analytic scorecards can be specified by hand or imported from PMML generated from a customer’s choice of analytic workbench.
  • Decision Tables
    Business rules can be specified using cross-tab and rules-as-rows tables with a wide range of standard features like merging cells, multiple outputs etc.
  • Calculations
    Built with a point and click expression editor.
  • Exclusion rules
    Standard if-then rule structures
  • Champion/Challenger
    Specific branching in the flow to support A/B testing and champion/challenger: records are randomly assigned based on the user’s experimental design, allowing for reporting and comparison of results later (see the short sketch after this list).
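
StrategyOne’s own implementation isn’t shown here, but the underlying champion/challenger technique is simple to sketch. A minimal Python illustration – the split, thresholds and strategies are hypothetical, not StrategyOne’s:

```python
import random

# Hypothetical experimental design: 80% of records go to the champion strategy
DESIGN = [("champion", 0.8), ("challenger", 0.2)]

def assign_branch(design=DESIGN):
    """Randomly assign a record to a branch according to the design weights."""
    labels, weights = zip(*design)
    return random.choices(labels, weights=weights, k=1)[0]

def decide(record):
    """Route the record through the assigned strategy and record the branch
    so the two strategies' results can be compared later."""
    branch = assign_branch()
    threshold = 600 if branch == "champion" else 580  # hypothetical cut-offs
    decision = "approve" if record["score"] >= threshold else "refer"
    return {"branch": branch, "decision": decision}

print(decide({"score": 590}))
```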

Each flow has an input and output data structure used throughout. The decisioning strategy defined can be tested against a specific record or simulated with a set of historical data to see what the overall outcome distribution would be, along with the distribution of results through the flow. Simulations can be compared between versions and all the differences reported. The whole decision flow can be documented in a detailed, readable report and then deployed.

The CreditFlow tool has a business user configuration editor for laying out the process and a multi-device thin client user interface. One of the node types is a link to the StrategyOne environment to include decisioning. CreditBility, the reporting component, supports reports, analysis and dashboards. All the data generated by StrategyOne and CreditFlow (simulation data, champion/challenger data and production execution data for instance) is also available for reporting.

Moving forward CRIF is focused on expanding their pre-configured solutions, improving the business empowerment of the product, improving optimization capabilities and improving documentation/logging/control. Big Data integration is coming too.

More information is available here.

Jeff Ma, he of the book Bringing Down the House about winning at Blackjack, gave the closing keynote on advanced analytics.

First he asked: what can you learn from Blackjack?

  • Reduce the casino’s (or your competitor’s) edge – for instance, basic blackjack strategy executed perfectly reduces your losses by 50%
  • Avoid omission bias – people favor risk caused by inactivity over risk caused by activity; they hate taking action that creates risk and so they do nothing, even when that is wrong
  • The fallacy of “gut feeling” – all decisions are based on data, even if we are not clear what that data is
  • The right decision is different from the right outcome – making the right decision does not mean you will always get the right outcome, just that you will do so more often (a tiny worked example follows this list)
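
A tiny, purely illustrative simulation (a hypothetical bet with slightly favorable odds, not actual blackjack strategy) makes the last point concrete: a decision with positive expected value is still the right decision even though many individual outcomes are losses.

```python
import random

random.seed(42)

def play(bet=1.0, win_prob=0.55):
    """A hypothetical repeated bet where the odds are slightly in your favor."""
    return bet if random.random() < win_prob else -bet

# Expected value per bet is 0.55 - 0.45 = +0.10, so taking the bet is the right
# decision even though roughly 45% of individual outcomes are losses.
results = [play() for _ in range(10_000)]
print(f"average result per bet: {sum(results) / len(results):+.3f}")
print(f"losing bets: {sum(r < 0 for r in results) / len(results):.1%}")
```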

Always favor data over gut!

This leads to some characteristics of an analytically inclined organization:

  • Trust in the team (about what a good decision is for instance)
  • Intrinsic motivation (to make better decisions)
  • Communication
  • Metrics (as few as possible but no fewer)
  • Transparency
  • Competition (but with shared success)

What are some of the limitations of analytics? How might you challenge these limits?

  • Job security is critical – you must be focused on the long term win not on keeping your job for the next little while
  • Provide context but not necessarily control
  • Collect the data you need to measure good decisions
  • Remember that you are only trying to make better decisions not perfect ones – hard problems don’t need perfect solutions

Finally he says believe in the numbers. Use the data, believe in the data, make good decisions even in the face of individual challenges and bad (if unlikely) outcomes. And remember that this is a long term thing not a one-time thing.

Mark Clare, the Global Head of Data/Information Management and CDO at HSBC, came up next to be interviewed by Jill Dyche of SAS. Mark has been a CDO previously and took his latest role in large part because it is a business role focused on driving data into the operations of several divisions. He and his colleagues are targeting a digital world in which analytics and data are central to everything. As this was an interview, here are some takeaways in no particular order:

  • Executives who get it often have a background in data and/or technology
  • Discovery and operations are both essential – must be able to find new insights and be able to operationalize this and manage compliance, regulations etc.
  • Work the whole leadership team to make sure everyone is on the same page, including technology and finance
  • Shadow IT and shadow analytics arise when the core capability is inadequate to a business need; longer term this requires a focus on what should be a shared service and what should not be
  • It’s really powerful to have an injection of digitally native experience even into regulated industries but this is not a substitute for industry domain expertise
  • It’s important that business units are part of the prioritization discussion – it’s good to listen to their priorities and interpret them but engaging them directly in the discussion works better
  • Acting as a connection between business units can start driving shared analytics
  • As a global organization they are very focused on developing all the skills they need in every region – this means recruiting locally, moving people around and investing in skills training and development
  • Sustainable data requires sustainable processes, data ownership will require process ownership
  • Shifting the focus from reporting and monitoring to insights, deciding and acting.
  • Mix and match packaged and bespoke capabilities, focus “build” where there is unique know-how and business experience inside the organization
  • Speed requires strong business alignment and engagement – can’t move quickly without it
  • Still a bit of a technology space so hard to be an analytics leader without some technical skills but industry and business skills are essential
  • Unstructured data is data whose pattern you can’t see yet and new technologies make it easier and faster to find these patterns so you can use all your data
  • Not every analytics project has an obvious revenue target but it’s useful to always have one. Sometimes, though, all you can do is link your analytic success to your business partner’s metrics
  • Customer journeys cross lots of silos and business areas requiring a complex mapping of revenue generated to the contributors to that revenue.

It is, he said, an exciting time – so get into analytics and data now!

Jack Philips kicked off this year’s International Institute for Analytics Chief Analytics Officer Summit emphasizing the power and pervasiveness of analytics. Analytics, he says, is a classic disruptive innovation, with most companies seeing a slow start and then a sudden inflection point to rapid adoption and growth. Now the challenge is how to scale analytics, how to drive an analytic operating model forward. In particular an analytic operation needs a leader who is conversant in the technology and methods of data science but also a visionary and communicator, a program builder and manager, and someone comfortable with risk. Dan Magestro, head of research, joined Jack and highlighted the wide range of research and engagement IIA has offered its members. They wrapped up with the key theme of the event:

  • Advancing Analytics
    • Leadership – how to drive it forward
    • Value – how do I focus and show the value
    • Culture – how do I change the company to be data driven

With that, Bob Morison came up to open up the event talking about his recent research into “Analytics and Data Leadership: Focusing the roles (subscription required)”. This research was developed from 20 Chief Analytics Officers or Chief Data Officers, 16 of whom were the first to hold the role at their company. Many of them had both roles and they were skewed toward financial services. When it comes to these roles, three things drive their adoption: Enterprise Need for the role, Enterprise Commitment to the role and Candidate Capabilities.

Practically speaking organizations need both roles filled – either by one person or by two working closely together. This is hard because the roles are both new and evolving – role clarity was not the norm, creating risk. In particular, if both roles exist they must have some distinction, such as demand v supply or offense v defense – adding value to data with analytics v managing data quality and consistency. But enterprises need to be ready – in particular when data is being identified as an asset by the CEO and executive team. CDOs tend to be driven by fragmented data environments, regulatory challenges and customer centricity. CAOs tend to be driven by a focus on improving decision-making, moving to predictive analytics and focusing existing efforts.

He presented a 2×2 grid: Supply (is your data in shape?) v Demand for analytics:

  • Strong demand and supply drives a single person handling a combined role.
  • Demand but poor supply tends to drive a need for both roles with a team acting as the analytic data team in between them
  • No demand and no supply means you only need a CDO really

Regardless, many report to the CIO while others report to the CEO. One challenge for those that report to the CEO is that they become part of the C-Suite, which tends to drive a broader, GM-like focus on the business rather than a specific analytic focus. There was also a general sense that sometimes one role or the other is easier to “sell” and that starting there is good.

There was some disagreement in the research as to the way the roles should work – one person for both (with a strong sidekick for data), two people reporting to the same person, or two people reporting separately (the CDO reports to the CIO while the CAO reports to a business executive). There’s some clarity advantage to a single combined role but a split allows the CAO to focus on business while the CDO focuses on technology and infrastructure/processes.

Regardless of how the roles get set up, they are evolving:

  • Scope is expanding from tactical to strategic, from local to cross-functional and enterprise, standalone to an ecosystem
  • Focus is shifting from defense to offense, to more advanced analytics, to decision-making not just data

Success is being measured in terms of capabilities (data governance, big data platform, predictive analytics, model management) and in terms of business outcomes. Challenges are about alignment, building teams, prioritization, business usage, enterprise management of analytics, legacy and culture shift. Pretty usual barriers. What’s next? Lots of very ambitious agendas including product development, marketing, customer experience, digital business and more. Focusing on crossing the inflection point to deliver results.

Connecting with the business is always critical for these roles and all of them spent a lot of time working with business partners as educators and evangelists. They try and triage their customers to focus and generate the successes that will build long term growth in demand and partnering. Some had good working relationships with IT though some found this more challenging.

Organizationally some covered everything, some split their data governance teams out, and some divided analytic teams by type of modeling. Data management is generally more centralized, and advanced analytics is generally more centralized than BI. Lots of CoEs. CAOs in the audience talked about aligning with the business structure especially for more advanced analytics, and about the balance between centralized and distributed teams. Where teams are embedded in the business there was a clear focus on making sure all the analytics professionals are connected with dotted-line associations. Variation, no cookie-cutter answer and ongoing adaptation as things change.

Final session today focused on systems and architecture for Big Data Analytics. It began by talking about the friction between business and IT and how this is increasing, especially around information and analytics where business users want to be able to work with data without worrying about IT. This creates challenges for IT specifically:

  • The data furball – data governance and quality initiatives are long running and complex yet don’t seem to prevent silos and data inconsistency, both of which can be a challenge for analytics. Increasing needs for speed, big data volumes and velocity don’t mean that old problems go away.
  • Becoming agile and incremental – as the world becomes more fast moving, especially with respect to data, it becomes essential that data activities like governance become more incremental. It must be possible to do more of this as data is needed or adopted.
  • Deliver lower latency – all of this needs to be done faster so there is less latency in getting data from input to decision-making. The data, or analytic, supply chain must match the timeliness or latency of the business. Batch data preparation may be fine sometimes but not others, for instance. The speed of decision-making is driven by business need and the data pipeline has to support it.
  • Reduce IT costs – eliminate or reduce costs caused by bad data as well as the “raw” cost of the infrastructure. Plus it must be proportional to the value being created, incurred gradually rather than monolithically. Cloud, of course, is a big potential driver of cost reduction.
  • Provide security and compliance – finally, all this has to remain secure and compliant, especially as companies use the cloud more, access more external data and adopt hybrid cloud/on premise architectures.

From a solution perspective IBM sees the need for a data reservoir (data lake) based on but not limited to Hadoop. ING came on stage to join the IBM team and talk about their big data infrastructure. ING wanted to bring all their data together but it was essential to them that they could do this under control so they could address their regulatory and compliance needs. They saw a mix of IBM and open source technologies as appropriate and developed an evolving architecture to deliver it. Short, rapid iterations have been key to selling this value – 5-week proof-of-value projects for instance – as well as working top-down with the board of directors. They have also found that using simple analogies helped the business see the value and that it was critical to do this incrementally.

From a metadata perspective they see a move to a more incremental and crowd-sourced approach. Moving to Big Data can hide metadata back in code again if you are not careful. Learning as people do things is likely to be key given how fast things can move. Of course some things need more control and more centralized management.

APIs are another aspect of this infrastructure – data and APIs are increasingly coming together and must be used together in solutions. Analytic digital transformation is generally driving customer engagement. But to get real value this must also transform the operational environment through automation. APIs allow data to be made more available and allow analytics to be driven into code. This is what will drive a cognitive business. Making this work requires old apps to be exposed, new open APIs to be used, and different speeds of interaction to be managed. A hybrid cloud of microservices seems to work best.

IBM sees a final model that has several elements:

  • Discovery, insight, search, visualization and self service for analysis
  • Collaboration and governance
  • Information visualization fabric that delivers a consistent and holistic view
  • Built on a ubiquitous open source model

This has to enable and empower rapid, iterative self-service as well as more formal data science access and rapid deployment of the results.

Next up at IBM Insight is the Advanced Analytics keynote with a focus on how to get insight out of all these new data sources and infrastructure. The session is focused on Spark, on hybrid cloud/on premise solutions and on trusted data. Companies are moving up the maturity curve, they say, from cost reduction to a modern BI infrastructure and thence to self-service and new business models (not a maturity curve I would have used, as self-service is orthogonal to maturity, not a step on the path).

A cognitive business is based, they say, on analytics. For a cognitive business every business process (every business decision) is enhanced by analytics. Analytics is used to solve problems and make better business decisions across the whole organization while continuous improvement and learning drive ongoing change. This takes a range of analytic tools, from those aimed at business users to powerful data science and advanced analytics capabilities, strong data integration and more. Specifically, a platform that is:

  • Hybrid – cloud or on premise access to data of various types no matter where it is stored
  • Trusted – data and insight that is believable and usable
  • Open – based on and leveraging open source analytic capabilities

The session started with data scientists, one of the two key roles that the analytic stack must support. The IBM Predictive Analytics stack has recently been extended with Spark-based algorithms. This means that R, Python and Spark routines can be included in modeling. The coding of these algorithms can be encapsulated so that not everyone has to code. With more data and more options for analyzing the data, data scientists have had to become coders and data engineers too. Examples of needs for external data that must be integrated with internal data, open algorithms, streaming data and more abound.
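
IBM didn’t show the Spark integration itself, but to give a flavor of what a Spark-based routine in a modeling workflow looks like, here is a minimal sketch using the general-purpose DataFrame-based spark.ml API as it looks in current Spark releases – the column names and data are hypothetical, and this is not IBM’s wrapper around Spark.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("churn-example").getOrCreate()

# Hypothetical training data: two behavioral features and a churn label
df = spark.createDataFrame(
    [(34.0, 2.0, 0.0), (12.0, 9.0, 1.0), (45.0, 1.0, 0.0), (8.0, 11.0, 1.0)],
    ["tenure_months", "support_calls", "churned"],
)

# Assemble the feature columns into the single vector column spark.ml expects
assembler = VectorAssembler(inputCols=["tenure_months", "support_calls"],
                            outputCol="features")
train = assembler.transform(df)

# Fit a logistic regression model and score the same data (for illustration only)
model = LogisticRegression(featuresCol="features", labelCol="churned").fit(train)
model.transform(train).select("churned", "probability", "prediction").show()

spark.stop()
```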

Analysts too need support from the stack, which is where Watson Analytics comes in (I just blogged about Watson Analytics). In addition, they say, analysts need support for external data like weather and twitter data as well as unstructured content managed in box.net for instance. Enhance first, they say, and then analyze the data without preconceived notions and therefore without bias.

In parallel with the investment in Watson Analytics for discovery and analysis, the user interface for the core Cognos product has also been improved. IBM Cognos Analytics is the evolution of the Cognos BI stack with a new user interface – a simpler one with more work space and fewer menus, and one that is mobile-friendly. In particular the search is designed to be very context aware to make it easy to find content. While all the content developed with Cognos BI is brought forward, the new environment allows more interactivity so that users can add filters or otherwise interact with data in a report built by someone else.

It also allows end users to bring in different data sources, easily blend them and then report across the now integrated data. The product will suggest possible approaches to integrating the data based on intent expressed by the user. Having dynamically built a data model in this way, the user can self-serve reporting or visualizations against the data. The environment allows a relatively non-technical user to do more while still allowing a more sophisticated user to gradually configure and extend these assets.

It’s clear from the panel that follows this overview that Cognos customers really like the new UI, the mobile features, the ability for less skilled users (and mobile users) to develop their own dashboards and reports, and more rapid access to more data.

Three announcements then – support for Spark in SPSS Predictive Analytics, Watson Analytics enhancements especially around data access and Cognos Analytics as a new generation of Cognos BI with a focus on mobile and self-service.

After a quick catch up with the IBM Operational Decision Manager (business rules) and IBM SPSS Modeler teams to talk about their cloud and Spark enhancements respectively, it’s time for more on Watson Analytics. I last blogged about Watson Analytics last year so it will be interesting to see what’s been going on since then.

IBM Watson Analytics now has 500,000 registered professionals with widespread adoption across multiple industries. It’s positioned as a way to get an initial cognitive focus on your business. Last week they announced a set of data connections for data sources, an ability to do secure data access and some new social analytics.

From a product point of view, several new features have been introduced:

  • From a usage perspective the most effective driver of successful use is the availability of suitable data – once people have uploaded data they work with it more for instance. The new sample dataset marketplace is a way to get rapid access to data they can work with to learn.
  • Data replacement has been automated, ensuring that existing explorations and dashboards refresh automatically when data is replaced by updated data.
  • Integration with DataWorks provides a set of additional data connections, new data quality and shaping services and a secure gateway to on premise data sources. These connectors also allow the quality and shaping services to be applied to the data before it is uploaded for access in Watson Analytics.
  • The tool supports conversations for collaboration, allowing users to discuss an asset and poll colleagues for instance. Users can save the conversation with the asset also.
  • Results can be shared by downloading explorations, predictions, dashboards as images, PDFs or PowerPoint files. Private view links can also be used to share results inside Watson Analytics.

Meanwhile, in the lab they are developing expert storybooks – co-branded templates that deliver best-practice analytic presentations. In addition there has been a significant focus on social analytics, especially social analytics beyond engagement.

A Watson Analytics customer gave a couple of good use cases. In particular there is great value in rapidly identifying data sources as potentially useful, allowing effective analysis of them without having to integrate them into the data warehouse/ETL process first. Essentially they can load up the data and immediately get some suggested ways to view and analyze the data. This saved a bunch of time in terms of data preparation, conversion, analysis etc.

My second session at IBM Insight is Hamilton Faris, Chief Data and Analytics Officer of MetLife – a past customer of ours – talking about Predictive Analytics. MetLife of course is a big insurer with 100M customers in 47 countries. The group Hamilton runs focuses on bringing analytics to the forefront of the decision-making process across the whole organization. Insurance companies have all sorts of insight professionals like actuaries in business units, so the group initially started with a focus on operational analytics. It provides services, tools and technology across the range of analytic solutions and has a strong relationship with IT.

Hamilton starts with a key insight – you can’t get value from analytics without activating those insights and that means continuously moving toward Decision Management. It means moving from ad-hoc and one-time use to increasingly automated deployment and execution of analytics to drive ongoing decision management and embedded decision support – whether the decision is automated or not the analytic must be embedded into the systems and processes that people use and that handle transactions.

Three key elements:

  • Business focuses on model development, data requirements, SLAs and KPIs.
  • Technology delivers the data stream, integration, deployment activities and optimization.
  • Activation Enablement means tracking KPIs and SLAs as well as break/fix support.

Deployment is critical for analytics and this means understanding the decision you are trying to influence: how often you make it, what latency you need, what kind of answer is required and who is involved. How automated is this going to be and how often might it need to change? All great things to know and part of a decision-centric modeling approach – the kind of thing we capture in a decision model to frame analytic requirements.

As projects move into more complex decision-making, the issue arises of how the analytic fits with the business rules that are also part of the decision. These used to be embedded in legacy systems but increasingly MetLife is carving out decisions and managing the business rules and analytics for those decisions in a more agile, flexible way. Especially when there is value in moving past ad-hoc, one-off execution, they find this becomes critical. The ability to take a set of models and get them quickly deployed, with automated updates as necessary, drives significant incremental value by making them part of the day-to-day business cadence. These deployments vary from presenting scores to running batch updates to a more real-time scoring and decisioning environment – Decision Management.
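
This rules-plus-analytics pattern is generic enough to sketch. A minimal, hypothetical example (not MetLife’s actual logic) of a managed decision that wraps a predictive score with explicit business rules:

```python
def predicted_risk_score(claim):
    """Stand-in for a deployed predictive model; in practice the model is
    maintained separately and updated without touching the rules."""
    return 0.8 if claim["amount"] > 50_000 else 0.2

def route_claim(claim):
    """Decision service combining explicit rules with an analytic score."""
    # Hard business rules come first (the kind carved out of legacy systems)
    if claim["policy_status"] != "active":
        return "reject"
    if claim["amount"] <= 1_000:
        return "fast-track"
    # Otherwise let the predictive score drive the routing decision
    return "investigate" if predicted_risk_score(claim) >= 0.7 else "standard"

print(route_claim({"policy_status": "active", "amount": 75_000}))  # investigate
print(route_claim({"policy_status": "active", "amount": 500}))     # fast-track
```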

Some lessons:

  • In one example they were able to deliver changes in 24 hours and the team gradually extracted legacy business rules from the old system into the decision management system.
  • They focus on making sure that data elements being passed to the model for scoring are as wide as possible so that changes in the model don’t have to be re-integrated each time the model is updated.
  • Some groups want reports or ask for analytics without a decision-making context. Others understand the need to focus on decision-making but need help with the organizational change. Others are more decision-centric and focused on turn-around time and getting the next thing deployed.
  • It takes time to grow this kind of business; being results focused and building a capability that is scalable and repeatable is key, as is an ability to find the right opportunities.

It’s interesting how much value they can add even without changing the “core” decision like pricing or underwriting – improving queuing, focusing resources, selecting between alternative approaches can all make a huge difference to operations. As their business partners get more comfortable with the approach they are increasingly also looking for ways to apply it to the “core” business decisions too – taking advantage of the scale and flexibility the team offers to improve the actual actuarial and underwriting decisions for instance.

I am attending IBM Insight 2015 and will blog about a few of the sessions. First up, the opening keynote on the insight economy. IBM is focusing on the disruption caused by analytics and insight across all industries. Internal and external data and increasingly sophisticated analytics are changing how companies interact with customers, manage risk and more.

Bob Picciano, who heads up IBM Analytics, kicked things off. IT value, he says, is increasingly focused on how it can generate value from the data it is managing – unlocking the insight in the data to drive what IBM calls the Insight Economy. Of course the data itself is also changing, with more unstructured data and more device data, while cloud is changing how this data can be managed. You want, he says, to inject this insight into every decision in every business process.

IBM’s latest positioning of course is Cognitive – encouraging its customers to become Cognitive businesses that learn and adapt. There’s a little bit of a distinction between Cognitive Computing (Watson) and a Cognitive Business (which might also use other things). This focus on analytics and Cognitive is driving a huge business and rapid growth for IBM.

The Internet of Things (IoT), he says, is generating a huge amount of data that is mostly just dropped, not stored or analyzed at any point. New technologies however mean that this data can be put to work more effectively. This focus on IoT brought the first customer on stage, Whirlpool. Three out of four homes in the US have at least one of their products and they have a huge range of product lines. Whirlpool gave some simple examples like machines that detect a buildup of lint that is making a dryer less efficient, suggest maintenance that a machine needs etc.

Next up was a discussion of the role of twitter data. Coke discussed how they use twitter data to listen to the sentiment around their brand and how they hope to use twitter data in a more analytic way to personalize interactions. There were some good examples of how overall market sentiment and brand discussion get reflected in twitter conversations, though applying twitter data to personalized marketing is still just a plan.

Mike Rhodin was up next to start the discussion of Watson and Cognitive computing. He began by making the point that Cognitive systems are particularly good at unlocking data that is “dark” to traditional systems like unstructured documents, sound, images etc. Watson, he says, has been evolving rapidly over the last few years. There are now over 30 Watson APIs including NLP, neural networks and many other machine learning techniques. Cognitive systems he says have some differences: they understand natural language or natural image inputs, they reason against this understanding and they learn continuously. It’s not clear how these Watson components interact with their other reasoning and analytic technologies but there’s clearly a lot of investment in the Watson platform and ecosystem.

GoMoment came on stage to talk about their use of Watson. The key role it seemed to play was to be able to read a text and apply context – in this case being in a specific hotel at a specific location and time – to give an immediate response. The ability to interpret the texts allows for a fundamentally different interaction than a more structured app. This increases engagement and allows for more rapid response to problems. It’s a pity he did not talk more about how the application was developed – the balance between coding and learning for instance. Another Watson app, VineSleuth, was up next. This is designed to analyze wine and then personalize recommendations at wine4.me. As you try wine and review it, the recommendations change. Recently Watson was added to provide natural language and speech to text so that a kiosk could be developed that takes a verbal request and turns it into a recommendation. This time it’s clearer that the analytics and expert rules underpin the recommendations while Watson makes it easier to interact with these analytics in a natural way.

More of a focus next on social data with StatSocial talking about using Watson with social profile data to create a rich profile for consumers – personality traits derived from your social interactions. This allows retailers to customize their direct mail and other interactions.

New research is focused on helping Watson interpret images – giving it the ability to spot abnormalities and variations in MRIs and other images. This allows it to identify critical images out of a set, compare those with historical images to find similar ones and then present summaries of the diagnoses resulting from those other images. Watson, as always, presents its reasoning and the strength of its suggestion. The ability to include image data in this is potentially huge, of course.

Then of course we had to have a robot, Pepper from Softbank. It’s not clear how much of this is programmed or scripted. Very cute though.

Enough Watson apparently, back to Mike and the Weather Company. The Weather Company collects a ton of data about weather and works with IBM to provide this data and analysis/decisions based on it to companies. The focus on apps and localized weather prediction has driven a massive increase in the number of forecasts and the number of locations for which predictions are required. In parallel the company is focusing on decisions that companies make where weather should be part of that decision-making whether that’s shopping or routing for instance. Helping organizations like the Red Cross deploy assets in a more focused, more accurate way. Precision and timing are critical plus the ability to find people who are out and about and target them with specific instructions. Mike wrapped up by announcing IBM’s new Insight Cloud Services for delivering various IBM analytics and insight (including the work with The Weather Company and Twitter) through the cloud for embedding into mobile apps.

Box came on stage next to talk about integrating unstructured content stored in box into the IBM analytic and content management ecosystem. All seems pretty straightforward in terms of value proposition – makes perfect sense but not much to say about it yet.

Finally, a recap of Watson Analytics and its ability to empower “citizen analysts”. A demo of the Watson Analytics user interface followed with its nice visual interactive style. Still no way to deploy the predictive models you can build though. And of course there’s still the question of when it makes sense to empower citizen analysts – check out our Analytic landscape research bit.ly/1NwdJGU for a point of view.

Lots of interesting stuff here, though the cloud services availability was a critical announcement that got buried in the Watson hype. Still not clear how Watson and the rest of the IBM analytic stack should interact but hopefully this will become clearer.

SAS has been focusing on in-memory analytics recently, with its new Visual Analytics products for instance. Teradata and SAS have been working together to enable these in-memory products for Teradata customers and today announced a new appliance. The expanded Teradata Appliance for SAS, Model 750, now supports SAS High Performance Analytics (HPA) products, Visual Analytics, Visual Statistics and IMSTAT (in-memory STAT).

The Model 750 uses new Intel chips for faster computation and adds more memory options – 256GB, 512GB or 768GB per node. Scalability has improved also, with support for clusters of over 600 nodes. SAS Managed Server nodes can be included also to run traditional SAS products that don’t have a specific in-memory version.

In the new appliance, data for these in-memory products is loaded directly from the Teradata warehouse into memory without any duplication. Because the Model 750 sits in the Teradata Unified Data Architecture and connects to Teradata BYNET using Infiniband, all the data available inside the Teradata UDA can be processed in-memory using the appliance, regardless of where it is stored (data warehouse, discovery platform, Hadoop etc).

The Model 750 also provides highly parallelized access to data so that hundreds of streams can be processed in parallel. The in-memory analytic processing works without imposing any costs on the rest of the Teradata infrastructure, allowing high performance analytics to be executed against the same data involved in typical SQL transaction processing.

SAS and Teradata have over 200 customers in common. The combination of the new appliance and in-memory software can reduce the time to run complex analytics dramatically. Because the whole environment is integrated into Teradata BYNET, data governance and security are easier and the in-memory analytics are integrated directly into the data architecture.

There’s more information on the Teradata/SAS appliances here.

The Decision Model and Notation (DMN) standard is starting to get some real traction in the market. We have a growing number of clients adopting the modeling techniques (described in this white paper if you are not familiar with decision modeling with DMN) and a number using our decision modeling software DecisionsFirst Modeler. In a few weeks the annual Building Business Capability conference will kick off in Las Vegas (register here and use SPKLV15 for a 15% discount) and this year there are no less than 9 sessions talking about DMN. Hope to see you there – say hi if you see me!

I got a chance recently to catch up with the folks from XMPro. XMPro is a .NET-based Intelligent Business Operations Platform with headquarters in Dallas and offices in Denver, Sydney, London and South Africa.

The folks from XMPro see the world as a very event-driven environment with work arriving in an increasingly dynamic fashion day by day. These events might be planned events like onboarding a customer or a purchase event.  Planned events generally have planned actions and structured work – a process – that needs to be done.

Unplanned events also occur and these may have plans in place – planned actions and structured work that will be done if the event occurs – or they may be completely unexpected and cause unplanned actions and unstructured work. XMPro is designed to handle both kinds of events/work in an event-centric way. XMPro talks about the value of promptly responding to business events.

XMPro uses a Sense-Decide-Act metaphor for its platform and has various products to support this metaphor (a generic sketch of the sense-decide-act loop in code follows the product list below):

  • Sense – do you know what is going on?
    This requires situational awareness (XMMonitor)

    • Intelligent Operations Monitor to manage streaming data as queues that can be analyzed for patterns.
    • Business Activity Monitor to detect new events like emails, new records in a database.
  • Decide – should you do something about it?
    This requires decision support (and I would say Decision Management too)

    • Business Rules to drive explicit decision-making (XMRules)
    • Predictive Analytics to use data to drive better decisions (XMAnalytics)
    • Best Next Action to select between possible actions
    • Social Collaboration for decision-making that involves people
  • Act – who should be doing what, by when?
    This requires process management and improvement (XMWorkspace)

    • Business Process Management
    • Workflow
    • Case Management
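
As flagged above, here is a generic sketch of the sense-decide-act loop in plain Python. The event shapes, threshold and action are hypothetical stand-ins for what XMMonitor, XMRules/XMAnalytics and XMWorkspace do; this is not XMPro’s API.

```python
# Generic sense-decide-act loop; events, threshold and action are hypothetical.
EVENTS = [
    {"type": "sensor", "machine": "pump-7", "temperature": 92},
    {"type": "email", "subject": "order enquiry"},
    {"type": "sensor", "machine": "pump-7", "temperature": 128},
]

def sense(events):
    """Filter the raw stream down to events worth deciding on."""
    return (e for e in events if e["type"] == "sensor")

def decide(event):
    """Explicit rule; a predictive model could refine or replace this decision."""
    return "open_maintenance_case" if event["temperature"] > 120 else "ignore"

def act(event, decision):
    """Kick off the structured or unstructured work the decision calls for."""
    if decision == "open_maintenance_case":
        print(f"Opening case for {event['machine']} at {event['temperature']}")

for event in sense(EVENTS):
    act(event, decide(event))
```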

Some specific elements of the product stack are worth calling out:

  • XMPro is tied together and based on the XMConnect integration platform that can connect to any .NET API or web service and, through those, to applications, devices etc. This allows events to be pulled from a wide range of sources.
  • The Intelligent Operations Monitor runs on premise, in the cloud or as a hybrid and provides the standard queue, threshold and pattern matching capabilities for a streaming event handler. The same queue can be linked to explicit filters that trigger events and predictive models to trigger cases based on likelihood.
  • The Decision Support components have dashboards, reports and pivot tables for human-centric decision support along with collaboration tools for human decision-makers. More real-time dashboards, machine learning and business rules push toward more real-time automated decision management and best next actions – so called prescriptive analytics.
  • Within the analytics piece (XMAnalytics) there is a machine learning component as well as an XMAnalyzer component that analyzes process execution patterns to see which ones worked out best for the organization. It’s also common for customers to use their own business rules environment as well as their own BPM or workflow solution.
  • The Act components include XMDesigner to configure the processes, XMWorkspace for users to manage their tasks and XMAnalyzer to evaluate process patterns to see what works and how the process can be improved. These support structured, unstructured (case) or hybrid processes. XMWorkspace is mobile ready and supports offline tasks.

XMPro groups the Sense and Decide elements into their Operational Intelligence suite and the Decide and Act pieces into Business Process Management. Some customers start with the BPM elements and gradually add the operational intelligence. Others begin by watching the data on the operational intelligence side and then link the sense/decide elements to XMPro or third-party BPM or case management products.

XMPro use cases generally combine event processing (looking for patterns in events) with decision support/management (rules and analytics to decide what to do) to drive an appropriate workflow or action. Sometimes the decision-making in this pattern is more automated (decision management in my terms) and sometimes less so (decision support). Examples include routing inbound communication, master data management, production line optimization.

You can get more information on XMPro here.

Karl Rexer sent me a few highlights from his 2015 Data Science Survey. This was formerly known as the Rexer Data Miner Survey but the term Data Scientist has surged in popularity so it has been renamed. I have blogged in the past about the survey and like many in the business I look forward to the results each year – full results should be released later this year. Meanwhile, some interesting factoids from the survey:

  • Most people reported using multiple tools with a mean of 5 tools
  • Top few tools / vendors when companies’ tools are combined together:
    1. 36.2% — R
    2. 11.7% — IBM SPSS Modeler or SPSS Statistics
    3. 11.5% — SAS or SAS Enterprise Miner or SAS JMP
    4. 7.9% — KNIME (free or commercial version)
    5. 5.1% — STATISTICA
    6. 4.6% — RapidMiner (free or commercial version)
  • Over 60% of R users report that R Studio is their primary interface to R.
  • Job satisfaction is high, but not as high as in 2013
  • Only 40% feel their company has a high (or very high) degree of analytic sophistication
  • The analytic workload is expanding – most analytic professionals foresee an increase in analytic projects (of corporate teams 89% foresee more analytic projects)
  • Analytic teams are growing

The top analytic goals continue to revolve around customer data:  improving understanding of customers, retaining customers, improving the customer experience, and improving selling. The most frequently used algorithms have remained consistent for many years:  regression, clustering and decision trees have been the top algorithms since the research began in 2007. In 2015 more people report that their company has an active or pilot big data program (38% in 2015, compared to 26% in 2013).  However, the size of the datasets people report they typically analyze has not grown.

I was struck once again by the poor rate of deployment and measurement – only 63% of analytic professionals report that their analytic results are usually or always deployed/utilized and only 50% report that their company usually or always measures the performance of analytic projects. We are big believers in the power of good analytic requirements to drive this up – check out this white paper on framing analytic requirements for instance.

You can get more info on the survey at rexeranalytics.com

I have blogged about IBM ODM (Operational Decision Management – their Business Rules Management System) many times before (most recently about ODM Advanced here). IBM has a new cloud offering – IBM ODM on Cloud – that was announced in early October.

IBM ODM has offered various deployment options for a while – on premise, Bluemix, Pure Application System, SoftLayer etc. (private, IaaS and PaaS clouds). What’s new is IBM ODM on Cloud as a pure SaaS offering. Key drivers for the new initiative were to deliver fast time to value for smaller projects at an affordable price and with a focus on decisions not operations. The new offering is designed to be complete – a “born on cloud” environment – rather than just a deployment option.

IBM ODM on Cloud is:

  • An IBM ODM product
  • That runs on the IBM cloud (SoftLayer)
  • Rule designers continue to use Eclipse to define object models, decision services etc
  • A rules-based decision service is then pushed to the cloud
  • Business experts and rule administrators can then update rules, test them, simulate them, monitor and deploy rules directly in the cloud using cloud-based user interfaces etc.
  • The deployed service can be invoked like any cloud-based service (a hedged invocation sketch follows this list)
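
Invoking the deployed decision service typically amounts to a JSON call over HTTPS. Here is a rough Python sketch – the endpoint path, credentials and payload are placeholders, not a documented ODM on Cloud contract; the real details come from your own deployed ruleapp.

```python
import requests  # third-party HTTP client

# Placeholder endpoint and payload; the real path, authentication scheme and
# parameter names come from your deployed decision service, not this sketch.
ENDPOINT = "https://example-odm-cloud-host/DecisionService/rest/LoanRuleApp/LoanEligibility"

payload = {"applicant": {"age": 35, "income": 72000}, "loan": {"amount": 250000}}

response = requests.post(ENDPOINT, json=payload,
                         auth=("service-user", "service-password"), timeout=30)
response.raise_for_status()
print(response.json())  # rules-based decision returned by the service
```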

IBM ODM on Cloud offers a collaborative cloud service to capture, automate and manage rules-based business decisions. It uses the same artifacts as ODM on premise, allowing them to be shared. It is quick to get up and running (less than 48 hours) and pricing is completely based on a monthly subscription with no initial capital expense. IBM manages the servers and provides backup and high availability, disaster recovery, updates, maintenance etc., leaving users to worry only about the development of the decision service and its integration with their application(s). Instances can be provisioned worldwide in any SoftLayer data center. The cloud offering also has a try-for-free version, allowing a customer to trial the cloud deployment at no cost for two weeks.

The product offers out-of-the-box development, test and production environments (each running on its own server) with web-based tools for everything except the initial rule designer work. The three environments are accessible from a user portal; the development environment offers both Decision Center and a Decision Server while the test and production environments support Decision Server.

Once the rule designer is downloaded it is easy to point it to the ODM on Cloud install (identifying the test, development and production environments automatically). The designer is focused on creating and managing a decision service using ODM best practices and the standard rules product. The rule designer can reuse components developed for on premise installs also.

When a user logs in (after an initial project has been pushed to the cloud) they see a portal with the development, test and production environments to which they have access. Pre-defined roles are established that restrict things like editing or deployment and each user is assigned to appropriate roles. In development they have access to download the rules designer, Decision Center (Business and Enterprise) and execution server console. Test and production servers expose other capabilities.

Once the service is set up the usual Decision Center capabilities are available – Business or Enterprise Console options are both available, allowing a more or less technical view of the rules for editing. Simulations can be defined and run, tests defined etc. – all the functionality provided in Decision Center. Moving the rules to production and testing is all based on the standard governance framework, allowing all the usual capabilities for rule promotion and management to be used. Users with the right roles can make a rule change, test it, simulate the impact and deploy the change all from the web environment.

ODM on Cloud is designed to leverage existing customers’ investments in ODM while also making it easy for new customers to get started. It can be provisioned quickly and all the platform management activities are handled by IBM (including performance, security, backup etc.) as part of the subscription. The automated management of development, test and production environments and the focus on the standard best-practice approach to developing and managing decision services make it easy to get started quickly. Three tiers are provided that handle different levels of complexity and scale.

While it would be nice if the whole process could be managed without the rule designer download, the product is otherwise very cloud-centric, offering customers a genuine SaaS option for deploying and managing rules-based decision services.

More information on ODM on Cloud is available here.

I am a faculty member for the International Institute for Analytics (IIA). I have been working on some research on framing predictive analytics project requirements. As part of this research I have published an IIA research brief and recorded a briefing. These discuss decision modeling, a new technique for framing a business problem so it can be effectively solved with predictive analytics. In them I introduce the basic concepts behind decision modeling, discuss a real world example, and explain how decision models can be effective tools for defining predictive analytic requirements.

IIA Members can listen to the briefing or read the research brief. If you are not already a member and are interested you can get details and sign up here.

Those of you who follow the news on the new Decision Model and Notation (DMN) standard will have seen that we have become a supporter of the new OneDecision.io open source initiative. OneDecision is working on making it easy to execute a well-defined DMN decision table – essentially taking a DMN interchange file, defined using the DMN XML schema, and executing it as a server. An editor for such decision tables, and other capabilities, are also likely to follow.
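
To make "executing a decision table" concrete, here is a small, hypothetical Python sketch of evaluating a unique-hit-policy decision table held in memory. It is not OneDecision's API or code, just an illustration of the behavior such a server provides once it has parsed the DMN XML.

```python
# Hypothetical sketch of evaluating a DMN-style decision table with a UNIQUE
# hit policy: at most one rule may match, and its outputs are returned.
# This illustrates the concept only; it is not the OneDecision.io API.

def evaluate_decision_table(rules, inputs):
    """rules: list of (conditions, outputs); conditions map an input name to a predicate."""
    matches = [outputs for conditions, outputs in rules
               if all(pred(inputs[name]) for name, pred in conditions.items())]
    if len(matches) > 1:
        raise ValueError("UNIQUE hit policy violated: multiple rules matched")
    return matches[0] if matches else None

# Example table: a simple, made-up discount decision
rules = [
    ({"customerType": lambda v: v == "Gold",   "orderTotal": lambda v: v >= 100}, {"discount": 0.10}),
    ({"customerType": lambda v: v == "Silver", "orderTotal": lambda v: v >= 100}, {"discount": 0.05}),
    ({"orderTotal": lambda v: v < 100},                                           {"discount": 0.0}),
]

print(evaluate_decision_table(rules, {"customerType": "Gold", "orderTotal": 250}))  # {'discount': 0.1}
```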

Here at Decision Management Solutions we are big believers in DMN, using it on all our projects and teaching hundreds of people how to use it for everything from requirements analysis to business rules specification, from business modeling to predictive analytic requirements. We use our modeling tool, DecisionsFirst Modeler, for these projects. We’re focused on the business modeling side of DMN – what are your decision requirements, how do the pieces fit together, what are the business implications and so on. For implementation we like to link to your varied implementation environments, whether those are Business Rules Management Systems or analytic platforms. Working with OneDecision will give us another way to deliver implementations and we are excited to be working with them.

There’s more detail on the project at OneDecision.io and I’ll blog more as we make progress with them.

I last got an update from Dataiku in November of 2014. Since then they have raised money and opened an office in New York. New features and capabilities have been added to the product, and they are seeing good interest from US customers as they expand here.

The 2.0 version has been available since April of this year and has a significantly redesigned user interface. Much has not changed: projects still have a visual workflow to describe what’s going on, but it has been streamlined to make it easier to read. There is also a redesigned search and a sidebar palette of activities, including support for specific data manipulation activities and integration of Python and R.

Data can still be read from Hadoop, SQL and NoSQL databases. The interactive editor for datasets is largely unchanged, providing a rich set of tools for handling missing or bad data, fixing problems in the data, suggesting changes based on analysis of the data and so on. The user interface has been updated and some new transformations have been added, such as extracting numeric values from text columns.
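
As a generic illustration of the kinds of transformations described here – extracting numeric values from text columns and repairing missing or bad data – the pandas sketch below does the same work in code. It is an analogy only, not Dataiku's implementation.

```python
import pandas as pd

# Generic illustration of the transformations described above, using pandas
# rather than Dataiku's interactive editor. The data is made up.
df = pd.DataFrame({
    "price_text": ["$19.99", "24.50 USD", None, "bad value"],
    "quantity": [3, None, 7, 2],
})

# Extract a numeric value from a text column (non-matches become NaN)
df["price"] = df["price_text"].str.extract(r"(\d+\.?\d*)", expand=False).astype(float)

# Handle missing or bad data: fill missing quantities, drop rows with no usable price
df["quantity"] = df["quantity"].fillna(0).astype(int)
df = df.dropna(subset=["price"])

print(df[["price", "quantity"]])
```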

New in this release, predictive models can be created from this same data environment, allowing a modeler to move back and forth more easily between the data and the model development environment. Dataiku packages algorithms from various third parties, aiming to offer users the best algorithms from the open source ecosystem. They add some expertise on parameter setting, training approaches and so on to make these algorithms easier to use while still exposing some of these settings for customization. The tool automatically holds out validation sets and, as part of a general update to its results reporting, now also offers K-fold cross testing (to test the variability of the scores being developed).
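
The K-fold idea itself is easy to illustrate with scikit-learn, one of the open source libraries commonly packaged in tools like this. The snippet below is a generic sketch of K-fold evaluation and the score variability it reveals, not Dataiku's own code.

```python
# Generic sketch of K-fold cross-validation with scikit-learn; the spread of the
# per-fold scores shows the variability that K-fold testing is meant to expose.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0)

# Train and evaluate on 5 folds
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC per fold: {scores.round(3)}  mean={scores.mean():.3f}  std={scores.std():.3f}")
```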

The resulting project can be executed on Hadoop as MapReduce jobs, and in 2.1 models will also be executable using Spark and MLlib. Jobs can be scheduled so that data scientists and analysts can manage deployment of their projects. Most projects are still deployed in batch, but Dataiku also allows the project to be generated as code for deployment into a transactional environment. Future plans include a server for deploying projects for real-time scoring against production databases (which may not be the same data environment as that used for training).
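
For a sense of what Spark-based batch execution looks like, here is a generic PySpark MLlib sketch of training a model and batch-scoring a dataset. The column names and file paths are illustrative assumptions, and the code is not Dataiku's generated output.

```python
# Generic sketch of training and batch-scoring with Spark MLlib.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("batch-scoring-sketch").getOrCreate()

train = spark.read.parquet("hdfs:///data/train.parquet")          # hypothetical training data
to_score = spark.read.parquet("hdfs:///data/transactions.parquet")  # hypothetical scoring data

assembler = VectorAssembler(inputCols=["amount", "tenure", "num_orders"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train)

# Batch deployment: score the whole dataset and write the results back out
model.transform(to_score).select("id", "probability", "prediction") \
     .write.mode("overwrite").parquet("hdfs:///data/scores.parquet")
```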

More information on the product can be found here.

I got a briefing from a company that’s new to the US but that I have been aware of for a while – BusinessOptics. The company was founded in South Africa about four years ago and the product itself was first released about two years ago. The initial customers are primarily in South Africa, with a focus on financial services, telco and some manufacturing/retail.

The product is described as a prescriptive analytics platform and can be thought of as a combined business modeling and predictive analytics environment. It is cloud-based (currently on AWS, though cloud-agnostic and potentially on-premise versions are planned) and built on Hadoop and Spark. It has a strong focus on APIs and a robust security model. The product has four layers:

  • Data from various sources.
    The data layer can handle relational and non-relational data, flat files, HDFS or APIs etc.
  • Modeling, analytics and optimization – a knowledge layer.
    The knowledge layer handles modeling, integration with machine learning and optimization, version control etc.
  • Automation and APIs.
    These support integration with mobile apps, systems and machines while also providing tools for feedback into the data layer.
  • Visualization and scenario analysis.
    Focused on managing and refining the models in the knowledge layer with charts, dashboards, maps, pivot tables etc. Dashboards can also be embedded in other applications.

From a functional point of view, data is connected to and then mapped into the modeling space: data is read in, keys identified, discrete and continuous variables identified, tables linked and so on. Data in the sources can be easily visualized and explored.

Within the knowledge layer the user can define functions that leverage the data, creating calculated or filtered data for instance. Such functions, or “ideas” as they call them, can be built on, with higher-level functions using lower-level ones. Time-based analysis, aggregation, geospatial analysis, analytics and optimization, and a wide range of other built-ins can be applied. Within these functions, conditional logic can be added, for instance in a table layout. The model can be managed by dividing it up into namespaces and by using tags. Tools for zooming in and out as well as navigating a model allow very large models to be managed. The knowledge layer focuses on allowing analysts to efficiently model complex problems and systems in detail.
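
To make the notion of layered “ideas” concrete, here is a purely illustrative Python analogy – not BusinessOptics’ modeling language – showing lower-level functions composed into a higher-level one, with a piece of conditional logic laid out in a table-like structure.

```python
# Purely illustrative analogy for layered "ideas": lower-level functions feed
# higher-level ones, with conditional logic expressed as a table-like structure.
# This is plain Python, not BusinessOptics' modeling language.

def monthly_revenue(transactions):
    """Lower-level idea: aggregate raw transactions."""
    return sum(t["amount"] for t in transactions)

def revenue_band(revenue):
    """Conditional logic laid out as a table of (threshold, band) rows."""
    bands = [(100_000, "large"), (10_000, "medium"), (0, "small")]
    return next(band for threshold, band in bands if revenue >= threshold)

def account_summary(transactions):
    """Higher-level idea built on the two ideas above."""
    revenue = monthly_revenue(transactions)
    return {"revenue": revenue, "band": revenue_band(revenue)}

print(account_summary([{"amount": 4_200}, {"amount": 8_500}]))  # {'revenue': 12700, 'band': 'medium'}
```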

Unlike most visual programming/business modeling environments, the analytic/machine learning and optimization functions are built in and tightly integrated. Analytical ideas (machine learning, optimization) can be defined and hooked up to the data needed to train the models. These ideas can then be input to others, and filter expressions can be defined to control the data flowing through the model – for example, whether training data should be fed to the analytic model or a transaction fed into it for scoring. The vast majority of their customers use analytics as part of the solutions they develop, and the tool makes it easy to include a number of machine learning providers such as scikit-learn and Spark MLlib.

Views can be added to visualize the output created by these ideas, and their formatting can be controlled in some detail. Dashboards can be created that combine multiple visualizations, and visualizations can be parameterized so that users can interact with the view. Dashboard elements can also be linked, so that as users interact with data in one visualization it affects the others, e.g. filtering lists based on selections in another pane. The logic behind the visualized data, defined in the model, can also be viewed through the dashboard to explain the data.

Execution is logged at a very granular level, providing complete execution transparency, and this data is available through an API. Changes to the models are logged and versioned. Machine learning components can be set up to update automatically or not, allowing for review and control as necessary.

Once deployed, the same model can be used to drive dashboards and aggregated metrics as well as transaction-level execution.

More information on BusinessOptics can be found here.