
Like many of you I am awash in digital photos and have been trying to find a good way to manage them. For various reasons I picked Amazon Photos. One of the key benefits was the family vault – a way to let several people upload photos to a shared space. Of course, another reason is to take advantage of Amazon’s facial recognition AI.

So the good news is that both of these seem to work. The shared files are managed nicely on AWS (no surprise that Amazon can do this) and the AI does a pretty good job of recognizing faces and letting you edit this when it makes mistakes. So far so good.

But the facial recognition is account by account and there’s no way to use it in the family vault:

I can look at my photos and identify faces in them. The other members of my family can look at their photos and do the same.

But we can't tell Amazon that a face I have identified and a face they have identified are the same person. I can't even use the faces they have identified when looking at a family member's photos – only they can.

Plus there’s no way to see all the photos the family has of a particular person. Which was kind of the point of the family vault.

So why do AI programmers smart enough to do facial recognition fail to realize that they should make this work across a family vault? I suspect because, like far too many AI programmers, they are focused on the technology. Their success criterion is to make the technology solve a technical problem – identifying faces – not a business problem – managing a photo collection.

If you want your AI programmers to focus on the business problem and not the technical problem, check out this paper on our DecisionsFirst™ approach for success with AI.

I noticed the other day that my phone, which used to have a relatively sophisticated (if horribly complicated) “do not disturb” approach had reverted to only allowing a simple version. Online, many users complain that now they can’t really use it because it’s too “dumb” regarding when it rings and when it doesn’t. How did it come to this? Why did the feature get removed? Because business rules broke it.

The old approach let you specify business rules. You started by specifying rules about being asleep or in meetings and then you started adding exceptions. But this gets complicated quickly. How does a phone do all the things real people need it to do?

For example, don’t ring:

  • When I am in a meeting, unless the caller is my significant other and it’s the second time they have called in the last 5 minutes
  • Unless it’s one of my kids and this is the third time they have called, unless I am driving because then I can’t switch to the incoming call from the dial-in anyway
  • When I am at home, unless it’s someone that I call regularly or a favorite, unless I am asleep
  • If I am driving and on my headset
  • Unless I’m not in a meeting; but if someone important calls, answer, but otherwise don’t
  • Unless it’s my emergency contact calling for the third time in five minutes, then ring even if I am not using my headset

You get the picture. Too hard to manage effectively in a phone interface. Trying to use rules to solve this problem broke the feature.

But what if you thought about decisions instead of rules? Check out this decision model I came up with:

  • The phone needs to decide what to do (ring or not)
  • To do that it needs to decide where you are, what’s happening, who’s calling and how persistently they are calling
  • It can do all of these by itself using things it knows (sensors), basic settings (sleep hours, calendars), contact info that you already have and the call log
  • All you would have to do is specify a simple decision table saying, for a given set of conditions, whether you want it to ring or not

Of course there’s some UI design to see how best to capture this decision logic (a table, reasons to mute followed by reasons to ring, or something else) but the logic would be simple because of the sub-decisions.
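To make this concrete, here’s a minimal sketch of what that decision table might look like once the sub-decisions have been resolved. The conditions, rows and first-hit semantics are my own illustration, not an actual phone API:

```python
# A minimal sketch of the "ring or not" decision table, assuming the phone has
# already resolved the sub-decisions (location, activity, caller type, persistence).
# Condition values and rows are illustrative only.

ANY = None  # wildcard: this condition doesn't matter for the row

RING_RULES = [
    # (location, activity,  caller,              persistent, ring?)
    (ANY,       "driving",  "emergency_contact", True,       True),
    (ANY,       "meeting",  "significant_other", True,       True),
    (ANY,       "meeting",  ANY,                 ANY,        False),
    ("home",    "asleep",   ANY,                 ANY,        False),
    ("home",    ANY,        "favorite",          ANY,        True),
    (ANY,       ANY,        ANY,                 ANY,        False),  # default: stay quiet
]

def decide_ring(location, activity, caller, persistent):
    """First matching row wins (like a 'first hit' decision table)."""
    for loc, act, who, pers, ring in RING_RULES:
        if all(cond is ANY or cond == value
               for cond, value in ((loc, location), (act, activity),
                                   (who, caller), (pers, persistent))):
            return ring
    return False

# The inputs come from the sub-decisions, not raw sensors:
print(decide_ring("home", "asleep", "significant_other", True))     # False
print(decide_ring("office", "meeting", "significant_other", True))  # True
```

Note how simple the table becomes once the messy questions (am I asleep? is this caller a favorite? have they called twice in five minutes?) are pushed down into the sub-decisions.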

To learn more about Creating Agility and Operational Efficiency with Decision Modeling, read this whitepaper.

Ron Ross recently posted a question “What Happens When Behavioral Business Rules and Decision Logic Collide?” in which he asks whether a behavioral rule or decision logic should “win” when they disagree. The problem is that this is the wrong question.

Take his example about a city charging for its facilities. The behavioral rule is “A senior citizen must not be charged a recreational fee for use of facilities.” The decision logic is a fee calculation table (shown as an image in his post).

But this table is clearly wrong – if Senior Citizens should not be charged recreational fees then this logic is incomplete/inaccurate. Asking if this should “win” relative to the behavioral logic is a pointless question – the logic is broken. This table is just the default or generic calculation and the point of a decision is to decide for any SPECIFIC transaction not for a GENERIC one.

The right question is to ask how the city decides on recreational fees. One of the decisions in this model would be to calculate the base hourly fee. Others would be to identify exclusions (I bet Senior Citizens are not the only exception) and discounts. Each decision would be described by appropriate decision logic. The overall decision model would then decide on the fee correctly in all circumstances.
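As a rough illustration, here’s a minimal sketch of that decomposition in code. The sub-decisions, rates and exclusions are hypothetical – the real ones would come from the city’s policies:

```python
# Hypothetical decomposition of the "recreational fee" decision into sub-decisions.
# Rates, categories and exclusions are invented for illustration.

def base_hourly_fee(facility, resident):
    rates = {"pool": 10.0, "gym": 8.0, "field": 15.0}
    fee = rates[facility]
    return fee if resident else fee * 1.5        # assume non-residents pay a surcharge

def is_excluded(person):
    # "A senior citizen must not be charged a recreational fee" plus other exclusions
    return person["age"] >= 65 or person.get("veteran", False)

def discount_rate(person):
    return 0.25 if person.get("student", False) else 0.0

def recreational_fee(person, facility, hours):
    """Top-level decision: combines the sub-decisions."""
    if is_excluded(person):
        return 0.0
    fee = base_hourly_fee(facility, person["resident"]) * hours
    return fee * (1 - discount_rate(person))

print(recreational_fee({"age": 70, "resident": True}, "pool", 2))                   # 0.0
print(recreational_fee({"age": 30, "resident": True, "student": True}, "gym", 2))   # 12.0
```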

Ron argues that adding exceptions to decision tables complicates them. Well it can, especially if you do it wrong or build one great big decision table. A decision model handles them just fine.

So, my answer to Ron is simple. Build the right decision logic and then you won’t have to answer the question.

I recently wrote a post over on the Decision Management Solutions blog called 5 Reasons to fire your rules consultant, highlighting the worst offences of business rules consultants. Here’s the list:

  1. They want to do rule harvesting
  2. They call it a rule engine
  3. They put off business user enablement to phase 2
  4. They smush process and rules in one project
  5. They want to use ORs, ELSEs, ELSEIFs…Oh My!

Read the full post over on the Decision Management Solutions blog and subscribe to our newsletter for more articles like this in the future.

Ryan Trollip, CTO of Decision Management Solutions, and Charlotte DeKeyrel, one of our experienced decision modeling consultants, and I are all going to be at IBM THINK February 11th – February 14th, 2019 in San Francisco. Ryan and I are speaking on Thursday and you’ll find us at events involving Decision Management and IBM’s decisioning products like ODM, Watson Studio etc.

If you are going to be there and would like to talk Decision Management and digital decisioning, get in touch – info@decisionmanagementsolutions.com. We’re doing some great stuff with the IBM product stack and would love to share our DecisionsFirst™ approach to see if it’s a fit for your problems.

I am speaking with Ryan Trollip, CTO of Decision Management Solutions, and Stephane Mery of IBM on Delivering Excellent Customer Experiences with Analytics and Automation at IBM THINK.

Many customers are struggling to deliver consistent digital experiences across customer channels and touch points. Adopting a decision-first approach is a step in the right direction to provide this consistency and decisioning support over the entire customer journey. Operational decisions increasingly combine business rules, analytics and machine learning. This talk will share some use cases and illustrate how decision modeling can be used as a framework to inject AI into your business operations.

Come hear us discuss a proven approach to delivering digital decisioning that operationalizes predictive analytics and positions you for success with ML and AI.

I have known Tom for many years and enjoyed his books. He recently sent me a copy of his latest one – The AI Advantage: How to Put the Artificial Intelligence Revolution to Work (Management on the Cutting Edge).

Tom does his usual excellent job of introducing a technical topic – AI and machine learning – and focusing on what business leaders need to know about it. While he has a chapter on the various approaches to adopting AI technology, the book’s key message is that a technology-first approach to AI is a bad idea. Instead of technology-led “moonshots”, enterprises should use AI to solve practical, immediate, operational problems.

The book begins with a discussion of the role of AI in the enterprise and surveys what companies are doing today – both successes and failures. It was particularly refreshing to see failures discussed and Tom does a nice job of using some of these failures to illustrate how best to approach AI. He believes AI is going to transform companies, albeit perhaps more slowly than some believe, and encourages companies to identify a coherent AI strategy. He provides some good material on the elements of an AI strategy and outlines how different companies might take different approaches.

The biggest takeaway from the book is that success will not come from “moonshots” but from a more practical approach. As Tom says:

There are relatively few examples of radical transformation with cognitive technologies actually succeeding, and many examples of “low hanging fruit” being successfully picked

In our experience this is critical. Taking a big “we’ll just use AI” approach rarely works. Developing a comprehensive approach to a decision that mixes and matches AI with other technologies like descriptive analytics, predictive analytics and business rules works much better. Tom recommends that companies develop a series of less ambitious applications in the same area that in combination have a substantial impact. Each is less risky than a moonshot and you will have time to adapt to each piece. But, when combined, the overall impact is high.

We like to do this by building a decision model to break down a specific collection of closely related business decisions into their component sub-decisions. Some of these sub-decisions will be best done by people, some can be codified as business rules and some will need ML or AI. This lets you identify a set of smaller, less ambitious AI decisions and shows you how they will contribute to an overall more effective decision.

As Tom says:

Given all the media and vendor hype in the cognitive technology space, companies often feel pressure …to take on a cognitive project. It’s much better for a company to try and see beyond marketing blandishments about AI and to create the best fit with the organization’s strategy, business model and capabilities.

That requires that you really understand how you make decisions and how (and where) AI can help. Our experience is that a design thinking approach to decisions – DecisionsFirst Design Thinking as we call it – lets you redesign the decision-making and take advantage of AI. Too many companies have used AI to “pave the cow path” by automating existing work processes (particularly true with RPA technology). Really taking advantage of AI will require structured and controlled re-thinking of your decision-making.

Usefully for business executives, he provides a very accessible survey of the capabilities of AI:

  • Create highly granular predictive and classification models
  • Perform structured digital tasks (RPA)
  • Manipulate information (OCR and data integration)
  • Understand human speech and text
  • Plan and optimize operations
  • Perceive and recognize images
  • Move purposefully and autonomously around the world
  • Assess human emotions

For each of these he provides a concrete discussion of how they might reasonably be used, not in the future but now – discussing in passing how work is likely to need to be redesigned to take full advantage of these technologies.

He also discusses how, while ML and AI represent an extension of the world of advanced analytics, they also differ from traditional data mining and predictive analytic approaches in 3 ways:

  • They are usually more data intensive and detailed, limiting them to scenarios where there is a lot of data
  • They therefore need to be trained on a subset of the data because there is so much available.
  • They can learn continuously as data is fed through the resulting algorithm, rather than waiting for the next formal update.

AI and ML can be used in many of the same circumstances that predictive analytics and data mining can be used. Think of them as both an extension of these techniques and as something distinctly different.

In later chapters he talks about jobs and skills in a world where AI is increasingly pervasive and about some of the social and ethical issues such as transparency and bias. As he says:

As cognitive technologies are developed, organizations should think through how work will be done with a given new application, focusing specifically on the division of labor

He has a good discussion of transparency and our experience has been that considering AI as one part of the solution, mixing more opaque AI with more transparent business rules for instance, really helps. In addition, new technologies such as LIME and AI OpenScale help explain AI model results in a way that can be combined with the explanations produced by these other more transparent technologies.

One of the themes in the book is the challenge of getting AI to really affect an organization’s core business operations. In a Deloitte survey of cognitive-aware executives, for instance, 47% said it was “difficult to integrate cognitive projects with existing processes and systems”. As Tom points out, this integration is essential if you want to make a real impact. This matches a recent McKinsey survey that found analytic “leaders” investing much more heavily in this “last mile” integration than others. A Decision Management platform – a digital decisioning platform, as some call it – is a key ingredient in tying advanced analytics, ML and AI into your day-to-day operations.

Finally, from a Decision Management perspective, Tom has some great illustrations of the value of a decision-centric approach and of how AI is integrated into an overall approach. Early in the book he quotes Jeff Bezos’ Letter to Amazon Shareholders from 2017 (my emphasis added):

But much of what we do with machine learning happens beneath the surface. Machine learning drives our algorithms for demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations, and much more. Though less visible, much of the impact of machine learning will be of this type – quietly but meaningfully improving core operations.

“Quietly but meaningfully improving core operations since 2002” could be a motto for Decision Management! That’s the focus of Decision Management and always has been. In that sense, machine learning (ML) and AI represent powerful new tools for doing what we have always done rather than something requiring a radically different approach. Tom even makes this point, identifying how rules-based systems have been addressing many of the scenarios for which AI is being considered. Like Tom, we see potential in combining these rules based and machine learning approaches to produce adaptive systems.

I will end with one of Tom’s quotes from early in the book:

The businesses and organizations that succeed with AI will be those that invest steadily, rise above the hype, make a good match between their business problems and the capabilities of AI, and take the long view.

This is not a book that is going to add to your technical knowledge about AI but it’s a great book for business executives and for those who want to think more deeply about how AI will change their business. You can buy it here The AI Advantage: How to Put the Artificial Intelligence Revolution to Work.


We’ve blogged recently about some of the challenges in analytics and AI – How More Companies Can Maximize the Potential of Analytics and 80% of insurance carriers aren’t delivering high impact analytics – building on some great McKinsey research. They recently published another article, this time targeted at Chief Analytics Officers – Rebooting analytics leadership: Time to move beyond the math.

This was a great article. First, I loved that it made it clear Chief Analytics Officers should think about AI and Analytics together. This is key as it focuses AI on decision-making (where analytics is already focused) not just conversational AI. It’s not that chatbots aren’t useful, they’re just REALLY different from decision-making AI and are better thought of as user experience technology.

I was also glad to see them identify the things you can’t rely on to drive analytics/AI success like being “born digital”, having an analytical CEO or being in an existential crisis that drives people to act. You need to be able to succeed even when these things are NOT true.

The whole paper for me was summarized by this one great customer quote:

just getting the math right doesn’t drive the change

So true! McKinsey had some great suggestions that align with what we have seen work for our customers:

  • Build a coalition of equals across the business/operations, analytics and IT.
    • This means you need ways to talk about the problem everyone understands – not math but decision models.
  • Put business value front and center and align analytics opportunities and innovation with the business unit’s vision and priorities.
    • Identify the business’ key metrics and find the decisions that, if improved, will improve those metrics. Focus on those decisions.
  • Focus on IT as a strategic partner rather than simply as an execution arm.
    • Engage them early to think about the last mile deployment of analytics.
  • Heavily invest in integrating advanced analytics into the workflow.
    • We use decision-centric design thinking to get the decision model the business needs, build analytics to support that and then use the decision model and a modern Business Rules Management System to push that decision into systems and processes.
  • Be a change agent and advise boards and executives on what’s possible.
    • Don’t let those executives throw up their hands and say “I don’t understand AI so do what you want”. Make them work for it!

Our DecisionsFirst approach is exactly what you need to succeed as a Catalytic CAO so if you are a Chief Analytics Officer, or want to become one, check out our Chief Analytics Officer resource page or contact us to chat.

Recently we have posted on the Decision Management Solutions blog about a couple of interesting pieces of McKinsey research that discuss the unfortunate truth – most companies are NOT succeeding with advanced analytics.

First, there’s this general research How More Companies Can Maximize the Potential of Analytics:

“Senior executives tell [McKinsey] that their companies are struggling to capture real value. The reason: while they’re eking out small gains from a few use cases, they’re failing to embed analytics into all areas of the organization.”

McKinsey identifies three key challenges:

  1. Aligning on strategy
  2. Building the right foundations of data, technologies, and people
  3. Conquering the last mile by embedding analytics into decision making

Second is a piece specific to insurance, but likely typical of companies in other industries, that identifies that 80% of insurance carriers aren’t delivering high impact analytics (and I suspect most of the others are only doing so very narrowly).

Why aren’t they succeeding? 38% of those surveyed cited a failure to integrate analytics into the frontline – more than cited poor data quality or a lack of strategic support.

What these research reports have in common is the identification of the importance of not just developing analytic insight, but actually embedding it in front-line workflows and systems – by using it to drive better operational day-to-day decision-making.

We have found that a straightforward, three step approach addresses this:

  1. Begin with decisions, not with data
  2. Begin with operational decisions, not strategic ones
  3. Begin with an agile analytic deployment platform, not with visualization

Check out the blog posts and the underlying research. And when you’re ready to succeed with advanced analytics, contact us.

Frontline Solvers has been in business for over 25 years and focused on democratizing analytics for the last five years. They identify themselves as an alternative to analytic complexity with a focus on leveraging broadly held Excel skills and a large base of trained students. They offer several products for predictive and prescriptive analytics and have sold these products to over 9,000 organizations over the years. Their customers are both commercial and academic with hundreds of thousands of students using the tool and 500,000 cloud analytics users. Their commercial customers include many very large companies, though generally they sell to several distinct business units rather than at a corporate level.

Frontline began with their work in solvers (prescriptive analytics) and have worked “backward” into predictive analytics. Their approach is very focused on avoiding analytic complexity:

  • Start small and keep it simple, with a focus on rapid ROI.
  • Recognize that companies have more expertise than they think – Excel and programming skills for instance plus all the students who have used the software in MBA classes.
  • “Big Data” and more complex ML/AI technologies are not essential for success – ordinary database data is often enough.

Frontline is focused on decision support today but rapidly moving into decision automation – decision management systems.

Their core products are modeling systems and solvers for optimization and simulation, used to build a prescriptive model, rather than the analysis of lots of data (though they have data mining routines too). These kinds of optimization models are often called prescriptive analytics because they recommend – prescribe – specific actions for each transaction. Prescriptive analytics can, of course, also be developed by combining predictive analytics and business rules – driving to a recommended action using the combination. Frontline recognizes this and envisions supporting business rules in their software.

Solver-based prescriptive analytic solutions generally focus on many transactions in a set not a single transaction – what Frontline call coordinated decisions. Sometimes these decisions also have no data, no history, so a human-built model is going to be required not one based on data analysis. Indeed, any kind of prescriptive analytic approach to decision-making is going to require human built models – either decision models to coordinate rules and analytics or a solver model (or both, as we have seen in some client projects).

Frontline argues Excel is the obvious place to start because Excel is so familiar. Their RASON language allows you to develop models in Excel and then deploy to REST APIs. They aim to make it easy for business domain experts to learn analytic modeling and methods, to provide easy-to-use tools and then make it easy to deploy. Working in Excel, they provide a lot of learning aids in the product that pop up to help users. They also have an online learning platform (solver.academy) with classes and there are over 700 university MBA courses using Frontline’s software to introduce analytics methods.

The core products are:

  • Analytic Solver – a point and click model builder in Excel, including the cloud-based Excel version which they have been supporting since 2013.
  • RASON – modeling language that can be generated from the Excel-based product or edited directly.
  • SDK – supporting models written in code, developed in RASON and/or Excel, and deployed as REST APIs.

Their base solver is built into the desktop Excel (OEMed by Microsoft). As the cloud Excel does not have this, they have built online apps for optimization, simulation and statistics that work across Excel Online and Google Sheets. The latest version of Excel Online is now ALMOST able to support the full Analytic Solver Suite and this is expected to be complete in Q1 2019, allowing them to unify the product across desktop and online Excel.

To bring data into the solver, they use the Common Data Service as well as standard Data Sources for data access. This makes it easy to connect to data sources. They also use the Office Workbook model management tools (discovery, governance, audit), which are surprisingly robust for those with corporate licenses to Excel.

The engine has four main capabilities:

  • Data mining and forecasting algorithms.
  • Conventional optimization and solver.
  • Monte Carlo Simulation and decision trees.
  • Stochastic and robust optimization.

For very large datasets (such as those used in data mining), the software can pull a statistically valid sample from, say, a big data store. The data can be cleaned, partitioned into training and validation sets and various routines applied. The results are displayed in Excel and PMML (the standard Predictive Model Markup Language) is used to persist the result. Obviously the PMML is executable both in third party platforms and in their own RASON language.
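The sampling and partitioning step looks something like this generic sketch – illustrative only, not Frontline’s actual data mining routines:

```python
# Generic sketch of sampling a large table and partitioning it for training and
# validation - illustrative only, not Frontline's actual routines or file names.
import pandas as pd
from sklearn.model_selection import train_test_split

big_table = pd.read_csv("transactions.csv")            # stand-in for the big data store
sample = big_table.sample(n=50_000, random_state=42)   # statistically valid random sample

train, validation = train_test_split(sample, test_size=0.3, random_state=42)
print(len(train), len(validation))                     # 35000 15000
```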

RASON is a high-level modeling language that allows the definition of data mining models, constraints and objectives for optimization, and distributions and correlations for simulation. A web presence at rason.com allows this to be written in an online editor and executed through their REST API. RASON is JavaScript-like and can embed Excel formulas too. RASON can be executed by passing the whole script to the API using a JavaScript call. An on-premise version is available too for those who wish to keep execution inside the firewall.
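As an illustration of the deployment pattern, here’s a minimal sketch of submitting a model script to a REST endpoint from Python. The endpoint URL, authentication and payload shape are assumptions for illustration – consult Frontline’s RASON documentation for the real API:

```python
# Sketch of posting a model script to a solver REST endpoint.
# The URL, auth header and payload shape are assumptions, not Frontline's real API.
import requests

rason_model = "...your RASON model text here..."    # model body omitted; author it at rason.com

response = requests.post(
    "https://rason.example.com/api/model",          # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY",
             "Content-Type": "application/json"},
    data=rason_model,
)
response.raise_for_status()
print(response.json())                              # e.g. solver status and variable values
```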

The Solver SDK has long supported coding of models. Since 2010 the SDK has been able to load and run the Excel solver models. The RASON service came in 2015 and in 2017 they added integration with Tableau and Power BI, and this year to Microsoft Flow. These integration steps involve generating apps from inside the Excel model using simple menu commands. Behind the scenes they generate the RASON code and package that up in a JavaScript version for consumption.

You can get more information on Frontline Solvers here.

Cassie Kozyrkov, the Chief Decision Intelligence Engineer at Google, recently wrote an article titled Is your AI project a nonstarter? in which she identified 22 checklist items for a candidate AI project. It’s a great article and you should definitely read it. In particular you should note the quote at the top:

Don’t waste your time on AI for AI’s sake. Be motivated by what it will do for you, not by how sci-fi it sounds.

And what it will do for you is often help your organization make better decisions.

We always begin customer projects by building a decision model. Working directly with the business owners, we elicit a model of how they want to decide and represent it using a Decision Model and Notation (DMN) standard decision requirements model. This shows the decision(s) they want to make and the requirements of those decisions – the sub-decisions (and sub-sub-decisions), the input data and the knowledge sources (policies, regulations, best practices and analytic insights) that describe their preferred approach.
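For readers new to decision requirements models, here’s a minimal sketch of the structure such a model captures. The decision names, inputs and knowledge sources are hypothetical examples, not a DMN tool’s API:

```python
# Minimal sketch of a DMN-style decision requirements structure.
# Decision names, inputs and knowledge sources are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Decision:
    name: str
    sub_decisions: list = field(default_factory=list)      # other Decision objects
    input_data: list = field(default_factory=list)         # names of input data
    knowledge_sources: list = field(default_factory=list)  # policies, regulations, analytics

eligibility = Decision(
    name="Determine Claim Eligibility",
    input_data=["Policy", "Claim"],
    knowledge_sources=["Eligibility Policy Manual"],
)
fraud_risk = Decision(
    name="Assess Fraud Risk",
    input_data=["Claim", "Claim History"],
    knowledge_sources=["Fraud Risk Model (ML)"],
)
approve_claim = Decision(
    name="Approve Claim",
    sub_decisions=[eligibility, fraud_risk],
    knowledge_sources=["Claims Handling Regulation"],
)
```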

These models address several of Cassie’s early points (1. Correct delegation and 2. Output-focused ideation) by focusing on the business and on business decision-making. We also link this decision model to the business metrics that are influenced by how those decisions are made, which addresses a couple of her key points on metrics (18. Metric creation and 19. Metric review).

This decision model is often a source of analytic inspiration, as business owners say “if only…” – “if only we knew which emails were complaints”, “if only we knew who had an undisclosed medical condition”, “if only we knew if this text document described the condition being claimed for”. These are the analytic and AI opportunities in this decision. Like Cassie, we often find that existing data mining and descriptive analytics projects can be used to see how a decision could be improved with AI/ML (3. Source of inspiration).

Now the decision model has sub-decisions in it that are either going to be made by a person or by an AI algorithm. Because you know what a better decision looks like (thanks to the link to business metrics), you can make sure an AI algorithm is likely to help (20. Metric-loss comparison) and you can consider if the specific decision you identified is a good target for AI (4. Appropriate task for ML/AI). Critically, we find that often the whole decision is not suitable (there are too many regulations or constraints) but critical sub-decisions ARE suitable.

When it comes to putting the resulting AI algorithm or ML model into production, the decision model makes it clear how it will be plugged in and how it will be used in the context of the business decision (5. UX perspective and to some extent 8. Possible in production). Keeping the end – the decision – in mind in this way means that project teams are much more focused on how they will operationalize the result of the algorithm than they would be otherwise.

If you automate the decision model, as we do, using a BRMS then you will also be able to simulate the decision against historical data (17. Simulation). The decision model means you can simulate the decision with and without your AI/ML components to prove the ROI too.

Finally, this focus on decision-making means you know when the AI/ML model will be used (other sub-decisions are likely to address eligibility and validity of the transaction, for instance, narrowing the circumstances in which the AI must work) and you can see what accuracy is required. This is often much lower than AI/ML teams think because the decision model provides such a strong frame for the algorithm. (21. Population and 22. Minimum performance).

Decision models are a really powerful way to begin, scope, frame and manage AI and ML projects. Of course, they don’t address all Cassie’s 22 points and the others (6. Ethical development, 7. Reasonable expectations, 9. Data to learn from, 10. Enough examples, 11. Computers, 12. Team, 13. Ground truth, 14. Logging sanity, 15. Logging quality, 16. Indifference curves) will need to be considered, decision model or not. But using a decision model will help you frame analytic requirements and succeed with AI.

IBM has been developing Decision Composer since 2017 and is releasing it as part of its core Business Rules Management System, Operational Decision Manager, in December 2018. Decision Composer is a browser-based tool, currently available on the IBM cloud, that uses a decision model metaphor to design decision logic and deploy it as a decision service. You can think of it as a free, streamlined (and simplified) version of ODM. Decision Composer allows you to define multiple projects, each containing a decision model for a particular decision service. It has components for modeling data, modeling decisions, specifying decision logic (business rules) and testing.

  • The design page shows a decision model for the project. Using a subset of the Decision Model and Notation (DMN) format, the design includes Decisions and Input Data shapes that can be linked using Information Requirements or dependencies. The diagram supports the normal kinds of features and layout is automated with a limited number of ways to rearrange things. The result of each decision is defined, using a standard datatype or a type defined for the project.
  • From each decision you can access the Decision Logic Editor. This allows you to either edit a decision table or write business rules. The decision logic is edited in the same language ODM uses, with a natural language-like option (Business Action Language) as well as a tabular decision table. Multiple rules can be written for a single decision or a decision table can be defined. The conditions in the rule or table match to the requirements drawn in the decision model. Decision tables support merged fields, calculated columns, column reordering etc. and provide a description of the rule implied by each row in the table. Overlap and gap checking are provided also. Rules and tables can be controlled through interaction policies, similar to DMN’s hit policies. Supported interactions include sequential, first rule, smallest or greatest value, sum and collect. For example, collect gathers the results of applicable rules as a list (see the sketch below).
  • Each Input Data represents a Type. The types can be imported from existing data definitions or defined on the fly in the design environment. Types can use the normal data types, can be lists, can have structure, allowed values and restrictions as you would expect. When complex types with multiple fields are defined, the user can determine which specific fields are used by a decision when writing the decision logic.
  • A simple test interface is provided to make it easy to submit sample data and confirm results. Multiple test datasets can be defined and saved. Executed rules and their respective input and output values are visible to help understand how the logic was processed.
  • Decision projects can be shared with others. The different parties can then collaborate on the same decisions, with simple version control.

New versions can be saved, undo is supported etc.
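To illustrate how two of those interaction policies differ, here’s a minimal sketch using hypothetical rules (this is not Decision Composer’s actual rule language, just the semantics):

```python
# Illustrative semantics for two interaction policies over matching rules
# (hypothetical rules; not Decision Composer's rule language or API).

rules = [
    {"if": lambda order: order["amount"] > 1000, "then": "manager approval"},
    {"if": lambda order: order["customer"] == "new", "then": "credit check"},
    {"if": lambda order: order["rush"], "then": "expedite fee"},
]

def first_policy(order):
    """'First rule' interaction: the first applicable rule decides."""
    for rule in rules:
        if rule["if"](order):
            return rule["then"]
    return None

def collect_policy(order):
    """'Collect' interaction: gather the results of all applicable rules as a list."""
    return [rule["then"] for rule in rules if rule["if"](order)]

order = {"amount": 2500, "customer": "new", "rush": True}
print(first_policy(order))    # "manager approval"
print(collect_policy(order))  # ["manager approval", "credit check", "expedite fee"]
```

A “first rule” policy behaves like a classic single-hit decision table; “collect” is useful when several rules legitimately apply to the same transaction.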

Decision services can be deployed as REST services in the IBM Cloud using the ODM infrastructure, and the deployed services have well-defined signatures. Lazy deployment to IBM Cloud allows you to put the service immediately into production – the code will be deployed the first time someone tries to use the service.

Decision Composer also allows you to import sample projects for a quick start tutorial experience. You can also import an XSD, JSON file, swagger URL (for an existing service) or a Watson Conversation Workspace to create an initial data model, allowing you to rapidly develop decision logic against a known data set.

For those wondering about the differences between ODM and Decision Composer, ODM offers much more scalable execution, more extensive governance and permission management, more robust testing and powerful simulation tools.

You can try Decision Composer at ibm.biz/DecisionComposer and the IBM team have a blog post about the new version of Decision Composer on ODMDev. You can also check out this post on how DecisionsFirst Modeler, Decision Composer and IBM ODM can all be used together – Decision Modeling and DMN for IBM Customers – New Brief.


We are running Jan Vanthienen’s highly reviewed Decision Table Modeling with DMN training again December 11-13, 9:30am-11:30am Pacific each day.

When representing and analyzing business decisions in real business situations and processes, decision tables have always proven a powerful approach. Decision table methodology, however, is more than putting some rules in a few tables. Hear all about proper methodology, table types, notations, best practices and the Decision Model and Notation (DMN) standard, based on years of experience in modeling decisions.

This 3-part online live training class taught by leading decision table expert Jan Vanthienen of the University of Leuven will prepare you to model decision logic (business rules) using decision tables.

You will learn the concepts, objectives and application areas of decision tables for business analysis and business process management. You will see how to model and normalize decision table models and how they can simplify business processes. Above all, you will get many lessons from long experience on how to build, analyze, verify and optimize decision table models according to simple guidelines.

Prerequisite: Decision Modeling with DMN or experience with decision modeling.

Each session is an interactive online event and is recorded so you can view it again or catch up on things you miss. More details here and you can register here. Early bird pricing and team discounts are available.

We are running our regularly scheduled, and highly reviewed, Decision Modeling with DMN training again December 4-6, 9:30am-11:30am Pacific each day.

Decision modeling with the new Decision Model and Notation (DMN) standard is fast becoming the definitive approach for building more effective processes and for specifying requirements for business rules and predictive analytic projects. With decision modeling, you can:

  • Re-use, evolve, and manage business rules
  • Effectively frame the requirements for analytic projects
  • Streamline reporting requests
  • Define analytically driven performance dashboards
  • Optimize and simplify business processes

This 3-part online live training class taught by leading expert James Taylor, CEO of Decision Management Solutions, will prepare you to be immediately effective in a modern, collaborative, and standards-based approach to decision modeling. You will learn how to identify and prioritize the decisions that drive your success, see how to analyze and model these decisions, and understand the role these decisions play in delivering more powerful information systems.

Each step is supported by interactive decision modeling work sessions focused on problems that reinforce key points. All the decision modeling and notation in the class is based on the DMN standard, future-proofing your investment in decision modeling.

Each session is an interactive online event and is recorded so you can view it again or catch up on things you miss. More details here and you can register here. Early bird pricing and team discounts are available.

EIS OpenL Tablets is a product of EIS Group focused on using a spreadsheet paradigm to manage business rules. I last spoke to OpenL Tablets in 2013 and recently got an update on the product. EIS OpenL Tablets is available as open source and in a commercial version. EIS Group is an insurance innovation company founded in 2008 with over 800 employees worldwide.

EIS OpenL Tablets has the usual Business Rules Management System components – a web studio, an engine and a web services deployment framework. It also has a plug-in for Maven, templates and support for logging to a NoSQL database. It uses a spreadsheet paradigm for rule management and this is key to engaging business users. They find that 90% of the business rules that business people need to write can be represented in spreadsheet-like decision tables (Decision Management Solutions finds the same on our consulting projects). As a result, they focus on creating all the rules needed in Excel and then provide a single web based environment (demonstration available here) for validation and life cycle management.

The web studio allows a user to work with multiple projects and allows various groups of users to be given different privileges and access to various projects. Each project contains Excel files, deployment details and other details. Each Excel file can have multiple tabs and each tab can have multiple decision tables as well as simple look up tables, formulas, algorithms and configuration tables. All these can be viewed and edited in the web studio (where there are various wizards and automatic error checking capabilities) or can be opened in Excel and edited there.

Decision tables can have formulas in actions and conditions, supporting a complex set of internal logic. The user can also define a sequence of calculations using the tables as well as datatypes and a vocabulary (allowed values for inputs), sample data/test cases etc. Test cases can be run directly in the web interface and the rules can be traced to see which ones executed in a specific test case. There is an optional element to store a detailed execution log to a NoSQL database in production also.
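To give a feel for the spreadsheet paradigm, here’s a generic sketch of reading and evaluating a decision table kept in Excel. The file name, sheet and columns are assumptions for illustration, not OpenL Tablets’ actual table format or API:

```python
# Generic illustration of evaluating a spreadsheet-style decision table -
# not OpenL Tablets' actual format or API. Assumes a sheet "Rates" with
# columns: MinAge, MaxAge, VehicleType, BaseRate.
import pandas as pd

table = pd.read_excel("rating_rules.xlsx", sheet_name="Rates")

def base_rate(age, vehicle_type):
    """Return the BaseRate of the first row whose conditions match."""
    matches = table[
        (table["MinAge"] <= age)
        & (age <= table["MaxAge"])
        & (table["VehicleType"] == vehicle_type)
    ]
    if matches.empty:
        raise ValueError("No rule matched - the table has a gap")
    return float(matches.iloc[0]["BaseRate"])

# A simple "test case" like those run in the web studio:
print(base_rate(30, "sedan"))
```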

Ongoing changes are tracked for an audit trail and changed versions can be compared in the web studio. A revision can be saved to create a snapshot at which point all the small changes are removed from the log and the snapshot is stored. These larger snapshots can also be compared.

Projects can contain definitions of the rule services to be deployed. Various deployment options are supported, and the user can specify which rules are exposed as entry points. A Swagger UI is generated for exposed service calls.

The commercial version of OpenL Tablets supports a dynamic web client, integration with Apache Spark and analytics/advanced modeling. The Apache Spark integration allows very large numbers of transactions to be run through the rules for impact simulation and what-if analysis.

More details on OpenL Tablets available at http://openl-tablets.org/

I blogged last week about IBM’s AI approach and one piece was still under NDA – new capabilities around trust and transparency. These capabilities were announced today.

As part of trying to address the challenges of AI, IBM has added a trust and transparency layer to its ladder of AI capabilities (described here). They see five primary personas around AI capabilities – business process owners, data scientists, AI ops, application developers and CIOs/CTOs. The CIO/CTO is generally the persona who is most responsible, and it is they who see the challenges with trust. To use AI, companies need to understand the outcomes – the decisions: are they fair and legitimate?

The new trust and transparency capability is focused on detecting fairness/bias issues and providing mitigation, traceability and auditability. It’s about showing the accuracy of models/algorithms in the context of a business application.

Take claims as an example. A claims process is often highly populated with knowledge workers. If an AI algorithm is developed to either approve or decline a claim then the knowledge workers will only rely on it if they can trust and understand how it decided.

These new capabilities show the model’s accuracy in terms defined by the users – the people consuming the algorithms. The accuracy can be drilled into, to see how it is varying. For each model a series of factors can be identified for tracking – gender, policy type, geography etc. How the model varies against these factors can be reviewed and tracked.

The new capabilities can be applied to any model – an AI algorithm, a machine learning model or an opaque predictive analytic model such as a neural network.  IBM is using data to probe and experiment against any model to propose a plausible explanation for its result – building on libraries such as LIME to provide the factors that explain the model result. The accuracy of the model is also tracked against these factors and the user can see how they are varying. The system can also suggest possible mitigation strategies and allows drill down into specific transactions. All this is available through an API so it can be integrated into a run time environment. Indeed this is primarily about runtime support.
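IBM’s tooling is proprietary, but the underlying idea of probing a model to explain a single transaction can be sketched with the open source LIME library. The data, model and feature names here are synthetic – this illustrates the probing idea, not IBM’s API:

```python
# Sketch of a LIME-style local explanation for one transaction.
# The data and model are synthetic; feature names are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = ["age", "claim_amount", "tenure", "prior_claims", "region", "policy_type"]
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["decline", "approve"], mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # feature contributions for this one decision
```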

These new capabilities are focused on fairness – how well the model matches to expectations/plan. This is about more than just societal bias but about making sure the model does not have inbuilt issues that prevent it from being fair and behaving as the business would want.

It’s great to see these capabilities being developed. We find that while our clients need to understand their models, they also need to focus those models on just part of the decision if they are actually going to deploy something – see this discussion on not biting off more AI than you can trust.

This capability is now available as a freemium offering.

Seth Dobrin wrapped things up by discussing some go-to-market strategies and client successes. The IBM Data Science Elite Team is a team of experts that IBM offers to customers to help jump start data science and AI initiatives. Specifically, they focus on helping new data scientists make the transition from college to corporate. This is also combined with some basic lab services offerings.

The team is free, but it has to be a real business case, the customer has to be willing to put its own team on the project to learn and the customer has to be a reference. The team is growing, now about 50 people – mostly experienced hires but also growing new staff too: data science engineers, machine learning engineers, decision optimization engineers and data visualization engineers. The client has to provide an SME/product owner. Everything is done in sprints for rapid iteration. They often help clients hire later and focus on helping clients develop new skills in IBM’s toolset.

They have about 104 active engagements with about 70 completed. For instance, connecting ML and optimization – predicting renewable energy to optimize energy sources, predicting turnover to optimize store locations or predicting cash-outs in ATMs to optimize replenishment.

In addition, IBM is working with The Open Group for professional certification in data science. They are also investing in university classes and supporting on-the-job training (including new data science apprenticeships and job re-entry programs). Finally, they are investing in a new AI Academy for creating and applying AI – reskilling internally and making this available to clients. These are based on IBM’s methodology for data science, involving courses and work.

Shadi Copty discussed one IDE and one runtime for AI and data science across the enterprise as part of IBM’s AI approach. Shadi identified three major trends that are impacting data science and ML:

  • Diversity of skillsets and workflows with demand remaining high and new approaches like deep learning posing additional challenges.
    • IBM: Visual productivity and automation of manual work
    • IBM: Improved collaboration and support for tools of choice
  • Data movement is costly, risky and slow as enterprise data is fragmented and duplication brings risk
    • IBM: Bring data science to the data
  • Operationalizing models is hard with most models never deployed
    • IBM: Ease of deployment
    • IBM: Model management

Two key product packages:

  • Watson Studio is for building and training – exploration, preparation etc. Samples, tutorials, support for open source, editors, integrations, frameworks for building models etc.
  • Watson Machine Learning is the execution platform. One runtime for open source and IBM algorithms with standard APIs etc.

Recent changes:

  • Data refinery for better data management
  • SPSS Modeler UI integrated into Watson Studio. One click deployment and spark execution
  • ML Experiment Assistant to find optimal neural networks, compare performance, use GPUs etc
  • Neural Network Modeler to provide a drag and drop environment for Neural Networks across TensorFlow, Pytorch etc
  • Watson Tools to provide some pre-trained models for visual recognition

The direction here is to deliver all these capabilities in Watson Studio and Watson Machine Learning and integrate this into ICP for Data so it is all available across private, public and on-premise deployments. APIs and applications layer on top.

Ritika Gunnar and Bill Lobig came up to discuss trust and transparency but this is all confidential for a bit… I’ll post next week [Posted here].

Sam Lightstone and Jason Tavoularis wrapped up the session talking about the effort to deliver AI everywhere. Products, like databases, are being improved using AI in a variety of ways. For instance, ML/AI can be used to find the best strategy to execute SQL. For some queries, this can be dramatically faster. In addition, SQL can be extended with ML to take advantage of unsupervised learning models and return rows that are analytically similar, for instance. This can reduce the complexity of SQL and provide more accurate results. IBM Cognos Analytics is also being extended with AI/ML. A new assistant based on conversational AI helps find the available assets that the user can access. As assets are selected the conversation focuses on the selected assets, suggesting related fields for analysis, appropriate visualizations or related visualizations, for instance. Good to see IBM putting its own tech to work in its tech.

Daniel Hernandez kicked things off with a discussion of data for AI. AI adoption, IBM says, is accelerating, with 94% of companies believing it is important but only 5% adopting aggressively. To address perceived issues, IBM introduced its ladder to AI:

  • Trust
  • based on Automate (ML)
  • based on Analyze (scale insights)
  • based on Organize (trusted foundation)
  • based on Collect (make data simple and accessible)

This implies you need a comprehensive data management strategy that captures all your data, at rest and in motion, in a cloud-like way (COLLECT). Then it requires a data catalog so the data can be understood and relied on (ORGANIZE). Analyzing this data requires an end-to-end stack for machine learning, data science and AI (ANALYZE). IBM Cloud Private for Data is designed to deliver these capabilities virtually everywhere and embeds the various analytic and AI runtimes. This frames the R&D work IBM is doing and where they expect to deliver new capabilities. Specifically:

  • New free trial version available at a very long URL I can’t type quickly enough. This lets you try it.
  • Data Virtualization to allow users to query the entire enterprise (in a secure, fast, simple way) as though it was a single database.
  • Deployable on Red Hat OpenShift with a commitment to certify the whole stack on the Red Hat PaaS/IaaS.
  • The partnership with Hortonworks has been extended to bring Hadoop to Docker/Kubernetes on Red Hat.
  • Working with Stackoverflow to support ai.stackexchange.com

A demo of ICP for Data in the context of a preventative maintenance scenario followed. Key points of note:

  • All browser based of course
  • UI is structured around the steps in the ladder
  • Auto discovery process ingests metadata and uses AI to discover characteristics. Can also crowdsource additional metadata
  • Search is key metaphor and crosses all the sources defined
  • Supports a rich set of visualization tools
  • Data science capabilities are focused on supporting open source frameworks – this also includes IBM Research work
  • All models are managed across dev, staging and production and support rolling updates/one-click deployment
  • CPLEX integrated into the platform also for optimization


IBM is hosting an event on its AI strategy.

Rob Thomas kicked off by asserting that all companies need an AI strategy and that getting success out of AI – 81% of projects fail due to data problems – involves a ladder of technology with data at the bottom and AI at the top. It’s also true that many AI projects are “boring”, automating important but unsexy tasks, but Rob points out that this builds ROI and positions you for success.

To deliver AI, IBM has the Watson stack – Watson ML for model execution, Watson Studio to build models (now incorporating Data Science Experience DSX and SPSS Modeler), APIs and packaged capabilities. The business value, however, comes from applications – mostly those developed by customers. And this remains IBM’s focus – how to get customers to succeed with AI applications.

Getting Watson and AI embedded into all their applications, and instrumenting their applications to provide data to Watson, is a long term strategy for IBM.

Time for the first session.