
Frontline Solvers has been in business for over 25 years and has focused on democratizing analytics for the last five. They position themselves as an alternative to analytic complexity, leveraging broadly held Excel skills and a large base of trained students. They offer several products for predictive and prescriptive analytics and have sold them to over 9,000 organizations over the years. Their customers are both commercial and academic, with hundreds of thousands of students using the tools and 500,000 cloud analytics users. Their commercial customers include many very large companies, though they generally sell to several distinct business units rather than at a corporate level.

Frontline began with their work in solvers (prescriptive analytics) and have worked “backward” into predictive analytics. Their approach is very focused on avoiding analytic complexity:

  • Start small and keep it simple, with a focus on rapid ROI.
  • Recognize that companies have more expertise than they think – Excel and programming skills, for instance, plus all the students who have used the software in MBA classes.
  • “Big Data” and more complex ML/AI technologies are not essential for success – ordinary database data is often enough.

Frontline is focused on decision support today but rapidly moving into decision automation – decision management systems.

Their core products are modeling systems and solvers for optimization and simulation, used to build a prescriptive model rather than to analyze large amounts of data (though they have data mining routines too). These kinds of optimization models are often called prescriptive analytics because they recommend – prescribe – specific actions for each transaction. Prescriptive analytics can, of course, also be developed by combining predictive analytics and business rules – driving to a recommended action using the combination. Frontline recognizes this and envisions supporting business rules in their software.
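As a rough illustration of that second combination, here is a minimal Python sketch of how a predictive score and a few business rules can together prescribe an action. The fields, thresholds and actions are hypothetical – this is the general pattern, not Frontline's product:

```python
def predicted_churn_risk(customer: dict) -> float:
    """Stand-in for a predictive model score between 0 and 1 (hypothetical)."""
    return 0.82 if customer["missed_payments"] > 1 else 0.15

def prescribe_action(customer: dict) -> str:
    """Business rules applied on top of the prediction to prescribe an action."""
    risk = predicted_churn_risk(customer)
    if customer["account_status"] == "closed":
        return "no-action"                      # rule: never contact closed accounts
    if risk > 0.7 and customer["lifetime_value"] > 10_000:
        return "offer-retention-discount"       # rule: protect high-value customers
    if risk > 0.7:
        return "schedule-courtesy-call"
    return "standard-renewal-notice"

print(prescribe_action({"missed_payments": 2, "account_status": "open",
                        "lifetime_value": 25_000}))
```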

Solver-based prescriptive analytic solutions generally focus on many transactions in a set, not a single transaction – what Frontline calls coordinated decisions. Sometimes these decisions also have no data and no history, so a human-built model is going to be required, not one based on data analysis. Indeed, any kind of prescriptive analytic approach to decision-making is going to require human-built models – either decision models to coordinate rules and analytics, or a solver model (or both, as we have seen in some client projects).

Frontline argues Excel is the obvious place to start because Excel is so familiar. Their RASON language allows you to develop models in Excel and then deploy them to REST APIs. They aim to make it easy for business domain experts to learn analytic modeling and methods, to provide easy-to-use tools and then to make deployment easy. Working in Excel, they provide a lot of learning aids in the product that pop up to help users. They also have an online learning platform (solver.academy) with classes, and there are over 700 university MBA courses using Frontline’s software to introduce analytics methods.

The core products are:

  • Analytic Solver – a point-and-click model builder in Excel, including the cloud-based Excel version, which they have supported since 2013.
  • RASON – modeling language that can be generated from the Excel-based product or edited directly.
  • SDK – supporting models written in code, developed in RASON and/or Excel, and deployed as REST APIs.

Their base solver is built into desktop Excel (OEMed by Microsoft). As Excel Online does not include this, they have built online apps for optimization, simulation and statistics that work across Excel Online and Google Sheets. The latest version of Excel Online is now almost able to support the full Analytic Solver suite; this is expected to be complete in Q1 2019, allowing them to unify the product across desktop and online Excel.

To bring data into the solver, they use the Common Data Service as well as standard Data Sources for data access, making it easy to connect to data sources. They also use the Office workbook model management tools (discovery, governance, audit), which are surprisingly robust for those with corporate licenses to Excel.

The engine has four main capabilities:

  • Data mining and forecasting algorithms.
  • Conventional optimization and solver.
  • Monte Carlo Simulation and decision trees.
  • Stochastic and robust optimization.

For very large datasets (such as those used in data mining), the software can pull a statistically valid sample from, say, a big data store. The data can be cleaned, partitioned into training and validation sets, and various routines applied. The results are displayed in Excel, and PMML (the standard Predictive Model Markup Language) is used to persist the result. The PMML is executable both in third-party platforms and in their own RASON language.
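A minimal sketch of that sample-clean-partition-fit workflow in Python with scikit-learn – an illustration of the general pattern, not Frontline's implementation; the file and column names are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pull a manageable sample from a large extract (hypothetical file).
df = pd.read_csv("transactions.csv").sample(n=50_000, random_state=42)

# Basic cleaning, then partition into training and validation sets.
df = df.dropna(subset=["amount", "tenure", "defaulted"])
X, y = df[["amount", "tenure"]], df["defaulted"]
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_valid, model.predict(X_valid)))

# The fitted model could then be persisted as PMML (for example with a tool
# such as sklearn2pmml) so it can be executed in third-party platforms.
```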

RASON is a high-level modeling language that allows the definition of data mining models, constraints and objectives for optimization, and distributions and correlations for simulation. A web presence at rason.com allows this to be written in an online editor and executed through their REST API. RASON is JavaScript-like and can embed Excel formulas too. RASON can be executed by passing the whole script to the API using a JavaScript call. An on-premise version is available too for those who wish to keep execution inside the firewall.
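In practice that means POSTing the model text to the service and reading back the results. A hedged sketch in Python follows – the endpoint, header and payload shape here are illustrative assumptions, not the documented RASON API; check rason.com for the real URL, authentication scheme and model syntax:

```python
import requests

# Placeholder model text; real RASON is a JSON/JavaScript-like syntax and can
# embed Excel formulas.
rason_model = '{ "modelName": "productMix", "modelType": "optimization" }'

# Illustrative endpoint and API-key header (assumptions, not the real service).
response = requests.post(
    "https://rason.example.com/api/model/solve",
    data=rason_model,
    headers={"Content-Type": "application/json", "api_key": "YOUR_KEY"},
    timeout=30,
)
response.raise_for_status()
print(response.json())   # solver status and variable values, per the service
```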

The Solver SDK has long supported coding of models. Since 2010 the SDK has been able to load and run the Excel solver models. The RASON service came in 2015, and in 2017 they added integration with Tableau and Power BI, and this year with Microsoft Flow. These integration steps involve generating apps from inside the Excel model using simple menu commands. Behind the scenes they generate the RASON code and package it up in a JavaScript version for consumption.

You can get more information on Frontline Solvers here.

Cassie Kozyrkov, the Chief Decision Intelligence Engineer at Google, recently wrote an article titled “Is your AI project a nonstarter?” in which she identified 22 checklist items for a candidate AI project. It’s a great article and you should definitely read it. In particular you should note the quote at the top:

Don’t waste your time on AI for AI’s sake. Be motivated by what it will do for you, not by how sci-fi it sounds.

And what it will do for you is often help your organization make better decisions.

We always begin customer projects by building a decision model. Working directly with the business owners, we elicit a model of how they want to decide and represent it using a Decision Model and Notation (DMN) standard decision requirements model. This shows the decision(s) they want to make and the requirements of those decisions – the sub-decisions (and sub-sub-decisions), the input data and the knowledge sources (policies, regulations, best practices and analytic insights) that describe their preferred approach.

These models address several of Cassie’s early points (1. Correct delegation and 2. Output-focused ideation) by focusing on the business and on business decision-making. We also link this decision model to the business metrics that are influenced by how those decisions are made, which addresses a couple of her key points on metrics (18. Metric creation and 19. Metric review).

This decision model is often a source of analytic inspiration, as business owners say “if only…” – “if only we knew which emails were complaints”, “if only we knew who had an undisclosed medical condition”, “if only we knew if this text document described the condition being claimed for”…. These are the analytic and AI opportunities in this decision. Like Cassie, we often find that existing data mining and descriptive analytics projects can be used to see how a decision could be improved with AI/ML (3. Source of inspiration).

Now the decision model has sub-decisions in it that are either going to be made by a person or by an AI algorithm. Because you know what a better decision looks like (thanks to the link to business metrics), you can make sure an AI algorithm is likely to help (20. Metric-loss comparison) and you can consider whether the specific decision you identified is a good target for AI (4. Appropriate task for ML/AI). Critically, we find that often the whole decision is not suitable (there are too many regulations or constraints) but critical sub-decisions ARE suitable.

When it comes to putting the resulting AI algorithm or ML model into production, the decision model makes it clear how it will be plugged in and how it will be used in the context of the business decision (5. UX perspective and, to some extent, 8. Possible in production). Keeping the end – the decision – in mind in this way means that project teams are much more focused on how they will operationalize the result of the algorithm than they would be otherwise.

If you automate the decision model using a BRMS, as we do, then you will also be able to simulate the decision against historical data (17. Simulation). The decision model means you can simulate the decision with and without your AI/ML components to prove the ROI too.
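A minimal sketch of that with-and-without comparison, assuming you have historical cases with known outcomes. The field names, thresholds and decision functions are hypothetical – this illustrates the idea, not our actual tooling:

```python
def decide_rules_only(case: dict) -> bool:
    """Baseline decision: rules without the ML sub-decision."""
    return case["claim_amount"] < 5_000

def decide_with_ml(case: dict, fraud_score: float) -> bool:
    """Same decision with an ML fraud score feeding one sub-decision."""
    return case["claim_amount"] < 5_000 and fraud_score < 0.6

def fraud_loss(approved: bool, case: dict) -> float:
    """Business metric: money paid out on cases later found to be fraudulent."""
    return case["claim_amount"] if approved and case["was_fraud"] else 0.0

history = [
    {"claim_amount": 3_000, "fraud_score": 0.9, "was_fraud": True},
    {"claim_amount": 2_000, "fraud_score": 0.1, "was_fraud": False},
]

baseline = sum(fraud_loss(decide_rules_only(c), c) for c in history)
with_ml = sum(fraud_loss(decide_with_ml(c, c["fraud_score"]), c) for c in history)
print(f"fraud losses: rules only ${baseline:,.0f}, with ML ${with_ml:,.0f}")
```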

Finally, this focus on decision-making means you know when the AI/ML model will be used (other sub-decisions are likely to address eligibility and validity of the transaction, for instance, narrowing the circumstances in which the AI must work) and you can see what accuracy is required. This is often much lower than AI/ML teams think because the decision model provides such a strong frame for the algorithm. (21. Population and 22. Minimum performance).

Decision models are a really powerful way to begin, scope, frame and manage AI and ML projects. Of course, they don’t address all Cassie’s 22 points, and the others (6. Ethical development, 7. Reasonable expectations, 9. Data to learn from, 10. Enough examples, 11. Computers, 12. Team, 13. Ground truth, 14. Logging sanity, 15. Logging quality, 16. Indifference curves) will need to be considered, decision model or not. But using a decision model will help you frame analytic requirements and succeed with AI.

IBM has been developing Decision Composer since 2017 and is releasing it as part of its core Business Rules Management System, Operational Decision Manager, in December 2018. Decision Composer is a browser-based tool, currently available on the IBM cloud, that uses a decision model metaphor to design decision logic and deploy it as a decision service. You can think of it as a free, streamlined (and simplified) version of ODM. Decision Composer allows you to define multiple projects, each containing a decision model for a particular decision service. It has components for modeling data, modeling decisions, specifying decision logic (business rules) and testing.

  • The design page shows a decision model for the project. Using a subset of the Decision Model and Notation (DMN) format, the design includes Decision and Input Data shapes that can be linked using Information Requirements, or dependencies. The diagram supports the usual editing features; layout is automated, with a limited number of ways to rearrange things. The result of each decision is defined using a standard datatype or a type defined for the project.
  • From each decision you can access the Decision Logic Editor. This allows you to either edit a decision table or write business rules. The decision logic is edited in the same language ODM uses, with a natural language-like option (Business Action Language) as well as a tabular decision table format. Multiple rules can be written for a single decision, or a decision table can be defined. The conditions in the rule or table match the requirements drawn in the decision model. Decision tables support merged fields, calculated columns, column reordering etc. and provide a description of the rule implied by each row in the table. Overlap and gap checking are provided too. Rules and tables can be controlled through interaction policies, similar to DMN’s hit policies. Supported interactions include sequential, first rule, smallest or greatest value, sum and collect. For example, collect gathers the results of all applicable rules as a list (see the sketch after this list).
  • Each Input Data represents a Type. The types can be imported from existing data definitions or defined on the fly in the design environment. Types can use the normal data types, can be lists, can have structure, allowed values and restrictions as you would expect. When complex types with multiple fields are defined, the user can determine which specific fields are used by a decision when writing the decision logic.
  • A simple test interface is provided to make it easy to submit sample data and confirm results. Multiple test datasets can be defined and saved. Executed rules and their respective input and output values are visible to help understand how the logic was processed.
  • Decision projects can be shared with others. The different parties can then collaborate on the same decisions, with simple version control.
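To make the interaction policies concrete, here is a rough Python illustration of how "first rule" and "collect" differ when several rules are applicable. This is generic pseudologic under hypothetical rules, not Decision Composer's internals:

```python
# Each rule is a (condition, result) pair over the decision's inputs.
rules = [
    (lambda o: o["amount"] > 10_000, "manager-approval-required"),
    (lambda o: o["customer_tier"] == "gold", "expedited-handling"),
    (lambda o: o["amount"] > 10_000, "fraud-review"),
]

def first_rule(order):
    """'First rule' interaction: the first applicable rule decides."""
    return next((result for cond, result in rules if cond(order)), None)

def collect(order):
    """'Collect' interaction: gather the results of all applicable rules as a list."""
    return [result for cond, result in rules if cond(order)]

order = {"amount": 12_000, "customer_tier": "gold"}
print(first_rule(order))  # manager-approval-required
print(collect(order))     # ['manager-approval-required', 'expedited-handling', 'fraud-review']
```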

More generally, new versions can be saved, undo is supported and so on.

Decision services can be deployed as REST services in the IBM Cloud using the ODM infrastructure, and the deployed services have well-defined signatures. Lazy deployment to the IBM Cloud allows you to put the service into production immediately – the code will be deployed the first time someone tries to use the service.

Decision Composer also allows you to import sample projects for a quick start tutorial experience. You can also import an XSD, JSON file, swagger URL (for an existing service) or a Watson Conversation Workspace to create an initial data model, allowing you to rapidly develop decision logic against a known data set.

For those wondering about the differences between ODM and Decision Composer, ODM offers much more scalable execution, more extensive governance and permission management, more robust testing and powerful simulation tools.

You can try Decision Composer at ibm.biz/DecisionComposer and the IBM team have a blog post about the new version of Decision Composer on ODMDev.

EIS OpenL Tablets is a product of EIS Group focused on using a spreadsheet paradigm to manage business rules. I last spoke to OpenL Tablets in 2013 and recently got an update on the product. EIS OpenL Tablets is available as open source and in a commercial version. EIS Group is an insurance innovation company founded in 2008 with over 800 employees worldwide.

EIS OpenL Tablets has the usual Business Rules Management System components – a web studio, an engine and a web services deployment framework. It also has a plug-in for Maven, templates and support for logging to a NoSQL database. It uses a spreadsheet paradigm for rule management, and this is key to engaging business users. They find that 90% of the business rules that business people need to write can be represented in spreadsheet-like decision tables (Decision Management Solutions finds the same on our consulting projects). As a result they focus on creating all the rules needed in Excel and then provide a single web-based environment (demonstration available here) for validation and lifecycle management.

The web studio allows a user to work with multiple projects and allows various groups of users to be given different privileges and access to various projects. Each project contains Excel files, deployment details and other details. Each Excel file can have multiple tabs, and each tab can have multiple decision tables as well as simple lookup tables, formulas, algorithms and configuration tables. All these can be viewed and edited in the web studio (where there are various wizards and automatic error-checking capabilities) or can be opened in Excel and edited there.

Decision tables can have formulas in actions and conditions, supporting a complex set of internal logic. The user can also define a sequence of calculations using the tables, as well as datatypes, a vocabulary (allowed values for inputs), sample data/test cases etc. Test cases can be run directly in the web interface, and the rules can be traced to see which ones executed in a specific test case. There is also an optional component to store a detailed execution log to a NoSQL database in production.
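As a rough illustration of the spreadsheet decision-table paradigm (a generic Python sketch, not OpenL's actual execution engine), a table's rows can be thought of as condition/result tuples evaluated against each transaction. The table contents and hit behavior here are hypothetical:

```python
# A premium-discount decision table as it might appear on a spreadsheet tab:
# each row is (min_age, max_age, min_years_insured, discount).
DISCOUNT_TABLE = [
    (18, 25, 0, 0.00),
    (26, 64, 3, 0.05),
    (26, 64, 10, 0.10),
    (65, 120, 5, 0.15),
]

def discount_for(age: int, years_insured: int) -> float:
    """Return the last matching row's discount (one possible hit behavior)."""
    result = 0.0
    for min_age, max_age, min_years, discount in DISCOUNT_TABLE:
        if min_age <= age <= max_age and years_insured >= min_years:
            result = discount
    return result

# A simple test case, in the spirit of those run directly in the web studio.
assert discount_for(30, 12) == 0.10
```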

Ongoing changes are tracked for an audit trail and changed versions can be compared in the web studio. A revision can be saved to create a snapshot at which point all the small changes are removed from the log and the snapshot is stored. These larger snapshots can also be compared.

Projects can contain service definitions specifying which rule services should be exposed. Various deployment options are supported, and the user can specify which rules can be exposed as entry points. A Swagger UI is generated for exposed service calls.

The commercial version of OpenL Tablets supports a dynamic web client, integration with Apache Spark and analytics/advanced modeling. The Apache Spark integration allows very large numbers of transactions to be run through the rules for impact simulation and what-if analysis.

More details on OpenL Tablets are available at http://openl-tablets.org/

I blogged last week about IBM’s AI approach and one piece was still under NDA – new capabilities around trust and transparency. These capabilities were announced today.

As part of trying to address the challenges of AI, IBM has added a trust and transparency layer to its ladder of AI capabilities (described here). They see five primary personas around AI capabilities – business process owners, data scientists, AI ops, application developers and CIOs/CTOs. The CIO/CTO is generally the persona most responsible, and it is they who see the challenges with trust. To use AI, companies need to understand the outcomes – the decisions: are they fair and legitimate?

The new trust and transparency capability is focused on detecting fairness / bias and providing mitigation, traceability and auditability. It’s about showing the accuracy of models/algorithms in the context of a business application.

Take claims as an example. A claims process is often highly populated with knowledge workers. If an AI algorithm is developed to either approve or decline a claim then the knowledge workers will only rely on it if they can trust and understand how it decided.

These new capabilities show the model’s accuracy in terms defined by the users – the people consuming the algorithms. The accuracy can be drilled into, to see how it is varying. For each model a series of factors can be identified for tracking – gender, policy type, geography etc. How the model varies against these factors can be reviewed and tracked.

The new capabilities can be applied to any model – an AI algorithm, a machine learning model or an opaque predictive analytic model such as a neural network. IBM is using data to probe and experiment against any model to propose a plausible explanation for its result – building on libraries such as LIME to provide the factors that explain the model result. The accuracy of the model is also tracked against these factors and the user can see how they are varying. The system can also suggest possible mitigation strategies and allows drill down into specific transactions. All this is available through an API so it can be integrated into a runtime environment. Indeed this is primarily about runtime support.
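For readers unfamiliar with that style of probing, here is a minimal sketch using the open-source lime package directly – a generic illustration of local explanation on synthetic data with hypothetical feature names, not IBM's API:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque model on synthetic claim data (features are hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# LIME perturbs the input around one transaction and fits a simple local
# surrogate to explain which factors drove this particular prediction.
explainer = LimeTabularExplainer(
    X, feature_names=["claim_amount", "policy_age", "prior_claims"],
    class_names=["decline", "approve"], mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())   # weighted factors behind this one result
```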

These new capabilities are focused on fairness – how well the model matches to expectations/plan. This is about more than just societal bias but about making sure the model does not have inbuilt issues that prevent it from being fair and behaving as the business would want.

It’s great to see these capabilities being developed. We find that while our clients need to understand their models, they also need to focus those models on just part of the decision if they are actually going to deploy something – see this discussion on not biting off more AI than you can trust.

This capability is now available as a freemium offering.

Seth Dobrin wrapped things up with a discussion of go-to-market strategies and client successes. The IBM Data Science Elite Team is a team of experts that IBM offers to customers to help jump-start data science and AI initiatives. Specifically, they focus on helping new data scientists make the transition from college to corporate, combined with some basic lab services offerings.

The team is free, but there has to be a real business case, the customer has to be willing to put its own team on the project to learn, and the customer has to be a reference. The team is growing and is now about 50 people – mostly experienced hires, but they are growing new staff too: data science engineers, machine learning engineers, decision optimization engineers and data visualization engineers. The client has to provide an SME/product owner. Everything is done in sprints for rapid iteration. They often help clients hire later and focus on helping clients develop new skills in IBM’s toolset.

They have about 104 active engagements, with about 70 completed. Examples include connecting ML and optimization – predicting renewable energy generation to optimize energy sources, predicting turnover to optimize store locations, or predicting cash-outs in ATMs to optimize replenishment.

In addition, IBM is working with The Open Group for professional certification in data science. They are also investing in university classes and supporting on-the-job training (including new data science apprenticeships and job re-entry programs). Finally, they are investing in a new AI Academy for creating and applying AI – reskilling internally and making this available to clients. These are based on IBM’s methodology for data science, involving courses and work.

Shadi Copty discussed one IDE and one runtime for AI and data science across the enterprise as part of IBM’s AI approach. Shadi identified three major trends that are impacting data science and ML:

  • Diversity of skillsets and workflows with demand remaining high and new approaches like deep learning posing additional challenges.
    • IBM: Visual productivity and automation of manual work
    • IBM: Improved collaboration and support for tools of choice
  • Data movement is costly, risky and slow as enterprise data is fragmented and duplication brings risk
    • IBM: Bring data science to the data
  • Operationalizing models is hard with most models never deployed
    • IBM: Ease of deployment
    • IBM: Model management

Two key product packages:

  • Watson Studio is for building and training – exploration, preparation etc. It provides samples, tutorials, support for open source, editors, integrations, frameworks for building models and so on.
  • Watson Machine Learning is the execution platform – one runtime for open source and IBM algorithms with standard APIs etc.

Recent changes:

  • Data refinery for better data management
  • SPSS Modeler UI integrated into Watson Studio, with one-click deployment and Spark execution
  • ML Experiment Assistant to find optimal neural networks, compare performance, use GPUs etc.
  • Neural Network Modeler to provide a drag-and-drop environment for neural networks across TensorFlow, PyTorch etc.
  • Watson Tools to provide some pre-trained models for visual recognition

The direction here is to deliver all these capabilities in Watson Studio and Watson Machine Learning and integrate this into ICP for Data so it is all available across private, public and on-premise deployments. APIs and applications layer on top.

Ritika Gunnar and Bill Lobig came up to discuss trust and transparency but this is all confidential for a bit… I’ll post next week [Posted here].

Sam Lightstone and Jason Tavoularis wrapped up the session talking about the effort to deliver AI everywhere. Products, like databases, are being improved using AI in a variety of ways. For instance, ML/AI can be used to find the best strategy to execute SQL; for some queries, this can be dramatically faster. In addition, SQL can be extended with ML to take advantage of unsupervised learning models and return rows that are analytically similar, for instance. This can reduce the complexity of SQL and provide more accurate results. IBM Cognos Analytics is also being extended with AI/ML. A new assistant based on conversational AI helps find the available assets that the user can access. As assets are selected, the conversation focuses on the selected assets, suggesting related fields for analysis and appropriate or related visualizations, for instance. Good to see IBM putting its own tech to work in its tech.
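As a generic illustration of "return rows that are analytically similar" (plain scikit-learn nearest neighbors over numeric features – not IBM's SQL extension, and the rows are made up):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Hypothetical customer rows: age, annual_spend, tenure_years.
rows = np.array([
    [34, 5200, 3],
    [36, 5100, 4],
    [62, 800, 15],
    [29, 4900, 2],
])

# Scale features, then find the rows most similar to a query row --
# the kind of result an ML-extended SQL query could return directly.
scaler = StandardScaler().fit(rows)
nn = NearestNeighbors(n_neighbors=2).fit(scaler.transform(rows))
_, idx = nn.kneighbors(scaler.transform([[35, 5000, 3]]))
print(rows[idx[0]])   # the two most analytically similar customer rows
```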

Daniel Hernandez kicked things off with a discussion of data for AI. AI adoption, IBM says, is accelerating, with 94% of companies believing it is important but only 5% adopting aggressively. To address the perceived issues, IBM introduced its ladder to AI:

  • Trust
  • based on Automate (ML)
  • based on Analyze (scale insights)
  • based on Organize (trusted foundation)
  • based on Collect (make data simple and accessible)

This implies you need a comprehensive data management strategy that captures all your data, at rest and in motion, in a cloud-like way (COLLECT). Then it requires a data catalog so the data can be understood and relied on (ORGANIZE). Analyzing this data requires an end-to-end stack for machine learning, data science and AI (ANALYZE). IBM Cloud Private for Data is designed to deliver these capabilities virtually everywhere and embeds the various analytic and AI runtimes. This frames the R&D work IBM is doing and where they expect to deliver new capabilities. Specifically:

  • A new free trial version is available (at a very long URL I can’t type quickly enough) so you can try it.
  • Data Virtualization to allow users to query the entire enterprise (in a secure, fast, simple way) as though it was a single database.
  • Deployable on Red Hat OpenShift with a commitment to certify the whole stack on the Red Hat PaaS/IaaS.
  • The partnership with Hortonworks has been extended to bring Hadoop to Docker/Kubernetes on Red Hat.
  • Working with Stackoverflow to support ai.stackexchange.com

A demo of ICP for Data in the context of a preventative maintenance scenario followed. Key points of note:

  • All browser based of course
  • UI is structured around the steps in the ladder
  • Auto discovery process ingests metadata and uses AI to discover characteristics. Can also crowdsource additional metadata
  • Search is key metaphor and crosses all the sources defined
  • Supports a rich set of visualization tools
  • Data science capabilities are focused on supporting open source frameworks and also include IBM Research work
  • All models are managed across dev, staging and production and support rolling updates/one-click deployment
  • CPLEX integrated into the platform also for optimization


IBM is hosting an event on its AI strategy.

Rob Thomas kicked off by asserting that all companies need an AI strategy and that getting success out of AI – 81% of projects fail due to data problems – involves a ladder of technology with data at the bottom and AI at the top. It’s also true that many AI projects are “boring”, automating important but unsexy tasks, but Rob points out that this builds ROI and positions you for success.

To deliver AI, IBM has the Watson stack – Watson ML for model execution, Watson Studio to build models (now incorporating Data Science Experience (DSX) and SPSS Modeler), APIs and packaged capabilities. The business value, however, comes from applications – mostly those developed by customers. And this remains IBM’s focus – how to get customers to succeed with AI applications.

Getting Watson and AI embedded into all their applications, and instrumenting their applications to provide data to Watson, is a long term strategy for IBM.

Time for the first session.

Digital Insurance’s Insurance Analytics and AI event is coming to New York, November 27-28 (a new venue and date – it was previously Austin, September 27-28, 2018). This is going to be a great place to learn how to use analytics, machine learning, data science and AI to modernize your insurance business. I’ll post more as the details firm up but I’m really excited about one session I know is happening.

I am participating in a panel on the Role of AI and Analytics in the Modernization of Insurance. I’m joining Craig Bedell, an IBM Industry Academy member for Insurance, as well as two analytic leaders – Hamilton Faris of Northwestern Mutual and Tom Warden of EMPLOYERS. We’ll be sharing our advice on modernizing insurance decision making across sales, underwriting, pricing, claims and much more. We’ll explore the Analytics and AI innovation journey and highlight how insurance firms that combine these efforts with operational decision management efforts are far more likely to succeed in digital transformation. It’s going to be great.

Register here – and do it before August 17 to get the Early Bird rate! See you in NY.

I’m a big believer in decision models using the DMN industry standard notation and Decision Management Solutions uses it on all our projects – we’ve modeled over 3,000 decisions and trained over 1,000 people. But we don’t use executable decision models very often and strongly disagree with those that say the only good decision model is an executable one.

This week I wrote a set of posts over on our corporate blog about the three reasons we don’t – business user engagement, maintenance, and analytics/AI – and about our vision of a virtual decision.

Check them out.

Karl Rexer of Rexer Analytics is at Predictive Analytics World this week (as am I) and he gave some quick highlights from the 2017 Rexer Analytics Data Science Survey. They’ve been doing the survey since 2007 (and I have blogged about it regularly); the 2017 edition is the 8th survey, with 1,123 responses from 91 countries. Full details will be released soon but he highlighted some interesting facts:

  • Formal data science training is important to respondents (75% or so) with particular concerns about data preparation and misinterpreting results when people don’t have formal training.
  • Only about one third have seen problems with DIY analytic and data science tools, which is pretty good and getting better.
  • Most data scientists use multiple tools – an average of 5, still – with SQL, R and Python dominating across the board outside of academia.
  • R has shown rapid growth over the last few years with more usage and more primary usage every year and RStudio is now the dominant environment.
  • While there’s lots of interest in “deep learning”, 2/3 have not used deep learning at all with only 2% using it a lot so it’s not really a thing yet.
  • Job satisfaction is good and most data scientists are confident they could find a new job – not a big surprise.
  • People agree that a wide range of skills are needed with domain knowledge scoring very highly as important. Despite this recognition everyone still wants to learn technical skills first and foremost!

Looking forward to getting the full results.

We are relaunching our newsletter, in a GDPR-compliant way, with a focus on DecisionsFirst Digital Transformation. We were probably GDPR-compliant before but better safe than sorry.

We send emails about every 3 or 4 weeks with information about resources, events and articles on Decision Management topics. For example, case studies in digital transformation; decision modeling as a best practice for getting business value from your technology investments in business rules, predictive analytics, AI, and machine learning; upcoming webinars and events; training and more.

If you used to get it, or you want to start getting it, please sign up here.

An analytic enterprise uses analytics to solve its most critical run-the-business problems. It takes advantage of new tools and new data sources while ensuring analytic results are used in the real-world. This is the third of three blog posts about how to become an analytic enterprise:

  1. Focus on business decision-making.
  2. Move beyond reporting to predict, prescribe, and decide.
  3. Use analytics to learn, adapt, and improve (this post).

Analytic enterprises don’t focus on big wins but on using analytics to learn what works, to adapt decision-making, and continuously improve results.

An analytic enterprise collects data about how it makes decisions and about the outcome of those decisions. It records the data that drove its analytics and the analytics that drove its decisions. Business outcomes may be recorded as structured data, as social media posts, as unstructured text, as web activity or even sensor data. This varied data is matched to decision-making and analytics so that the true impact of the analytics that drove those decisions can be assessed.
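One way to make that concrete is to log each decision with the analytics and data that drove it, keyed so that outcomes can be joined back to it later. A minimal sketch follows; the field names and log format are illustrative assumptions, not a specific product's schema:

```python
import json
import datetime

def log_decision(transaction_id: str, inputs: dict, scores: dict, action: str):
    """Append one decision record so outcomes can be matched to it later."""
    record = {
        "transaction_id": transaction_id,
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "inputs": inputs,        # the data that drove the analytics
        "analytics": scores,     # the analytic outputs that drove the decision
        "action": action,        # the decision actually taken
    }
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Later, outcome records (payments, complaints, churn, sensor events) can be
# joined to these entries on transaction_id to assess the analytics' true impact.
log_decision("TX-1001", {"amount": 250.0}, {"fraud_score": 0.12}, "approve")
```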

This continuous monitoring identifies new opportunities for analytics, which analytics need a refresh, where ML and AI might be valuable and much more. An analytic enterprise learns from this data and moves rapidly to ensure that its decision-making is updated effectively. Its analytic platform links data, analytics and outcomes so it can close the loop and continuously improve.

Check out this video on how analytic enterprises learn, adapt and improve and download the new white paper Building an Analytic Enterprise (sponsored by the Teradata Analytics Platform).

An analytic enterprise uses analytics to solve its most critical run-the-business problems. It takes advantage of new tools and new data sources while ensuring analytic results are used in the real-world. This is the second of three blog posts about how to become an analytic enterprise:

  1. Focus on business decision-making
  2. Move beyond reporting to predict, prescribe, and decide (this post).
  3. Use analytics to learn, adapt, and improve.

Historically, most enterprises have focused on analytics for reporting and monitoring. Success as an analytic enterprise means using analytics to enable better, more data-driven decisions. This means shifting to predictive analytics that identify what is likely to happen in the future and prescriptive analytics that suggest the most appropriate decision or action. Predictive analytics give analytic enterprises a view ahead, so they can decide in a way that takes advantage of a fleeting opportunity or mitigates a potential risk.

Many of the decisions best suited to analytic improvement are operational decisions that are increasingly automated and embedded in IT infrastructure. If these decisions are to be improved, multiple predictive and prescriptive analytics must often be combined in a real-time system.

An analytic enterprise needs an analytic platform and data infrastructure that supports both predictive analytics and automation. It uses its analytic platform to embed predictive and prescriptive analytics in highly automated, real-time decisions.

Check out this video on how analytic enterprises predict, prescribe and decide and download the new white paper Building an Analytic Enterprise (sponsored by the Teradata Analytics Platform).

An analytic enterprise uses analytics to solve its most critical run-the-business problems. It takes advantage of new tools and new data sources while ensuring analytic results are used in the real-world. This is the first of three blog posts about how to become an analytic enterprise:

  1. Focus on business decision-making (this post).
  2. Move beyond reporting to predict, prescribe, and decide.
  3. Use analytics to learn, adapt, and improve.

Success in analytics means being business-led, not technology-led. Analytic projects that focus on data or algorithms prioritize being able to start quickly over business value. In contrast, a focus on improving business decision-making keeps business value front and center.

A focus on decision-making also acts as a touchstone, preventing a chase after the next shiny object. It provides a business justification for the data, tools, and algorithms that will be required.

Modeling the decision-making in a visual way, using a notation like the industry-standard Decision Model and Notation (DMN), breaks down complex decisions and shows what analytic insight will help make the decision more effectively. These models show the impact of expertise, policies and regulations while also clearly showing what data is used in the decision.

When a decisions-first approach is combined with a flexible analytic platform, analytic teams are released from the constraints of their current tools or siloed data to focus on business value.

Check out this video on how analytic enterprises put business decisions first and download the new white paper Building an Analytic Enterprise (sponsored by the Teradata Analytics Platform).

A recent article on Dig-in talked about How insurers can think strategically about AI. It contained a killer quote from Chris Cheatham of RiskGenius:

A lot of times people jump in and try AI without understanding the problem they’re trying to solve. Find the problem first, then figure out if AI can solve it, and what else you need to get the full solution.

Exactly. We have found that focusing on decisions, not a separate AI initiative, delivers business value and a strong ROI. Only if you can define the decision you want to improve with AI, and build a real model of how that decision works – a decision model – can you put AI to work effectively. Check out our white paper on the approach – How to Succeed with AI – or drop us a line to learn more.

John Rymer and Mike Gualtieri of Forrester Research have just published a new piece of research – The Dawn Of Digital Decisioning: New Software Automates Immediate Insight-To-Action Cycles Crucial
For Digital Business. This is a great paper – not only does it mention some Decision Management Solutions’ clients as examples, it makes some great points about the power of Decision Management and some great recommendations about how best to approach Digital Decisioning.

Digital decisioning software capitalizes on analytical insights and machine learning models about customers and business operations to automate actions (including advising a human agent) for individual customers through the right channels.

In particular I liked the paper’s emphasis on keeping business rules and analytics/ML/AI integrated and its reminder to focus first on decisions (especially operational decisions) and not analytic insights. These are key elements in our DecisionsFirst methodology and platform and have proven themselves repeatedly in customer projects – including those mentioned in the report.

Our DecisionsFirst approach begins by discovering and modeling these operational decisions, then automating them as decision services and finally, as also noted in the report, creating a learn and improve feedback loop.
As Mike and John suggest, we combine business rules, analytics and AI into highly automated services for decision-making then tie this to business performance using decision models.

It’s a great report and one you should definitely read. I’ll leave you with one final quote from it:

Enterprises waste time and money on unactionable analytics and rigid applications. Digital decisioning can stop this insanity.

You can get the paper here if you are a Forrester client.

ACTICO has just released ACTICO Modeler 8 – the latest version of the product previously known as Visual Rules for Finance (see most recent review here). ACTICO Modeler is a project-based IDE. ACTICO users can now select whether to create a “classic” Rule Modeling project or a Decision Model and Notation (DMN) project. The DMN modeler supports Decision Requirements Diagrams, Business Knowledge Models (BKMs) and the full FEEL syntax.

Decision Requirements Diagrams are built using drag and drop or by working out from existing diagram elements. When a Decision, Input Data, Knowledge Source or BKM is selected its properties can be filled out and this includes linking to other objects, like organizational units, that are managed in the project. A decision model supports multiple diagrams on which objects can be reused – users can drag existing model objects from the project repository structure or search for them. Decisions, Input Data, Knowledge Sources and BKMs are genuinely shared across all the diagrams in a model’s project. Any change on one diagram is immediately reflected on all other diagrams.

Existing DMN models can be imported simply by dropping DMN XML files into the environment. As DMN 1.1 models don’t have diagrams, users can simply add a new diagram to an imported project and drag elements on to it as needed.

All boxed expressions and full FEEL are supported – literal expressions, contexts, invocations, lists, relations, function definitions and decision tables. Validation is applied as syntax is edited using the classic squiggly red underline and supporting hints to correct it. A problems view summarizes all the problems in the current model and this is dynamic, updating as the model is edited. The core FEEL validations are in the product already and more are planned in coming releases.

Decision services can be defined using their own diagram, allowing the user to show which decisions should be included in the decision service and which ones are invokable. All the information requirements that flow across the decision service boundary are defined. Each decision service has its own diagram and the relevant decisions are dragged from the project to create the decision service. The decision service can be invoked from the ACTICO classic rule representation. This allows, for instance, test cases to be reused and allows new DMN models to be deployed and managed using the existing server architecture. Individual decisions and BKMs can be tested using the same mechanism.

A view of the ACTICO DMN Modeler showing a Decision Requirements Diagram and a Decision Table for one of the Business Knowledge Models.

You can get more information on the ACTICO DMN Modeler here.

Jim Sinur of Aragon Research recently published a new blog, Mounting Pressure for Better Decisions. He argues, correctly, that decision-making is under pressure: there is more data available than ever before, organizations need to change the way they make decisions faster to respond to evolving circumstances, and there is a general need for speed in handling transactions.

We help companies improve decision-making by applying our DecisionsFirst Decision Management approach and by building Decision Management Systems for them. Combining decision models (built using the Decision Model and Notation or DMN standard) with powerful business rules management systems, advanced analytics (machine learning, predictive analytics) and AI, we help companies see a set of unique benefits:

  • Improved consistency
    Decision models enable consistent decision making across channels and people without imposing mindless consistency.
  • Increased Agility
    The systems we build are easy for the business to change in response to new business conditions because the business understands the decision models and owns the business rules that drive the system.
  • Reduced Latency
    The combination of business rules and advanced analytics enables higher rates of straight through processing (automation) while also ensuring more clarity and less confusion for the transactions that must be handled manually.
  • Lower Cost
    Decision Management Systems reduce costs by ensuring less waste and rework, more STP and fewer manual touches.
  • Better Accuracy
    Decision Management Systems operationalize data-driven, analytical decisions throughout the organization to improve the accuracy of decisions everywhere.

If you are interested in learning more about Decision Management and the technology available for it, check out our Decision Management Systems Platform Technology Report or contact us for a free consultation.