
EIS OpenL Tablets is a product of EIS Group focused on using a spreadsheet paradigm to manage business rules. I last spoke to OpenL Tablets in 2013 and recently got an update on the product. EIS OpenL Tablets is available as open source and in a commercial version. EIS Group is an insurance innovation company founded in 2008 with over 800 employees worldwide.

EIS OpenL Tablets has the usual Business Rules Management System components – a web studio, an engine and a web services deployment framework. It also has a plug-in for Maven, templates and support for logging to a NoSQL database. It uses a spreadsheet paradigm for rule management and this is key to engaging business users. They find that 90% of the business rules that business people need to write can be represented in spreadsheet-like decision tables (Decision Management Solutions finds the same on our consulting projects). As a result they focus on creating all the rules needed in Excel and then provide a single web-based environment (demonstration available here) for validation and lifecycle management.

The web studio allows a user to work with multiple projects and allows various groups of users to be given different privileges and access to various projects. Each project contains Excel files, deployment details and other details. Each Excel file can have multiple tabs and each tab can have multiple decision tables as well as simple lookup tables, formulas, algorithms and configuration tables. All these can be viewed and edited in the web studio (where there are various wizards and automatic error checking capabilities) or can be opened in Excel and edited there.

Decision tables can have formulas in actions and conditions, supporting a complex set of internal logic. The user can also define a sequence of calculations using the tables, as well as datatypes, a vocabulary (allowed values for inputs), sample data/test cases and so on. Test cases can be run directly in the web interface and the rules can be traced to see which ones executed in a specific test case. In production, an optional component can also store a detailed execution log to a NoSQL database.

Ongoing changes are tracked for an audit trail and changed versions can be compared in the web studio. A revision can be saved to create a snapshot at which point all the small changes are removed from the log and the snapshot is stored. These larger snapshots can also be compared.

Projects can contain definitions of the rule services that should be deployed. Various deployment options are supported, and the user can specify which rules are exposed as entry points. A Swagger UI is generated for the exposed service calls.
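
To make this concrete, here is a minimal sketch of what calling such a deployed rule service might look like from Python. The service name, path and payload are hypothetical – the real endpoint and input schema come from the Swagger UI that OpenL Tablets generates for each exposed service.

```python
import requests

# Hypothetical rule service endpoint and payload -- the actual URL and input
# schema are defined by the deployed OpenL Tablets project and its Swagger UI.
SERVICE_URL = "http://localhost:8080/webservice/REST/policy-rules/determinePremium"

payload = {
    "policyType": "Auto",
    "driverAge": 42,
    "vehicleValue": 25000,
}

response = requests.post(SERVICE_URL, json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # the result calculated by the decision tables
```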

The commercial version of OpenL Tablets supports a dynamic web client, integration with Apache Spark and analytics/advanced modeling. The Apache Spark integration allows very large numbers of transactions to be run through the rules for impact simulation and what-if analysis.

More details on OpenL Tablets are available at http://openl-tablets.org/

I blogged last week about IBM’s AI approach and one piece was still under NDA – new capabilities around trust and transparency. These capabilities were announced today.

As part of trying to address the challenges of AI, IBM has added a trust and transparency layer to its ladder of AI capabilities (described here). They see five primary personas around AI capabilities – business process owners, data scientists, AI ops, application developers and CIO/CTOs. The CIO/CTO is generally the persona who is most responsible, and they are the ones who see the challenges with trust. To use AI, companies need to understand the outcomes – the decisions – and ask: are they fair and legitimate?

The new trust and transparency capability is focused on detecting fairness / bias and providing mitigation, traceability and auditability. It’s about showing the accuracy of models/algorithms in the context of a business application.

Take claims as an example. A claims process is often highly populated with knowledge workers. If an AI algorithm is developed to either approve or decline a claim, then the knowledge workers will only rely on it if they trust it and understand how it reached its decision.

These new capabilities show the model's accuracy in terms defined by the users – the people consuming the algorithms. The accuracy can be drilled into to see how it is varying. For each model a series of factors can be identified for tracking – gender, policy type, geography etc. How the model varies against these factors can be reviewed and tracked.
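
As an illustration (not IBM's API), this is the kind of by-factor accuracy check the capability automates – compare the model's predictions to actual outcomes and break accuracy down by each tracked factor. The column names and data here are made up:

```python
import pandas as pd

# Hypothetical scored transactions: the model's prediction, the actual outcome
# and the factors being tracked (gender, policy type, ...).
scored = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "F", "M"],
    "policy_type": ["Auto", "Auto", "Home", "Home", "Auto", "Home"],
    "predicted":   [1, 0, 1, 1, 0, 0],
    "actual":      [1, 0, 0, 1, 0, 1],
})

scored["correct"] = scored["predicted"] == scored["actual"]

# Accuracy broken down by each tracked factor, to see where the model varies.
for factor in ["gender", "policy_type"]:
    print(scored.groupby(factor)["correct"].mean())
```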

The new capabilities can be applied to any model – an AI algorithm, a machine learning model or an opaque predictive analytic model such as a neural network.  IBM is using data to probe and experiment against any model to propose a plausible explanation for its result – building on libraries such as LIME to provide the factors that explain the model result. The accuracy of the model is also tracked against these factors and the user can see how they are varying. The system can also suggest possible mitigation strategies and allows drill down into specific transactions. All this is available through an API so it can be integrated into a run time environment. Indeed this is primarily about runtime support.
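
For readers who want to see the style of explanation libraries like LIME produce, here is a minimal sketch using the open source lime package against a throwaway model – the data, feature names and model are invented purely for illustration and this is not IBM's implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy data and an opaque model standing in for any black-box algorithm.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)
feature_names = ["claim_amount", "policy_age", "prior_claims"]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# LIME probes the model around one transaction to propose a plausible,
# factor-by-factor explanation of that single result.
explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["decline", "approve"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())  # (factor, weight) pairs behind this one decision
```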

These new capabilities are focused on fairness – how well the model matches expectations and plan. This is about more than just societal bias: it is about making sure the model does not have inbuilt issues that prevent it from being fair and behaving as the business would want.

It’s great to see these capabilities being developed. We find that while our clients need to understand their models, they also need to focus those models on just part of the decision if they are actually going to deploy something – see this discussion on not biting off more AI than you can trust.

This capability is now available as a freemium offering.

Seth Dobrin wrapped things up with a discussion of go-to-market strategies and client successes. The IBM Data Science Elite Team is a team of experts that IBM offers to customers to help jump-start data science and AI initiatives. Specifically, they focus on helping new data scientists make the transition from college to corporate. This is also combined with some basic lab services offerings.

The team is free, but there has to be a real business case, the customer has to be willing to put its own team on the project to learn, and the customer has to be a reference. The team is growing and is now about 50 people – mostly experienced staff, though new staff are being developed too – covering data science engineers, machine learning engineers, decision optimization engineers and data visualization engineers. The client has to provide an SME/product owner. Everything is done in sprints for rapid iteration. The team often helps clients hire later and focuses on helping clients develop new skills in IBM's toolset.

There are about 104 active engagements, with about 70 completed. Examples include connecting ML and optimization – predicting renewable energy output to optimize energy sources, predicting turnover to optimize store locations, or predicting cash-outs in ATMs to optimize replenishment.

In addition, IBM is working with The Open Group on professional certification in data science. They are also investing in university classes and supporting on-the-job training (including new data science apprenticeships and job re-entry programs). Finally, they are investing in a new AI Academy for creating and applying AI – reskilling internally and making this available to clients. These are based on IBM's methodology for data science and involve both courses and practical work.

Shadi Copty discussed one IDE and one runtime for AI and data science across the enterprise as part of IBM’s AI approach. Shadi identified three major trends that are impacting data science and ML:

  • Diversity of skillsets and workflows with demand remaining high and new approaches like deep learning posing additional challenges.
    • IBM: Visual productivity and automation of manual work
    • IBM: Improved collaboration and support for tools of choice
  • Data movement is costly, risky and slow as enterprise data is fragmented and duplication brings risk
    • IBM: Bring data science to the data
  • Operationalizing models is hard with most models never deployed
    • IBM: Ease of deployment
    • IBM: Model management

Two key product packages:

  • Watson Studio is for building and training – exploration, preparation etc. It provides samples, tutorials, support for open source, editors, integrations and frameworks for building models.
  • Watson Machine Learning is the execution platform – one runtime for open source and IBM algorithms with standard APIs etc.

Recent changes:

  • Data refinery for better data management
  • SPSS Modeler UI integrated into Watson Studio. One click deployment and spark execution
  • ML Experiment Assistant to find optimal neural networks, compare performance, use GPUs etc
  • Neural Network Modeler to provide a drag and drop environment for neural networks across TensorFlow, PyTorch etc
  • Watson Tools to provide some pre-trained models for visual recognition

The direction here is to deliver all these capabilities in Watson Studio and Watson Machine Learning and to integrate them into ICP for Data so they are available across private, public and on-premises deployments. APIs and applications then layer on top.
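
As a rough illustration of what "standard APIs" means in practice, a deployed model typically ends up behind a REST scoring endpoint that applications call at runtime. The URL, token and payload shape below are hypothetical – the real values come from the deployment details in Watson Studio / Watson Machine Learning:

```python
import requests

# Hypothetical scoring endpoint for a deployed model; the real URL, auth token
# and payload schema are provided by the deployment's details page.
SCORING_URL = "https://example.com/v3/deployments/<deployment-id>/online"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

payload = {"fields": ["age", "income"], "values": [[42, 55000]]}

response = requests.post(SCORING_URL, json=payload, headers=HEADERS, timeout=10)
response.raise_for_status()
print(response.json())  # the model's prediction for the submitted row
```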

Ritika Gunnar and Bill Lobig came up to discuss trust and transparency but this is all confidential for a bit… I’ll post next week [Posted here].

Sam Lightstone and Jason Tavoularis wrapped up the session talking about the effort to deliver AI everywhere. Products, like databases, are being improved using AI in a variety of ways. For instance, ML/AI can be used to find the best strategy to execute SQL. For some queries, this can be dramatically faster. In addition, SQL can be extended with ML to take advantage of unsupervised learning models and return rows that are analytically similar, for instance. This can reduce the complexity of SQL and provide more accurate results. IBM Cognos Analytics is also being extended with AI/ML. A new assistant based on conversational AI helps find the available assets that the user can access. As assets are selected the conversation focuses on the selected assets, suggesting related fields for analysis, appropriate visualizations or related visualizations, for instance. Good to see IBM putting its own tech to work in its tech.
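
To illustrate the "analytically similar rows" idea – this is not IBM's SQL extension, just the underlying concept in a few lines of Python – an unsupervised model such as nearest neighbours can return the rows closest to a given row, something that is awkward to express with exact SQL predicates:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical customer rows: age, income, number of products held.
rows = np.array([
    [34,  52000, 2],
    [36,  51000, 2],
    [58, 120000, 5],
    [23,  31000, 1],
])

# Fit an unsupervised model and ask for the rows most similar to a query row.
nn = NearestNeighbors(n_neighbors=2).fit(rows)
distances, indices = nn.kneighbors([[35, 50000, 2]])
print(indices[0])  # indexes of the analytically similar rows
```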

Daniel Hernandez kicked things off with a discussion of data for AI. AI adoption, IBM says, is accelerating, with 94% of companies believing it is important but only 5% adopting aggressively. To address the perceived issues, IBM introduced its ladder to AI:

  • Trust
  • based on Automate (ML)
  • based on Analyze (scale insights)
  • based on Organize (trusted foundation)
  • based on Collect (make data simple and accessible)

This implies you need a comprehensive data management strategy that captures all your data, at rest and in motion, in a cloud-like way (COLLECT). Then it requires a data catalog so the data can be understood and relied on (ORGANIZE). Analyzing this data requires an end-to-end stack for machine learning, data science and AI (ANALYZE). IBM Cloud Private for Data is designed to deliver these capabilities virtually everywhere and embeds the various analytic and AI runtimes. This frames the R&D work IBM is doing and where they expect to deliver new capabilities. Specifically:

  • A new free trial version is available (at a very long URL I can't type quickly enough) so you can try it.
  • Data Virtualization to allow users to query the entire enterprise (in a secure, fast, simple way) as though it was a single database.
  • Deployable on Red Hat OpenShift with a commitment to certify the whole stack on the Red Hat PaaS/IaaS.
  • The partnership with Hortonworks has been extended to bring Hadoop to Docker/Kubernetes on Red Hat.
  • Working with Stack Overflow to support ai.stackexchange.com

A demo of ICP for Data in the context of a preventative maintenance scenario followed. Key points of note:

  • All browser-based, of course
  • UI is structured around the steps in the ladder
  • Auto discovery process ingests metadata and uses AI to discover characteristics. Can also crowdsource additional metadata
  • Search is key metaphor and crosses all the sources defined
  • Supports a rich set of visualization tools
  • Data science capabilities are focused on supporting open source frameworks – this also includes IBM Research work
  • All models are managed across dev, staging and production and support rolling updates/one-click deployment
  • CPLEX integrated into the platform also for optimization

 

IBM is hosting an event on its AI strategy.

Rob Thomas kicked off by asserting that all companies need an AI strategy and that getting success out of AI – 81% of projects fail due to data problems – involves a ladder of technology with data at the bottom and AI at the top. It’s also true that many AI projects are “boring”, automating important but unsexy tasks, but Rob points out that this builds ROI and positions you for success.

To deliver AI, IBM has the Watson stack – Watson ML for model execution, Watson Studio to build models (now incorporating Data Science Experience DSX and SPSS Modeler), and APIs and packaged capabilities. The business value, however, comes from applications – mostly those developed by customers. And this remains IBM's focus – how to get customers to succeed with AI applications.

Getting Watson and AI embedded into all their applications, and instrumenting their applications to provide data to Watson, is a long term strategy for IBM.

Time for the first session.

Digital Insurance's Insurance Analytics and AI event is coming to New York, November 27-28, 2018 (a new venue and date – it was previously Austin, September 27-28). This is going to be a great place to learn how to use analytics, machine learning, data science and AI to modernize your insurance business. I'll post more as the details firm up but I'm really excited about one session I know is happening.

I am participating in a panel on the Role of AI and Analytics in the Modernization of Insurance. I’m joining Craig Bedell, an IBM Industry Academy member for Insurance, as well as two analytic leaders – Hamilton Faris of Northwestern Mutual and Tom Warden of EMPLOYERS. We’ll be sharing our advice on modernizing insurance decision making across sales, underwriting, pricing, claims and much more. We’ll explore the Analytics and AI innovation journey and highlight how insurance firms that combine these efforts with operational decision management efforts are far more likely to succeed in digital transformation. It’s going to be great.

Register here – and do it before August 17 to get the Early Bird rate! See you in NY.

I'm a big believer in decision models using the DMN industry standard notation and Decision Management Solutions uses it on all our projects – we've modeled over 3,000 decisions and trained over 1,000 people. But we don't use executable decision models very often and strongly disagree with those who say the only good decision model is an executable one.

This week I wrote a set of posts over on our corporate blog about the three reasons we don't – business user engagement, maintenance, analytics/AI – and about our vision of a virtual decision.

Check them out.

Karl Rexer of Rexer Analytics is at Predictive Analytics World this week (as am I) and he gave some quick highlights from the 2017 Rexer Analytics Data Science Survey. They've been doing the survey since 2007 (and I have blogged about it regularly); the 2017 edition is the 8th survey, with 1,123 responses from 91 countries. Full details will be released soon but he highlighted some interesting facts:

  • Formal data science training is important to respondents (75% or so) with particular concerns about data preparation and misinterpreting results when people don’t have formal training.
  • Only about one third have seen problems with DIY analytic and data science tools, which is pretty good and getting better.
  • Most data scientists use multiple tools – an average of 5, still – with SQL, R and Python dominating across the board outside of academia.
  • R has shown rapid growth over the last few years with more usage and more primary usage every year and RStudio is now the dominant environment.
  • While there’s lots of interest in “deep learning”, 2/3 have not used deep learning at all with only 2% using it a lot so it’s not really a thing yet.
  • Job satisfaction is good and most data scientists are confident they could find a new job – not a big surprise.
  • People agree that a wide range of skills are needed with domain knowledge scoring very highly as important. Despite this recognition everyone still wants to learn technical skills first and foremost!

Looking forward to getting the full results.

We are relaunching our newsletter, in a GDPR-compliant way, with a focus on DecisionsFirst Digital Transformation. We were probably GDPR-compliant before but better safe than sorry.

We send emails about every 3 or 4 weeks with information about resources, events and articles on Decision Management topics. For example, case studies in digital transformation; decision modeling as a best practice for getting business value from your technology investments in business rules, predictive analytics, AI, and machine learning; upcoming webinars and events; training and more.

If you used to get it, or you want to start getting it, please sign up here.

An analytic enterprise uses analytics to solve its most critical run-the-business problems. It takes advantage of new tools and new data sources while ensuring analytic results are used in the real world. This is the third of three blog posts about how to become an analytic enterprise:

  1. Focus on business decision-making.
  2. Move beyond reporting to predict, prescribe, and decide.
  3. Use analytics to learn, adapt, and improve (this post).

Analytic enterprises don't focus on big wins but on using analytics to learn what works, to adapt decision-making, and to continuously improve results.

An analytic enterprise collects data about how it makes decisions and about the outcome of those decisions. It records the data that drove its analytics and the analytics that drove its decisions. Business outcomes may be recorded as structured data, as social media posts, as unstructured text, as web activity or even sensor data. This varied data is matched to decision-making and analytics so that the true impact of the analytics that drove those decisions can be assessed.

This continuous monitoring identifies new opportunities for analytics, which analytics need a refresh, where ML and AI might be valuable and much more. An analytic enterprise learns from this data and moves rapidly to ensure that its decision-making is updated effectively. Its analytic platform links data, analytics and outcomes so it can close the loop and continuously improve.
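
Here is a minimal sketch of what closing the loop can look like in code, assuming logged decisions and later-observed outcomes share a transaction identifier (all names and data here are hypothetical):

```python
import pandas as pd

# Hypothetical decision log: the analytic score that drove each decision and
# the action taken.
decisions = pd.DataFrame({
    "transaction_id": [1, 2, 3, 4],
    "churn_score":    [0.82, 0.15, 0.64, 0.91],
    "action":         ["retain_offer", "no_action", "retain_offer", "retain_offer"],
})

# Hypothetical outcomes recorded later from other systems.
outcomes = pd.DataFrame({
    "transaction_id": [1, 2, 3, 4],
    "churned":        [0, 0, 1, 0],
})

# Join decisions to outcomes and check how each action is really performing.
loop = decisions.merge(outcomes, on="transaction_id")
print(loop.groupby("action")["churned"].mean())
```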

Check out this video on how analytic enterprises learn, adapt and improve and download the new white paper Building an Analytic Enterprise (sponsored by the Teradata Analytics Platform).

An analytic enterprise uses analytics to solve its most critical run-the-business problems. It takes advantage of new tools and new data sources while ensuring analytic results are used in the real world. This is the second of three blog posts about how to become an analytic enterprise:

  1. Focus on business decision-making.
  2. Move beyond reporting to predict, prescribe, and decide (this post).
  3. Use analytics to learn, adapt, and improve.

Historically, most enterprises have focused on analytics for reporting and monitoring. Success as an analytic enterprise means using analytics to enable better, more data-driven decisions. This means shifting to predictive analytics that identify what is likely to happen in the future and prescriptive analytics that suggest the most appropriate decision or action. Predictive analytics give analytic enterprises a view ahead, so they can decide in a way that takes advantage of a fleeting opportunity or mitigates a potential risk.

Many of the decisions best suited to analytic improvement are operational decisions that are increasingly automated and embedded in IT infrastructure. If these decisions are to be improved, multiple predictive and prescriptive analytics must often be combined in a real-time system.

An analytic enterprise needs an analytic platform and data infrastructure that supports both predictive analytics and automation. It uses its analytic platform to embed predictive and prescriptive analytics in highly automated, real-time decisions.
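
A small sketch of the pattern (not any specific product): a decision service wraps a predictive score in prescriptive rules so the action is decided automatically, in real time. The model, thresholds and actions below are all hypothetical:

```python
# A decision service combining a predictive model with prescriptive rules.
# fraud_model is assumed to be any trained classifier exposing predict_proba.
def decide_claim(claim: dict, fraud_model) -> str:
    fraud_risk = fraud_model.predict_proba([claim["features"]])[0][1]

    # Prescriptive business rules make the action explicit and automatable.
    if fraud_risk > 0.9:
        return "refer_to_investigation"
    if fraud_risk < 0.1 and claim["amount"] < 5000:
        return "auto_approve"
    return "manual_review"
```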

Check out this video on how analytic enterprises predict, prescribe and decide and download the new white paper Building an Analytic Enterprise (sponsored by the Teradata Analytics Platform).

An analytic enterprise uses analytics to solve its most critical run-the-business problems. It takes advantage of new tools and new data sources while ensuring analytic results are used in the real world. This is the first of three blog posts about how to become an analytic enterprise:

  1. Focus on business decision-making (this post).
  2. Move beyond reporting to predict, prescribe, and decide.
  3. Use analytics to learn, adapt, and improve.

Success in analytics means being business-led, not technology-led. Analytic projects that focus on data or algorithms prioritize being able to start quickly over business value. In contrast, a focus on improving business decision-making keeps business value front and center.

A focus on decision-making also acts as a touchstone, preventing a chase after the next shiny object. It provides a business justification for the data, tools, and algorithms that will be required.

Modeling the decision-making in a visual way, using a notation like the industry standard Decision Model and Notation (DMN), breaks down complex decisions and shows what analytic insight will help make the decision more effectively. These models show the impact of expertise, policies and regulations while also clearly showing what data is used in the decision.

When a decisions-first approach is combined with a flexible analytic platform, analytic teams are released from the constraints of their current tools and siloed data to focus on business value.

Check out this video on how analytic enterprises put business decisions first and download the new white paper Building an Analytic Enterprise (sponsored by the Teradata Analytics Platform).

A recent article on Dig-in talked about How insurers can think strategically about AI. It contained a killer quote from Chris Cheatham of RiskGenius:

A lot of times people jump in and try AI without understanding the problem they’re trying to solve. Find the problem first, then figure out if AI can solve it, and what else you need to get the full solution.

Exactly. We have found that focusing on decisions, not a separate AI initiative, delivers business value and a strong ROI. Only if you can define the decision you want to improve with AI, and build a real model of how that decision works – a decision model – can you put AI to work effectively. Check out our white paper on the approach – How to Succeed with AI – or drop us a line to learn more.

John Rymer and Mike Gualtieri of Forrester Research have just published a new piece of research – The Dawn Of Digital Decisioning: New Software Automates Immediate Insight-To-Action Cycles Crucial For Digital Business. This is a great paper – not only does it mention some of Decision Management Solutions' clients as examples, it makes some great points about the power of Decision Management and some great recommendations about how best to approach Digital Decisioning.

Digital decisioning software capitalizes on analytical insights and machine learning models about customers and business operations to automate actions (including advising a human agent) for individual customers through the right channels.

In particular I liked the paper’s emphasis on keeping business rules and analytics/ML/AI integrated and its reminder to focus first on decisions (especially operational decisions) and not analytic insights. These are key elements in our DecisionsFirst methodology and platform and have proven themselves repeatedly in customer projects – including those mentioned in the report.

Our DecisionsFirst approach begins by discovering and modeling these operational decisions, then automating them as decision services and finally, as also noted in the report, creating a learn-and-improve feedback loop. As Mike and John suggest, we combine business rules, analytics and AI into highly automated services for decision-making, then tie this to business performance using decision models.

It’s a great report and one you should definitely read. I’ll leave you with one final quote from it:

Enterprises waste time and money on unactionable analytics and rigid applications. Digital decisioning can stop this insanity.

You can get the paper here if you are a Forrester client.

ACTICO has just released ACTICO Modeler 8 – the latest version of the product previously known as Visual Rules for Finance (see most recent review here). ACTICO Modeler is a project-based IDE. ACTICO users can now select whether to create a “classic” Rule Modeling project or a Decision Model and Notation (DMN) project. The DMN modeler supports Decision Requirements Diagrams, Business Knowledge Models (BKMs) and the full FEEL syntax.

Decision Requirements Diagrams are built using drag and drop or by working out from existing diagram elements. When a Decision, Input Data, Knowledge Source or BKM is selected its properties can be filled out and this includes linking to other objects, like organizational units, that are managed in the project. A decision model supports multiple diagrams on which objects can be reused – users can drag existing model objects from the project repository structure or search for them. Decisions, Input Data, Knowledge Sources and BKMs are genuinely shared across all the diagrams in a model’s project. Any change on one diagram is immediately reflected on all other diagrams.

Existing DMN models can be imported simply by dropping DMN XML files into the environment. As DMN 1.1 models don't have diagrams, users can simply add a new diagram to an imported project and drag elements onto it as needed.

All boxed expressions and full FEEL are supported – literal expressions, contexts, invocations, lists, relations, function definitions and decision tables. Validation is applied as syntax is edited, using the classic squiggly red underline and supporting hints to correct it. A problems view summarizes all the problems in the current model and is dynamic, updating as the model is edited. The core FEEL validations are in the product already and more are planned in coming releases.
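
For readers unfamiliar with DMN decision tables, here is a plain-Python stand-in for the kind of logic they capture – first-hit rules mapping input conditions to an output. This is purely illustrative and is not ACTICO's execution engine or FEEL syntax:

```python
# A simple first-hit decision table: ordered (condition, output) rules.
RISK_TABLE = [
    (lambda a: a["age"] < 25 and a["claims"] >= 2, "High"),
    (lambda a: a["age"] < 25,                      "Medium"),
    (lambda a: a["claims"] >= 3,                   "Medium"),
    (lambda a: True,                               "Low"),    # default rule
]

def risk_category(applicant: dict) -> str:
    for condition, output in RISK_TABLE:
        if condition(applicant):
            return output

print(risk_category({"age": 22, "claims": 1}))  # -> "Medium"
```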

Decision services can be defined using their own diagram, allowing the user to show which decisions should be included in the decision service and which ones are invokable. All the information requirements that flow across the decision service boundary are defined. Each decision service has its own diagram and the relevant decisions are dragged from the project to create the decision service. The decision service can be invoked from the ACTICO classic rule representation. This allows, for instance, test cases to be reused and allows new DMN models to be deployed and managed using the existing server architecture. Individual decisions and BKMs can be tested using the same mechanism.

A view of the ACTICO DMN Modeler showing a Decision Requirements Diagram and a Decision Table for one of the Business Knowledge Models displayed.

You can get more information on the ACTICO DMN Modeler here.

Jim Sinur of Aragon Research recently published a new blog Mounting Pressure for Better Decisions. He argues, correctly, that decision making is under pressure because there is more data available than ever before, a need for faster change in the way organizations make decisions to respond to evolving circumstances and a general need for speed in handling transactions.

We help companies improve decision-making by applying our DecisionsFirst Decision Management approach and by building Decision Management Systems for them. Combining decision models (built using the Decision Model and Notation or DMN standard) with powerful business rules management systems, advanced analytics (machine learning, predictive analytics) and AI, we help companies see a set of unique benefits:

  • Improved consistency
    Decision models enable consistent decision making across channels and people without imposing mindless consistency.
  • Increased Agility
    The systems we build are easy for the business to change in response to new business conditions because the business understands the decision models and owns the business rules that drive the system.
  • Reduced Latency
    The combination of business rules and advanced analytics enables higher rates of straight through processing (automation) while also ensuring more clarity and less confusion for the transactions that must be handled manually.
  • Lower Cost
    Decision Management Systems reduce costs by ensuring less waste and rework, more STP and fewer manual touches.
  • Better Accuracy
    Decision Management Systems operationalize data-driven, analytical decisions throughout the organization to improve the accuracy of decisions everywhere.

If you are interested in learning more about Decision Management and the technology available for it, check out our Decision Management Systems Platform Technology Report or contact us for a free consultation.

Maureen Fleming of IDC presented at IDC Directions on How Does Decision-Centric Computing Drive Digital Transformation? She kindly shared this presentation with me. Decision-centric computing, she says:

continuously receives and analyzes data to predict when decisions need to be made, systematically learns how to automate those decisions, and acts on each decision to improve performance.

Exactly. We call these Decision Management Systems but the concept is the same.

While the presentation focused on IoT and streaming scenarios, the concepts can be applied more generally – after all, many business scenarios are heading to a streaming solution. The most interesting piece was this graph titled "Predictive Analytics is Only a Piece of the Puzzle".
This graph shows that organizations that are mature in terms of predictive analytics use business rules a lot (70%), those that are in production with something use business rules a little (24%) and those that are stuck in development are not using them very much at all (5%).

This illustrates a point we make with analytics clients – a business rules management system is a great platform for deploying predictive analytics, especially when you apply Decision Management principles and decision modeling to develop the rules in a decisions-first way.

For IDC subscribers, Maureen has written Introducing Decision-Centric Computing which has another great quote:

Without a way to incorporate decision automation to make repetitive decisions, enterprises will find it increasingly difficult to justify their investments in advanced analytics and risk failure to materialize the anticipated benefits

Decision Management is a proven approach to delivering Decision-Centric Computing and using a Decisions First methodology effectively combines business rules and predictive analytics using decision modeling. What are you waiting for?

Jim Sinur, VP of Research and Aragon Fellow at Aragon Research recently posted “Better Decisions with Decision Management” to his blog. Jim begins by describing Decision Management as “another discipline that will help consistently deliver better decisions”, especially when added to analytics and AI.

It’s great to have Jim’s focus turn to a Decision Management Framework and a Decision Management Platform – we are excited to see what he comes up with.

Of course we use Decision Management on all our projects, applying our unique Decisions First approach to ensure success. Check out the Decision Management Manifesto for our philosophy and if you want our take on a Decision Management Platform, check out the Decision Management Systems Platform Technologies Report with lots of detail on current technology and approaches, use cases and more.

Analytics are only valuable if your enterprise’s decision making changes for the better. You need to build an analytic enterprise that leverages analytics to inform strategy, empower people, and especially drive systems.  An analytic enterprise uses analytics to solve its most critical run-the-business problems, and uses increasingly advanced and diverse analytics to maximize its ability to get value from data.

There are three critical success factors for building an analytic enterprise – focusing on business decisions, moving to predictive and prescriptive analytics, and focusing on continuous improvement rather than one-time big wins. You can learn more about how and why to become an analytic enterprise in this white paper Building An Analytic Enterprise and the associated webinar recording here.

This research was sponsored by Teradata.