Big Data and Decision Management Systems: The impact of Volume

June 27, 2013

in Analytics, Business Rules, Decision Management

Table of contents for Big Data and Decision Management Systems

  1. Big Data and Decision Management Systems: The impact of Volume
  2. Big Data and Decision Management Systems: The impact of Variety
  3. Big Data and Decision Management Systems: The impact of Velocity

Big Data is often described in terms of an increase in volume, an increase in velocity and an increase in variety: More data, of more types, arriving more quickly. In this short series of blog posts I will discuss the impact of each aspect of Big Data on Decision Management Systems – systems designed to automate and manage operational decisions. First – the impact of Volume.

In an era where we must handle more data, and where the rate of increase in data is itself increasing, we have to face the limitations of human interpretation. Where we might once have assumed that a person could look at and usefully interpret all the relevant data, this is increasingly impractical. This has two main impacts on Decision Management Systems.

First, it makes the case for automating decisions stronger. Computers are generally much better at examining lots of data, and at doing so quickly enough to be useful. They can be set up to balance recency against long-term trends and to avoid many of the data interpretation problems that beset human decision-makers. As data volumes accelerate into the stratosphere, Decision Management Systems are your allies in making sense of all this data.
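As a small illustration (my own sketch, not from the original post), one simple way a system can balance recency against a long-term trend is to compare an exponentially weighted moving average, which favors recent points, with the long-run mean of the whole series:

```python
def ewma(values, alpha=0.3):
    """Exponentially weighted moving average: recent points weigh more."""
    avg = values[0]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
    return avg

def trend_signal(values, alpha=0.3):
    """Positive when recent activity runs above the long-term baseline."""
    long_term = sum(values) / len(values)
    recent = ewma(values, alpha)
    return recent - long_term

# A series that trends upward: the recency-weighted average
# ends up above the overall mean, so the signal is positive.
series = [10, 11, 10, 12, 14, 16, 18, 21]
print(trend_signal(series) > 0)  # → True
```

The `alpha` parameter controls how aggressively the system discounts history; tuning it is exactly the kind of judgment an automated decision system can apply consistently across millions of records.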

Second, it makes the case for operationalizing analytics stronger. I have written about this before (check out this article or this white paper), but the basic premise is that as data grows, you need more analytic models, and you need those models built more efficiently. This means applying more automation and more technology to the process of building the models themselves. This could be through machine learning (such as that from Skytree), through fully automated modeling capabilities (such as those from Predixion or KXEN) or through automation added to tools for data scientists (such as those from FICO, SAS or IBM/SPSS). It also means applying the latest in-memory and in-database technologies to decrease the time all this modeling takes. The days when an individual modeler could hand-craft a complete model, sampling data carefully and doing every step by hand, are gone. We have to industrialize analytics.
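To make "industrialized" model building concrete, here is a minimal sketch (my own illustration, assuming toy candidate models and data): instead of a modeler hand-crafting one model, the pipeline fits every candidate automatically and keeps whichever scores best on a holdout set.

```python
# Hypothetical sketch of automated model selection: fit several
# candidate models on training data and keep the one with the
# lowest error on held-out data.

def fit_mean(xs, ys):
    """Baseline model: always predict the training mean."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    """Mean squared error of a fitted model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_select(candidates, train, holdout):
    """Fit every candidate on train; return the best by holdout error."""
    fitted = [fit(*train) for fit in candidates]
    return min(fitted, key=lambda m: mse(m, *holdout))

train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])
holdout = ([5, 6], [10.1, 11.8])
best = auto_select([fit_mean, fit_linear], train, holdout)
print(best(7))  # the linear model wins and extrapolates to ~14
```

Real products wrap far richer candidate families (trees, ensembles, regularized regressions) in the same loop, which is what lets one data scientist produce hundreds of models instead of one.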

In part 2 I will discuss the impact of variety.
