
First Look – IBM In-Database Analytics


IBM SPSS has been supporting in-database analytic modeling for a while now. Their objective is to make it possible for analysts to run the complete data mining process end-to-end in-database – from accessing the data to data transformation and model building/scoring. In particular, they try to enable analysts to push data transformation and data preparation into the database as these are typically a big part of data mining projects. To achieve in-database execution they provide three main features – SQL Pushback, direct access to a database’s own analytic modeling routines and model deployment/scoring options.

To build a predictive analytic model in IBM SPSS Modeler, an analyst creates an analytic workflow. This consists of multiple tasks or nodes to read, merge or transform data; split data into different test sets; apply modeling algorithms and more. SQL Pushback takes the nodes in this workflow that relate to data access and transformation and pushes them to the database. The tool generates the SQL needed for these steps and executes it on the database from which the data was sourced. The generated SQL is tailored to each of the main supported databases (IBM DB2, Microsoft SQL Server, Netezza, Oracle and Teradata), and generic SQL is available for many nodes on other databases.
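To make the pushback idea concrete, here is a toy sketch (not IBM SPSS Modeler's actual internals; all node types and names are hypothetical) of how data-prep nodes could each contribute a SQL fragment that the tool composes into a single query run where the data lives:

```python
# Toy sketch of SQL pushback: each workflow node the tool knows how to
# translate contributes a SQL fragment; composing them yields one query
# that runs in the database instead of pulling rows to the modeling server.
# (Hypothetical node types - not IBM SPSS Modeler's real implementation.)

def pushback_sql(source_table, nodes):
    """Compose select/filter/derive nodes into a single SQL statement."""
    columns, predicates = ["*"], []
    for node in nodes:
        kind = node["type"]
        if kind == "select":        # keep only some columns
            columns = node["columns"]
        elif kind == "filter":      # row filter becomes a WHERE predicate
            predicates.append(node["condition"])
        elif kind == "derive":      # computed column becomes a SELECT expression
            columns = columns + [f'{node["expr"]} AS {node["name"]}']
        else:
            raise ValueError(f"node type {kind!r} cannot be pushed back")
    sql = f'SELECT {", ".join(columns)} FROM {source_table}'
    if predicates:
        sql += " WHERE " + " AND ".join(predicates)
    return sql

workflow = [
    {"type": "select", "columns": ["customer_id", "balance"]},
    {"type": "filter", "condition": "balance > 0"},
    {"type": "derive", "name": "log_balance", "expr": "LN(balance)"},
]
print(pushback_sql("accounts", workflow))
# SELECT customer_id, balance, LN(balance) AS log_balance FROM accounts WHERE balance > 0
```

A real implementation has to cope with joins, aggregations and database-specific syntax, but the principle is the same: translate what you can, and fall back to client-side execution for the rest.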

IBM SPSS Modeler also reorders work streams to maximize the effectiveness of this SQL, particularly in terms of keeping the data in the database. For instance, if multiple nodes that can be executed in-database are separated by one that cannot be, the nodes will be re-ordered to group the in-database nodes together where this is possible.
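The reordering idea can be sketched as follows (a simplified illustration, not the tool's actual algorithm): in a linear stream of nodes, hoist the ones that can run in-database ahead of the ones that cannot, preserving relative order within each group.

```python
# Toy sketch of workflow reordering: group the in-database-capable nodes
# so they form one contiguous run next to the data source, keeping each
# group's relative order. A real tool must first verify that the moved
# nodes actually commute (no data dependencies are violated).

def group_pushable(stream, pushable):
    """Stable partition: in-database nodes first, then client-side nodes."""
    in_db = [n for n in stream if n in pushable]
    client = [n for n in stream if n not in pushable]
    return in_db + client

stream = ["select", "custom_script", "filter", "derive", "model"]
pushable = {"select", "filter", "derive"}
print(group_pushable(stream, pushable))
# ['select', 'filter', 'derive', 'custom_script', 'model']
```

Here the hypothetical `custom_script` node cannot be translated to SQL, so it is moved after the three pushable nodes, letting all three execute in-database as one query.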

When in-database execution is possible for a node in the workflow, the node is color-coded purple to show this – modelers with strong database servers will try to turn “all the nodes purple” so that everything is being done in-database. Some customers write raw SQL to use more extended functions, like statistical functions, that would not automatically be pushed back. SQL Pushback can be turned off so that high-load production environments are not slowed by in-database analytics, and users can cache intermediate results in a database table simply by selecting a node and requesting caching.

The second element of in-database analytic modeling is building the model itself in-database. For this, IBM SPSS Modeler uses the analytic routines in Oracle (the ODM algorithms), Microsoft SQL Server, DB2 and InfoSphere Warehouse, as well as (since the 14.2 release in June) Netezza. These in-database algorithms are presented as new node types in the workflow, allowing a modeler to simply select them as part of their usual workflow. In addition, IBM SPSS Modeler has its own algorithms that can be used on the modeling server. The in-database algorithms allow data in the database to be scored, and some allow the model to be calculated live when the associated record is retrieved. These algorithms are typically parallelized by the database vendor, and IBM SPSS Modeler inherits that advantage automatically.

IBM SPSS Modeler supports a number of other deployment options besides the use of these in-database routines. A number of the standard IBM SPSS routines can generate SQL for scoring in-database – the model is built outside the database, but the generated SQL allows scoring to be done in-database afterwards. Several of these routines support parallel execution on the modeling server. Models, no matter how they were built, can also be deployed using Scoring Services and made available through a web services interface for live scoring. Models can also be deployed using IBM SPSS Decision Management.
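The build-outside, score-inside pattern is easy to picture with a toy example (illustrative only – not the SQL IBM SPSS Modeler actually emits): a linear model fitted on the modeling server is exported as a SQL expression, so new rows are scored where they live.

```python
# Toy sketch of SQL-based in-database scoring: a model's coefficients,
# learned outside the database, are rendered as a SQL expression that the
# database evaluates over the scoring table. (Hypothetical table/columns.)

def scoring_sql(table, intercept, coefficients):
    """Render a linear model's score as a SELECT over the scoring table."""
    terms = [str(intercept)] + [
        f"{coef} * {col}" for col, coef in coefficients.items()
    ]
    return f'SELECT *, {" + ".join(terms)} AS score FROM {table}'

sql = scoring_sql("new_customers", 0.5, {"age": 0.02, "balance": 0.001})
print(sql)
# SELECT *, 0.5 + 0.02 * age + 0.001 * balance AS score FROM new_customers
```

Because the scoring logic is just SQL, it runs with the database's own parallelism and the scored rows never leave the server.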

Don’t forget the Decision Management Technology Map


Comments on this entry are closed.

  • Thomas Tileston August 2, 2011, 7:50 pm

    How does this compare to the SAS/Teradata partnership for in-database analytics?
    In my experience with in-database analytics, I’ve found that data prep and data transformation are frequently too complex for SQL generators, especially with complex queries, sub-selects, multiple inner and outer joins, etc. Further, most analysts don’t have the skills to write their own complex SQL. Is this addressed?

    • James Taylor August 3, 2011, 8:50 am

      I don’t like to do comparisons as I have not used both approaches on a real project and so don’t have the kind of data a comparison takes. I think the approach they take is an interesting one as generating SQL should allow both for broad support across databases as well as access to specific syntax on each supported DBMS. I like the color-coding of activities as they move “in-database” too and the multi-database support from the modeling tool.
      My understanding is that the tool generates the SQL for the various data preparation activities it supports so the analyst does not need to write the SQL – they can if they want but the tool generates the SQL required. How complex a data prep/transformation project it supports is a great question and I will ask the IBM guys to respond with some examples.