Deploying predictive analytics on everything – from mainframes to Hadoop

Predictive analytics is a powerful tool for managing risk, reducing fraud and maximizing customer value. Those already succeeding with predictive analytics are looking for ways to scale and speed up their programs and make predictive analytics pervasive. But they know there is often a huge gap between having analytic insight and deriving business value from it: predictive analytic models must be embedded in existing enterprise transaction systems or integrated with operational data infrastructure before they can impact day-to-day operations.

Meanwhile, the analytic environment is becoming increasingly complex, with more data types, more algorithms and more tools – including, of course, the explosion of interest in and use of R for data mining and predictive analytics. Getting value from all of this increasingly means executing analytics in real time to support straight-through processing and event-based system responses.

There is also increasing pressure to scale predictive analytics cost-effectively. A streamlined, repeatable and reliable deployment approach is critical to getting value at scale, and it must handle an increasingly complex IT environment that contains everything from the latest big data infrastructure – Hadoop, Storm, Hive, Spark – to transactional mainframe systems like IBM zSystems.

PMML – the Predictive Model Markup Language – is the XML standard for interchanging the definitions of predictive models, and it is a key interoperability enabler, allowing organizations to develop models in multiple tools, integrate the work of multiple teams and deploy the results into a wide range of systems.
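As a rough illustration of the workflow PMML enables, here is a minimal Python sketch using the open-source sklearn2pmml package (one of several PMML exporters – the package, its PMMLPipeline wrapper and the file name here are assumptions of this example, not part of the standard itself): train a model in one tool, export it as a PMML document, and hand the resulting XML file to whatever PMML-compliant scoring engine runs in production.

```python
# Minimal sketch: train a scikit-learn model and export it as PMML so a
# separate scoring engine (on a mainframe, in Hadoop, etc.) can execute it.
# Assumes the sklearn2pmml package (which needs a Java runtime) is installed.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

# PMMLPipeline captures the whole preprocessing-plus-model chain in one object
pipeline = PMMLPipeline([("classifier", LogisticRegression(max_iter=1000))])
pipeline.fit(X, y)

# Writes an XML document conforming to the PMML standard; any PMML-compliant
# engine can now score transactions with this model, independent of Python.
sklearn2pmml(pipeline, "iris_logreg.pmml")
```

The point of the round trip is that the .pmml file, not the training environment, is the deployment artifact: the same document can be loaded by scoring engines on very different platforms.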

I am working on a new paper on this topic; if you are interested, you can get a copy by signing up for our forthcoming webinar – Predictive Analytics Deployment to Mainframe or Hadoop – on March 3 at 11am Pacific, where I will be joined by Michael Zeller of Zementis.
