In-memory databases and in-memory analytics are interesting technologies when it comes to Decision Management Systems. Memory is thousands of times faster than disk, and the amount of memory available on a single node is increasing rapidly. Putting data in memory and executing analytics against it, especially if the analytics engine has plenty of memory too, can dramatically increase the speed of producing complex analytic models. This allows companies to better cope with increased data complexity (which is driving more use of multi-part ensemble analytic models, for instance) while still decreasing the time for each iteration in modeling, making data miners and data scientists more productive.
I have written about a few in-memory technologies and approaches in recent months (including IBM DB2 BLU, SAS's overall in-memory strategy, and SAP Predictive Analytics with its support for HANA in-memory), and into this space has come Teradata with its new Intelligent Memory. I would summarize the idea behind the Teradata Intelligent Memory product in three points:
- While much faster than disk, and much cheaper than it used to be, memory is still roughly 80x more expensive than disk
- Data volumes are exploding even faster than memory availability so putting all your data in memory is impractical
- Why ask your DBA to do manually what your system can do automatically?
Teradata has taken its prior work on how “hot” data is – how important a given piece of data is to the I/O of a database – and extended it so that very hot data is placed into memory. This is exactly what you would do manually – put the data that makes the biggest performance difference into memory – but Teradata does it automatically and dynamically, constantly updating what’s in memory versus what’s only on disk to continually optimize performance. Nice.
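To make the idea concrete, here is a minimal sketch of temperature-based data placement. This is not Teradata's actual algorithm – the class name, the simple access-count "temperature" score, and the periodic decay are all my own illustrative assumptions – but it shows the general pattern: track how hot each block of data is, keep the hottest blocks in a fixed-size memory tier, and let cooling data drift back to disk without any manual DBA intervention.

```python
import heapq
from collections import defaultdict

class TemperatureTieredStore:
    """Toy sketch of automatic hot-data placement (hypothetical, not
    Teradata's implementation): the most frequently accessed blocks are
    promoted into a fixed-size memory tier; the rest stay on disk."""

    def __init__(self, memory_capacity, decay=0.9):
        self.memory_capacity = memory_capacity  # max blocks held in memory
        self.decay = decay                      # factor that ages old accesses
        self.temperature = defaultdict(float)   # block id -> hotness score
        self.in_memory = set()                  # currently memory-resident blocks

    def access(self, block_id):
        # Every access raises the block's temperature, then placement
        # is rebalanced automatically -- no manual tuning step.
        self.temperature[block_id] += 1.0
        self._rebalance()
        return "memory" if block_id in self.in_memory else "disk"

    def tick(self):
        # Periodically decay all temperatures so data that stops being
        # used cools off and is eventually demoted back to disk.
        for block_id in self.temperature:
            self.temperature[block_id] *= self.decay

    def _rebalance(self):
        # Keep only the N hottest blocks memory-resident.
        hottest = heapq.nlargest(self.memory_capacity,
                                 self.temperature,
                                 key=self.temperature.get)
        self.in_memory = set(hottest)

store = TemperatureTieredStore(memory_capacity=2)
for _ in range(5):
    store.access("customers")    # very hot table
for _ in range(3):
    store.access("orders")       # warm table
store.access("archive_2001")     # rarely touched data

print(store.access("customers"))     # -> memory (hot block is promoted)
print(store.access("archive_2001"))  # -> disk (cold block stays on disk)
```

The key point the sketch illustrates is the automation: placement is a side effect of the observed workload, which is exactly the "why ask your DBA to do manually what your system can do automatically?" argument above.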
Anyway, I am going to be writing some thought leadership pieces on in-memory processing as it relates to Decision Management Systems, so look for those in the coming months.