Back in 2011 I wrote a blog post stating that we shouldn’t get too excited about new in-memory databases and that, rather than being replaced, established relational databases would gain in-memory capabilities over time. With Oracle’s soon-to-be-released in-memory option, that prediction has now come true.
I believe the approach taken by Oracle is the right one. After all, why would we in the software industry learn new tools and skills to develop against a new database when we can get the same benefits as an incremental update to the databases we know well? Why would organizations running software move from a database they have known and trusted for decades to something new and untested? Granted, to fully leverage in-memory database technology most applications will need a degree of updating, but why go for a larger rewrite when a smaller update will do just fine?
An additional benefit of the dual-format approach taken by Oracle (storing data in memory and on disk concurrently) is that application developers and users can be selective about which portions of the data are loaded into memory. This allows optimizing for performance and memory usage at the same time.
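As a rough illustration of that selectivity, Oracle exposes the in-memory column store through an `INMEMORY` clause on ordinary DDL. The sketch below uses hypothetical table names (`sales`, `audit_log`) to show the general shape: a hot analytical table is populated into memory with a compression and priority setting, while a rarely-queried table is explicitly kept out.

```sql
-- Populate a hot table into the in-memory column store,
-- trading some memory for query speed (hypothetical table names).
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;

-- Keep a cold, rarely-analyzed table on disk only.
ALTER TABLE audit_log NO INMEMORY;
```

Because the on-disk format is unchanged, statements like these can typically be applied to an existing schema without altering the application; the optimizer decides per query whether to read from the column store or from disk.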
IFS has participated in the Oracle 12.1 beta program (the release that includes the in-memory option), and we’ve had the opportunity to give the new in-memory option a run for its money. Even though, in reality, achieving good results with new technologies like this isn’t always as “flip a switch” easy as it is made out to be, we have had good results with very little work on our side. The immediate appeal is to leverage in-memory for analytical use cases, with faster reporting and analysis. But there is also potential to speed up some of the more read-intensive parts of core business processes.
So am I excited about in-memory? Yes I am. But I am miles more excited about in-memory technology than about in-memory databases.