Why in-memory doesn't matter (and why it does)

(This post is a bit of a blast from the past. It was originally published in 2009 and was lost during a migration in 2011. The software landscape it references is obviously dated and many links are likely broken, but the analysis is still relevant.)

Well, that title was going to be perfect flame-bait, but then I went all moderate and decided to write a blog that actually matters. So here's the low-down:

There's a lot of talk lately about in-memory and how it's the awesome. This is especially true in the SAP-o-sphere, primarily due to SAP's marketing might getting thrown behind Business Warehouse Accelerator (BWA) and the in-memory analytics baked into Business ByDesign.

I'm here today to throw some cold water on that pronouncement. Yes, in-memory is a great idea in a lot of situations, but it has its downsides, and it won't address a lot of the issues that people are saying it addresses. In the SAP space, I blame some of the marketing around BWA. In the rest of the internet, I'm not sure if this is even an issue.

Since I've actually done a fair amount of thinking about these issues (and as a result I troll people on Twitter about it), I thought maybe it'd be helpful if I wrote it down.

So let's get down to brass tacks:

How in-memory helps

In short: it speeds everything up.

How much? Well, let's do the math: Your high-end server hard drive has a seek time of around 2 ms. That's 2*10^-3 seconds (thanks Google). Yes, I'm ignoring rotational latency to keep it simple.

Meanwhile, fast RAM has a latency measured in nanoseconds. Let's say 10ns to keep it simple. That's 10^-8 seconds.

So, if I remember my arithmetic (and I don't), RAM is about 2*10^5, or 200,000 times faster than hard disk access.

Keep in mind that RAM is actually faster because the CPU-memory interface usually supports faster transfer rates than the disk-CPU interface. But then, hard disks are actually faster because there are ways to drastically improve overall access performance and transfer rates (RAID, iSCSI? - not really my area). Point is, RAM helps your data access go a lot faster.
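Just to spell out that back-of-the-envelope arithmetic (same round numbers as above), here it is as a few lines of Python:

```python
# Back-of-the-envelope latency comparison using the round numbers from above.
disk_seek_s = 2e-3     # ~2 ms seek time for a high-end server hard drive
ram_latency_s = 10e-9  # ~10 ns latency for fast RAM

speedup = disk_seek_s / ram_latency_s
print(f"RAM access is roughly {speedup:,.0f}x faster than a disk seek")
# -> RAM access is roughly 200,000x faster than a disk seek
```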

But ... er ... wait a second (or several thousand)

So here I am thinking, "Well, we're all fine and dandy then. I just put my job in RAM and it goes somewhere between 100,000 and 1,000,000 times as fast. Awesome!".

But then I remember that RAM isn't a viable backing store for some applications, like ERPs (no matter what Hasso Plattner seems to be saying) or any other application where you can't lose data, period. Yes it can act as a cache, but your writes (at least) are going to have to be transactional and will be constrained by the speed of your actual backing store, which will probably be a database on disk.                                                           

And then I see actual benchmarks meant to reflect the real world like this. For those who won't click the link, the numbers are a bit hard to read, but I'm seeing RAM doing about 10,000 database operations in the amount of time it takes a hard disk store to do about 100. That's only a 100x speedup.

Ok, now I'm back down to earth and I'm thinking, "I just put my job in RAM and I'll get maybe a 50-100x speedup but at the cost of significant volatility". (I'm also thinking that SAP's claimed performance improvements of 10x - 100x sound just about like what we'd expect.)

This is still really really good. It makes some things possible that were not possible before and it makes some things easy that used to be hard.

And finally, why in-memory doesn't matter

But really, what is the proportion of well-optimized workloads in the world? How often are people going to use in-memory as an excuse to be lazy about solving the actual underlying problems? In my experience, a lot. Already we are hearing things along the lines of, "The massive BW query on a DSO is slow? Throw the DSO into the BWA index." [Editor's note: A DSO is essentially a flat table. Also, the current version of BWA doesn't support direct indexing of DSOs, but it probably will soon, along with directly indexing ERP tables.]

Now's the part where we who know what we're doing tear these people to shreds and tell them to implement a real Information Lifecycle Management system and build an Inmon-approved data warehouse using their BW system (BW makes it relatively easy). Then that complex query on a flat table that used to take two days of runtime will run in 30 seconds.

Well, that would be one approach, but frankly most people and companies don't have the time or the organizational maturity in their IT function to pull this off. In a world where people have neither the time nor the business processes for this sort of thing, it starts to make sense to spend money on the problem instead, and something like BWA is a great thing in that context.

But it's not great because it's in-memory. It's great because it takes your data - that data you haven't had the time to properly build into a data warehouse with a layered and scalable architecture, highly optimized ROLAP stores, painstakingly configured caching, and carefully crafted delta processes - and it compresses it, partitions it, and denormalizes it (where appropriate). Then, as the icing on the cake, it caches the heck out of it in memory.

Let's be clear: BW already has in-memory capabilities. liveCache is used with APO, and the OLAP cache resides in memory. The reason BWA matters is not that it is in-memory. It matters because it does the hard work for you behind the scenes, and partially because of this it is able to use architectural paradigms like column-based stores, compression, and partitioning that deliver performance improvements for certain types of queries regardless of the backing store.

In-memory is great, and fast, and should be used. But in most ways that are really important, it doesn't matter all that much.

What does SAP mean by "In-memory"?

It's been a bit more than 2 years since SAP introduced the "In Memory" marketing push, starting with Hasso Plattner's speech at Sapphire ... or was it TechEd ... my memory fails me ;-)

It has been two years and I have yet to see a good understanding emerge in the SAP community about what SAP actually means when it talks about "In Memory". I put the phrase "In Memory" into quotes, because I want to emphasize that it has a meaning entirely different from the standard English meaning of the two words "in" and "memory". This is a classic case, best summed up by a quote from one of the favorite movies of my childhood:

Vizzini: HE DIDN'T FALL? INCONCEIVABLE.
Inigo Montoya: You keep using that word. I do not think it means what you think it means.

- IMDB

The only reasonably specific explanation of the "In Memory" term that I have seen from SAP is in this presentation by Thomas Zurek - on page 11.

If you want a coherent, official stance from SAP on "In Memory" and the impact of HANA on BW, I highly recommend reading and understanding this presentation. I think I can add a little more detail and ask some important questions, so here is my take:

Fact (I think...)

SAP is talking about at least 4 separate but complementary technologies when it says "In Memory":

1. Cache data in RAM

This is the easy one, and is what most people assume the phrase means. But as we will see below, this is only part of the story.

By itself, caching data in RAM is no big deal. Yes, with cheaper RAM and 64-bit servers, we can cache more data in RAM than ever before, but this doesn't give us persistence, nor does working on data in RAM guarantee a large speedup in processing for all data-structures. Often, more RAM is a very expensive way to achieve a very small performance gain.

2. Column-based storage 

Columnar storage has been around for a long time, but it was introduced to the SAP eco-system in the BWA (formerly BIA, now BAE under HANA - gotta respect the acronyms) product under the guise of "In Memory" technology. The introduction of a column-based data model for use in analytic applications was probably the single biggest performance win for BWA and followed in the footsteps of pioneering analytical databases like Sybase IQ, but that part of the story was largely ignored.

Interestingly, Sybase IQ is a disk-based database, and yet displays many of the same performance characteristics for analytical queries that BWA boasts. Further evidence that not all of BWA's magic is enabled by storing data in RAM.
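To make the row-versus-column distinction concrete, here is a toy sketch; the table and field names are invented for illustration, and a real column store does far more than this, but it shows why an analytical aggregate touches much less data in a columnar layout:

```python
# Toy illustration: the same table stored row-wise and column-wise.
# Table and field names are invented for the example.
rows = [
    {"order_id": 1, "region": "EMEA", "revenue": 120.0},
    {"order_id": 2, "region": "APJ",  "revenue":  80.0},
    {"order_id": 3, "region": "EMEA", "revenue": 200.0},
]

# Column-oriented layout: one contiguous list per column.
columns = {
    "order_id": [1, 2, 3],
    "region":   ["EMEA", "APJ", "EMEA"],
    "revenue":  [120.0, 80.0, 200.0],
}

# "SELECT SUM(revenue)" in a row store touches every field of every row...
total_row_store = sum(r["revenue"] for r in rows)

# ...while a column store only has to scan the single 'revenue' column.
total_column_store = sum(columns["revenue"])

assert total_row_store == total_column_store == 400.0
```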

3. Compression

So how do we fit all of that data into RAM? Well, in the case of BWA the answer is that we don't - it stores a lot of data on disk and then caches as much as possible in RAM. But we can fit a lot more data into RAM if it is compressed. Both BWA and HANA implement compression algorithms that shrink data volume by up to 90% (or so we are told).

Compression and columnar storage go hand-in-hand for two reasons:

a. Column-based storage usually sorts columns by value, typically at the byte-code level. This results in similar values being close to each other, which happens to be a data layout that allows highly efficient compression using standard compression algorithms that exploit similarities in adjacent data. Wikipedia has the scoop here: http://en.wikipedia.org/wiki/Column-oriented_DBMS#Compression

b. When queries are executed on a column-oriented store it is often possible to execute the query directly on the *compressed* data. That's right - for some types of queries on columnar databases you don't need to decompress the data in order to retrieve the correct records. This is because knowledge of the compression scheme can be built into the query engine, so query values can be converted into their compressed equivalents. If you choose a compression scheme that maintains the ordering of your keys (like Run Length Encoding), you can even do range queries on compressed data. This paper is a good discussion of some of the advantages of executing queries on compressed data: http://db.csail.mit.edu/projects/cstore/abadisigmod06.pdf
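Here is a minimal sketch of that idea using run-length encoding on a made-up sorted column; real engines use dictionary encoding and much cleverer schemes, but the principle of answering a query without decompressing is the same:

```python
# Sketch: run-length encoding a sorted column and answering a query
# without decompressing it back to individual values.
from itertools import groupby

# A sorted 'country' column, as a column store might lay it out.
country = ["DE", "DE", "DE", "FR", "FR", "US", "US", "US", "US"]

# Run-length encode: (value, run_length) pairs.
rle = [(value, sum(1 for _ in run)) for value, run in groupby(country)]
# -> [("DE", 3), ("FR", 2), ("US", 4)]

# "SELECT COUNT(*) WHERE country = 'FR'" runs directly on the runs:
count_fr = sum(length for value, length in rle if value == "FR")

# Because the encoding preserves sort order, a range predicate
# ("country BETWEEN 'DE' AND 'FR'") also works on the compressed form.
count_de_to_fr = sum(length for value, length in rle if "DE" <= value <= "FR")

assert count_fr == 2 and count_de_to_fr == 5
```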

4. Move processing to the data

Lastly, the BWA and HANA systems make heavy use of the technique of moving processing closer to the data, rather than moving data to the processing. In essence, the idea is that it is very costly to move large volumes of data across a network from a database server to an application server. Instead, it is often more efficient to have the database server execute as much processing as possible and then send a smaller result set back to the application server for further processing. This processing trade-off has been known for a long time, but the move-processing-to-the-data approach was popularized relatively recently as a core principle of the Map-Reduce algorithm pioneered by Google: http://labs.google.com/papers/mapreduce.html 

This approach is especially useful when an analytical database server (which tends to have high data volumes) implements columnar storage and parallelization with compression and heavy RAM-caching, so that it is capable of executing processing without becoming a bottleneck.
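A toy sketch of the trade-off, with invented data and function names: the point is simply that aggregating where the data lives means only the small result set crosses the wire.

```python
# Sketch of "move the processing to the data": rather than shipping every
# row across the network to the application server, the database node
# aggregates locally and returns only the small result set.
# All names and data here are invented for illustration.

sales_rows = [("EMEA", 120.0), ("APJ", 80.0), ("EMEA", 200.0)] * 100_000

def ship_rows_to_app_server(rows):
    # Naive approach: move all the data, then aggregate on the app server.
    transferred = rows                      # ~300,000 tuples over the wire
    totals = {}
    for region, revenue in transferred:
        totals[region] = totals.get(region, 0.0) + revenue
    return totals

def aggregate_on_db_node(rows):
    # Push-down approach: aggregate where the data lives, ship the result.
    totals = {}
    for region, revenue in rows:
        totals[region] = totals.get(region, 0.0) + revenue
    return totals                           # only a handful of tuples over the wire

assert ship_rows_to_app_server(sales_rows) == aggregate_on_db_node(sales_rows)
```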

Speculation

There are also a few technologies that I suspect SAP has rolled into HANA, but since they don't share the detailed technical architecture of the product, I don't know for sure.

1. Parallel query evaluation 

Parallel query execution (sometimes referred to as MPP, or massively-parallel-processing, which is a more generic term) involves breaking up, or sometimes duplicating, a dataset across more than one hardware node and then implementing a query execution engine that is highly aware of the data layout and is capable of splitting queries up across hardware. Often this results in more processing (because it turns one query into many, with an accompanying duplication of effort) but faster query response times (because each of the smaller sub-queries executes faster and in parallel). MPP is another concept that has been around for a long time but was popularized recently by the Map-Reduce paradigm. Several distributed DBMSes implement parallel query execution, including Vertica, Teradata, and HBase.
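A rough sketch of the pattern, using threads to stand in for hardware nodes (the partitioning scheme and data are invented for illustration):

```python
# Sketch of parallel query evaluation: one logical query is split into
# sub-queries, one per data partition, and the partial results are merged.
from concurrent.futures import ThreadPoolExecutor

partitions = [
    [("EMEA", 120.0), ("APJ", 80.0)],    # "node" 1
    [("EMEA", 200.0), ("AMER", 50.0)],   # "node" 2
]

def partial_sum(partition):
    # Each node answers the same query over its own slice of the data.
    totals = {}
    for region, revenue in partition:
        totals[region] = totals.get(region, 0.0) + revenue
    return totals

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(partial_sum, partitions))

# The coordinator merges the per-partition results into the final answer.
final = {}
for part in partials:
    for region, revenue in part.items():
        final[region] = final.get(region, 0.0) + revenue

assert final == {"EMEA": 320.0, "APJ": 80.0, "AMER": 50.0}
```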

2. Write-persistence mechanism

Since HANA is billed as ANSI SQL-compliant and ACID-compliant, it clearly delivers full write-persistence. What is not clear is what method is used to achieve fast and persistent writes along with a column-based data model. Does it use a write-ahead-log with recovery? Maybe a method involving a log combined with point-in-time snapshots? Some other method? Each approach has different trade-offs with regards to memory consumption and the ability to maintain performance under a sustained onslaught of write operations.
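Purely to illustrate one of those candidate approaches, here is a toy write-ahead-log sketch; it is a generic illustration of the technique, not a claim about how HANA actually persists writes, and every name in it is made up:

```python
# Sketch of one possible persistence scheme (a simple write-ahead log):
# every write is appended and flushed to durable storage before the
# in-memory structure is updated, so state can be replayed after a crash.
import json, os

class WalStore:
    def __init__(self, log_path):
        self.log_path = log_path
        self.data = {}
        self._recover()

    def _recover(self):
        # Replay the log to rebuild the in-memory state after a restart.
        if os.path.exists(self.log_path):
            with open(self.log_path) as log:
                for line in log:
                    key, value = json.loads(line)
                    self.data[key] = value

    def put(self, key, value):
        # Durability first: append and fsync before touching memory.
        with open(self.log_path, "a") as log:
            log.write(json.dumps([key, value]) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.data[key] = value

store = WalStore("/tmp/wal_demo.log")
store.put("customer:42", {"name": "ACME", "credit_limit": 10000})
```

The interesting trade-offs come from how often the log is checkpointed or snapshotted, and how well the scheme holds up under a sustained stream of writes, which is exactly the open question above.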

Conclusion

So, there are still a lot of questions about what exactly SAP means (or thinks it means) when it talks about "In Memory", but hopefully this helps to clarify the concept, and maybe prompts some more clarity from SAP about its technology innovations. There is no denying that BWA was, and HANA will be, a fairly innovative product, but for people using this technology it is important to get past the facade of an innovative black box and understand the technologies underneath and how the approach applies to the business, data, or technical problem we are trying to solve.