Sunday, September 13, 2009

Kickfire’s approach to parallelism

I was chatting recently with Raj Cherabuddi, founder of Kickfire, about Kickfire’s approach to parallelism, and I think that some of the problems they have to deal with in parallelizing queries are quite different from those faced by standard parallel database systems, and warrant discussion in a blog post.

Parallel databases typically achieve parallelism via “data-partitioned parallelism”. The basic idea is that data is horizontally partitioned across processors, and each processor executes a query on its own local partition of the data. For example, let’s say a user wants to find out how many items were sold over $50 on August 8th 2009:

SELECT count(*)
FROM lineitem
WHERE price > $50 AND date = '08/08/09'

If the lineitem table is partitioned across different processors, then each processor will execute the complete query locally, computing a count of all tuples that pass the predicates on its partition of the data. These counts can then be combined into a global count in a short final “merge” step. The vast majority of the query (everything except the very short final merge step) can thus be processed completely in parallel across processors.
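
To make this concrete, here is a minimal sketch of data-partitioned parallelism for this query, with a toy in-memory table and a Python process pool standing in for the per-partition processors; the data, partitioning, and names are illustrative assumptions, not how any real parallel database is implemented:

from multiprocessing import Pool

def local_count(partition):
    # each "processor" runs the entire query on its own partition of the data
    return sum(1 for price, date in partition
               if price > 50 and date == '08/08/09')

if __name__ == '__main__':
    lineitem = [(60.0, '08/08/09'), (40.0, '08/08/09'),
                (75.0, '08/08/09'), (75.0, '08/07/09')]
    partitions = [lineitem[0::2], lineitem[1::2]]    # horizontal partitioning
    with Pool(len(partitions)) as pool:
        partial_counts = pool.map(local_count, partitions)
    print(sum(partial_counts))                       # short final "merge" step -> 2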

Let’s assume that there are no helpful indexes for this query. A naïve implementation would use the iterator model to execute this query on each processor. The query plan would consist of three operators: a table scan operator, a selection operator (for simplicity, let’s assume it is separate from the scan operator), and an aggregation (count) operator. The aggregation operator would call “getNext” on its child operator (the selection operator), and the selection operator would in turn call getNext on its child operator (the scan operator). The scan would read the next tuple from the table and return control, along with the tuple, to the selection operator. The selection operator would then apply the predicate to the tuple. If the predicate passes, the selection operator would return control, along with the tuple, to the count operator, which increments the running count. If the predicate fails, then instead of returning control to the count operator, the selection operator would call getNext again on its child (the scan operator) and apply the predicate to the next tuple (and keep doing so until a tuple passes the predicate).
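
Here is a minimal sketch of the iterator model for this query plan; the operator names and getNext interface are illustrative, not those of Kickfire or any particular engine:

class Scan:
    def __init__(self, table):
        self.it = iter(table)

    def get_next(self):
        return next(self.it, None)              # returns one tuple per call

class Select:
    def __init__(self, child, predicate):
        self.child, self.predicate = child, predicate

    def get_next(self):
        # keep pulling tuples from the child until one passes the predicate
        tup = self.child.get_next()
        while tup is not None and not self.predicate(tup):
            tup = self.child.get_next()
        return tup

class Count:
    def __init__(self, child):
        self.child = child

    def run(self):
        count = 0
        while self.child.get_next() is not None:
            count += 1                          # one getNext round trip per tuple
        return count

lineitem = [(60.0, '08/08/09'), (40.0, '08/08/09'), (75.0, '08/07/09')]
plan = Count(Select(Scan(lineitem),
                    lambda t: t[0] > 50 and t[1] == '08/08/09'))
print(plan.run())                               # -> 1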

It turns out that the iterator model is quite inefficient from an instruction cache and data cache perspective, since each operator runs for a very short time before yielding control to a parent or child operator (it processes just one tuple). Furthermore, there is significant function call overhead, as “getNext” is often called multiple times per tuple. Consequently, modern systems run each operator over batches of tuples (instead of on single tuples) to amortize initial cache miss and function call overheads over multiple tuples. Operator output is buffered, and a pointer to this buffer is sent to the parent operator when it is the parent operator’s turn to run.
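
A sketch of this batched/staged alternative might look like the following; the batch size and operator interfaces are, again, illustrative assumptions:

BATCH_SIZE = 4096

def scan(table):
    for i in range(0, len(table), BATCH_SIZE):
        yield table[i:i + BATCH_SIZE]              # hand a whole buffer upward

def select(batches, predicate):
    for batch in batches:
        out = [t for t in batch if predicate(t)]   # tight loop over one buffer
        if out:
            yield out

def count(batches):
    return sum(len(batch) for batch in batches)

lineitem = [(price, '08/08/09') for price in range(1, 10001)]   # toy data
result = count(select(scan(lineitem),
                      lambda t: t[0] > 50 and t[1] == '08/08/09'))
print(result)                                      # -> 9950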

Whether the iterator model or the batched/staged model is used, only one operator runs at a time (per processor). Thus, the only form of parallelism is the aforementioned data-partitioned parallelism across processors. Even if a processor has multiple hardware contexts/cores, each processing resource will be devoted to processing the same single operator at any given time (for cache performance reasons; see e.g. this paper).

Kickfire, along with other companies that perform database operations directly in hardware using FPGA technology, such as Netezza and XtremeData, needs to take a different approach to parallelism.

Before discussing the effect on parallelism, let’s begin with a quick overview of FPGA (field programmable gate array) technology. At the lowest level, an FPGA contains an array of combinational logic, state registers, and memory that can be configured via a mesh of wires to implement desired logical functions. Nearly any complex algorithm, including database operations such as selection, projection, and join, can be implemented in an FPGA and thereby run at the speed of the hardware. Not only is performance improved by running these algorithms in hardware, but the chip can also be run at orders of magnitude lower clock frequencies, resulting in commensurate gains in power efficiency. In many cases, operations that take hundreds to thousands of CPU instructions can be performed in a single clock cycle in FPGA logic.

Kickfire therefore employs transistor-based processing engines in the FPGA to natively execute complete pipelines of relational database operations. The scale and density of the VLSI processes used in today’s FPGAs enable a large number (on the order of hundreds) of these custom operations to occur in parallel, allowing the use of parallel processing algorithms that improve performance even further.

The ability to have a large number of operations occurring in parallel means that the query processing engines do not need to switch back and forth between different operators (as in the iterator and batched/staged schemes described above). However, if you want to get the most out of this parallelism, data-partitioned parallelism can only get you so far.

For example, if every processing unit is devoted to performing a selection operation on a different partition of the data, then the results of the selection operators will build up, eventually exceeding the size of on-chip and off-chip buffers and thereby starving the execution engines. Consequently, Kickfire implemented efficient pipelined parallelism in addition to data-partitioned parallelism, so that all operators in a query plan run in the hardware at the same time and data is consumed at approximately the rate at which it is produced. Kickfire implemented advanced networking techniques in the areas of queuing and flow control to manage the data flow between the multiple producers and consumers, ensuring that the data, in most cases, stays on the chip and only occasionally spills to memory (off-chip buffers). Keeping intermediate datasets live on the chip prevents memory latency and bandwidth from becoming a bottleneck.

However, data-partitioned parallelism is still necessary, since operators consume data at different rates. For example, if a selection operator whose predicate has 50% selectivity (1 in 2 tuples passes the predicate) feeds an aggregation operator (as in the example above), then one wants to spend approximately twice as much time doing selection as aggregation (since the aggregation operator will only have half as many tuples to process as the selection operator), so Kickfire will use data-partitioned parallelism to run twice as many selection operators as parent aggregation operators.
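
To illustrate the combination of pipelined and data-partitioned parallelism in software terms (Kickfire, of course, does this in FPGA logic rather than with threads), here is a toy sketch in which two selection workers feed one aggregation worker through a small bounded buffer that provides simple flow control; all sizes, ratios, and data are made up for illustration:

import threading, queue

DONE = object()                       # end-of-stream marker
results = queue.Queue(maxsize=8)      # small bounded buffer between stages

def selection_worker(partition):
    for price, date in partition:
        if price > 50 and date == '08/08/09':
            results.put((price, date))          # blocks when the buffer is full
    results.put(DONE)

def aggregation_worker(num_producers, out):
    count = finished = 0
    while finished < num_producers:
        item = results.get()
        if item is DONE:
            finished += 1
        else:
            count += 1
    out.append(count)

lineitem = [(p % 100, '08/08/09') for p in range(10_000)]
partitions = [lineitem[0::2], lineitem[1::2]]   # 2 selection workers : 1 aggregator

out = []
consumer = threading.Thread(target=aggregation_worker, args=(2, out))
producers = [threading.Thread(target=selection_worker, args=(p,)) for p in partitions]
consumer.start()
for t in producers:
    t.start()
for t in producers:
    t.join()
consumer.join()
print(out[0])                                   # -> 4900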

For example, a hardcoded Kickfire query plan might look like the figure below (ignoring the column-store-specific operators, which are a story for another day):

[Figure: example Kickfire query plan with parallelized selection, join, and aggregation operators over T1 and T2]

Note that the selection operations on T1 and T2, along with the join between these two tables, occur four times in the query plan (this is data-partitioned parallelism). Since the join has a reasonably low cardinality, there is no need for the parent selection operator to also appear four times in the query plan; rather, it can appear twice, with each operator processing the results from two child join operators. Similarly, since these selection operators produce fewer outputs than inputs, the parent operator above them only needs to appear once. Data from one operator in the query plan is immediately shipped to the next operator for processing.

Kickfire also claims to be able to devote hardware to running multiple queries at the same time (inter-query parallelism). Getting the right mix of data-partitioned parallelism, pipelined parallelism, and inter-query parallelism is a tricky endeavor, and is part of Kickfire’s “secret sauce”. Clearly, this requires some amount of knowledge about the expected cardinality of each operator, and the Kickfire software uses information from the catalog to help figure all of this out. One would expect this process to get more difficult for complex queries --- it will be interesting to see how Kickfire performs on complex workloads as they continue to gain customer traction (Raj makes a compelling case that, in fact, it is the most complex queries where FPGA technology can shine the brightest).

In a nutshell, Kickfire uses column-oriented storage and execution to address I/O bottlenecks (column-oriented storage has been covered extensively elsewhere in my blog, but you can read about the specifics of Kickfire’s column-store on their blog), and an FPGA-based data-flow architecture to address processing and memory bottlenecks. Their “SQL chip” acts as a coprocessor and works in conjunction with the x86 processors in their base server (which run a SQL execution engine in software when needed, though this is usually the exception path). By alleviating these three important bottlenecks, Kickfire is able to deliver high performance, yet still achieves tremendous power efficiency thanks to the low clock frequencies.

Overall, although I have openly questioned Kickfire’s go-to-market strategy in past posts (see here and here), their non-technical departments seem a little disorganized at times (see Jerome Pineau’s experience), and some highly visible employees are no longer with the company (notably Ravi Krishnamurthy, who presented their SIGMOD paper, and Justin Swanhart, who did a nice job explaining the Kickfire column-store features in the aforementioned write-up), I remain a fan of their technology. If they make it through the current difficult economic climate, it will be by virtue of their technology and the tireless work of people like Raj. As the rate of clock speed increase in commodity processors continues to slow, being able to perform database operations in hardware becomes an increasingly attractive proposition.

Thursday, September 3, 2009

A tour through hybrid column/row-oriented DBMS schemes

There has been a lot of talk recently about hybrid column-store/row-store database systems. This is likely due to the many announcements along these lines in the past month, such as Vertica’s recent 3.5 release, which contained FlexStore; Oracle’s recent revelation that Oracle Database 11g Release 2 uses column-oriented storage for the purposes of superior compression; and VectorWise’s recent decloaking, which also announced an optional hybrid storage layer. Furthermore, analysts like Curt Monash and Philip Howard are predicting further development in this space (see here and here).

It’s surprising to me that it has taken this long before we started seeing commercially available implementations of hybrid systems. The research community has been publishing papers on hybrid systems for decades, with straightforward proposals that could easily have been implemented in commercial systems appearing as early as eight years ago.

Different approaches to building hybrid systems yield very different performance properties and solve different sets of problems. Thus, as more hybrid systems become commercially available, and as more companies consider developing their own hybrid system, it is important that people understand the different tradeoffs involved between the various hybrid schemes. My goal in this post is to educate people who are not part of the SIGMOD/VLDB research community about three well-known approaches to building hybrid systems taken by different research efforts, and give pointers to research papers where the reader can find out more detail. Each approach has its own set of advantages and disadvantages and I will try to list both sides of the tradeoff in each case. The goal of this post is not to say that one type of hybrid scheme is better than another --- each scheme can be a good fit in the right situation.

Approach 1: PAX

The PAX scheme was published in 2001 in the VLDB paper “Weaving Relations for Cache Performance” by Natassa Ailamaki, David DeWitt, Mark Hill, and Marios Skounakis. The basic idea is the following: instead of storing data row-by-row within a disk block, store it column-by-column. This is different from a “pure” column-store, which stores each column in entirely separate disk blocks. The key difference is that if you had a table with 10 attributes, then in a “pure” column-store, data from each original tuple is spread across 10 different disk blocks, whereas in PAX, all data for each tuple can be found in a single disk block. Since a disk block is the minimum granularity with which data can be read off of disk, in PAX, even if a query accesses only 1 of the 10 columns, it is impossible to read only this single column off of disk, since each disk block contains data for all 10 attributes of the table.
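
A toy sketch of the difference between a conventional row-oriented (NSM) page and a PAX page for a 3-attribute table; the layouts are simplified illustrations (real pages also contain headers, slot arrays, and so on):

tuples = [(1, 60.0, '08/08/09'),
          (2, 40.0, '08/08/09'),
          (3, 75.0, '08/07/09')]

# Row-oriented (NSM) page: values of different attributes interleave.
nsm_page = [value for tup in tuples for value in tup]
# -> [1, 60.0, '08/08/09', 2, 40.0, '08/08/09', 3, 75.0, '08/07/09']

# PAX page: one "minipage" per attribute, but all attributes of every tuple
# still live inside the same disk block.
pax_page = [[tup[i] for tup in tuples] for i in range(3)]
# -> [[1, 2, 3], [60.0, 40.0, 75.0], ['08/08/09', '08/08/09', '08/07/09']]

# Scanning just the price attribute touches one contiguous minipage in PAX,
# rather than striding through interleaved values as in the NSM page.
prices = pax_page[1]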

To understand why this is a good idea, some context is necessary. At the time the paper was written, column-stores (called the “DSM model” in the paper) had made very limited impact on the commercial landscape (there was Sybase IQ, but it was very much a niche product at the time). It was widely believed that the reason the DSM model had failed to take off was its high tuple reconstruction costs.

Let’s say a query accessed three out of ten columns from a table and required some operator (like an aggregation) that required each of these three columns to be scanned fully. A row-store would have to do a complete table scan, wasting some I/O reading the 7 irrelevant columns in addition to the 3 relevant ones. But at least it would read the whole table sequentially. The column-store would only have to read the 3 relevant columns, but would have to seek back and forth between the 3 columns, doing tuple reconstruction. In 2001, servers had nowhere near the memory capacities they have today, so extensive prefetching was not an option (i.e., instead of reading one block from column 1 and then one block from column 2 and then one block from column 3 and then the next block from column 1, etc., prefetching allows you to read n blocks from column 1 and then n blocks from column 2, etc, allowing the seek cost to be amortized over a large amount of sequential reads, but you need enough memory to keep n blocks from each column in memory at once). Given how expensive seek costs are relative to sequential access, it is no accident column-stores didn’t take off until system memories increased to recent levels to allow for significant prefetching. (Research on late materialization to delay tuple reconstruction until the end of the query when fewer tuples need to be materialized also helped).
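
A back-of-the-envelope sketch of this effect (all numbers are illustrative assumptions, not measurements from any real system):

seek_ms, block_read_ms = 4.0, 0.1     # assumed per-seek and per-block costs
blocks_per_column = 10_000            # blocks to scan per column
columns_read = 3                      # relevant columns for the query

def scan_time_ms(prefetch_depth):
    # one seek each time the scan switches columns, i.e. once every
    # `prefetch_depth` blocks read from a given column (this requires enough
    # memory to hold prefetch_depth blocks per column)
    seeks = columns_read * blocks_per_column / prefetch_depth
    reads = columns_read * blocks_per_column
    return seeks * seek_ms + reads * block_read_ms

print(scan_time_ms(1))      # no prefetching: 123,000 ms, seek-dominated
print(scan_time_ms(1000))   # deep prefetching: 3,120 ms, nearly sequential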

Anyway, PAX was able to achieve the CPU efficiency of column-stores while maintaining the disk I/O properties of row-stores. For those without detailed knowledge of column-stores, this might seem strange: the way most column-stores pitch their products is by accentuating the disk I/O efficiency advantage (you only need to read in from disk exactly those attributes accessed by a particular query). Why would a column-store want disk access patterns equivalent to a row-store’s? Well, it turns out column-stores have an oft-overlooked and significant CPU efficiency advantage as well. The aspect of CPU efficiency that the PAX paper examined was cache hit ratio and memory bandwidth requirements. It turns out that having column data stored sequentially within a block allows cache lines to contain data from just one column. Since most DBMS operators only operate on one or two columns at a time, the cache is filled with relevant data for that operation, thereby reducing CPU inefficiency due to cache misses. Furthermore, only the columns relevant to any particular operation need to be shipped from memory.

Bottom line:

Advantages of PAX:
  • Yields the majority of CPU efficiency of column-stores
  • Allows for column-oriented compression schemes, which can improve compression ratios due to increased data locality (data from the same attribute domain is stored contiguously). This can improve performance, since the more the data can be compressed, the less time needs to be spent reading it in from storage.
  • Is theoretically fairly easy to implement in a row-store system to get some of the column-store advantages. I say “theoretically” because no commercial row-store system actually did this (to the best of my knowledge) until Oracle 11gR2.

Disadvantages of PAX:
  • Equivalent I/O properties to row-stores (not counting compression), in the sense that irrelevant columns still have to be read from storage in the same blocks as the needed columns for any particular query. In 2001 this was an advantage. Today, for typical analytical workloads, it is a significant disadvantage. (For less scan-oriented workloads, such as tuple lookups and needle-in-a-haystack queries, it remains an advantage.)
  • The cache prefetching features on modern processors renders some of the cache efficiency of PAX obsolete (PAX no longer makes a large difference on cache hit ratio). However, it still reduces the demands on memory bandwidth, and other CPU advantages of column-stores, such as vectorized processing, remain possible to achieve in PAX.

Approach 2: Fractured Mirrors

This scheme was published in 2002 in the VLDB paper “A Case for Fractured Mirrors” by Ravi Ramamurthy, David DeWitt, and Qi Su. (Yes, you read that right. The University of Wisconsin DBMS group led by DeWitt authored both seminal papers on hybrid column-store/row-store systems.) The approach is essentially the following: you’re going to replicate all of your data anyway for high availability and/or disaster recovery, so why not have different storage layouts in each replica? That way, if you have a tuple-lookup or a needle-in-a-haystack query, you send it to the row-store replica. If the query is more scan-oriented (e.g. an aggregation or summarization query), you send it to the column-store replica. The implementation in the paper is a little more complicated (to avoid skew, each node contains part of the column-store and part of the row-store), but the above description is the basic idea.
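
At the routing level, the idea can be sketched as follows; the classification heuristic here is a made-up placeholder, not what the paper actually proposes:

def route(query):
    # `query` is assumed to expose a rough shape of the access pattern
    if query['kind'] == 'tuple_lookup':
        return 'row_store_replica'      # one block fetch per lookup
    else:                               # scans, aggregations, summarization
        return 'column_store_replica'   # read only the touched columns

print(route({'kind': 'tuple_lookup'}))  # -> row_store_replica
print(route({'kind': 'scan'}))          # -> column_store_replica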

Most people agree (I think) that row-stores are more than an order of magnitude faster than column-stores for tuple-lookup queries, and column-stores are (at least) more than an order of magnitude faster than row-stores for scan-oriented queries. Here is how one comes to this conclusion: to look up a tuple in a row-store, one needs to read in just the one block that contains the tuple (let’s assume all relevant indexes for both the row-store and the column-store are in memory). In a column-store, one block needs to be read for each attribute. Assuming there are more than 10 attributes in the table, this is already more than an order of magnitude. On the other hand, for scan queries, if a query accesses less than 10% of the attributes of a table (the common case), column-stores get one order of magnitude improvement relative to row-stores immediately (in disk efficiency). Additionally, many have argued (see here and here) that column-stores get an additional order of magnitude improvement in CPU efficiency.

If you buy the above argument, then it is critical to use a scheme like fractured mirrors for mixed workloads. Given how often people talk about mixed workloads as being a key problem in enterprise data warehouses, it is surprising how long it has taken to see a commercial implementation along the lines written about in the research paper.

Advantages of Fractured Mirrors:
  • Every individual query gets sent to the optimal storage for that query. Performance is thus an order of magnitude better than either a pure row-store or a pure column-store on queries that are problematic for that type of store.

Disadvantages of Fractured Mirrors:
  • All data needs to be replicated. Obviously, in most cases you’re going to replicate the data anyway. But if you are already using the replication for something else (e.g. storing the data in a different sort order), then you either need to increase the replication factor or remove some of the additional sort orders in order to implement fractured mirrors.
  • If you really want to get the most out of this approach, you need to extend what is proposed in the research paper and have complete implementations of both row-store and column-store systems (since column-stores have very different query execution engines and query optimizers than row-stores). This is obviously a lot of code, and it precludes most companies from using the fractured mirrors approach. I am flummoxed as to why the only company with legitimate row-store and column-store DBMS products (Sybase, with ASE and IQ) hasn’t implemented the fractured mirrors approach yet.

Approach 3: Fine-grained hybrids

The VLDB 2003 paper by Hankins and Patel and the CIDR 2009 paper by Cudre-Mauroux, Wu, and Madden are examples of this approach. Here, individual tables can be divided into both row- and column-oriented storage. For example, if two columns are often accessed together (e.g. order-date and ship-date), they can be stored together (in rows), while the remaining columns are stored separately. This can be done within a disk block, within a table, or even at a slightly larger grain across tables. For example, if one table is often accessed via tuple lookups (e.g. a customer table), then it can be stored in rows, while other tables that are usually scanned (e.g. lineitem) can be stored in columns.
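
Here is a toy sketch of such a layout for an orders table, with the two date columns kept together as a row-style column group and the remaining attributes stored as individual columns; the grouping choice and data are illustrative assumptions, ideally driven by workload statistics:

orders = [
    # (order_key, order_date, ship_date, total_price)
    (1, '08/01/09', '08/03/09', 120.0),
    (2, '08/02/09', '08/05/09', 80.0),
]

storage = {
    # row-oriented column group: the two dates stay together in each entry
    ('order_date', 'ship_date'): [(o[1], o[2]) for o in orders],
    # remaining attributes kept as individual columns
    ('order_key',):   [o[0] for o in orders],
    ('total_price',): [o[3] for o in orders],
}

# A query touching both dates reads one contiguous group instead of
# reassembling values from two separate columns.
date_pairs = storage[('order_date', 'ship_date')]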

Advantages of fine-grained hybrids
  • If correct decisions are made about what attributes (and/or tables) should be stored in rows, and what attributes should be stored in columns, then you can get all of the performance advantages of fractured mirrors without the additional replication disadvantage.

Disadvantages of fine-grained hybrids
  • Depending on the DBMS you start with, it can be complex to implement. You essentially have to have both row- and column-oriented execution engines, the optimizer has to know about the differences between row-storage and column-storage, and indexing code has to be updated appropriately. It turns out that it is much easier to implement in a column-store that already supports early materialization (early materialization refers to the ability to reconstruct rows from column-storage early in a query plan) than in other types of systems. This is because early materialization requires query operators to be able to handle input in both column- and row-oriented format (it will be in column format if tuples haven’t been materialized yet; otherwise it will be in row-oriented format). Hence, the execution engine and optimizer already have knowledge about the difference between rows and columns and can act appropriately.
  • It requires some knowledge about a query workload. Otherwise incorrect decisions about tuple layout will be made, leading to suboptimal performance.


Commercial availability

Oracle, Vertica, and VectorWise have announced hybrid systems using one of these schemes (I have no inside knowledge about any of these implementations, and only know what’s been published publicly in all cases). It appears that Oracle (see Kevin Closson’s helpful responses to my questions in a comment thread to one of his blog posts) and VectorWise use the first approach, PAX. Vertica uses fine-grained hybrids (approach 3), though they probably could use their row-oriented storage scheme to implement fractured mirrors (approach 2) as well, if they so desire. Given that two out of the three authors of the fractured mirrors paper have been reunited at Microsoft, I would not be surprised if Microsoft were to eventually implement a fractured mirrors hybrid scheme.

Conclusion

There was nearly a decade of delay, but at long last we’re starting to see hybrid row/column-stores hit the marketplace. Row-stores and column-stores have very different performance characteristics, and they tend to struggle where the alternative excels. As workloads get more complex, hybrid systems will increase in importance.