Evaluating Kyoto Cabinet

Today, I wanted to see whether Kyoto Cabinet could do a better job with table lookups than MySQL.

I have a 1.7+ billion row table in MySQL with three columns: a 64-bit int and two 32-bit ints. This yields about 28GB on disk in MySQL with MyISAM, which is about right if you multiply 1.7 billion rows by 16 bytes.  The sole reason this table exists is to provide lookups of the two smaller ints from the larger one.  Naturally, an index is warranted, and in this case the index takes up about the same amount of space as the table itself.  Okay, maybe a little more (29,478,814,827 bytes for the table and 30,052,789,248 for the index).  Did I mention that the index takes about a day to generate on a machine with gobs (128GB) of memory and SSDs?
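A quick back-of-the-envelope check of those numbers (the 1.7 billion row count is approximate):

```python
# One 64-bit int (8 bytes) plus two 32-bit ints (4 bytes each) per row.
row_bytes = 8 + 4 + 4            # 16 bytes of payload per row
rows = 1_700_000_000

data_bytes = rows * row_bytes    # 27,200,000,000 bytes, ~25.3 GiB
print(data_bytes)                # 27200000000

# Actual on-disk sizes reported by MySQL, for comparison:
table_bytes = 29_478_814_827     # ~17.3 bytes/row with MyISAM overhead
index_bytes = 30_052_789_248     # ~17.7 bytes/row of index
print(round(table_bytes / rows, 1), round(index_bytes / rows, 1))
```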

I figured, hey, I’m using it like a key-value store, so could I do better? Kyoto Cabinet seems like something to try, since I’ve heard good things about Tokyo Cabinet.

Here’s what I did:

For space efficiency, I stored the 64-bit int as an 8-character key and packed the two 32-bit ints into an 8-character value.  I used KC’s HashDB and set HashDB::TLINEAR, tune_buckets to 2 billion (2 * 2^30), and tune_map to 16GB (16*2^30).   TLINEAR is recommended for space efficiency, tune_buckets should be within a factor of 2 of the expected key count, and tune_map should reflect the expected overall db size.  I think my values were in the right ballpark.
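The key/value layout described above can be sketched with Python's struct module (the actual inserts went through KC's HashDB API; this just illustrates the packing):

```python
import struct

def pack_record(big, a, b):
    """Pack the 64-bit int as an 8-byte key and the two 32-bit ints
    into an 8-byte value, both big-endian."""
    key = struct.pack(">Q", big)       # 8 bytes
    value = struct.pack(">II", a, b)   # 4 + 4 = 8 bytes
    return key, value

def unpack_value(value):
    """Recover the two 32-bit ints from the packed value."""
    return struct.unpack(">II", value)

key, value = pack_record(0xDEADBEEFCAFEF00D, 42, 7)
assert len(key) == 8 and len(value) == 8
assert unpack_value(value) == (42, 7)
```

So each record carries exactly 16 bytes of payload before any of KC's own per-record overhead.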

What I found was that KC looked very fast for small data sizes, but its insertion time seemed to increase linearly with the number of keys inserted.  Here is a plot relating insertion time (in seconds) for each batch of 64K keys to the total number of keys inserted.

You can see that there are a couple of bumps in the curve, but the times keep increasing: 222 seconds for a 64K batch once 2M keys have been inserted, versus 1 second (or less) for the first few 64K batches.  This doesn’t seem like it’s going to work for 2 billion.
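The measurement loop looked roughly like this. A minimal sketch: a plain dict stands in for the opened KC HashDB so the harness runs anywhere, and the batch size and key encoding match the description above.

```python
import time

BATCH = 64 * 1024  # keys per timed batch, as in the plot

def bench(store, total_keys):
    """Insert total_keys sequential keys in 64K batches,
    recording (keys inserted so far, seconds for this batch)."""
    timings = []
    key = 0
    while key < total_keys:
        start = time.monotonic()
        for _ in range(BATCH):
            # 8-byte key, 8-byte value, matching the packed layout.
            store[key.to_bytes(8, "big")] = b"\x00" * 8
            key += 1
        timings.append((key, time.monotonic() - start))
    return timings

# With KC you would pass an opened HashDB here; a dict is the stand-in.
results = bench({}, 256 * 1024)
```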

MySQL is looking pretty good here.  Though the index is large, it works out to about 17–18 bytes per record, which doesn’t sound that much bigger than KC’s roughly 10 bytes of per-record overhead.

I don’t know if I have any alternatives.  Perhaps MongoDB: I think it has about 12 bytes of overhead per record just because of BSON, but if it has a more reasonable insertion time, it may be worth a look.


About Daniel

I write distributed database software. Coding is fun. Love learning languages (spoken and computer). Always looking for opportunities to use advanced math in work and daily life. github: wangd
This entry was posted in development.

4 Responses to Evaluating Kyoto Cabinet

  1. somedude says:

    Hello, this is interesting, but which database structure did you use in your test ? HashDB or TreeDB ?

    What about LevelDB ?

    Thank you.

  2. Daniel says:

    I didn’t try TreeDB. Would it be better? The application’s accesses are not expected to have any locality in the key space, so any benefit from TreeDB’s ordered storage is unlikely. Since requests will be random in keys, each access will traverse a new path from root to leaf. I am trying to minimize disk seeks, so it seemed that the HashDB layout would be superior. Then again, MySQL uses a tree index, and its performance seemed to work out somehow.

    I might try TreeDB in the future if I get more time for testing. Still, I haven’t seen any evidence that TreeDB would be noticeably faster. The tests I’ve seen show that TreeDB is a smidgen faster for inserts, but somewhat slower for gets and deletes. I think I’m the only one to have tested at the 1 billion scale (the application will need to support 50–70 billion).

  3. angelo says:

    Informative. I’ve done a similar experiment on KC.
    I guess the linear increase in insertion time is because of the hash implementation: as more and more records are in KC, more collisions are likely to occur. But the read times should be better than TreeDB’s. It seems that you didn’t provide a read-performance chart.
