EnhanceIO block cache – howto and benchmark
What is EnhanceIO?
EnhanceIO is a fork of Facebook's flashcache. It uses SSDs as cache devices for traditional rotating hard disk drives. I had one spare SSD at my office, so why not give it a try. I chose EnhanceIO over other block-caching drivers because it's one of the rare drivers that doesn't need any modification of the current system. All you need is the driver, and you can speed up your existing LV volume. It's not well documented, but I'll go through the whole process of setting it up, so it shouldn't be a problem for you to test it out.
Installation
Unfortunately EnhanceIO does not have prebuilt rpm/deb packages yet, but according to the developers it will soon be included in the default Debian kernel as a DKMS module. For now we have to go through the process of compiling it ourselves, but it shouldn't be too hard.
git clone https://github.com/stec-inc/EnhanceIO.git
# Note: if you get compilation errors, try: git clone -b 3.9-kernel https://github.com/stec-inc/EnhanceIO.git
cd EnhanceIO/Driver/enhanceio/
make && make install
cd ../../CLI/
cp eio_cli /sbin/
cp eio_cli.8 /usr/share/man/man8
Okay, now that we have it installed, we should reboot the server and verify the modules are loaded.
lynxdev ~ $ lsmod | grep enhanceio
enhanceio_rand    12749  0
enhanceio_lru     12831  0
enhanceio_fifo    12749  0
enhanceio        135074  3 enhanceio_fifo,enhanceio_rand,enhanceio_lru
scsi_mod         158249  4 sg,libata,sd_mod,enhanceio
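If you'd rather not reboot, it should also be possible to load the modules by hand. This is just a sketch, assuming make install placed the modules where depmod can find them:

depmod -a                # refresh module dependencies after make install
modprobe enhanceio       # core driver
modprobe enhanceio_fifo  # replacement policy modules
modprobe enhanceio_lru
modprobe enhanceio_rand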
Configuring
Now that we have it installed, let's test it. Let's enhance our existing LV on the system with our SSD device. Note, /dev/sde1 is my SSD disk. It's a plain partition and I let EnhanceIO do the rest.
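If your SSD doesn't have a partition yet, a single partition spanning the whole disk is enough. Something like this should do it, assuming /dev/sde is blank and a GPT label suits you:

parted -s /dev/sde mklabel gpt                 # create a new GPT partition table
parted -s /dev/sde mkpart primary 1MiB 100%    # one partition spanning the disk -> /dev/sde1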
Creating an SSD cache for my /dev/VG/BENCHMARK:
eio_cli create -d /dev/VG/BENCHMARK -s /dev/sde1 -p fifo -m wb -c benchmark_cache
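To check that the cache actually came up, you can list it and peek at its statistics. As far as I know the stats are exposed under /proc/enhanceio/<cache_name>/, though the field names may differ between versions:

eio_cli info                                # list configured caches
cat /proc/enhanceio/benchmark_cache/stats   # hit/miss counters and friends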
You can edit an existing cache's policy and mode like this:
eio_cli edit -c benchmark_cache -p fifo -m wt
Possible options are:
-p {rand,fifo,lru}   cache replacement policy
-m {wb,wt,ro}        cache mode

rand = random replacement, fifo = first in, first out, lru = least recently used (EnhanceIO replaces the block that has gone unused the longest)
wb = write-back (I really discourage using this, because it may lead to data loss), wt = write-through (the best cache mode if you want your data to be safe and still get a performance boost), ro = read-only (no writes go to the SSD, so only reads are faster)
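When you're done testing, the cache can be removed again. As far as I can tell, delete flushes dirty blocks first on write-back caches, but I'd switch to wt mode and verify before trusting that:

eio_cli delete -c benchmark_cache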
Benchmark
I used fio to benchmark EnhanceIO. To find out what block size to pass to fio, I simply used this command:
blockdev --getbsz /dev/sde1
I mounted my SSD-enhanced device at /mnt/benchmark and fired off fio with the command below:
/usr/bin/fio --direct=1 --size=9G --filesize=10G --blocksize=4K \
  --ioengine=libaio --rw=rw --rwmixread=100 --rwmixwrite=0 --iodepth=8 \
  --filename=/mnt/benchmark/test --name=90_Hit_4K_WarmUp \
  --write_iops_log=raw_iops.log --write_bw_log=raw_bw.log \
  --write_lat_log=raw_latency.log
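That particular job is a pure-read warm-up (rwmixread=100). If you want writes in the mix, a 70/30 read/write variant would look like this; the job name and log file names here are just illustrative:

/usr/bin/fio --direct=1 --size=9G --filesize=10G --blocksize=4K \
  --ioengine=libaio --rw=rw --rwmixread=70 --rwmixwrite=30 --iodepth=8 \
  --filename=/mnt/benchmark/test --name=90_Hit_4K_Mixed \
  --write_iops_log=mixed_iops.log --write_bw_log=mixed_bw.log \
  --write_lat_log=mixed_latency.log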
I generated some neat graphs for you to see the performance boost of my system:
[IOPS graph]
[BW graph]
My raw disk is a RAID10 of four very basic 5400 rpm SATA disks; that's where my BENCHMARK LV was created. The SSD is a low-cost Apple SSD, which showed up as a Samsung SSD on my system. I did several tests with EnhanceIO, switching modes and caching policies. I added raw SSD and raw RAID10 performance graphs so you can compare the results and performance gains. As you can see, EnhanceIO even outperformed the raw SSD. It wasn't exactly RAID10 + SSD performance, but still pretty good if you ask me.
Final note
Even though EnhanceIO seems like a great option to save money, letting just a few SSDs greatly increase your current disk performance, I must warn you about write-back mode. EnhanceIO is still in development and I wouldn't recommend using the write-back option. As you can see from the graphs, it doesn't give a significant performance boost anyway. I've read reports about EnhanceIO where some sysadmins lost their data after switching from write-back to another mode or after removing SSD caching from the LV completely. I did not have such issues, but you should use this mode with care. I would recommend enhancing only volumes that you have good backups of or whose data you can afford to lose. My suggestion: don't use it in a production environment until it's in a stable release of the Debian kernel.

This is a great post so thanks for taking the time to share the detail.
I'm curious. Does EnhanceIO allow you to use the same SSD to cache against multiple devices? Let's say I had multiple iSCSI LUNs and wanted to enable read caching for them with the same local SSD. Could I do that, or would I need to create multiple partitions on the SSD and apply one per LUN?
From EnhanceIO: EnhanceIO also supports creation of a cache for a device which contains partitions. With this feature it’s possible to create a cache without worrying about having to create several SSD partitions and many separate caches.
I would interpret this as: you can cache a whole device that has partitions, like the whole sda (which contains sda1, sda2, ...), but you cannot cache sda and sdb to the same physical cache. This means in your case you would have to create a partition for each of the LUNs... Here's an idea worth trying, I'm not sure, just brainstorming: you could create LVM on the SSD and then set the cache device to an LV residing on the SSD. I haven't tried it yet, so I'd suggest you test this first, but it should work... See the sketch below.
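Roughly what I have in mind, completely untested; the device names, VG name and sizes below are made up:

pvcreate /dev/sde                       # whole SSD as an LVM physical volume
vgcreate ssdcache /dev/sde
lvcreate -L 40G -n lun1_cache ssdcache  # one LV per LUN you want to cache
lvcreate -L 40G -n lun2_cache ssdcache
# then attach one LV per iSCSI LUN (here /dev/sdb and /dev/sdc), read-only mode:
eio_cli create -d /dev/sdb -s /dev/ssdcache/lun1_cache -p lru -m ro -c lun1_cache
eio_cli create -d /dev/sdc -s /dev/ssdcache/lun2_cache -p lru -m ro -c lun2_cache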
For a week now I've had my home and root partitions each set up with its own EnhanceIO cache partition on a 120 GB SSD, both set to read-only mode and the LRU policy. Once the cache has saturated, the hit ratio sits at about 90% and it gives a good speed-up (especially on scattered reads), yet poses no safety risk at all. I've now switched to the somewhat "riskier" write-through mode and am closely watching the impact.
It has to be noted that after an unclean shutdown, the cache almost completely clears, i.e. it takes an uncached read access to bring the block back into the cache. The hit ratio does recover quickly though.
We've been using it for some time in write-through mode and so far we haven't had any issues. In my experience, if you start heavy writing with write-back mode on and then try to switch the cache to another mode, say write-through or read-only, it can crash or freeze. In write-through mode we haven't seen such issues, no matter what we did.
Not compiling anymore
Proxmox (Debian) with kernel 4.4.15-1-pve and gcc 4.9
make -C /lib/modules/4.4.15-1-pve/build M=/home/install/1/EnhanceIO/Driver/enhanceio modules V=0
make[1]: Entering directory ‘/usr/src/linux-headers-4.4.15-1-pve’
arch/x86/Makefile:133: stack-protector enabled but compiler support broken
Makefile:676: Cannot use CONFIG_CC_STACKPROTECTOR_STRONG: -fstack-protector-strong not supported by compiler
CC [M] /home/install/1/EnhanceIO/Driver/enhanceio/eio_conf.o
gcc: internal compiler error: Segmentation fault (program cc1)
Please submit a full bug report,
with preprocessed source if appropriate.
See for instructions.
scripts/Makefile.build:258: recipe for target ‘/home/install/1/EnhanceIO/Driver/enhanceio/eio_conf.o’ failed
make[2]: *** [/home/install/1/EnhanceIO/Driver/enhanceio/eio_conf.o] Error 4
Makefile:1403: recipe for target ‘_module_/home/install/1/EnhanceIO/Driver/enhanceio’ failed
make[1]: *** [_module_/home/install/1/EnhanceIO/Driver/enhanceio] Error 2
make[1]: Leaving directory ‘/usr/src/linux-headers-4.4.15-1-pve’
Makefile:45: recipe for target ‘modules’ failed
make: *** [modules] Error 2
Unfortunately EnhanceIO's development kinda stopped a few years ago. I'd suggest you look at Facebook's flashcache instead. I liked EnhanceIO at the time of writing this article because it was so easy to set up and use.
I do agree that the original GitHub repo has halted. However, some forks have been adding support for newer kernels. ATM I would recommend the elmystico EnhanceIO fork (https://github.com/elmystico/EnhanceIO), which works at least up to kernel 4.7 (this is what I'm running now).
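The build steps are the same as with the original repo, just with the fork's URL:

git clone https://github.com/elmystico/EnhanceIO.git
cd EnhanceIO/Driver/enhanceio
make && make install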
Thanks for sharing the link. Hopefully with these forks, EnhanceIO won’t fade into oblivion…