Server-side flash caching offloads I/O requests from the application server before they travel over the network to the storage system, which enables the lowest possible latency from flash caching technology. Inserting flash cache into the server dispenses with all of the networking components and storage controller gates in the data path. Because the distance data travels is virtually nonexistent, reads and writes speed up considerably.
As with any flash cache product, the caching mechanism first identifies hot data and then copies or stores it to cache. Machine logic allows the flash cache to understand which data requires caching and which can be passed directly to the storage system.
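The hot-data identification described above can be sketched as a frequency-gated cache: a block is admitted only after it has been read enough times to count as "hot," while colder reads pass straight through to the storage system. This is a minimal illustration in Python, not any vendor's actual logic; the class name, the threshold, and the LRU eviction policy are all assumptions for the sketch.

```python
from collections import OrderedDict, defaultdict

class HotDataCache:
    """Hypothetical sketch of hot-data identification: a block is
    admitted to flash cache only after hot_threshold reads; colder
    reads are passed through to the backing storage system."""

    def __init__(self, capacity, hot_threshold=3):
        self.capacity = capacity
        self.hot_threshold = hot_threshold
        self.access_counts = defaultdict(int)   # block id -> read count
        self.cache = OrderedDict()              # block id -> data, in LRU order

    def read(self, block_id, backing_store):
        if block_id in self.cache:              # cache hit: fast flash path
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = backing_store[block_id]          # miss: fetch from storage
        self.access_counts[block_id] += 1
        if self.access_counts[block_id] >= self.hot_threshold:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[block_id] = data         # block is now considered hot
        return data
```

Real products replace the simple counter with more elaborate heuristics, but the shape is the same: cheap bookkeeping on every miss, and promotion to flash only once the data proves it is worth the space.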
Server-side caching differs from network- or storage-based caching in that it offloads I/O from storage systems by capturing it at the server level. Some server-side caching mechanisms add CPU overhead, such as specific drivers for PCIe cards. Even with that overhead, in some circumstances a server-side flash cache will be faster than sending data over the network to an HDD or hybrid array.
The Form Factors
Vendors implement server-side caching in three primary form factors: PCIe cards, DIMM modules and SSD drives. PCIe cards are the most common. (Note that PCIe cards are not limited to flash caching, but caching is an attractive use case for the format.)
- PCIe cards. PCIe cards are popular for flash caching because they leverage the performance of the PCIe bus over the server's storage interface. Capacities range from gigabytes to terabytes. EMC XtremCache is a PCIe flash cache card that caches reads and writes, and writes through to back-end storage. The SanDisk Fusion ioMemory line, from the Fusion-io acquisition, offers PCIe acceleration cards.
- Memory channel. Sometimes known as memory-channel storage, most of these flash cards plug into dual inline memory module (DIMM) slots. They add flash to system memory for very high performance and the lowest possible latency. Since these cards are less expensive than DRAM, they are a cost-effective way to add flash-based acceleration to the server. This type of product shows great promise for accelerating applications.
- Drive form-factor SSDs. This is the simplest way to add flash directly to a server. Most of these drives are, in fact, used to boot a server that has lost its connection to shared storage. But they can be used for flash caching as well. Viking Technology SATADIMM is an example, as is Intel Cache Acceleration Software deployed on SSDs.
Server-side flash cache also comes in file-level and block-level variations. Each type can use the three common form factors for server-side flash.
- File-level flash caching software works at the file level within the application or OS, on both physical and virtual servers. However, since hypervisors do not operate at the file level, file-level flash caching cannot run as a single instance serving multiple VMs; the software must run within each guest OS on individual VMs. Still, its high performance in file-intensive environments outweighs the potential complexity. Machine logic is not as crucial in this type of flash caching as it is in block-level devices, which need internal logic to identify hot data sets with little manual intervention from IT. With file-level caching, it is not particularly difficult for database administrators to identify the logs and indexes that would benefit from server-side flash caching.
- Block-level flash caching is deployed in the hypervisor instead of at the VM level, which lets the cache run as a single instance from the host serving multiple VMs. This advantage comes with its own set of issues, as flash cache capacity may be insufficient for multiple large VM working sets. Advanced machine logic should allow the cache to intelligently allocate capacity to priority blocks among multiple VMs and to move blocks in and out of cache to optimize cache speeds. Samsung AutoCache, for example, manages potential bottlenecks by transparently moving copies of data sets out of cache to server-attached SSDs.
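One simple way to picture the capacity-allocation problem above is a weighted split of the host's cache among its VMs. The sketch below is a hypothetical policy, not how any named product works; real caches allocate dynamically from observed access patterns rather than static administrator-assigned weights.

```python
def allocate_cache(total_blocks, vm_priorities):
    """Hypothetical policy: divide a host-level block cache among VMs
    in proportion to an administrator-assigned priority weight."""
    total_weight = sum(vm_priorities.values())
    shares = {vm: (total_blocks * w) // total_weight
              for vm, w in vm_priorities.items()}
    # Integer division may leave a few blocks unassigned;
    # hand the remainder to the highest-priority VM.
    leftover = total_blocks - sum(shares.values())
    top_vm = max(vm_priorities, key=vm_priorities.get)
    shares[top_vm] += leftover
    return shares

# With 100 cache blocks, a database VM weighted 3 and a web VM
# weighted 1, the database VM receives 75 blocks and the web VM 25.
```

The hard part in practice is not the arithmetic but re-evaluating the weights as working sets shift, which is where the "machine logic" the article describes earns its keep.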
Another challenge for block-level flash caching is VM migration. When a VM migrates to a different host, the flash cache receives a command to flush the now-invalidated cached data, and the cache must then be rebuilt for the VM in its new location. The trick is to avoid a performance hit while the cache rebuilds. This is not an issue for read-only or write-through flash caches, which already stream data back to shared storage. Nor does it greatly affect file-level caching, which is installed in specific VMs and can quickly reload the migrated VM's data. But large block-level caches that serve multiple VMs may take several days to rebuild, and in the meantime performance falls back to slower HDD or hybrid storage systems. The solution is server-side flash cache products that let VMs migrate their cache intelligence along with their data, enabling rebuilds that take minutes instead of days.
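The reason write-through caches shrug off a migration flush can be shown in a few lines: every write lands in shared storage synchronously, so clearing the cache discards nothing that is not already safe on the array. This is a minimal sketch under that assumption; the class and method names are illustrative, not a real product API.

```python
class WriteThroughCache:
    """Sketch of write-through behavior: every write goes to both the
    local flash cache and shared storage, so flushing the cache on VM
    migration loses no data."""

    def __init__(self, backing_store):
        self.backing = backing_store   # stands in for the shared array
        self.cache = {}                # stands in for local flash

    def write(self, block_id, data):
        self.cache[block_id] = data    # fast local copy
        self.backing[block_id] = data  # synchronously persisted to storage

    def read(self, block_id):
        # Serve from flash when possible, otherwise from shared storage.
        return self.cache.get(block_id, self.backing.get(block_id))

    def flush_on_migration(self):
        # Safe to discard: the backing store already has every write.
        self.cache.clear()
```

A write-back cache, by contrast, would hold dirty blocks that must be destaged before the flush, which is exactly why migration is harder for caches that defer writes.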
There are also products marketed as flash cache that aggregate virtual server clusters by providing multiple flash tiers to all VMs within the cluster. While quite useful in some situations, this is more of a tiering storage operation than it is server-side flash caching. Data must still physically travel over the network to access the shared flash storage pools built into the physical servers.
Making the Choice
Flash caching is very useful in some situations but is not showing the growth that many predicted a few years ago. With the advent of more and cheaper all-flash arrays (AFAs), server-side caching is not a necessity for boosting application performance in every environment. Flash caching may provide high IOPS and low latency more consistently than an AFA, so in some environments it may be used in parallel with an all-flash array. In general, however, server-side flash cache is more frequently seen in environments without high-performance all-flash storage.
Should you invest in server-side flash caching?
Given the right circumstances, yes. If you do not have an AFA, or if you do but it is not providing consistent performance, then server-side flash caching can greatly benefit your compute-intensive critical applications. Typical examples include transactional databases and data warehouses, analytics platforms such as Hadoop, and web serving. Even if you plan to invest in an AFA at a later date, there is no reason to give up server-side flash caching now if you have specific applications that need the higher performance.