Flash Caching Appliances

Posted on June 02, 2016 By Christine Taylor


Virtualization and compute-intensive applications like Hadoop, VDI, databases, Exchange, and test/dev heavily impact storage. Flash caching is a popular solution to accelerate performance, but IT has to weigh costs and complexity when deciding among flash cache technologies. There are three major types of flash caching that are located in different places along the data path:

  1. Server-side flash caching works on a physical server. Traditionally, server-side flash only shares resources with the VMs hosted on that server. Its sub-millisecond latency is a good fit for environments where an all-flash array (AFA) is overkill for general storage but a few critical servers need a performance boost. Server-side caching hasn’t taken off the way some expected, but it’s a solid point solution for important applications. The majority of products in this category are PCIe flash accelerators, such as SanDisk Fusion-ioMemory.
  2. Storage-based flash caching runs from the storage system. Hybrid and all-flash arrays usually present a flash cache, and some vendors make flash cache cards that IT can insert into existing arrays. Leading flash array vendors with flash caching include the usual suspects: EMC, IBM, and Pure. NetApp makes flash arrays and also provides PCIe flash cache cards for its FAS and V-series arrays.
  3. Network-based flash caching appliances sit in the data path between servers and storage. They are usually built from NAND SSDs rather than server-side PCIe cards, and they serve multiple hosts and storage systems. Common targets include midmarket companies, MSPs, and enterprise data centers that own flash arrays but want higher performance without a technology refresh. Cloudistics Turbine, for example, provides write and read flash caching between virtualized servers and a SAN, and Avere Edge Filers provide flash caching services for file data.

A Closer Look at Flash Cache Appliances

Let’s look at flash cache appliances in more detail. A flash cache appliance caches copies of the most frequently accessed data sets. Most appliances use policy-driven machine logic that learns over time which data sets are requested most frequently and stores them on flash drives for read caching, which eliminates repeated disk access requests on the back-end storage system.
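To make the read-caching idea concrete, here is a minimal sketch of a frequency-based read cache in Python. It is illustrative only: real appliances use far more sophisticated, adaptive policies, and the ReadCache class, its parameters, and its thresholds are hypothetical.

```python
from collections import Counter

class ReadCache:
    """Hypothetical frequency-based read cache. Blocks that are read often
    enough are promoted into flash; everything else is served from the
    back-end storage system."""

    def __init__(self, backend, capacity_blocks, promote_after=3):
        self.backend = backend              # back-end storage, dict-like: block id -> data
        self.capacity = capacity_blocks     # how many blocks fit in flash
        self.promote_after = promote_after  # reads before a block counts as "hot"
        self.hits = Counter()               # per-block access frequency
        self.cache = {}                     # block id -> data held in flash

    def read(self, block_id):
        self.hits[block_id] += 1
        if block_id in self.cache:
            return self.cache[block_id]     # cache hit: no back-end disk access
        data = self.backend[block_id]       # cache miss: go to back-end storage
        if self.hits[block_id] >= self.promote_after:
            self._promote(block_id, data)   # hot block: keep a copy in flash
        return data

    def _promote(self, block_id, data):
        if len(self.cache) >= self.capacity:
            # Evict the least frequently used cached block to make room.
            coldest = min(self.cache, key=lambda b: self.hits[b])
            del self.cache[coldest]
        self.cache[block_id] = data
```

The sketches that follow extend this class to illustrate write-through and write-back behavior.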

Writes are not as straightforward, and may be handled as write-through or write-back. Write-through writes a copy of incoming data to cache while simultaneously streaming the data to the storage device. The flash cache confirms the write to the application only once the data is written to storage. A write-through appliance’s purpose is not to store data in place of the storage system, but to pre-populate the cache with hot data. So even if the appliance fails, the stored copies of the data are unaffected, which means write-through appliances do not require their own data protection. This method does little or nothing to accelerate writes, but it does accelerate subsequent reads.
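As a rough illustration of write-through behavior, this hedged sketch builds on the hypothetical ReadCache above. The key point is that the acknowledgement goes back to the application only after the back-end write completes.

```python
class WriteThroughCache(ReadCache):
    """Hypothetical write-through variant: every write is streamed to the
    back-end storage system before it is acknowledged, and a copy is used
    to warm the read cache."""

    def write(self, block_id, data):
        self.backend[block_id] = data       # data lands on back-end storage first
        self._promote(block_id, data)       # pre-populate the cache with the new data
        return "ack"                        # acknowledged only after storage holds the data
```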

Write-back flash cache appliances accelerate writes as well as reads. Working from its caching logic, the appliance writes non-priority data through to the storage system and writes hot data to its own drives, issuing a write confirmation to the application as soon as the data lands in the cache. The cache then optimizes the hot data and periodically flushes it to the storage system for permanent retention. Intensive applications benefit from writing directly to the much faster and more efficient flash cache. Write-back caching should ideally include some form of data protection to guard against data loss should the appliance fail before it has flushed written data back to storage. Many appliances do this with mirrored caches and, where DRAM is used, battery power to ride out a power loss.
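A corresponding write-back sketch, again hypothetical and built on the ReadCache above, shows hot writes acknowledged from flash and destaged later; cache mirroring and capacity handling are omitted for brevity.

```python
class WriteBackCache(ReadCache):
    """Hypothetical write-back variant: hot writes are acknowledged as soon
    as they land in flash and are destaged to back-end storage later."""

    def __init__(self, backend, capacity_blocks, promote_after=3):
        super().__init__(backend, capacity_blocks, promote_after)
        self.dirty = set()                  # blocks in flash not yet flushed to storage

    def write(self, block_id, data):
        self.hits[block_id] += 1
        if self.hits[block_id] >= self.promote_after:   # "hot" by this crude metric
            self.cache[block_id] = data
            self.dirty.add(block_id)        # acknowledged from flash; storage updated later
        else:
            self.backend[block_id] = data   # non-priority data is written through
        return "ack"

    def flush(self):
        """Periodically destage dirty blocks back to the storage system."""
        for block_id in list(self.dirty):
            self.backend[block_id] = self.cache[block_id]
            self.dirty.discard(block_id)
```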

Assuming sufficient capacity on the appliance, the flash cache can pin full data sets. This reduces the amount of data moving through to the storage controllers, which increases the effective capacity of the back-end storage system. Some caching appliances scale out, letting flash cache capacity grow over time. The appliance also reduces frequent data movement within the storage array: instead of data sets moving in and out of the array’s own flash cache, the appliance takes over cache processing. Finally, implementation should be simple and OS-agnostic. Even when an appliance writes and stores data sets, there is no need to migrate stored data to the flash cache. The pinning sketch below illustrates the idea.
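To show what pinning might look like, here is one more hypothetical extension of the ReadCache sketch: a pin() call loads a data set into flash and exempts it from eviction. The class and method names are illustrative, not any vendor's API.

```python
class PinnableCache(ReadCache):
    """Hypothetical cache that lets an administrator pin whole data sets so
    they are always served from flash and never evicted."""

    def __init__(self, backend, capacity_blocks, promote_after=3):
        super().__init__(backend, capacity_blocks, promote_after)
        self.pinned = set()

    def pin(self, block_ids):
        """Load the named blocks into flash and mark them non-evictable."""
        for block_id in block_ids:
            self.pinned.add(block_id)
            self.cache.setdefault(block_id, self.backend[block_id])

    def _promote(self, block_id, data):
        if len(self.cache) >= self.capacity:
            evictable = [b for b in self.cache if b not in self.pinned]
            if not evictable:
                return                      # everything resident is pinned; skip promotion
            coldest = min(evictable, key=lambda b: self.hits[b])
            del self.cache[coldest]
        self.cache[block_id] = data
```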

Choosing a Flash Cache Model

There is no particular reason you can’t deploy all three flash cache approaches, depending on your network and application needs. Some intensive applications do well with server-side flash caching. If you already have a hybrid or all-flash array, you probably have flash caching enabled on the storage side, and network-based flash caching can extend flash benefits to other servers and storage systems. However, since few companies are sitting around looking for ways to burn money, most environments will have to pick and choose among flash caching implementations.

Another scenario is that you plan on replacing your hybrid storage system with an all-flash array, but your existing system doesn’t reach end of life until next year. You have invested in server-side flash for your critical Oracle financials, but general purpose storage is also suffering performance hits. In this case, investing in a flash cache appliance on the network will accelerate storage system performance for multiple applications and shared storage.

Flash Cache Appliance Benefits

Flash cache appliance technology isn’t perfect – what is? Data must be accessed frequently enough for the appliance to recognize it as hot, and the working set cannot be larger than the cache capacity.
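As a rough, hypothetical illustration of that sizing constraint: if a 10 TB database has a hot working set of roughly 800 GB, a 1 TB flash cache appliance can hold the entire working set and serve most reads from flash; if the working set grows to 2 TB, blocks start getting evicted before they are re-read, and the hit rate, and with it the acceleration, drops off.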

Even so, all of its benefits boil down to high ROI. These appliances support multiple servers and storage, are simple to manage, and provide investment protection for storage arrays. This provides very good ROI justification for investing in flash, whether for initial SSD implementation, for extending the life of a storage system, or for cost-effectively accelerating compute-intensive applications.


