Something of a religious war has erupted in the enterprise solid-state drive (SSD) space: where do SSDs belong, in the storage array or in the server as direct-attached storage?

SSDs have triggered a lot of excitement in the enterprise. The key reason is speed. Although the price per gigabyte for SSDs is prohibitive in comparison to hard disk drives (HDDs), there are certain cases in which SSDs save money over their HDD counterparts. This happens in applications that use large numbers of HDDs at a fraction of their capacity simply to increase the storage system’s total I/Os per second (IOPS). In many such cases, a single SSD can provide more speed than a bank of enterprise HDDs, at adequate capacity and a reasonable price.

Still, enterprise SSDs are expensive, ranging from thousands to hundreds of thousands of dollars each. This discourages IT managers and OEMs from peppering SSDs throughout their data centers, and forces them to consider where a very limited number of SSDs would best be attached: within servers or inside storage arrays.

Analysts at Objective Analysis have examined this issue through discussions with OEMs and data center managers. Our findings are the basis for this article.

Background

Let’s first take a look at the technology and examine the reasons why SSDs have suddenly gained popularity.

The storage hierarchy of computing systems is shown in the conceptual chart of Figure 1. This illustration gives a rough idea of where the different elements of the storage hierarchy fit from the perspectives of bandwidth and cost per gigabyte. We use a log-log chart format to reveal data that would be hidden if either performance or cost were plotted on a linear scale. The three orbs labeled L1, L2, and L3 represent three possible layers of cache in or around the processor.

There is a very large gap between DRAM performance and HDD performance, a chasm that has been begging to be filled for a number of years. Although the enterprise HDD sits at the top end of the HDD oval in Figure 1, it provides a relatively costly means of approaching the bottom end of the DRAM orb. Flash-based SSDs have emerged as a cost-effective means of filling this gap.

Since NAND’s price per gigabyte has fallen below that of DRAM in recent years, computer designers have been finding interesting ways to tap into this technology to improve the performance of computers while lowering costs. Flash SSDs are one means of reaching that goal. NAND flash is slower than DRAM, but faster than an HDD. NAND is cheaper than DRAM, but more costly than an HDD. This fits the technology into the performance gap that lies between capacity HDDs and DRAM.

Flash-based SSDs pose a significant threat to enterprise HDDs, and many OEMs and IT managers expect future systems to be built using SSDs for speed coupled with low-price, high-capacity HDDs for mass storage, skipping the enterprise HDDs that might otherwise be used between low-price HDDs and DRAM.

Recently, a handful of flash SSD makers have introduced devices that satisfy server OEMs’ needs at an acceptable price. Their current offerings are pricey (upwards of $3,000 each) and are being used to replace costly arrays of short-stroked enterprise HDDs. (Short stroking is a technique in which programmers take pains to minimize the HDD’s head motion, and thus the access time, by using only a few adjacent tracks on the HDD and ignoring the rest of the disk.)

While a short-stroked drive accesses only a fraction of the available disk space, the data is read off the disk at a significantly higher speed than normal. A disk delivering tens of IOPS can be coaxed into providing a few hundred IOPS using this approach. In some cases, users find this a worthwhile trade-off. One short-stroked system on the market today uses 53TB of HDDs to provide only 9TB of usable space.
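To make the arithmetic concrete, the short Python sketch below works through a short-stroked configuration. The per-drive IOPS figure and the 300GB drive size are illustrative assumptions; only the 9TB-of-53TB utilization ratio comes from the system cited above.

```python
# Back-of-the-envelope arithmetic for a short-stroked HDD array.
# Per-drive figures are illustrative assumptions, not measured values;
# the usable fraction is the 9TB-of-53TB ratio cited in the text.

RAW_TB_PER_HDD = 0.3          # assumed 300GB enterprise HDD
IOPS_SHORT_STROKED = 400      # assumed few-hundred IOPS per short-stroked drive
USABLE_FRACTION = 9 / 53      # usable/raw ratio from the example above

def short_stroked_array(n_drives: int):
    """Return (usable TB, total IOPS) for an array of n short-stroked drives."""
    usable_tb = n_drives * RAW_TB_PER_HDD * USABLE_FRACTION
    total_iops = n_drives * IOPS_SHORT_STROKED
    return usable_tb, total_iops

usable, iops = short_stroked_array(176)     # ~53TB raw
print(f"176 drives: {usable:.1f}TB usable, {iops:,} IOPS")
# -> 176 drives: 9.0TB usable, 70,400 IOPS
# Most of the purchased capacity is sacrificed to buy seek time.
```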

EMC recently compared its SSD-enhanced Symmetrix arrays with a standard HDD-only high-performance storage array configuration. The standard configuration was built using 244 300GB, 15,000rpm Fibre Channel HDDs. The SSD-based system had only 136 of these 300GB Fibre Channel HDDs, but augmented them with 32 1TB standard SATA drives for capacity and eight 73GB SSDs to store the most speed-sensitive data.

The SSD-enhanced array delivered 60% more IOPS using 26% fewer drives. The system required 21% less power than its HDD-only counterpart and cost 17% less.

Most of today’s enterprise flash SSDs boast tens of thousands of IOPS, or roughly 100 times that of a short-stroked HDD. Often the higher bandwidth of the SSD, in tandem with the very small capacity actually used in a short-stroked HDD, will provide an opportunity for an SSD to replace a bank of HDDs. As long as the SSD’s capacity is as great as that used in the short-stroked HDDs, and as long as the SSD’s bandwidth matches that of the HDD array at a competitive price, the SSD may provide a more cost-effective alternative to an array of such HDDs.
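That replacement criterion boils down to three comparisons, sketched below in Python. All of the example figures are hypothetical; the point is simply that the SSD is sized against the array’s used capacity, not its raw capacity.

```python
def ssd_can_replace(used_tb: float, array_iops: int, array_cost: float,
                    ssd_tb: float, ssd_iops: int, ssd_cost: float) -> bool:
    """Apply the three conditions above: the SSD must cover the capacity
    actually used by the short-stroked array, match its IOPS, and cost less."""
    return ssd_tb >= used_tb and ssd_iops >= array_iops and ssd_cost <= array_cost

# Hypothetical example: one 146GB, 20,000-IOPS SSD at $3,000 versus ten
# short-stroked HDDs exposing 100GB and ~4,000 IOPS for $5,000 total.
print(ssd_can_replace(used_tb=0.100, array_iops=4_000, array_cost=5_000,
                      ssd_tb=0.146, ssd_iops=20_000, ssd_cost=3_000))
# -> True
```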

DRAM SSDs

Some companies have been building DRAM SSDs for decades. Texas Memory Systems, the stalwart in this space, introduced its first SRAM SSD in 1976, moving to DRAM shortly afterwards. Solid Data is another player in this field, and there are even small modular SSDs aimed at the high-performance PC gaming community. Other vendors that entered this space very early included EMC and Dataram. More recent entries include Violin Memory, a company that offers a unique approach to integrating a DRAM layer.

DRAM SSDs offer extremely high performance, but have two disadvantages. First, and most important, by using DRAM they end up costing more per gigabyte than simply increasing the DRAM main memory of the server. Why, then, would a data center use these devices? Because these drives are designed to add more DRAM than the server’s hardware and software could otherwise support.

Second, DRAM SSDs are volatile, so a system must be implemented to back up the DRAM in the event of a power failure. Although older systems backed up the DRAM by moving the data to HDDs under battery power (using motorcycle batteries), modern DRAM SSDs back up the DRAM to NAND over an abundance of parallel paths unavailable with the HDD approach. This provides very fast backup and restore with significant power savings, reducing the battery capacity needed to perform this important function.

DRAM SSDs will always play a key role in the enterprise, but they are not a part of the phenomenon that today has been triggered by NAND’s drop to sub-DRAM pricing.

SSDs in the data center

Data centers are typically built with servers and storage in separate cabinets (see Figure 2). Shared storage can solve a multitude of problems, especially when loads are shifted from server to server, since equal data access is given to all servers. On the other hand, the more storage that is installed directly into the server, the lower the number of storage requests that must traverse the network.

Both sides of the network can benefit from faster storage. Storage arrays have come to use “tiered” storage, in which arrays are built from storage devices of differing speed and cost. In Figure 1, the two leftmost orbs, those representing tape and HDD, would be classified into Tiers 1, 2, and 3: tape for Tier 3, capacity HDDs for Tier 2, and enterprise HDDs for Tier 1. The storage array keeps the most-often-requested or “hot” data in Tier 1 and the seldom-used or “cold” data in Tier 3. Software within the array manages the data between these tiers, allocating faster storage to hot data and relegating colder data to a lower tier.

With the advent of SSDs in storage arrays a new tier, dubbed Tier 0, has found its way into these appliances. The beauty of an SSD is that it already fits into existing management schemes and brings a new level of performance to the storage array.
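The sketch below shows, in highly simplified form, how a tiering engine might place data across Tiers 0 through 3. The access-count thresholds and tier labels are illustrative assumptions only; production arrays use far richer heuristics (recency, sequentiality, migration cost).

```python
# Simplified hot/cold data placement across four storage tiers.
# Thresholds and tier names are illustrative assumptions only.

TIERS = ["Tier 0 (SSD)", "Tier 1 (enterprise HDD)",
         "Tier 2 (capacity HDD)", "Tier 3 (tape)"]

def place(access_count: int) -> str:
    """Map a data block's access frequency to a storage tier."""
    if access_count >= 1000:   # hot data
        return TIERS[0]
    if access_count >= 100:
        return TIERS[1]
    if access_count >= 1:
        return TIERS[2]
    return TIERS[3]            # cold data

for hits in (5000, 500, 50, 0):
    print(f"{hits:>5} accesses -> {place(hits)}")
```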

EMC was the first to broadly support SSDs in storage arrays with the announcement of Tier 0 storage in its Symmetrix systems early last year. These systems use the very high-speed ZeusIOPS Fibre Channel SSDs from STEC.

A few months later, IBM introduced an upgrade to its DS8000 storage system with products offering as many as one million IOPS based on a PCIe SSD, the ioDrive, from Fusion-io.

Dell has named its storage tiers “pools,” with Pool 1 consisting of SSDs, Pool 2 of enterprise HDDs, and so on. Other companies have offered similar products, some even before EMC’s announcement, but their work was not as highly promoted.

One difficulty with accelerating a storage array is that every data access from the server to the array still suffers network latency, and in some cases this delay is intolerable. When it is, data center managers typically take one of two approaches: they add DRAM caches to replicate data from the storage array, or they install additional servers, each of which stores and operates upon a subset of the data. Both approaches are costly.

By adding an SSD to the server, OEMs and their customers have found that they can cut their DRAM requirements and sometimes even reduce the number of servers they use. This not only provides the benefits of minimized floor space and power/cooling cost savings, but it often triggers a hidden benefit of lower software licensing costs, since software licenses are frequently tied to processor count.

Sun Microsystems was an early adopter of SSDs in its servers, working with Intel SSDs, and later introduced an open-standard SATA module that could be plugged into either servers or dedicated boards of solid-state storage. These modules are based on the JEDEC standard SO-DIMM form factor used for DRAM in notebook PCs. Sun also announced upgrades to its ZFS file system to manage hot and cold data automatically without the intervention of administrators.

Over the course of the past year, most other server OEMs have introduced SSD options based upon standard SATA SSDs, and some have gone a step further by adding devices such as Fusion-io’s ioDrive, which requires some reconfiguration of the system. SATA SSDs remain the most common choice, although their bandwidth is limited by the SATA interface.

Either or both?

Objective Analysis does not favor one topology over the other. Direct-attached SSD storage does a very good job of reducing network traffic, but gets in the way of data sharing. SSDs in the storage array cut latency by a significant margin, but data transfers are still burdened with network overhead. Which is “best” depends largely on the type of workload.

Over the long term, as SSDs become commonplace (which we anticipate within as little as three years), many data centers will adopt a hybrid approach, adding SSDs to both storage arrays and servers. The two will combine to reduce network traffic and to improve response time, resulting in a faster system that uses less costly hardware in a smaller space that consumes less power.

A quote attributed to Caltech’s Carver Mead fits this situation: the bandwidth required of a channel is inversely proportional to the intelligence at either end. By increasing the intelligence at either end of the network through the application of SSDs, many data center managers will find that their challenges move away from network latency and toward other parts of the computing bottleneck.

 
