SSDs are bursting onto the scene in many forms with a variety of features, I/O interfaces, speeds, capacities, and internal architectures. This article explores how users can evaluate the various types of SSDs, and explains the role of the Storage Networking Industry Association’s (SNIA’s) Solid State Storage Initiative (SSSI) in helping to sort through today’s chaos of choices.

Over the past year, SSDs have transitioned dramatically from a curiosity embraced by only the most adventuresome IT managers to a mainstream technology advocated by leading storage and server OEMs.

Why this sudden interest? It’s simple: SSDs are extremely fast, and now that NAND flash has undergone significant price declines, SSDs are much more reasonably priced than they were in earlier years.

Unfortunately, it’s not practical simply to replace a hard disk drive (HDD) with an SSD. Several issues must be considered, mainly because NAND flash has some quirks. These quirks allow NAND to be manufactured at a very low cost, but the downside is that NAND is tricky to use:

  • NAND writes are far slower than reads
  • Bits can wear out from over-use
  • Data must be erased before being overwritten
  • Reads and writes must be performed in sequence


All of these issues are managed by the SSD’s internal controller. Even so, the controller cannot disguise 100% of the NAND chips’ eccentricities, and system managers who want to get the greatest return from an SSD-based storage architecture will need to address the anomalies that remain visible from outside the SSD.
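
To make that division of labor concrete, here is a minimal sketch (in Python, with invented names and geometry) of the bookkeeping a controller’s flash translation layer does on the host’s behalf: it never overwrites a page in place, it remaps logical addresses to freshly erased pages, and it counts erases so wear can be tracked. This is an illustration of the idea, not any vendor’s design.

```python
# Illustrative only: a toy flash translation layer (FTL) showing the kind of
# bookkeeping an SSD controller performs so the host never has to.
class ToyFTL:
    PAGES_PER_BLOCK = 64  # assumed geometry, purely illustrative

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks          # per-block wear tracking
        self.free_pages = [(b, p) for b in range(num_blocks)
                           for p in range(self.PAGES_PER_BLOCK)]
        self.map = {}    # logical page -> physical (block, page)
        self.data = {}   # physical (block, page) -> stored payload

    def write(self, logical_page, payload):
        # NAND cannot overwrite in place: the new data must land on an
        # already-erased page, and the old copy simply becomes stale.
        if not self.free_pages:
            self._garbage_collect()
        phys = self.free_pages.pop(0)
        self.data[phys] = payload
        self.map[logical_page] = phys

    def read(self, logical_page):
        return self.data[self.map[logical_page]]

    def _garbage_collect(self):
        # A real controller would copy still-valid pages out of a victim block,
        # erase it (adding to its erase count, i.e. wear), and return its pages
        # to the free pool. Elided here for brevity.
        raise NotImplementedError("garbage collection elided in this sketch")
```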

Many flavors to choose from

Today, a dozen or more companies manufacture SSDs aimed at enterprise servers and storage systems. Since memory arrays are built from a large number of low-capacity chips, it is not difficult to make SSDs massively parallel internally (see Figure 1), and this allows them to support bandwidth that far exceeds the capabilities of standard HDD interfaces. Today, there are PATA and SATA SSDs aimed at users who want to upgrade systems with the least effort, but users more interested in speed can purchase drives with Fibre Channel interfaces. In the future, SAS drives will become available.


Figure 1: Potential data paths within an SSD

As a general rule, these disk-interface SSDs are designed to fit within the mechanical outline of an HDD, so they are very easy to add to a system in place of HDDs.

But even Fibre Channel and SAS interfaces are too slow to take full advantage of SSDs’ performance potential, so other vendors produce SSDs that communicate over one or more PCI Express lanes at very high speeds. These products are not designed to resemble the mechanical outline of an HDD; they may take the form of a rack-mount box or an internally mounted card that plugs into the server’s motherboard.
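
A rough back-of-the-envelope comparison makes the point. The channel count and per-channel speed below are assumptions chosen only for illustration; the interface figures are approximate per-direction ceilings.

```python
# Illustrative numbers only: why a parallel flash array can saturate a disk
# interface, and why PCI Express attachment is attractive.
channels = 8                   # independent flash channels inside the SSD (assumed)
mb_per_s_per_channel = 80      # sustained MB/s per channel (assumed)

internal_bw = channels * mb_per_s_per_channel    # 640 MB/s aggregate

interface_bw = {               # rough per-direction ceilings, MB/s
    "SATA 3Gb/s": 300,
    "4Gb Fibre Channel": 400,
    "SAS 6Gb/s": 600,
    "PCIe x4 (first generation)": 1000,
}

for name, bw in interface_bw.items():
    verdict = "bottleneck" if bw < internal_bw else "headroom"
    print(f"{name}: {bw} MB/s -> {verdict} vs {internal_bw} MB/s internal")
```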

As I/O type and form factor are being weighed, prospective users will also be confronted with decisions about the type of memory and the SSD architecture that will give the best performance for the money. Should the SSD be DRAM-based, NAND-based, or a combination of both? Where is the best place in the system to add this new storage element: in the server, in NAS, in a SAN, or in more than one of these? Will the addition of solid state storage allow other elements of the system to be scaled back?

The need for standards

How do prospective SSD users choose a device to meet their needs? The answer is anything but simple. Not only is it important to understand the load that the system will put on the drive, but managers must also consider which drives will respond best to the load, and where to incorporate SSDs into the system to get the greatest payback.

As with many architectural improvements, IT managers can only make the best decision when armed with an understanding of what activities are most likely and where the system bottlenecks are. But while measuring the workload is within the control of IT managers, SSD specifications are not.
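
As a sketch of what that workload measurement might look like, the function below reduces an I/O trace to the figures that matter most when shopping for an SSD: read/write mix, average transfer size, and how random the accesses are. The trace format is hypothetical; output from any block-level tracing tool could be massaged into it.

```python
# Hypothetical trace format: (op, lba, size_kb), e.g. ("read", 123456, 4)
def summarize(trace):
    reads = [t for t in trace if t[0] == "read"]
    writes = [t for t in trace if t[0] == "write"]
    total = len(trace)

    # Call an op "sequential" if it starts where the previous op ended
    # (assuming 512-byte sectors); everything else counts as random.
    sequential = 0
    prev_end = None
    for op, lba, size_kb in trace:
        if prev_end is not None and lba == prev_end:
            sequential += 1
        prev_end = lba + (size_kb * 1024) // 512

    return {
        "read_pct": 100 * len(reads) / total,
        "write_pct": 100 * len(writes) / total,
        "avg_xfer_kb": sum(t[2] for t in trace) / total,
        "random_pct": 100 * (total - sequential) / total,
    }
```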

Unfortunately, there are few standards that currently give an unbiased view of SSD operation. SSD vendors are likely to tout their most impressive specifications while playing down other specs that are less flattering. For a number of SSD vendors, this includes specifying read performance without saying much about write speeds. Other vendors will specify sequential read or write speeds but not random access speeds, even though most storage traffic consists of small random operations.

Some SSD manufacturers have stepped up to the plate to promote standards to help with these decisions. SanDisk, for example, is championing a specification called Virtual RPM, a formula that attempts to express an SSD’s speed in terms of an HDD of known rotational speed.
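
SanDisk’s actual formula is its own, so the calculation below is only a toy illustration of the concept: it asks what rotational speed an idealized HDD (limited purely by average rotational latency, with seek time ignored) would need in order to match an SSD’s measured random IOPS. Every number is an assumption.

```python
# Not SanDisk's Virtual RPM formula -- a toy illustration of the idea only.
# For an idealized HDD limited purely by average rotational latency
# (half a revolution per random access), IOPS ~= RPM / 30.
def toy_virtual_rpm(measured_random_iops):
    return measured_random_iops * 30

print(toy_virtual_rpm(10_000))  # a 10,000-IOPS SSD ~ a 300,000 "virtual RPM" disk
```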

Some other vendors are keen on using IOPS, but this measurement is subject to debate, since IOPS vary by the size of the transfers, and read IOPS are different from write IOPS. Even worse, some of the NAND quirks mentioned above can cause an SSD’s IOPS to vary over time, depending on the workload. Clearly, standards for performance measurement would be helpful. The Storage Networking Industry Association (SNIA) is working to put such benchmarks together in its Solid State Storage Initiative (SSSI).
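
The arithmetic below shows one reason a bare IOPS figure is ambiguous: the same number implies very different throughput at different transfer sizes (the figures are illustrative).

```python
# The same headline IOPS figure means very different throughput at
# different transfer sizes (all numbers illustrative).
iops = 20_000
for block_kb in (0.5, 4, 64, 128):
    mb_per_s = iops * block_kb / 1024
    print(f"{iops} IOPS at {block_kb:>5} KB = {mb_per_s:7.1f} MB/s")
```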

Meanwhile, JEDEC is addressing interface issues; for example, the fact that many SSDs support similar commands that are invoked by the system in a number of incompatible ways. This group is working on a standard interface that includes new SSD-specific commands to perform a number of useful functions, such as erasing the drive should it fall into the wrong hands or reporting wear back to the host. Another new command has been proposed to allow the drive to perform internal garbage collection and data scrubbing, tasks that help it sustain performance over time, even under heavy workloads.
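
None of these commands were standardized at the time of writing, so the sketch below is purely hypothetical: it only restates, in code form, the host-visible operations described above. The class and method names are invented.

```python
# Hypothetical host-side view of the SSD-specific commands described above;
# names and semantics are invented for illustration, not taken from JEDEC.
class SSDManagementInterface:
    def secure_erase(self):
        """Destroy all user data, e.g. if the drive falls into the wrong hands."""

    def report_wear(self):
        """Return an estimate of remaining program/erase endurance to the host."""

    def maintenance_hint(self):
        """Invite the drive to run internal garbage collection and data
        scrubbing so performance holds up under sustained workloads."""
```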

SNIA is also working on an initiative to model the total cost of ownership of an SSD in a system. As mentioned earlier, some users have realized significant hardware savings through the use of SSDs, which supports the argument that total costs for a system with an SSD are likely to be lower than for many systems without one, even though an SSD is invariably more expensive than the HDD it might replace.
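
The shape of that calculation can be shown with a few lines of arithmetic. Every price and performance figure below is an assumption picked for illustration, not a quote or a benchmark; the point is only that matching a random-IOPS target with HDDs may require many spindles.

```python
import math

# All figures are illustrative assumptions, not quotes or benchmarks.
target_iops = 20_000

hdd_iops, hdd_cost = 180, 250        # one 15K RPM enterprise HDD (assumed)
ssd_iops, ssd_cost = 25_000, 2_500   # one enterprise SSD (assumed)

hdds_needed = math.ceil(target_iops / hdd_iops)   # ignores RAID/controller overhead
ssds_needed = math.ceil(target_iops / ssd_iops)

print(f"HDD array: {hdds_needed} drives, ${hdds_needed * hdd_cost}")
print(f"SSD:       {ssds_needed} drive(s), ${ssds_needed * ssd_cost}")
# With these assumptions, the SSD wins on hardware cost alone, before
# counting power, cooling, and rack space.
```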

Data center managers can often realize significant performance improvements at a modest price, or sometimes even at a cost savings, by tapping into SSDs. However, this step must be taken with careful judgment about where the greatest performance gain can be had for the least cost.

Although the field is wide open today, with few standards in place, SNIA, JEDEC and other organizations are working on initiatives that will help simplify the decision to use solid state storage in the near future.

Jim Handy is a SNIA SSSI member, and director of the Objective Analysis research and consulting firm (www.objective-analysis.com).
