SSDs require new array architectures

Posted on August 01, 2009


By Jeff Boles

Solid-state technologies have long been in play on the periphery of the storage market, where end users with specialized storage needs have demanded performance capabilities far beyond those of spinning disks. The hard disks that serve as the foundation for storage have failed to keep up with the growing capabilities of server buses, network fabrics, and CPUs. While the capacity of disks has exponentially improved, the performance has at best demonstrated a linear improvement. As such, users with unusual performance demands have been forced to seek out alternatives to traditional disk drives, and have often found that solutions built on solid-state technology—whether DRAM-based storage arrays or database caches—could be an answer to their problems.

More recently, solid-state technology in disk form factors, or solid-state disk (SSD) drives, has begun to enter the enterprise. Just as their performance demands are growing, enterprises are finding that SSDs make solid-state technology more accessible. Along with the convenience of a disk form factor, NAND-based SSD densities continue to increase, prices are dropping rapidly, and the list of suppliers keeps growing.

Moreover, for power-hungry and heat-plagued data centers, SSDs consume significantly less power and create less heat than spinning disks.

Unfortunately, discussions about SSDs often miss the big picture. While details such as MLC vs. SLC, CMOS process geometries, NAND chips, write penalties, read disturb errors, and more are important, those details distract from a more significant discussion about overall storage architectures.

SSDs promise simplicity, but the limitations of traditional storage architectures can stand in the way. While solid state has been the fodder for a new generation of innovation in many domains, including bus architectures, disk and memory caching techniques, B-tree algorithms, parallel file systems, and read/write techniques, it has to date generated very little change in the architecture of traditional enterprise storage arrays.

In turn, SSDs can end up being far from simply interchangeable with disk. Without due diligence on the part of vendors and customers, SSD implementations can remain mired in the struggles of past approaches, while carrying the consequences of premium costs and limited usefulness.

The challenges of building SSD-based storage systems become particularly obvious when assessing the most common limitations of solutions on the market today. Let's look at the specific limitations of typical solutions, and the associated challenges for storage system design:

  • Performance doesn't add up. Conventional storage systems often limit the potential performance of SSDs by introducing additional bottlenecks. This can occur when traditional arrays offer less per-device or aggregate performance than the underlying SSDs can deliver in isolation. The challenge: SSDs require extreme performance that often can't be delivered by traditional array controllers.
  • Limited numbers. Conventional storage systems often limit the number of physical SSDs that can be incorporated in a single array. This is typically due to controller architectures where caching software and hardware have not been designed to keep up with the low latency and high performance of SSDs. The challenge: Adding SSDs in unlimited numbers can exceed the capabilities of any storage controller.
  • Restricted in use. Many arrays promising SSD support today may allocate SSDs to entire volumes, with no ability to broadly share SSDs across many data sets, and little ability to easily migrate volumes to and from SSDs as performance demands change. Moreover, such arrays may limit the use of advanced storage features such as thin provisioning or snapshots with SSDs. Altogether, such restrictions encumber SSD use with the same challenges as external appliances: difficult data placement choices, difficult data management, and significantly different storage management. The challenge: The huge delta between SSD and traditional disk capacity and performance requires sophisticated integration before SSDs can be applied effectively and cost-effectively to dynamic demands.
  • Scaling breaks the array. Finally, conventional storage arrays with limited controller configurations can easily saturate either internal or external array bandwidth when SSDs are used. When bandwidth is constrained, contention between lower-performing rotational disks and SSD devices can drag the total performance of a storage array well below even the limits imposed by the internal controller architecture. The challenge: Not only must total controller performance support SSDs, but to break the barrier of limited or fixed SSD configurations, the total internal and external bandwidth of a solution must scale.

These are not atypical limitations; in fact, this list describes many of the SSD implementations in the market today. Most conventional array controllers were not designed with SSDs in mind. It is not unusual to encounter end users who have implemented SSD devices behind mid-range arrays that fall short of delivering the full potential of even a single SSD device. When such users implement SSDs with the intent of scaling performance to the next level, they find that SSD is not a panacea and that the traditional scale-up performance ceilings remain: harnessing more performance beyond the modest boost a few SSD devices can provide still necessitates the disruptive addition of more controller horsepower or an entire additional array. In such cases, SSD as a performance solution is just a disguise, and the problems of storage silos, limited throughput and IOPS, and management issues are still lurking beneath the covers.

Next generation architectures

SSDs in arrays will be set free from limitations when they are incorporated into arrays as part of an integrated whole that is designed with a new approach to scalability. With the challenges that have been identified in mind, and an awareness of the dynamic nature of varying enterprise workloads, the capabilities behind next-generation SSD array architectures become self-evident. Such systems require:

  • Controller performance. Disk arrays supporting SSDs must have tremendous excess performance capabilities or be expandable in controller performance in order to scale SSDs beyond relatively low numbers.
  • Purpose built for SSD. There are few solutions on the market today able to make effective use of SSD technology without significant redesign of controller software. SSD-based systems should incorporate features designed to deal with device limits such as write penalties, while still enabling use of the full range of management features in use with traditional disk. Designing controllers specifically for SSDs is key to unlocking the use of more cost-effective SSD media.
  • SSD in scalable numbers. Storage arrays should be able to utilize varying numbers of SSDs, without artificially imposed device limits or restrictions on the use of storage features.
  • Bandwidth to scale. Supporting flexible numbers of SSD devices poses a quandary for controller architecture: beyond controller performance, internal and external array bandwidth must also scale.
  • Wide provisioning of SSD devices. Finally, storage arrays should also incorporate technologies such as automated storage tiering based on I/O intelligence, or the understanding of I/O demands at a volume or sub-volume level. This allows data to be moved to and from SSDs as performance demands change, and increases how broadly and easily SSD can be applied across all of the volumes on an array (a simple sketch of such heat-based tiering follows this list).
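
As a rough illustration of that last point, the following Python sketch shows one way heat-based sub-volume tiering could work: I/O counts are tracked per fixed-size extent, and a periodic rebalance promotes the hottest extents to the SSD tier and demotes the rest. The extent size, SSD capacity, and class and method names are hypothetical, not a description of any particular vendor's implementation.

    # Minimal sketch of heat-based sub-volume tiering (illustrative only).
    # Extent size and SSD capacity are hypothetical placeholders.
    from collections import Counter

    EXTENT_MB = 256            # granularity at which data is tracked and moved
    SSD_CAPACITY_EXTENTS = 2   # extents that fit on the SSD tier (tiny for the demo)

    class TieringEngine:
        def __init__(self):
            self.io_heat = Counter()   # extent id -> I/O count in current window
            self.on_ssd = set()        # extents currently placed on SSD

        def record_io(self, extent_id, ios=1):
            """Called from the I/O path or from sampled statistics."""
            self.io_heat[extent_id] += ios

        def rebalance(self):
            """Run periodically: keep the hottest extents on SSD, demote the rest."""
            hottest = {eid for eid, _ in
                       self.io_heat.most_common(SSD_CAPACITY_EXTENTS)}
            for eid in hottest - self.on_ssd:
                print(f"promote extent {eid} ({EXTENT_MB} MB) to SSD")
            for eid in self.on_ssd - hottest:
                print(f"demote extent {eid} to rotational disk")
            self.on_ssd = hottest
            self.io_heat.clear()       # start a fresh measurement window

    engine = TieringEngine()
    for extent in (7, 7, 7, 42, 7, 99):   # extent 7 is the hot one
        engine.record_io(extent)
    engine.rebalance()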

These capabilities are cropping up most often in next-generation arrays that are changing the paradigm when it comes to scalability. Such solutions are shifting to scale-out or widely-clustered architectures that are granularly additive in controller performance and bandwidth. When focused on the task of SSD support, such architectures can easily unleash access to broad and practically unlimited numbers of SSDs. Compared to conventional array designs that limit performance, scale-out SSD architectures can better match an enterprise's needs no matter how they might change.
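
To make the contrast concrete, here is a hypothetical back-of-the-envelope model comparing a fixed dual-controller array with a scale-out design in which every added node contributes controller horsepower. All of the performance figures are assumptions chosen for illustration, not measurements of any product.

    # Hypothetical model: usable IOPS from a pool of SSDs behind (a) a fixed
    # dual-controller array and (b) a scale-out array; all figures are illustrative.
    SSD_IOPS = 20_000          # assumed random-read IOPS per SSD
    CONTROLLER_IOPS = 120_000  # assumed ceiling of a single controller pair
    NODE_IOPS = 120_000        # assumed contribution of each scale-out node

    def usable_iops_fixed(num_ssds):
        """A monolithic array caps out at its controller ceiling."""
        return min(num_ssds * SSD_IOPS, CONTROLLER_IOPS)

    def usable_iops_scale_out(num_ssds, nodes):
        """A scale-out array adds controller performance with every node."""
        return min(num_ssds * SSD_IOPS, nodes * NODE_IOPS)

    for ssds in (2, 6, 12, 24):
        nodes = max(1, ssds // 4)          # assume four SSDs per node
        print(f"{ssds:2d} SSDs: fixed={usable_iops_fixed(ssds):>9,} IOPS, "
              f"scale-out({nodes} nodes)={usable_iops_scale_out(ssds, nodes):>9,} IOPS")

Under these assumed figures, the fixed array plateaus once the controller pair saturates, while the scale-out design keeps pace as SSDs are added.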

SSD is often seen as a performance-only technology. However, SSDs also have the potential to drastically change the capital and operational costs of storage, and making SSD technology scalable is key to realizing the cost advantages of this fundamental change in how storage is architected.

Total storage system costs

Today, storage systems are often configured and purchased with the number of spindles determined by performance requirements rather than capacity needs. In turn, organizations buy more storage than they need and end up with wasted capacity on each system.
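
A rough worked example makes the point. The IOPS, capacity, and working-set figures below are assumptions chosen to be plausible, not data from any specific array.

    # Rough illustration of how performance, not capacity, drives spindle counts.
    # All figures (IOPS, capacities, working-set share) are assumptions.
    import math

    target_iops = 50_000
    dataset_tb = 10
    hot_share = 0.10                 # assume ~10% of the data serves most of the I/O

    hdd_iops, hdd_tb = 180, 0.45     # assumed 15K RPM drive: ~180 IOPS, 450 GB
    ssd_iops, ssd_tb = 20_000, 0.2   # assumed enterprise SSD: ~20,000 IOPS, 200 GB

    # Performance-driven, disk-only sizing: IOPS dictates the drive count.
    hdds_for_perf = math.ceil(target_iops / hdd_iops)
    stranded_tb = hdds_for_perf * hdd_tb - dataset_tb

    # Tiered sizing: SSDs absorb the hot working set, capacity disk holds the rest.
    ssds = max(math.ceil(target_iops / ssd_iops),
               math.ceil(dataset_tb * hot_share / ssd_tb))
    hdds_for_capacity = math.ceil(dataset_tb * (1 - hot_share) / hdd_tb)

    print(f"Disk-only: {hdds_for_perf} drives, {stranded_tb:.1f} TB of stranded capacity")
    print(f"Tiered:    {ssds} SSDs plus {hdds_for_capacity} capacity-oriented drives")

Under these assumptions, the disk-only configuration buys more than ten times the capacity the application needs simply to reach its IOPS target.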

The I/O intensity of enterprise workloads falls along a curve, requiring precise tuning of storage to deliver performance at optimal cost. With conventional arrays, performance cannot be precisely matched, which results in either too much performance at excessive cost or a performance impact on applications. Scale-out SSD architectures can keep performance in lockstep with demands.

With SSDs, organizations can more closely match disk array performance to their performance needs and avoid purchasing unneeded storage capacity. When the costs of multiple storage arrays and software licensing are assessed, SSD in a single array can often yield a lower cost of acquisition than over-provisioning underutilized disk spindles for the sake of performance. But where SSDs behind traditional architectures can yield cost savings, they will do so with limited effect. With the right scalable SSD architecture, the cost of SSD investments can be leveraged many times over by adding SSD devices at a much smaller incremental cost. Over time, scalable SSD architectures may avoid several more expensive future array purchases that would be necessary with traditional arrays of more limited performance.

Storage operational costs

Of equal impact, SSDs can optimize the operational cost of storage by reducing the number of arrays and associated software that must be maintained, reducing the amount of floor space consumed, and avoiding duplicative array hardware that must be powered and cooled.

Over time, there is little doubt that a total cost of ownership case can be made for SSDs by examining the sum of these costs in detail. With scalable SSD, these benefits are leveraged to greater effect by reducing the need for the multiple array purchases that traditional arrays with more limited performance capabilities may require over time. Moreover, scalable SSD architectures also avoid the significant operational costs of migrating data or dividing workloads when a traditional array runs out of total performance.

Scalability

Architecture makes for enormous differences between storage solutions, and SSDs call attention to this like never before. Our recommendation: Do not buy an array thinking that you can address performance needs with the addition of some expensive SSDs. Look holistically at the entire system architecture, and make sure that the solution has the flexibility to fully integrate SSDs with your storage practices, while scaling well beyond your current requirements.

The ultimate success of SSD technology in the enterprise depends on the architecture of the array. While flash technology will undoubtedly find widespread adoption in many types of devices, the SSD device itself will not succeed to its full potential in the enterprise without attention to how it is integrated with existing storage arrays.

Moreover, do not fool yourself into thinking that block storage falls into a couple of different buckets (SSD, Fibre Channel, SATA, etc.) and that the availability of flash means arrays can be configured for any need. In reality, classic array architectures may already keep users from reaching the full potential of their storage technology. Scalable arrays will leverage SSDs to greater effect by applying SSDs to performance problems across a wide pool of storage volumes. Meanwhile, scalability can unlock the door to more cost-effective SSD use that will magnify the cost benefits of SSD technology.

While many vendors are charging at the market with SSD solutions, we see a fundamental architecture shift taking hold in solutions from, for example, Dell's EqualLogic PS6000 series arrays, Pillar's Axiom, EMC's V-Max, Atrato's SSD solutions, and from heterogeneous virtualization vendors such as DataCore, FalconStor and IBM. As more vendors come to market and can demonstrate that their solutions can leverage cost-effective media to flexibly scale their total single system performance, this list will grow.  


JEFF BOLES is a senior analyst and director of validation services with the Taneja Group research and consulting firm, www.tanejagroup.com

MLC moving toward the enterprise

By Dave Simpson

According to conventional wisdom, enterprise-class SSDs require single-level cell (SLC) technology, as opposed to the lower-cost multi-level cell (MLC) technology – both of which fall under the NAND flash memory category. That's because, generally speaking, SLC has performance, reliability and endurance (longevity) advantages. However, SLC media is much more expensive than MLC media. (In terms of raw media, the cost difference is about 4x.) As such, a variety of vendors are working on ways to combine the advantages of SLC with the low-cost advantages of MLC, in many cases using MLC media with advanced software and/or hardware.

Scott Shadley, senior product manager at SSD specialist STEC, argues that the primary enhancements will come at the controller level (firmware and algorithms), and he identifies the following as the key areas that will require advancements.

  •  Existing ECC algorithms in SSD devices need to be improved, because ECC requirements for MLC are much more stringent than for SLC.
  • Vendors need to mitigate the wear issues with MLC, because, per the data sheets, there is about a 10x difference in the number of program/erase cycles the two technologies can withstand. Many SSD vendors utilize wear-leveling techniques to mitigate this problem, but "the goal is to limit, or minimize, writes in general, which wear leveling does not address," says Shadley (see the write-coalescing sketch after this list).

"Vendors have to come up with new ways for the controller to handle, or manipulate, incoming data so that the SSD is only writing a minimal amount of times, either by controller caching, external caching, and/or algorithms that manipulate the data to minimize the amount of data stored on the media," he explains.

  • Shadley also notes that, because MLC is slower than SLC, enterprise-class SSDs based on MLC technology will require faster processors, more flash channels, better ways of accessing the flash on those channels, and the ability to take advantage of the new NAND interfaces. "The goal is to minimize the performance and cost differences between MLC and SLC," he says.
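
As a rough illustration of the write-minimization idea Shadley describes, the sketch below coalesces overlapping host writes in a controller-style cache so the flash media is programmed far less often than the hosts write to it. The page size, flush threshold, and class names are hypothetical; real controllers layer this on top of wear leveling, mapping tables, and power-loss protection.

    # Minimal sketch of write coalescing in a controller cache (illustrative only).
    PAGE_BYTES = 8192        # assumed flash page size
    FLUSH_PAGES = 64         # flush once this many distinct pages are dirty

    class CoalescingCache:
        def __init__(self):
            self.dirty = {}          # page number -> latest data for that page
            self.host_writes = 0     # writes received from hosts
            self.media_writes = 0    # program operations actually sent to flash

        def write(self, offset, data):
            """Absorb a host write; repeat writes to a page simply overwrite in cache."""
            self.host_writes += 1
            self.dirty[offset // PAGE_BYTES] = data
            if len(self.dirty) >= FLUSH_PAGES:
                self.flush()

        def flush(self):
            """Program each dirty page once, no matter how often it was rewritten."""
            self.media_writes += len(self.dirty)
            self.dirty.clear()

    cache = CoalescingCache()
    for i in range(10_000):                  # a hot, overlapping write pattern
        cache.write((i % 256) * 512, b"x")
    cache.flush()
    print(cache.host_writes, "host writes ->", cache.media_writes, "media programs")

With this overlapping pattern, roughly 10,000 host writes collapse into a handful of media programs, which is the kind of reduction that helps MLC's lower endurance go further.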
