There’s a good bit of debate about whether enterprise-class solid-state drives (SSDs) can use multi-level cell (MLC) flash technology, as opposed to single-level cell (SLC) technology, which is currently the dominant technology for enterprise SSDs. Objective Analysis, a market research and consulting firm, not only believes that it will happen; we anticipate that nearly all enterprise SSD vendors will eventually convert to MLC flash, possibly in the relatively near future.

Consider the following points:

  • MLC prices are likely to be considerably lower than those of SLC flash
  • Many enterprise SSDs don’t need the level of endurance that SLC offers
  • Even though MLC is slower than SLC, the SSD’s controller can compensate for, or mitigate, the difference
  • SSD controllers are improving at an exponential rate

Let’s look at each of these points in detail.

Prices

In late 2008, SLC NAND prices rose to 4.5 times those of MLC flash (see the dotted line in Figure 1 on page 10). Since SSD makers had been expecting SLC prices to remain about twice those of MLC flash, they were naturally quite alarmed at this development, and began investigating whether they could convert to MLC flash. Surprisingly, this didn’t pose as much of a problem as expected, so the first MLC-based enterprise SSDs were introduced. (We’ll explore how vendors were able to do this later in the article.)

More importantly, why would such price disparities occur? After all, the die area of an SLC chip is only about twice that of its MLC counterpart.

The price disparity came about because there are very few applications that use SLC flash, so many NAND manufacturers have simply stopped making it. Manufacturing costs are tied to the sheer unit volume of any single chip that a company produces, so manufacturers are eager to decrease the variety of chips they produce. With few companies producing SLC NAND, and with cutbacks in SLC output in favor of MLC, a shortage developed and competitive bidding drove prices up.

Endurance

SSD controllers already do a lot to hide NAND’s endurance problems from the system. For example, DRAM is used in most SSD drives for write coalescing — the process of collecting writes from several different parts of a block and presenting them to the block in a single action, rather than as a sequence of smaller writes. While this step alone can cut writes by an order of magnitude, the DRAM can also be used as a write cache to trap multiple writes to the same location within a short time (cache designers call this “temporal locality”), thus reducing actual writes to the NAND by another one or two orders of magnitude.
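The write-coalescing step can be illustrated with a minimal sketch. The class below is hypothetical, not any vendor’s design: it simply buffers page writes in memory (standing in for the SSD’s DRAM), lets repeated writes to the same page overwrite each other, and counts how many block-level operations actually reach the NAND when the buffer is flushed.

```python
# Hypothetical sketch of write coalescing. Block/page sizes and the
# class design are illustrative only, not any real controller's.

BLOCK_SIZE = 4      # pages per block (tiny, for illustration)

class WriteCoalescer:
    def __init__(self):
        self.buffer = {}          # block_id -> {page_index: data}
        self.block_writes = 0     # writes that actually reach the NAND

    def write(self, block_id, page_index, data):
        # Collect writes in DRAM; later writes to the same page
        # simply overwrite earlier ones (temporal locality).
        self.buffer.setdefault(block_id, {})[page_index] = data

    def flush(self):
        # Present each block's accumulated pages in a single action.
        for block_id, pages in self.buffer.items():
            self.block_writes += 1   # one program cycle per block
        self.buffer.clear()

wc = WriteCoalescer()
for i in range(100):                 # 100 small host writes...
    wc.write(block_id=0, page_index=i % BLOCK_SIZE, data=i)
wc.flush()
print(wc.block_writes)               # ...become 1 physical block write
```

Here 100 host writes that all land in one block collapse into a single NAND operation, which is the orders-of-magnitude reduction described above.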

Add to this other techniques such as wear-leveling and over-provisioning (having more NAND in the SSD than the system is allowed to see) and the number of writes that actually make it through to any particular block becomes extremely small.
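Wear leveling and over-provisioning can be sketched just as simply. In this toy model (the block counts and the least-worn-first policy are illustrative assumptions, not a real controller algorithm), the host sees only 10 logical blocks while the drive owns 12, and every incoming write is steered to the least-worn physical block.

```python
# Illustrative sketch of wear leveling plus over-provisioning: the
# controller exposes fewer blocks than it owns and steers each write
# to the least-worn physical block. All numbers are made up.

import heapq

PHYSICAL_BLOCKS = 12     # blocks actually present in the drive
LOGICAL_BLOCKS = 10      # blocks the host is allowed to see

# min-heap of (erase_count, physical_block_id)
pool = [(0, b) for b in range(PHYSICAL_BLOCKS)]
heapq.heapify(pool)

def write_logical_block():
    wear, block = heapq.heappop(pool)    # pick the least-worn block
    heapq.heappush(pool, (wear + 1, block))
    return block

for _ in range(1200):                    # 1,200 host writes
    write_logical_block()

counts = sorted(w for w, _ in pool)
print(counts[0], counts[-1])             # wear is spread evenly
```

After 1,200 writes, every physical block has been cycled exactly 100 times; no single block absorbs the full write load, and the two spare blocks share the wear along with the visible ones.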

This is particularly interesting since software performs a considerable number of write cycles that users are unaware of. Most users think that the only disk writes occur when the user saves a file, but in reality the operating system is constantly writing small pages of various housekeeping data to the disk.

At the 2009 Flash Memory Summit, Xiotech shared test results showing that a simple boot sequence for Windows 7 caused 1,001,000 disk I/Os, one quarter of which were writes. Since many users boot their PCs once a day, these numbers would lead to the conclusion that an SSD in a PC would be very likely to fail in a relatively short time. However, since these writes are reduced and hidden by write coalescing, caching, wear leveling, and over-provisioning, the actual wear on the SSD is significantly lower than the high number of disk I/Os would indicate.

How much are these writes reduced? Micron Technology shared with us some informal measurements of its 128GB SSDs in a client environment. In the case of Micron’s architecture, each block of flash is only written to about 30 to 40 times per year. (We hope that Micron will perform more rigorous tests and post the results.)
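Some rough arithmetic shows why that figure matters. Assuming a typical MLC endurance rating of about 3,000 program/erase cycles (an assumed industry-typical number, not a Micron specification), even the upper end of the measured write rate leaves decades of headroom:

```python
# Back-of-the-envelope lifetime estimate from the figures above.
# The MLC endurance rating is an assumed typical value, not a
# measured or vendor-quoted specification.

mlc_endurance_cycles = 3_000        # assumed typical MLC P/E rating
writes_per_block_per_year = 40      # upper end of Micron's informal figure

years_to_wear_out = mlc_endurance_cycles / writes_per_block_per_year
print(years_to_wear_out)            # 75.0 years before any block wears out
```

Even if the endurance assumption is off by an order of magnitude, the drive would still outlast its deployment life, which is the crux of the endurance argument for MLC.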

Speed

Many SSD vendors tap into an inherent benefit of SSD architectures — the large number of NAND chips in an SSD allows designers to harness numerous parallel data paths to increase internal bandwidth. Why perform I/O on a single chip when you can be reading, writing, and erasing multiple chips at the same time? For example, Intel’s SSDs use ten internal channels, while SandForce’s SSD controller uses 16 and Fusion-io’s SSDs use 25 channels.
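The benefit of multiple channels can be sketched with a simple round-robin striping model. The 10-channel figure echoes the Intel example above; the scheduling policy itself is an illustrative assumption, since real controllers use more sophisticated queuing.

```python
# Toy sketch of multi-channel striping: consecutive pages go to
# different channels so their transfers can overlap. The round-robin
# policy is illustrative; real controllers schedule more cleverly.

def stripe(pages, channels):
    """Assign each page to a channel, round-robin."""
    lanes = [[] for _ in range(channels)]
    for i, page in enumerate(pages):
        lanes[i % channels].append(page)
    return lanes

pages = list(range(40))                  # 40 page transfers
lanes = stripe(pages, channels=10)       # e.g. a 10-channel design

# With 10 channels active at once, 40 page transfers take roughly
# the time of 4 sequential ones (ignoring bus/controller overhead).
print(max(len(lane) for lane in lanes))  # 4 transfers per channel
```

With all ten channels busy, the 40 transfers complete in about the time of four, which is why internal bandwidth so easily outruns the external I/O channel.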

The limiting factor for most designs is cost: each additional I/O channel adds pins to the controller, and more pins make the controller more expensive. A large channel count also raises the drive’s minimum capacity, since an SSD built around a many-channel controller needs more NAND chips than one whose controller supports fewer channels.

With all these channels, the SSD’s internal data paths are usually much faster than the external I/O channel can support. When it comes to the SLC vs. MLC debate, internal reads with either type of flash will be much faster than the I/O channel, so the only place where speed may differ noticeably is in writes. As mentioned before, very few writes actually make it through to the NAND chips, and it is only this dramatically reduced number of writes that will perform more slowly in MLC flash than in SLC flash. The net result is that an MLC-based SSD will run more slowly than an SLC SSD, but the difference may not be enough to noticeably detract from the system’s overall performance.

Controllers

Microcontrollers and ASICs follow Moore’s Law. The amount of computing horsepower that can be purchased within a certain dollar budget increases dramatically every year. This means that SSD controller designers can constantly improve their product for a fixed price point — a controller that was prohibitively expensive a couple of years ago can now be made at a very reasonable price.

As the controller’s horsepower increases, the quality of flash used in an SSD can decrease. Error correction can move from 2 bits to 4 bits, and on up through 8 bits and beyond. More sophisticated algorithms can be adopted for wear leveling, garbage collection, and write management. Most importantly, lower-quality flash can substitute for higher-quality flash as the controller’s improved performance gives it the ability to compensate for any resulting errors.
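A simple binomial model shows why stronger error correction permits lower-quality flash. If a sector of n bits has a raw bit error rate p and the controller can correct up to t bit errors, the sector is lost only when more than t bits flip. The sector size and error rate below are illustrative assumptions, not figures from the article:

```python
# Sketch of why stronger ECC tolerates lower-quality flash: the
# probability of an uncorrectable sector error drops sharply as the
# number of correctable bits rises. Sector size and raw bit error
# rate are illustrative assumptions (binomial error model).

from math import comb

def p_uncorrectable(n_bits, raw_ber, t):
    """Probability that a sector suffers more than t bit errors."""
    ok = sum(comb(n_bits, k) * raw_ber**k * (1 - raw_ber)**(n_bits - k)
             for k in range(t + 1))
    return 1 - ok

n = 512 * 8                      # one 512-byte sector
ber = 1e-4                       # assumed raw bit error rate

for t in (2, 4, 8):              # 2-bit, 4-bit, 8-bit correction
    print(t, p_uncorrectable(n, ber, t))
```

Each doubling of correction strength cuts the uncorrectable-error probability by several orders of magnitude, so a controller with stronger ECC can absorb the higher raw error rates of cheaper flash.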

Under this scenario, it is only natural that SSD vendors can and will migrate from SLC NAND to MLC NAND, with an eventual transition to 3-bit and 4-bit NAND technologies over the longer term, even though these technologies are today viewed as “iffy at best.”

Will enterprise SSD vendors convert from SLC to MLC? Some already have. STEC and Fusion-io, for example, have for several months been shipping MLC versions of their enterprise SSDs.

Bear in mind that through the end of 2007 nearly everyone in the SSD business agreed that MLC could not be used in any SSDs, whether client or enterprise. Then in 2008, the first MLC-based client SSDs were introduced, and by the end of the year nearly all client SSDs used MLC.

Similarly, 2009 began with widespread sentiment that it was impossible to use MLC in an enterprise SSD environment, but by the year’s end some enterprise SSD suppliers were already shipping SSDs based on MLC flash. It is very likely that enterprise SSDs will head down the same path as client SSDs.

Objective Analysis expects a large percentage of enterprise SSDs to be MLC-based by the end of 2010. In fact, it’s possible that 50% of all enterprise SSDs could ship with MLC NAND flash by the end of the year. Within a few more years, nearly all enterprise SSDs could be based on MLC NAND flash.
