Diablo Takes DIMM View of Server-Side Flash

Posted on July 30, 2013 By Pedro Hernandez


To date, PCIe solid state drives (SSDs) have been the closest, most direct path that flash storage vendors use to quickly shuttle data to server processors. Today, Ottawa, Canada-based Diablo Technologies has paved its own shortcut to faster enterprise application performance.

Flash add-on cards deliver fast performance by leveraging the PCIe bus, circumventing a server's comparatively slower storage subsystems. Diablo's new Memory Channel Storage (MCS) technology gets even closer to the CPU by leapfrogging PCIe and slotting flash chips directly into DIMM sockets.

The result, according to Kevin Wagner, vice president of marketing for Diablo, is flash storage that "acts and behaves more like DRAM than SSDs."

"We put this on the memory channel, right there with the system memory," Wagner told InfoStor. Because the modules sport the "exact same dimensions as a standard DIMM," installing Diablo's MCS is a plug-and-play affair, apart from loading new drivers. A standard DRAM DIMM is required to make the setup work; however, "every other slot can be filled with our modules," he said.

The performance gains are dramatic. The company estimates that by configuring MCS as a block storage target, latencies are slashed by 85 percent compared to PCIe SSDs. The gains are even more pronounced compared to SATA and SAS SSDs (96 percent reduction).

In short, business critical workloads like online transaction processing, virtual infrastructures and cloud applications can attain DRAM-like levels of speed and responsiveness. Wagner reported that some early customers are using his company's technology as a cache for PCIe caching cards.

Allowing that "PCIe is a very good, very fast generalized bus," Wagner noted that the best PCIe SSDs have latencies in the 25 to 28 microsecond range. MCS, in contrast, cuts that to 3 to 5 microseconds. MCS is able to deliver such performance gains due, in part, to the inherent nature of memory buses.
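The quoted latency figures line up with the 85 percent reduction claimed earlier. A quick sanity check on the midpoints of the two ranges (a back-of-the-envelope calculation, not vendor-supplied math):

```python
# Midpoints of the latency ranges quoted in the article.
pcie_us = (25 + 28) / 2  # best PCIe SSDs: 25-28 microseconds
mcs_us = (3 + 5) / 2     # MCS: 3-5 microseconds

# Fractional latency reduction of MCS relative to PCIe SSDs.
reduction = 1 - mcs_us / pcie_us
print(f"Latency reduction vs. PCIe SSDs: {reduction:.0%}")  # → roughly 85%
```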

"Because the memory controllers run in parallel, everything is completely parallelized," said Wagner. MCS uses this to its advantage to pump more data to a server processor, faster.

"The arrival of MCS finally allows applications to leverage the benefits of flash memory connected directly to the processor's memory controllers, which will ultimately change the cost/density/performance rules forever," said Diablo CEO Riccardo Badalone in company remarks.

Moreover, MCS fits into practically every sort of server and storage system, regardless of form factor. Since it adheres to the industry-standard DDR3 memory specification, MCS fits seamlessly into blade servers, storage controllers and non-standard designs where PCIe slots are in short supply.

Diablo has another trick up its sleeve, one that can potentially revolutionize Big Data processing.

Alternately, MCS can "be configured to expand system memory from gigabytes to terabytes, dramatically improving the performance of large in-memory applications," announced the company. Servers not only get up to a 100x increase in accessible system memory; the capability also allows entire application data sets to occupy the CPU memory space. MCS can scale up to 12.8 TB per system.
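Taken together, the 100x figure and the 12.8 TB ceiling imply a baseline of roughly 128 GB of DRAM per server. This baseline is an inference from the quoted numbers (using decimal units, as vendors typically do), not a figure the company stated:

```python
# Figures quoted by Diablo: up to 12.8 TB of MCS per system,
# representing up to a 100x increase over accessible system memory.
max_mcs_tb = 12.8
expansion_factor = 100

# Implied DRAM baseline, using decimal units (1 TB = 1000 GB).
baseline_gb = max_mcs_tb * 1000 / expansion_factor
print(f"Implied DRAM baseline: {baseline_gb:.0f} GB")  # → 128 GB
```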

"This disruptive capability makes MCS uniquely well suited for memcached, big data analytics, and other large in-memory applications," said the company.

Pedro Hernandez is a contributing editor at InfoStor and InternetNews.com. Follow him on Twitter @ecoINSITE.

