What Happens to Enterprise PCIe SSDs?

Posted on February 29, 2012 By Henry Newman


Most enterprise applications today require shared storage, given that the applications must fail over to another system in case of server failure. I know some technologies allow a PCIe slot to be extended to another chassis, so in theory you could place a second PCIe SSD in another server, mirror the two SSDs, and keep that other server sitting there as a cold standby.
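
As a minimal sketch of what that mirroring means at the host, consider the following; the device paths are hypothetical, and a real deployment would use a software RAID layer such as Linux md rather than hand-rolled writes.

```python
import os

# Hypothetical device paths: one SSD in this server, one in the
# standby server reached over the extended PCIe slot.
LOCAL_SSD = "/dev/pcie_ssd0"
REMOTE_SSD = "/dev/pcie_ssd1"

def mirrored_write(offset: int, data: bytes) -> None:
    """Write the same block to both SSDs before acknowledging.

    Once both copies are durable, the standby server can be brought
    up against an identical image if this server dies.
    """
    for path in (LOCAL_SSD, REMOTE_SSD):
        fd = os.open(path, os.O_WRONLY)
        try:
            os.pwrite(fd, data, offset)
            os.fsync(fd)  # force the block to media before returning
        finally:
            os.close(fd)

# Example: mirror a 4 KB block at offset 0.
# mirrored_write(0, b"\x00" * 4096)
```

The synchronous second write is also where the cost hides: every acknowledged write now pays for two devices, one of them at the far end of an extended PCIe link.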

You could also mirror the storage to disk and then reload the SSD on another server, but this works only if you are doing mostly reads and few writes, given the latency of writing to the shared storage. Many applications that run locally but are not mission-critical can simply be restarted, but how many sites need SSDs for single-server, non-mission-critical applications?

I know the Wall Street trading community needs and can afford PCIe SSDs, as can the movie industry. Some environments use them in ingest servers for Hadoop systems, but from what I have seen, shared storage still dominates the storage being sold. Some have discussed moving the PCIe SSDs from the server to PCIe slots on RAID controllers so the storage can be shared. That solves the sharing problem, but it creates another: access performance and latency. Fibre Channel runs at best at 16 Gb/sec, or at most about 1,500 MB/sec; maybe the RAID controller supports InfiniBand, and FDR is 56 Gb/sec, or at most about 6.5 GB/sec -- still far slower than a PCIe 3.0 slot, and you have added latency by going through an external channel.
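
For a rough sense of scale, the arithmetic behind those numbers looks like this. The signaling rates and line-code efficiencies (64b/66b for 16G Fibre Channel and FDR InfiniBand, 128b/130b for PCIe 3.0) are published figures, but framing and protocol overhead are ignored here, so delivered throughput sits somewhat below these upper bounds.

```python
# Back-of-the-envelope throughput comparison. Signaling rates and
# line-code efficiencies only; framing/protocol overhead is ignored.
links = {
    # name: (signaling rate in Gb/s, line-code efficiency)
    "16G Fibre Channel":   (14.025, 64 / 66),    # 16GFC signals at 14.025 Gbaud
    "FDR InfiniBand (4x)": (56.0,   64 / 66),
    "PCIe 3.0 x8 slot":    (64.0,   128 / 130),  # 8 GT/s per lane x 8 lanes
}

for name, (gbps, eff) in links.items():
    mb_per_s = gbps * eff * 1000 / 8  # Gb/s -> MB/s
    print(f"{name:20s} ~{mb_per_s:6.0f} MB/s")
```

Even a modest x8 PCIe 3.0 slot out-runs an FDR InfiniBand link, and that comparison does not yet count the latency of crossing an external fabric.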

I am just not sure how big the market is for these devices.

