I'd like to take exception to comments in Bob Hansen's article, "LANs, MANs, SANs, and WANs to converge at 10Gbps" (see InfoStor, January 2002, p. 32). The article says that 10Gbps InfiniBand will be the last to market; in fact, this is not true. The 10Gbps Ethernet and Fibre Channel specifications, as well as the iSCSI specification, have not even been released yet. Only InfiniBand has an approved 10Gbps specification, released in October 2000 by the steering committee of Compaq, Dell, Hewlett-Packard, IBM, Microsoft, and Sun.
The first 10Gbps InfiniBand silicon shipped in January 2001. A variety of silicon companies are shipping, and have demonstrated interoperability of, 10Gbps channel adapters and switches. Not only is 10Gbps silicon shipping today but, unlike Ethernet, InfiniBand implements transport support natively in its silicon and does not require the additional expense of Transport Offload Engines. Also, the InfiniBand demonstrations have run over 10Gbps copper connections, which do not require expensive 10Gbps transceivers. Fiber optics is the only medium currently being defined by the 10Gbps Ethernet specification.
At least two vendors have announced adapter cards based on InfiniBand silicon. Your readers should expect a number of OEM announcements, beginning mid-year, that will lead to the deployment of cost-effective 10Gbps InfiniBand solutions.
Director of Product Marketing
ATA stacks up
As a storage system engineer and consultant, I have been following hard-disk-drive trends for several decades. I am particularly interested in the application of ATA drives to enterprise storage systems. In response to your article, "ATA puts the squeeze on SCSI" (see InfoStor, Jan. 2002, p. 1), I would like to make a few comments about ATA's reliability.
The subject of drive reliability can be divided into architectural (design) and process (manufacturing). Many storage engineers used to argue that the architectural reliability of ATA drives was inferior to that of SCSI and Fibre Channel drives. That's no longer the case. ATA drives now include CRC error detection, which meets the reliability needs of enterprise applications.
Also, I would argue that the economics of survival has made the process reliability of ATA drives at least equal to enterprise class drives. About 10 times more ATA disk drives are manufactured today than SCSI and Fibre Channel drives combined. At these levels, ATA drive manufacturers are forced to meet very high process reliability requirements or else face extensive penalties for returned drives.
Lastly, differences in MTBF for these drive classes can largely be attributed to the testing environment. Enterprise drives are typically tested in 24x7 environments; ATA drive tests, meanwhile, emphasize start-stop testing and thermal-cycling requirements. Experience shows that under the same test conditions the MTBF numbers would converge.
As for the difference in unrecoverable error rates between the two drive classes (one error per 10^14 bits read for ATA drives versus one per 10^15 bits for typical Fibre Channel or SCSI drives), I would argue that both systems provide very high reliability but that an ATA-based system costs much less.
To understand the storage-system implications of these differences in unrecoverable error rates, let's compare two 100-drive RAID systems. In the most unfavorable case (continuous reading), an ATA-based system would need to perform a RAID parity stripe correction every 1.5 hours, while the enterprise-drive-based system would need a correction every 6.8 hours or so. In both cases, the RAID system would deliver excellent performance and reliability.
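Figures of roughly this magnitude can be reproduced from the error-rate specifications alone, given an assumed per-drive sequential read rate. The sketch below uses illustrative transfer rates for circa-2002 drives (about 25 MB/s for ATA and 50 MB/s for enterprise drives); these rates are assumptions, not figures from the letter.

```python
# Estimate the mean time between unrecoverable read errors (each
# forcing a RAID parity stripe correction) for a 100-drive array
# under continuous reading. Transfer rates are assumed values for
# circa-2002 drives, chosen only to illustrate the arithmetic.

DRIVES = 100

def hours_between_corrections(bits_per_error, mb_per_sec_per_drive):
    """Mean hours between unrecoverable errors across the whole array."""
    total_bits_per_sec = DRIVES * mb_per_sec_per_drive * 1e6 * 8
    return bits_per_error / total_bits_per_sec / 3600

# ATA: one unrecoverable error per 10^14 bits read (assumed ~25 MB/s)
ata = hours_between_corrections(1e14, 25)
# Enterprise: one per 10^15 bits read (assumed ~50 MB/s)
fc = hours_between_corrections(1e15, 50)

print(f"ATA array:        one correction every {ata:.1f} hours")
print(f"Enterprise array: one correction every {fc:.1f} hours")
```

With these assumed rates, the model gives roughly 1.4 hours for the ATA array and 6.9 hours for the enterprise array, close to the 1.5- and 6.8-hour figures above; the exact values depend on the per-drive transfer rate assumed.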
Joel N. Harrison
Storage Consultant and co-founder of Quantum Corp.