iSCSI vs. FCoE, part deux

May 14, 2010 – It’s an age-old debate – iSCSI vs. Fibre Channel – but in light of all the hubbub around Fibre Channel over Ethernet (FCoE), the debate is heating up again.

If you follow the FCoE news, you’d think that converged networks based on FCoE are an IT inevitability. Eventually (it’s going to be a very slow adoption curve) that may be true, but I think it will be more in the Fortune 1000 space – where the benefits of FCoE will be most apparent – rather than in the SMB space, where low cost is still king.

FCoE provides the ability to run storage traffic over Ethernet (although you have to upgrade to 10GbE and Converged Enhanced Ethernet, or Data Center Bridging), but iSCSI also enables storage traffic over Ethernet – at a much lower cost. And, as with FCoE, you can preserve existing investments.

Running storage traffic over Ethernet is a no-brainer, but it doesn’t require FCoE. Cost-conscious SMBs may find iSCSI more palatable. And if part of your convergence and cost cutting revolves around virtualization, iSCSI (or NAS) may make even more sense. Plus, it’s a simpler and faster route to a converged network.

Of course, the old argument against iSCSI was, in part, related to performance. But that was when the argument centered on 1Gbps Ethernet vs. 4Gbps Fibre Channel. Now that iSCSI can run on 10Gbps Ethernet (as can Fibre Channel via FCoE), those performance-oriented arguments are crumbling.

But don’t take my word for it.

openBench Labs, which contributes lab reviews to InfoStor, recently set up an interesting test case by building a modest, inexpensive 10GbE iSCSI storage network that turned in some impressive performance results.

You’ll have to read the full review to put some perspective on the performance numbers, but openBench Labs CTO Jack Fegreus clocked average throughput of 5,000 I/Os per second (IOPS) with one virtual machine (VM), which was in line with what openBench Labs achieved with direct Fibre Channel access to the same disk array in previous tests. With two VMs, throughput scaled to about 8,200 IOPS.
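For a rough sense of what those IOPS figures mean in bandwidth terms, throughput is simply IOPS multiplied by the I/O block size. The 8KB block size below is an assumption for illustration only; the review should be consulted for the actual Iometer I/O profile used:

```python
# Back-of-the-envelope IOPS-to-throughput conversion.
# NOTE: the 8KB block size is an assumed, illustrative value --
# it is not stated here what I/O size the Iometer runs used.
def throughput_mb_per_sec(iops: float, block_kb: float = 8.0) -> float:
    """Approximate throughput in MB/s for a given IOPS rate and block size."""
    return iops * block_kb / 1024.0

one_vm = throughput_mb_per_sec(5_000)   # single-VM result from the tests
two_vms = throughput_mb_per_sec(8_200)  # two-VM result from the tests
print(f"1 VM:  {one_vm:.1f} MB/s")
print(f"2 VMs: {two_vms:.1f} MB/s")
```

Even at these rates, a 10GbE link (roughly 1,250 MB/s of raw bandwidth) is nowhere near saturated, which is the point: for small-block workloads like these, IOPS rather than wire speed is the limiting factor.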

Jack concludes his review with this observation: “Given these results, our 10GbE iSCSI configuration with QLogic Intelligent Ethernet Adapters should be able to support the installation of Microsoft Exchange Server on a VM with upwards of 5,000 mail boxes.” Quite sufficient for most SMBs. “The simplest and most immediate strategy for IT to begin leveraging 10GbE in a data center, especially when dealing with a virtual environment, is to begin implementing iSCSI.”
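Jack's mailbox estimate lines up with the sizing rules of thumb in use at the time, which budgeted on the order of one I/O per second per heavy-profile mailbox. The per-mailbox figure below is an assumed planning value for illustration, not a number taken from the review:

```python
# Back-of-the-envelope Exchange mailbox sizing check.
# ASSUMPTION: ~1.0 IOPS per heavy-profile mailbox, a common planning
# figure of the era; lighter user profiles would support more mailboxes.
IOPS_PER_MAILBOX = 1.0   # assumed planning value, not from the review

measured_iops = 5_000    # single-VM result from the openBench Labs tests
supported_mailboxes = measured_iops / IOPS_PER_MAILBOX
print(int(supported_mailboxes))
```

Under that assumed profile, the single-VM result supports roughly 5,000 mailboxes, consistent with Jack's conclusion.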

Jack’s 10GbE iSCSI network consisted of three Dell PowerEdge servers running Windows Server 2008 R2 and VMware ESX Server 4; iSCSI software from StarWind; dual-port 10GbE adapters and 4Gbps Fibre Channel HBAs from QLogic; and a Xiotech Emprise 5000 disk array with two 4Gbps Fibre Channel ports. Performance was measured with Intel’s Iometer benchmark.

Check out the full review: “How to jumpstart SAN + LAN convergence.”

Related blog posts:
Virtual server SANs: FC vs. iSCSI vs. NAS
Intel, Microsoft top 1,000,000 IOPS in iSCSI tests
Video surveillance is a real sweet spot for iSCSI
NAS gains in virtual server environments

And you might want to consider attending this upcoming (May 26) SNIA Webcast: “iSCSI and New Approaches to Backup and Recovery.”


posted by: Dave Simpson


Dave Simpson has been the Editor-in-Chief of InfoStor since its inception in 1997. He previously held editorial positions at publications such as Datamation, Systems Integration, and Digital News and Review. He can be contacted at dsimpson@quinstreet.com
