Storage implications of 10GbE

Posted on July 01, 2006

10Gbps Ethernet isn't required for NAS and iSCSI-based IP SANs, but adoption of the high-speed networking technology may spur further adoption of, and applications for, those storage technologies.

By David Dale

The standard for 10 Gigabit Ethernet (IEEE 802.3ae) was approved in June 2002. Positioned as a high-speed networking technology for LANs, MANs, and WANs, 10 Gigabit Ethernet (10GbE) was also hyped at that time as a key enabler of IP Storage proliferation.

Each new generation of Ethernet has extended the capabilities of the previous generation and followed a predictable cost/learning curve: products enter the market at a high price, prices decline as volumes pick up, and the technology becomes a commodity as the new network infrastructure becomes pervasive.

The previous incarnation of Ethernet, 1000Base-T (Gigabit Ethernet), operated over the installed Category 5 copper infrastructure and delivered 1Gbps of bandwidth. 10GbE continues this progression by increasing bandwidth to match the speed of the fastest technology on the WAN backbone (OC-192, which runs at about 9.5Gbps) and by extending native Ethernet from LANs to MANs and WANs.

Issues to consider

A number of issues were identified when the 10GbE standard was approved:

Pricing;

The ability of server hardware architectures to accommodate the high-speed interface;

How to mitigate the expected high CPU overhead associated with 10Gbps TCP/IP operation; and

The host operating systems’ ability to enable wire-speed low-latency 10Gbps operation (a requirement for server clustering).

Price reductions for 10Gbps products are a major influence on the adoption of 10GbE LANs. When 10GbE ports cost $70,000 apiece (the first wave of products, in 2002/2003), users paid a significant premium compared with the cost of ten 1Gbps ports. That disparity made link aggregation much more appealing as an intermediate solution; a significant price decline was needed for 10GbE to proliferate.
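
To make the economics concrete, here is a minimal sketch of that price comparison. The $70,000 figure is the one cited above; the 1Gbps and later 10GbE per-port prices are hypothetical assumptions, not vendor quotes.

```python
# Rough cost comparison: one 10GbE port vs. aggregating ten 1GbE ports.
# The $70,000 figure is the early-product price cited in the article;
# the other prices are illustrative assumptions, not vendor quotes.

def price_premium(price_10gbe_port, price_1gbe_port, aggregated_ports=10):
    """Ratio of one 10GbE port's price to the price of the equivalent
    aggregated 1GbE capacity."""
    return price_10gbe_port / (price_1gbe_port * aggregated_ports)

# First-wave pricing (2002/2003), assuming ~$500 per managed 1GbE port:
print(f"{price_premium(70_000, 500):.0f}x premium")   # ~14x -> aggregation wins

# Hypothetical later pricing: ~$4,000 per 10GbE port, ~$400 per 1GbE port:
print(f"{price_premium(4_000, 400):.1f}x premium")    # ~1x -> roughly break-even
```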

Within servers, terminating a 10GbE link places strain on the overall server architecture. In 2003, the I/O system of a typical server running Windows 2000 used PCI running at 66MHz, with an effective bandwidth of about 350MBps. However, to saturate a 10GbE link, about 1.25GBps of bandwidth is required in each direction, or a total of 2.5GBps-approximately seven times the bandwidth of a PCI bus.
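
As a quick back-of-the-envelope check of that mismatch, the sketch below uses the approximate figures cited above (350MBps of effective PCI bandwidth, 1.25GBps per direction for 10GbE):

```python
# Back-of-the-envelope check of the bus-bandwidth mismatch described above.
# Figures are the approximations cited in the text, not measurements.

PCI_EFFECTIVE_GBPS = 0.35                 # ~350 MBps effective for 66MHz PCI
LINK_RATE_GBITS = 10.0                    # 10GbE line rate in gigabits per second

bytes_per_direction = LINK_RATE_GBITS / 8.0        # ~1.25 GBps each way
full_duplex_demand = bytes_per_direction * 2       # ~2.5 GBps both directions

print(f"Full-duplex demand: {full_duplex_demand:.2f} GBps")
print(f"Shortfall vs. PCI:  {full_duplex_demand / PCI_EFFECTIVE_GBPS:.1f}x")  # ~7x
```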

The third issue concerned TCP/IP processing overhead, which many people thought would be a problem for storage over 1Gbps Ethernet, and virtually everyone believes will be an issue at 10Gbps speeds. TCP/IP offload didn't prove to be a major issue for iSCSI over Gigabit Ethernet: Today's CPUs have plenty of performance headroom to accommodate TCP/IP processing without impacting application performance, so only a small fraction of iSCSI deployments use TCP/IP offload hardware. This will not be the case at 10Gbps speeds, where TCP/IP offload engine (TOE) solutions may be required.
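
As a rough illustration of why offload becomes unavoidable at 10Gbps, the sketch below applies the often-quoted (and very approximate) rule of thumb that software TCP/IP processing costs on the order of 1Hz of CPU per bit per second of throughput; the CPU clock rates are assumptions for servers of that era, not measurements.

```python
# Very rough estimate of software TCP/IP processing cost, using the
# often-quoted "~1 Hz of CPU per bit/s of throughput" rule of thumb.
# CPU clock rates are assumptions for mid-2000s servers, not measurements.

def cpu_fraction(link_gbps, cpu_ghz, cores=1):
    """Approximate fraction of total CPU consumed by TCP/IP processing."""
    cycles_needed = link_gbps * 1e9          # ~1 cycle per bit per second
    cycles_available = cpu_ghz * 1e9 * cores
    return cycles_needed / cycles_available

print(f"1GbE on one 3GHz core:   {cpu_fraction(1, 3.0):.0%}")   # plenty of headroom
print(f"10GbE on one 3GHz core:  {cpu_fraction(10, 3.0):.0%}")  # far beyond one core
print(f"10GbE on two 3GHz cores: {cpu_fraction(10, 3.0, cores=2):.0%}")
```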

The fourth issue concerns the host operating system's handling of I/O requests. Today, I/O traffic is buffered in memory before being placed in working memory and then written to disk. This typically involves several copies into memory, each of which involves traffic over the memory bus. At 10Gbps speeds, the effect of these copies could swamp the memory bus. To address this issue, the Internet Engineering Task Force (IETF) is working on a number of standards that could cut down on memory copies and enable direct placement of data into memory (e.g., remote direct memory access, or RDMA). This will be a requirement for the high-bandwidth, low-latency operation needed for 10GbE server clusters.
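
The memory-bus arithmetic works the same way: every intermediate buffer copy reads and then rewrites the payload, so each copy adds roughly twice the wire rate in memory traffic. The sketch below is a simplified model, with the copy counts assumed purely for illustration, of why direct data placement matters at 10Gbps.

```python
# Simplified model of memory-bus traffic from buffered I/O vs. direct
# placement (RDMA-style). Copy counts are illustrative assumptions.

WIRE_GBPS_PER_DIRECTION = 1.25   # GBps carried by a saturated 10GbE link, one way

def copy_traffic(copies):
    """Each in-memory copy reads and writes the payload once, so it adds
    roughly 2x the wire rate in memory-bus traffic."""
    return WIRE_GBPS_PER_DIRECTION * 2 * copies

print(f"Two intermediate copies: {copy_traffic(2):.1f} GBps of extra memory "
      f"traffic per direction")
print(f"Direct placement:        {copy_traffic(0):.1f} GBps of extra memory traffic")
```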

10 Gigabit Ethernet today

10GbE products have been available since 2002. Initial products focused on switch-to-switch connectivity and server-to-server clustering connectivity. Today, factors driving adoption of 10GbE include the rapid growth of voice-over-IP and storage-over-IP (both NAS and iSCSI) applications in many enterprises.

Per-port pricing has been steadily dropping. By year-end, a 10Gbps link is likely to make more economic sense than aggregating multiple 1Gbps links. Significant progress has also been made on the other three issues mentioned earlier. For example, the proliferation of higher-speed I/O options for servers (such as PCI-X and PCI-Express) now makes 10Gbps I/O connections more practical.
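
Extending the earlier bus arithmetic to these newer interconnects shows why they make 10Gbps I/O practical. The figures below are nominal peak rates for first-generation PCI-X and PCI Express parts (an x8 PCIe slot is assumed), not measured throughput.

```python
# Nominal peak bandwidth of server I/O options vs. the ~2.5 GBps of
# full-duplex traffic on a saturated 10GbE link. Figures are nominal
# peak rates for first-generation parts, not measured throughput.

FULL_DUPLEX_10GBE_GBPS = 2.5

io_options_gbps = {
    "PCI 64-bit/66MHz (effective)":  0.35,  # shared, half-duplex bus
    "PCI-X 64-bit/133MHz":           1.06,  # shared, half-duplex bus
    "PCIe 1.x x8 (both directions)": 4.0,   # ~2 GBps each way, full duplex
}

for name, gbps in io_options_gbps.items():
    share = gbps / FULL_DUPLEX_10GBE_GBPS
    print(f"{name}: ~{gbps:.2f} GBps ({share:.0%} of a saturated 10GbE link)")
```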

The availability of low-cost chips implementing TCP/IP offload is also making TOE solutions more attractive. On the software front, de facto standards are emerging for TCP/IP offload and acceleration. An example of this is Microsoft’s recent release of the Windows Server 2003 Scalable Networking Pack, which implements TCP Chimney (a TCP/IP offload architecture), Task Offload (checksum calculation offloading), and Receive-Side Scaling (which allows TCP receive processing to run on multiple processors). All of this will have a positive impact on the availability of broadly supported TOE solutions for 10GbE.

Finally, standards designed to enable server clustering using 10GbE have made significant progress within the IETF. These protocols (e.g., DDP and RDMAP) are expected to reach the final stage of the standards process in the near future.

Storage implications

The availability of 10 Gigabit Ethernet has implications for all applications of storage over IP, including NAS, iSCSI, and interconnecting Fibre Channel SAN islands over WANs. 10GbE delivers greater performance headroom for each of these protocols.

However, because the 10GbE and iSCSI standards emerged within about a year of each other, they somehow became linked as interdependent, with each positioned as the "killer app" for the other. This is a misconception.

Native iSCSI disk arrays and IP SANs have been available for about three years, and the number of IP SAN deployments has grown from about 2,500 in 2004 to an expected 22,000 by the end of this year. So far, all of those iSCSI-based SANs run on GbE, and almost all of them use only software drivers on the server side.

Most IP SAN deployments are at the departmental level of larger enterprises, or in the main data center of small and medium-sized enterprises. Most are “green-field” SANs replacing direct-attached storage, particularly in Windows environments comprising smaller servers where limited admin support, host-attach costs, and infrastructure complexity have inhibited deployment of Fibre Channel SANs.

Some enterprises now have “Ethernet-only data centers,” with all storage traffic using Gigabit Ethernet for NAS, SAN, and inter-data-center connectivity.

Given that more than 40% of the world’s storage is still direct-attached, continued growth opportunities for IP SANs are enormous. And iSCSI doesn’t need the extra bandwidth of 10GbE to successfully address that market.

Is 1Gbps fast enough?

There is a widespread perception that choosing an iSCSI-based SAN means you have to compromise on performance. However, iSCSI SANs using standard 1Gbps Ethernet cards and free software initiators that come with the operating system provide acceptable performance for the vast majority of enterprise applications. Disk arrays connected by 2Gbps Fibre Channel are not 2x as fast as 1Gbps iSCSI arrays, and 4Gbps Fibre Channel arrays are not 4x as fast.

The Enterprise Strategy Group (ESG) has done extensive comparative performance testing using real-world application workloads and found that 2Gbps Fibre Channel solutions typically deliver only 5% to 15% better performance than 1Gbps iSCSI solutions using software initiators. For more information, go to www.enterprisestrategygroup.com.

The misconception is that storage performance is proportional to the bandwidth of the storage interconnect. An analogy would be a car traveling along a road. Adding more lanes to the road doesn’t make the car go faster unless there is significant congestion. And there is plenty of bandwidth to spare for most application workloads, even at 1Gbps.
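
To see how much headroom a 1Gbps link actually offers, consider a hypothetical transactional workload; the IOPS and block-size figures below are illustrative assumptions, not benchmark results.

```python
# How much of a 1Gbps iSCSI link would a typical transactional workload use?
# Workload figures are illustrative assumptions, not benchmark data.

GIGABIT_LINK_MBPS = 1000 / 8      # ~125 MBps raw; usable is somewhat less
                                  # after Ethernet/TCP/iSCSI overhead

def workload_mbps(iops, block_kb):
    """Sustained throughput generated by an I/O workload, in MBps."""
    return iops * block_kb / 1024

oltp = workload_mbps(iops=5_000, block_kb=8)       # small random I/O
print(f"OLTP-style workload: {oltp:.0f} MBps "
      f"({oltp / GIGABIT_LINK_MBPS:.0%} of a 1Gbps link)")
```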

Other factors have a much more profound effect on the performance of disk arrays. The number, type, and rotational speed of the disks have the biggest impact. For example, large arrays deliver more performance than small arrays, and Fibre Channel drives deliver more performance than an equal number of ATA drives. The array's ability to optimize data placement and to stripe data across as many disks as possible also has a significant impact on performance. Interconnect bandwidth is a relatively insignificant factor compared with these other issues.
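
The same point can be made with a rough spindle-count estimate; the per-drive IOPS figures below are common rules of thumb for drives of the day, assumed here for illustration rather than taken from any specification.

```python
# Rough aggregate random-I/O capability of an array, driven by spindle
# count and drive type. Per-drive IOPS figures are rules of thumb, not specs.

PER_DRIVE_IOPS = {
    "15K RPM Fibre Channel drive": 180,   # assumed typical small random I/O rate
    "7200 RPM ATA drive":           80,   # assumed typical small random I/O rate
}

def array_iops(drive_type, spindles):
    """Back-of-the-envelope aggregate IOPS, ignoring cache and RAID penalties."""
    return PER_DRIVE_IOPS[drive_type] * spindles

for spindles in (14, 56):
    for drive_type in PER_DRIVE_IOPS:
        print(f"{spindles:3d} x {drive_type}: "
              f"~{array_iops(drive_type, spindles):,} IOPS")
```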

That said, the availability of 10GbE will expand the reach of both NAS and iSCSI SANs.

10GbE as a storage interconnect

An informal survey of iSCSI storage vendors indicates that most expect to ship arrays with 10GbE connectivity options this year. (Some high-end NAS vendors have already announced this capability.) The main thrust of these solutions will be to support large numbers of GbE-connected servers. This will enable the deployment of much larger iSCSI-based IP SANs. And 10GbE will also enable iSCSI to address very-high-performance applications that need low latency and more than 1Gbps of storage bandwidth.

Will 10GbE make iSCSI more competitive with Fibre Channel? Yes, no, and maybe.

The Fibre Channel performance advantage (both perceived and real) will gradually disappear. However, even today, the decision of whether to deploy Fibre Channel or iSCSI usually comes down to the question of whether you already have a Fibre Channel SAN infrastructure. If the answer is “yes,” you’ll likely choose Fibre Channel. If the answer is “no,” iSCSI is likely to be attractive.

Over time, all IT organizations will face decisions about their next-generation data-center fabric. Most organizations will eventually be using 10GbE in their data communications infrastructure. The question at that point will be, “Should I standardize on one interconnect technology for my next-generation data center, or does it make more sense to deploy multiple network types?”

Although 10GbE will be deployed as a storage interconnect, 10GbE is not about storage; rather, it’s about IT infrastructure.

All enterprises today view a scalable Ethernet infrastructure as a key enabler to competitive advantage. 10GbE enables IT organizations to scale their LAN infrastructure to accommodate ever-increasing amounts of data. It enables enterprises to extend their high-performance LAN to interconnect data centers within the metropolitan area, without having to resort to expensive leased telco lines. And it enables service providers to provide high-speed end-to-end Ethernet services.

10 Gigabit Ethernet solutions are available today, and recent advances in server, operating system, and I/O chipset support make deployment in 2006 a practical proposition.

David Dale is chair of the SNIA IP Storage Forum and an industry evangelist at Network Appliance. Members of the IP Storage Forum contributed to this article.

For more information, go to www.ipstorage.org.


What is 10 Gigabit Ethernet?

10 Gigabit Ethernet (10GbE) is the next generation of Ethernet. The IEEE 802.3ae standard defines the operation of the 802.3 Media Access Control (MAC) layer at 10Gbps while preserving the 802.3 frame format, including minimum/maximum frame size, so 10GbE supports all the network services that operate at Layers 2, 3, and higher of the OSI model (e.g., VLANs, spanning tree, MPLS, QoS, VoIP, security, etc.).

Although the IEEE 802.3 standard for GbE supported both full and half duplex, only products that provided full-duplex operation (and therefore avoided packet collisions) were successful in the market. Consequently, it was decided that 10GbE would be full duplex only. As a result, 10GbE imposes no protocol-level limit on reach: only the physics of transmission and the physical media limit the distance of a link.

IEEE 802.3ae defined two different physical layer (PHY) families: the LAN PHY transmits data over fiber, and the WAN PHY adds a SONET/SDH framing sublayer so it can use SONET as the transport.

The supported physical media include both fiber and copper cabling. For fiber, there are multiple derivatives of the standard, corresponding to different optical types:

10GBASE-E: 40km over singlemode fiber

10GBASE-L: 10km over singlemode fiber

10GBASE-S: 65m over multimode fiber

In addition, there is a variant (10GBASE-LX4) that supports legacy FDDI-grade fiber.

For copper, there are two standards, one ratified and one still in development:

10GBASE-CX4 (twin-axial copper cabling): 15m maximum (ratified)

10GBASE-T (Category 6 and 7 copper twisted-pair cable): an emerging standard from IEEE 802.3an, expected to be approved this summer.


Applications for 10GbE: LAN, MAN, WAN

When the 10 Gigabit Ethernet (10GbE) standard was ratified, three specific areas of adoption were highlighted: local area, metropolitan area, and wide area networks.

In the local area, a fundamental rule of building switched networks is that a faster technology is always needed to aggregate multiple lower-speed connections. The proliferation of GbE ports is driving the requirement for 10GbE connections. More specifically, 10GbE was seen as an interconnect for high-speed links between switches in the data center, enterprise backbones, in building-to-building connections (up to 40km using singlemode fiber), and as an interconnect for server clusters.

In the metropolitan area, 10GbE enables enterprises and service providers to deliver high-performance connectivity and services over dark fiber at a fraction of the cost of traditional technologies such as SONET, and without the complexity of protocol conversion and transport bridging. In effect, an organization’s LAN can extend to the metropolitan area.

In the wide area, 10GbE enables service providers to provide high-performance, cost-effective links that are easily managed with Ethernet tools. Since the end-point bandwidth scales up to the highest WAN backbone bandwidth, this offers the potential for end-to-end 10Gbps operation.

