LANs, MANs, SANs, and WANs to converge at 10Gbps

Posted on January 01, 2002


A look at the competing (or complementary) protocols for networking, storage networking, and inter-processor communications.

By Bob Hansen

Today, general networking, storage networking, and inter-processor communications have each developed different standards. As we move toward a 10Gbps world, total cost of ownership and interoperability will drive the industry toward a single networking interconnect standard.

In general networking, including all Internet applications, the interconnect technology is IP over Ethernet at 10Mbps, 100Mbps, or 1Gbps. Although IP over Ethernet scales across LANs, MANs, and WANs, and dominates the industry in port count, interoperability, and vendor support, the IP-over-Ethernet protocol stack (usually TCP/IP) is executed in the host CPU, making the architecture unsuitable for latency-sensitive applications such as database storage and inter-processor communications.

Storage networking encompasses two applications, or configurations. Network-attached storage (NAS) uses IP over Ethernet to transport data in file formats between storage servers and their clients, and storage area networks (SANs) transport blocks of data over Fibre Channel. Fibre Channel is the performance leader today at 1Gbps and 2Gbps link speeds and offers excellent (very low) latency characteristics due to a fully offloaded protocol stack. This is one reason why Fibre Channel-based SANs are often used in high-performance applications while Ethernet-based NAS is used where cost and ease of use are more important.
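The file-versus-block distinction can be sketched in a few lines of code. This is a toy illustration only: real NAS clients speak NFS or CIFS and real SAN initiators issue SCSI commands over Fibre Channel, but the difference in addressing, names and offsets versus raw logical block addresses, is the essential point.

```python
# Toy sketch of file-level (NAS-style) vs. block-level (SAN-style) access.
# Hypothetical paths and sizes; not tied to any particular product or protocol.

BLOCK_SIZE = 512  # bytes per block, typical for SCSI-era disks

def file_read(path, offset, length):
    """File-level access: the storage server resolves the name and offset."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def block_read(device, lba, count):
    """Block-level access: the initiator addresses raw blocks by LBA."""
    with open(device, "rb") as dev:
        dev.seek(lba * BLOCK_SIZE)
        return dev.read(count * BLOCK_SIZE)
```

With file access, layout decisions stay on the server; with block access, the client's filesystem or database owns the layout, which is why block storage has traditionally been favored for latency-sensitive database workloads.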


Figure 1: 10Gbps Ethernet products will be available first, with 10Gbps Fibre Channel, iSCSI, and InfiniBand expected to begin shipping in late 2002 or early 2003.

Finally, there are inter-processor communications (IPC) networks used for server clustering. Although generally limited to high-availability clusters, Ethernet is the most widely used IPC interconnect technology today. High-performance parallel processing clusters tend to use a proprietary interconnect designed for very low latency. IPC is one of the most important applications targeted by InfiniBand. With an architecture optimized for low latency and high bandwidth, InfiniBand appears ideally suited for this application.

Starting down the road to 10Gbps

The Internet's insatiable appetite for performance will continue to drive Ethernet development to faster link rates at a quicker pace than Fibre Channel. Where 1Gbps Ethernet development leveraged mature Fibre Channel technology, Fibre Channel development will now leverage 10Gbps Ethernet standards. InfiniBand will be introduced at 2.5Gbps but will quickly move up to 10Gbps.

For general networking, there are no serious challengers to Ethernet. CPU utilization will improve dramatically with the application of TCP Offload Engine (TOE) technology, which moves protocol stack execution from the host processor to the I/O card.
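The scale of the host-processing burden can be estimated with the old rule of thumb that a host-based TCP/IP stack consumes roughly 1Hz of CPU for every 1bps of sustained throughput. The rule is an approximation, not a measured figure, but it makes clear why offload matters at 10Gbps:

```python
# Back-of-the-envelope TCP/IP host-processing cost, using the common
# "1 GHz of CPU per 1 Gbps of TCP throughput" rule of thumb. Actual overhead
# varies with packet size, data copies, and interrupt rate.

def host_cpu_fraction(link_gbps, cpu_ghz, hz_per_bps=1.0):
    """Fraction of one CPU consumed by stack processing at full line rate."""
    return (link_gbps * hz_per_bps) / cpu_ghz

# A circa-2002 2 GHz processor driving various Ethernet link speeds:
for gbps in (0.1, 1.0, 10.0):
    print(f"{gbps:5.1f} Gbps -> {host_cpu_fraction(gbps, 2.0):.0%} of a 2 GHz CPU")
```

By this estimate a 10Gbps link would demand five 2GHz processors' worth of cycles just for protocol processing, which is the work a TOE moves onto the I/O card.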

The choice of interconnect technology for inter-processor communications also seems to be clear. Designed from the ground up for IPC and destined to be a standard I/O port on many chipsets, InfiniBand is well-positioned to dominate large segments of this application. However, Ethernet with VI (Virtual Interface) and TCP/IP in offloaded hardware at 10Gbps will have excellent latency and throughput characteristics and could challenge InfiniBand for certain segments of the IPC application market.


Figure 2: This I/O card design provides a fully offloaded, hardware-accelerated architecture that supports a variety of networking protocols as well as file- and block-storage I/O.

The storage networking application is much less clear-cut than either of the others. Today, Fibre Channel dominates high-performance storage networking applications because of the low-latency, high-bandwidth characteristics of the fabric. Even as the Internet Engineering Task Force (IETF) enables block storage over TCP/IP through its iSCSI standards effort, Ethernet must match Fibre Channel's performance if it is to compete seriously for the storage networking application.

Today, several companies are working to offload the iSCSI-TCP/IP protocol stack from the host processor, which will result in CPU utilization numbers similar to Fibre Channel. As these designs evolve, applying hardware acceleration to the protocol stack, latency will no longer be an issue at equal link rates. This technology will also dramatically reduce the latency of NAS, IPC, and general networking applications. The offloaded iSCSI-TCP/IP design, when interfaced to the host system through an IP Network Offload Engine architecture, allows networking traffic of all types to be offloaded and accelerated through a single I/O card.
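The core idea of carrying block I/O over a TCP byte stream can be sketched as framing a SCSI-style command inside a length-prefixed message. To be clear, this is a simplified toy frame, not the actual iSCSI PDU layout defined by the IETF (which uses a 48-byte Basic Header Segment); it only illustrates the encapsulation concept:

```python
import struct

# Toy length-prefixed framing for a block-read command on a TCP byte stream.
# NOT the real iSCSI PDU format; illustration of block-over-stream only.

def frame_read_cmd(lba, blocks):
    """Encapsulate a read request: opcode, logical block address, block count."""
    payload = struct.pack(">BQI", 0x28, lba, blocks)  # 0x28 = SCSI READ(10) opcode
    return struct.pack(">I", len(payload)) + payload  # length prefix delimits frames

def parse_frame(data):
    """Recover the command on the target side of the stream."""
    (length,) = struct.unpack(">I", data[:4])
    opcode, lba, blocks = struct.unpack(">BQI", data[4:4 + length])
    return opcode, lba, blocks
```

The length prefix is what lets discrete commands survive TCP's stream semantics; in hardware-offloaded designs, both this framing and the underlying TCP processing are handled on the I/O card.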

In the near term, Fibre Channel at 2Gbps will remain unchallenged in the data center while iSCSI enters the storage networking market through applications less sensitive to performance and high availability. As all link speeds move up to 10Gbps, Ethernet will become a very serious contender for the entire storage networking market. In this same time frame, NAS will begin to seriously challenge block storage for database applications.


Given the Fibre Channel installed base, and with FC over Ethernet and iSCSI contending for the storage networking market, the need for multi-protocol routers is clear. Several companies have announced bridges and routers, some including storage virtualization capabilities. Others are working to connect SAN islands through the use of a Fibre Channel-over-IP (FCIP) "tunneling" protocol. These developments, along with a strong commitment to interoperability by the iSCSI community, suggest that interconnecting networks will be easily accomplished.

When 10Gbps isn't really 10Gbps

First-generation 10Gbps Fibre Channel and Ethernet protocol chips, network interface cards, and host bus adapters will support a 10Gbps link speed but will be limited to 5Gbps to 8Gbps by the PCI-X host interface. To achieve a sustained 10Gbps throughput, a new interconnect standard is required. InfiniBand, PCI-X double data rate, Rapid I/O, HyperTransport, 3G I/O, and others are contending for acceptance as the 10Gbps interconnect standard.
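The PCI-X ceiling follows directly from the bus parameters: a 64-bit bus clocked at 133MHz moves about 8.5Gbps raw, and protocol overhead pulls the usable figure below that. A quick calculation:

```python
# Raw parallel-bus bandwidth: width (bits) x clock (MHz). Arbitration, split
# transactions, and header cycles reduce the usable figure toward the
# 5-8 Gbps range cited for first-generation 10 Gbps adapters.

def bus_gbps(width_bits, clock_mhz):
    """Raw bus bandwidth in Gbit/s."""
    return width_bits * clock_mhz / 1000.0  # Mbit/s -> Gbit/s

print(bus_gbps(64, 133))  # 64-bit PCI-X at 133 MHz: ~8.5 Gbps raw
print(bus_gbps(64, 66))   # 64-bit PCI at 66 MHz: ~4.2 Gbps raw
```

Since even the raw PCI-X figure sits below a fully loaded 10Gbps link plus host-side traffic, a faster host interconnect is needed to sustain full line rate.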

Storage networking applications will continue to be dominated by Fibre Channel for several more years while iSCSI solutions mature. By the time iSCSI becomes the preferred storage networking solution, the Fibre Channel installed base may be large, helping to drive the market for multi-protocol routers.


Bob Hansen is strategic business development manager in the Storage Networking Division of Agilent Technologies (www.agilent.com) in Roseville, CA.


