The major driver for Fibre Channel over Ethernet (FCoE) is to marry the massive economies of scale of Ethernet to enterprises' huge existing investments in Fibre Channel. The advantages of unification are significant in the data center: reduced equipment, leveraged Fibre Channel assets, and centralized storage.

FCoE could be of real benefit to large enterprise data centers. These entities pour billions of dollars per year into Fibre Channel storage, and being able to leverage that investment over 10Gbps Enhanced Ethernet is a very attractive proposition.

FCoE also makes a particularly compelling argument for applying Fibre Channel SAN storage to high-speed, short-range networks, such as blade server backplanes and virtualized servers that are commonly found at the data center edge.

FCoE at-a-glance

The concept of the FCoE standard is a simple one: FCoE encapsulates Fibre Channel frames so they can run over a 10Gbps Enhanced Ethernet LAN segment, enabling converged networks.

FCoE does not try to be everything to everyone, as smaller environments with Ethernet-only storage do not require FCoE connectivity. For Fibre Channel users, however, FCoE provides the ability to extend Fibre Channel storage from the data center core to its edge.

This scenario requires two related protocols. The first is FCoE itself, a transport standard that enables native Fibre Channel frames to run over Ethernet. The second is Enhanced Ethernet, a set of Ethernet extensions that supply the lossless transport FCoE requires.


Fibre Channel networks

The FCoE standard enables Fibre Channel traffic to run across multiple Enhanced Ethernet LAN segments within the same Layer 2 bridging domain. It supports SAN management domains by maintaining logical Fibre Channel SANs across a 10Gbps Enhanced Ethernet segment. FCoE enables Fibre Channel frames to run with no performance degradation and without making any changes to the frames.

Enhanced Ethernet, also called Converged Enhanced Ethernet (CEE), Data Center Ethernet, or Data Center Bridging (DCB), makes native Layer 2 Ethernet reliable enough that FCoE can dispense with Layer 3 TCP/IP entirely. Traditional Ethernet commonly experiences network congestion, latency, and frame dropping, which renders it unreliable for Fibre Channel traffic. But 10Gbps Enhanced Ethernet changes this by providing a “lossless” Ethernet fabric at Layer 2, so FCoE does not need TCP/IP for reliable delivery the way iSCSI does.

The lossless environment’s basic requirements are Priority Flow Control (PFC, or priority pause), Enhanced Transmission Selection (ETS, the bandwidth scheduler), and the Data Center Bridging Exchange (DCBX) discovery protocol. (Congestion notification is attractive but optional.) These capabilities allow Fibre Channel frames to run directly over 10Gbps Ethernet segments with no performance degradation.
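To make priority pause concrete, the sketch below builds an 802.1Qbb-style PFC frame in Python. The MAC control Ethertype (0x8808), the PFC opcode (0x0101), and the layout (a priority-enable vector followed by eight per-priority pause times) follow the published drafts; the function and constant names are our own, and this is an illustration of the frame format, not a working flow-control implementation.

```python
import struct

PFC_MCAST_DST = bytes.fromhex("0180c2000001")  # reserved MAC-control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101                            # per-priority pause (802.1Qbb)

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Build a Priority Flow Control frame that pauses the given
    priorities (0-7) for the given number of quanta (512 bit times each)."""
    enable_vector = 0
    times = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio
        times[prio] = quanta
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    return PFC_MCAST_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload

# Pause only priority 3 (a value commonly proposed for FCoE traffic)
# for the maximum pause time, leaving the other seven classes flowing.
frame = build_pfc_frame(bytes(6), {3: 0xFFFF})
```

This per-priority vector is the key difference from the classic 802.3x pause, which can only stop the whole link: here, the storage class can be paused while ordinary LAN traffic keeps moving.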

A question of standards

Neither the FCoE standard nor the Enhanced Ethernet standards are complete yet, but major networking and system vendors have agreements in place and are actively qualifying their products anyway. This is not surprising: most of these vendors belong to both the T11 (FCoE) and IEEE (Enhanced Ethernet) standards groups, so they are well positioned to reach consensus as ratification winds its way through the standards process.

Of the two standards, FCoE is farther along and should be ratified later this year. Enhanced Ethernet will take longer, as the IEEE is not exactly known for its lightning-fast ratification speed. Meanwhile, the vendor community has agreed upon the Ethernet standard they will submit to IEEE and will use it to implement the initial commercialized versions of FCoE products. FCoE-enabled networking components are shipping now, with OEM qualifications expected within a few months.

The lack of ratification, and the resulting basic integration levels, will keep FCoE at the data center edge and in non-critical server environments. But in the corporate data center, most mission-critical servers already store to Fibre Channel over direct port connections and do not require FCoE or Enhanced Ethernet to do so. There are advantages to using FCoE and Enhanced Ethernet even in mission-critical servers, but for now the data center can leave well enough alone while the standards are ratified and interoperability improves. (Enhanced speed, unified I/O, and reduced redundant servers and cabling are attractive to the data center core once the protocols are stable.)

New equipment

The necessary equipment to support these protocols includes:

  • Enhanced Ethernet switches to provide 10Gbps Enhanced Ethernet;
  • Converged network adapters (CNAs) that support both Ethernet and Fibre Channel;
  • An FCoE forwarder that performs the stateless encapsulation/de-encapsulation function.


The FCoE forwarder is significantly lighter weight than a gateway. A gateway, such as an iSCSI-to-Fibre Channel gateway, must terminate one protocol (iSCSI) and initiate another (Fibre Channel). iSCSI therefore requires two sessions: one from the initiator to the gateway, and one from the gateway to the target. With FCoE, there is a single Fibre Channel session from the FCoE initiator to the Fibre Channel target.
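The forwarder can stay stateless because encapsulation is a pure wrapping operation. Here is a simplified Python sketch of that idea: the native FC frame is placed, byte-for-byte unchanged, between an FCoE header and trailer. The Ethertype 0x8906 is the one assigned to FCoE; the header/trailer layout follows the FC-BB-5 draft, but the SOF/EOF code points and the dummy frame contents here are illustrative only.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Statelessly wrap a native FC frame in an Ethernet frame.
    Header: version/reserved bits plus a 1-byte SOF delimiter; trailer:
    a 1-byte EOF delimiter plus reserved padding. SOF/EOF values are
    illustrative placeholders, not taken from the ratified standard."""
    header = bytes(13) + bytes([sof])   # version=0 and reserved bits, then SOF
    trailer = bytes([eof]) + bytes(3)   # EOF, then reserved padding
    return (dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
            + header + fc_frame + trailer)

def fcoe_decapsulate(eth_frame: bytes) -> bytes:
    """Reverse the wrapping: recover the FC frame byte-for-byte."""
    assert struct.unpack("!H", eth_frame[12:14])[0] == FCOE_ETHERTYPE
    return eth_frame[28:-4]  # skip MACs + Ethertype + FCoE header; drop trailer

fc = bytes(36)               # dummy stand-in for a minimal FC frame
wire = fcoe_encapsulate(bytes(6), bytes(6), fc)
assert fcoe_decapsulate(wire) == fc   # round trip leaves the FC frame unchanged
```

Because no protocol is terminated or re-initiated, the forwarder keeps no per-session state; it only wraps and unwraps, which is why the single end-to-end Fibre Channel session survives intact.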

FCoE requires some new investment in equipment, but the new products are not dedicated to FCoE. The Enhanced Ethernet switches will share 10Gbps Ethernet with all other Ethernet traffic, while the CNAs will provide the functionality of HBAs with additional FCoE connectivity. The cost will not be that much more than what the enterprise is spending now on enterprise storage resources. Wider adoption rates should also bring down initial costs. There are no additional costs to using FCoE with Fibre Channel because the SAN’s hardware, software, and operations remain unchanged. And Enhanced Ethernet benefits not only FCoE but also Ethernet traffic by offering isolated traffic classes, lossless transmission, and 10Gbps speeds.

FCoE adoption will not be without its challenges. Equipment cost is not a huge factor, but troubleshooting may be. Isolating faults is more straightforward in a dedicated network than in a converged one, so the ability to separately manage the converged fabrics sharing the same physical pipe will be extremely important. VLANs are the likely solution: they let storage administrators manage the Fibre Channel traffic separately while Ethernet administrators manage the Enhanced Ethernet network. FCoE-enabled 10Gbps switches will replace separate LAN switches and Fibre Channel directors.


VLANs will also allow Fibre Channel administrators to retain existing operational procedures. Even so, we expect some overlap in practices and some management friction, since FCoE depends on the Ethernet segment and its resources. Also, servers hosting mission-critical applications generally have large numbers of Ethernet and Fibre Channel ports—more than can reasonably be converged into a redundant pair of 10Gbps ports. Edge deployments, with fewer ports, will benefit more from convergence.
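The VLAN separation described above comes down to 802.1Q tagging. The sketch below shows the mechanics: each frame gets a 4-byte tag carrying a VLAN ID and a priority, so storage and LAN traffic stay distinguishable (and separately manageable) on one physical pipe. The tag layout is from 802.1Q; the specific VLAN IDs and the priority value 3 for FCoE are illustrative conventions, not mandated values.

```python
import struct

TPID = 0x8100  # 802.1Q tag protocol identifier

def vlan_tag(eth_frame: bytes, vid: int, priority: int) -> bytes:
    """Insert an 802.1Q tag (PCP=priority, VID=vid) after the two MAC
    addresses of an untagged Ethernet frame."""
    tci = (priority << 13) | (vid & 0x0FFF)   # 3-bit PCP, 1-bit DEI=0, 12-bit VID
    tag = struct.pack("!HH", TPID, tci)
    return eth_frame[:12] + tag + eth_frame[12:]

# Dummy untagged frames: 12 bytes of MACs, Ethertype, then payload.
raw_fcoe_frame = bytes(12) + b"\x89\x06" + bytes(48)   # FCoE Ethertype
raw_ip_frame = bytes(12) + b"\x08\x00" + bytes(48)     # IPv4 Ethertype

# Keep FCoE on its own VLAN and priority; LAN traffic on another.
storage_frame = vlan_tag(raw_fcoe_frame, vid=100, priority=3)  # values illustrative
lan_frame = vlan_tag(raw_ip_frame, vid=200, priority=0)
```

Because the priority bits in the tag are what PFC and ETS act on, this same tag is how the lossless treatment gets applied to the storage class and not to everything else.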

FCoE and the data center

CNAs are in corporate testbeds now. Customers’ primary interest in FCoE centers on intensive computing environments that are located in the data center, but are not attached to the SAN. FCoE is about extending and leveraging existing Fibre Channel resources to these environments.

Together with Enhanced Ethernet, FCoE provides three important benefits to data center administrators:

  1. Enables them to replace direct-attached storage (DAS) with existing centralized storage;
  2. Leverages Fibre Channel investment because administrators do not have to purchase a separate iSCSI SAN;
  3. Delivers 10Gbps Enhanced Ethernet networking to high performance environments.

These high performance environments face several issues that FCoE and Enhanced Ethernet are positioned to solve. Issues include the prevalence of DAS, the perceived need to deploy iSCSI SANs to centralize storage, and a large tangle of energy-consuming cables and redundant equipment.

The majority of testbed deployments exist for these reasons, and we expect early adoptions in 2009 to remain in these types of environments. As deployments prove stable and the standards are ratified, we will see FCoE and Enhanced Ethernet move into mission-critical applications within a few years. Mainstream adoption will lag a year or two behind the early adopters, but given the strong interest in FCoE and the benefits of FCoE and Enhanced Ethernet, we should see mainstream testbed deployments in 2009 and edge adoptions in 2010.

There will certainly be organizational and budgetary challenges. Still, we believe that the advantages of centralizing storage on existing Fibre Channel SANs outweigh those issues:

ROI #1: Centralize storage for intensive server environments. Blade and virtualized servers at the data center edge traditionally store data to DAS; however, a converged Ethernet fabric based on Enhanced Ethernet and FCoE provides wide bandwidth, high speed, and access to Fibre Channel SANs. Extending Fibre Channel storage to these I/O-intensive, high-performance environments allows IT to eliminate inefficient DAS and to leverage existing Fibre Channel SAN, instead of purchasing iSCSI SANs. Enhanced Ethernet benefits I/O-intensive environments as well, because they require massive amounts of bandwidth and speed.

ROI #2: Reduce complexity and data center build-out by consolidating servers. Another early use of FCoE will be in server and network consolidation. Typical server clusters in data centers have five to seven I/O interfaces for different networks and redundant builds. FCoE and Enhanced Ethernet unify I/O through multi-protocol switches and host-based CNAs, allowing IT to sharply reduce the number of network devices, server-to-network interfaces, and cables that now interconnect the clusters. The number of interfaces shrinks down to, for instance, two 10GbE ports, two cables, and two switch ports.

Another consolidation advantage is that FCoE-enabled CNAs provide a standardized method of Fibre Channel SAN connectivity, which simplifies physical architecture and provisioning. As with I/O connectivity, there is no need to locate available Fibre Channel services, since all data center servers will have the CNAs, which enable IT or policy-driven operations to provision Fibre Channel services at will.

ROI #3: Help achieve energy-efficient data centers. Consolidation and network unification also reduce energy costs related to networking and storage in the data center. In general, networking is not as large an energy consumer as storage, but a converged fabric will still yield energy savings by reducing the number of cables, interfaces, and redundant servers in the data center. And by using FCoE to extend Fibre Channel storage to more data center servers, administrators avoid adding energy-hungry disk arrays. By centralizing storage on the SAN instead of on multiple DAS and iSCSI arrays, IT can significantly reduce the power the arrays require, the rack space they occupy, and the cooling they need.

Block protocol summary

ROI #4: Consistent SAN connectivity. CNAs enable dynamic Fibre Channel SAN configuration on Ethernet servers. There is no need to configure additional connections to a separate Fibre Channel port from the server. Fibre Channel connectivity also replaces inefficient DAS architectures. This simplifies physical architecture and provisioning, and avoids the equipment soup that multiple connections to different fabrics require. It also enables storage administrators to efficiently manage storage resources through Fibre Channel alone instead of Fibre Channel plus iSCSI and/or DAS.

Even though FCoE and Enhanced Ethernet are not yet ratified, they are buoyed by vendor agreements and qualifications. Ratification will be important as it will enable deeper integration and interoperability. This will allow data center administrators to trust FCoE and Enhanced Ethernet in the core data center environment as well as in edge-based virtualized and blade servers. But in the meantime, there is enough vendor support and enterprise interest to bring commercialized products to early adopters this year, and to the mainstream next year. Current FCoE equipment providers include Brocade, Cisco, Emulex and QLogic, but most storage and Ethernet vendors are deeply involved at the working group and integration levels.

In the meantime, InfiniBand vendors are sensing an opportunity to expand beyond their high-performance computing (HPC) niche. With just a few vendors involved (relative to the packed Ethernet field), InfiniBand changes are far easier to ratify, and it already provides much of the lossless environment that Enhanced Ethernet will provide.

Keep pushing

FCoE/EE proponents should keep pushing commercialization and standardization if they want to see a major data center market open up for these new protocols. At this point, we still expect fully ratified FCoE and Enhanced Ethernet standards to converge Fibre Channel and Ethernet for increased performance, greater data center density without spiraling energy costs, and lower capital and maintenance costs.

We believe that FCoE champions and their interoperability partners will play a large part in achieving the converged data center of 2009 and beyond.

