Most data centers have adopted SANs supported by Fibre Channel. Recent technology advances such as multicore processors, high-density servers, server I/O performance, and server consolidation and virtualization continue to accelerate adoption of Fibre Channel-based solutions. Fibre Channel has been the dominant storage system interconnect since the mid-1990s and remains customers' preferred choice for SANs. FC SANs offer a range of benefits such as improved backup and restore, enhanced business continuance, and simplified consolidation.

FCoE is a new protocol that expands Fibre Channel into the Ethernet environment. FCoE combines two leading technologies, the Fibre Channel protocol, the predominant SAN technology, and Ethernet, which is supported in virtually all servers and data centers, to give end users more options for FC SAN connectivity and networking. The FCoE protocol specification maps Fibre Channel natively over Ethernet and is independent of the Ethernet forwarding scheme (see Fig. 1). It allows an evolutionary approach to I/O consolidation by preserving all Fibre Channel constructs, maintaining the latency, security, and traffic management attributes of FC while preserving investments in tools, training, and SANs.

I/O CONSOLIDATION AND UNIFICATION

Many IT organizations operate multiple networks to connect to servers (for example, one for IP networking, one for storage, and one for Inter-Process Communication [IPC] in high-performance computing environments). IT organizations incur costs in numerous ways due to these overlapping networks: additional capital equipment, the added cost and complexity of cabling and airflow, administrative overhead, and the extra power and cooling imposed by multiple components.

The vision of I/O consolidation and unification is the ability of an adapter, switch, and/or storage system to use the same Ethernet physical infrastructure to carry different types of traffic with very different characteristics and handling requirements. For the IT network manager this equates to installing and operating a single network instead of three, while still having the ability to differentiate between traffic types. The data-center manager can purchase fewer host bus and server adapters, cables, switches, and storage systems, reducing power, equipment, and administrative costs.

10 GIGABIT ETHERNET

I/O consolidation and unification promises to support both storage and network traffic on a single network. One of the primary enablers for consolidation is 10 Gigabit Ethernet, a technology with the bandwidth and latency characteristics sufficient to support multiple traffic flows on the same link. The following factors are driving adoption and the eventual ubiquity of 10GbE:

–Server virtualization enables workload consolidation, which contributes to network throughput demands. Virtualization aggregates multiple applications and OS instances on a single physical server, with each application and OS instance generating significant I/O traffic, placing heavy demand on existing multiport 1GbE infrastructures. The demand is particularly high on 1GbE storage systems, which must support higher I/O rates, greater capacity, and faster nondisruptive provisioning;
–Multisocket, multicore server technology supports higher workload levels, which demand greater network throughput; and
–Increasing use of network storage requires higher bandwidth between servers and storage.

ETHERNET ENHANCEMENTS AND DATA-CENTER BRIDGING

For 10GbE to be an even stronger option for server I/O consolidation and storage networking, enhancements need to be made to Ethernet to support the unification of multiple fabrics onto a single Ethernet network. The following extensions to classical Ethernet, standardized in the IEEE as Data Center Bridging (DCB), give 10GbE the performance to support transmission mechanisms beyond Internet Protocol, including Fibre Channel over Ethernet.

Priority Groups
Priority Groups, implemented by using the IEEE 802.1Qaz Enhanced Transmission Selection specification, allow greater quality of service (QoS) control among different “lanes” on the same physical cable. Priority Groups allow storage traffic to be managed as a group, with configurable QoS guarantees such as latency and bandwidth. This further enhances Ethernet’s ability to accommodate Fibre Channel storage traffic on a common 10GbE fabric.
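
As an illustration, the short Python sketch below models how ETS-style Priority Groups might carve up a 10GbE link. The group names and bandwidth percentages are assumed example values, not a standard profile.

```python
# Illustrative model of ETS-style bandwidth allocation (IEEE 802.1Qaz).
# Group names and percentages are example values; real ETS also lets a
# group borrow bandwidth that other groups are not using.

ETS_GROUPS = {
    "lan":     {"priorities": [0, 1, 2], "bandwidth_pct": 40},
    "storage": {"priorities": [3],       "bandwidth_pct": 40},  # FCoE traffic class
    "ipc":     {"priorities": [4, 5],    "bandwidth_pct": 20},
}

def allocate_gbps(link_gbps: float = 10.0) -> dict:
    """Return the minimum bandwidth (Gbps) guaranteed to each group."""
    return {name: link_gbps * g["bandwidth_pct"] / 100 for name, g in ETS_GROUPS.items()}

if __name__ == "__main__":
    for group, gbps in allocate_gbps().items():
        print(f"{group:8s} guaranteed {gbps:.1f} Gbps")
```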

Priority Flow Control and Lossless Ethernet
Classical Ethernet manages congestion by dropping packets, relying on higher-level, connection-oriented protocols to recover from the losses. The IEEE 802.3x Pause mechanism transforms Ethernet into a lossless fabric, allowing it to emulate Fibre Channel operation, but it operates at the level of an entire port. Priority Flow Control (IEEE 802.1Qbb) instead operates on a per-priority class basis, using the 802.1p priority field, further enhancing classical Ethernet so that it can transport multiple traffic types simultaneously over a common, enhanced Ethernet fabric.

With Priority Flow Control, Ethernet can support traffic including Fibre Channel over Ethernet, high-performance computing, and classical Ethernet. DCB-capable products enable lossless Ethernet fabrics by using Priority-Based Flow Control (PFC) to pause traffic based on priority level. This allows virtual lanes to be created within an Ethernet link, with each virtual lane assigned a priority level. During periods of heavy congestion, lower-priority traffic can be paused, allowing higher-priority, latency-sensitive traffic such as VoIP and data storage to continue.
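
As an illustration, the toy Python model below simulates the per-priority pause behavior: when the queue for the lossless FCoE class backs up, only that class is paused, while best-effort LAN traffic on the same link keeps flowing. The queue-depth threshold and priority assignments are illustrative assumptions.

```python
# Minimal simulation of Priority Flow Control (IEEE 802.1Qbb): a PAUSE for one
# priority class stops only that class's queue, unlike 802.3x, which would
# pause the entire port. Threshold and priority numbers are illustrative.

from collections import deque

PAUSE_THRESHOLD = 8  # frames buffered before the receiver pauses that priority

class PfcLink:
    def __init__(self, num_priorities: int = 8):
        self.queues = [deque() for _ in range(num_priorities)]
        self.paused = [False] * num_priorities

    def receive(self, priority: int, frame: str) -> None:
        self.queues[priority].append(frame)
        if len(self.queues[priority]) >= PAUSE_THRESHOLD:
            self.paused[priority] = True   # pause only the congested class

    def drain(self, priority: int) -> None:
        if self.queues[priority]:
            self.queues[priority].popleft()
        if len(self.queues[priority]) < PAUSE_THRESHOLD:
            self.paused[priority] = False

link = PfcLink()
for i in range(12):
    if not link.paused[3]:
        link.receive(3, f"fcoe-{i}")   # storage path is congested: nothing drains
    if not link.paused[0]:
        link.receive(0, f"lan-{i}")
    link.drain(0)                      # LAN consumer keeps up, so class 0 never pauses

print("FCoE queue:", len(link.queues[3]), "paused:", link.paused[3])  # 8, True
print("LAN queue: ", len(link.queues[0]), "paused:", link.paused[0])  # 0, False
```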

Congestion Management
In addition to the lossless fabric enabled by the Pause mechanism, a large network requires end-to-end congestion management. The IEEE 802.1Qau specification is being developed to support end-to-end congestion notification. With 802.1Qau, when congestion occurs in the interior of the network, traffic sources at the edges are instructed to throttle transmission, thus reducing congestion.
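
The sketch below illustrates the feedback loop in simplified form: a congestion point compares its queue depth to a target, and a reaction point at the network edge reduces its sending rate in proportion to the overshoot. The constants are illustrative assumptions, not values from the 802.1Qau specification.

```python
# Toy sketch of end-to-end congestion notification in the spirit of IEEE
# 802.1Qau: the congested switch sends feedback to the edge traffic source,
# which throttles its transmission rate.

def congestion_feedback(queue_len: int, target: int = 16) -> int:
    """Positive feedback means the queue overshot its target; the source must slow."""
    return queue_len - target

def react(rate_gbps: float, feedback: int, gain: float = 0.05) -> float:
    if feedback > 0:
        # Multiplicative decrease proportional to how far the queue overshot.
        return max(0.5, rate_gbps * (1 - gain * feedback))
    return rate_gbps  # no congestion: the real protocol would slowly probe back up

rate = 10.0
for queue_len in (10, 20, 30, 12):     # queue depths seen at the congestion point
    rate = react(rate, congestion_feedback(queue_len))
    print(f"queue={queue_len:2d} -> source rate {rate:.2f} Gbps")
```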

Energy Efficient Ethernet (EEE)
Energy Efficient Ethernet is being developed to automatically reduce the power consumption of Ethernet components (NICs, switch ports, etc.) during periods of low utilization. When completed, EEE is expected to offer further operational cost reductions for Ethernet-based storage relative to FC-based storage.

ENCAPSULATING FIBRE CHANNEL INTO ETHERNET

FCoE encapsulates a complete Fibre Channel frame within an Ethernet frame by mapping FC onto Ethernet. Fibre Channel and traditional networks have stacks of layers, where each layer in the stack represents a set of functionality. The Fibre Channel stack consists of five layers, FC-0 through FC-4. Ethernet is typically considered a set of protocols in the seven-layer OSI stack that define the physical and data link layers. FCoE carries the FC-2 layer over the Ethernet layer (Fig. 2), allowing Ethernet to transmit the upper Fibre Channel layers FC-3 and FC-4 over the IEEE 802.3 Ethernet layers.
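
A simplified sketch of the resulting frame layout follows. The FCoE EtherType (0x8906) is the registered value; the MAC addresses and FC frame contents are placeholders, and the SOF/EOF delimiter bytes shown are commonly cited encodings, so treat this as an illustration rather than a complete FC-BB-5 implementation.

```python
# Simplified construction of an FCoE frame (FC-BB-5 style layout, without the
# optional 802.1Q tag or the trailing Ethernet FCS).
import struct

FCOE_ETHERTYPE = 0x8906
SOF_I3, EOF_T = 0x2E, 0x42           # common start/end-of-frame delimiter encodings

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    header += bytes(13)               # 4-bit version (0) plus reserved bits
    header += bytes([SOF_I3])         # SOF delimiter carried in the FCoE header
    trailer = bytes([EOF_T]) + bytes(3)  # EOF delimiter plus reserved bytes
    return header + fc_frame + trailer

fc_frame = bytes(28)                  # placeholder: 24-byte FC header + 4-byte CRC
frame = encapsulate(b"\x0e\xfc\x00\x01\x00\x01", b"\x0e\xfc\x00\x01\x00\x02", fc_frame)
print(len(frame), "bytes on the wire (before Ethernet FCS)")
```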

Addressing
As with native Ethernet packets, FCoE uses Media Access Control (MAC) addressing to transfer packets between individual network hops, whereas Fibre Channel addressing requires endpoint-to-endpoint knowledge. FCoE implementations therefore contain mechanisms for mapping and resolving Fibre Channel endpoint addresses to Ethernet MAC addresses.
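
One widely used mechanism, the Fabric Provided MAC Address (FPMA) scheme from FC-BB-5, derives an endpoint's MAC address by prefixing its 24-bit FC_ID with the fabric's 24-bit FC-MAP. A minimal sketch, using the default FC-MAP and a made-up FC_ID:

```python
# Derive an FCoE endpoint MAC from its Fibre Channel address (FPMA scheme):
# the fabric's 24-bit FC-MAP prefix concatenated with the node's 24-bit FC_ID.

DEFAULT_FC_MAP = 0x0EFC00   # default FC-MAP value defined in FC-BB-5

def fpma(fc_id: int, fc_map: int = DEFAULT_FC_MAP) -> str:
    mac = (fc_map << 24) | fc_id
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))

print(fpma(0x010203))  # -> 0e:fc:00:01:02:03 (example FC_ID)
```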

Security
Using IEEE 802.1Q Tags, Ethernet can be configured with multiple Virtual LANs (VLANs) that partition the physical network into multiple separate and secure virtual networks. Using VLANs, FCoE traffic can be separated from IP traffic so that the two domains are isolated and one network cannot be used to view traffic on the other.
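
A minimal sketch of this isolation, with arbitrary example VLAN IDs (many fabrics dedicate one VLAN to FCoE, but the numbers here are assumptions):

```python
# 802.1Q VLAN separation: FCoE and IP frames carry different VLAN IDs in the
# tag's TCI field, so a switch will not bridge traffic between the two domains.
import struct

TPID = 0x8100                        # 802.1Q tag protocol identifier
FCOE_VLAN, LAN_VLAN = 1002, 100      # example VLAN IDs

def vlan_tag(vlan_id: int, priority: int = 0) -> bytes:
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # 3-bit PCP, 12-bit VID
    return struct.pack("!HH", TPID, tci)

def same_domain(tag_a: bytes, tag_b: bytes) -> bool:
    """Frames may only be bridged between ports in the same VLAN."""
    vid = lambda t: struct.unpack("!HH", t)[1] & 0x0FFF
    return vid(tag_a) == vid(tag_b)

print(same_domain(vlan_tag(FCOE_VLAN, priority=3), vlan_tag(LAN_VLAN)))  # False: isolated
```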

Fibre Channel Zoning and LUN Masking
Because the complete Fibre Channel frame is preserved using FCoE, all the traditional FC management functions such as zoning and LUN masking are performed in the same manner as with a standard Fibre Channel fabric. The FCoE switches understand the Fibre Channel zoning functions, and zones and zone sets are created and managed in the same fashion as with standard Fibre Channel switches. LUN masking is performed in the same manner as with traditional Fibre Channel fabrics. The application servers view storage LUNs in the same way whether they are presented with standard FC technology or with FCoE technology.
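
A toy model of how zoning and LUN masking combine to control visibility, unchanged between FC and FCoE fabrics (the WWPNs and LUN numbers below are made-up examples):

```python
# Zoning decides which ports may talk; LUN masking decides which LUNs a
# permitted initiator actually sees on a storage port.

ZONES = {
    "zone_db": {"20:00:00:25:b5:00:00:0a",    # server CNA/HBA port (example WWPN)
                "50:06:01:60:3b:a0:17:01"},   # storage array port (example WWPN)
}

LUN_MASKS = {  # storage port -> {initiator WWPN: visible LUNs}
    "50:06:01:60:3b:a0:17:01": {"20:00:00:25:b5:00:00:0a": {0, 1, 2}},
}

def can_talk(initiator: str, target: str) -> bool:
    return any(initiator in z and target in z for z in ZONES.values())

def visible_luns(initiator: str, target: str) -> set:
    if not can_talk(initiator, target):
        return set()
    return LUN_MASKS.get(target, {}).get(initiator, set())

print(visible_luns("20:00:00:25:b5:00:00:0a", "50:06:01:60:3b:a0:17:01"))  # {0, 1, 2}
```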

THE PATH TO END-TO-END FCOE CONVERGENCE

Today, IT managers often use four, six, or even eight network adapter ports in their mission-critical servers. These might be two Fibre Channel host bus adapters plus several server adapter or NIC ports for LAN traffic (Fig. 3). In a virtual machine environment, up to four additional NIC ports are needed for management, depending on the VM vendor's best practices. This topology allows for segmentation of different traffic types and applications, as well as redundancy, so that a connection failure will not impact service availability.

With Fibre Channel over Ethernet, IT organizations can incorporate FCoE-aware Ethernet switches into the access layer and converged network adapters or server adapters with an FCoE initiator at the host layer (Fig. 4). This simplifies the network topology so that only a single pair of adapters and a single pair of network cables are needed to connect each server to both the Ethernet and Fibre Channel networks. The FCoE-aware switches separate LAN and SAN traffic, providing seamless connectivity to existing storage systems.

As storage systems equipped with native FCoE interfaces come to market, IT organizations can integrate them into their data-center networks by connecting them directly to the FCoE fabric. Over time, data centers can migrate to an end-to-end converged fabric that uses FCoE-aware initiators, switches, and storage targets (Fig. 5) while maintaining their traditional Fibre Channel management tools.

FCOE BENEFITS

The business benefits of this improved topology include reduced cost and complexity, greater and more flexible performance, and reduced power consumption — all while providing seamless connectivity with existing Ethernet and storage networks.

Fewer Interconnects and Cables
With the simplified FCoE topology, what formerly took a minimum of four interfaces per server, two NICs and two HBAs, now requires only two FCoE adapters per server. This also frees up PCI slots. And the ability to carry FCoE and IP traffic over the same physical cables allows IT organizations to cut the number of network cables within each rack in half. In a typical 20-server rack with fully redundant connectivity, this reduces 40 Ethernet and 40 Fibre Channel connections to only 40 FCoE connections, as the calculation below shows. Fewer cables directly reduce the cost of cabling and help to reduce cabling errors. With simpler cabling, new server racks can be configured and deployed more rapidly, and fewer cables mean less restriction of front-to-back airflow, increasing cooling efficiency.
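
The cable arithmetic, worked in a few lines:

```python
# Cable count for a 20-server rack with fully redundant connectivity,
# before and after converging onto FCoE.
servers = 20
before = servers * 2 + servers * 2   # 40 Ethernet + 40 Fibre Channel connections
after = servers * 2                  # 40 converged FCoE connections
print(f"before: {before} cables, after: {after} cables ({before - after} fewer)")
```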

Unified Management
A new class of 10GbE network adapters helps to further simplify FCoE deployment, allowing FCoE to be incorporated without disrupting current data-center management practices, software, or the roles of network and storage administrators. These adapters present both an Ethernet interface and a Fibre Channel initiator to the server, allowing the operating system to see two physical devices. The adapters can implement FCoE either in hardware, in which case they are called Converged Network Adapters (CNAs), or as a software initiator, in which case they are called server adapters. This technology makes the existence of the converged network transparent to the operating system and applications, allowing both storage and network administrators to manage their respective domains just as they do today. Consistent management helps ease FCoE deployment while reducing operating expenses.

High Performance
With 10 Gbps Ethernet available today, and committees already discussing 40 and 100 Gbps transmission mechanisms, FCoE traffic can travel over the network technology that advances the fastest.

Reduced Power Consumption
According to an EPA study from August 2007, data-center equipment makes up over 50% of energy-related expenses in the data center (Fig. 6). Servers consume by far the most energy, followed by storage and networks. For every watt of energy used by IT equipment, approximately another watt is needed for data-center infrastructure (power and cooling), so every watt saved from IT equipment saves roughly two watts overall. The reduced power consumption that comes from using fewer NICs and fewer switches provides relief to organizations that are up against their data-center power and cooling envelopes.
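
Applying that rule of thumb to an assumed adapter consolidation (the wattage figures below are illustrative, not from the article):

```python
# Each watt saved at the server also saves roughly one watt of power/cooling
# infrastructure overhead, per the rule of thumb above.
nics_removed, watts_per_nic = 2, 10   # example figures
it_savings = nics_removed * watts_per_nic
total_savings = it_savings * 2        # IT watt saved + matching infrastructure watt
print(f"{it_savings} W saved at the server -> {total_savings} W saved overall")
```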

Investment Protection
The ability of FCoE and Fibre Channel to coexist allows administrators to use the same tools and techniques they use today in managing their storage. Also, the ability of FCoE networks to connect either natively to FCoE targets or directly to Fibre Channel networks preserves the investments that organizations have made in their storage infrastructure.

CONCLUSION

Fibre Channel over Ethernet extends, rather than replaces, Fibre Channel, allowing organizations to seamlessly integrate their Ethernet and Fibre Channel networks at the pace and along the path that work best for them. FCoE, combined with enhancements to Ethernet, will allow data centers to consolidate their I/O and network infrastructure, saving both capital and operational expenses while increasing flexibility and control.
