The Fibre Channel over Ethernet (FCoE) standard is fully baked, but there seems to be some hesitation among end users. Most storage professionals are interested in the concept of moving Fibre Channel traffic over Ethernet networks, but few have actually flipped the switch and put FCoE to use.

The FC-BB-5 working group of the T11 Technical Committee unanimously approved a final standard for FCoE in June 2009. As a result, the T11 Technical Committee plenary session has forwarded the FC-BB-5 standard to INCITS for further processing as an ANSI standard.

According to the Fibre Channel Industry Association (FCIA), the FCoE products in OEM qualification today are based on the completed standard and users will be able to benefit from standardized FCoE solutions from day one.

But that is the future. FCoE is still in the very early stages of adoption within storage organizations at large enterprises.

InfoStor has been tracking FCoE deployment plans closely. In a reader survey in April, approximately 9% of respondents said they planned to test FCoE in 2009, while 33% planned FCoE deployments for 2010 or 2011. However, the majority (58%) said they did not currently have deployment plans.

InfoStor posed the same question to readers last month and the results were similar. There has been a slight rise in the number of users planning FCoE deployments this year (13%), but almost 57% are standing pat with no deployments planned.

Industry analysts are not surprised, as the technology is still in its infancy. International Data Corp. (IDC) predicts 2010 will see an increase in converged networking pilot projects, with significant technology deployments expected in 2011.

Richard Villars, vice president of storage systems at IDC, believes the adoption of FCoE and CEE technologies is more than just a technology transition. He says it’s part of an overall goal to change the way enterprises build and operate data centers.

“Servers are getting very small and all of the cables coming out of them are increasingly packed tighter together. There is a move afoot to a deployment pattern of modular systems,” says Villars. “Now is the time that data center architects should start to plan for future designs, in terms of planning for an environment with these converged technologies to determine how to cable and power optimally.”

Case study

Designing and building an efficient data center is a top concern for Kemper Porter, systems manager in the Data Services Division of the Mississippi Department of Information Technology Services.

His department is in the midst of planning a big move to a new data center and expects to be up and running in six months. One of the items on Porter’s agenda is simplification, and part of simplifying the data center is a transition to converged networking.

“We want to clean up when we get to the new building. We know we’re going to have a very high rate of growth. We had to set a new precedent and deployment pattern,” says Porter.

His team essentially functions as a service provider to various state agencies, provisioning servers and IT resources to application developers across the state.

Porter envisions a massive proliferation of VMware virtual machines (VMs) deployed as building blocks that look and feel like mainframes, all connecting to centralized storage via converged network adapters (CNAs), with centralized data backups and disaster recovery handled at the server level.

CNAs consolidate the IP networking capabilities of an Ethernet NIC with the storage connectivity of a Fibre Channel HBA via FCoE on a 10GbE card.

Porter is currently using three single-port CNAs from QLogic in conjunction with two Cisco Nexus 5000 Series FCoE-capable network switches.

The CNAs are being used in a test and development capacity – Porter is waiting until the move to the new data center in 2010 to put them in critical roles. He says the move to CNAs is one of necessity.

“When you have a 3U-high server it does not have an integrated switch and you wind up with a proliferation of network cards – six Ethernet connections per server and two for Fibre Channel,” says Porter. “Having all these network cards creates a spider’s den of cables going in and out of the servers. It creates an excellent opportunity for physical mistakes and makes the process of troubleshooting more difficult.”

He is also concerned with recoverability. “How many VMs can I bring up in a hurry? How quickly can I restore if I lose a machine? Most of that is based on how many network connections you have and how fast you can get to your storage.”

Porter’s storage environment consists of Fibre Channel arrays and storage devices from IBM, Sun/StorageTek, and other vendors. Individual projects can account for as much as 40TB of storage, which is the case for the state’s geographic information system (GIS).

Despite his aversion to so-called bleeding edge technologies, Porter is confident that the CNAs with FCoE will meet his future needs.

“There isn’t anybody really using [FCoE], but I believe touching it and working with it is the only way to get your confidence up. I would not describe myself as an early adopter. This is just about as much fun as I can handle. If I were not moving to a new data center I probably would not be doing this,” Porter says.

The old way of doing things is a non-starter for Porter. His rack servers can comfortably house 77 VM instances per box, while maintaining mainframe-like reliability, but none of it would be possible without minimizing network adapters and port counts.

Porter now runs two to three connections to each server. “It brings the complexity way down. We will still have our Fibre Channel infrastructure with one connection rather than two per server and we still have redundant pathways because we connect to two Nexus switches,” he says.

Converged networking is the way of the future, at least for Porter. “We will buy CNAs to put in all of our future VMware servers. The technology is solid enough that we are going that way,” he says.

QLogic, Emulex make FCoE moves

By Dave Simpson

QLogic and Emulex, the duopoly of the Fibre Channel host bus adapter (HBA) space, are on a collision course in the market for converged network adapters (CNAs) based on Fibre Channel over Ethernet (FCoE). But they’re taking different strategies, and for now QLogic has the early lead in terms of OEM design wins.

Most recently, QLogic nailed an OEM deal for its single-chip CNAs with IBM’s Power Systems Division, marking the availability of native FCoE for Unix (AIX) and Linux platforms. It was the first time that QLogic secured a design win in Big Blue’s Power division, which has been an Emulex stronghold.

QLogic already had an FCoE CNA OEM deal with IBM for System x and BladeCenter systems.

QLogic also landed a design win with NetApp earlier this year. QLogic is (at least for now) the exclusive supplier of FCoE CNAs for NetApp’s target systems, and is the primary supplier of CNAs on the host side. (NetApp has also certified CNAs from Brocade on the host side.)

And EMC has selected QLogic FCoE CNAs for use in most of its storage systems.

QLogic’s single-chip 8100 series CNAs handle storage and networking traffic at 10GbE speed, include an FCoE offload engine, and do not require a heat sink.

Emulex is pursuing a different strategy. Instead of shipping a CNA with FCoE enabled out of the box, the company entered the market with a core 10GbE NIC. In this space, Emulex competes primarily with vendors such as Intel and Broadcom, as well as smaller vendors such as Chelsio and Neterion.

A copper version of Emulex’s 10GbE Universal Converged Network Adapter (UCNA) is priced at $1,136, and an optical version is priced at $2,461.

But Emulex is taking a pay-as-you-go approach, whereby users can enable iSCSI and/or FCoE via license keys. Pricing for the 10Gb NIC with either iSCSI or FCoE enabled is $1,935 (copper) or $2,799 (optical).

“We’re viewing this as an Ethernet play, not as a Fibre Channel over Ethernet play,” says Shaun Walsh, vice president of corporate marketing at Emulex. “We’re going at it from a 10Gb Ethernet perspective, not an FC replacement perspective.”

The single-chip OneConnect UCNAs support hardware offload for TCP/IP, iSCSI and FCoE.

Brocade connects data centers with FCoE blades

By Kevin Komiega

Continuing on its path toward making the DCX Backbone a cornerstone of converged networking, Brocade is shipping a pair of new Fibre Channel over Ethernet (FCoE) blades capable of connecting two or more data centers and consolidating server I/O.

The FX8-24 Extension Blade for the DCX Backbone family offers twelve 8Gbps Fibre Channel ports, ten 1GbE ports, and two optional 10GbE ports. Organizations can install up to two Brocade FX8-24 blades in a DCX or midrange DCX-4S backbone and can double the aggregate bandwidth to 40Gbps by activating the optional 10GbE ports.

Brocade also introduced FCIP Trunking with the FX8-24 Extension Blade. FCIP Trunking enables the creation of FCIP tunnels with up to 10Gbps bandwidth using 10GbE ports and 4Gbps using 1GbE ports, with full failover and load-balancing capabilities.

Additional new features and enhancements include Adaptive Rate Limiting, FCIP Quality of Service (QoS), and new compression algorithms.

The FX8-24 Extension Blade is joined by the FCoE 10-24 Blade as part of Brocade’s converged networking portfolio. The FCoE 10-24 is an end-of-row blade that provides FCoE connectivity for server I/O consolidation in the data center. It features 24 10Gbps Converged Enhanced Ethernet (CEE) ports and provides Layer-2 Ethernet functionality for LAN traffic. Storage traffic is delivered to the SAN through 8Gbps Fibre Channel blades in the DCX Backbone chassis.

Both the FX8-24 and FCoE 10-24 will be available through EMC, Hitachi Data Systems, HP, and NetApp.

In addition, Brocade is now offering the 7800 Extension Switch, which offers the same functionality as the FX8-24 Extension Blade in a fixed-configuration form factor.

According to Bill Dunmire, Brocade’s manager of product marketing, the 7800 lowers the cost of entry for smaller data centers and remote offices implementing point-to-point disk replication in open systems environments.

Brocade is tying its new hardware together with the Data Center Fabric Manager (DCFM) version 10.3 software. The new release includes FCIP management capabilities for managing new features, such as FCIP trunking, from a single management console.

Converged networks reduce cabling costs, complexity

By Kevin Komiega

The standards bodies are on the verge of ratifying Fibre Channel over Ethernet (FCoE) and Converged Enhanced Ethernet (CEE), also known as Data Center Bridging (DCB), as official standards, and the idea of moving LAN and Fibre Channel SAN traffic over the same Ethernet network is fast becoming a reality.

The unification of LAN and SAN traffic brings with it a significant reduction in the number of cables required to connect servers to storage resources in the data center. According to consultants, now is the time to start planning for unified network fabrics.

Ethernet has its flaws. It is considered a “best-effort” network that does not always deliver data in order and may drop packets altogether due to network congestion. Storage networks require that data be delivered in order and intact, which is why the high-performance Fibre Channel protocol was developed to create a separate “lossless” network to carry SCSI traffic between servers and storage devices.

Combining the two and sending storage traffic over Ethernet networks requires two things: 10GbE and enhancements to Ethernet that prevent data loss without incurring performance penalties.

Those enhancements, referred to collectively as CEE or DCB, are currently being finalized by the Data Center Bridging (DCB) Task Group (TG) of the IEEE 802.1 Working Group and are expected by year’s end.

The implementation of CEE on 10GbE enables the deployment of FCoE, which encapsulates Fibre Channel frames in Ethernet frames. The FCoE standard allows storage traffic to flow reliably over 10GbE pipes, enabling a unified network fabric.
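
To make the encapsulation concrete, here is a minimal Python sketch of the FCoE framing described above. It is purely illustrative and not taken from any vendor implementation; the 0x8906 Ethertype is the value assigned to FCoE, while the SOF/EOF delimiter codes and MAC addresses below are placeholders.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE traffic

# SOF/EOF delimiters are carried as single bytes in the FCoE header and
# trailer; the exact code points are defined in FC-BB-5, and the values
# here are placeholders for illustration.
SOF_CODE = 0x2E
EOF_CODE = 0x41

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame (24-byte FC header + payload + FC CRC)
    in a simplified FCoE Ethernet frame:
    Ethernet header | FCoE header (version + SOF) | FC frame | EOF + padding.
    The trailing Ethernet FCS is normally appended by the CNA hardware."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([SOF_CODE])  # version 0 + reserved bits, then SOF
    fcoe_trailer = bytes([EOF_CODE]) + bytes(3)  # EOF, then reserved padding
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A full-size FC frame (2112-byte payload) produces an Ethernet frame of
# roughly 2.2KB, which is why FCoE links must support "baby jumbo" frames.
dummy_fc_frame = bytes(24) + bytes(2112) + bytes(4)  # FC header + payload + CRC
frame = encapsulate_fcoe(bytes(6), bytes(6), dummy_fc_frame)  # placeholder MACs
print(len(frame))  # 2172 bytes before the 4-byte Ethernet FCS
```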

The first step towards implementing a converged or unified network fabric is making the jump from traditional network adapters to converged network adapter (CNA) cards. Using current technologies, servers typically require NICs for Ethernet traffic and HBAs for Fibre Channel storage traffic, each of which requires multiple cables.

Most servers use Gigabit Ethernet copper cabling for connectivity, with as many as 10 cables running from the server to an access switch. There are many reasons for the cabling sprawl. Multi-core processors provide a wealth of processing power and require significant network bandwidth.

Gigabit Ethernet has become a bottleneck in the data center as enterprises roll out large numbers of virtual servers or virtual machines (VMs). In some cases, server administrators are running more than 20 applications per server. This phenomenon requires significant bandwidth to each physical server.

FCoE is an important foundation of the converged, or unified, networking concept, but unified network fabrics are based on lossless 10GbE, which supports not only FCoE but also other storage/networking protocols such as iSCSI, CIFS, and NFS.

Unified fabrics will initially impact the first few meters of the data center network. Servers will use FCoE with CEE at 10Gbps to connect to the first-hop access switch. From there, traffic will diverge to the LAN or to the existing Fibre Channel SAN.

Take FCoE out of the equation and 10GbE alone will significantly reduce the number of cables required to connect servers to switches.
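
As a back-of-the-envelope illustration of that point, the short Python sketch below compares per-rack cable counts before and after convergence, using the per-server figures cited earlier in this article (six Ethernet plus two Fibre Channel connections versus two converged links); the rack density is an assumed number, not from the article.

```python
# Rough cable-count comparison for a rack of servers. The per-server
# figures come from the article (six GbE NIC ports plus two FC HBA ports
# before convergence, two redundant CNA uplinks after); the rack density
# is an assumption for illustration.

SERVERS_PER_RACK = 20              # assumed rack density
LEGACY_CABLES_PER_SERVER = 6 + 2   # six Ethernet + two Fibre Channel
CONVERGED_CABLES_PER_SERVER = 2    # two CNA uplinks to redundant switches

legacy_total = SERVERS_PER_RACK * LEGACY_CABLES_PER_SERVER
converged_total = SERVERS_PER_RACK * CONVERGED_CABLES_PER_SERVER

print(f"Legacy cabling:    {legacy_total} cables per rack")      # 160
print(f"Converged cabling: {converged_total} cables per rack")   # 40
print(f"Reduction:         {100 * (1 - converged_total / legacy_total):.0f}%")  # 75%
```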
