The continued evolution of the data center, while increasing capacity and functionality, also brings challenges in space requirements, power consumption, cooling and administrative costs. Virtualization is currently the leading architectural choice for IT network/storage managers addressing these challenges, whether in enterprise consolidation projects or in building private or public cloud service platforms. Meanwhile, advanced network storage capabilities such as disaster recovery, data deduplication, high availability and improved storage resource utilization are being implemented within virtualized environments.
Virtualization And FCoE
Server virtualization continues to offer data center TCO benefits while improving agility in terms of application and storage availability. Virtualization was enabled in part by the prevalence of high-performance Fibre Channel SANs in Fortune 1000 data centers, and Fibre Channel has recently been enhanced to support virtualization in the data center even further. This includes the development of the Fibre Channel over Ethernet (FCoE) protocol, which transparently leverages Fibre Channel's installed base of SAN upper-layer management.
Implementing FCoE with virtualization is one way to extract value from the virtualization framework, and can improve data center cost efficiency through I/O convergence. However, the demand for storage continues to be relentless, and is growing even faster as the occupancy rate and footprint of virtual machines (VMs) increase. This is causing available bandwidth to fill up fast, and is fueling the rapid migration from 4Gbps to 8Gbps Fibre Channel and from 1Gbps Ethernet to 10Gbps Ethernet.
Servers are hosting more and more VMs; many multi-core servers now run as many as 16. A February 2010 Dell'Oro Group report forecasts that the number of VMs will double between 2010 and 2012, to 16 million. This spectacular VM growth will continue to drive demand for ever-higher LAN and SAN I/O bandwidth, which in turn creates the headroom for still more VM growth.
Consider that hypervisor vendor best practices for network I/O configuration recommend separate, redundant physical 1Gbps ports for each workload on a virtualized platform. This requirement leads to cable and server adapter (HBA and NIC) sprawl. As data centers incorporate higher I/O bandwidth, such as lossless 10GbE Data Center Bridging (DCB) technology, however, virtualized servers can achieve better cost-per-gigabit economics by maximizing I/O efficiency through SAN and LAN convergence over 10Gbps FCoE.
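A back-of-the-envelope sketch makes the economics visible. The workload count and per-port figures below are illustrative assumptions, not vendor data:

```python
# Back-of-the-envelope comparison: dedicated 1Gbps ports per workload
# versus one redundant pair of 10GbE CNA ports. The workload count and
# per-port figures are illustrative assumptions.

workloads = 8                                # traffic types needing isolation
legacy_ports = workloads * 2                 # separate, redundant 1Gbps ports
legacy_bandwidth = legacy_ports * 1          # Gbps aggregate
converged_ports = 2                          # one redundant 10GbE CNA pair
converged_bandwidth = converged_ports * 10   # Gbps aggregate

print(f"Legacy:    {legacy_ports} ports and cables, {legacy_bandwidth} Gbps")
print(f"Converged: {converged_ports} ports and cables, {converged_bandwidth} Gbps")
# Legacy:    16 ports and cables, 16 Gbps
# Converged: 2 ports and cables, 20 Gbps
```

Fewer adapters and cables carrying more aggregate bandwidth is precisely the cost-per-gigabit argument for convergence.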
Figure 1 illustrates common I/O consolidation progressions and technology vectors driving network-based storage with virtualization.
Figure 1 – Virtualization vectors driving a 10GbE unified fabric
Deployment of 10Gbps FCoE networks in converged I/O environments will rely on a new type of server adapter, the converged network adapter (CNA), which functions as both a Fibre Channel HBA and an Ethernet NIC and is capable of handling both storage I/O and LAN networking traffic.
CNAs with lossless 10GbE provide another option in the deployment of next-generation data centers because of the potential efficiency gains from a unified fabric using FCoE-based networks.
Reliability and availability are key attributes of a SAN, and FCoE supports N_Port ID Virtualization (NPIV), which gives each VM its own virtual port identity. With NPIV, FCoE zoning configurations and all virtual port properties (WWNs, etc.) migrate with a VM to its new host. Combined with a software-based abstraction layer, FCoE presents storage as centrally managed logical pools. The virtualized FCoE network storage architecture thus enables VMs to migrate across the SAN, providing additional availability and redundancy.
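A minimal sketch of the NPIV idea may make this concrete: each VM owns a virtual WWPN, so zoning keyed on that WWPN follows the VM between hosts. All host names and WWPN values here are hypothetical.

```python
# NPIV concept: a VM carries its own virtual WWPN, so SAN zoning and
# LUN masking keyed on that WWPN follow the VM when it migrates.
# Host names and WWPN values are hypothetical examples.

class PhysicalPort:
    """One physical CNA port hosting many virtual N_Ports via NPIV."""
    def __init__(self, host: str, wwpn: str):
        self.host, self.wwpn = host, wwpn
        self.virtual_ports = {}              # vm name -> virtual WWPN

def migrate(vm: str, src: PhysicalPort, dst: PhysicalPort) -> str:
    """Move a VM's virtual WWPN; its zoning moves with it, untouched."""
    vwwpn = src.virtual_ports.pop(vm)
    dst.virtual_ports[vm] = vwwpn
    return vwwpn

host_a = PhysicalPort("host-a", "50:00:00:00:aa:00:00:01")
host_b = PhysicalPort("host-b", "50:00:00:00:bb:00:00:01")
host_a.virtual_ports["vm-sql"] = "c0:50:76:00:00:00:00:01"

vwwpn = migrate("vm-sql", host_a, host_b)
print(f"vm-sql now logs in from {host_b.host} with the same WWPN {vwwpn}")
```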
FCoE And DCB
FCoE and lossless (DCB) Ethernet are the key technologies enabling LAN/SAN I/O convergence onto a shared I/O transport. FCoE provides seamless integration with existing Fibre Channel SANs and maintains enterprise-class availability. Data center managers can realize the maximum benefits by moving lower-speed LAN and SAN traffic to the new lossless 10GbE transport.
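That seamless integration follows from FCoE simply wrapping a complete, unmodified Fibre Channel frame inside an Ethernet frame. The sketch below builds the encapsulation in simplified form (the Ethernet FCS, VLAN tagging, and the baby-jumbo MTU that a full-size FC payload requires are omitted):

```python
import struct

FCOE_ETHERTYPE = 0x8906        # FCoE EtherType; FIP uses 0x8914
SOF_N3, EOF_N = 0x36, 0x41     # example start/end-of-frame delimiters

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an FCoE Ethernet frame.

    Simplified layout: Ethernet header, a 14-byte FCoE header whose
    final byte is the SOF delimiter, the FC frame carried untouched,
    then an EOF byte plus reserved padding. Because the FC frame is
    not modified, the installed base of Fibre Channel SAN management
    continues to work over the Ethernet transport.
    """
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([SOF_N3])   # version/reserved + SOF
    trailer = bytes([EOF_N]) + bytes(3)      # EOF + reserved bytes
    return eth_hdr + fcoe_hdr + fc_frame + trailer

# Usage with placeholder MACs and a dummy 36-byte FC frame:
frame = encapsulate(b"\x0e\xfc\x00\x00\x00\x01", b"\x0e\xfc\x00\x00\x00\x02",
                    fc_frame=b"\x00" * 36)
print(len(frame))              # 14 + 14 + 36 + 4 = 68 bytes
```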
The FCoE lossless transport needs to be immune to noise and delay. Consider that a 1E-12 BER translates to one bit error every 100 seconds at a 10Gbps data rate. Since a single bit error can cause an entire coding block (3,250 data bits) to be lost, frames can be dropped and multiple I/Os affected. Fibre Channel uses a credit-based, link-level flow control mechanism that prevents frame loss, and it has proven superior to the traditional Ethernet model, in which congested switches drop frames and TCP recovers them through retransmission. That drop-and-retransmit behavior makes conventional Ethernet poorly suited to transporting storage traffic. To eliminate congestion drops, lossless Ethernet uses Priority-based Flow Control (PFC) as part of DCB. As a result, FCoE flow control is expected to provide a significant improvement over TCP-based recovery.
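The error-rate arithmetic above is worth making explicit; a short worked example:

```python
# At 10 Gbps, a bit error rate of 1e-12 means one errored bit per
# 1e12 bits, i.e. one error roughly every 100 seconds of traffic.

line_rate = 10e9                 # bits per second (10GbE)
ber = 1e-12                      # errored bits per bit transmitted

seconds_per_error = 1 / (ber * line_rate)
print(f"Mean time between bit errors: {seconds_per_error:.0f} s")  # 100 s
```

On a converged link carrying storage, each such error can cost a whole frame and stall multiple outstanding I/Os, which is why physical-layer quality matters more here than it does for best-effort LAN traffic.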
As rack and row infrastructure consolidation emerges, FCoE CNAs will be deployed alongside Fibre Channel HBAs and traditional Ethernet NICs. Layer-2 multi-path capabilities will still be able to take advantage of the installed Fibre Channel and 10GbE cabling.
Validating FCoE And DCB
Testing and validating the Ethernet infrastructure is a key step to ensure that noise, skew and crosstalk will not be a problem. The main focus of network convergence is to access Fibre Channel-based SANs through Ethernet links while integrating host adapter functions to reduce the number of network components required for LAN, SAN and HPC applications. The role of the FCoE and DCB protocols is to enable a unified fabric that effectively and reliably carries Fibre Channel SAN traffic over Ethernet. This means that to ensure enterprise-class performance, network operators need to take a storage-centric approach, rather than a LAN-centric approach, to the testing and verification of FCoE and the unified fabric.
For example, LAN testing focuses only on the switch and network. SAN testing, in contrast, requires network-to-end-device verification across all hosts, fabrics and targets contributing to the overall performance of the network. Given that the LAN fabric is based on best-effort delivery, while the SAN enforces end-to-end traffic control under Fibre Channel's zero-frame-loss protocol, LAN QoS testing does not carry over to storage networks. SAN testing measures the flow management of Fibre Channel links through mechanisms such as buffer-to-buffer credit management from the host through the fabric to storage. And since SAN testing must verify the delivery of every single frame, it requires an entirely different set of performance and latency measurements than LAN testing.
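Buffer-to-buffer credit management, the mechanism those tests exercise, can be modeled in a few lines. This is a toy link model, not a test tool:

```python
# Toy model of Fibre Channel buffer-to-buffer credit flow control:
# a port may transmit only while it holds credits, and each credit
# returns (via R_RDY) when the receiver frees a buffer. SAN testing
# verifies this never-overrun property from host to fabric to target.

class B2BLink:
    def __init__(self, credits: int):
        self.credits = credits        # receive buffers advertised at login

    def send_frame(self) -> bool:
        """Transmit only if a credit is available; otherwise wait."""
        if self.credits == 0:
            return False              # no receiver buffer: frame must wait
        self.credits -= 1
        return True

    def r_rdy(self) -> None:
        """Receiver freed a buffer and returned one credit."""
        self.credits += 1

link = B2BLink(credits=2)
print([link.send_frame() for _ in range(3)])   # [True, True, False]
link.r_rdy()
print(link.send_frame())                       # True again
```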
The following should be key focal points for validation:
— Protocol compliance
— Functional verification
— Performance and benchmarking tests against service level agreements (SLAs)
— Converged networks with simultaneous workloads
— Seamless integration with existing infrastructures
— Congestion management (PFC/ETS behavior; see the sketch after this list)
— Unified management tools monitoring station-to-station activities
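For the congestion management item, ETS (IEEE 802.1Qaz) divides the converged link's bandwidth among traffic class groups, and validation confirms that measured shares honor the configured guarantees under load. A minimal sketch, with group names and percentages assumed purely for illustration:

```python
# Illustrative ETS (802.1Qaz) bandwidth-allocation check: each traffic
# class group is guaranteed a share of the 10GbE link under congestion.
# Group names and shares are assumptions for the example.

link_gbps = 10
ets_config = {"FCoE": 50, "LAN": 30, "live-migration": 20}   # percent

assert sum(ets_config.values()) == 100, "ETS shares must total 100%"

for group, share in ets_config.items():
    guaranteed = link_gbps * share / 100
    print(f"{group:>15}: guaranteed {guaranteed:.1f} Gbps under congestion")
```

Bandwidth unused by one group remains available to the others, so these figures are congestion-time minimums rather than hard caps.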
It is important not to confuse wire speed in lab tests with overall system performance. While pure protocol speeds are important, they represent only ideal operating conditions. The overall performance of the deployed infrastructure and applications is the real measure to take into account when building new data centers or expanding existing ones.
The critical differences between a SAN and an Ethernet LAN lie in the SAN's tight link-level flow control, link service management, security algorithms, and associated proprietary implementations. Among all of these validations, interoperability testing is critical, from the introduction of a new protocol all the way through its mass deployment.
The Fibre Channel Industry Association (FCIA) continues to host plugfest events to validate interoperability between various network components. (The 4th FCIA FCoE Plugfest will take place in June at the University of New Hampshire.) Testing has focused on protocol compliance, verifying smooth integration with Fibre Channel SANs and Ethernet, confirming lossless transport, and verifying convergence functionality. The results from these plugfests have helped expedite interoperability between vendors and facilitate adoption of FCoE and unified fabric technologies in the end-user community.
Conclusion
Server virtualization, the opportunity to consolidate LAN and SAN traffic, and the increased requirement for application migration will continue to drive the need for increased I/O bandwidth within the data center. Server I/O consolidation based on FCoE and DCB lossless Ethernet is one step to achieve maximum I/O efficiency with reliable I/O consolidation. With potential reductions in capital and operating expenditures that result from efficient I/O bandwidth management, IT managers will have the option of introducing FCoE in data centers alongside existing Fibre Channel environments.
The nature of FCoE as an encapsulation protocol will guide its deployment in tier 3 and some tier 2 environments, leaving Fibre Channel as the primary storage technology for tier 1 applications in the foreseeable future.
This article was contributed by the Fibre Channel Industry Association (FCIA); all of the authors are FCIA members. Sunil Ahluwalia is a product line manager in Intel's LAN Access Division, David Barry is a senior marketing manager at Broadcom, Joy Jiang is a product manager at JDSU, and Ahmad Zamer is a senior product marketing manager with Brocade.