Fabric convergence: Changing the nature of fabric attach

Posted on July 09, 2010

Fabric convergence in the data center is upon us, and is yet another of many recurring IT convergence trends that will alter the IT infrastructure.  Other convergences have gone before, including technologies such as VoIP, video conferencing, and fixed-mobile telephony convergence. 

As with prior convergence trends, fabric (or network) convergence holds tremendous promise, and will similarly alter the architecture of enterprise IT services while delivering new levels of cost effectiveness, flexibility, and capabilities within the infrastructure.  These new capabilities will be a driving force behind adoption, as converged fabrics will address the challenging demands of the consolidated, virtualized data center, for which legacy networks are a mismatch. 

Moreover, history has demonstrated that as convergence takes place, capabilities within the infrastructure are likely to shift and recombine between devices, allowing new innovators to come to market, and in turn a generation of new capabilities will emerge. 

An intersection of benefits and innovation is making it inevitable that yet another cycle of convergence will transform the data center.  Let's take a look at the benefits of convergence, how it looks in action today, and how fabric convergence is setting the stage for innovation within the IT infrastructure -- on top of a single, unified wire.

Convergence benefits
In the simplest terms, convergence is the collapse of previously separate technologies into a single solution.  Today, fabric convergence refers to the merging of Fibre Channel, Ethernet and, in some cases, InfiniBand traffic onto a single network or fabric. 

The most visible benefits will be in a converged fabric's impact on the capital (CAPEX) and operational (OPEX) expenses of the data center in terms of port costs, cable costs, and power and cooling expenditures. 
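The cabling side of that CAPEX/OPEX story lends itself to back-of-the-envelope arithmetic. The Python sketch below compares cables leaving a rack on separate fabrics versus a unified wire; the per-server cable counts and rack size are illustrative assumptions, not vendor figures:

```python
# Illustrative per-server cable counts; these are assumptions, not vendor data.
SEPARATE_FABRICS = {
    "ethernet_data": 2,   # redundant Ethernet NIC ports
    "fibre_channel": 2,   # redundant FC HBA ports
    "management": 1,      # out-of-band management
}
CONVERGED = {"unified_10gbe": 2}  # redundant converged adapter ports

def cables_per_rack(per_server, servers=40):
    """Total cables leaving a rack of `servers` hosts."""
    return servers * sum(per_server.values())

print(cables_per_rack(SEPARATE_FABRICS))  # 200 cables on separate fabrics
print(cables_per_rack(CONVERGED))         # 80 cables on a unified wire
```

Every eliminated cable also removes a port, a transceiver, and a share of the power and cooling budget behind it.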

But comparably significant benefits will come from the alignment of a single network with the capabilities of today's computing and storage platforms – platforms that are high performance, efficient, consolidated, virtualized and dynamically adaptable. Traditional fabrics stand in the way of these characteristics, while a converged fabric can be a foundation that extends infrastructure performance, efficiency, and adaptability. 

Beyond those hard savings, capabilities that align the fabric with today's IT requirements will have an even more resounding impact on the enterprise bottom line by reducing soft costs in areas such as infrastructure design, deployment, change, and management.  Let's take a look at a few of these capabilities:

Efficiency: In resource-limited data centers, converged infrastructures will have distinct advantages in space consumption (and consequently how densely hardware can be racked), power utilization, heat generation, and the impact of cabling on airflow and cooling efficiency. This will fundamentally expand the scalability of the data center, and improve the density and cost profile behind every connection within it.

Simplicity: Running all infrastructure connections over a single unified fabric will simplify how systems are attached to an infrastructure – as well as how the fabric is expanded or scaled – and reduce the planning required for architecting and re-architecting today's networks.

Manageability: Convergence will increase visibility into total system interactions that take place on a single unified wire, as well as the potential to holistically manage all interactions for increased security, optimization and utilization of the infrastructure.

The dynamic fabric: Convergence will transform the infrastructure by virtualizing the topologies, or total connections, between different systems and applications.  Fabric capabilities will no longer be limited by the bandwidth of single wires or the number of physical interfaces.  Instead, virtual networks will run on top of a mesh-like fabric with a multitude of potential paths, aggregating different connections, running over different network paths, and able to be dynamically provisioned, reconfigured, or recovered on the fly.

The fabric makes hosts dynamic: In contrast to traditional fabrics that are restricted in configuration and management by designs that expect to understand physical end points, a converged fabric will virtualize end-point attachments with technologies such as vNICs and vHBAs.  Irrespective of whether applications and servers are themselves virtualized or not, this will enable a new generation of system provisioning, adaptation and mobility.
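As a rough illustration of that last idea, the following Python sketch models a server "profile" whose fabric-facing identities (a vNIC MAC and a vHBA WWPN) travel between physical hosts, so the network and storage zoning see the same endpoint regardless of hardware. The class names and addresses are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Hedged sketch of vNIC/vHBA endpoint virtualization: the fabric identities
# live in a movable profile, not in the physical adapter hardware.
@dataclass
class Profile:
    name: str
    vnic_mac: str      # hypothetical virtual NIC MAC address
    vhba_wwpn: str     # hypothetical virtual HBA worldwide port name

@dataclass
class PhysicalHost:
    name: str
    profile: Optional[Profile] = None

def move_profile(profile: Profile, src: PhysicalHost, dst: PhysicalHost) -> None:
    """Re-home a server's fabric identity; LAN and zoning config follow it."""
    src.profile, dst.profile = None, profile

web = Profile("web01", vnic_mac="02:00:00:00:00:01",
              vhba_wwpn="50:00:00:00:00:00:00:01")
blade_a = PhysicalHost("blade-a", web)
blade_b = PhysicalHost("blade-b")
move_profile(web, blade_a, blade_b)
print(blade_b.profile.name)  # the same server identity, now on blade-b
```

This is the mechanism behind the provisioning and mobility the text describes: the endpoint the fabric knows is no longer welded to a physical port.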

Ready for prime time
Historically, the first generations of converged offerings (VoIP telephony systems, for instance) have been accompanied by some measure of compromise, as emerging solutions vie to match stability and features with the previously separate, well-honed, and entrenched technologies.

But the march of converged data center fabrics into mainstream enterprises could be different. This time, the converged solution has already been tried by fire, in a market adjacent to mainstream IT: supercomputing. 

Supercomputing has for years required high performance, low latency, dense connectivity to match dense clusters, and incredibly high transaction rates. Converged fabric technology was hatched in those markets more than 10 years ago.  As InfiniBand was being hardened by well over a decade of use and innovation, the lessons learned have contributed to the underpinnings of a more familiar technology.

Today, converged fabrics for the enterprise are built on 10Gbps Ethernet (10GbE), infused with the best single-fabric, multi-pathing, and low-latency innovations, and already reflect maturity and feature sophistication.  In turn, today's biggest inconsistencies are more often in conventions such as naming, where a plethora of terms still floats around.

The Taneja Group uses the term Data Center Bridging (DCB) to refer to this next-generation converged fabric.  We take the liberty of assuming that general use of the term also covers the full range of converged fabric technologies: proprietary as well as standardized features that will enable flatter, faster, and more scalable unified-wire infrastructures, along with converged fabrics based on TCP offload and the protocols mapped on top of TCP (iSCSI for storage and iWARP for clustering).  DCB networks merge both data and storage onto a single unified wire and, through the use of virtualized networks running over the many paths of a mesh-like fabric, can emulate and interoperate with traditional fabrics while improving on their capabilities.

[For more details, see the sidebar at the end of this article: Converging over obstacles]

Convergence in practice: the virtualized fabric
Irrespective of these underlying enabling technologies, harnessing the power of many connection paths will rest in virtualization and the management of virtual networks.  With virtual topologies in tow, nearly unlimited numbers and types of connections are easy to provision, the fabric can be changed more easily, and the full bandwidth of the fabric across all paths can be utilized. 

Moreover, such flexibility enables organizations to easily implement converged fabrics that look and feel like traditional fabrics, maintaining current responsibilities (in the respective network and storage domains) and allowing localized use of converged fabrics with transparent interconnection to traditional networks. 

Attachment to traditional networks can seamlessly happen through gateway devices, and will initially drive converged fabrics forward for top-of-rack switching or specific pockets of infrastructure where performance, efficiency, or increased versatility is desired.  In many cases, localized use will prove a catalyst that drives increasingly widespread, faster adoption of converged fabrics.  Systems such as Cisco's UCS servers or the UCS-based vBlock from the VCE coalition, for example, are carrying such implementations of converged fabrics to market, because their use of a converged fabric is pivotal in delivering best-in-class metrics in density, performance, management, power utilization, availability, and more.

In the future, as DCB takes hold for larger parts of the infrastructure, administrators will ultimately find they can go beyond mimicking traditional fabrics, and harness the flexibility of DCB to carry traditional traffic, but carry it across an end-to-end converged network with more flexibility, bandwidth, visibility, and management.

Performance differentiated: ASICs
As with other convergence trends, today's fabric convergence once again has set the stage for shifting capabilities within the systems attached to the enterprise infrastructure.  One example of this is the interface behind host attach. 

Vendor terminology varies for these adapters, with the prevalent term being Converged Network Adapter (CNA).  Yet CNAs are generally understood to converge only Fibre Channel and Ethernet, often without the upper-layer processing capabilities that offload adapters may include; those capabilities can support additional protocols, such as iWARP RDMA, and enable the next generation of clustering, scale-out, and high-performance computing. 

Some vendors use the term Unified Wire Adapter (UWA). With these adapters, features are emerging that will not only accelerate the performance of DCB fabrics, but may also more tightly integrate the fabric with hosts to allow the hosts to seamlessly leverage fabric capabilities.

Adapter vendors have long differentiated their products with their ASIC technology.  Yet for the past few years, the technology has seldom mattered to adapter customers.  In a multi-fabric environment where limitations were introduced by Gigabit Ethernet speeds and network architectures, adapter performance was rarely a strategic consideration. 

Today, the intersection of a converged fabric, increasing use of networked storage, and increasingly higher performance hardware and storage systems will make ASIC technology a significant factor.  In a converged fabric world, performance and low latency can be significantly augmented by adapter technology.

Yet even with performance top of mind, there is another dimension that may outweigh pure performance differentiation.  In a converging fabric where features and capabilities may shift between devices, adapter vendors with innovations at the ASIC level may open up opportunities to optimize and extend a converged fabric infrastructure with unique features embedded in the adapter. 

Host-integrated fabrics
ASIC innovation is paying off today in how deeply vendors are able to integrate adapters and host systems, which further leverages the performance advantages of ASIC architectures.  In one dimension, ASIC vendors can perform CPU offload (allowing system hardware to reach higher levels of consolidation and virtualization).  But just as importantly, some ASIC vendors are demonstrating latency advantages that will determine the limits to which any protocol can be scaled within a converged fabric. 

In one respect, ASIC architectures reach beyond traditional offload to deliver a range of offload services, including full as well as partial offload, extending across many protocols (iSCSI, FCoE, TCP, UDP, iWARP, etc.).  Such support can accelerate heterogeneous infrastructures of any mix, encompassing the diverse systems supported by ubiquitous Ethernet today.  This is important, as hosts vary widely in how they integrate with adapters attached to high performance fabrics.  For example, VMware's ESX requires full offload for iSCSI hardware acceleration, while partial offload is more popular in many Windows implementations. 
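That per-host matching can be pictured as a simple capability lookup. The Python sketch below encodes only the examples from the text (ESX wanting full iSCSI offload, partial offload common on Windows); the Linux row and the function name are assumptions for illustration:

```python
# Hypothetical mapping of host stacks to the offload mode they integrate with.
# Only the ESX and Windows entries come from the text; the rest is assumed.
OFFLOAD_SUPPORT = {
    "vmware_esx": {"iscsi": "full"},
    "windows":    {"iscsi": "partial", "tcp": "partial"},
    "linux":      {"iscsi": "full", "tcp": "full", "iwarp": "full"},
}

def negotiate_offload(host_os, protocol):
    """Pick the offload mode the adapter should expose for this host;
    fall back to the host's software stack when nothing matches."""
    return OFFLOAD_SUPPORT.get(host_os, {}).get(protocol, "software")

print(negotiate_offload("vmware_esx", "iscsi"))  # full
print(negotiate_offload("windows", "iscsi"))     # partial
print(negotiate_offload("windows", "fcoe"))      # software fallback
```

An adapter that can answer "full" or "partial" per protocol, per host, is what lets one ASIC accelerate a heterogeneous mix.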

ASIC acceleration may determine total infrastructure performance.  As an example, even though unified wire fabrics will be multi-protocol, IP is here to stay.  Optimizing IP can avoid latency, and can make IP performance scale better. 

Some ASIC vendors accelerate protocols such as iWARP – a protocol that performs direct memory placement of data over IP networks, and can carry low-latency NFS, iSCSI, and Sockets.  In some approaches, iWARP not only reduces latencies into the microsecond range, but can scale to hundreds of systems at lower latencies than some of the approaches used by InfiniBand champions of RDMA-supported protocols.  Such low latency at scale may determine how large transactional processing, clusters, or other systems can grow before having to be distributed, federated, or otherwise broken apart.

Virtualization integrated
Moreover, as an extension of host integration, there is differentiation in how vendors integrate their adapter technology into the virtual server infrastructure running within physical servers.  Today, this includes the use of offload technology by hypervisors, virtual guest access to either the physical or a virtualized instance of the adapter, and even localized network switching between virtual guests directly using the adapter. 

With sophisticated ASIC technology, some vendors have been able to aggregate multiple ports of bandwidth on an adapter card and provide versatility in how that bandwidth is accessed and shared.  Some adapters can be shared among multiple virtual guests as a direct physical PCIe resource or as many virtual HBAs and NICs.  Moreover, while direct physical access may provide performance and latency advantages to virtual guests, few adapters can efficiently share aggregated port bandwidth, and direct access can mean different VMs on the same physical hypervisor must send their traffic out to the network in order to communicate with each other. 

High-performance adapters can both share the physical adapter for direct access (including aggregated bandwidth across multiple ports), and virtualize network switching functions to handle VM-to-VM communications within the adapter.  Some architectures leverage these mechanisms to provide granular bandwidth control and management within the hypervisor, reduce utilization of the external network, and increase the number of VMs able to run on each hypervisor.
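The adapter-embedded switching described above comes down to one forwarding decision: frames between guests on the same host stay inside the adapter, everything else exits to the fabric. A minimal Python sketch, with hypothetical VM and port names:

```python
# Sketch of adapter-embedded VM-to-VM switching; names are illustrative.
local_vms = {"vm1": "port1", "vm2": "port2"}  # guests on this hypervisor

def forward(src_vm, dst_vm):
    """Decide where the adapter sends a frame from src_vm to dst_vm."""
    if dst_vm in local_vms:
        return f"internal:{local_vms[dst_vm]}"  # hairpinned inside the adapter
    return "uplink"                             # out to the converged fabric

print(forward("vm1", "vm2"))        # internal:port2 - never leaves the host
print(forward("vm1", "remote-vm"))  # uplink - crosses the external network
```

Keeping local traffic inside the adapter is exactly what reduces external network utilization and frees headroom for more VMs per hypervisor.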

Efficient use of the fabric will determine the density of your infrastructure, and therein lies the key to unlocking cost advantages.  In a converged fabric, density will not just mean how much traffic can be handled by single connections.  Fewer connections will also mean less network infrastructure equipment.  Moreover, more efficient connection utilization will mean more applications or virtual servers on each piece of server hardware, especially as the footprint of that hardware shrinks.  Even small percentage increases in density may have huge dollar impacts on total cost of acquisition and ownership – density ultimately influences not just hardware, but the more expensive dimension of software licensing, as well as the management effort behind the consolidated, virtual infrastructure.
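The density argument lends itself to simple arithmetic. The sketch below, with entirely illustrative cost figures, shows how even a 10% density gain ripples through hardware, licensing, and management costs:

```python
# Back-of-the-envelope density math; all dollar figures are assumptions.
def hosts_needed(total_vms, vms_per_host):
    return -(-total_vms // vms_per_host)  # ceiling division

def annual_cost(hosts, hw=8000, license_per_host=3000, ops_per_host=1500):
    """Per-host hardware, software licensing, and management cost."""
    return hosts * (hw + license_per_host + ops_per_host)

before = hosts_needed(1000, 20)  # 50 hosts at 20 VMs each
after = hosts_needed(1000, 22)   # a 10% density gain: 46 hosts
saving = annual_cost(before) - annual_cost(after)
print(f"{before} -> {after} hosts, saving ${saving:,} per year")
```

Note that the per-host licensing and management terms, not the hardware, dominate the savings as fleets grow.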

The list of adapter vendors in the converged fabric space is longer than ever before, including Broadcom, Chelsio, Emulex, Intel, Mellanox, QLogic, and others.  Yet their technical approaches differ greatly.  Some vendors claim differentiation in their pipelined architectures and ASIC technology, others leverage features such as Virtual Fabrics, and so on.  Differences between the vendors show up today in early design wins, and how widely their IP is being licensed by other vendors.

We see the adoption of converged fabrics ramping rapidly. Convergence features are now integrated in a variety of products, and convergence is on the upgrade path of most Ethernet infrastructures. While the course of convergence adoption will vary by IT organization, the drivers are in place and the march forward is inevitable.   

Jeff Boles is a senior analyst with the Taneja Group research and consulting firm. Prior to joining the Taneja Group, Jeff was director of an infrastructure and application consulting practice at CIBER and, more recently, an IT manager with a special focus on storage management at the City of Mesa, Ariz.

SIDEBAR:

Converging over obstacles
The fundamental underpinnings of a converged fabric make it distinctly different than Fibre Channel or Ethernet fabrics. Data Center Bridging (DCB) merges both data and storage traffic onto a single unified wire.  But for either type of traffic, DCB will give its single fabric far more capabilities than either of its predecessors. 

At a high level, the extended capabilities are the result of efforts to address a few major shortcomings of traditional Ethernet.   Those efforts have in turn blended the best features of Ethernet with the best features of high performance fabrics.

First, today's use of Ethernet is almost inseparable from the IP protocol that runs over it, with nearly every traditional Ethernet communication being carried across the wire on top of IP addressing.  Yet sending and receiving IP has traditionally involved host operating system processing, and such processing can increase latency.  The way the operating system handles IP may add 10 or more microseconds to iSCSI transactions compared to similar Fibre Channel transactions.

DCB developers specifically set out to run networks on the very basic layers of Ethernet in order to reduce the processing and latency associated with IP and upper level protocols.  Much like existing high performance fabrics, the resulting 10 Gigabit Ethernet fabric transports any type of traffic in efficient, well-controlled packaging, effectively emulating Fibre Channel, traditional Ethernet, or even InfiniBand-style communications over an Ethernet cable.

Second, traditional Ethernet was built on extremely limiting mechanisms for path selection and the avoidance of go-nowhere loops that could be disruptive.  Such mechanisms limited the full use of all network connections, created bottlenecks in an infrastructure, and lacked the rapid disruption recovery that many types of communication require.

DCB enables sophisticated, simultaneous, low latency connections between many different Ethernet devices, without the overhead of IP processing (at Layer 2, instead of Layer 3).  Much like existing high performance fabrics, DCB infrastructures can connect multiple devices together into what is essentially a fully “meshed” fabric with many possible physical paths.  On top of this mesh, DCB mechanisms can create many different virtual paths (networks), allowing many different virtual topologies to exist simultaneously on one physical fabric.  In turn, a topology could be created to carry FCoE that looks and behaves like a multiple path Fibre Channel fabric, while at the same time another topology is created that looks and behaves like a traditional Ethernet network.  Many different mechanisms can be found behind these capabilities today, varying in sophistication and complexity (ranging from Virtual Port Channels and 802.1aq to a shift toward link-state routing protocols, TRILL, and more). 
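The gap between classic loop avoidance and a multipath mesh can be quantified with simple counting. This sketch assumes an idealized full mesh of N switches; classic spanning tree prunes the topology to a single loop-free tree, while DCB-style multipath mechanisms can keep every link forwarding:

```python
# Idealized full mesh of n switches: every switch wired to every other.
def mesh_links(n):
    return n * (n - 1) // 2

# Classic spanning tree keeps only a loop-free tree: n - 1 forwarding links.
def spanning_tree_links(n):
    return n - 1

n = 8
blocked = mesh_links(n) - spanning_tree_links(n)
print(f"{spanning_tree_links(n)} of {mesh_links(n)} links forwarding "
      f"under spanning tree; {blocked} links blocked")
```

In an eight-switch mesh, 21 of 28 links sit idle under spanning tree; that stranded bandwidth is what the virtual-topology mechanisms above reclaim.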

Moreover, the reality is that IP isn't itself slow, but how the host operating system processes it is.  Some offload adapter vendors claim their architectures can accelerate traffic such as iSCSI down to latencies that outperform even InfiniBand.  The same remains true in the converged infrastructure, and the value proposition can be even greater.  The fabric will have more capabilities, and adapter vendors can make that more mesh-like, self-optimizing fabric even more versatile by enabling IP to scale in lock-step with the fabric's performance.


 

