The transition to 10Gbps Ethernet will enable IT organizations to realize the full potential of virtualization.
This article focuses on a strategic approach to data-center server networking based on 10 Gigabit Ethernet (10GbE) that promises investment protection, increased efficiencies, and enhanced business agility. Enterprises that invest in such a networking foundation today will be prepared to meet future customer needs while taking advantage of ongoing industry breakthroughs.

Since the late 1990s, enterprises have watched their data centers expand to include hundreds, or even thousands, of systems running diverse operating systems and applications. As the number of servers has grown, so has the cost of operations, which includes people, space, power, and cooling.

In response, IT organizations have increasingly turned to server consolidation and virtualization technologies to turn data-center resources from monolithic systems into a “service-centric” shared pool of resources, consisting of standardized components that can be dynamically aggregated, tiered, provisioned, and accessed through an intelligent network.

The pooling, consolidation, and virtualization of standardized server resources dramatically increase performance levels, reduce total cost of ownership (TCO), and allow IT organizations to rapidly deploy and scale resources on-demand to match business and application requirements.

The evolving virtualization and automation of data-center resources call for a highly scalable, resilient, and secure data-center network foundation based on 10GbE, which can protect application and data integrity, while optimizing application availability and performance and enabling responsiveness to constantly changing market conditions, business priorities, and technology advances.

Virtualization may involve running up to 64 virtual machines on a single physical platform, with each virtual machine requiring network I/O resources for client/server, cluster, and block/file storage communications. The efficiency, capacity, and availability of the 10GbE-based server networking infrastructure directly determine whether data centers can realize the promise of virtualization: smoothing workload demands for higher system utilization, providing server redundancy, and creating and assigning virtual machines to meet changing application demands.

Beyond enabling TCO-driven data-center infrastructure initiatives, a range of data-center applications stand to gain much from the higher bandwidth of a 10GbE networking foundation. Such applications typically generate extremely large files and rely on fast transfers of those files between various work groups and one or more centralized databases. Examples include weather modeling, motion picture production, and CAD, as well as storage and retrieval of large data sets in support of Sarbanes-Oxley Act and HIPAA compliance requirements.

Scalable server I/O
As businesses expand, so does their IT infrastructure to meet increasing end-user demands. Historically, such infrastructure expansion has resulted in a proliferation of far-flung servers because businesses traditionally have added a new server each time they added a new project or application.

This one-application-per-server approach to growth ultimately resulted in enterprise IT typically managing hundreds to thousands of “scale-out” servers. Such server proliferation resulted in low utilization of server resources (as low as 10%) and increased management complexity. Server consolidation provides significant savings in equipment and system administration costs by allowing enterprises to consolidate existing scale-out servers into fewer multi-processor “scale-up” servers.

Server consolidation can significantly reduce management costs while increasing performance. This works particularly well in a three-tiered environment with separate Web servers, application servers, and database servers because each tier can be physically consolidated and scaled independently to match specific workload requirements. Blade servers take this consolidation one step further by combining multiple servers into a single chassis. An integrated network switch connects the blades and provides the connections to the data center. In virtually all cases, realizing the full performance potential of server consolidation requires a high-performance, highly reliable data-center network infrastructure such as the one provided by 10GbE server networking. Specifically, it is easy to oversubscribe blade server GbE connections because there are possibly dozens of CPUs generating network traffic within a blade server chassis. The use of 10GbE networking for intra- and inter-chassis connectivity allows blade systems to scale to larger numbers of processors without network connections ever becoming a major bottleneck.
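The oversubscription risk described above is easy to quantify with back-of-the-envelope arithmetic. The sketch below is illustrative only; the blade counts and link speeds are assumptions, not figures from any specific product.

```python
# Back-of-the-envelope oversubscription check for a blade chassis.
# All figures are illustrative assumptions.

def oversubscription_ratio(blades: int, gbps_per_blade: float,
                           uplink_gbps: float) -> float:
    """Ratio of potential aggregate blade traffic to uplink capacity."""
    return (blades * gbps_per_blade) / uplink_gbps

# 14 blades, each with a 1GbE link, sharing a single 1GbE uplink:
print(oversubscription_ratio(14, 1.0, 1.0))   # 14.0 (14:1 oversubscribed)

# The same chassis with a 10GbE uplink:
print(oversubscription_ratio(14, 1.0, 10.0))  # 1.4 (near line rate)
```

Any ratio well above 1.0 means the chassis uplink can become the bottleneck whenever several blades transmit at once, which is why a 10GbE uplink changes the picture so dramatically.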

Application consolidation, or server virtualization, involves hosting multiple, diverse applications on a single server platform. This is done with virtualization software, such as EMC’s VMware and the open-source Xen, which partitions the platform into multiple virtual machines running concurrently. For example, VMware ESX Server virtualization software supports up to 64 concurrent virtual machines, with half the available partitions “live” at any given time. The other 32 virtual machines remain available on a fail-over standby basis in case a hardware or software problem occurs on any of the live virtual machines.

Again, high-performance server connectivity is an essential key to making server virtualization work. When as many as 32 virtual machines are running on a single high-performance platform, existing GbE networking becomes the bottleneck. However, 10GbE networking allows such throughput requirements to be easily and cost-effectively met while conserving server PCI slots. In addition, multi-port 10GbE link aggregation and fault tolerance provide the connectivity resources necessary to support the higher performance, agility, and reliability levels achieved through server consolidation and virtualization.
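A quick calculation shows why GbE becomes the bottleneck at these VM densities. The per-VM figures below assume traffic is shared evenly across virtual machines, which is a simplification for illustration.

```python
# Illustrative per-VM bandwidth when many virtual machines share one
# physical uplink. The VM count follows the figures quoted above;
# even sharing is an assumption made for clarity.

def per_vm_mbps(link_gbps: float, vms: int) -> float:
    """Average bandwidth per VM, in Mbps, for an evenly shared link."""
    return link_gbps * 1000 / vms

print(per_vm_mbps(1, 32))   # 31.25 Mbps per VM on a single GbE link
print(per_vm_mbps(10, 32))  # 312.5 Mbps per VM on a 10GbE link
```

Roughly 31Mbps per virtual machine is marginal for many server workloads; a tenfold increase in link capacity restores per-VM headroom without consuming additional PCI slots.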

Multi-core processors
The emerging migration to 64-bit multi-core processors complements server virtualization because multi-core processors offer the increased performance needed to offset the overhead of sharing workloads at the processor level. The multi-core architecture offers a further advantage when building virtual machines: virtual machines hosted on multiple cores within a single processor can share memory at very high performance.

Multi-core processors also benefit virtualization security and power requirements. Multi-core processors can run more-sophisticated virus-, spam-, and hacker-protection applications in the background without performance penalties, while segregating untrusted applications from trusted ones. And, perhaps most importantly, multi-core processors deliver increased performance without increased power or physical space requirements.

Frequently, however, the use of multi-core processors demands a compromise because GbE-based I/O capacity does not match the increased CPU power of each server. The result is that multiple cores per system deliver less-than-optimal performance scalability, especially for I/O-bound applications such as decision support systems (DSS), video-over-IP, and high-performance cluster computing (HPCC).

The proliferation of virtualization and 64-bit multi-core computing now demands a high-bandwidth networking infrastructure based on 10GbE that unifies client/server, cluster, and storage communications in the data center. Such a 10GbE infrastructure can draw on existing standards to create a flexible, scalable, and reliable I/O architecture capable of interoperating with any server technology.

Policy-based networking
VMware and Xen virtualization software are evolving to enable data-center automation, which is the next phase in the movement toward a truly flexible data center.

VMware VMotion and Xen Live Relocation features allow application-ready software blocks to be
■ Moved seamlessly between physical and virtual computing resources;
■ Provisioned on one or more systems dynamically;
■ Autonomously updated and patched according to user-definable compliance and security policies; and
■ Scheduled, executed, and tracked according to logical sequences, events, dependencies, and geographic hierarchies.
Data-center automation is vital because server virtualization has a potential downside: adopters generally find that the number of virtual server instances under management increases by several hundred percent. Given shrinking IT headcounts and current server-to-administrator ratios, one important key to optimal data-center TCO is marrying server virtualization running on modular/blade server systems to next-generation data-center automation architectures. Without automation, the promise of server virtualization may remain unfulfilled.

Effective deployment of automation imposes stringent requirements on the data-center networking foundation: dynamic and policy-based delivery of network capacity, without over-provisioning for peak load per server, with network bandwidth management remaining completely transparent to applications.

Automation requires a data-center networking foundation alternative that scales better at lower cost, connects servers within a unified high-bandwidth, low-latency 10GbE fabric, and then creates a central pool of LAN/WAN, IPC, and NAS/SAN resources that all servers connected to the fabric can share. Both IP and storage traffic can be carried to the server over a single 10GbE connection, which further reduces complexity.

The result is that the networking foundation becomes a centrally managed resource, which provides several added benefits for data-center automation:

■ On-demand, scalable I/O allowing servers to access the resources they need, when they need them;
■ Simplified management allowing I/O to be centrally administered and scaled, which significantly reduces costs and management complexity;
■ Reduced downtime as failure rates are lowered by reducing managed cards and ports in the network infrastructure;
■ Lower costs through dynamic matching of I/O requirements to performance rather than to the number of servers. Users can see as much as a 50% reduction in I/O costs due to fewer cards, cables, and switch ports; and
■ Flexibility as I/O identities previously bound to individual server hardware, such as World-Wide Node Names and MAC addresses, are virtualized and stored in the data-center networking fabric, which permits rapid change.
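The last point, virtualized I/O identities, can be sketched as a small fabric-managed pool: a MAC address (or, analogously, a Fibre Channel World-Wide Node Name) is bound to a server and can later be migrated to another without touching server hardware. The class, method names, and addresses below are illustrative assumptions, not a real vendor API.

```python
# Minimal sketch of virtualized I/O identities held in the fabric
# rather than burned into server hardware. All names are illustrative.

class IdentityPool:
    def __init__(self, macs):
        self.free = list(macs)   # unassigned identities
        self.bound = {}          # server name -> MAC address

    def bind(self, server: str) -> str:
        """Assign the next free identity to a server."""
        mac = self.free.pop(0)
        self.bound[server] = mac
        return mac

    def migrate(self, old: str, new: str) -> str:
        """Move an identity between servers, e.g. after a failover."""
        mac = self.bound.pop(old)
        self.bound[new] = mac
        return mac

pool = IdentityPool(["00:1B:21:00:00:01", "00:1B:21:00:00:02"])
pool.bind("web-01")                      # web-01 gets the first MAC
print(pool.migrate("web-01", "web-02"))  # same MAC now answers on web-02
```

Because the identity lives in the fabric, a replacement server inherits the failed server's network personality in one operation, which is what permits the rapid change the bullet describes.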

Virtualized server I/O
Despite emerging hardware and software initiatives, such as 64-bit multi-core processors and advanced automation, the networking foundation capabilities of enterprise data centers continue to be severely limited. What is required is a networking architecture that unifies network services into a pool of resources that can be easily provisioned and deployed according to changing business requirements.

A key component of the evolution towards allowing IT to respond to dynamic business needs is a fundamentally new approach for server I/O networking that meets the following challenges:

■ Fundamental change in I/O requirements including proliferation of network protocols, and increased performance requirements;
■ Evolution in business-critical workloads and applications; and
■ Evolution from physical to logical topologies enabling dynamic provisioning.
Traditionally, server CPUs have performed TCP/IP-based network protocol processing. The past several years have seen a proliferation of network protocols, including upper-layer protocols such as iSCSI, NFS/CIFS, MPI, RDMA/TCP, and XML.

Security protocols are also growing in use. Traffic that passes over the Internet is often encrypted using IPSec or SSL. Firewalls, intrusion detection, virus detection, anti-spam, DoS (denial-of-service) attack prevention, and other security functions must be maintained for incoming data. Rising bandwidth and the growing level of sophistication needed to stop malicious hackers and spammers have both increased the demands placed on processors.

Each of these upper-layer protocols has varied performance requirements. For example, while NFS/CIFS applications are throughput-bound, RDMA/TCP is primarily latency-sensitive. A range of other protocols and applications, such as iSCSI, falls in between. Taken together, these protocols drive the need for a low-latency, high-bandwidth server I/O that can be extended to create a low-latency server-to-server interconnect spanning multiple chassis. In addition, by using optical transports, the low-latency fabric can be extended across multiple sites to create a global fabric of servers across all the data centers within the enterprise.

The range of business-critical data-center applications, from decision support systems and online transaction processing to the broad-based use of high-performance cluster computing in finance, pharmaceutical, and aero/auto segments, all require a QoS-capable, flexible server I/O network that can be provisioned dynamically for each server.

The migration toward server virtualization and automation again has specific performance and flexibility implications for server I/O. Virtualization drives the growth in bandwidth requirements of each physical server, while dynamic provisioning of new applications requires new levels of reliability and intelligence for servers to flexibly use a mix of client/server, IPC, and storage communications, without the burden of over-provisioning.

The virtualized server I/O architecture combines the 10GbE network protocol with intelligence and flexibility to provide a comprehensive server I/O foundation for the automated data center. Each server is provided with multiple virtual I/O interfaces that can be configured with as much as 10Gbps of I/O bandwidth. The number of interfaces and the amount of bandwidth can be dynamically increased or decreased while the server and its applications are running. Network utilization is dramatically improved, thereby reducing the number of dedicated interfaces as well as switches required to provide the connectivity.
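The dynamic provisioning described above can be sketched as a simple allocator that grows and shrinks virtual interfaces against a shared 10Gbps physical link while refusing requests that would oversubscribe it. The class, its names, and the bandwidth figures are illustrative assumptions, not any vendor's actual API.

```python
# Sketch of dynamic virtual-interface provisioning against a shared
# 10Gbps link, as described above. Names and limits are illustrative.

class VirtualIOFabric:
    LINK_CAPACITY_GBPS = 10.0

    def __init__(self):
        self.ifaces = {}  # interface name -> allocated Gbps

    def allocated(self) -> float:
        return sum(self.ifaces.values())

    def set_bandwidth(self, name: str, gbps: float) -> bool:
        """Grow or shrink an interface while the server keeps running."""
        others = self.allocated() - self.ifaces.get(name, 0.0)
        if gbps < 0 or others + gbps > self.LINK_CAPACITY_GBPS:
            return False      # would oversubscribe the physical link
        self.ifaces[name] = gbps
        return True

fabric = VirtualIOFabric()
fabric.set_bandwidth("lan0", 4.0)         # client/server traffic
fabric.set_bandwidth("san0", 4.0)         # storage traffic
print(fabric.set_bandwidth("ipc0", 3.0))  # False: only 2Gbps remain
print(fabric.set_bandwidth("san0", 2.0))  # True: shrinking frees headroom
```

The key property is that reallocation is an administrative operation on the fabric, not a cabling or card change, which is what lets bandwidth follow workloads without over-provisioning each server.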

The virtualized server I/O architecture is flexible enough to handle a wide range of protocols and it can be easily extended to meet the needs of new or evolving protocols. It provides a low-latency interconnect for highly efficient communications between servers within a single chassis as well as transfers between servers located in different chassis in the data center, for the entire range of latency-sensitive data center applications.

The emerging virtualized server I/O architecture addresses the challenges complex enterprise data centers face today by providing a server I/O solution with higher performance, superior flexibility, scalability, and availability, at a lower cost than currently available alternatives. The benefits of virtualized server I/O include the following:

■ Rapid re-provisioning and re-allocation of data-center server resources in response to changing business needs;
■ Reduced operating expenses and maximized IT productivity; and
■ Simplified overall IT operations.

The net result of virtualized server I/O is enhanced performance, reduced costs, and a greater return on data-center acquisition and operational expenses.

Saqib Jang is founder and principal at Margalla Communications in Woodside, CA.
