Broadcom Corporation recently unveiled the BCM578x0, its latest generation of 10GbE ASICs for LAN-on-Motherboard (LOM) and NIC applications, and the first in a new class of Converged Network Adapters (CNAs), a.k.a. Converged NICs (C-NICs). The new family of ASICs includes several innovations that raise the bar for CNA technology and offer a glimpse of the capabilities of the next generation of high performance Ethernet adapters.
The quad-port BCM57840 delivers twice the port density of competitive dual-port ASICs, in a footprint smaller than any competitor's smallest dual-port ASIC. In addition, the high performance BCM57840 hardware is RDMA-ready, which positions the product line to serve as the industry's first LOM/NIC platform for complete LAN, iSCSI SAN, FCoE SAN and low-latency HPC network convergence. Looking forward, this class of Ethernet controller, with 40Gb of bandwidth, has the potential to serve as a 40GbE host adapter or a 40GbE iSCSI storage port.
The BCM578x0 is the first in a new class of network controllers you can expect to see later this year in new high performance servers. There are five key reasons why data center managers will need the capabilities of this new generation of LOM and NIC in their virtualized servers:
1. Servers are set to drive high throughput on multiple 10GbE ports with 8- and 12-core processors - The days when servers didn't have the power to drive 10GbE traffic are gone. The new generation of 8- and 12-core processors from Intel and AMD is designed to drive I/O to and from servers loaded with virtual machines. According to Intel, its new 8-core Xeon 7500 can deliver up to 20x the performance of servers with single-core processors.
2. Servers are set to scale 10GbE ports with PCIe 3.0 - Another reason the capabilities of this new generation of LOM and NIC will be coveted in data centers is that they are speed-matched with the latest generation of PCIe 3.0 I/O interconnects, which will appear in new servers beginning in 2011. The bandwidth of an x8 PCIe 3.0 link is roughly 64Gb in each direction, which means a single link can support up to six 10GbE adapters, or one 40GbE adapter, running at full speed.
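The arithmetic behind that port count is easy to check. A rough sketch, using the public PCIe 3.0 figures of 8 GT/s per lane with 128b/130b line encoding (these numbers come from the PCIe spec, not from the article):

```python
# Back-of-the-envelope PCIe 3.0 x8 bandwidth estimate
GT_PER_LANE = 8.0      # PCIe 3.0 raw signaling rate, GT/s per lane
ENCODING = 128 / 130   # 128b/130b line encoding efficiency
LANES = 8              # x8 link

# Usable bandwidth per direction, in Gb/s (~63 Gb/s, i.e. roughly 64Gb)
gbps_per_direction = GT_PER_LANE * ENCODING * LANES

# How many full-rate Ethernet ports that can feed
ports_10gbe = int(gbps_per_direction // 10)  # six 10GbE ports
ports_40gbe = int(gbps_per_direction // 40)  # one 40GbE port

print(f"{gbps_per_direction:.1f} Gb/s per direction")
print(f"up to {ports_10gbe} x 10GbE or {ports_40gbe} x 40GbE at line rate")
```

The ~63 Gb/s result per direction is where the article's "six 10GbE adapters, or one 40GbE adapter" figure comes from.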
3. Virtualized servers require full offload - VMware's inbox support for 10GbE iSCSI offload in 2010 was an acknowledgement that CPU cycles should be preserved for virtual machines, and that virtualized servers require adapters with full offload. Best practices in the future will include deployment of 10GbE adapters that offload all network protocol and virtual switch processing, including TCP/IP, iSCSI, FCoE, RDMA and Virtual Ethernet Bridging (VEB).
4. Virtualized servers require more system memory - As the average number of virtual servers per physical server grows, so does the average amount of memory configured per physical server. The new generation of Ethernet LOM ASICs is critically important to server designers and admins, because it frees motherboard space for more system memory.
5. Complete convergence on Ethernet requires lower latency Ethernet - The current generation of 10GbE adapters and switches supports Data Center Bridging (DCB) and FCoE for network convergence. However, complete network convergence requires support for the Remote Direct Memory Access (RDMA) protocol used for low-latency traffic found in high performance computing (HPC) clusters. When this final convergence building block is put into place, LAN, NAS, iSCSI SAN, FCoE SAN and HPC cluster traffic can converge onto Ethernet. The BCM57840 is designed to support hardware offload of the RDMA protocol used for low-latency networking, making it the first LOM/NIC ASIC capable of supporting offload of all network protocol processing and VEB, the features needed for complete network convergence.
The bottom line is that Broadcom has raised the bar on its competition in the CNA market. To stay in the game, high-end Ethernet LOM/adapter ASICs from Brocade, Chelsio, Emulex, Mellanox, QLogic and SolarFlare will need to support 40Gb of bandwidth, at least four ports, and full offload, including VEB and some form of the RDMA protocol.
posted by: Frank Berry