The recently completed 16GFC standard doubles the nominal speed of the Fibre Channel physical interface from 8Gbps to 16Gbps and doubles the data throughput rate from 800MBps to 1,600MBps. 16GFC products will be released in 2011 and will use SFP+ optical modules to ensure economies of scale.

From HBAs to switches, 16GFC will enable higher performance with lower power consumption per bit than previous generations.

The benefits of 16GFC are obvious: faster data transfer rates, fewer links required, fewer devices to manage, and lower power consumption. Several technology advances and trends are driving bandwidth requirements for SANs, including application growth, server virtualization, multi-core processors, PCI Express 3.0, increased memory, and solid-state drives (SSDs).

Overview

16GFC offers significant improvements over previous generations of Fibre Channel, including the use of 64b/66b encoding and linear variants. In addition, 16GFC uses electronic dispersion compensation (EDC) and transmitter training to improve backplane links.

Table 1: Fibre Channel Speed Characteristics

Speed Name   Throughput (MB/sec)   Line Rate (Gbps)   Encoding   Retimers in the Module   Transmitter Training
1GFC         100                   1.0625             8b/10b     No                       No
2GFC         200                   2.125              8b/10b     No                       No
4GFC         400                   4.25               8b/10b     No                       No
8GFC         800                   8.5                8b/10b     No                       No
10GFC        1200                  10.53              64b/66b    Yes                      No
16GFC        1600                  14.025             64b/66b    Yes                      Yes
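A quick back-of-the-envelope check shows how the 16GFC line rate in Table 1 follows from the encoding change. The short Python sketch below is purely illustrative (the figures come from Table 1, not from any code in the standard): because 64b/66b wastes far fewer bits than 8b/10b, 16GFC can double 8GFC's effective data rate with a line rate of only 14.025Gbps rather than a naive 17Gbps.

    # Effective data rate = line rate x encoding efficiency.
    # 8GFC uses 8b/10b: 8 data bits in every 10 line bits.
    gfc8_data_rate = 8.5e9 * 8 / 10        # 6.8 Gbps effective

    # 16GFC must carry twice that: 13.6 Gbps effective.
    target = 2 * gfc8_data_rate

    # 64b/66b carries 64 data bits in every 66 line bits, so the
    # required line rate stays well below 2 x 8.5 = 17 Gbps.
    gfc16_line_rate = target / (64 / 66)

    print(f"{gfc16_line_rate / 1e9:.3f} Gbps")  # 14.025, matching Table 1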


To meet the Fibre Channel Industry Association's roadmap requirements for future speeds and backward compatibility, 16GFC ASICs must also support 8GFC and 4GFC. The 16GFC ASICs therefore need 8b/10b codecs for 4GFC and 8GFC and 64b/66b codecs for 16GFC. Users can attach new 16GFC devices and switches to existing infrastructure, and the 16GFC devices will auto-negotiate down to the lower speeds of the legacy devices. New 16GFC ports can be added seamlessly to existing networks to increase performance in new segments of the storage network without requiring a forklift upgrade.
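The outcome of that auto-negotiation is simply the fastest speed both ends of a link support. The sketch below is a minimal model of that behavior, not the actual link-level protocol defined in the standard; the function and variable names are hypothetical.

    # Minimal model: two ports come up at the highest common speed.
    def negotiate_speed(port_a_speeds, port_b_speeds):
        common = set(port_a_speeds) & set(port_b_speeds)
        if not common:
            raise ValueError("no common speed; link cannot come up")
        return max(common)

    # A 16GFC HBA (which must also support 4GFC and 8GFC) attached
    # to a legacy switch port links up at 8GFC.
    hba_16gfc = [4, 8, 16]        # supported speeds, in "GFC"
    legacy_switch = [2, 4, 8]
    print(negotiate_speed(hba_16gfc, legacy_switch))  # 8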

Benefits

The higher speed links of 16GFC eliminate tens or hundreds of ports from a comparable 8GFC fabric. The real savings occur when the number of HBAs, switches and end devices can be decreased. For example, a Top of Rack (ToR) switch that needs 100Gbps of bandwidth requires only eight 16GFC ISLs instead of sixteen 8GFC ISLs.
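The arithmetic behind that example is worth making explicit. The helper below is a hypothetical illustration using the nominal throughput figures from Table 1, with bandwidth converted at 1Gbps = 125MBps:

    import math

    def isls_needed(required_gbps, per_link_mbps):
        required_mbps = required_gbps * 1000 / 8   # Gbps -> MBps
        return math.ceil(required_mbps / per_link_mbps)

    print(isls_needed(100, 800))    # 16 ISLs at 8GFC
    print(isls_needed(100, 1600))   # 8 ISLs at 16GFC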

In addition to the reduction in equipment, which significantly cuts power consumption, 16GFC also reduces the power required to transfer bits over the link. When the cost of cabling and operating expenses (opex) such as electricity and cooling are considered, the total cost of ownership (TCO) is often lower when links run at twice the speed. The goal of 16GFC designs is for a 16GFC port to consume less power than the two 8GFC links needed to deliver the same throughput. Initial estimates show a 16GFC SFP+ consuming 0.75 watts, compared with 0.5 watts for an 8GFC SFP+. By these estimates, a 16GFC link will consume 25% less power than two 8GFC ports.
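Those estimates can be checked directly. The snippet below simply restates the per-module figures quoted above; the 25% saving is watts-per-throughput arithmetic, not a measured result:

    # One 16GFC SFP+ vs. the two 8GFC SFP+ needed for the same 1,600MBps.
    gfc16_watts = 0.75
    two_gfc8_watts = 2 * 0.5            # 1.0 W total

    saving = 1 - gfc16_watts / two_gfc8_watts
    print(f"{saving:.0%}")              # 25%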

If fewer links are needed, cable management becomes simpler. Managing the cables behind a desktop or home entertainment center is bad enough; managing hundreds of cables from a single switch or bundles of cables from a server can be horrendous. Reducing the number of cables also aids troubleshooting and recabling. Cabling costs are significant, and users can pay more than $300 per port in structured cabling environments, so halving the link count with fast 16GFC links delivers real savings.

With 16GFC, there are fewer links, cables and ports, and lower power consumption, for the same performance.

Applications

16GFC is designed for high bandwidth applications and devices, including ISLs, data migration, Virtual Desktop Infrastructure (VDI), and SSD drives or memory arrays.

The majority of servers that use Fibre Channel run large databases and other enterprise-class applications. While database applications do not usually require large amounts of bandwidth when individual records are updated or read, the servers need to be sized for demanding workloads such as backup and data mining (analytics), when every record may be copied or queried.

Streaming I/O is another class of application that will benefit from 16GFC. A single I/O from these applications can transfer a block of data several orders of magnitude larger than the blocks in general-purpose file systems. Such an I/O can take minutes or hours to complete, during which controllers and drives issue sequential reads or writes as fast as they can.

Another use case for 16GFC links is between data centers, storage arrays or clouds. During data center consolidations, disaster recovery and equipment refreshes, users often need to migrate terabytes or even petabytes of data between storage arrays. The time to transfer such large data sets is usually limited by the speed of the links connecting the devices, rather than by the processors or controllers that bound throughput during normal processing.

Data Size   Time to Transfer at 1,600MBps
100 GB      1 minute
1 TB        10 minutes
10 TB       1 hour, 45 minutes
100 TB      17 hours
1 PB        1 week
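The table values are straightforward to reproduce. The sketch below (the function name is ours) uses decimal units (1TB = 10^12 bytes) and the nominal 1,600MBps rate, ignoring protocol overhead, so the results are best-case approximations:

    def transfer_hours(terabytes, rate_mbps=1600):
        return terabytes * 1e6 / rate_mbps / 3600   # 1 TB = 1e6 MB

    for tb in (0.1, 1, 10, 100, 1000):
        print(f"{tb:6} TB: {transfer_hours(tb):7.2f} hours")
    # 0.1 TB ~ 1 minute; 1000 TB (1 PB) ~ 173.6 hours, about a week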


VDI And SSDs

VDI is a growing trend in enterprises, where virtual desktops hosted in the data center are delivered to users on a variety of devices. VDI has the advantage of centralized management: applications and hardware can be easily upgraded in the data center and the updated desktops delivered virtually around the world. VDI has high bandwidth requirements when large numbers of users log into their virtual desktops at the same time, and this spike in activity leads to long startup times unless high performance VDI systems are used. 16GFC is one of the components that can improve performance at these critical initialization times.

Storage arrays based on SSDs are enabling a new level of performance. With lower latency and higher IOPS than traditional storage arrays, SSD systems with 16GFC interfaces are expected to improve bandwidth density by doubling the throughput of each port. SSDs are already used in high bandwidth applications such as online gaming, where bandwidth requirements have reached 50GBps. With the price of SSDs dropping rapidly, SSDs should be able to address many more applications where performance matters more than capacity.
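As a rough sizing exercise, the 50GBps figure cited above translates into port counts as follows; this is a hypothetical illustration using the nominal Table 1 rates, not a recommended design:

    import math

    workload_mbps = 50 * 1000                 # 50GBps = 50,000MBps
    print(math.ceil(workload_mbps / 1600))    # 32 ports at 16GFC
    print(math.ceil(workload_mbps / 800))     # 63 ports at 8GFC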

While many applications won't use the full bandwidth of 16GFC links, traffic and applications will grow over the next few years to fill the capacity of 16GFC. With more virtual machines being added to physical servers, performance requirements can quickly escalate beyond the levels supported by 8GFC. And with proprietary trunking technology, multiple 16GFC links can be combined for up to 128GFC of performance.