Building SANs to scale

Posted on January 01, 1999

By Larry Olson

As more data and transactions go online, early storage area networks (SANs) are being challenged to support more and more devices. Although Fibre Channel technology was designed for scalability, not all implementations scale equally. The simplest and least expensive implementation of Fibre Channel is a point-to-point connection between a server and a storage device.


While this implementation delivers speed, it doesn't offer scalability. For increased scalability, a number of Fibre Channel devices can be connected via fiber or copper cable on a shared-bandwidth arbitrated loop. While each loop theoretically supports a maximum of 126 devices, the practical number is much lower, depending on what devices are attached and how often they need to transfer data.

An arbitrated loop configuration increases the number of storage devices to which a server can connect and adds a hub for arbitration. However, only one originator and one endpoint can communicate at a time, so the full bandwidth is available to only one connection at any given moment. And while it is possible to string more loops together to add devices, the bandwidth is still shared. Therefore, if the purpose of the Fibre Channel installation is fast data access, at some point users may decide that the wait for arbitration is too painful and will demand a more scalable solution.
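A rough way to see the arbitration penalty is to divide the loop's line rate by the number of devices actively contending for it. The short Python sketch below is a back-of-the-envelope illustration only; it assumes roughly 100MBps of usable loop bandwidth, evenly distributed traffic, and no arbitration overhead, none of which is guaranteed in practice.

# Back-of-the-envelope throughput on a shared arbitrated loop.
# Assumptions (illustrative only): ~100MBps of usable loop bandwidth,
# traffic spread evenly across the devices actively transferring data.

LOOP_RATE_MBPS = 100

def per_device_bandwidth(active_devices):
    """Average MBps each active device sees when the loop is shared."""
    return float(LOOP_RATE_MBPS) / active_devices

for n in (2, 8, 32, 126):
    print("%3d active devices -> ~%5.1f MBps each" % (n, per_device_bandwidth(n)))

Even with only a few dozen active devices, each one averages just a few megabytes per second, which is why heavily loaded loops push users toward switched fabrics.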

In large environments, a switched SAN may be the answer to achieving both speed and scalability. A switched SAN provides the flexibility of a fabric, with full bandwidth available to all nodes on the network simultaneously. For large Fibre Channel SANs (more than 16 devices, each needing a dedicated full gigabit per second), multiple switches can be connected together into a single fabric. Even at that scale, a SAN can be built that meets application requirements.

Switched SAN options

Because multi-switch fabrics provide in-order delivery of frames through any number of links, several configurations are possible, including load-sharing and fault failover. Users can select from a variety of wiring alternatives in three categories:

- Cascaded (low performance, lower cost)

- Meshed (higher performance, more resilient)

- Cross-connected (highest performance, lowest cost per gigabit, highest resiliency)

Selecting the right architecture

Cascaded architectures are the least expensive option and are similar to traditional network structures in which LAN switches are daisy chained (see Fig. 1).

Like any topology that shares resources among nodes, cascading is most appropriate when aggregate bandwidth requirements are low or when only a few devices need to be interconnected (e.g., in work groups, low-volume file sharing applications, fixed configurations where senders and receivers of data remain static, and other smaller installations).

Beyond these limited applications, cascaded architectures do not scale well. Adding switches can significantly reduce performance due to compounded latency caused by multiple switch-to-switch "hops" and to the limited shared E_port "pipeline." Bandwidth and latency vary unpredictably, depending on where messages enter and exit the fabric, which means network planners must pay close attention to changing traffic patterns when designing and redesigning their system.
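To make the "hops" problem concrete, consider the worst case in a daisy chain: a frame entering at one end and leaving at the other crosses every intermediate switch, and all traffic between the two halves of the chain funnels through a single E_port link. The Python sketch below is illustrative only; the per-hop latency figure is an assumption, not a measured or vendor-supplied value.

# Worst-case path length in a cascaded (daisy-chained) fabric.
# The per-hop latency is an assumed placeholder, not a specification.

PER_HOP_LATENCY_US = 2.0  # assumed microseconds added per switch-to-switch hop

def worst_case_hops(switches):
    """In a simple daisy chain, the longest path crosses every ISL once."""
    return switches - 1

for n in (2, 4, 8):
    hops = worst_case_hops(n)
    print("%d switches: up to %d hops, ~%.0f microseconds of added switch latency"
          % (n, hops, hops * PER_HOP_LATENCY_US))

The latency itself may be tolerable; the bigger issue is that every added switch lengthens the worst-case path while the inter-switch links stay shared.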

For projects requiring a more predictable environment, such as video serving, high-volume or frequently accessed databases, and flexible environments where data paths may change dynamically, meshed architectures offer a cost-effective entry to high-performance multi-switch topologies (see Fig. 2). Because each switch is directly connected to every other switch in the fabric, the hop count remains low even as the fabric scales, minimizing bandwidth loss and the effects of compound latency. There is no single point of failure, so this configuration is extremely resilient and is an excellent solution for backup applications.

Eventually, however, the high number of required inter-switch connections (E_ports) makes it impossible to add more I/O ports. As a general rule, the maximum number of ports to be used for interconnects should be half of the switch's total ports; the other half should be reserved for devices.
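The arithmetic behind that rule is straightforward: in a full mesh of n switches, each switch dedicates n-1 ports to E_port links, so interconnects quickly eat into the ports left for devices. A minimal Python sketch, assuming 16-port switches purely for illustration:

# Port budget for a fully meshed fabric built from identical switches.
# A 16-port switch is assumed purely for illustration.

PORTS_PER_SWITCH = 16

def mesh_budget(switches, ports_per_switch=PORTS_PER_SWITCH):
    isl_ports = switches - 1                  # one E_port link to every other switch
    io_ports = ports_per_switch - isl_ports   # ports left over for devices
    return isl_ports, io_ports, io_ports * switches

for n in (2, 4, 9):
    isl, io, total = mesh_budget(n)
    print("%d switches: %d ISL ports and %d device ports per switch, %d device ports total"
          % (n, isl, io, total))

# With the half-the-ports rule (no more than 8 E_ports on a 16-port switch),
# the full mesh tops out at nine switches: 8 E_ports plus 8 device ports each.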

Obviously, one way to grow a meshed configuration is to add larger switches. But for users who need open-ended scalability and robustness, a cross-connected configuration may provide a better solution.

Certain applications, such as CPU clustering, data mining, and large enterprise backup, may require very large fabrics with a high degree of redundancy. For these applications, a cross-connected architecture provides the ultimate high-bandwidth solution. In this modified "star" topology, some switches are dedicated to perform "cross-connect" functions, creating a variety of performance, resiliency, and cost-per-gigabit benefits not found in other configurations (see Fig. 3).

Performance, expandability, and resiliency

Cross-connected fabrics maintain the highest aggregate bandwidth by allowing network designers to control the ratio of I/O ports to cross-connected ports. Users can create as many redundant paths as necessary to achieve performance goals.

Cross-connected networks can be expanded in three ways, which can be applied individually or in combination, depending on performance and cost-per-gigabit requirements.

Adding more cross-connected links (inexpensive wiring-only expansion). Adding more cross-connected cables raises the overall fabric capacity and non-blocking percentage (the percentage of time each port has uncontested network access; see the sketch following these three options) without requiring additional switches. The drawback is a reduction in the number of I/O ports available on the perimeter (see Fig. 4).

Adding I/O switches (I/O expansion). Adding I/O switches increases the number of available end ports and overall fabric capacity while incurring only a minimum of additional blocking, depending on the number of cross-connect links used (see Fig. 5).

Adding cross-connected switches (performance expansion). This raises overall system performance considerably while only modestly reducing the number of available I/O ports (see Fig. 6).
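The same bookkeeping applies to all three options: what changes is whether new ports show up as device ports or as cross-connect uplinks. The Python sketch below estimates the non-blocking percentage for a single I/O switch; it assumes 16-port switches and is illustrative only, not a vendor figure.

# Rough non-blocking percentage for one I/O switch in a cross-connected fabric:
# the share of its device traffic that can leave the switch without contending
# for an uplink. 16-port switches are assumed purely for illustration.

PORTS_PER_SWITCH = 16

def non_blocking_pct(uplink_ports, ports_per_switch=PORTS_PER_SWITCH):
    device_ports = ports_per_switch - uplink_ports
    return min(1.0, float(uplink_ports) / device_ports) * 100

# Fig. 4-style expansion: adding cross-connect links trades device ports for uplinks.
for uplinks in (2, 4, 8):
    print("%d uplinks, %d device ports -> ~%.0f%% non-blocking"
          % (uplinks, PORTS_PER_SWITCH - uplinks, non_blocking_pct(uplinks)))

In this simplified model, adding I/O switches (Fig. 5) adds device ports while putting more load on the cross-connect stage, and adding cross-connect switches (Fig. 6) raises the uplink count at the cost of a few device ports per switch.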

A cross-connected switch configuration makes it possible to design systems with no single point of failure, since each I/O switch can be linked to multiple cross-connected switches. In the unlikely event of total switch outage, redundant data paths keep the network functioning at normal or near-normal capacity (see Fig. 7).
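As a rough illustration of that failover behavior, if an I/O switch spreads its uplinks evenly across several cross-connect switches, losing one of them removes a proportional share of the inter-switch bandwidth; whether what remains still feels "near-normal" depends on how much headroom the design carries. A minimal sketch under that even-distribution assumption:

# Inter-switch bandwidth remaining after one cross-connect switch fails,
# assuming each I/O switch spreads its uplinks evenly across them.

def surviving_capacity_pct(cross_connect_switches, failed=1):
    return 100.0 * (cross_connect_switches - failed) / cross_connect_switches

for k in (2, 3, 4):
    print("%d cross-connect switches, 1 failed -> ~%.0f%% of inter-switch bandwidth remains"
          % (k, surviving_capacity_pct(k)))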

Although cross-connected environments involve extra hardware costs, enhanced performance may make up for the initial expense on a per-gigabit basis. And when the economic impact of scalability and resiliency is factored in, cross-connected topologies can provide the best return on investment for a wide range of large and midsize installations.

Fig. 1: Cascading is the least-expensive option and is similar to traditional network structures in which LAN switches are daisy chained.
Fig. 2: Meshed architectures offer a cost-effective entry to high-performance multi-switch topologies.

Fig. 3: Cross-connected architectures provide a high degree of redundancy and bandwidth.
Fig. 4: Cross-connected networks can be expanded by adding switch-to-switch links.
Fig. 5: Adding I/O switches increases the number of available end ports and overall fabric capacity.
Fig. 6: Adding cross-connected switches raises overall system performance.
Fig. 7: Diagrams show the effect of outages at different locations in a cross-connected network.

Larry Olson is a senior systems engineer with Ancor Communications Inc., in Minnetonka, MN. He can be reached at larryo@ancor.com.

