SAN infrastructure: A strategic business decision

Posted on November 01, 2000


When designing a storage area network, be sure to plan for future requirements as well as current needs.

By Derek Granath


The exponential growth of data, and the need to store, share, and manage that data effectively, has made the selection of a data storage infrastructure a make-or-break business decision. The once-popular method for growing storage, hauling in yet another server with a disk array attached, is losing its appeal, as server-captive storage configurations have proven too inefficient, costly, and unmanageable to meet today's storage requirements.

In place of server-attached storage, networked storage architectures such as storage area networks (SANs) have been gaining popularity. In fact, some industry analysts estimate that more than 80% of the world's external storage will be SAN-attached by 2003. Why? Because SANs provide the scalability, accessibility, and manageability to satisfy present and future business computing requirements.

SANs are strategic investments that provide infrastructure-level solutions for data storage that can scale to meet future needs and reduce storage management costs over the long haul. However, IT managers need to consider several important factors to ensure their SAN technology choices will serve their needs today and scale to be a solid foundation for the large inter-networked SANs of the future.

Four key decision points

There are four fundamental decision points to consider when selecting and implementing a SAN-based storage infrastructure:

  1. How large will it scale, and can it be scaled without disrupting the storage environment?
  2. How highly available will it be, and will future growth compromise availability?
  3. Does the level of security in the SAN infrastructure support current and future requirements?
  4. How easy will it be to manage?

Scalability

There are many issues to consider when measuring the scalability of a SAN infrastructure. Scalability is a function of how large the SAN can grow (how many hosts and storage devices it can support), how flexible it will be as it grows, and how well it will support future technologies.

In Fibre Channel-based SANs, there are two general approaches to building a framework for storage applications: centralized and networked.

The centralized model is usually based around a director, which has a larger number of ports than other switches, and is usually more expensive. In a centralized SAN, once the "core" interconnection is in place, the SAN can be expanded through the interconnection of smaller switched-fabric SANs in departments and workgroups that are cabled back to the large switch, or director, to provide an enterprise SAN architecture. One drawback to the centralized model is the relatively high up-front cost of the director.

Alternatively, in the networked model, interconnection devices are cascaded, or interconnected, to create a meshed fabric. This approach provides a cost-effective entry into SANs, yet can scale to support large port counts in a "pay-as-you-grow" fashion.

Many SANs are first deployed at the workgroup or departmental level and then expanded over time throughout the data center. In an example of a networked approach, an IT organization could deploy a networked fabric of two smaller port-count switches for Unix storage consolidation in one department. The organization could then add two more switches to support NT consolidation in a separate department. When the IT organization is ready to centralize backup for the Unix and NT systems using LAN-free backup, switches can be added as needed to the fabric to support the backup application. In this example, the SAN has scaled seamlessly from a two-switch to a four-switch to a still larger networked fabric. The networked model enables users to grow the SAN incrementally, optimizing the infrastructure investment.
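The incremental growth in this example can be pictured with a few lines of code. The sketch below is purely illustrative: it assumes a simple graph model of a meshed fabric, and the Fabric class and add_switch method are hypothetical names, not a vendor API. The point is that each new switch joins by adding inter-switch links while the existing links and devices are left untouched.

```python
# A minimal sketch (not vendor code) of "pay-as-you-grow" fabric expansion.
# A meshed fabric is modeled as a graph of switches connected by inter-switch
# links (ISLs); new switches join without altering the existing topology.

class Fabric:
    def __init__(self):
        self.isls = {}  # switch name -> set of neighboring switch names

    def add_switch(self, name, connect_to=()):
        """Add a switch and cable it to existing switches via ISLs."""
        self.isls.setdefault(name, set())
        for peer in connect_to:
            self.isls[name].add(peer)
            self.isls.setdefault(peer, set()).add(name)

    def switch_count(self):
        return len(self.isls)


fabric = Fabric()
# Department 1: two switches for Unix storage consolidation.
fabric.add_switch("sw1")
fabric.add_switch("sw2", connect_to=["sw1"])
# Department 2: two more switches for NT consolidation, meshed into the fabric.
fabric.add_switch("sw3", connect_to=["sw1", "sw2"])
fabric.add_switch("sw4", connect_to=["sw1", "sw2", "sw3"])
# Centralized LAN-free backup: add another switch; existing links stay in place.
fabric.add_switch("sw5", connect_to=["sw3", "sw4"])
print(fabric.switch_count(), "switches in the fabric")
```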

Scalability also entails being able to grow non-disruptively and dynamically, which acknowledges another reality: growth can come in different areas, at different times, and with different technological demands. For example, an IT organization may need to grow the SAN to accommodate more servers and storage devices or to support new applications. Or, a SAN may need to grow to accommodate new technologies such as higher speeds.

With organic SAN growth, it's essential that the environment be able to scale without affecting storage operations. In the example above, by adding switches to an existing fabric, there was no downtime for the existing servers and storage devices, and no redesign was required to scale the fabric. However, this level of scalability and flexibility is only achievable if the switching infrastructure has the distributed intelligence to automatically detect new switches in the fabric and disseminate certain fabric-wide information, such as addressing and zoning information.

Another significant benefit of the networked model is the ability to grow the SAN by adding new switches to an existing fabric. With a centralized SAN model, expanding beyond the port capacity of a monolithic switch can often mean replacing the entire SAN infrastructure with a larger, more capable switch (what's often referred to as a forklift upgrade), which is a disruptive scaling method at best.

A networked model also has inherent advantages in accommodating technology change. For example, in moving from 1Gbps to 2Gbps Fibre Channel speeds, users with a networked model can upgrade devices incrementally to take full advantage of each speed increment.

Another advantage is flexibility in working within Fibre Channel's 10km distance limitation. With a networked fabric approach, the host and storage subsystems can achieve distances of up to 40km through the distributed placement of switches that support 10km inter-switch links.

How available is it?

Availability within a storage network should be considered in the context of user access to data: what matters is end-to-end availability of the entire storage environment as seen by the applications. When selecting a SAN infrastructure, it's critical to consider both the availability of the underlying hardware and the reliability of the SAN environment.

High availability means that an application has continued access to its data. Designing a highly available SAN is a matter of designing resiliency into the underlying storage network, so that in case of failure of a connector, a link, or an entire interconnection device, the application can still access the data without interruption.

There are many types of failures that can affect data access: hardware failures, software failures, physical events such as fire, flood, and earthquakes, and operator error. A SAN design that accounts for and minimizes the possibility of human error, the most common cause of failure, will inherently be more reliable.

When selecting a SAN infrastructure, IT managers must consider the level of automation. If a new switch is added to the SAN, will the SAN automatically detect the addition and educate the switch about the rest of the SAN, such as providing zoning information? This type of auto-discovery and auto-configuration is possible only if the fabric contains some distributed intelligence about the SAN to enable each device to self-learn the topology non-disruptively.
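As a rough illustration of that behavior, the short sketch below assumes a simplified model in which the fabric keeps one copy of its zoning and addressing state and hands it to any switch that joins. The FabricConfig and Switch classes and the join() method are invented for the example; an actual fabric distributes this state among the switches themselves rather than through a central object.

```python
# A minimal sketch of auto-discovery and auto-configuration: a newly attached
# switch is detected and receives fabric-wide state (zoning, addressing)
# without operator intervention or downtime for existing members.

class Switch:
    def __init__(self, name):
        self.name = name
        self.zoning = None      # zoning information, learned from the fabric
        self.domain_id = None   # addressing information, assigned by the fabric

class FabricConfig:
    def __init__(self, zoning):
        self.zoning = zoning
        self.next_domain_id = 1
        self.members = []

    def join(self, switch):
        """Auto-configure a newly detected switch with fabric-wide state."""
        switch.domain_id = self.next_domain_id
        switch.zoning = self.zoning
        self.next_domain_id += 1
        self.members.append(switch)

fabric = FabricConfig(zoning={"unix_zone": ["host_a", "array_1"]})
new_switch = Switch("sw6")
fabric.join(new_switch)   # existing members are untouched
print(new_switch.domain_id, new_switch.zoning)
```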

In addition, users should consider the resiliency of the underlying SAN infrastructure. Does the SAN heal itself in the case of a port or switch failure? Does it automatically reroute around failures? Are components such as power supplies redundant and "hot-swappable," or does replacing them require a complete shutdown of the device?
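To make rerouting around a failure concrete, the sketch below treats the fabric as a graph of inter-switch links and finds an alternate path when one link is marked as failed. The breadth-first search is only an illustration of alternate-path selection, not the routing a real fabric performs, and the topology and switch names are made up.

```python
# A minimal sketch of self-healing routing in a meshed fabric: if the direct
# link fails, traffic is rerouted over the remaining inter-switch links.

from collections import deque

links = {
    "sw1": {"sw2", "sw3"},
    "sw2": {"sw1", "sw4"},
    "sw3": {"sw1", "sw4"},
    "sw4": {"sw2", "sw3"},
}

def find_path(src, dst, failed=frozenset()):
    """Return a path from src to dst that avoids any failed links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen and frozenset((path[-1], nxt)) not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("sw1", "sw4"))                                      # e.g., sw1 -> sw2 -> sw4
print(find_path("sw1", "sw4", failed={frozenset(("sw2", "sw4"))}))  # reroutes via sw3
```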

More importantly, consider the design of the SAN and its failure points. High availability and business continuance require duplication of servers, applications, user access to applications, storage, and more. IT managers should design resiliency into the network to achieve the highest levels of availability.

Security issues

IT managers must also consider security as a primary concern when designing and implementing a SAN. For example, zoning is a fundamental security feature of a SAN. Zoning enables users to logically segment a SAN with a visibility "firewall" to control access and visibility to servers and storage subsystems. With zoning, network administrators can arrange fabric-connected devices, servers, or workstations into virtual private SANs within the physical configuration of the SAN fabric. Zone members "see" only members in their zones and, therefore, access only one another. A device not included in any zone is not available to the devices in the zones.

There are two types of zoning: hardware and software. Whereas software zoning uses worldwide names to define membership in a zone, hardware zoning is specified by physical port. For optimal security and flexibility, both types of zoning should be supported. Zone overlap enables a storage device (or server) to reside in more than one zone and to be shared among different servers, which may themselves be in separate zones. Another important factor is zoning scalability: how many zones are supported? As the SAN increases in size, any limitation in the number of zones can be problematic.
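A simple way to picture zoning is as a set-membership check, as in the sketch below. The zone definitions mix worldwide-name ("software") members and physical-port ("hardware") members, and an overlapping member shows how one storage port can be shared by servers that otherwise cannot see each other. All identifiers are invented for the example, not a real zoning configuration.

```python
# A minimal sketch of zone-based access control: two devices may communicate
# only if at least one zone contains both of them.

zones = {
    "unix_zone": {"wwn:10:00:00:00:c9:aa:01", "wwn:50:06:04:82:bb:01"},  # WWN-based members
    "nt_zone":   {"port:sw1/3", "wwn:50:06:04:82:bb:01"},                # port member plus a shared array
}

def can_access(member_a, member_b):
    """True if the two members share at least one zone."""
    return any(member_a in zone and member_b in zone for zone in zones.values())

print(can_access("wwn:10:00:00:00:c9:aa:01", "wwn:50:06:04:82:bb:01"))  # True: same zone
print(can_access("wwn:10:00:00:00:c9:aa:01", "port:sw1/3"))             # False: no shared zone
```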

How easy is management?

When evaluating manageability, IT managers should consider both the underlying manageability of the infrastructure and the tools that are available to access it.

Companies such as Veritas now offer tools that can take advantage of switching platforms to monitor device status, automatically discover a SAN topology, and perform software-controlled zoning. However, it's important to remember that the chain is only as good as its weakest link. Even the most sophisticated management tools cannot compensate for a SAN infrastructure that can't support zoning or other basic management features.

Perhaps more importantly, manageability of a SAN goes beyond passive monitoring to proactive management, which includes finding and fixing problems before they're seen. With proactive management, users can establish policies and thresholds to prevent interruption of service.
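As a sketch of what policy-based, proactive management might look like, the example below assumes the management tool can read per-port error counters; the read_port_counters() helper is a hypothetical stand-in for whatever interface the switch actually exposes. An alert is raised as soon as a configured threshold is exceeded, before users experience an interruption of service.

```python
# A minimal sketch of threshold-based, proactive SAN monitoring.

THRESHOLDS = {"crc_errors": 10, "link_resets": 5}   # policy limits per polling interval

def read_port_counters(port):
    # Hypothetical stand-in: a real tool would query the switch for these counters.
    return {"crc_errors": 12, "link_resets": 1}

def check_port(port):
    counters = read_port_counters(port)
    exceeded = [name for name, limit in THRESHOLDS.items() if counters[name] > limit]
    if exceeded:
        # Flag the problem before it becomes a visible service interruption.
        print(f"Port {port}: threshold exceeded for {', '.join(exceeded)}")

check_port("sw1/7")
```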

Minimizing TCO

One of the most important benefits of a SAN is the return on investment from the optimal use of existing storage resources. As users grow the SAN over time to support larger environments and span wider geographic areas, it's essential that the SAN infrastructure can accommodate legacy and future technologies.

For example, many legacy SAN environments are based on loops. Will your new SAN infrastructure support these loop-based environments? How easily can the loop environment be upgraded to full fabric functionality? Is it a license key upgrade, a software upgrade, or a full switch replacement?

If your SAN infrastructure is already fabric-based, are the vendor's products themselves compatible and interoperable? Will future product enhancements be accessible without a hardware upgrade? If you need to expand servers and storage, and therefore switch port capacity, can this be accomplished without disrupting the storage environment?

Thinking strategically

These questions all point to the total cost of ownership advantages of SANs over server-attached storage. In addition to product acquisition costs, IT managers need to know what burden the SAN will place on staff and budgets as it grows beyond the workgroup or business unit. A SAN should be scalable and easily internetworked, supported by automated features such as auto-discovery and zoning information sharing. Moreover, the SAN should be highly manageable and available, with emphasis on resiliency through "hot-swappable" components and self-healing fabrics.

To avoid one of the greatest costs of a SAN (the need to re-architect the entire network from scratch once a limitation is encountered), IT managers should anticipate what the limits may be for the technology under consideration. Can another switch be added easily to the SAN? Can the SAN be internetworked over IP or through fiber-optic networks in a metropolitan area via DWDM technologies? The SANs of the future will require a higher level of inter-networking capability to facilitate growth and interconnection of the large fabric SAN beyond enterprise boundaries and into a global environment.

Coping with future data storage and storage management requirements means making the right choices today, with an eye toward the future. IT managers interested in SANs should deploy a scalable, manageable, and internetworking-ready storage technology infrastructure.

Following that model in choosing the proper SAN architecture will not only deliver immediate benefits, but will also create an infrastructure designed for the future.

Derek Granath is director of product marketing at Brocade Communications (www.brocade.com).

