Part I: Building storage networks: A book excerpt

Posted on September 01, 2001

Storage networking is built on three fundamental components: wiring, storing, and filing.

BY MARC FARLEY

To date, storage network products have been identified as either network-attached storage (NAS) or storage area network (SAN). NAS products have a heritage rooted in Ethernet-accessed data and are modeled after the concept of a network file server. SAN products are rooted in SCSI storage technologies and include several types of products designed to provide familiar functions all along the I/O path, including host I/O controllers and storage devices and subsystems. Some of the most noticeable SAN products are those that have replaced the parallel SCSI bus with switches and hubs.

NAS products were in the market for several years before SAN products. When SANs arrived, a great deal of confusion followed surrounding the relationship between the two. The situation turned into a minor industry power struggle where both camps tried to gain the upper hand. This led to a number of interesting analyses, including some attempts at distinguishing the two as different architectures. While the two are structurally different, they are much more alike than they are different, and they have the potential to be integrated together in a number of ways. In fact, there is an excellent chance that NAS and SAN will be integrated and eventually viewed as feature sets of future storage networking products.

This article analyzes both NAS and SAN as filing and storing applications. By distinguishing NAS and SAN in this way, it is possible to find some solid ground for developing storage network designs and evaluating the potential of new products and technologies. But to understand SAN and NAS in these terms, it is also important to understand the wiring component. This article looks at some of the wiring characteristics that are optimal for storage network applications.

Wiring, storing, and filing

Storage networking is built on top of three fundamental components: wiring, storing, and filing. All storage products can be broken down into some combination of functions from these three areas. The way these components are combined in products can be surprising, because storage products have not been developed along these lines, and so a great deal of functional overlap occurs.

Many people have spent many hours trying to determine what the killer applications for storage networking might be and how to make the technology easier to understand by virtue of its successful application. While there are many opinions on this point, the view taken in this article is that storage is itself an application. Just as client/server applications and distributed applications of many kinds run on a variety of networks, storage is a unique and special type of application that runs on multiple networking technologies.

As storage processes are tightly integrated with systems, it may be more appropriate to say that storage networks are systems applications. Higher-level business and user applications can use the services provided by storage networking applications. As is true with all technologies, some types of systems match the requirements of various higher-level applications better than others.

Wiring
The term "wiring" applies to all the software, hardware, and services that make storage transport possible and manageable in a storage network. This includes such diverse things as cabling, host I/O controllers, switches, hubs, address schemes, data-link control, transport protocols, security, and resource reservations. So, if this is an article on network storage, why use the term "wiring?" The answer is simple: Bus technologies like SCSI and ATA are still heavily used in storage networks and will probably continue to be used for many years to come. In fact, SCSI and ATA bus products are by far the most common storage technologies used by the NAS side of the storage network industry today.

Storage networks differ from data networks in two very important ways:

  • They transfer newly created data between systems and storage, and the data in transit may be the only copy. In other words, data can be lost if the network loses packets.
  • Systems expect 100% reliability from storage operations and can crash when failures occur.

Storage networks demand a high degree of precision from all components to implement a reliable and predictable environment. Despite its distance and multi-initiator limitations, parallel SCSI is an extremely reliable and predictable environment. New wiring technologies such as Fibre Channel, Ethernet Storage, and InfiniBand have to be able to provide the same, or better, levels of reliability and predictability if they are to succeed as replacements for SCSI. Another perspective views wiring as a storage channel. The term "channel," which originated in mainframe computing environments, connotes a high degree of reliability and availability.

The following sections look at some potential characteristics of wiring that would allow it to operate as a channel. This is not to say that available wiring technologies incorporate all these characteristics, because they don't, but it is important to understand how the various technologies compare relative to these ideals.

Minimal error rates
Storage networking involves massive data transfers where it is essential for all data to be transferred correctly. Therefore, storage networks demand the lowest possible error rates. Not only is there less risk of data corruption or failures, but lower error rates also reduce the amount of retransmitted data and the accompanying network congestion.

Flow control
Flow control is the capability to limit the amount of data transmitted over the network. This involves some method where the receiving entity sends a message to the sending entity telling it to stop transmissions so that it can complete processing the data it has already received. Alternatively, flow control can be implemented where a sending entity has the capability to send a certain amount of data but must wait for a signal from the receiving entity before sending more.

The goal of flow control is to prevent network entities from having buffer overflow conditions that force them to discard transmissions when the amount of incoming data exceeds their capacity to temporarily store and process it. Given that most wiring technologies have gigabit-per-second transfer rates, the flow control mechanism should be implemented in hardware at the data-link layer to be quick enough to be effective.

There are two types of flow control to consider. The first is flow control over an individual network link between adjacent entities such as a system and a switch. The other is end-to-end flow control between the sending entity and the receiving entity in the storage network. The two types of flow control are independent of each other, as there is no way to guarantee that implementing one type of flow control will prevent buffer overflows from occurring elsewhere in the network.
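As an illustration of the second, credit-style approach described above, the exchange can be sketched in a few lines of Python. This is a minimal sketch, not the mechanism of any particular wiring technology; the class and method names are invented for illustration. The sender may transmit only while the receiver has free buffer slots, and processing a frame at the receiver implicitly returns a credit:

```python
from collections import deque

class Receiver:
    """Receiving entity with a fixed buffer; free slots act as credits."""
    def __init__(self, buffer_slots):
        self.buffer = deque()
        self.buffer_slots = buffer_slots

    def credits_available(self):
        return self.buffer_slots - len(self.buffer)

    def accept(self, frame):
        # A well-behaved sender never transmits without a credit.
        assert len(self.buffer) < self.buffer_slots, "buffer overflow"
        self.buffer.append(frame)

    def process_one(self):
        # Processing a frame frees a slot, implicitly returning a credit.
        return self.buffer.popleft() if self.buffer else None

class Sender:
    """Sending entity that stops when the receiver's credits run out."""
    def __init__(self, receiver):
        self.receiver = receiver

    def send(self, frames):
        sent = 0
        for frame in frames:
            if self.receiver.credits_available() == 0:
                break  # must wait for the receiver to drain its buffer
            self.receiver.accept(frame)
            sent += 1
        return sent

rx = Receiver(buffer_slots=4)
tx = Sender(rx)
print(tx.send(range(10)))   # 4 -- sender stops when credits run out
rx.process_one()            # one frame processed, one credit returned
print(tx.send(range(10)))   # 1
```

The point of the sketch is that the receiver never discards data: transmission pauses before its buffer can overflow, which is exactly the guarantee a storage channel requires.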

Full-duplex transmissions
Full-duplex communications provides separate send and receive circuits between communicating entities. This is an important function for supplying the most immediate and accurate flow control. While data is being transferred from the sender to the receiver on one circuit, the receiver can throttle back the sender immediately on a separate circuit without having to wait for data transmissions to stop first.

In addition to flow control benefits, full-duplex communications provides a fast means for acknowledging completed transmissions between receiver and sender. For high-throughput transaction processing environments that also require high reliability, the capability to quickly process transmission acknowledgments is paramount.

Low latency
Latency is the amount of time required for a network entity to queue, transfer, and process transmitted data. Most data networks have fairly relaxed latency characteristics. For storage I/O, however, latency can be a major issue. Transaction systems that process a high number of interdependent I/Os per second cannot afford to be slowed by latency in the channel.

For example, a hypothetical storage network with a latency of 10 milliseconds would have a minimum transaction time of 20 milliseconds to account for the initial I/O request and its returned data or acknowledgment. Without including the time required by the subsystem, this translates into a maximum I/O rate of 50 I/Os per second, far below the I/O rates desired for most transaction systems. To ensure minimal impact, the wiring in a storage network should operate with latencies of 20 microseconds or less.
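The arithmetic behind this bound can be checked in a few lines; the helper name is invented for illustration, and the calculation deliberately ignores subsystem time, counting only one round trip per I/O:

```python
def max_io_rate(one_way_latency_us):
    """Upper bound on I/Os per second imposed by channel latency alone.

    Each I/O needs one round trip: the request out, and the returned
    data or acknowledgment back. one_way_latency_us is in microseconds.
    """
    round_trip_us = 2 * one_way_latency_us
    return 1_000_000 // round_trip_us

print(max_io_rate(10_000))  # 10 ms one-way latency -> 50 I/Os per second
print(max_io_rate(20))      # 20 microseconds -> 25000 I/Os per second
```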

In-order delivery
When data transmissions are received out of sequence relative to the order they were sent, the receiving entity has to sort them and detect missing or corrupt frames. This reordering is not necessarily common, but it can happen and therefore must be protected against. Traditionally, this is not an issue with storage channel technologies such as parallel SCSI. The speed at which storage runs demands the most efficiency from the channel. Out-of-order delivery in wiring can add unnecessary overhead to systems and storage subsystems.

A good question to ask is which component of the wiring should be responsible for ordering data in the network. There are two approaches: the first places the burden on network switches and routers to ensure transmission frames are transported in sequence; the second places it on the receiving storage I/O controller, which reorders frames as needed.
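The second approach can be sketched as follows. This is a minimal illustration, not any controller's actual algorithm; the (sequence number, payload) frame format is invented for the example. The receiver holds out-of-order frames and releases them only in sequence:

```python
def deliver_in_order(frames, expected=0):
    """Release frame payloads in sequence-number order.

    Out-of-order arrivals are held in a pending table until every
    earlier frame has arrived; a gap that never fills would indicate
    a missing frame that must be retransmitted.
    """
    pending = {}
    delivered = []
    for seq, payload in frames:
        pending[seq] = payload
        while expected in pending:
            delivered.append(pending.pop(expected))
            expected += 1
    return delivered

# Frames 1 and 2 arrive out of order; delivery order is restored.
print(deliver_in_order([(0, "a"), (2, "c"), (1, "b"), (3, "d")]))
# ['a', 'b', 'c', 'd']
```

The pending table is precisely the "unnecessary overhead" the text refers to: every held frame consumes buffer space and processing time that an in-order network would not require.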

The wiring used in storage networks is independent of the storing and filing functions that may be used. This allows any networking technology with the characteristics listed previously to be used for storage networking. In other words, both NAS and SAN can use any available, qualified network. It's the word "qualified" that makes things interesting.

The subtle point to make about the independence of wiring is that both NAS and SAN products can use the exact same wiring, or the same type of wiring. Again, this requires the implementation details to be worked out sufficiently, which takes a tremendous amount of effort on the part of many companies and engineers. However, there are no architectural blocks preventing NAS and SAN products from working together on a single storage network.

Storage networking requires new methods for starting, establishing, and managing communications, which are considerably different than those used in bus technologies such as parallel SCSI or ATA. For example, storage networks, by their nature, provide the capability for a storage subsystem to carry on multiplexed communications with multiple hosts simultaneously. Multiplexing in this context refers to the capability to transfer and receive individual data frames with different network entities. This is considerably different from bus technologies such as parallel SCSI and ATA where there is only one entity controlling the bus at any time.

In addition, many more types of transmissions and protocols are typically used in storage networks than on storage buses. There are protocols for coordinating the activities of switches, for addressing, for link state changes, and for all sorts of services provided by the network. There can also be different storing, filing, and communications protocols. Storage devices and subsystems on storage networks have to be able to screen all of them and accept or reject them with 100% accuracy. This is a major shift from bus technologies, where the number of protocols and services is much smaller.

Storing
Storing is mostly concerned with operations covering a logical block address space, including device virtualization, where logical block storage addresses are mapped from one address space to another.

In general, the storing function is mostly unchanged by storage networks, with two noticeable exceptions. The first is the possibility of locating device virtualization technologies such as volume management within storage networking equipment. This type of function is sometimes referred to as a storage domain controller or LUN virtualization.
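A toy sketch shows what this kind of block-address mapping involves: concatenating two devices into one virtual address space and resolving virtual addresses back to physical ones. The device names and layout are invented for illustration, and real storage domain controllers are far more elaborate:

```python
def make_concat_map(devices):
    """Map one virtual logical block address space onto several devices.

    devices is a list of (name, n_blocks) pairs laid out end to end.
    Returns a resolver from virtual LBA to (device, physical LBA).
    """
    extents, base = [], 0
    for name, n_blocks in devices:
        extents.append((base, base + n_blocks, name))
        base += n_blocks

    def resolve(virtual_lba):
        for start, end, name in extents:
            if start <= virtual_lba < end:
                return name, virtual_lba - start
        raise ValueError("address beyond end of virtual volume")

    return resolve

resolve = make_concat_map([("disk0", 1000), ("disk1", 2000)])
print(resolve(500))    # ('disk0', 500)
print(resolve(1500))   # ('disk1', 500)
```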

The other major shift in storing is scalability. Storing products such as storage subsystems tend to have more controllers and interfaces than previous generations of bus-based storage, and also much more storage capacity.

Filing
The filing function has two roles: representing abstract objects to end users and applications and organizing the layout of data on real or virtual storage devices. These two roles are depicted in the figure as the representation layer and the data structure layer.

File systems and databases provide the lion's share of the filing functionality in storage networks, with storage management applications such as backup also functioning as filing applications.

While the filing function has been mostly unchanged to date by storage networking, an obvious exception has been the development of NAS file systems such as the WAFL file system from Network Appliance.

A simple SAN/NAS paradox

SANs have been touted as a solution for high availability. The basic idea is that host systems no longer have to be single points of failure or network bottlenecks. The concept of SAN enterprise storage places the responsibility for data longevity on the storage subsystem. In other words, storage subsystems assume the responsibility to manage themselves and the data that resides on them. Implied in this notion is the possibility that host systems will come and go and change their processing mission, but the data these systems process will be safe and secure on the enterprise storage platform.

Enterprise storage makes a certain amount of intuitive sense. It's a nice idea, with a gigantic problem: How is the self-managing storage subsystem going to become intelligent enough to provide the management services and control of the data it stores? The capability of storage subsystems to support storing-level functions allows them to function as "super virtual devices," but it does not provide any power to act on data objects such as files the way IT managers would like.

The solution is much more difficult than simply placing microprocessors in the storage subsystems. Self-managing storage subsystems must have the capability to determine what blocks correspond to specific data objects (e.g., files, database tables, and metadata) if they are going to manage them. The missing link appears to be some amount of embedded filing functionality that provides the capability to associate data objects with their storage locations. This is squarely in the realm of the data structure layer of the I/O stack. The data structure layer can be thought of as the "bottom half" of the file system that controls the placement of data objects on real or virtual storage.
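At bottom, the data structure layer is a mapping from data objects to the blocks that hold them. The following is a minimal sketch of that idea using a toy extent table; the class, file names, and layout are all invented for illustration and are nothing like a production file system:

```python
class DataStructureLayer:
    """Toy "bottom half" of a file system: an extent table that
    associates data objects with their block locations."""

    def __init__(self):
        self.extent_map = {}  # object name -> list of (start_block, length)

    def allocate(self, name, extents):
        self.extent_map[name] = list(extents)

    def blocks_of(self, name):
        """The question a self-managing subsystem must be able to
        answer: which blocks belong to this data object?"""
        for start, length in self.extent_map[name]:
            for block in range(start, start + length):
                yield block

layer = DataStructureLayer()
layer.allocate("payroll.db", [(100, 3), (512, 2)])
print(list(layer.blocks_of("payroll.db")))   # [100, 101, 102, 512, 513]
```

A subsystem with access to this mapping could act on files rather than anonymous blocks; without it, the blocks of "payroll.db" are indistinguishable from any others.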

So here is the architectural problem for NAS and SAN: Storage subsystems with embedded filing technology are generally thought of as NAS products. So, what would you call a storage subsystem with half a file system? It's neither fish nor fowl. That's why analyzing storage network products as either SAN or NAS does not work. NAS and SAN are not orthogonal, independent entities. Wiring, storing, and filing are.

Marc Farley is a storage professional and author of Building Storage Networks, First and Second Editions.

This article is excerpted with permission from Building Storage Networks, Second Edition, by Marc Farley (Osborne/ McGraw-Hill, ISBN 0-07-213072-5, copyright 2001).

