NAS and SAN: similarities and differences

Posted on May 01, 2000


Network-attached storage and storage area networks address the same problems but from different angles.

By Frederick Shields

Enterprises are being overwhelmed by tidal waves of data. Market research firm Peripheral Concepts (www.periconcepts.com) pegs storage growth at more than 60% compounded annually, while International Data Corp. (www.idc.com) predicts 88%-per-year growth in Unix environments and more than 133%-per-year growth in Windows NT sites through 2001.

It's not amassing data that has become the problem, but managing it. The traditional practice of daisy-chaining disk storage to servers actually thwarts storage management by creating isolated islands of data. To deal with the data influx, enterprises need new storage strategies that address several problems:

  • Availability. Enterprises are doing more and more business over the Web, where business is 24x7. Backup windows are shrinking. Centralized backup from distributed server-attached storage takes too long over LANs. Enterprises don't want to miss customers or stall employees by closing shop for "housekeeping."
  • Scalability. Distance limitations confine server-attached SCSI devices to the same box as the server, or very close to it. This relationship hampers storage and server scalability. Enterprises need to be able to add massive amounts of storage, preferably without outgrowing their servers or having to back up and migrate data.
  • Performance. Managing the retrieval and distribution of data is one of a server's many functions in a server-attached storage environment. I/O bottlenecks on multi-purpose servers bog down performance. Enterprise backups and massive data migrations to data warehouses stress already overworked LANs, hampering e-commerce, customer relationships, and employee productivity.
  • Cost of ownership. Disks continue to drop in price, but other costs associated with server-attached storage are high. Enterprises often have to buy more storage for one island of data, even while capacity goes underutilized in another. Server-attached storage has to stay close to its server, so it can't be centralized in one place for cost efficiencies in management and maintenance. And it can't be shared by multiple operating systems, so enterprises can't reduce costs by standardizing on one enterprise storage solution.

Two new strategies have come to the rescue: storage area networks (SANs) and network-attached storage (NAS). Both take advantage of higher-speed networks for faster data access, let users access storage independent of servers, and enable users to centrally manage storage. Despite these similarities, SAN and NAS are very different; yet, contrary to a common misconception, they can be used together.

SAN: dedicated storage network

The basic idea of a SAN is simple: Put storage devices on a separate high-speed network, where they can be directly accessed by multiple servers, workstations, and PCs and managed as a centralized storage pool.
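To make the idea of a central pool concrete, here is a minimal Python sketch of capacity being allocated to and reclaimed from multiple servers out of one pool. It is purely conceptual; the class and server names are hypothetical and do not correspond to any vendor's SAN management interface.

class StoragePool:
    """Toy model of a SAN-style central capacity pool (conceptual only)."""

    def __init__(self, total_gb):
        self.total_gb = total_gb
        self.allocations = {}  # server name -> GB currently allocated

    def free_gb(self):
        return self.total_gb - sum(self.allocations.values())

    def allocate(self, server, gb):
        if gb > self.free_gb():
            raise ValueError("pool exhausted; add storage to the SAN")
        self.allocations[server] = self.allocations.get(server, 0) + gb

    def release(self, server, gb):
        self.allocations[server] = max(0, self.allocations.get(server, 0) - gb)


# Capacity is assigned from one place instead of being stranded on islands.
pool = StoragePool(total_gb=2000)
pool.allocate("unix_db_server", 800)
pool.allocate("nt_web_server", 300)
pool.release("nt_web_server", 100)
print(pool.free_gb())  # 1000GB still available to any server

The point of the model is simply that capacity is assigned and reclaimed logically, rather than by physically re-cabling disks to a particular server.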

SANs need the bandwidth of interconnects such as Fibre Channel for optimum performance, availability, and scalability. However, because TCP/IP does not run over Fibre Channel yet, some interim SAN implementations may use legacy networks such as Ethernet, FDDI, or Token Ring. When IP on Fibre Channel becomes available, all control and data traffic for server backup will be offloaded from the LAN to the SAN.

SAN implementation, however, is complex: you must purchase Fibre Channel peripherals, switches and/or hubs, and add Fibre Channel host bus adapters to all of your servers. Fibre Channel should ultimately be easier to implement than SCSI, which can become very complex when multiple hosts and arrays are configured for high availability, but in practice, installing a SAN is not a "do-it-yourself" project.


(Left) With a storage area network, servers and clients on the LAN have switched access to all the storage resources on the SAN. (Right) Shared network-attached storage (NAS) appliances plug directly into the LAN.

One of the greatest challenges is interoperability. The goal is for Unix, Windows NT, and NetWare servers to have access to the same storage and share the same data.

However, the reality is quite different, at least today: users cannot freely mix and match devices from different vendors. What's the holdup?

First, there is the issue of different device-level formats for each operating system. Until operating systems adopt a common structure at the device level, they won't be able to share devices. Many Fibre Channel fabric switches and host bus adapters have zoning and binding facilities that address this issue to some degree, and disk arrays are being equipped for multi-host filtering. However, standards for data transfer have yet to evolve, and cross-platform record-locking software is required so that multiple users can access large databases simultaneously. Some database vendors, such as Oracle, offer limited cross-platform capabilities through the use of file-system storage.
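To give a small flavor of the record-level coordination this implies, the Python sketch below uses the standard-library fcntl module to take an advisory byte-range lock on one record of a shared file before updating it. This is a generic POSIX mechanism offered only as an analogy, not the cross-platform record-locking software the article refers to, and the file name and record layout are hypothetical.

import fcntl

RECORD_SIZE = 128
record_number = 7  # hypothetical record to update

# Assumes shared_records.dat already exists and is reachable by every host.
with open("shared_records.dat", "r+b") as f:
    offset = record_number * RECORD_SIZE
    # Lock only this record's byte range so other processes can keep
    # updating different records concurrently.
    fcntl.lockf(f, fcntl.LOCK_EX, RECORD_SIZE, offset, 0)
    try:
        f.seek(offset)
        f.write(b"updated".ljust(RECORD_SIZE, b"\x00"))
    finally:
        fcntl.lockf(f, fcntl.LOCK_UN, RECORD_SIZE, offset, 0)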

Despite the current drawbacks, SANs promise relief for some of the enterprise's biggest headaches:

  • Performance. SANs improve performance by relieving congested LANs of high-volume data traffic generated by backups, large data migrations, business intelligence systems, and bandwidth-gobbling digital video and audio applications. Storage response time is faster because Fibre Channel links can transfer data at 100MBps (for a rough sense of what this means for backup windows, see the sketch following this list). In the future, Fibre Channel bandwidth will double to 200MBps, and then to 400MBps, keeping ahead of SCSI advances.
  • Scalability. Fibre Channel out-scales SCSI. Multi-channel SCSI controllers support a maximum of 30 devices. A Fibre Channel fabric of interconnected switches can address thousands of ports. Bandwidth can be allocated on demand and network reconfiguration is relatively simple.
  • Availability. Storage and server resources can be added online without disrupting data access. In addition, SANs reduce the hardware costs of high availability, making it affordable for more applications. In SAN architectures, all servers can have direct access to all storage. This means that one server can provide fail-over protection for dozens of other servers, many more than is possible in traditional shared-disk clusters, where two servers provide fail-over for four others, an expensive ratio. For backup and disaster recovery, mirrored sites can be located at a safe distance from each other, up to 10 kilometers using fiber-optic cabling.
  • Cost of ownership. By creating a central storage pool for the entire user community, SANs can lower total cost of ownership (TCO). Fewer administrators are required to manage the storage, management is centralized from a single management interface, and storage can be purchased separately from servers. The cost of external storage can be amortized over more servers, and the storage can be dynamically allocated and reallocated for maximum capacity usage. Also, Fibre Channel's high channel speed and low latency shorten backup and restore times, freeing LANs and WANs for business applications that improve productivity and enhance revenue.
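To put the performance bullet's bandwidth figure in rough perspective, the short Python calculation below estimates purely transfer-limited backup times for one terabyte over a 100MBps Fibre Channel link versus a shared 100Mbps Ethernet segment. The figures ignore protocol overhead, tape and disk speeds, and contention, so they are order-of-magnitude estimates only.

# Rough, transfer-limited backup-window estimates.
DATA_GB = 1000  # a hypothetical 1TB backup

rates_mb_per_s = {
    "Fibre Channel SAN (100MBps)": 100,
    "Shared 100Mbps LAN (12.5MBps)": 100 / 8,
}

for link, rate in rates_mb_per_s.items():
    hours = (DATA_GB * 1000) / rate / 3600
    print(f"{link}: about {hours:.1f} hours")

# Fibre Channel SAN (100MBps): about 2.8 hours
# Shared 100Mbps LAN (12.5MBps): about 22.2 hours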

Currently, SANs are more expensive to implement than NAS because of the investment in Fibre Channel hubs, switches, and Fibre Channel-to-SCSI bridges. However, the price difference between Fibre Channel and SCSI is narrowing, and the larger the enterprise, the higher the return on the investment. Given the right set of management tools, enterprises can see a return on investment in two to three years, according to Giga Information Group. (For more information on SAN ROI, see InfoStor, April 2000, p. 30.)

NAS: LAN storage appliances

The basic idea of NAS is simple: Attach special-purpose storage appliances to the LAN, where they can be shared by application servers, workstations, and PCs on the network. These appliances have only one job: file serving. NAS devices can be distributed across a large network and managed centrally.

And unlike a SAN, a NAS device is simple to implement: plug-and-play NAS appliances are available today. NAS appliances use standard file-system protocols, such as NFS and CIFS, for data sharing across multiple operating systems. Cross-platform data sharing is still a couple of years away for SANs.
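Because a NAS appliance presents ordinary files over NFS or CIFS, applications need no special code to use it. The Python snippet below simply reads and writes through a mount point; the share path and file names are hypothetical, and the sketch assumes the operating system has already mounted the NFS share (or mapped the CIFS drive).

from pathlib import Path

# Hypothetical mount point for an NFS share exported by a NAS appliance.
share = Path("/mnt/nas_share")

report = share / "reports" / "q1_sales.csv"
report.parent.mkdir(parents=True, exist_ok=True)

# To the application this is ordinary file I/O; NFS or CIFS moves the
# data to and from the appliance over the LAN.
report.write_text("region,revenue\nwest,1200\neast,950\n")
print(report.read_text())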

NAS addresses the same problems as SANs, but from a different angle:

  • Performance. NAS is slower than SANs, but faster than server-attached storage. NAS improves performance by offloading file serving from the host, freeing CPU cycles for other work. Upgrading to Gigabit Ethernet makes NAS an even more attractive solution, although enormous amounts of data can still tax the network.
  • Scalability. With NAS, storage capacity is no longer tied to server capacity. Enterprises add storage as needed. NAS products scale to multiple terabytes, and by offloading file serving to these devices, servers can support more users.
  • Availability. NAS enables faster backups, minimizing the interruption to data access. Some NAS appliances feature data replication software for rapid data recovery. Network Appliance filers, for example, boast a recovery time from data corruption of less than five minutes. The simplicity of appliances makes them more reliable than traditional LAN file servers. And the appliance approach eliminates many failures induced by complex hardware and operating systems. In addition, NAS appliances can be configured for fail-over.
  • Cost of ownership. Specialized for high-speed file serving, NAS appliances are significantly less expensive than general-purpose file servers. Heterogeneous servers can share access to NAS appliances, so enterprises can save money on hardware, maintenance, and administration by consolidating data on fewer devices in a central location.

Frederick Shields is in storage product marketing at Amdahl Corp. in Sunnyvale, CA. The opinions contained in this article are those of the author and not of Amdahl.
