By Lisa Coleman
Leveraging existing Ethernet-IP infrastructure is saving money for users who want the advantages of a storage area network (SAN) but cannot afford a Fibre Channel SAN.
While low cost is the leading factor in the decision to choose IP instead of Fibre Channel, a few users also cite block-level I/O performance over long distances and Ethernet's simplicity and flexibility for upgrading.
For California's Contra Costa Water District (CCWD), price was the overwhelming factor in choosing an IP SAN, according to Jim Morton, CCWD's IS manager.
"We compared prices to direct-attached storage [DAS], network-attached storage [NAS], and a Fibre Channel SAN. The IP SAN was about 30 cents on the dollar compared to the Fibre Channel SAN, and it had adequate performance for our needs," says Morton.
The IP SAN cost about $3,000 more than DAS and approximately the same as NAS, but Morton felt an IP SAN gave him more management control than NAS. In addition, he could leverage the Gigabit Ethernet backbone that had been installed at CCWD a year earlier.
"The cost of GigE has dropped so low that it's a natural backbone for IP SANs," says Morton.
At the heart of its IP SAN, CCWD uses StoneFly Networks' i1500 Storage Concentrator, a storage provisioning appliance, as the iSCSI target in front of an ATA-based RAID array from Nexsan Technologies. The hosts include a Dell PowerEdge 2650 (2U) running Windows 2000, two IBM x360 P4 servers (for Windows 2000 and Linux/Oracle 8i), and an HP NetServer LH3000R P3 running Windows 2000 and Exchange.
The IP SAN allows flexibility in partitioning volumes and reclaiming partitions without involving the operating systems, according to Morton. Using CCWD's previous technology, it was a time-consuming process to reclaim storage partitioned into RAID 5 volumes.
"Now I have four different hosts connected to one storage volume, and I don't need to worry about what the host's RAID controller is, which makes it very simple," says Morton.
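The pooled-provisioning model Morton describes can be sketched as a toy allocator: volumes are carved from, and reclaimed to, shared capacity without any host-side RAID reconfiguration. The class and volume names below are hypothetical, not CCWD's actual configuration.

```python
class StoragePool:
    """Toy model of IP SAN volume provisioning: capacity is pooled and
    volumes are carved out or reclaimed without touching host OSes."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}  # volume name -> size in GB

    def free_gb(self):
        return self.capacity_gb - sum(self.volumes.values())

    def provision(self, name, size_gb):
        if size_gb > self.free_gb():
            raise ValueError("insufficient free capacity")
        self.volumes[name] = size_gb

    def reclaim(self, name):
        # Capacity returns to the pool immediately; no host-side RAID
        # rebuild is needed, which is the point Morton makes.
        return self.volumes.pop(name)

pool = StoragePool(capacity_gb=1000)
pool.provision("exchange", 200)
pool.provision("oracle", 300)
pool.reclaim("exchange")
print(pool.free_gb())  # 700
```

Contrast this with the DAS model the district left behind, where reclaiming space meant rebuilding RAID 5 volumes on each host's own controller.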
Besides being simple to use and manage, the IP SAN's performance is better than expected, says Morton. Sustained throughput tests showed the IP SAN configuration was faster than CCWD's original DAS, though not as fast as the new DAS the company had considered purchasing. However, the IP SAN's management advantages outweighed the newer DAS's performance edge, says Morton.
Later this year, CCWD plans to build a synchronous mirror for its campus with two StoneFly Storage Concentrators and two Nexsan disk arrays for disaster recovery.
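The defining property of the synchronous mirror CCWD is planning is that a write is acknowledged to the host only after both arrays have committed it, so either copy is current after a site failure. A minimal sketch, with hypothetical class names rather than StoneFly's actual software:

```python
class Array:
    """Stand-in for one disk array at one site."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.online = True

    def write(self, lba, data):
        if not self.online:
            raise IOError(f"{self.name} unreachable")
        self.blocks[lba] = data

class SyncMirror:
    """Toy synchronous mirror: the host's write is acknowledged only
    after BOTH arrays commit, unlike asynchronous replication where
    the remote copy may lag."""
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def write(self, lba, data):
        self.primary.write(lba, data)
        self.secondary.write(lba, data)  # blocks until remote commit
        return "ack"

a, b = Array("site-a"), Array("site-b")
mirror = SyncMirror(a, b)
mirror.write(0, b"payload")
assert a.blocks[0] == b.blocks[0]
```

The cost of this guarantee is that host write latency includes the round trip to the remote array, which is why synchronous mirroring is typically confined to campus or metro distances.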
Leveraging existing IP network
Houston-based INTEC Engineering, a project management and engineering company serving the oil and gas industry, is leveraging its existing Cisco IP-based networks for a new IP SAN.
INTEC has 10 offices worldwide with about 6TB of storage. The company's users perform high-speed calculations requiring large amounts of storage and scalability—one reason the company decided to investigate SANs.
"I've never been a huge fan of Fibre Channel SANs because they require a separate infrastructure," says Chris Warlick, INTEC's director of IT. But what drew him to IP SANs were the ease of expanding the storage environment and the ability to leverage Cisco's IP technology.
Six months ago, INTEC and Got Net Solutions, a network services company, installed a fully redundant IP SAN using two Cisco Catalyst 3550 switches connected via Gigabit fiber ports. The other Gigabit fiber ports connect to Cisco 5428 storage routers.
INTEC combines Cisco's multi-protocol storage routers and switches with Fibre Channel disk arrays in a fully redundant IP SAN.
The multi-protocol (Fibre Channel and IP/iSCSI) storage routers run in a cluster mode to ensure maximum uptime and allow the routers to share iSCSI instances and fail-over when necessary. The high-availability ports from each router connect to the primary iSCSI switch. The Fibre Channel ports connect to the disk arrays (see diagram).
The IP SAN includes three Kinetix Vector 1600FC disk arrays, each with two Fibre Channel ports. The arrays are configured with two RAID 5 arrays in each chassis. Two Dell servers, each with five NICs, are set up as a mirrored pair. Two NICs connect to the server VLAN, two NICs connect to the iSCSI switches, and one NIC connects to the other server via a crossover cable for "heartbeat" communications.
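The crossover-cable "heartbeat" works on a simple principle: each server timestamps messages from its peer and takes over the peer's role if none arrive within a timeout. A toy sketch of that logic (interval and timeout values are illustrative, not INTEC's settings):

```python
import time

class HeartbeatMonitor:
    """Toy failover monitor for a mirrored server pair: if the peer's
    heartbeat goes silent past the timeout, this node takes over."""
    def __init__(self, timeout_s=3.0):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()
        self.active = False  # True once this node has assumed the peer's role

    def beat(self):
        # Called whenever a heartbeat arrives over the crossover link.
        self.last_beat = time.monotonic()

    def check(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_beat > self.timeout_s:
            self.active = True  # peer presumed down; take over
        return self.active

mon = HeartbeatMonitor(timeout_s=3.0)
mon.beat()
print(mon.check())                         # peer alive -> False
print(mon.check(now=mon.last_beat + 10))   # missed beats -> True
```

A dedicated crossover link matters here: if the heartbeat shared the data network, congestion could mimic a dead peer and trigger a false failover.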
INTEC uses Cisco's software driver on the iSCSI side and Veritas' Volume Replicator and Volume Manager on the replicator side.
Prior to installing the IP SAN, INTEC was using NAS, which has now been relegated to storing deleted files. Warlick says the company gained a 30% to 40% performance increase with the IP SAN versus NAS.
Swapping FC for IP
The decision to move to an IP SAN was primarily a matter of price for Zenon Environmental, a water treatment company in Ontario, Canada. Zenon was using EMC Clariion Fibre Channel arrays, but determined it could not afford to keep them due to high renewal, maintenance, and related costs. Therefore, the company started looking at alternatives to a Fibre Channel SAN. After checking out a handful of IP SAN vendors, Zenon chose EqualLogic's PeerStorage Array.
"We're a mid-sized company with 500 to 600 users in our office, so the initial $100,000 for a Fibre Channel-based SAN was justifiable. But the $30,000 per year just to maintain it was not. To add 600GB of usable storage costs $25,000, and I got 3x that from EqualLogic for maybe one-half the price. The cost per gigabyte is much lower," says Shawn Eveleigh, senior systems administrator at Zenon.
Eveleigh is migrating all of Zenon's servers to the IP SAN. The configuration is simple and involves plugging the PeerStorage devices into an Ethernet switch that is attached to the servers.
Zenon's infrastructure is based on Windows (Dell PowerEdge and HP/Compaq ProLiant servers) except for a few special-purpose servers running Linux. Storage from five key servers—Exchange, Windows NT 4.0, OpenText, SQL, and Windows—is being migrated to the IP SAN. Zenon is using Microsoft's iSCSI software initiator on all the servers except the NT server, where it is using Adaptec's iSCSI hardware initiator.
Eveleigh has not benchmarked the IP SAN against the original Fibre Channel SAN, but he says the performance of the IP SAN is "good enough." "If you don't need the performance of a Fibre Channel SAN, why pay for it?" he says.
Cost was only one factor in making the move to an IP SAN for Wiss, Janney, Elstner Associates (WJE), an architectural firm in Northbrook, IL. WJE also wanted a system it could update with the latest technology advances, says Ray Jaskot, IT director at WJE.
Jaskot investigated many different solutions before choosing LeftHand Networks' Network Storage Module (NSM) 100 subsystems, which he claims fit his description of a "true IP SAN."
"You plug it into your existing switches or a dedicated switch running over base copper and you're able to use remotely attached storage and not have all these interface boxes in between," says Jaskot.
He uses off-the-shelf Intel network interface cards (NICs) on Micron servers, NetGear switches, and standard copper cabling. Jaskot was upgrading to a faster backbone when he decided to go with the NSMs, which plug directly into the Ethernet switches.
WJE has nine NSMs split into three separate groups, totaling 3.6TB. The firm's 19 offices around the country are connected via a WAN. For security and data traffic control, WJE also runs a split LAN: all IP SAN and backup traffic is on one LAN, while file sharing and general Internet traffic are on a separate LAN served by the same servers. WJE runs Exchange, SQL Server, and Web-related applications.
LeftHand's NSMs are based on a proprietary block-level IP protocol, but the company plans to support iSCSI later this year (see "LeftHand Networks adds IP SAN options," p. 12). However, Jaskot is satisfied with LeftHand's proprietary protocol because it has been reliable with no performance problems.
Sandia National Laboratories' High Speed Computing and Networking group is researching various uses of iSCSI with different cluster file systems for cluster supercomputing environments. Sandia wants to deliver high-throughput, low-latency network and storage performance to large-scale scientific applications running on supercomputers. Sandia is evaluating an IP SAN for its cost efficiency, especially in leveraging Gigabit Ethernet. In addition, IP provides a benefit that a Fibre Channel-based SAN does not: it can be used over long distances.
"IP storage provides us with the possibility of having block-level I/O performance over distance," says Helen Chen, a network researcher at Sandia. However, low cost was also a determining factor at Sandia. In the past, storage networking for supercomputers has been very expensive, so finding cheaper alternatives was an important goal, says Chen.
To achieve its goal, Sandia is testing Intransa's IP5000 modular IP SAN array connected to Dell 2650 servers using Adaptec's iSCSI host bus adapters (HBAs) and a Gigabit Ethernet switch. Eventually, an IP SAN may be used in conjunction with the lab's supercomputers.
Sandia chose to test the Intransa IP5000 because it used ATA drives and its back-end was IP-based, says Chen. "You can leverage IP and Gigabit Ethernet. Unlike other systems with back-ends that may be direct-attached storage or Fibre Channel, this is truly IP storage," says Chen.
The testing process includes evaluating raw bandwidth and throughput and protocol-processing overhead. So far, Sandia has achieved 160MBps with two concurrent sessions. Testing is expected to take at least a year.
From DAS to IP SAN
The University of Alabama's department of pathology began upgrading to an IP SAN about two years ago because its DAS was too cumbersome to upgrade and backups were very time-consuming. The department wanted a SAN, but price was a big issue.
"The cost of Fibre Channel was a lot more than we could budget for," says Sterling Griffin, IT director in the pathology department. A Fibre Channel SAN would have been 60% more than the price of its current IP SAN, he claims.
Griffin could not afford to replace his existing SCSI hardware with Fibre Channel, nor could he afford Fibre Channel HBAs and switches. However, when he began looking at his SAN options in 2001, IP SAN products were almost nonexistent; eventually, SANRAD agreed to let the university beta-test its iSCSI V-Switch 3000.
"We were able to re-use our existing SCSI subsystems, which was key," says Griffin. Using the V-Switch as the heart of his IP SAN, Griffin could also use standard Gigabit Ethernet switches and cables.
The department has 12 servers from Dell and SuperMicro, which are configured into three clusters. Some are connected to the iSCSI network and some are still connected to DAS, but will eventually be running on iSCSI. A Microsoft Exchange server has direct-attached storage, while a SQL server runs on iSCSI, and a separate server is backed up directly via iSCSI. The servers running iSCSI are plugged into an IP switch that is also plugged into the SANRAD V-Switch, which is attached to the department's Promise Technology RM8000 RAID IDE arrays with about 1.5TB. The IP SAN network uses standard Gigabit Ethernet NICs from Intel.
Although Griffin has not run performance tests, he believes the IP SAN is about the same speed as DAS. "Users did not notice any difference in file retrieval time," he says.
SANRAD ships IP SAN switch
By Lisa Coleman
SANRAD kicked off its IP SAN family recently with the iSCSI V-Switch 3000 and plans to introduce an entry-level product for general business applications later this year.
The V-Switch is a combination of a multi-protocol switch, storage virtualization engine, storage router, and a fabric bridge to SCSI and Fibre Channel. The switch gathers all physical storage resources (iSCSI, SCSI, and Fibre Channel) into a single storage pool.
The switch sits in the middle of the network and can tap into an existing Fibre Channel SAN. Administrators can turn pieces of the Fibre Channel SAN into an IP SAN and drive I/O over hosts connected to IP.
"Our goal is to look at IP SANs the same way as Fibre Channel SANs. The ultimate IP SAN should have the same level of high availability, security, fault tolerance, and active-active fail-over," says Zophar Sante, vice president of market development for SANRAD.
The V-Switch allows administrators to define new logical volumes from the storage pool and perform functions such as mirroring, striping, remote copy, volume concatenation, LUN carving, snapshots, multi-pathing, fail-over, security, port performance aggregation, and a selectable quality of service per host.
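Two of those functions, LUN carving and volume concatenation, boil down to mapping logical block addresses onto extents carved from different physical devices. A minimal sketch of that address translation, using hypothetical device names (not SANRAD's actual implementation):

```python
class ConcatVolume:
    """Toy model of volume concatenation / LUN carving: a logical
    volume is assembled from extents carved out of separate physical
    devices, and each logical block address (LBA) is translated to a
    (device, physical LBA) pair."""
    def __init__(self):
        self.extents = []  # list of (device, start_lba, length)

    def add_extent(self, device, start_lba, length):
        self.extents.append((device, start_lba, length))

    def map(self, logical_lba):
        offset = logical_lba
        for device, start, length in self.extents:
            if offset < length:
                return device, start + offset  # falls in this extent
            offset -= length                   # skip past this extent
        raise IndexError("LBA beyond volume end")

vol = ConcatVolume()
vol.add_extent("scsi-array-0", start_lba=0, length=100)
vol.add_extent("fc-array-1", start_lba=500, length=200)
print(vol.map(50))   # ('scsi-array-0', 50)
print(vol.map(150))  # ('fc-array-1', 550)
```

Because the translation lives in the switch rather than in host agents, hosts see one ordinary volume regardless of which SCSI, Fibre Channel, or iSCSI devices back it.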
The switch does not require host agents and operates in the data path of a storage network. The 1U device includes three iSCSI Gigabit Ethernet ports, four Fibre Channel or SCSI ports, and hot-swap power supplies. It also uses dual processors, 12Gb internal buses, and VxWorks, a real-time operating system. Storage controllers sit on two PCI buses. The V-Switch 3000 is priced from $15,000 to $25,000.