Getting SAN ROI right means getting what you need

Even with tight budgets, a return on investment (ROI) argument can usually make SAN spending a "no-brainer"—whether it's for initial deployment or SAN upgrades.

By Alan R. Earls

When BlueCross BlueShield of Tennessee (BCBST) needed to rein in costs associated with its exploding storage needs, the answer was not obvious. At the time (about three years ago), storage area networks (SANs) were still a relatively new concept and not an easy sell. "We started out just playing around with ideas," explains Bob Venable, manager of enterprise systems at BCBST.

As a primary health insurer in Tennessee, serving 4.1 million people with more than 4,300 employees, the not-for-profit organization had seen storage demand skyrocket. Explosive data growth, triggered by a record year in both sales and enrollment, left the IT team at BCBST wondering how they would meet these growing requirements. BCBST needed to gain control of its rapidly growing and disparate storage infrastructure (200 servers, 70 storage devices, and four operating system environments) with a solution that could easily scale with future growth while lowering overall IT costs. It also needed to do this in a non-disruptive manner.

Although SANs promised the potential of greater efficiency, it was also a case of risking everything in one basket. As Venable explains, with a SAN in place, everyone in the enterprise would become dependent on the reliability of that one infrastructure. However, he says it was clear something had to be done to get control of growth and to provide better redundancy. Employing an ROI tool provided by McData, one of BCBST's primary storage vendors, they began to see SANs as a solution and made the leap—and the spending commitment—approximately two years ago.

BCBST went ahead with a SAN that included McData's 6000 series directors, 3000 series fabric switches, 1000 series loop switches, and SANavigator storage network management software. And, more or less as predicted, the ROI numbers have added up. Venable says the year-to-year savings have been in the range of $1 million. "We've been able to grow our business by 15% while at the same time improving corporate efficiency by 40%," he says.

In fact, Venable says that over the last six years the amount of data traveling through BCBST's network has grown from 500GB to the current 60TB, while the manpower necessary to manage that data hasn't grown at all.

Venable reports that after only a month of running SANavigator it had already saved BCBST about $120,000 with its asset management features. More recently, Venable says the product was able to spotlight inefficiencies that helped avoid $300,000 in additional storage acquisitions.

The BCBST example shows that SAN ROI is real, but also underscores the complexity of analyzing costs and future growth. In Venable's situation, BCBST's heterogeneous collection of operating systems and servers—each with a different rate of storage growth—forced a degree of educated guessing. Fortunately, that guessing turned out to be accurate.

Bob Passmore, an analyst with the Gartner Group consulting firm, says the challenge of developing compelling ROI numbers in a SAN environment is that there is almost always more than one thing going on at once. In particular, storage resources are generally expanding, sometimes dramatically, at the same time that a SAN is being considered.

"If you make the mistake of trying to justify network storage in terms of trying to understand overall growth in storage, you are unduly burdening your decision-making," says Passmore.

"You need to first justify the initial storage capacity you are seeking, and then justify how to move from direct-attached storage [DAS] to networked storage where you can eliminate inefficiencies," he adds.

Passmore says the next step can be trickier because SAN storage looks a lot more expensive at first glance. In fact, Passmore says Gartner's research shows that it is often as much as four times as expensive to purchase on a per-megabyte basis when the cost of switches and other components are included.

However, running out of storage is a very expensive proposition for most organizations. To guard against that possibility, IT organizations invariably overbuy. With DAS, that overbuying can increase the amount of storage under management by two or three times.

Put another way, DAS-based installations rarely achieve utilization rates higher than 30% to 50%, and they generally require much more management and labor-intensive backups.

In contrast, says Passmore, SAN installations generally achieve capacity utilization rates of 70% or higher and sometimes greater than 90% utilization.

What's more, management is simplified, greatly reducing the direct labor costs associated with storage.
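The utilization argument reduces to simple arithmetic. The sketch below uses the utilization ranges cited above (30% to 50% for DAS, 70% or higher for SAN) and a hypothetical 4-to-1 raw price premium; the specific dollar amounts are illustrative, not from the article. Utilization alone narrows the gap considerably, and the labor savings close the rest.

```python
# Illustrative only: a SAN may cost ~4x as much per raw megabyte (per
# Gartner's figure above), but utilization changes the effective cost
# of each megabyte that actually holds data. Prices are hypothetical.

def cost_per_usable_mb(price_per_raw_mb, utilization):
    """Effective cost of each megabyte actually holding data."""
    return price_per_raw_mb / utilization

das = cost_per_usable_mb(1.0, 0.40)   # DAS: 30%-50% utilization, midpoint 40%
san = cost_per_usable_mb(4.0, 0.80)   # SAN: 70%-90% utilization, midpoint 80%

# The 4x raw premium shrinks to 2x per usable megabyte; labor savings
# and simpler management account for the remaining TCO difference.
print(f"DAS: {das:.2f} per usable MB")
print(f"SAN: {san:.2f} per usable MB")
```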

Taking a similar tack, Paul Ross, manager of network storage marketing at EMC, cites a McKinsey-Merrill Lynch study showing the average total cost of ownership (TCO) for DAS is $0.84 per megabyte. By contrast, the average TCO drops to between $0.34 and $0.38 per megabyte for a networked storage configuration.

"So even though it may cost four times as much in dollars-per-megabyte, with the TCO you might see it as 50% lower than DAS," and with an ROI analysis the numbers can also look very favorable, says Ross.

Storage-related personnel costs

Passmore warns that one of the biggest challenges can be getting a handle on storage-related personnel costs. "Managing DAS is often a part-time job for a number of people, each of whom tends to under-report the time they spend," says Passmore. Indeed, he says it isn't uncommon for individuals to report that they spend only 15% to 20% of their time on DAS management when, in fact, they are spending 60% to 70% of their time on it.

Regardless of the actual figure, though, Passmore says DAS growth almost always means a linear growth in personnel. "If you're buying storage and not changing how you manage it, there will be proportionally more labor for things like backups and restores."

Passmore says the amount of storage that can be managed with SANs is dramatically higher.

"Anecdotally, customers tell us they are managing three to 100 times as much storage with the same staff, most typically around 10 times as much." Echoing Passmore's observation, BCBST's Venable says many people have trouble believing his claim that only one full-time-equivalent person can successfully manage his company's 60TB of data, "until we bring them in and show how it's done."

Cornèr Bank, a private Swiss banking institution that has invested in a SAN, also reports sharp boosts in staff productivity. The SAN, used to support two mainframes, two tape libraries, and more than 150 servers, paid for itself in just over 24 months, and that calculation does not even factor in operational and management savings, according to IT director Charles Inches.

The bank's SAN is based on two Inrange IN-VSN FC/9000 Fibre Channel directors in a multi-vendor environment. The director-based SAN enables backup and recovery for business continuance, and provides a foundation for non-disruptive growth.

Inches says the SAN has boosted the amount of storage managed by an administrator by 35%. And he anticipates further dramatic boosts in SAN-related efficiencies when more-sophisticated storage resource management (SRM) software becomes available.

Making the case for spending

Even with such dramatic successes, though, it can be tricky for most organizations to make the case for any spending in the current economic environment. But there are ways.

Randy Kerns, an analyst with the Evaluator Group consulting firm, wrote a white paper last year entitled "The Economics of a Storage Strategy," which helps make the SAN ROI case. Kerns argues that SANs are usually easy to justify.

He says that many end users who have gone to SANs find the ability of their administrators to manage storage increases by a factor of four, and the decrease in backup time leaves more time available to focus on applications. "Those are the dominating factors in an ROI equation: Don't make it too complicated," Kerns warns. However, he says, "If you don't have your ROI strategy in place ahead of time you will face a battle.

"People normally want ROI of 12 months or less because in tight times you can have a harder time justifying something longer-term," continues Kerns. He also points out the different weight attached to hard and soft costs (such as lost opportunity) in making an ROI argument.

Kerns says upper management can be "very argumentative" about soft costs (i.e., expected improvements in efficiency), so he advises developing a storage strategy that includes agreed-upon ROI criteria and metrics.

It is also advisable to be sure you are using the same terminology as everyone else. Dr. C.J. McNair, a professor of cost management at Babson College, in Wellesley, MA, warns that ROI is not the same as "payback."

Payback, she says, is the number of years required to recoup the original investment in an asset. ROI is the amount of profit earned in a year divided by the dollars invested. So, if an organization earns $10,000 on $100,000 of invested capital, that is a 10% ROI.

In a recent SAN study, KPMG Consulting (now called BearingPoint) also carefully made the distinction between ROI and payback period. The study noted, "An ROI of 525% means that the SAN provided $5.25 of benefit annually for every $1 of investment required to implement and maintain the SAN."

Using that definition and experience from several companies, KPMG recorded annual SAN ROI ranging from 125% to 525% and payback periods of two months to nine months.
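McNair's and KPMG's definitions can be checked with a few lines of arithmetic. The figures below are the ones quoted above; the reciprocal relationship between annual ROI and payback is an assumption that holds only when the annual benefit is roughly constant.

```python
# ROI vs. payback, per the definitions above. All dollar figures are
# the illustrative ones quoted in the text.

def roi(annual_profit, investment):
    """Annual return on investment, as a fraction."""
    return annual_profit / investment

def payback_years(investment, annual_profit):
    """Years to recoup the original investment."""
    return investment / annual_profit

# McNair's example: $10,000 earned on $100,000 invested is a 10% ROI.
mcnair = roi(10_000, 100_000)

# KPMG's definition: a 525% ROI means $5.25 of annual benefit per $1.
kpmg = roi(5.25, 1.00)

# The two measures are reciprocals: a 525% ROI implies a payback of
# about 1/5.25 of a year, a bit over two months, which is consistent
# with the two-to-nine-month payback periods KPMG reported.
months = payback_years(1.00, 5.25) * 12
print(f"ROI {kpmg:.0%} -> payback {months:.1f} months")
```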

Michael Karp, an analyst with Enterprise Management Associates, a consulting group that specializes in IT decision-making, says ROI requires much more understanding of the business than does TCO—which simply looks at costs rather than potential benefits. "If you're comparing something that costs $1,000 and something that costs $10,000, TCO will always point to the $1,000 solution, regardless of whether it takes more time to use or provides less functionality," he says.

Karp recommends trying to understand not only the SANs themselves, but also the business processes they support. "Building a business case for storage investment requires that you be able to demonstrate things like the cost to the business of each minute of downtime," he says.

Once you have looked at both the savings and the opportunities for increased efficiency and also the possible incremental revenues—in short, a thorough ROI calculation—it is very easy to cost-justify SAN investments, according to Karp.

Passmore advises would-be SAN adopters to start with their reason for adding storage. Then they can justify the incremental purchase price, understanding that the TCO will be lower but that the real justification for SANs comes in eliminating the need for extra staff.

"It boils down to either hiring more people or finding a way to get more bang for the buck," he says.

As for advice for those attempting an ROI exercise, BCBST's Venable says, "I always tell people you should consider 'what if I'm successful' not 'what if I fail.' " He says you should look at where you need to be in two or three years and envision the steps you're taking today as part of that plan. "Plan to be successful," he says.

A similarly off-the-cuff approach is espoused at the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC), which supports the U.S. military and other government agencies with highly accurate weather forecasts around the globe. To provide the necessary storage and throughput to meet the needs of its multiple SGI Origin 3000 series servers, FNMOC installed a SAN with 8TB of high-performance disk storage.

Redundant switches and multiple Fibre Channel connections to each system and storage array provide the necessary bandwidth and availability. Mike Clancy, chief scientist and deputy director at Fleet Numerical, says the move to a SAN was entirely based on "seat of the pants" calculations, but was rooted in the need for performance and a long-term need to accommodate growth while living with shrinking budgets.

Running out of gas

Looking ahead, Passmore predicts that SANs will "run out of gas" within a few years in terms of their ability to deliver cost savings. However, by then Passmore says the industry will be delivering much higher-level management products.

"There is a promise from vendors of an order-of-magnitude improvement in storage management productivity," he says. Passmore says that 2003 will likely bring first-generation capabilities, "still with lots of holes," but by 2005 very capable SAN management products should be in the pipeline. And while smaller players are coming up with point solutions and exciting innovations, Passmore predicts that most of the functionality will eventually come from large, established players. "Even testing some of these products will take more time and money than most start-ups have," he says.

Alan R. Earls is a freelance writer in Franklin, MA.


1Gbps vs. 2Gbps FC: When to make the move

By Paul Manson

Now that many end users have invested in 1Gbps Fibre Channel storage area networks (SANs), they face another crossroads with the advent of 2Gbps technology. When should you upgrade, if at all? The decision need not be all-or-nothing, because you can mix and match 1Gbps and 2Gbps devices on the same network.

The benefits of 2Gbps Fibre Channel are obvious: twice the speed and twice the bandwidth. However, 1Gbps is sufficient for many applications, while others require 2Gbps.

Applications that will benefit from faster SAN speeds include backup/restore and disaster recovery, remote mirroring, online transaction processing, database applications, and audio/video. Backup/restore times, for example, could be cut by almost 50%, while the number of tape drives that can be shared increases.
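The backup-window claim follows directly from throughput arithmetic, under the assumption that the Fibre Channel link, not the tape drives or the server, is the bottleneck. The nightly volume and usable payload rates below are hypothetical.

```python
# Sketch of the ~50% backup-window claim, assuming the FC link is the
# bottleneck. Backup traffic flows one way, so full-duplex rates do
# not apply; usable one-way payload rates below are rough assumptions.

def backup_hours(data_gb, link_payload_mbps):
    """Hours to move data_gb gigabytes at a sustained payload rate (MBps)."""
    return (data_gb * 1024) / link_payload_mbps / 3600

nightly_gb = 2000  # hypothetical nightly backup volume

t_1g = backup_hours(nightly_gb, 100)  # 1Gbps FC: ~100MBps usable one way
t_2g = backup_hours(nightly_gb, 200)  # 2Gbps FC: ~200MBps usable one way

# Doubling the link rate halves the window, freeing time on shared drives.
print(f"1Gbps window: {t_1g:.1f}h, 2Gbps window: {t_2g:.1f}h")
```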

Higher-speed products—including host bus adapters (HBAs), switches, array controllers, and routers/gateways—run at 2Gbps when all components in a particular network segment are 2Gbps devices, or they can run at 1Gbps if the environment includes 1Gbps devices. Most 2Gbps products auto-negotiate the speed, making it possible to gradually transition to 2Gbps by initially focusing only on critical applications and/or network domains.

Moving to faster speeds does not require a "forklift upgrade." Applications that run efficiently at 1Gbps can stay on 1Gbps links, while applications whose run-time windows continue to grow can be moved to a segment of the storage network built with 2Gbps products.

The case against upgrading

Despite the benefits of 2Gbps networks, you have to address two issues: justifying the cost (ROI) and taking advantage of the additional bandwidth (filling the pipe).

The question of ROI can be answered by analyzing your current network performance. If your network is performing well and you do not anticipate moving large amounts of media-rich content, then the upgrade costs are probably not justified.

Next is the question of whether you can actually fill a 2Gbps pipe. Consider that a single HBA in a 64-bit, 33MHz PCI slot can sustain a data rate of about 240MBps. A 1Gbps HBA runs at 200MBps in full-duplex mode, while a 2Gbps adapter can achieve a maximum of 400MBps. At burst speeds, then, the server bus becomes the bottleneck for a 2Gbps HBA. A 66MHz PCI slot improves throughput, and the new PCI-X standard (133MHz) will enable an even better match between server bus and wire speed.
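The bus-versus-wire arithmetic can be made explicit. The figures below are theoretical peak PCI rates (bus width times clock); sustained rates run lower, which is why the 240MBps figure above sits just under the 264MBps peak of a 64-bit, 33MHz slot.

```python
# Theoretical peak PCI bandwidth: bus width (bits / 8) times clock (MHz)
# gives MBps. Sustained rates are lower in practice.

def pci_peak_mbps(bus_bits, clock_mhz):
    """Theoretical peak PCI bandwidth in MBps."""
    return bus_bits / 8 * clock_mhz

pci_33 = pci_peak_mbps(64, 33)    # 264 MBps: below a 2Gbps HBA's 400MBps peak
pci_66 = pci_peak_mbps(64, 66)    # 528 MBps: headroom for a 2Gbps adapter
pci_x  = pci_peak_mbps(64, 133)   # 1064 MBps: room for future wire speeds

fc_2g_full_duplex = 400  # MBps, a 2Gbps HBA's full-duplex maximum
print(pci_33 < fc_2g_full_duplex)  # the 33MHz bus is the bottleneck
```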

Finally, moving to 2Gbps introduces the prospect of doing an intrusive upgrade to a network that presumably has been stable, which presents potential risk.

Cost comparisons

Component costs vary, but on average, 2Gbps HBAs cost about 10% more than 1Gbps ones, and in some cases prices are equivalent.

The cost of moving to 2Gbps becomes more significant in the case of Fibre Channel switches, with 2Gbps switches costing 20% to 25% more than 1Gbps switches.

In the case of disk arrays, the increased cost for 2Gbps equipment varies widely among vendors.


Small and mid-sized companies with limited IT budgets may want to start with (or stick with) a 1Gbps SAN to realize the cost advantages.

List prices on 1Gbps switches and routers have dropped about 50% since 2Gbps devices were introduced. In addition, the biggest bottleneck in networks is likely to be servers, not switches or network interconnects.

For large companies that depend on constantly updated databases and other high-performance applications, the time has come to switch to 2Gbps because the price premiums can easily be justified by the increased performance. The increased bandwidth will also position you well as other network components offer higher throughput.

If your upgrade is planned properly, there is no need to "throw the baby out with the bath water." Since 2Gbps products will work with 1Gbps products, you can deploy both on the same storage network. For example, move slower switches out to the edge of the SAN, and build the core SAN with 2Gbps directors/switches. Then re-deploy your 1Gbps HBAs to segments of the network that don't require increased bandwidth while adding 2Gbps adapters for applications such as backups.

Paul Manson is a product manager at TidalWire (www.tidalwire.com) in Westborough, MA.

This article was originally published on January 01, 2003