Data security in the 'virtual SAN'

Posted on March 01, 2002


Building bulletproof security when connecting SANs over WANs, while maintaining gigabit speeds, presents challenges.

By Scott Lukes

Given the formidable array of dangers facing today's enterprise network operations, security has taken on a high priority for network designers and IT managers. As more corporations open their networks to users and services at off-site locations through operations ranging from branch and remote offices to high-speed data centers, it is as important to keep the network open to those who need access as it is to keep it closed to those who do not.

Today, storage area networks (SANs) have become the de facto standard for enterprise-level storage applications involving geographically dispersed sites. A SAN is a group of storage devices, usually (though not necessarily) in the same physical location, connected via a high-speed dedicated network to provide highly reliable data request handling.

While there are many advantages to a SAN, there are serious security issues when allowing outside access to the SAN or when connecting multiple SANs over a WAN, a configuration often referred to as a "virtual SAN." First, centralizing storage for a multi-site enterprise requires branch offices, telecommuters, and remote users to be guaranteed secure access. Second, when connecting multiple SANs over a WAN, as in the case of storage backup for disaster recovery, secure high-speed connectivity must be provided over the WAN link. In a medium-sized or large corporation with many thousands of users, even a relatively non-critical storage server requires a massive amount of bandwidth, often exceeding 1Gbps.

This poses a problem: How can the company provide robust high-bandwidth access to all these users, across potentially thousands of locations, while maintaining an adequate level of security?


In a "virtual SAN," SAN-to-SAN and user-to-SAN tunnels are secured with virtual private networks (VPNs) and firewalls.

From a network-level perspective, there are two major issues to consider when deploying security in a virtual SAN environment. The first issue is to simply connect these geographically disparate users and locations over a secure link. When a company occupies only one physical location, gigabit networking is comparatively simple: Run Category 5E or higher unshielded twisted-pair cable (or preferably fiber) for 1Gbps of bandwidth over the network.

However, in a large, multi-location corporation, laying out cable can be impractical or impossible. In a metropolitan area network, private or leased cable lines may be used. But for larger WANs or for networks with many connected users, a virtual private network (VPN) over the public Internet is often the best solution.

A VPN, being "virtual," piggybacks on a public WAN. The symbiotic relationship seems to make perfect sense: If distant offices and data centers can tie into a common network (i.e., the Internet), why even consider a direct leased line?

One answer is "security." No organization would transport all its data, unsecured, over an inherently unsafe public network like the Internet without some form of protection. To minimize such risks, VPN technology uses highly secure, encrypted data "tunnels" to transport sensitive data across the Internet.

The second issue when designing a distributed SAN environment is protecting individual sites and users from other security threats via firewall devices. A firewall is a network device that filters incoming streams of packets, accepting or denying them on the basis of a set of established criteria that defines an organization's security policy. Some firewalls can even scan for viruses, unauthorized users, and other security threats.
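The first-match filtering a firewall performs can be sketched as a scan down an ordered rule list. The sketch below is a simplified illustration in Python; the rule set and port numbers are hypothetical, and real firewalls inspect full packet headers and maintain per-session state:

```python
from collections import namedtuple
from ipaddress import ip_address, ip_network

# A rule: action to take, source network to match, destination port (None = any).
Rule = namedtuple("Rule", "action src dst_port")

def filter_packet(rules, src_ip, dst_port, default="deny"):
    """Return the action of the first rule matching the packet.

    Packets matching no rule fall through to the default policy.
    """
    src = ip_address(src_ip)
    for rule in rules:
        if src in rule.src and rule.dst_port in (None, dst_port):
            return rule.action
    return default

# Hypothetical policy: allow iSCSI (port 3260) from the corporate net, deny the rest.
policy = [
    Rule("accept", ip_network("10.0.0.0/8"), 3260),
    Rule("deny", ip_network("0.0.0.0/0"), None),
]

filter_packet(policy, "10.1.2.3", 3260)    # "accept"
filter_packet(policy, "198.51.100.7", 80)  # "deny"
```

Note that rule order matters: the catch-all "deny" must come last, mirroring the default-deny posture most security policies take.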

In any network environment, firewalls must be placed at strategic locations to monitor and control access to information residing in various locations. While most SAN switch vendors have their own protocols that secure data flow inside the SAN, once outside the SAN, all bets are off. Two challenges facing enterprises and service providers are how to protect the rest of the switching network surrounding the SAN, and how to firewall the massive incoming data streams originating from other SANs or from local and remote users requesting data from the corporate SAN.

The figure depicts a typical virtual SAN environment, where multiple remote offices and remote users must access the SAN over public network infrastructure.

The problem

Unfortunately, until recently there have been no security solutions with enough horsepower to manage the massive volume of traffic sourced from and destined to the SAN in a Gigabit Ethernet environment. Like the enterprise or data-center Gigabit Ethernet backbone, security devices such as firewalls and VPNs have acted as bottlenecks to full gigabit throughput.

In a multi-location network like that in the figure, users at remote locations access the SAN at corporate headquarters. If other traffic at these satellite sites is significant enough, some of them may need gigabit links. However, the only sites that really need gigabit connections are the ingress point to the main corporate network, where the SAN is located, and the secondary network, where the backup SAN is located. Ideally, the bandwidth available to the SAN should equal or slightly exceed the sum of the bandwidth of all the incoming links. (Telecommuters typically need less bandwidth but can comprise a significant load if there are enough of them.)

All locations need to secure both the access connection (with an end-to-end VPN) and their own internal networks (with a firewall). A common way to provide this security is through a firewall device that also provides VPN functionality. At corporate headquarters, whose LAN backbone (and more commonly, access link) is Gigabit Ethernet, this firewall/VPN device needs to support full-gigabit throughput to prevent traffic bottlenecks.

Unfortunately, there are no solutions on the market today that provide full-gigabit throughput under normal traffic conditions and security policies (traffic may consist of variable packet sizes and hundreds of thousands of TCP sessions, while a security policy may consist of thousands of rules). To overcome this limitation and provide full-gigabit security, enterprises have resorted to the "firewall sandwich": multiple sub-gigabit firewalls running in parallel, load-balanced so that no individual firewall becomes a bottleneck to traffic (see sidebar, "The 'firewall sandwich' ").

While some security appliances on the market can achieve high throughput (still sub-gigabit) using ASIC technology, they are paradoxically limited in performance by the same features that drew designers to them in the first place. ASICs are hardwired, dedicated circuits that perform specific calculations and tasks; in the case of security, they implement the crucial parts of various algorithms formerly performed in software. The resulting hardware can be fast at specific operations but lacks the flexibility and easy re-programmability of software-based approaches. Any change (fixing newly discovered bugs, adding requested features, or increasing performance) requires a redesign of the ASIC, a lengthy and expensive process.

Hence, although ASICs can push a security appliance close to the 1Gbps hurdle, they come with flexibility tradeoffs. For complex security setups, next-generation encryption algorithms, or non-standard network paradigms, ASICs can be more of a hindrance than a help.

The solution

A new class of security gateway based on network processor technology may redefine the rules for secure connectivity to allow full-gigabit throughput under virtually any loading condition. For SANs, the result is robust security that doesn't compromise speed.

Until recently, a tradeoff between software-based (better flexibility) and ASIC-based (better performance) solutions was necessary. But network processors change that.

Emerging security solutions will be based on network processors. One example is Intel's IXP-1200. Unlike a conventional, general-purpose CPU, the network processor architecture is optimized to process streams of packet data, including the typical operations performed on such packets.

Network processors usually contain multiple programmable processing elements. The Intel IXP-1200, for example, contains a StrongARM processor and six "micro engines," each of which can be programmed to handle four lightweight processes or threads.

Each engine is normally assigned to specific repetitive tasks. Data is handed between the processors in a pipeline fashion, so that each processing element performs a separate job, much like a station on an assembly line.
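The assembly-line handoff described above can be mimicked in software with worker threads joined by queues. This is a conceptual analogy only, with hypothetical stage names; the real IXP-1200 runs such stages as hardware threads on its micro engines:

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """One pipeline stage: repeatedly take a packet, process it, pass it on."""
    while True:
        pkt = inbox.get()
        if pkt is None:          # sentinel: shut down and propagate downstream
            outbox.put(None)
            break
        outbox.put(fn(pkt))

# Three stages, each with its own worker, like micro engines on an assembly line:
# classify the packet, look up policy, then forward.
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stages = [
    lambda p: {**p, "classified": True},
    lambda p: {**p, "policy": "accept"},
    lambda p: {**p, "forwarded": True},
]
threads = [threading.Thread(target=stage, args=(fn, i, o))
           for fn, i, o in zip(stages, (q0, q1, q2), (q1, q2, q3))]
for t in threads:
    t.start()

for n in range(3):               # feed three packets into the front of the line
    q0.put({"id": n})
q0.put(None)

results = []
while (pkt := q3.get()) is not None:
    results.append(pkt)          # each packet has passed through all three stages
```

Because every stage has a dedicated worker, a new packet can enter the classify stage while an earlier one is still in policy lookup, which is the source of a pipeline's throughput advantage.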

The key to network processor performance is ensuring that the right information is available to the right computing element at exactly the right time. This means that memory caches must be carefully designed. The memory must also be sufficient to hold session and state information locally for each session (which, in a complex network like the one discussed above, could number many hundreds of thousands), as well as sufficient memory for policy lookup. A complex network could need thousands of security policies, all of which must be rapidly accessible, since the firewall must examine every packet in light of every policy.

Because this new class of security gateway can create and maintain multiple VPN tunnels while simultaneously providing full-gigabit firewall throughput, it is well-suited to applications such as multi-site, multi-user SAN access as well as SAN virtualization.
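One reason locally held session state matters is the stateful fast path: the full rule list need only be consulted for a session's first packet, after which a hash lookup suffices. A hypothetical sketch (the stand-in policy and port number are illustrative; real gateways keep these tables in purpose-built memory):

```python
session_table = {}  # (src_ip, dst_ip, src_port, dst_port) -> cached verdict

def rule_scan(flow):
    """Slow path: evaluate the flow against the full policy rule list.

    Stand-in policy for illustration: allow traffic to port 3260 (iSCSI) only.
    """
    return "accept" if flow[3] == 3260 else "deny"

def verdict(flow):
    """Fast path: O(1) session-table lookup, falling back to a rule scan once."""
    if flow not in session_table:
        session_table[flow] = rule_scan(flow)   # first packet of the session
    return session_table[flow]

verdict(("10.0.0.1", "10.0.9.9", 40000, 3260))  # first packet: scans the rules
verdict(("10.0.0.1", "10.0.9.9", 40000, 3260))  # later packets: cached verdict
```

With hundreds of thousands of concurrent sessions, the table itself is large, which is why the paragraph above stresses that session and state memory must be sized generously and kept close to the processing elements.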

Conclusion

SANs offer huge benefits to multi-site companies with mission-critical data needs. However, to deliver their full potential, SANs must connect to other locations over secure links through public network infrastructure, which mandates security solutions that not only offer VPN and firewall functionality, but also have enough horsepower to provide full-gigabit security without choking the network.

The next generation of network connectivity, including SANs, is upon us, as is the next generation of security threats. The stakes are high: Securing the entire SAN infrastructure, as well as fully utilizing gigabit links, can mean the difference between success and failure for IT organizations.


Scott Lukes is director of marketing at ServGate Technologies (www.servgate.com) in San Jose, CA.



The 'firewall sandwich'

Most firewall solutions have a limited throughput-far less than the gigabit capacity needed by data-intensive applications like storage area networks. Without a single-box solution, most network administrators fall back on a workaround using load balancers in a "firewall sandwich."

In a firewall sandwich, a load balancer distributes network traffic more or less evenly across a group of firewalls or other network devices running in parallel. As shown in the figure (a), the firewalls must run in parallel so their processing power is additive. Several load balancers are needed on each side to provide full-gigabit throughput in both directions, which requires additional investment in equipment and administration.
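One detail the sandwich depends on: a stateful firewall must see every packet of a given session, so the load balancer typically hashes the flow tuple to pick the same firewall consistently. A hypothetical sketch of that flow-affinity hashing:

```python
import hashlib

def pick_firewall(flow, n_firewalls):
    """Map a flow 4-tuple to a firewall index, stably.

    Stateful firewalls in a sandwich only work if flows are "sticky":
    a stable hash of (src, dst, sport, dport) gives every packet of a
    session the same firewall.
    """
    key = "|".join(map(str, flow)).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_firewalls

flow = ("10.0.0.1", "192.0.2.5", 40000, 3260)
pick_firewall(flow, 3) == pick_firewall(flow, 3)  # always the same unit
```

A plain modulo hash like this reshuffles most flows when a firewall is added or removed; this sketch ignores that, but it is one of the administrative headaches the sidebar alludes to.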


If a company needs transparent fail-over support, there must be at least n + 1 firewalls, with a corresponding increase in the amount of load-balancing equipment, complexity, and administrative and related costs.
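The n + 1 arithmetic can be made concrete; the per-unit throughput figure below is a hypothetical assumption, not a vendor specification:

```python
import math

def firewalls_needed(total_mbps, per_unit_mbps, failover=True):
    """Units required to carry the load, plus one spare for transparent fail-over."""
    n = math.ceil(total_mbps / per_unit_mbps)
    return n + 1 if failover else n

# A 1Gbps aggregate load on hypothetical 300Mbps firewalls:
firewalls_needed(1000, 300)                  # 4 to carry the load, plus 1 spare = 5
firewalls_needed(1000, 300, failover=False)  # 4 without redundancy
```

Each extra unit also means more load-balancer ports and more configuration to keep in sync, which is the cost the sidebar is pointing at.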

The figure (b) shows an alternative approach, with a single firewall capable of gigabit throughput. For fail-over redundancy, two firewalls can be used, but the configuration is still simpler than the traditional approach. And, since an array of load balancers can be eliminated, the total cost is lower.


