Being smart about 'intelligent' switches?

Q: What's your take on intelligent switching and the role that these types of switches should play in storage management?

Jacob Farmer
Cambridge Computer

They say that if you have nothing nice to say, you should say nothing at all. So in keeping with that philosophy, I have been biting my tongue on the subject of intelligent storage switches for more than two years. Why suddenly break my silence? Well, I feel like all of the press on this subject has been one-sided, touting intelligent switching as the eventual panacea for all storage management ills. But there is another side to the story—one that paints a different picture. Not only are there good reasons not to load switches up with storage management logic, but there are also existing solutions that solve the very problems these switches promise to address.

I believe switches should be as simple as possible. Their job is to move data from lots of Point As to lots of Point Bs while keeping latency to an acceptable minimum. The perfect switching technology is transparent to the devices connected to it, scales gracefully, does not require a lot of firmware patching or routine maintenance, is highly interoperable with both new and legacy equipment, and is affordable.

Take the example set by the Ethernet industry. If I walk into your data center holding a random Ethernet switch and ask if I can plug it into a port on your enterprise switch, you would probably be okay with that. Granted, maybe you have security rules that would block me, but you would not be worried about my breaking anything. However, if I walk into your data center holding a Fibre Channel switch and tell you that I want to plug it into your enterprise director switch, you'd probably freak out or at the very least say "no way."

The storage switch industry has a poor track record of delivering interoperability. It took forever before one vendor's fabric switch could talk to another's, and we still regard mixing switches from different vendors as something you do only if you absolutely have to. I worry that broadening the scope of the switch's responsibility could result in less interoperability, more vendor lock-in, and higher prices.

Why put the brains in the switch?—I often ask students in my classes where they would like to see storage management logic reside: at the host, at the storage device, or somewhere in the middle? They disproportionately answer that they would like to see it in the middle, so that it can be centrally managed. At first glance, the switch is the thing in the middle, so I can't blame them for wanting to put the management logic there.

I acknowledge the temptation to put the smarts in the switch, but I point out that it's a terrible over-simplification. Storage management logic and abstraction can happen anywhere in the I/O path from application to disk spindle. For instance, you can drop logic into or above the file system, a layer down in a volume manager, down another layer in or above the device driver, in a host bus adapter (HBA), between the host and a switch, in between the switch and the storage device, etc. The best choice depends on what you are trying to do, and it often makes sense to insert different pieces of functionality into different layers.

Centralization without sitting in the middle—Storage logic does not need to sit in the middle of a data path to be centrally managed. The industry has been aggressively moving toward centralized management tools for all kinds of technologies. For years, we have had centralized management tools for multiple disk arrays and switches. Lately, central management consoles have come to market for host-based software (e.g., volume managers and replication software). With the Storage Management Initiative Specification (SMI-S), you can get management tools that will manage all kinds of devices from a variety of vendors. In short, you don't need intelligent switches to manage centrally.

The switch is not necessarily the center of the universe—Yes, the switch sits in the middle of the data path from host to storage, but there are also a number of products on the market that can be spliced in at the same point to bring new functionality. Virtualization and replication appliances are examples. With some simple zoning on your switch, you can splice one of these appliances into your data path. Now your management logic is sitting in the middle, between host and storage, but it's not in the switch.
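To make the zoning point concrete, here is a rough sketch of how an appliance gets spliced into the data path with ordinary zoning commands. The syntax loosely follows a Brocade-style fabric CLI, and every alias, zone name, and WWPN below is purely illustrative—your switch's command set and your ports' addresses will differ:

```
(Hypothetical zoning sketch; all names and WWPNs are made up.)

alicreate "host1",       "10:00:00:00:c9:2e:11:22"
alicreate "appl_front",  "21:00:00:e0:8b:aa:bb:01"
alicreate "appl_back",   "21:00:00:e0:8b:aa:bb:02"
alicreate "array_port",  "50:06:01:60:39:a0:12:34"

zonecreate "z_host_appl",  "host1; appl_front"
zonecreate "z_appl_array", "appl_back; array_port"

cfgcreate "cfg_prod", "z_host_appl; z_appl_array"
cfgenable "cfg_prod"
```

The effect is that the host can see only the appliance's front-end port, and only the appliance's back-end port can see the array. All I/O therefore flows through the appliance, which holds the management logic, while the switch does nothing smarter than enforce who can talk to whom.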

Note that one of the incarnations of intelligent switching is to take virtualization and replication appliances and move them into the switch. In most cases, this is a packaging gimmick. The "blade" on the switch is an x86 PC running the same software it could run if it were in an external chassis. Okay, it might be neat and tidy to have a blade in a switch, but I think it complicates the role of the switch and the support role of the switch vendor.

What about performance?—One of the potential benefits of intelligent switches is that application logic could be burned into ASICs on the switch, thus reducing latency that could be introduced by an external appliance. I see two fundamental problems with this argument. First, I believe most performance challenges are imaginary. Everyone wants more IOPS and more bandwidth, but seldom can the hosts or applications take advantage of the theoretical performance. Seldom do in-band appliances or additional software layers negatively affect storage performance.

The more fundamental problem is that burning application logic into ASICs defies the industry trend of moving the magic from expensive proprietary hardware toward software solutions that run on commodity hardware. A great example is that of network file services, affectionately known in the storage industry as network-attached storage (NAS). Everyone knows that a single NAS computer cannot deliver scalable I/O performance. Is the answer to move it to proprietary ASICs on a switch, or is the answer to develop parallel file systems that run on commodity hardware? Commodity hardware gets cheaper and faster every year and can be upgraded piecemeal as new technologies become available. Proprietary hardware evolves much more slowly and usually gets upgraded forklift style.

One exception: Protocol conversion—I will concede that it would be preferable for a fabric switch to handle FCP and iSCSI conversion so that external bridging products are not required to use these protocols. It would be especially nice for iSCSI protocol conversion to happen with minimal latency and minimal interoperability hassles.

I believe the role of the switch is to move data—not to mess with it. Looking for ways to cram application logic into a switch makes sense if you are a business strategist for a switch manufacturer, but intelligent switching probably won't help the consumer or advance the state of technology. Okay, bring on the hate mail!

Jacob Farmer is chief technology officer at Cambridge Computer (www.cambridgecomputer.com) in Waltham, MA. He can be contacted at jacobf@cambridgecomputer.com.

If you have a question you would like to ask one of our experts, please e-mail Heidi Biggar at heidib@pennwell.com.

This article was originally published on March 01, 2004