By Heidi Biggar
IBM late last month put an end to speculation about its virtualization strategy, announcing the phased rollout of initial products this year. The rollout begins with the release of IBM TotalStorage SAN Volume Controller in July and culminates with the much-anticipated delivery of TotalStorage SAN File System (a.k.a. Storage Tank) later this year.
SAN Volume Controller is an in-band (in the data path) virtualization appliance that will be configured with clustered IBM xSeries servers running Linux. The virtualization software installed in the device was developed by IBM, not by former OEM partner DataCore Software.
The SAN Volume Controller appliance has 4GB of cache and four 2Gbps Fibre Channel ports. IBM officials claim that the appliance, unlike some other in-band virtualization products, does not suffer from performance issues as the environment scales. "It is our implementation and use of cache that makes the difference," claims Brian Truskowski, general manager of storage software for IBM's Storage Software Products Division.
Other potential differentiators include its integrated hardware/software approach, clustering (the appliance ships in node pairs), and compliance with industry standards.
SAN Volume Controller will initially work only with IBM's FAStT and Enterprise Storage Server (ESS) disk arrays; however, support for non-IBM arrays is expected by year-end. The appliance is currently installed at five beta customer sites and at 20 IBM TotalStorage Solution Centers and is designed for users with existing storage area networks (SANs).
For users who don't have a SAN, or who want a simpler, lower-cost option, IBM in August will offer a pre-configured SAN. SAN Integration Server will house the SAN Volume Controller, a switch, and low-cost disk storage (e.g., FAStT) in a 19-inch rack.
Later this year, IBM plans to expand beyond its block-level virtualization roots and enter the realm of file virtualization with the heavily hyped and long-overdue Storage Tank, which IBM will rename TotalStorage SAN File System.
In a sense, SAN File System is to files what SAN Volume Controller is to blocks. It aggregates, or virtualizes, files in a storage network (later to include all files in the enterprise) into a common pool. The two processes—block-level and file-level aggregation—can be done in conjunction with one another or separately.
The purpose of a SAN-wide file system is to simplify the management of files within a storage network by arranging files and metadata into a common pool, explains Truskowski. Users can then access these files regardless of operating system, which means a file created in Windows can be accessed by, say, a Solaris client, and vice versa. SAN File System will support AIX, Solaris, HP-UX, Linux, and Windows 2000/XP in its initial release.
The file system is also expected to play a key role in automation, or policy-based management, because of its knowledge of files and metadata. "For example, Storage Tank can create and migrate files, deciding where to place each file based on certain criteria, such as file type," explains Truskowski.
This capability will ultimately allow users to define storage pools based on application demands (e.g., performance and latency). For example, mission-critical applications could be earmarked for storage pool x, and less-critical applications for storage pool y.
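To make the policy idea concrete, such file-placement rules can be sketched in a few lines of code. Everything below is purely illustrative: the function names, pool labels, and rules are hypothetical stand-ins for the kind of criteria Truskowski describes, not IBM's actual implementation or APIs.

```python
# Illustrative sketch of policy-based file placement: the first matching
# rule decides which storage pool a new file lands in.

POLICIES = [
    # (predicate over file name/size, target storage pool)
    (lambda name, size: name.endswith((".db", ".log")), "pool_x_mission_critical"),
    (lambda name, size: size > 1_000_000_000,           "pool_y_capacity"),
]

DEFAULT_POOL = "pool_default"

def place_file(name: str, size: int) -> str:
    """Return the storage pool for a new file, using the first matching rule."""
    for matches, pool in POLICIES:
        if matches(name, size):
            return pool
    return DEFAULT_POOL

print(place_file("orders.db", 4096))             # file type marks it mission-critical
print(place_file("archive.tar", 2_000_000_000))  # large, less-critical file
print(place_file("notes.txt", 1024))             # falls through to the default pool
```

The point of the sketch is only that placement becomes a declarative policy evaluated at file creation or migration time, rather than a manual administrative decision per file.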
Like SAN Volume Controller, SAN File System will be based on clustered IBM eServer xSeries servers running Linux. SAN File System licenses will reportedly be available at no cost to the user. Analysts, such as the META Group's Phil Goodwin, expect the cost of virtualization engines to approach $0 by year-end. The value, he says, is in the services they enable, not in the core technology.
Reaction to IBM's virtualization news has been largely positive, which analysts say attests to the market's readiness for a comprehensive virtualization solution, the strength of IBM/Tivoli's overall storage management software portfolio, and the increasing importance of industry standards.
"End users have been biding their time waiting for a comprehensive solution to their storage challenges—one that is based on industry standards and that minimizes the complexity and cost of deployment," says Richard Lee, president and CEO of The Storage Consulting Group.
Despite its late entry into the market (virtualization products have been available for more than two years), IBM may emerge as a leader in the space. "IBM has taken to heart all the necessary factors," says Lee.
"IBM isn't necessarily the technology leader, but they are one of the standards leaders, which is a differentiator in a sea of proprietary point solutions."
The Storage Management Initiative Specification (SMI-S) is expected to become the standard for storage management (see "Time for a management standards 'reality check'," InfoStor, December 2002, p. 1). IBM has played an instrumental role in driving support for SMI-S within the Storage Networking Industry Association (SNIA).
Lee says that the benefits of standards-based virtualization schemes like IBM's are immediate in terms of market acceptance and functional-level integration with existing and emerging lines of business applications. In contrast, non-standards-based virtualization engines rely heavily on cross-licensing agreements and vendor API swaps for interoperability and product integration, he says.
IBM officials say that standards such as SMI-S benefit end users in several ways, notably by improving user choice (prevents vendor lock-in), interoperability (eliminates complex API matrices), and product functionality (vendors can focus on feature development, not interoperability).
Other distinguishing features of IBM's virtualization strategy, according to analysts, are its plans to provide both block-level and file-level virtualization and to converge SAN and network-attached storage (NAS) environments.
"The real significance of IBM's announcement is not the actual specifications of the SAN Volume Controller, but the fact that IBM has started to deliver on its promised virtualization strategy," says Dianne McAdam, an analyst with the Data Mobility Group.
All eyes are on IBM—with its history of product delays—to see if the company meets its product delivery schedule.