By Jon William Toigo
In part one of this series (see March 2008, p. 18), we looked at the unfulfilled promise of the Storage Networking Industry Association’s Storage Management Initiative Specification (SMI-S) to wrangle heterogeneous storage platforms under a unified management approach. Without such an approach, storage administrators continue to take a “quiver of arrows” approach to storage management, leveraging device-level interfaces and scripts of their own design to manage capacity and perform other storage management tasks on a platform-by-platform basis.
About a decade and a half ago, storage resource management (SRM) software began to appear in the market, seeking to provide a horizontal view across disparate platforms and components from different vendors. Analysts claimed that these horizontal managers could drive upwards of 40% of OPEX out of storage by enabling fewer administrators to manage more capacity effectively, reducing avoidable downtime, and providing a single pane of glass for monitoring storage trends.
Ironically, SRM vendors were among the earliest advocates of SMI-S because platforms instrumented with “SMI providers” would have provided “free” (or at least much less expensive) access to the kind of storage configuration information and controls presently exposed only via custom API hooks. However, the view of these vendors on the progress toward SMI nirvana is almost universally negative. In its last SRM product update announcement, a Symantec/Veritas product manager dedicated exactly one line in his press release to a mention of SMI support. Queried on this point, he explained that SMI-S “just isn’t that relevant.” This comment took on special importance because pre-acquisition Veritas had been a driving force behind the development of SMI-S.
Ken Barth, CEO of Tek-Tools, observed recently that he liked SMI-S and that it was very useful in managing devices instrumented with a provider. However, he noted that comparatively few platforms actually provide SMI-S support and those that do, do so unevenly. One vendor implements the full provider spec and delivers it with each box, while another charges $30,000 for a software development toolkit that customers must implement themselves if they want an SMI-S management capability, Barth noted.
Where SMI providers are delivered on gear, many SRM vendors complain that the information delivered by the provider is the same as what could be obtained via Simple Network Management Protocol (SNMP) hooks that are already available on a lot of gear. Almost always, vendor SMI-S providers deliver information and access that is inferior to what can be obtained from proprietary APIs.
Tek-Tools, for example, cobbles together information from whatever sources are available on the storage hardware (APIs, SMI providers, or SNMP MIBs), supplemented by home-grown agents where needed, to gather the storage status information presented on the company’s Storage Profiler console.
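The multi-source collection approach described above can be sketched in a few lines of code. This is a minimal illustration, not Tek-Tools' actual design: the `StatusSource` interface, the `CapacityReport` record, and the preference-ordered fallback are all hypothetical names invented for this example.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class CapacityReport:
    """Normalized capacity status, regardless of which source produced it."""
    device: str
    total_gb: float
    used_gb: float
    source: str  # e.g., "api", "smi-s", or "snmp"

class StatusSource(Protocol):
    """Any collector -- vendor API, SMI-S provider, SNMP poller, or agent --
    that can report on a device. (Hypothetical interface for illustration.)"""
    def supports(self, device: str) -> bool: ...
    def collect(self, device: str) -> CapacityReport: ...

class SnmpSource:
    """Stand-in for an SNMP poller; real code would walk MIB tables
    (e.g., the Host Resources hrStorage group) rather than read a dict."""
    def __init__(self, inventory: dict):
        self.inventory = inventory  # device -> (total_gb, used_gb)

    def supports(self, device: str) -> bool:
        return device in self.inventory

    def collect(self, device: str) -> CapacityReport:
        total, used = self.inventory[device]
        return CapacityReport(device, total, used, "snmp")

def poll(devices, sources):
    """Try each source in preference order (richest data first) and fall
    back when a device lacks that instrumentation, skipping devices no
    source can reach -- roughly how an SRM console might aggregate status."""
    reports = []
    for dev in devices:
        for src in sources:
            if src.supports(dev):
                reports.append(src.collect(dev))
                break
    return reports
```

In practice the sources list would be ordered richest-first (proprietary API, then SMI-S provider, then SNMP), reflecting the quality gradient the SRM vendors describe.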
Like other SRM vendors, Tek-Tools is striving to add functionality into Storage Profiler that will address more than mere capacity monitoring, which is only a subset of true storage management. Eventually, the company would like to show storage resources allocated to specific applications to facilitate not only capacity management, but also infrastructure troubleshooting and tuning for optimal application performance. Ultimately, Tek-Tools wants to map storage resources to business processes.
Most SRM vendors are pursuing a comparable application- or business process-centric strategy, and each evolution of SRM, they insist, is delivering greater value to end users. However, in so doing, SRM vendors are moving away from the original definition of SRM set forth in the 1970s by IBM in conjunction with the release of the mainframe storage resource management tool, Systems Managed Storage (SMS).
IBM originally conceived of a three-tiered model to define storage management functions (see diagram). The lowest tier consisted of product management: interfacing with devices via some common management model, SNMP, or APIs to discover and map LUNs. The second tier covered storage management functions such as asset management, quota and capacity management, performance management, and configuration management. The third tier was called systems management and managed how servers and applications accessed storage. This model has been leveraged for the past three decades as the foundation for storage resource management.
It is worth noting that IBM did not develop systems managed storage as an act of generosity. A poll of its user group, GUIDE, in the late 1970s revealed that the most storage capacity a single manager could administer was 11GB. Without some sort of management facility, IBM would have been constrained in selling its customers additional capacity, since every sale would also require the customer to hire another administrator. Hence, Big Blue was driven by self-interest to develop an SRM solution that enabled fewer workers to manage more capacity.
Getting to value
Along the way, the key value proposition of SRM was refined: Unified storage management would enable the efficient management of increasing storage capacity by fewer administrators. This is a goal that resonates even more today given the Wild West nature of storage in the distributed computing realm, the expanding amount of data that companies are amassing, a recessionary economy, and the need in most IT organizations to manage more capacity with the same or fewer staff.
With respect to this “do more with less” goal, however, the SRM story remains incomplete. Despite vendor marketing claims, there is no definitive data to suggest that current SRM tools have delivered improved capacity-to-administrator ratios. To be fair, the lack of evidence is partly owing to the slow uptake of SRM products by consumers over the past decade. Simply put, all management products tend to suffer when the economy is flush and companies do not think twice about throwing more capacity, bandwidth, processing cycles—or personnel—at every issue.
Now that the economy is less supportive of profligate spending, storage management products might see an uptick in sales. This, in turn, should help identify the real value of SRM in terms of work efficiency.
Another benefit promised by improved storage management is improved allocation, and ultimately improved utilization, of storage resources. In the absence of effective storage management, storage sprawl continues unchecked, making it the bane of CAPEX budgets and a target of bean counters with cost cutting on their minds. At one major automotive company, a recent assessment revealed that a 300TB Fibre Channel fabric, nearly maxed out in terms of capacity, was actually storing less than 30TB of business-relevant data. The company had no idea how all of the orphan data, contraband data, duplicates, and “junk” had amassed on its spindles, nor could it readily identify where in its infrastructure the data from specific applications was being stored. A good SRM tool could help separate the wheat from the chaff and slow the rate at which new capacity must be acquired.
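The arithmetic behind the automotive example is stark, and worth making explicit. The helper below is a hypothetical illustration of the figures cited in the text, not a tool from any vendor mentioned here:

```python
def useful_utilization(relevant_tb: float, provisioned_tb: float) -> float:
    """Fraction of provisioned capacity actually holding
    business-relevant data."""
    return relevant_tb / provisioned_tb

# Figures from the automotive example: under 30TB of business-relevant
# data on a nearly full 300TB Fibre Channel fabric.
ratio = useful_utilization(30, 300)   # at most 10% of capacity doing useful work
reclaimable_tb = 300 - 30             # up to 270TB consumed by orphans, duplicates, "junk"
```

At those numbers, at least 90% of the fabric is candidate territory for culling before any new capacity purchase is justified.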
With appropriate SRM tools—even those providing a low-level view of horizontal infrastructure—IT professionals may be able to identify misallocated storage capacity, as well as “hot spots” and “choke points,” and achieve greater capacity allocation efficiencies than are possible using only the point management products on the arrays themselves. Moreover, SRM might be used as a means to corral hardware vendors into getting on board with unified management schemes. Whether you want to standardize on SMI-S or on SRM tools, a good strategy is to select the SRM product you wish to use and then advise hardware vendors that you won’t buy their wares unless they can be managed using your company’s standard management software.
Jon Toigo is CEO and managing principal of Toigo Partners International LLC (www.toigopartners.com).