An industry analyst provides tips on setting up a sound information life-cycle management strategy and avoiding the next client/server trap.
By Phil Goodwin
Information life-cycle management (ILM) is likely to be the "hot" storage topic through 2004-05, given that ILM (sometimes referred to as data life-cycle management) is being promoted by nearly every major storage vendor (e.g., EMC, Hewlett-Packard, Veritas, and StorageTek). Although IBM does not promote ILM by name, many of the elements of its On Demand storage strategy are similar to ILM.
Many characteristics of ILM date back to the days of client/server computing. A number of these characteristics are positive, such as addressing the shortcomings of existing solutions (e.g., low personnel productivity, limited business solution flexibility, and high infrastructure costs). Unfortunately, ILM also echoes some of the negative aspects of the client/server era (e.g., overstated benefits, underestimated effort, and unrealistic implementation horizons).
Although early client/server implementations were disappointing, client/server computing as a whole was ultimately successful. We expect a similar outcome with ILM, whereby organizations that approach ILM as a strategic project implemented in stages over time will achieve significant improvements in not only storage infrastructure, but also in data management services to the organization as a whole. However, those organizations that view ILM as a silver bullet and sell it to management as an event and not a process are likely to suffer cost overruns and disappointing results.
This article is intended to provide a structure around which organizations can begin the transformation process to holistic data management.
ILM achieves its goal of improving storage management by automating common tasks and optimizing asset utilization. However, storage management is actually only a portion of the benefit of ILM; ILM pervades the organization by combining content management with storage infrastructure to bridge the gap between technological functionality and business requirements. As such, successful ILM projects must be rooted in understanding business requirements with respect to data flow, use, regulatory compliance, security, and management.
Nevertheless, many aspects of ILM are fundamentally implemented within the purview of storage management. Indeed, the physical instantiation of ILM is performed primarily in the storage layer, although significant aspects are in the application (content management) layer. The following sections describe practical steps to implementing ILM, with a focus on process improvement prior to technology deployment.
Building a business case
Before embarking on an ILM project, you should develop a reasonable business case. Doing so may produce a "go/no-go" decision, and the resulting cost model is also useful when articulating the benefits of ILM to senior managers. Although any pro forma financial cost model inevitably lacks specificity for a given situation, the model in Table 1 provides a basis for building such a business case. One of the key assumptions in this model is price.
Therefore, the model should be run twice: once at the outset for a "go/no-go" evaluation, and again during technology consideration/acquisition.
The model in Table 1 focuses on the hardware and software acquisition aspects of storage management. However, given that META Group's research indicates that 60% of storage management ownership costs are hardware- and software-related (40% and 20%, respectively), any improvement in these costs has a substantial impact on the total cost of ownership. Moreover, adding tiers of storage will not substantially increase operating costs (e.g., personnel). Although some additional training/retraining may be necessary to manage the various systems, IT organizations (ITOs) probably will not need to add staff for these tasks.
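To make the arithmetic above concrete, the following sketch computes a blended hardware cost across storage tiers. The 40%/20% TCO shares come from the article; the tier allocations and relative per-GB prices are illustrative assumptions, not META Group figures.

```python
# Hypothetical illustration of the tiered-storage cost arithmetic.
# Tier allocations and relative prices are assumptions for this example.

HW_SHARE = 0.40  # hardware's share of storage TCO (per the article)
SW_SHARE = 0.20  # software's share of storage TCO

def blended_hw_cost(allocation, relative_price):
    """Weighted-average hardware cost across tiers, relative to single-tier = 1.0."""
    return sum(allocation[t] * relative_price[t] for t in allocation)

# Single tier: all data on premium storage (relative price 1.0)
single_tier = blended_hw_cost({"premium": 1.0}, {"premium": 1.0})

# Three tiers: assumed data placement and assumed relative prices per GB
tiered = blended_hw_cost(
    {"premium": 0.3, "midrange": 0.4, "low_cost": 0.3},
    {"premium": 1.0, "midrange": 0.6, "low_cost": 0.25},
)

# Hardware savings expressed as a share of total TCO
hw_savings = (single_tier - tiered) * HW_SHARE
print(f"Blended hardware cost: {tiered:.3f} (vs. 1.0 single-tier)")
print(f"TCO reduction from hardware alone: {hw_savings:.1%}")
```

Under these assumptions, moving 70% of data off premium storage cuts blended hardware cost to roughly 0.6 of the single-tier baseline, a double-digit percentage reduction in total ownership cost from hardware alone.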
The limitation of this model (and similar models) is the inability to quantify other benefits (e.g., reduced risk, improved document management). However, a system that is both tangibly and intangibly beneficial is the ideal "win/win" scenario. Readers should also note that the financial benefits of this model are expressed as percentages, which should remain relatively constant over time even though absolute price points may change.
In order to connect business requirements with technology, META Group recommends that five steps be followed:
1. Categorize/group data types—Obviously, treating each data type individually is impractical; the number of associated policy implementations would simply be unmanageable. Therefore, data elements should be classified according to specific attributes. The objective is to reduce the number of elements, and therefore the number of policies, to an amount that can be effectively implemented and managed. Although this number may vary by organization, an ideal target is three to five groups. These attributes should also be rank-ordered by priority.
Moreover, the categorization process is not necessarily as simple as identifying applications. Data elements within an application environment (e.g., e-mail) may need to be treated separately. Table 2 lists sample data element attributes and categories. The list of important attributes will vary with the organization.
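The categorization step can be sketched in code: data elements are described by a few attributes, then assigned to a small number of policy groups by checking attributes in rank order. The attribute names, rules, and group names below are illustrative assumptions, not attributes from Table 2.

```python
# Hypothetical sketch of step 1: reducing many data elements to a few
# policy groups by rank-ordered attributes. All names/rules are illustrative.

from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    criticality: str      # "high" | "medium" | "low"
    retention_years: int
    regulated: bool

def classify(e: DataElement) -> str:
    """Assign an element to one of a handful of policy groups.
    Attributes are checked in priority order: regulation, criticality, retention."""
    if e.regulated:
        return "compliance"
    if e.criticality == "high":
        return "mission-critical"
    if e.retention_years >= 7:
        return "archive"
    return "general"

elements = [
    DataElement("customer e-mail", "medium", 7, True),
    DataElement("ERP transactions", "high", 5, False),
    DataElement("test data", "low", 1, False),
]
groups = {e.name: classify(e) for e in elements}
```

Note how a regulated e-mail store lands in the "compliance" group even though its criticality is only medium; rank-ordering the attributes resolves such conflicts deterministically.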
2. Relate business rules to data types—Creating business rules to match data characteristics can be accomplished by creating a business rules matrix, as shown in Table 3. It should be noted that this matrix essentially decouples the various data attributes and allows them to be treated individually.
3. Determine service levels—It is important to note that the attributes in Table 3 can be mixed and matched. This means that service levels cannot be easily sorted into three or four levels (e.g., platinum, gold, silver, and bronze). Thus, service levels should not be delineated by application, but rather by attribute (i.e., a three-tier model for criticality, response time, recovery time, etc.). A sample of such service levels is shown in Table 4.
Cost, of course, is a key driver of service levels. Using a bronze service level as the baseline service level where the relative cost equals 1.0, then silver service will likely cost 1.5-2.0, and gold will cost 2.0-3.0. Whether a service level is justifiable may be as simple as comparing the deployment cost to offsetting cost factors (e.g., lost revenue due to downtime, lost opportunity, or lost productivity).
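The cost comparison described above can be expressed as simple arithmetic: a higher service level is justified when the avoided losses exceed the incremental cost of moving up from the bronze baseline. The dollar figures below are assumptions for the sake of the example; the 2.0-3.0 gold multiplier range comes from the article.

```python
# Illustrative check of the step 3 cost comparison. A higher tier is
# justified when its incremental cost is offset by avoided downtime losses.
# Dollar figures are assumptions, not figures from the article.

def incremental_cost(baseline_cost, relative_multiplier):
    """Extra spend to move from bronze (relative cost 1.0) to a higher tier."""
    return baseline_cost * (relative_multiplier - 1.0)

def justified(baseline_cost, multiplier, avoided_loss):
    """True if avoided losses cover the incremental cost of the higher tier."""
    return avoided_loss >= incremental_cost(baseline_cost, multiplier)

bronze_cost = 100_000             # assumed annual baseline storage cost
gold_multiplier = 2.5             # within the article's 2.0-3.0 range
avoided_downtime_loss = 200_000   # assumed revenue protected by faster recovery

print(justified(bronze_cost, gold_multiplier, avoided_downtime_loss))
```

With these assumed numbers, gold service costs an extra $150,000 but protects $200,000 of revenue, so the upgrade is justified; halve the avoided loss and it is not.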
4. Establish tiered services—This is the point at which business requirements are translated into a defined set of service offerings. Most of the requirements specified in the service levels are expressed in business terms, not technology terms. Importantly, the first three steps are the ones that involve the stakeholders (e.g., end users, management, application developers, DBAs), and that is as far as the stakeholders need to be involved. Thus, tiered services become the "product offering" from which stakeholders may select services.
5. Select products—At this point, the storage team/architects take over the process and select specific products to meet the service level demands. For example, they may decide to use EMC Symmetrix DMX or HDS 99XX storage arrays for premium storage, IBM Shark or HP EVA for midrange storage, and EMC Centera or NetApp NearStore/SnapLock for low-cost, secured storage. Similar decisions would be made for data replication, tape products, etc.
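The product-selection step amounts to maintaining a mapping from each storage tier to the qualified products behind it. The product names below are the article's own examples; the mapping structure itself is an illustrative assumption.

```python
# Sketch of step 5: the storage team maps each service tier to the concrete
# products that deliver it. Products mirror the article's examples; the
# mapping structure is an illustrative assumption.

TIER_PRODUCTS = {
    "premium":  ["EMC Symmetrix DMX", "HDS 99XX"],
    "midrange": ["IBM Shark", "HP EVA"],
    "low_cost": ["EMC Centera", "NetApp NearStore/SnapLock"],
}

def products_for(tier: str) -> list:
    """Return the qualified products for a tier (empty list if unknown)."""
    return TIER_PRODUCTS.get(tier, [])
```

Because stakeholders only ever see tier names, the storage team can swap products in and out of this mapping without renegotiating service levels.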
As these models indicate, approaching service levels with a simple "gold/silver/bronze" scheme is overly simplistic. Instead, the appropriate model is a matrix of service descriptions, from which a service level agreement (SLA) is created. The matrix approach is possible because there is no technological dependency between the elements of the services.
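The matrix approach can be sketched as follows: each attribute is assigned a tier independently, rather than forcing one overall gold/silver/bronze label onto an application. The attribute names and default choices are illustrative assumptions.

```python
# Sketch of the SLA matrix: each attribute independently selects a tier.
# Attribute names, tier labels, and defaults are illustrative assumptions.

ATTRIBUTES = ("criticality", "response_time", "recovery_time")
TIERS = ("bronze", "silver", "gold")

def make_sla(**choices):
    """Build an SLA from independent per-attribute tier selections.
    Unspecified attributes default to the bronze baseline."""
    unknown = set(choices) - set(ATTRIBUTES)
    if unknown:
        raise ValueError(f"unknown attributes: {unknown}")
    for attr, tier in choices.items():
        if tier not in TIERS:
            raise ValueError(f"{attr}: invalid tier {tier!r}")
    return {attr: choices.get(attr, "bronze") for attr in ATTRIBUTES}

# Example: an e-mail archive with low criticality but fast retrieval
# required for compliance -- a mix no single gold/silver/bronze label captures.
sla = make_sla(criticality="bronze", response_time="gold", recovery_time="silver")
```

The mixed result (bronze criticality, gold response time) is exactly the combination that a single-label scheme cannot express, which is the article's argument for the matrix.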
ILM is not a simple, short task. Nevertheless, the task can be made manageable by first establishing application priorities and then defining a series of goal-oriented milestones that can be both measured and managed. ILM can be justified on both intangible (e.g., reduced risk, better compliance) as well as tangible (e.g., total cost of ownership) criteria.
Phil Goodwin is program director, server infrastructure strategies, at the META Group consulting firm (www.metagroup.com) in Stamford, CT.