The role of ILM in a virtual data center

Posted on December 01, 2005

SNIA is developing standards that will ensure interoperability in ILM implementations.

By Nik Simpson and Matthew Brisse

Despite advances in data-center technology, IT organizations are still grappling with growing complexity and cost, and administrators are under pressure to control the total cost of ownership (TCO) of the IT infrastructure. The data-center infrastructure of the future will be designed to adapt to changing business and application requirements by automatically provisioning resources as demand requires.

In such a virtual data center, when business-unit managers roll out new applications, they will establish business requirements for performance, availability, compliance, and data retention. In response, the data center will be able to provision the hardware and software resources needed to meet those requirements without further administrative intervention.

The virtual data center constantly monitors and adjusts resources allocated to each application to ensure business requirements are met, creating a closed-loop system that becomes increasingly optimized over time.

The concept of the virtual data center is powerful, yet exploiting it fully requires a strategy for information lifecycle management (ILM). The requirements for ILM follow from the basic concept of the virtual data center: installing an application is a logical task in which the administrator enters basic requirements, or service-level objectives (SLOs), and the management layer provisions the appropriate resources. In the virtual data-center context, ILM requirements must be reflected in the SLOs when an application is deployed. Examples of ILM requirements that might be specified include the following (a simple sketch of such a specification appears after the list):

  • Downtime-The amount of downtime that is acceptable for the application;
  • Time to zero value-The length of time for the value of the information to reach zero; and
  • Regulatory compliance-Regulations, such as Sarbanes-Oxley, that may affect the information created by the application.
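
For illustration only, such requirements could be captured in a simple structure like the one below. The class and field names are hypothetical and do not come from any SNIA specification; they merely mirror the example requirements listed above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IlmServiceLevelObjective:
    """Hypothetical ILM requirements attached to an application at deployment."""
    application: str
    max_downtime_hours_per_year: float      # acceptable downtime for the application
    time_to_zero_value_days: int            # when the information's value reaches zero
    regulations: List[str] = field(default_factory=list)  # e.g., Sarbanes-Oxley

# Example: a financial reporting application subject to Sarbanes-Oxley
finance_slo = IlmServiceLevelObjective(
    application="financial-reporting",
    max_downtime_hours_per_year=4.0,
    time_to_zero_value_days=2555,   # roughly seven years of retention
    regulations=["Sarbanes-Oxley"],
)
```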

Although the basic concept of ILM is well-understood in the mainframe world, it is still in its infancy in the open systems world. Most open systems environments have some procedures in place to manage the information produced by applications and users. Unfortunately, these procedures typically suffer from the following:

  • Lack of integration-Few IT organizations have a holistic approach to information management; procedures tend to be application- and operation-specific, using different tools with different management interfaces for each task;
  • No concept of information value-Not all information is of equal value, and the procedures for tasks such as backup and disaster recovery should be driven by the value of the information; and
  • Inability to scale-Existing ILM approaches can be difficult to scale as the number of servers and the quantity of stored information continue to increase.

As management tools for the physical infrastructure improve, many companies are looking to address information management in a consistent, flexible, and scalable way.

Centralized control of how information is managed can be a key first step toward more-ambitious goals such as creating a virtualized data center. However, the biggest roadblock for many organizations remains the lack of standards-based building blocks.

The need for open standards

A standards-based approach to ILM procedures will require broad cooperation from the entire IT industry. To that end, the Storage Networking Industry Association (SNIA) is currently focusing on information management and the underlying storage infrastructure through two projects:

  • Storage Management Initiative (SMI)-The SMI aims to create a standard set of interfaces for the management of storage. With support from more than 20 vendors, the SMI is creating a set of interfaces that are designed to manage storage infrastructure components regardless of vendor; and
  • Data Management Forum (DMF)-The DMF is charged with becoming a resource for data management and ILM in a storage context, and working with groups such as SNIA’s ILM Technical Workgroup to establish interoperability among ILM solutions and data services.

The goal of the SMI and DMF efforts is to create a set of four service layers (information management, data management, storage management, and infrastructure) that communicate through open interfaces, so that high-level information management SLOs can be translated into low-level data management and storage management service requests.

As with other elements of the virtual data center, this goal is far from being fully realized. However, the DMF road map defines a data management strategy that will help organizations develop the fully interoperable tool sets required for broad adoption of ILM technologies.

[Figure: The SNIA DMF's model for ILM]

Understanding the DMF’s model for ILM (see figure, above) is critical to understanding how ILM may be implemented in the future. The ILM model reflects a pragmatic approach to data management, in which management and control of the IT infrastructure are based on needs defined by the business framework. Business requirements that come from the business framework drive the management of applications and information. Within the ILM framework, the goals management layer transforms these business requirements into policies that are enforced in the service and infrastructure layers. In turn, the goals management layer provides feedback to the business framework regarding cost, risk, and status.
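
As a rough sketch of that closed loop, the goals management layer can be thought of as a translator with a feedback path. The class and method names below are invented for illustration and are not part of the SNIA DMF model.

```python
# Hypothetical sketch only: business requirements come in, policies go down to
# the service and infrastructure layers, and cost/risk/status feedback goes
# back up to the business framework.

class ServiceLayerStub:
    """Stand-in for the service and infrastructure layers."""
    def enforce(self, policy: dict) -> dict:
        return {"cost": 100, "risk": "low", "status": "compliant"}

class GoalsManagementLayer:
    def __init__(self, service_layer):
        self.service_layer = service_layer

    def apply(self, business_requirement: dict) -> dict:
        # Transform a business requirement into an enforceable policy ...
        policy = {
            "retention_days": business_requirement.get("retention_days", 0),
            "replication": "remote"
            if business_requirement.get("max_downtime_hours_per_year", 24) < 8
            else "local",
        }
        # ... have the lower layers enforce it ...
        result = self.service_layer.enforce(policy)
        # ... and report cost, risk, and status back to the business framework.
        return {k: result[k] for k in ("cost", "risk", "status")}

print(GoalsManagementLayer(ServiceLayerStub()).apply({"retention_days": 2555}))
```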

Based on the SNIA DMF’s ILM model, the system administrator would register an application with the data-center management layer (see figure, below). The administrator would then specify a level of service and performance, or a compliance requirement such as Sarbanes-Oxley. The virtual data-center model would reference the ILM requirements to deploy the application and provision the necessary resources and methodologies to meet the SLO without requiring the administrator to provision resources, tune the application, or define a backup-and-recovery strategy.

[Figure: Registering an application and provisioning resources based on the ILM model]
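
A registration step of this kind might look roughly like the sketch below. The DataCenterManager API, its methods, and the tier names are invented for illustration; only the SLO fields echo the requirements discussed in this article.

```python
# Hypothetical registration of an application with the data-center management
# layer; everything downstream of the call is derived from the SLO, with no
# further administrative intervention.

class DataCenterManager:
    """Invented management-layer facade, not a real product or standard API."""

    def register_application(self, name: str, slo: dict) -> None:
        self._provision_storage(name, slo)
        self._configure_protection(name, slo)

    def _provision_storage(self, name: str, slo: dict) -> None:
        # Pick a storage tier that can satisfy the availability goal.
        tier = "tier-1" if slo.get("max_downtime_hours_per_year", 24) < 8 else "tier-2"
        print(f"{name}: provisioned on {tier}")

    def _configure_protection(self, name: str, slo: dict) -> None:
        # Derive backup, retention, and compliance handling from the SLO.
        if "Sarbanes-Oxley" in slo.get("regulations", []):
            print(f"{name}: enabling long-term, non-rewritable retention")

DataCenterManager().register_application(
    "financial-reporting",
    {"max_downtime_hours_per_year": 4.0, "regulations": ["Sarbanes-Oxley"]},
)
```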

Flexible data management

Because the information management needs of an application change over time, information should be continually re-evaluated to ensure it is managed appropriately. Achieving this goal requires a flexible and intelligent data classification system capable of analyzing information and applying data management rules that match the needs of the application and the value of the information.

However, intelligent classification is only part of the solution. Once information has been classified, intelligent data placement is required to direct information to the appropriate tier of storage and ensure that the services provided meet the SLO.
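
A minimal sketch of that pairing of classification and placement follows. The attribute names, data classes, and tier labels are all assumptions made for illustration; real classification engines work from far richer metadata and policy.

```python
# Hypothetical rule-driven classification and tiered placement.

def classify(record: dict) -> str:
    """Assign a data class based on simple attributes of the information."""
    if record.get("regulated"):
        return "compliance"
    if record.get("days_since_last_access", 0) > 365:
        return "inactive"
    return "active"

# Placement rules: each data class maps to a storage tier and service levels.
PLACEMENT = {
    "compliance": {"tier": "WORM archive", "backup": "daily",  "retention_days": 2555},
    "active":     {"tier": "primary disk", "backup": "daily",  "retention_days": 90},
    "inactive":   {"tier": "nearline",     "backup": "weekly", "retention_days": 365},
}

record = {"name": "q3-ledger.xls", "regulated": True, "days_since_last_access": 10}
print(PLACEMENT[classify(record)])   # the compliance tier and its services
```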

This requirement for dynamic information management is spurring a new generation of software designed to automate data classification and use the results to drive intelligent data placement. The combination of intelligent data classification and data placement can allow software to execute simple tasks such as ensuring adequate capacity is provided for data.

However, it can also enable more-complex tasks, such as meeting high-level regulatory requirements like Sarbanes-Oxley. Creating such software will require the development of standard interfaces for every aspect of an application's environment, allowing data to be moved, copied, purged, archived, or otherwise manipulated.
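
To illustrate the kind of uniform interface such software would depend on, the sketch below defines one possible set of operations. The interface and its method names are hypothetical and are not taken from any SNIA standard.

```python
# Hypothetical uniform data-service interface that every storage component
# would expose, regardless of vendor.

from abc import ABC, abstractmethod

class DataServiceInterface(ABC):
    @abstractmethod
    def move(self, object_id: str, target_tier: str) -> None: ...

    @abstractmethod
    def copy(self, object_id: str, target_tier: str) -> None: ...

    @abstractmethod
    def archive(self, object_id: str, retention_days: int) -> None: ...

    @abstractmethod
    def purge(self, object_id: str) -> None: ...
```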

Standards-based data-center components for compute, storage, and network resources can enable enterprises to transition to a virtual data-center infrastructure that automatically provisions hardware and software resources in response to changing business and application requirements. However, without ILM this vision will never be fully realized.

Nik Simpson is co-chair of the SNIA DMF’s Information Lifecycle Management Initiative Technical Liaison Group and director of product marketing at Scentric Inc. Matthew Brisse is vice chair of the SNIA board of directors and a technology strategist in the office of the CTO at Dell.

