CommVault adds data-protection modules

By Ann Silverthorn

CommVault split its annual upgrade to the QiNetix suite of data-management products into three announcements this quarter in the areas of data protection, lifecycle management, and operations. The upgrades add more than 100 new features and functions.

The first announcement, last month, introduced ContinuousDataReplicator (CDR), a seventh data-recovery module built on the QiNetix platform. It joins the Galaxy Backup & Recovery, DataMigrator, DataArchiver, QuickRecovery, StorageManager, and QNet modules. CDR protects data at remote offices as well as data replicated between data centers.

CommVault describes CDR’s function as “continuous protection of data” rather than today’s popular buzzphrase “continuous data protection” (CDP) because, according to company officials, CDP is just one part of its solution. CDR protects data at the byte level, a more granular approach than CDP products that capture changes at the block level.
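The granularity difference can be illustrated with a toy calculation. The block size and function names below are invented for illustration and are not CommVault's code; the point is simply that block-level capture must copy every block a write touches, while byte-level capture copies only the changed bytes.

```python
# Toy comparison of byte-level vs. block-level change capture for one write.
# BLOCK and both functions are assumptions for illustration only.
BLOCK = 4096  # a common block size

def bytes_captured_byte_level(offset, length):
    """Byte-level capture replicates exactly the bytes that changed."""
    return length

def bytes_captured_block_level(offset, length):
    """Block-level capture replicates every whole block the write touches."""
    first = offset // BLOCK
    last = (offset + length - 1) // BLOCK
    return (last - first + 1) * BLOCK
```

A 10-byte write costs 10 bytes of replication traffic at byte granularity, but a full 4KB block (or two, if it straddles a block boundary) at block granularity.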

Marc Staimer, president of Dragon Slayer Consulting, says that other vendors see CDP as a stand-alone market, but CommVault sees it as just one feature of its data-management portfolio. “What I like about CommVault is that all of its products are built on the same agent. There’s one agent for everything: backup, replication, SAN management, and storage resource management [SRM].”

Chris Van Wagoner, director of product marketing at CommVault, explains: “We don’t believe CDP is a category separate and distinct from other technologies, but a tool that customers can use to better protect and reduce vulnerability to data loss. We’re offering an integrated spectrum from traditional backup to tape, disk-to-disk technology, snapshots, replication, and continuous data protection.”

Van Wagoner says CommVault applied the CDP-like functionality of continuous change-and-capture technology to two user needs: byte-level continuous change capture and data movement, and the ability to indicate when application data has referential integrity. CDR addresses both through continuous replication of file and application data between the source and designated targets. The software employs “Recovery Points” designed to provide referential integrity of application data.

Users can decide, per application, how many recovery points to designate. For example, a SQL Server database can tolerate more of the “freeze-thaws” needed to mark recovery points than an Exchange server can. To capture I/O at a consistent point, the CDR software asks the application to commit all of its pending operations to disk, briefly placing Exchange in a suspended state (although it remains online and processing requests).

“Since users are banging on Exchange all day, you don’t want to pause it with great frequency,” says Van Wagoner. “A database, like SQL, is more tolerant of being paused because it captures the I/O onto a log and then replays the log in the background, so you can have more-frequent referential integrity.”
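The per-application trade-off Van Wagoner describes can be sketched as a simple scheduling policy. This is an illustrative toy model, not CommVault's implementation; the class, intervals, and method names are invented, and a real product would actually quiesce the application rather than just record a timestamp.

```python
class ReplicatedApp:
    """Toy model: each app tolerates freeze-thaws at a different rate."""
    def __init__(self, name, freeze_interval_s):
        self.name = name
        self.freeze_interval_s = freeze_interval_s  # minimum gap between freezes
        self.recovery_points = []
        self._last_mark = None

    def maybe_mark_recovery_point(self, now):
        # Only quiesce (freeze) the app when enough time has passed,
        # since each mark briefly suspends new writes.
        if self._last_mark is None or now - self._last_mark >= self.freeze_interval_s:
            self.recovery_points.append(now)  # record a consistent point
            self._last_mark = now
            return True
        return False

# Hypothetical settings: SQL tolerates frequent pauses, Exchange does not.
sql = ReplicatedApp("SQL Server", freeze_interval_s=60)
exchange = ReplicatedApp("Exchange", freeze_interval_s=3600)

for t in range(0, 4 * 3600, 60):  # one mark opportunity per minute, for 4 hours
    sql.maybe_mark_recovery_point(t)
    exchange.maybe_mark_recovery_point(t)
```

Under these assumed intervals, the SQL instance accumulates a recovery point every minute while Exchange gets one per hour, matching the "more-frequent referential integrity" for databases described above.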

Through integration with CommVault’s backup software, CDR moves the captured images to any media and can later restore any of those images directly back to the host. Administrators can replicate entire volumes, individual directories within a volume, or selected folders. Because all transactions are recorded, administrators can choose a recovery point and then apply those transactions to roll the data forward or backward.
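The roll-forward idea can be sketched as replaying a time-ordered journal of writes on top of a recovery-point image. The journal format and `restore_to` function below are assumptions for illustration, not CommVault's implementation.

```python
def restore_to(recovery_point_state, journal, target_time):
    """Roll forward from a recovery-point image by replaying journaled
    writes whose timestamps fall at or before target_time."""
    state = dict(recovery_point_state)  # start from the consistent image
    for ts, key, value in journal:      # journal is time-ordered
        if ts > target_time:
            break                       # later writes are effectively rolled back
        state[key] = value              # replay the recorded write
    return state

# Hypothetical example: a snapshot plus three journaled writes.
snapshot = {"a": 1}
journal = [(10, "a", 2), (20, "b", 3), (30, "a", 4)]
```

Choosing an earlier `target_time` replays fewer writes, which is what lets an administrator move to any point between recovery points.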

Dianne McAdam, senior analyst and partner with the Data Mobility Group consulting firm, comments that CommVault’s entry into the CDP arena marks further validation of the technology. “CDP began with start-ups and now the bigger vendors are embracing the technology. Some large vendors acquired start-ups’ technology, but CommVault developed its own,” she says.

Data classification

CommVault’s second announcement last month addressed data lifecycle management for large environments. The company introduced a data-classification engine at the front end of its data-movement technology that lets users organize unstructured file data based on who owns it and its business value.

“Most data-management organization happens just before the data is written to storage. So you end up with a lot of silos and independent policies, depending on what kind of device you’re using. If it’s stored on server A, it gets one policy. If it’s stored on server B, it gets a separate policy,” says Van Wagoner. “We believe that companies should set their data-management policies at the beginning of the process rather than at the end. This way data can be handled in a specific way no matter where it sits in the environment.”
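The approach Van Wagoner describes, assigning a policy from the data's attributes before it lands on any particular device, can be sketched as a simple lookup. The owners, value tiers, and policy names below are invented for illustration; they are not CommVault's actual classification rules.

```python
# Hypothetical front-end classification: policy depends on the data's
# owner and business value, not on which server or device stores it.
POLICIES = {
    ("finance", "high"): "replicate+archive-7yr",
    ("finance", "low"): "archive-1yr",
    ("engineering", "high"): "replicate+archive-3yr",
}
DEFAULT_POLICY = "backup-only"

def classify(owner, business_value):
    """Pick one policy from the data's attributes at the start of the
    pipeline, so it follows the data wherever it sits."""
    return POLICIES.get((owner, business_value), DEFAULT_POLICY)
```

Because classification happens once at the front end, a finance file gets the same retention treatment whether it is later written to "server A" or "server B."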

CommVault’s modules migrate or archive data based on policies set by administrators. Initially, the data-classification feature in the third generation of QiNetix will be integrated only with the Windows-based migration and archiving products.

Proactive data operations

CommVault has observed that data management is evolving from repetitive processes, such as backups that are typically repeated daily, to single-instance archiving, where significant failure rates cannot be tolerated. Storage strategies such as information lifecycle management (ILM), content-addressed storage (CAS), and archiving are prompting users to rethink their data-retention methods.

CommVault’s third announcement, made this month, moves the reliability and management of data from a reactive mode to a proactive approach, according to Van Wagoner. The company has developed “intelligent QiNetix operations” (iQ Ops), which add self-healing, restartability, and diagnostic capabilities to the QiNetix suite. iQ Ops features “preflight” checks that identify potential failures and links to CommVault’s support infrastructure: it alerts CommVault’s technical support, which then initiates a problem-resolution process from both a support and a service perspective.

All of the upgrades to the QiNetix platform will be available in the first quarter of 2006.

This article was originally published on December 01, 2005