In our 2006 research paper, Taneja Group presented to the industry the concept of Continuous Data Technologies (CDT) (Taneja Group, Technology in Brief: "Continuous Data Technologies: A New Paradigm," April 2006). The underlying issue was that the data protection and associated data management methods available to the market for the past three decades were broken and, in our opinion, required a complete overhaul.
Traditional data protection and data management involve making multiple copies of production data for backup, replication, snapshots, mirroring, Continuous Data Protection (CDP), cloning, and more. These operations generate anywhere from 25 to over 100 disparate copies of the production data, which are cumbersome to manage and end up as storage silos littered across backup disks and tapes, Virtual Tape Libraries (VTLs), storage vaults, archive systems, snapshot repositories, virtual volumes, data analytics platforms, data clones, and now even cloud storage environments.
This data deluge, often characterized as big data, forces organizations to purchase more storage hardware and related products to protect and manage the exponentially growing volume of data copies across one or many physical locations. This untenable situation drives secondary storage spending to five or more times the cost of already high-priced production storage.
Complications that arise from data sprawl include:
- Inability to back up data
- Network congestion
- Multiple expensive software licenses