Data deduplication vs. data compression

Posted on August 11, 2010

August 11, 2010 – "It's easy to see why dedupe gets all the attention. When you boast 20-to-1 data reduction rates, that means 1TB takes up only 50GB of space. But what is sometimes missed is that data compression of larger volumes of data recovers far more storage." That's according to an article recently posted on InfoStor partner site Enterprise IT Planet.
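
To make the arithmetic concrete: the absolute capacity reclaimed depends on both the reduction ratio and the volume it is applied to, which is the article's point. Here is a quick back-of-the-envelope sketch in Python (the helper savings_tb is ours, purely illustrative):

    # Absolute savings depend on the ratio AND the volume it applies to.
    def savings_tb(volume_tb, ratio):
        """TB reclaimed when volume_tb of data is reduced at ratio-to-1."""
        return volume_tb - volume_tb / ratio

    print(savings_tb(1, 20))    # 0.95 TB reclaimed: 20:1 dedupe of a 1TB backup set
    print(savings_tb(100, 2))   # 50.0 TB reclaimed: 2:1 compression of a 100TB array

A modest 2-to-1 ratio across a 100TB array reclaims far more raw capacity than a headline-grabbing 20-to-1 ratio across a 1TB backup set.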

Here are some excerpts from that article, by Drew Robb:

"Clariion and its compression announcement are not getting enough love," said Greg Schulz, an analyst with the StorageIO Group. "Being able to reduce a storage array from 100TB to 50TB is huge for storage administrators."

Schulz doesn't think it's a case of dedupe vs. compression. He believes organizations must employ several technologies to reduce the data footprint.

"You have to look at the bigger picture of reducing the data footprint via different techniques," Schulz said. "Dedupe is one way, but there is also compression, thin provisioning and other methods."

Read the full article, which includes analysis of some of EMC's recent data deduplication and compression announcements, at Enterprise IT Planet: "Data Compression vs. Deduplication."

Related articles:
IBM scoops up Storwize
Dell to acquire Ocarina for data deduplication (blog post)
Top 10 storage acquisitions of 2010 (blog post)

