EMC takes aim at HDS

By Dave Simpson

Billing it as the company's most significant announcement in the last decade, EMC earlier this month introduced the long-awaited Symmetrix DMX line of disk arrays, formerly referred to as Symm 6. According to analysts, EMC may have hit the home run that it needs to jump ahead of the current performance leader, Hitachi Data Systems.

"This is a leapfrog market, and based on the information we have at this point, EMC has now taken the [performance] lead," says Randy Kerns, a senior partner at The Evaluator Group research and consulting firm.

The key architectural change in the DMX line is a move away from the 1.6GBps bus-based design of the existing Symmetrix arrays to a point-to-point architecture that EMC refers to as a matrix (as in Direct Matrix Architecture). Bus-based designs suffer from arbitration, contention, and scaling issues. In contrast, a matrix architecture allows the I/O components in the array (front- and back-end controllers and cache elements) to communicate directly with one another over dedicated point-to-point connections. EMC claims a (theoretical) maximum aggregate bandwidth of 64GBps in the matrix through the use of up to 128 500MBps serial links.
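The 64GBps figure is simply the per-link rate multiplied out across the full link count; a quick sketch of that arithmetic (the variable names here are illustrative, not EMC's):

```python
# Aggregate-bandwidth arithmetic behind EMC's theoretical 64GBps claim:
# up to 128 dedicated serial links, each rated at 500MBps.
links = 128            # maximum serial links in the matrix
link_rate_mbps = 500   # per-link bandwidth in MBps

aggregate_gbps = links * link_rate_mbps / 1000  # MBps -> GBps
print(aggregate_gbps)  # 64.0
```

Note this is a theoretical peak: it assumes every link is driven at full rate simultaneously, which real workloads will not sustain.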

The point-to-point matrix architecture is in contrast to switch architectures (such as Hitachi's), which route communications between array components.

Point-to-point architectures can yield gains in reliability, availability, and scalability, but the key advantage is performance. EMC claims across-the-board performance advantages over HDS's 9980V array, but for existing Symmetrix users, comparisons against the Symmetrix 8000 may be more useful.

According to Barry Burke, director of Symmetrix platform operations at EMC, the DMX is 3x to 8x faster than the Symmetrix 8000, depending on application load and the specific DMX model (see table for model specifics).

Benchmarks vs. benchmarketing

As for performance claims, consultants advise end users to benchmark high-end arrays using their own production applications to get real-life performance comparisons. However, many IT organizations do not have the time or resources for this type of testing, in which case they are forced to depend on benchmark numbers.

Kerns, for one, recommends that IT users ask for benchmark results from the Storage Performance Council's suite of independent tests. However, EMC does not currently participate in the SPC program.

"More and more end users are asking for SPC test results," says Kerns. "Otherwise, they're faced with benchmarketing."


For more information about the SPC-1 benchmark tests, visit www.storageperformance.org.

In addition to the architecture and performance numbers, analysts such as Kerns applaud the way that EMC ported its existing Enginuity embedded control code to the new hardware. Kerns says that using essentially the same code (only about 20% of the code needed modification) means that the DMX will be relatively stable. This is in contrast to architectural shifts that require extensive rewrites of control code, which can lead to platform instability.

On the software front, EMC claims that virtually all of its existing applications will work unmodified on the DMX arrays. The company also says that replication applications such as Symmetrix Remote Data Facility (SRDF) and TimeFinder will run 3x to 10x faster on the DMX array versus the Symmetrix 8000.

One surprise in the DMX lineup is the "low-end" entry—the DMX800—which uses a modular (as opposed to "integrated" or "monolithic") architecture with the same enclosure, drives, and packaging used in EMC's CLARiiON arrays, giving the company potentially significant economies of scale and lower costs. (The DMX800 offers approximately 2x the performance of a Symmetrix 8820, according to EMC's Burke.)

Another departure in the DMX series is support for 3+1 or 7+1 parity RAID (as opposed to EMC's traditional mirrored configurations).
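The capacity implications of that shift are easy to work out. A rough sketch of the usable-capacity fractions (general RAID arithmetic, not figures from EMC):

```python
# Usable-capacity fraction: N+1 parity RAID vs. traditional mirroring.
# With N data drives plus one parity drive, N/(N+1) of raw capacity is
# usable; mirroring stores every block twice, so only 1/2 is usable.

def usable_fraction_parity(data_drives: int) -> float:
    """N data drives + 1 parity drive -> N/(N+1) of raw capacity usable."""
    return data_drives / (data_drives + 1)

print(usable_fraction_parity(3))  # 0.75  -> 3+1 parity RAID
print(usable_fraction_parity(7))  # 0.875 -> 7+1 parity RAID
print(1 / 2)                      # 0.5   -> mirrored configuration
```

In other words, 7+1 parity RAID leaves 87.5% of raw capacity usable versus 50% for mirroring, at the cost of parity-update overhead on writes.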

Other architecture features include up to 116 PowerPC processors; 512GB of cache (128GB in initial versions); 16GBps cache bandwidth and support for as many as 32 concurrent I/Os through cache; 64 Fibre Channel drive loops; and up to 96 Fibre Channel or ESCON ports (with support for FICON due in the third quarter).

This article was originally published on February 01, 2003