As you might have seen from the many press releases, Gartner has released its view of enterprise arrays and, for the first time, mid-range arrays. For the enterprise arrays, the Gartner analysts have ranked the products from best to worst based on their own criteria.
At least in the trade publications I have read (no way I am going to buy the reports), there is little, if any, discussion of out-of-the-box experience, operational experience over a significant period of time, or many other factors. We do know that Gartner says performance no longer matters for enterprise arrays because of flash.
To me, this alone shows that the reports are seriously flawed. Nowhere is there a discussion of the backend performance (from array cache to storage) of any of the arrays. I know for certain that some of the 12 arrays on the enterprise list have far more performance than others, and some have, shall we say, abysmal performance.
Gartner picked five use cases:
3. Server virtualization and VDI
I can see how performance might not matter in the cloud, where network latency is likely the dominant factor, but I cannot understand the statement for the other applications.
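The cloud argument is easy to check with back-of-envelope arithmetic. The figures below are illustrative assumptions (roughly typical NVMe flash read latency and a WAN round trip), not measurements of any particular array or cloud:

```python
# Back-of-envelope: where does the time go on a cloud storage request?
# Both figures are illustrative assumptions, not measurements.

flash_read_us = 100     # assumed NVMe flash read latency, ~100 microseconds
wan_rtt_us = 30_000     # assumed cloud WAN round trip, ~30 milliseconds

total_us = flash_read_us + wan_rtt_us
network_share = wan_rtt_us / total_us

print(f"Network share of end-to-end latency: {network_share:.1%}")
# With these assumptions the array contributes well under 1% of the total,
# so from the cloud the array's performance is nearly invisible.
```

With numbers anywhere near these, the network swamps the array, which is exactly why the same logic does not transfer to on-premises workloads where the array latency is most of the total.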
So here are a few of my questions, in no particular order:
1. Do these analysts ever get hands-on with the arrays they are reporting on? Do they test one from the time it comes off the loading dock to, say, a month or two later?
2. Do these analysts look at the underlying storage architecture, including things like the memory bandwidth of the cache and the design of the front-end and back-end connectivity to cache (think PCIe buses and channels)?
3. How are reliability, manageability and usability measured, and how do they fit into the rating?
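Question 2 is not academic: the cache and the buses feeding it have hard ceilings you can compute from published specs. The sketch below uses real theoretical maxima (PCIe 3.0 per-lane throughput, DDR3-1600 per-channel bandwidth), but the controller configuration itself is a made-up example:

```python
# Rough arithmetic on controller plumbing. The per-lane and per-channel
# figures are published theoretical maxima; the lane and channel counts
# are an invented example configuration.

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane
pcie3_lane_gbs = 8 * (128 / 130) / 8
backend_lanes = 16                              # assumed: two x8 back-end HBAs
backend_gbs = pcie3_lane_gbs * backend_lanes    # ~15.8 GB/s ceiling

# DDR3-1600 cache memory: 12.8 GB/s per channel
ddr3_channel_gbs = 12.8
cache_channels = 4                              # assumed quad-channel cache
cache_gbs = ddr3_channel_gbs * cache_channels   # 51.2 GB/s ceiling

print(f"Back-end PCIe ceiling: {backend_gbs:.1f} GB/s")
print(f"Cache memory ceiling:  {cache_gbs:.1f} GB/s")
# In this invented configuration the cache can move roughly 3x what the
# back end can feed it, so the PCIe layout, not the cache, is the bottleneck.
```

Two arrays with identical cache sizes can therefore behave very differently under load, which is precisely the kind of difference a ranking that ignores backend performance cannot capture.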
There can only be one winner in any analysis, but there has to be a clear understanding of what you are analyzing, why, and why it matters. At least from the press reports I am seeing, I am more confused than ever about Gartner's analysis methodology.
Photo courtesy of Shutterstock.