The market for traditional enterprise NAS, SAN and DAS products is going to suffer a meltdown over the next decade as businesses switch to lower cost server SAN products.

That’s the conclusion of Stu Miniman, senior analyst and principal research contributor at research community Wikibon.

He expects the server SAN market to grow at a staggering compound annual growth rate (CAGR) of 38% over the next five years, while the traditional enterprise storage market shrinks by 16% CAGR in the same period.
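As a back-of-the-envelope illustration of what those rates imply (the growth figures are Wikibon’s, but the arithmetic below is a sketch of the compounding, not its published model), 38% annual growth over five years multiplies a market roughly fivefold, while a 16% annual decline shrinks it by nearly 60%:

```python
# Rough illustration of what the quoted compound annual growth rates imply over five years.
# The rates come from Wikibon's forecast; the arithmetic is just a sanity check.

server_san_cagr = 0.38      # +38% per year (server SAN market)
traditional_cagr = -0.16    # -16% per year (traditional enterprise storage)
years = 5

server_san_multiplier = (1 + server_san_cagr) ** years
traditional_multiplier = (1 + traditional_cagr) ** years

print(f"Server SAN market after {years} years: {server_san_multiplier:.1f}x today's size")
print(f"Traditional storage after {years} years: {traditional_multiplier:.2f}x today's size")
# Roughly 5.0x growth for server SAN versus about 0.42x (a drop of nearly 60%) for traditional storage.
```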

The obvious question then is “Why?”

The answer, according to Miniman, is that server SANs make life easier for IT departments, which also means they cost less to run than traditional storage solutions. “The main reason for the rise in server SANs is operational simplicity,” he says. “The biggest sin committed by IT teams is undifferentiated heavy lifting. Infrastructure is a temple, and people spend far too much time and effort on tweaking, adjusting, configuring and migrating. IT spends 70% of its budget just keeping the lights on.”

That being the case, the capital cost of storage systems becomes largely irrelevant. What matters is how much they cost to keep running. Miniman reckons that Facebook builds five server configurations each year, and any application it introduces has to run on one of them. “The chosen configuration might not be ideal, but Facebook can roll out racks of them in a very short space of time without having to tweak, adjust, or configure them.”

His conclusion is that enterprises should – and will – increasingly adopt server SAN storage solutions that require less of this tweaking and still, in general, meet the organization’s storage requirements.

“We need to change the way we think about storage infrastructure,” says Miniman. “We need to shift from hardened systems to a distributed storage architecture where any part can be allowed to fail. And rather than including lots of functionality, we need systems with APIs so you can add the features that you need.”

This ability to add features through software modules is particularly important now that software-as-a-service offerings have accustomed companies to frequent improvements in the applications they use, without the need to migrate to new versions.

“If I buy a storage array today then I will probably plan to keep it for about five years. But do I really want to be using something in five years’ time that is backward and uses old technology? What I really want is something to which I can add features and functionality as new technologies emerge,” he says.

Server SANs are also much more flexible than traditional SANs, and can grow with an organization’s storage needs, he believes. “We think that 30% of storage budgets are spent on migrations. 30% is a staggering amount, but when you consider that a storage array can easily take six months to ramp up, and the same to ramp down, that figure is entirely possible. But with a server SAN you just add capacity and the storage pool continues to grow.”

If, as Miniman suggests, server SANs can beat traditional systems on costs – both capital and operational – and also on useful functionality, then what about performance?

It may be that expensive, top-end arrays offer better performance than a typical VMware Virtual SAN or something similar, but Miniman argues that very few of the applications enterprises typically run actually require the very highest performance.

More to the point, for those applications that really do need the very highest storage performance, the solution is unlikely to be an external storage array. It is more likely to be storage located close to the application itself – probably some form of flash attached to the compute resources inside a hyperconverged system.

Wikibon’s predictions are for the next ten years or so, but there’s already evidence that enterprises are adopting server SANs in large numbers: sales grew by 187%, from $370 million in 2013 to $1.1bn in 2014, according to its Server SAN Research Project 2015 (updated July 2015).

What kind of enterprises are the early adopters? Miniman reckons many of them are mid-range companies that have bought server SANs for specific projects – particularly VDI implementations. “This prompts a Nutanix purchase, or something similar, and then they add more applications to it,” he explains.

Today the server SAN market is dominated by a mix of smaller storage vendors such as Nutanix, SimpliVity and StorMagic, and larger vendors such as HP and VMware. But that’s likely to change in the near future. Here’s why.

Apart from flash arrays, the traditional enterprise storage market is already in decline, and it’s Wikibon’s contention that the decline is terminal. If the traditional storage market does shrink at a CAGR of -16%, then on the face of it that should have a serious impact on the businesses of storage giants like EMC, HP, NetApp and IBM. But these companies are well aware of the threat that server SAN technology poses, and the opportunity it presents, and are prepared for the change. “In fact every one of the big vendors has a plan for this,” says Miniman.

For example, EMC bought Israeli storage software vendor ScaleIO back in 2013, and also offers its ViPR controller software (and CoprHD, its open source version). That means it is well placed to head full steam into the server SAN market whenever it feels the opportunities merit that sort of action.

When is this likely to be? Clearly the large vendors won’t want to jeopardize sales of their conventional products, but in the near future the benefits of concentrating on the server SAN market will outweigh the costs.

“I expect that the likes of IBM and EMC will be top 3 or top 5 players in the server SAN space very soon,” says Miniman. “I think that the traditional SAN or NAS of today will be a very small part of these companies’ sales ten years from now. What customers will be buying is going to be very different.”

“Nutanix did about $225m in sales, and SimpliVity about $75m, so it won’t take much for IBM and EMC to overtake them,” he adds. “Today Nutanix is the leader but next year they will face tougher competition.”

Where does that leave traditional storage products? Miniman believes that they will still have a role to play in a typical enterprise. But that’s likely to be restricted to the long-term data retention segment, which usually requires infrequent access to the data, but more frequent access to metadata about the archived data.

He adds that “flape” (flash and tape) systems from the likes of DDN (with WOS), Cleversafe and Scality, which distribute data across multiple sites using highly efficient erasure coding, are likely to be popular for some time to come.
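To make the erasure-coding point concrete, here is a minimal sketch of the capacity arithmetic, assuming an illustrative 10+6 coding scheme (hypothetical parameters, not taken from any of these vendors’ documentation), compared with plain three-way replication:

```python
# Hypothetical comparison of storage overhead: k+m erasure coding vs 3x replication.
# The parameters (k=10 data fragments, m=6 parity fragments) are illustrative only
# and do not describe DDN, Cleversafe or Scality's actual configurations.

def erasure_overhead(k: int, m: int) -> float:
    """Raw capacity stored per byte of user data with a k+m erasure code."""
    return (k + m) / k

k, m = 10, 6
print(f"{k}+{m} erasure coding: {erasure_overhead(k, m):.1f}x raw capacity, "
      f"survives the loss of any {m} fragments (which can sit in different sites)")
print("3x replication:       3.0x raw capacity, survives the loss of any 2 copies")
```

The attraction, in this sketch, is that roughly 1.6x raw capacity buys tolerance of six lost fragments spread across sites, whereas straightforward replication needs 3x raw capacity to tolerate the loss of two copies.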