6. Flash Can Be Tailored to Workloads.
Adam Fore, director of solutions and product marketing at NetApp, said that the ability to deliver predictable performance has enabled a new class of enterprise SSD storage by allowing users to assign capacity and performance resources to workloads independently. NetApp's SolidFire product, for example, leverages these SSD characteristics to create pools of performance and capacity that can be dynamically applied to workloads based on service levels, with predictable performance and capacity for each. SSD storage options like this may work well for service providers who want to deliver tiered services with flexible characteristics at guaranteed service levels.
7. Don’t Dash to All-Flash.
Keith Parker, director of product marketing at Violin Memory, advised anyone looking to migrate to all-flash to ask a few questions first: Can the array they're considering optimally support multiple workloads? Can it host their databases, web servers, email, SharePoint and other applications on a single array -- and still provide optimum performance?
“There are going to be different needs depending on the application,” said Parker.
8. Databases Have Different Needs.
Many enterprise SSD deployments feel the need for speed. Those building storage to support a fleet of web servers, for example, will probably care most about throughput and IOPS. A database, on the other hand, typically needs extremely low latency, yet many buyers make the mistake of neglecting latency in favor of high throughput and IOPS.
Why is that? Databases don't tend to push a lot of IOPS, said Parker. Even a huge database may only use 75,000 to 100,000 IOPS. Where databases are challenged is in response time: how long it takes from making a request to getting a response when you're entering information into a form on a website, or performing some similar function.
“It’s the time in between when you hit submit; getting that time down is important with databases,” said Parker. “The amount of data is trivial – it’s that time to the storage and back again, which is why you need low latency.”
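Some back-of-envelope arithmetic makes Parker's point concrete. The sketch below uses hypothetical but typical numbers (an assumed 8 KB database page size and an assumed chain of 20 serialized I/Os per transaction) to show why even 100,000 IOPS translates to modest bandwidth, while per-request latency multiplies directly into the delay a user feels after hitting submit:

```python
# Illustrative numbers only: even a "big" database pushing 100,000 IOPS
# moves surprisingly little data, while per-I/O latency is what the
# end user actually experiences.

IOPS = 100_000          # upper end of the database range cited above
BLOCK_SIZE_KB = 8       # assumed typical database page size

throughput_mb_s = IOPS * BLOCK_SIZE_KB / 1024
print(f"Throughput at {IOPS:,} IOPS: ~{throughput_mb_s:.0f} MB/s")

# A single form submit can trigger a chain of dependent reads; because
# each read waits on the one before it, storage latency adds up directly.
DEPENDENT_READS = 20    # assumed serialized I/Os per transaction

for latency_ms in (10.0, 1.0, 0.5):   # disk-class vs. flash-class latencies
    storage_wait_ms = DEPENDENT_READS * latency_ms
    print(f"{latency_ms:>4} ms per I/O -> {storage_wait_ms:.0f} ms spent waiting on storage")
```

Under these assumptions, 100,000 IOPS is only about 780 MB/s of data, but the difference between 10 ms and 0.5 ms per I/O is the difference between a 200 ms and a 10 ms storage wait on every transaction.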
9. When Is Latency Good Enough?
Anyone moving from a disk-based system is probably used to plenty of latency, so any move to enterprise SSD is going to be an immense improvement. For databases and certain other workloads, though, it is vital to determine in advance what kind of improvement you are really looking for. Is 10 milliseconds of latency enough, or do you need to take it down to 1 millisecond or even lower?
“If you can go to less than half a millisecond, that’s the difference between a 10x improvement and a 20x improvement,” said Parker. “That small difference can make a huge impact on all of your users.”
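Parker's 10x-versus-20x arithmetic follows from an assumed disk-era baseline of roughly 10 ms of latency, which the quick calculation below makes explicit:

```python
# Sketch of the speedup arithmetic: the 10 ms baseline is an assumption
# representing a typical disk-based array.

HDD_LATENCY_MS = 10.0   # assumed disk-era baseline latency

for flash_latency_ms in (1.0, 0.5):
    speedup = HDD_LATENCY_MS / flash_latency_ms
    print(f"{flash_latency_ms} ms latency -> {speedup:.0f}x improvement over {HDD_LATENCY_MS} ms")
```

Halving latency from 1 ms to 0.5 ms looks like a small absolute change, but against the same 10 ms baseline it doubles the improvement, from 10x to 20x.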
10. Will You Scale Up or Out?
Scale up means buying a single box, starting small and adding capacity over time. Scale out means clustering multiple boxes together to increase both capacity and performance. Each approach has advantages depending on the specific environment and workload.
Some flash platforms favor scale out. Others favor scale up. A few do both well. But again, it’s a case of defining what you really need in advance. You may have to pay a little more for a platform that does both, so do you really need it? If so, it’s worth the investment. But the worst move would be to inadvertently purchase scale up when you need scale out, or vice versa.
“Look for a system that can meet your needs in terms of scale up or scale out,” said Parker.
Those are some of the ongoing flash trends and a few tips to help you navigate through the vendor maze to choose the right option to fit your needs. But regardless of how much flash or what flavor you deploy, the one certainty is that we are all going to be seeing a lot more flash as time goes on.
“As effective flash dollar-per-GB costs drop, flash deployment in the enterprise is accelerating,” said Eric Burgener, an analyst at IDC.