Data storage performance is one of those areas where no matter how good it gets, it will never be enough. As soon as more bandwidth, better IO or faster processors become available, applications immediately find ways to utilize all of it and then they want more.

That said, here are several smart ways to get more bang for your storage buck.

“There are many different ways to boost storage performance that include switching to a different storage platform (hardware or software defined such as DataCore among others) or leveraging storage optimization tools such as Diskeeper,” said Greg Schulz, an analyst at StorageIO Group.

Parallel IO

It’s not uncommon for online transaction databases such as Microsoft SQL Server to be unable to respond quickly enough to keep up with frequent spikes in the volume of inquiries and orders. The system becomes so sluggish at times that users take their business elsewhere. Yet the servers appear to be equipped with ample CPU and memory, as well as plenty of networking and storage resources to satisfy the demand.

Given the symptoms, many organizations contemplate splitting up the database into multiple instances on separate machines in an effort to shorten the long queues inside the servers. However, this is not only difficult to execute, it can be an expensive approach to boosting storage performance.

DataCore Parallel I/O technology addresses the database latency problem differently. Rather than fielding the I/O requests serially, as the native operating system and hypervisor do, it processes multiple I/O requests in parallel queues using several of the multi-core server’s logical processors. Those parallel requests are cached in server RAM and scheduled to storage in optimal payload sizes.

“The technology available in DataCore Hyper-converged Virtual SAN effectively accelerates response time several fold using all of the server’s fastest hardware resources, without having to spill over to additional servers,” said Augie Gonzalez, Director of Product Marketing, DataCore Software.
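The general pattern is easy to picture even outside DataCore’s product. The minimal Python sketch below is purely illustrative and is not DataCore’s implementation: several worker threads drain per-core request queues concurrently instead of one serial queue, and small writes are coalesced into larger payloads. The 256 KB payload size and round-robin scheduling are assumptions for the example.

# Minimal sketch of the parallel I/O idea (illustrative only, not DataCore's code).
import os
import queue
import threading

CHUNK = 256 * 1024  # hypothetical "optimal" payload size

def worker(q: queue.Queue, results: list):
    """Drain one request queue: coalesce pending writes and issue them as one payload."""
    buffer = bytearray()
    while True:
        req = q.get()
        if req is None:                      # sentinel: flush and exit
            if buffer:
                results.append(len(buffer))  # stand-in for a real device write
            return
        buffer += req
        if len(buffer) >= CHUNK:             # schedule in large, contiguous payloads
            results.append(len(buffer))
            buffer = bytearray()

def parallel_io(requests, lanes=os.cpu_count() or 4):
    """Spread I/O requests across parallel queues, one per logical processor."""
    queues = [queue.Queue() for _ in range(lanes)]
    results = []
    threads = [threading.Thread(target=worker, args=(q, results)) for q in queues]
    for t in threads:
        t.start()
    for i, req in enumerate(requests):       # round-robin requests across the lanes
        queues[i % lanes].put(req)
    for q in queues:
        q.put(None)
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    writes = [b"x" * 4096 for _ in range(1000)]   # many small 4 KB writes
    print(f"issued {len(parallel_io(writes))} coalesced payloads")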

Fragmentation Prevention

We’ve all seen the performance of a Windows server slow to a crawl. No matter what you do, some files take half an eternity to open because of the architecture of the Windows operating system, which becomes progressively slower the longer it runs. The more software and storage you add, the worse the machine performs.

The traditional remedy has been to defragment the hard drive. But that doesn’t work in a 24/7/365 world where many production servers operating within mission critical storage environments can’t be taken offline.

The latest version of Diskeeper by Condusiv solves this dilemma. Perhaps surprisingly, it no longer defragments, apart from one feature that tackles severely fragmented files and can run without stopping the server. The favored approach is now fragmentation prevention: rather than picking up and consolidating the pieces after a volume has been splintered into thousands of fragments by the way Windows writes data, the software prevents fragmentation before the data is ever written.

“IT administrators can immediately boost the performance of critical applications like MS-SQL running on physical servers using Diskeeper,” said Brian Morin, Product Marketing Manager at Condusiv. “It keeps systems running optimally via a fragmentation prevention engine that ensures large, clean, contiguous writes from Windows, eliminating the small, tiny writes that inflate IOPS and steal throughput.”
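As a rough illustration of the principle, and not of Condusiv’s engine, the short Python sketch below declares a file’s final size up front and then writes it in large sequential blocks, the kind of behavior that gives the filesystem a chance to allocate one contiguous extent instead of scattering thousands of small appends. The file name, block size and use of truncate() are assumptions for the example; a production tool would use OS-specific pre-allocation calls.

# Illustrative sketch of fragmentation-avoiding writes (not Condusiv's engine).
import os

def write_contiguously(path: str, data: bytes, block_size: int = 1024 * 1024):
    """Declare the full file size first, then stream it in large sequential writes."""
    with open(path, "wb") as f:
        f.truncate(len(data))                # hint the final size; real tools use OS pre-allocation APIs
        for offset in range(0, len(data), block_size):
            f.write(data[offset:offset + block_size])
        f.flush()
        os.fsync(f.fileno())

if __name__ == "__main__":
    write_contiguously("example.bin", os.urandom(8 * 1024 * 1024))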

Intelligent Caching

Another new feature built into Diskeeper takes an entirely different approach to boosting storage performance. Its DRAM caching function is said to have delivered workload performance gains of up to 6X on MS-SQL benchmarks and an average latency reduction of 40 percent across hundreds of servers. It achieves this by dynamically caching hot reads in idle DRAM, i.e., putting otherwise idle memory to work serving frequently requested reads without memory contention or resource starvation.

“Diskeeper’s intelligent caching brings about a major leap in SSD write speed as well as extended SSD lifespan,” said Morin. “It also solves even the worst performing physical servers and brings them better than new performance.”
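The mechanics of serving hot reads from spare memory can be sketched with a simple LRU cache. The Python example below is an illustration of the general technique, not Diskeeper’s code; the block size, capacity and device path are made-up assumptions, and a real product would size the cache dynamically against free DRAM.

# A minimal LRU read-cache sketch (illustrative, not Diskeeper's implementation).
from collections import OrderedDict

class HotReadCache:
    def __init__(self, device_path: str, block_size: int = 64 * 1024,
                 capacity_blocks: int = 1024):
        self.device_path = device_path
        self.block_size = block_size
        self.capacity = capacity_blocks       # tune to the amount of idle DRAM
        self.cache = OrderedDict()            # block number -> cached bytes

    def read_block(self, block_no: int) -> bytes:
        if block_no in self.cache:            # cache hit: serve from DRAM
            self.cache.move_to_end(block_no)  # mark as recently used
            return self.cache[block_no]
        with open(self.device_path, "rb") as f:   # cache miss: go to disk
            f.seek(block_no * self.block_size)
            data = f.read(self.block_size)
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:   # evict the least recently used block
            self.cache.popitem(last=False)
        return data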

Performance Troubleshooting

Application performance suddenly plummets. A meeting is called to determine the cause. The software pros blame storage, the storage team blames the network and the network experts insist that server bottlenecks are the real culprit. But who is right?

“When server teams and storage teams get involved in resolving an IO performance issue, more often than not it leads to finger pointing,” said Dino Balafas, Senior Director Product and Strategy, TeamQuest.

TeamQuest’s Vityl Adviser application uses multiple vectors to resolve storage-related performance issues, which often manifest at the service level. It gathers server and storage metrics, applies algorithms to determine the health of system and storage I/O, and automates the modeling of future issues. Analytics are harnessed to assess the health of the system’s disk I/O by analyzing I/O utilization, disk rate, I/O intensity and I/O rate to detect the root cause of sluggish storage performance.

“This provides analysts the visibility into the shared storage to determine if the problem is caused by a server or the storage,” said Balafas.
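To make the idea concrete, here is a deliberately simplified triage function in Python. It is not the Vityl Adviser algorithm; the metrics, thresholds and verdicts are illustrative assumptions meant only to show how correlating server and storage metrics can point the finger at the right layer.

# Hypothetical, simplified I/O health triage (not TeamQuest's algorithm).
from dataclasses import dataclass

@dataclass
class IoSample:
    cpu_util: float        # server CPU utilization, 0-1
    disk_util: float       # storage device utilization, 0-1
    io_rate: float         # observed IOPS
    avg_latency_ms: float  # average I/O response time

def diagnose(sample: IoSample) -> str:
    """Rule-of-thumb triage; thresholds are illustrative assumptions."""
    if sample.avg_latency_ms < 5:
        return "healthy: latency within normal range"
    if sample.disk_util > 0.85 and sample.cpu_util < 0.6:
        return "suspect storage: device saturated while the server has headroom"
    if sample.cpu_util > 0.85 and sample.disk_util < 0.6:
        return "suspect server: CPU saturated while the storage has headroom"
    return "inconclusive: correlate with network and application metrics"

if __name__ == "__main__":
    print(diagnose(IoSample(cpu_util=0.35, disk_util=0.92, io_rate=18000,
                            avg_latency_ms=22.0)))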

Parallel NAS

The performance of traditional scale-out NAS has doubled in the past five years, while the amount of data generated and processed in a high-performance workflow may have increased by tens of thousands of times. In genomics, for example, the data generated by a single sequencer is growing 250x faster than traditional NAS performance.

You would think that adding a lot of flash would solve the problem. But it doesn’t, because the core architecture cannot scale to take advantage of it and is overburdened servicing every request from each new node, client or piece of software that is added. Further, traditional scale-out NAS typically relies on fixed hardware nodes that limit the customer’s choice between scaling performance and scaling capacity, are not space efficient, and are bottlenecked by traditional network protocols.

One way to solve this is to implement a parallel file system solution such as DDN GRIDScaler. It can start at 4U and a few hundred TB and scale to over 17 PB in two racks.

“DDN GRIDScaler is built on a parallel file system architecture that provides consistent low latency access to massive amounts of data via high performance clients,” said Laura Shepard, Senior Director Vertical Markets, DDN.
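The reason a parallel architecture scales is that clients read stripes of a file from several storage servers at once rather than funneling every request through a single NAS head. The Python sketch below illustrates that concept with a made-up stripe map and a placeholder fetch function; it is not GRIDScaler code.

# Conceptual sketch of a parallel, striped read (not DDN GRIDScaler code).
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stripe map: (server, object key) pairs in stripe order.
STRIPES = [("server-a", "file.0"), ("server-b", "file.1"),
           ("server-c", "file.2"), ("server-d", "file.3")]

def fetch_stripe(server: str, key: str) -> bytes:
    """Placeholder for a network read from one storage server."""
    return f"<data for {key} from {server}>".encode()

def parallel_read(stripes=STRIPES) -> bytes:
    """Fetch every stripe concurrently and reassemble the file in stripe order."""
    with ThreadPoolExecutor(max_workers=len(stripes)) as pool:
        parts = pool.map(lambda s: fetch_stripe(*s), stripes)
    return b"".join(parts)

if __name__ == "__main__":
    print(parallel_read().decode())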

Virtualized Storage Performance Boost

According to IDC, the digital universe could balloon to 44 zettabytes of information by 2020. This data explosion not only creates demand for new services but also reinforces the need for companies to optimize their storage infrastructure and capabilities. To unlock maximum performance in virtualized datacenters, organizations need to evaluate, understand, and optimize storage management.

SVA Software’s BVQ storage optimization solution is focused on solving storage virtualization challenges via visualization and heat-map analysis. It continuously collects data for on-demand and scheduled analysis, alerts on potential issues and helps to meet SLAs. In particular, BVQ provides deep visibility into the performance, utilization and health of an IBM virtualized infrastructure. Cost optimization features make it possible to drive high storage performance at low cost.

“BVQ is a comprehensive performance, capacity monitoring and analysis software for IBM’s Spectrum Virtualize family – IBM SVC, IBM FlashSystems, IBM Storwize, VersaStack – and all existing and new heterogeneous storage,” said Don Mead, Vice President of Marketing, SVA Software.
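As a conceptual illustration of heat-map analysis in general, and not of BVQ itself, the short Python snippet below buckets per-volume latency samples by hour so that hot spots stand out; the sample data and field names are assumptions made for the example.

# Illustrative heat-map aggregation (a generic sketch, not BVQ).
from collections import defaultdict

def build_heatmap(samples):
    """samples: iterable of (volume, hour, latency_ms) -> {volume: {hour: avg_ms}}."""
    sums = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))
    for volume, hour, latency_ms in samples:
        cell = sums[volume][hour]
        cell[0] += latency_ms                 # accumulate latency per cell
        cell[1] += 1                          # count samples per cell
    return {vol: {hr: total / count for hr, (total, count) in hours.items()}
            for vol, hours in sums.items()}

if __name__ == "__main__":
    data = [("vol1", 9, 4.0), ("vol1", 9, 6.0), ("vol1", 10, 18.0), ("vol2", 9, 2.5)]
    print(build_heatmap(data))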
