The usual fix for poor application design is to buy hardware. What I mean by poor application design is:
- Applications that make small sequential requests when they could be making fewer, larger requests.
- Applications that make large requests but do not use direct I/O (yes, buffered I/O might be a bit faster for the user, but it is far more work for the kernel).
- Applications doing random I/O when they could be doing sequential I/O.
- Applications doing I/O not aligned on disk and/or RAID sector boundaries.
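The first and last points above can be made concrete with a short sketch. This is a minimal Python illustration, not a benchmark: the 4096-byte sector size and the file sizes are assumptions, and real values depend on your device and RAID configuration. Opening with `buffering=0` makes each `read()` a separate system call, so the small-chunk reader issues roughly 2,000 syscalls for the same data the large-chunk reader gets in one.

```python
import os
import tempfile

SECTOR = 4096        # assumed device sector size; check your hardware
DATA_SIZE = 1 << 20  # 1 MiB of scratch data

# Create a scratch file to read back.
fd, path = tempfile.mkstemp()
payload = os.urandom(DATA_SIZE)
os.write(fd, payload)
os.close(fd)

def read_small(path, chunk=512):
    """Poor pattern: many small sequential reads, one syscall each."""
    data = bytearray()
    with open(path, "rb", buffering=0) as f:  # buffering=0: every read() hits the kernel
        while True:
            block = f.read(chunk)
            if not block:
                break
            data += block
    return bytes(data)

def read_large(path, chunk=1 << 20):
    """Better pattern: the same data in far fewer, larger requests."""
    data = bytearray()
    with open(path, "rb", buffering=0) as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            data += block
    return bytes(data)

def align_down(offset, boundary=SECTOR):
    """Round an offset down to a sector/stripe boundary."""
    return offset - (offset % boundary)
```

Both readers return identical bytes; the difference is entirely in how much work the kernel does. The `align_down` helper shows the offset half of the alignment problem: on Linux, direct I/O (`O_DIRECT`) additionally requires the buffer address and transfer length to be aligned, which is why well-behaved applications round their offsets and sizes to sector boundaries before issuing requests.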
I am sure we could all provide other examples of poor application design choices. The problem is that the file system and operating system usually have no choice but to do whatever the application asks, however inefficient, because there is little to no communication along the data path. The usual solution is to throw hardware at the problem. That works up to a point, and with the advent of SSDs it works a bit farther down the path, given their lower latency, often higher bandwidth, and ability to handle far more IOPS.
The question then becomes: is using SSDs to solve an application design problem the right solution? The hardware vendors, of course, will tell you yes, and in the short run they might be correct; buying a few SSDs can be less expensive than paying to redesign applications. In the long run, though, throwing hardware at the problem has its limits, and when you hit them you will have no choice but to rewrite your applications.
At that point you will tally up all the money spent over the years throwing hardware at the problem, realize you are still faced with the cost of the rewrite, and you will not be happy.