Successful capacity planning is a prerequisite to supplying storage on demand in a storage utility model. Is capacity planning black magic, good luck, science, or all three?
By Dick Benton
The storage utility model’s basic premise is to provide storage in much the same way as electricity, phone services, cable TV, and water are delivered. Utilities serve as centralized or consolidated suppliers that predict usage, loads, and new connections so that the service is always (or nearly always) available without interruption.
The storage utility model can have problems similar to those any utility experiences. But a business unit's tolerance for loss of storage, and therefore for loss of applications and data, is significantly lower than the general public's tolerance for the occasional blackout.
So how does a prudent storage manager do capacity planning? Let's take a look at a mature utility industry (albeit one built on a creaking infrastructure): the electric grid. How does an electric company plan capacity? Looking at projected new home construction, projected industry construction, and current and past usage by season allows electric companies to make reasonably accurate projections for the future. Their dilemma is that efficiency requires close-to-capacity utilization. In addition, any capacity need that triggers new power plant construction brings on a host of problems, not the least of which are investment levels and lead time to online capacity.
A notable difference for a storage utility is that service levels for on-demand usage require under-capacity utilization. How can you apply the successful aspects of the utility model to storage? There are many similarities, as well as important dissimilarities. The electric company has a strong handle on past usage patterns spanning years, but is hampered by long lead times for building and commissioning new plants. Most IT enterprises, in comparison, have little usage history, but can bring a new array online quickly. The two industries do have many parallels: applications rather than customers, storage arrays instead of power plants, and storage fabrics instead of the electric grid. And, as with the electric company, consumers vary significantly in size.
Consuming storage in large gulps
Fortune 500 companies consume storage in 100TB gulps and have accordingly significant purchasing leverage. A local insurance company may consume storage in 25TB chunks with correspondingly less leverage. And some organizations nibble away at storage in 5TB chunks and have very little leverage.
If you consume storage in large gulps, it is highly likely that you can do away with detailed capacity planning altogether and strike an agreement with your vendor(s) whereby multiple terabytes are installed on-site but billed only as they are used. The smaller the organization, however, the more work is required for capacity planning and consequent purchase planning. Without the ability to activate storage on demand from a pre-installed base, capacity planning must account for the breakpoint at which an entirely new array is needed just to support another 500GB.
A number of capacity planning options can be combined or modified to suit your specific circumstances. First, we will make an assumption that Pareto’s law applies to storage and that 20% of applications are responsible for 80% of the disk utilization. Three steps give us a basis for capacity planning:
- Identify the “20% applications” and assess their current storage utilization. If possible, look at how much storage these applications used last year to get an idea of growth, although that data is often not available. (Another option might be to examine any application that consumes more than 5% of storage and focus the planning effort around those applications.)
- Determine what attribute triggers the storage growth. Talk to the business analyst or database administrator and get an understanding of what drives the storage requirement of the application. Often it’s either a transaction or a dollar value. With this information, the business unit can give you some of its budget numbers or sales projections to determine future expectations.
- Attempt to define best-case and worst-case assumptions for data growth in the coming period. Again, the business analyst or business unit manager can help.
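The three steps above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed method: the application, the bytes-per-transaction figure, and the transaction forecasts are all hypothetical stand-ins for numbers you would get from the DBA and the business unit.

```python
# Sketch: project next-period storage need for one "20%" application from a
# business driver. All figures below are hypothetical assumptions.

def project_storage_gb(current_gb, bytes_per_txn, txns_forecast):
    """Current utilization plus the growth implied by forecast transactions."""
    growth_gb = bytes_per_txn * txns_forecast / 1e9
    return current_gb + growth_gb

current_gb = 400          # today's utilization for the application (step 1)
bytes_per_txn = 2_000     # storage consumed per transaction, per the DBA (step 2)

# Step 3: best-case and worst-case transaction forecasts from the business unit
best_case = project_storage_gb(current_gb, bytes_per_txn, txns_forecast=50_000_000)
worst_case = project_storage_gb(current_gb, bytes_per_txn, txns_forecast=120_000_000)

print(f"best case:  {best_case:.0f} GB")   # 400 + 100 = 500 GB
print(f"worst case: {worst_case:.0f} GB")  # 400 + 240 = 640 GB
```

The value of even a crude model like this is that the driver and the forecasts are explicit, so when the estimate is wrong you can see which input was wrong.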
Armed with this basic information, you can now look at the type of data these applications use. Usually it is structured data (e.g., a database). The penalty for running out of space is so severe that there is a natural tendency to allocate everything in sight.
Negotiate with DBAs and business analysts for an agreement on a suitable over-allocation to mitigate the risk of running out of space. The more important the application, the less risk the company can afford.
Working with the 20% applications list, you can capture information on the criticality or “business impact analysis” ranking to understand how much risk is tolerable for the database on each application. If you were to rank the risk into three tiers, perhaps the top tier would allocate 100% over worst-case growth, tier two might provide a 50% over-allocation, and tier three might provide 10% over worst case. The key is that these decisions be formalized in a matrix and supported by all the stakeholders. Your estimates might be wrong, but at least you know how you arrived at them, which implies you should be able to improve your decision-making over time.
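One way to formalize such a matrix is simply to capture it as data, so the stakeholder-agreed percentages sit in one reviewable place. The tier percentages below follow the three-tier example in the text; the application names and worst-case figures are hypothetical.

```python
# Sketch: the risk-tier matrix as data. Tier 1 allocates 100% over the
# worst-case growth estimate, tier 2 allocates 50%, tier 3 allocates 10%.

OVER_ALLOCATION = {1: 1.00, 2: 0.50, 3: 0.10}  # tier -> fraction over worst case

def planned_allocation_gb(worst_case_gb, tier):
    """Worst-case growth estimate padded by the tier's agreed over-allocation."""
    return worst_case_gb * (1 + OVER_ALLOCATION[tier])

apps = [
    # (application, worst-case GB, risk tier) -- hypothetical entries
    ("order-entry", 640, 1),   # mission-critical: least tolerable risk
    ("claims",      300, 2),
    ("reporting",   120, 3),
]

for name, worst_gb, tier in apps:
    print(f"{name:12s} tier {tier}: plan for "
          f"{planned_allocation_gb(worst_gb, tier):.0f} GB")
```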
E-mail and spam
E-mail is another key area affecting storage utilization, and it seems to be on an exponential growth path. In an e-mail system, you can plot the growth in e-mail capacity over the past 12 months. This plot can then be extended into the future. It may be prudent to add a certain percentage (say, 10% per month) because you don't really know how fast the curve will grow. Again, although you might not know the correct number, striking a specific number and monitoring it will provide a better basis next time around.
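The extend-the-curve-plus-a-buffer idea can be sketched as a compound-growth projection. The current mail-store size and the fitted monthly growth rate are assumptions; the 10% buffer is the figure suggested above.

```python
# Sketch: extend observed e-mail growth into the future with a safety buffer.
# current_gb and monthly_growth are hypothetical; in practice you would fit
# monthly_growth from the last 12 months of measurements.

def project_email_gb(current_gb, monthly_growth, buffer, months):
    """Compound the observed monthly growth rate plus a per-month buffer."""
    rate = monthly_growth + buffer          # e.g. 5% observed + 10% buffer
    return current_gb * (1 + rate) ** months

current_gb = 200        # today's mail-store size (assumed)
monthly_growth = 0.05   # fitted from the historical plot (assumed)
buffer = 0.10           # extra margin because the curve may steepen

for m in (3, 6, 12):
    print(f"month {m:2d}: plan for "
          f"{project_email_gb(current_gb, monthly_growth, buffer, m):.0f} GB")
```

Monitoring actuals against this projection each month tells you whether the fitted rate, or the buffer, needs revising next cycle.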
What about the impact of viruses or spam, where the entire year's allocation may be consumed within the space of a few minutes? This situation is difficult to plan for effectively. It may be useful to establish a boundary, institute a policy, and notify the organization that if inbound e-mail exceeds that boundary, e-mail service will be disabled.
There are no easy answers. Our advice is to develop an empirical base for storage capacity planning where your assumptions are documented and monitored.
Over time, this process will become more sophisticated based on your experience with the organization’s data growth patterns. Understanding the business impact from an application disablement (caused by lack of storage) allows you to make an intelligent assessment in partnership with key stakeholders, balancing risk against investment in storage capacity.
Dick Benton is a senior consultant with GlassHouse Technologies (www.glasshouse.com) in Framingham, MA.