The economics of blade computing involves trade-offs, including storage-related issues.
By Dave Vellante
The idea behind blade computing is a good one: Strip out and share key components such as power, cooling, and storage across multiple servers; reduce costs; and simplify the IT infrastructure.
If every application workload fit well into a blade environment, we wouldn't need any other computing approach; unfortunately, that's not the case. Blade computing works great in certain situations but can actually increase costs in others. The allure of blade computing is compelling, and aggressive marketing by blade vendors can make blades sound like the logical solution, and often they are. In general, however, users must become more aware of the benefits and drawbacks of blade computing and fully understand the marginal costs and marginal benefits. This is especially true in smaller environments where companies don't have the critical mass of "blade-friendly" applications to exploit the economics of blade servers. Often, the incremental cost of the chassis and the drawbacks of sole-sourcing outweigh the benefits and can actually increase total costs by as much as 2x to 3x.
This article answers the following questions:
- Where do blade servers fit, and where do they not fit?
- What economic value can blades bring?
- What are some of the storage best practices for blade servers?
- What does the future hold?
What is blade computing?
Blade computing combines blade servers with the enclosures that house them. Blade servers are very compact, high-density servers, each with its own CPU and memory. The number of components on each server is reduced by sharing power, cooling, cables, networking, storage, and consoles with the other servers that reside in an enclosure, or chassis. This approach squeezes more cost out of distributed computing by spreading the cost of these shared components across all the servers in an enclosure, dramatically reducing the part count on each server.
Blade technology drives simplicity and commonality into IT infrastructures, making them easier to manage, more space- and power-efficient, and more resilient if designed properly. Moreover, by separating out storage and allowing multiple blade servers to access storage pools, organizations can build more-resilient infrastructures that can better withstand failures commonly seen in commodity disks and operating systems. As always, there are caveats.
Where does it make sense?
Blade servers are excellent solutions for applications such as Web serving, e-mail, and certain analytics applications, and especially for workloads that are "parallelizable" (in other words, applications that can be easily spread across many different servers). But more-complex workloads with higher transaction rates and update activity are often not well-suited to blade architectures.
Very large transaction-processing workloads, especially those with high write-to-read ratios, will continue to demand specially configured hardware that cannot be standardized the way blades can, and fitting them into a blade environment will remain a complex and challenging undertaking.
However, for a broad array of applications, such as Web serving, small databases, and file-oriented and other unstructured workloads, blades can be an excellent fit, with some caveats that we'll explore in the best-practices section of this article.
What’s the economic value?
Storage is an important consideration in blade economics. Stripping storage out of the blade server improves packaging, reduces the cost per blade, and often brings other flexibility benefits, but providing access to external storage is sometimes cost-prohibitive, especially with block-based storage using SANs. Often, the cost of SAN infrastructure cannot be justified in blade environments. File-oriented workloads appropriate for NAS are more economically feasible, and newer alternatives such as iSCSI will ultimately lower the cost of consolidating block-based storage for blades.
The storage issues underscore the paradox of blade computing. On one hand, blade servers make sense from a packaging standpoint, but they often bring other dependencies that bog down the business case.
It’s useful to look at the economic benefits of blade infrastructure in two dimensions:
- What hard dollar savings can be realized from blade computing?
- What incremental business benefits (“soft dollars”) can blade computing bring?
While it’s always important to assess hard cost savings, experience shows that the real benefits with blade computing are seen in terms of better application availability and much faster deployment of change, specifically related to the provisioning of new server capacity.
The degree of cost savings, or reduction in total cost of ownership, will vary as a function of three factors:
- The number of workloads that are candidates for blade computing;
- Best practices used to exploit emerging hardware and software technologies; and
- Current levels of server and storage consolidation that have been achieved.
Many small businesses simply don’t have enough “blade-able” applications and therefore are not good candidates for blade computing. These shops often are better served by staying with traditional server technology and shopping around for the best deal as their requirements dictate. Best practice in blade computing drives commonality into the server infrastructure (e.g., same server speed, memory, networking standards, external storage, and the same vendor), and this approach isn’t always the best for all businesses, especially those that want a dual-source strategy.
Also important is the degree of server and storage consolidation. If servers and storage today are highly underutilized, distributed, and not being professionally managed by IT people, then greater savings will be realized with blade computing than if servers and storage are already physically collocated in a data center. In most businesses, this physical movement into a centralized location has already taken place, but not always. The less efficient your server and storage operations are today, the more you’ll get out of blade computing or any other consolidation strategy.
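These factors can be combined into a rough back-of-envelope savings model. The sketch below is purely illustrative: every rate and dollar figure is a hypothetical assumption, not data from the article's cases, and should be replaced with numbers from your own environment.

```python
# Rough five-year TCO savings model for a blade consolidation project.
# All rates and dollar figures are hypothetical placeholders.

def five_year_savings(num_bladeable_servers,
                      cost_per_server=5_000,        # assumed annual cost per server
                      capex_env_savings_rate=0.05,  # capital + environmentals savings
                      fte_cost=100_000,             # assumed fully loaded cost per FTE
                      ftes_saved=0.5,
                      chassis_cost=20_000):         # incremental enclosure cost
    """Net five-year savings; a negative result means blades cost more."""
    annual = (num_bladeable_servers * cost_per_server * capex_env_savings_rate
              + ftes_saved * fte_cost)
    return annual * 5 - chassis_cost

# A small shop with few blade-friendly workloads sees modest returns;
# scale makes the same percentage improvements far more meaningful.
small = five_year_savings(35, ftes_saved=0.5, chassis_cost=20_000)
large = five_year_savings(250, capex_env_savings_rate=0.10,
                          ftes_saved=3.0, chassis_cost=60_000)
```

The model captures the article's point: with few "blade-able" servers, the fixed chassis cost eats into marginal percentage savings, while larger installations benefit from critical mass.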
Here are two representative examples that can help quantify the hard dollar benefits of blade computing. Case 1 is a smaller organization that sees blades as a more logical approach. Case 2 is a larger firm that’s trying to wring greater cost savings out of server and storage infrastructure. In both cases, the predominant storage approach was to provide blades access to NAS.
Case 1: A mid-sized company that targets blade computing at 1,700 online-services users, predominantly served with traditional collocated servers.

Environment:
- 35 servers
- 4TB of disk storage
- Server growth of 25% per year
- Storage growth of 50% per year

Estimated savings:
- ~5% on capital costs and environmentals
- ~1/2 FTE
- ~$350,000 over five years

Observations:
- The workloads at this organization are not conducive to huge cost savings.
- Small improvements in utilization and environmentals lower capital costs.
- Improvement in staff productivity, but not enormous.
Case 2: A large telecommunications company that provides Web and file services to more than 25,000 users, predominantly served with traditional rack-mounted servers and direct-attached storage (DAS).

Environment:
- 250 servers
- 35TB of disk storage
- Server growth of 15% per year
- Storage growth of 55% per year

Estimated savings:
- ~10% on capital costs and environmentals
- ~3 FTEs
- ~$1.2 million over five years

Observations:
- Larger-scale installations benefit from critical mass.
- Utilization and environmentals improvements are more meaningful.
- Staff productivity is substantially better.
- Still well below savings seen on many traditional consolidation projects.
Neither of these case examples would cause a CIO to run to the CFO and urgently make a case for blades. In both cases, the capital cost and environmentals savings were marginal, and in Case 1 they barely offset the incremental cost of enclosures. Improved IT staff productivity was significant, but not nearly as significant as in many traditional consolidation projects. And as we'll see in Case 3, below, cost savings are not always the predominant driver with blade computing and associated storage.
Here is the conundrum of blade computing: Typically, the applications and data that can benefit the most from a blade approach are not the complex applications that are extremely labor-intensive, as is often seen in traditional server consolidations. The ones that are “blade-friendly,” unfortunately, are not going to allow you to take huge costs out of your infrastructure. But every little bit helps.
Business side benefits
It’s the non-IT benefits, or so-called “soft dollars,” where use cases suggest the greatest benefits of blade computing. Specifically, the advantages of creating a common server infrastructure that can be virtualized and supported with shared, centralized storage can be enormous. Consider the following case:
Case 3: An Internet services provider with very high growth, serving more than 10 million subscribers. The company had a problem: Service levels for subscribers were poor, and this was constricting the company's growth. Blades were the answer.

The IT staff determined that the major problem with service levels related to commodity computing, namely OS failures and the inability to quickly fail back to previous versions of software. Disk failure and complex operating procedures were also culprits. The solution involved replacing rack-mounted and stand-alone servers that used internal storage with a virtualized blade computing infrastructure and consolidated NAS-SAN for file- and block-based storage.

Environment:
- 500 servers
- 100TB of disk storage
- Server growth of 10% per year
- Storage growth of 70% per year

Results:
- Application availability improved from 97% to 99.8%.
- Time to provision new servers, and consequently new subscriber services, dropped from days to minutes.
- The company experienced major improvements in system stability during the installation of new operating systems and software releases.
- IT infrastructure was completely eliminated as an inhibitor to business growth.
- The company estimated this change was worth more than $15 million, measured in improved customer satisfaction, increased revenue, and employee productivity, all realized in less than one year.

Observations:
- Blade computing allowed servers to share boot devices.
- The failure of commodity disks and operating systems was addressed by blade computing.
- The approach removed IT as an impediment to corporate growth.
The point of these three cases is that although there are some cost savings to be realized with blade computing, the real business benefits stem from simplifying change management and addressing fundamental IT challenges associated with managing commodity operating systems and components.
Case 3 underscores this point: the root causes of poor service levels were OS failures, commodity disk failures, and complex operating procedures, and the remedy was not cost cutting but a virtualized blade infrastructure backed by consolidated external storage.
Storage best practices
A clear point emerged at a recent Wikibon.org (www.wikibon.org) research meeting of storage and server experts, consultants, and practitioners: Blade computing works best when organizations apply a "one-size-fits-all" strategy, meaning all the blades in a chassis are as similar as possible and, ideally, identical. This means the same CPU, same speed, same memory, same everything, including the same vendor. By standardizing on blade servers, operating procedures can assume that every component in the chassis is identical, and IT operations doesn't have to worry about the sensitivity of a particular server component to an application's unique characteristics. This makes blades more swappable, easier to manage, simpler to back up, and cheaper to acquire and inventory. Greater diversity within the chassis defeats many of the benefits of blade computing.
If, for whatever reason, you don’t want to enforce this degree of commonality, it is advisable to take an n and n-1 approach to blade server technology, which means standardizing on a couple of blade server types, one current technology and one current-minus-one generation, and replacing existing server technologies every few years to keep the infrastructure simple. The business benefits of commonality, as seen in the case examples, will outweigh any incremental hardware costs associated with this approach.
The other best practice cited by Wikibon practitioners calls for using virtualization to separate storage from processors and ensure there is no fixed association of an application with a physical server. By using virtualization engines, system software can be centralized and version control can be managed.
This means servers can recover from OS failures using a central repository of OS versions, enabling very fast problem resolution by, for example, reverting to a previous version of an OS. By virtualizing the storage, data can be striped across multiple arrays so that no single disk failure will cause applications to crash. And by separating storage from servers, recovery files such as journals that preserve the state of an application can be accessed by other servers, minimizing recovery time.
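The version-control idea can be sketched as a tiny model of a central image repository from which any blade boots, with one-step rollback to the previous known-good release. This is an illustrative sketch only; the class and names are hypothetical and do not represent any real product's API.

```python
# Minimal sketch of version-controlled OS images held in shared storage,
# so any blade can boot the current image and revert quickly on failure.
# All names here are illustrative, not from a real virtualization product.

class ImageRepository:
    def __init__(self):
        self._versions = []          # ordered list of (tag, image) pairs

    def publish(self, tag, image):
        self._versions.append((tag, image))

    def current(self):
        return self._versions[-1]

    def rollback(self):
        """Drop the latest image and return the previous known-good one."""
        if len(self._versions) < 2:
            raise RuntimeError("no earlier version to revert to")
        self._versions.pop()
        return self._versions[-1]

repo = ImageRepository()
repo.publish("os-2.0", "golden-image-2.0")
repo.publish("os-2.1", "golden-image-2.1")   # problematic new release
tag, image = repo.rollback()                 # every blade reverts in one step
```

Because the images live in shared storage rather than on each blade's internal disk, the rollback is a single central operation instead of a per-server rebuild.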
The action item here for IT is to configure blades with storage external to the servers and ensure the servers have no fixed association with applications. Focus blade virtualization projects on creating simple, robust environments that can withstand the failure of commodity components, and don’t worry so much about saving processor cycles.
The future of blades
Most server architectures today, including blades, generally assume a one-to-one correspondence between an application and a server. Web service delivery, however, is driving demand for new blade server architectures, and vendors are beginning to re-think the traditional definitions of servers. Blade computing can be an underpinning of new approaches to architecting network-based systems where the presumption of frequent component failure and highly distributed computing resources are fundamental to designs.
By using inexpensive servers and assuming components will fail, the industry is being led to an architecture that spreads operating systems, file systems, applications, and data across entire server infrastructures worldwide, ensuring lightning-fast response times and always-on application availability. While, for the most part, architectures being deployed today presume a one-to-one relationship between server and application, Web services and potentially blade computing are paving the way for new growth opportunities by spreading everything everywhere and making worldwide grid computing a reality.
The storage implications of this trend are enormous. Service providers and IT organizations will increasingly use virtualization technologies at both the back end (servers to disk) and the front end (applications and operating systems) to "over-provision" virtual capacity, allowing applications to think they have access to more storage than is physically available.
This means IT can get innovative (assuming appropriate chargeback models and metering software are in place) and "double dip" by charging users for the virtual capacity that is provisioned and "reselling" that capacity to other users, all the while transparently reporting to users on what capacity is available and exactly how much is being consumed.
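The over-provisioning and chargeback mechanics can be illustrated with a minimal thin-provisioning sketch. Everything here is a hypothetical toy model (class names, the $100/TB rate, the capacity figures), not a real storage product's interface.

```python
# Hypothetical thin-provisioning pool: users are provisioned (and billed for)
# virtual capacity whose total can exceed the physical capacity installed.

class ThinPool:
    def __init__(self, physical_tb, rate_per_tb=100):   # assumed $100/TB/month
        self.physical_tb = physical_tb
        self.rate_per_tb = rate_per_tb
        self.provisioned = {}        # user -> virtual TB promised
        self.consumed = {}           # user -> TB actually written

    def provision(self, user, virtual_tb):
        self.provisioned[user] = self.provisioned.get(user, 0) + virtual_tb

    def write(self, user, tb):
        # physical capacity is only drawn down when data is actually written
        if sum(self.consumed.values()) + tb > self.physical_tb:
            raise RuntimeError("pool exhausted: add physical capacity")
        self.consumed[user] = self.consumed.get(user, 0) + tb

    def monthly_bill(self, user):
        # the "double dip": charge for provisioned, not consumed, capacity
        return self.provisioned.get(user, 0) * self.rate_per_tb

pool = ThinPool(physical_tb=100)
pool.provision("dept_a", 80)
pool.provision("dept_b", 80)      # 160TB sold against 100TB physical
pool.write("dept_a", 30)
bill = pool.monthly_bill("dept_a")
```

The same physical terabyte backs promises to multiple users, which is exactly the "selling the same resource twice" twist, and metering (`consumed`) is what keeps the reporting to users transparent.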
Now there’s a twist: selling the same resource twice and making users happier, to boot.
Dave Vellante is a co-founder of The Wikibon Project, an open community of practitioners, consultants, and researchers dedicated to improving technology adoption. Every Tuesday, the Wikibon community meets in an open forum at 12:00 EST to discuss critical issues in the storage industry in Peer Incite Research Meetings. For more information, go to www.wikibon.org.