Virtualization is not a project that should be isolated in an IT technology silo. It affects the entire organization.
By Rob Latimer
Virtualization is one of the hottest topics in the IT world these days, due to compelling stories about return on investment (ROI), rapid provisioning capabilities, and application deployment flexibility. Although most of the focus is on server virtualization and its consolidation benefits, virtualization at the network, file, and/or storage layers may also provide a variety of potential benefits. Virtualization can drastically reduce IT acquisition costs as well as ongoing maintenance costs, facilitate disaster-recovery (DR) strategies, and support critical (but typically painful) data-migration activities as well.
However, several recent surveys have indicated end users’ dissatisfaction with these value propositions: Customers that implement virtualization often cannot quantify or meet the ROI numbers they expected, and in many ways are finding management of the virtualized environment more complex than originally envisioned.
Obstacles to success
What is the root cause of this dissatisfaction and these missed expectations? To best answer that, we can start with another question: What did these organizations do to assess how ready they were for virtualization prior to implementation? Most companies have greatly heightened their focus on disaster preparedness, but similar rigor has been largely overlooked during the implementation of virtualized environments. One should note that this lack of preparedness is not limited to virtualization implementations: It’s often seen with a variety of other IT projects as well, including enterprise resource planning (ERP), customer relationship management (CRM), and storage resource management (SRM) implementations.
Companies encounter the first obstacle to success in a server virtualization implementation when they fail to set proper expectations with both management and the operations team in the implementation trenches. The misunderstanding typically starts with a lack of context: how the virtualization implementation will fit into the overarching IT strategy, from both a short-term and a long-term perspective (assuming there is a strategy that’s documented and agreed to).
Fundamentally, no IT project should move forward unless it is driven by a defined business requirement. When you’re setting expectations there should be a defined and documented strategy statement that 1) clearly articulates the business objectives, the critical success factors, and the expected ROI; 2) provides a comprehensive understanding of the costs, as well as the real constraints and risks; and 3) achieves consensus from the organization’s key stakeholders, articulated up the chain of command. This strategy statement is critical for the success of a virtualization initiative.
Now that we’ve covered obstacle number one, let’s discuss how setting expectations fits into the overall virtualization strategy, and what business requirements the virtualization environment will support. It’s important in the early stages to clearly identify the key stakeholders, including C-level sponsor(s), application owners, server architects, LAN/WAN architects, and data-center managers/operators. Early involvement and participation by these stakeholders, leading to buy-in, is critical to success.
The second obstacle to a successful virtualization implementation involves outside influence, or the gap between vendors and the client. The gap between perception and reality often begins with the vendors, who are more than likely to downplay the specifics regarding the level of effort required to implement their technology, or the number of hours required to properly test and develop new operating procedures or to modify existing ones.
Internally, this misguided enthusiasm may also be propagated by IT personnel. Many times these projects are sold via a “dollar figure per hour of downtime,” which is fine as a single metric. However, this approach overlooks other benefit areas such as power, cooling, and floor space, as well as the questions of whether the implementation team is fully capable of success and whether the potential impact on your operational processes has been fully considered.
Given these common obstacles, what benchmarks are available to determine the “virtualization readiness” of your organization? We will offer several below.
Let’s start by examining some of the impact areas of adopting a virtualized IT environment. The first question you should consider is: How mature does the governance, policy, and process framework need to be to handle the changes associated with virtualization?
To answer this question you first need to understand whether your organization is more technology-centric or service-centric. In technology-centric organizations, IT projects are more often driven by an “inside-out” approach, in which the IT organization recommends technologies based on the benefits it perceives the business will derive from implementing them. In a service-centric organization the approach to implementing new technology is reversed. A service-centric organization first looks at the business problem(s) to be solved and the technologies that might best solve these business problems, and then makes recommendations based on this analysis.
Many companies have embarked on an ITIL initiative, which at its core is designed to move the IT organization further along the maturity curve from technology-centric to service-centric. If you are implementing ITIL, then identifying where your company fits along the ITIL maturity curve may help determine how the virtualization project needs to be justified internally. If the organization is moving more rapidly to a service-centric model, then the benefits to the business must shine through in the justification analysis. Any cost/benefit model must include business objectives (external to the business and internal to IT), critical success factors (ROI, TCO, provisioning process cycle time efficiency), and potential risks or constraints (budget shortfalls, staff capacity for change, training).
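To make the cost/benefit idea concrete, a back-of-the-envelope consolidation model might look like the following sketch. All figures, the consolidation ratio, and the function itself are hypothetical illustrations, not a substitute for a full TCO analysis (which would also cover power, cooling, floor space, licensing, and training).

```python
# Hypothetical, simplified cost/benefit sketch for a server-consolidation
# project. Every figure below is an illustrative assumption.

def virtualization_roi(physical_servers, consolidation_ratio,
                       cost_per_server, annual_opex_per_server,
                       project_cost, years=3):
    """Return (total savings, ROI) over the given horizon."""
    hosts_needed = -(-physical_servers // consolidation_ratio)  # ceiling division
    servers_avoided = physical_servers - hosts_needed
    capex_savings = servers_avoided * cost_per_server
    opex_savings = servers_avoided * annual_opex_per_server * years
    savings = capex_savings + opex_savings
    roi = (savings - project_cost) / project_cost
    return savings, roi

savings, roi = virtualization_roi(
    physical_servers=100, consolidation_ratio=10,
    cost_per_server=5_000, annual_opex_per_server=1_200,
    project_cost=250_000)
print(f"3-year savings: ${savings:,}  ROI: {roi:.0%}")
```

Even a toy model like this forces the critical-success-factor conversation: which inputs are defensible, and who signs off on them.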
Any proposed benefits of a virtualization implementation must be derived and communicated in the context of the service provider maturity level with appropriate weighting applied to those benefits that fall into a technology-centric bucket and those that fall into a service-centric bucket. For example, technology-centric benefits might include lower capital costs for server purchases, decreased data-center footprint, and lower operational costs. Service-centric benefits might include shortened provisioning cycle times leading to more-rapid application deployment, which translates to competitive advantage for a particular line of business.
It’s important to know how the virtualization technologies will affect all process areas, including not just the obvious ones such as SAN configuration, provisioning, and disaster recovery, but all processes along the ITIL spectrum of service support (release, change, incident, problem management, etc.) and service delivery (capacity, availability, service level, and financial management, etc.). Also, do you have defined business process criticality documentation, supported by underlying application criticality documentation, which in turn defines the underlying architecture and infrastructure? This documentation set may guide the determination of which environments are the best targets for virtualization. Typically, this documentation is generated through an overarching data-classification effort, which defines the value of a given business process’s data through a Business Impact Analysis related to the cost of downtime of that process. While data-classification initiatives can be time-consuming and difficult to complete, “proof-of-concept” approaches limited to a few business processes can expose large potential benefits in risk mitigation and compliance/regulatory areas.
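The Business Impact Analysis ranking described above can be illustrated with a toy example: rank business processes by estimated cost per hour of downtime, then pick low-impact processes as safe pilot targets and high-impact ones as DR priorities. The process names and dollar figures here are invented for illustration only.

```python
# Toy Business Impact Analysis: rank hypothetical business processes by
# estimated downtime cost. All names and figures are invented.

processes = [
    {"process": "order entry",   "cost_per_hour": 50_000, "apps": ["oms"]},
    {"process": "payroll",       "cost_per_hour": 8_000,  "apps": ["hr-suite"]},
    {"process": "internal wiki", "cost_per_hour": 500,    "apps": ["wiki"]},
]

# Highest downtime cost first: strongest candidates for robust DR planning.
by_impact = sorted(processes, key=lambda p: p["cost_per_hour"], reverse=True)

# Lowest-impact processes make the safest targets for a virtualization pilot.
pilot_candidates = [p["process"] for p in reversed(by_impact)]
print(pilot_candidates)
```

The point is not the code but the discipline: until downtime costs are written down per process, “which environments do we virtualize first?” has no defensible answer.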
The second question you need to consider is: How does the implementation of a virtualized IT environment impact the organization, and what level of training is required to really make this work? As mentioned earlier, many companies can trace their dissatisfaction with virtualization implementations to the lack of internal experience with the technology itself during design and implementation. Considerations here must include all functional teams tasked with implementing, testing, operating, administering, troubleshooting, and repairing the virtualized environment. The assumption organizations make is that it’ll be easy, when in fact substantial training and knowledge are imperative to the project’s success.
A virtualization project has broad and deep personnel effects. Before any project gets underway, it’s critical to ask the hard questions related to personnel readiness and any gaps therein, and to accurately estimate the training time required to close those gaps. Clear identification and communication of roles and responsibilities for the various functional groups before, during, and after the implementation must be accomplished to ensure success. Project management, server engineering, database administration, technical documentation, network engineering, storage, backup, and disaster-recovery engineering, performance and capacity planning, application ownership, and procurement personnel must all be involved at the beginning of the project.
Question number three involves the technology impacts a virtualized environment will have on the overall organization. A virtualization implementation impacts server-farm design, including storage, clustering, network connectivity, and switching, and this will drive the requirement for new or updated operating system “templates.” The relative maturity of your IT standards and documentation is important to understand, as work will need to be done to fit the virtualization components into the existing architecture. The virtualization components must also fit within corporate guidelines for new technology deployment and life cycle, including potential retirement of legacy equipment.
In addition, there must be a defined approach to ongoing monitoring and reporting on the virtual machines, an area that is often overlooked but highly necessary. Salespeople may not touch deeply on this area as it can be an arduous process, but the implementation team must understand which tools currently in the environment provide application-to-host-to-server-to-storage mapping, such as an SRM tool, or overarching IT environment management tools such as Hewlett-Packard’s OpenView or CA’s UniCenter. This ongoing monitoring and reporting will provide vital feedback on performance, capacity planning, service-level management, and ROI.
When you design your monitoring framework, be sure to include visibility to the hosts, not just the virtual machines themselves, and the ability to monitor and report on the virtual environment in the same way that you report on the physical environment. This is particularly important in those organizations that are more mature and service-centric, where service-level agreements (SLAs) are required. SLAs related to the virtualized infrastructure will need to be developed and adhered to, and the monitoring toolset and its operating policies, processes, and procedures are at the crux of this adherence.
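As a sketch of the application-to-host-to-storage visibility discussed above (not tied to any particular SRM or monitoring product), an inventory might maintain a simple mapping from applications through VMs to physical hosts and storage arrays. Every name here is hypothetical.

```python
# Minimal illustrative inventory of the relationships a monitoring or SRM
# tool must expose: applications -> VMs -> hosts -> storage arrays.
# All identifiers are hypothetical.

vm_to_host = {"vm-web01": "esx-host-a", "vm-db01": "esx-host-a",
              "vm-app01": "esx-host-b"}
host_to_storage = {"esx-host-a": ["array-1"],
                   "esx-host-b": ["array-1", "array-2"]}
app_to_vms = {"crm": ["vm-web01", "vm-db01"], "erp": ["vm-app01"]}

def footprint(app):
    """Return the physical hosts and storage arrays an application depends on."""
    hosts = {vm_to_host[vm] for vm in app_to_vms[app]}
    arrays = {a for h in hosts for a in host_to_storage[h]}
    return hosts, arrays

hosts, arrays = footprint("crm")
print(hosts, arrays)
```

A mapping like this is what lets host-level monitoring answer the SLA question “which business applications does this failing host affect?” rather than reporting only on individual VMs.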
The last question you’ll need to think about is: How do you successfully deploy your virtualization strategy? If you have followed the strategies above, you should have the foundational design or architecture for a successful project. Now you must take this and the deployment plan to all of the key stakeholders and start talking. Schedule collaborative sessions to discuss and agree on aligning the stages of the project with the organization’s ability to absorb change, and to identify dependencies within the virtualization project and how they affect other ongoing projects.
This article has covered some of the obstacles to success that must be considered when an organization undertakes a virtualization effort. We’ve also touched on the potential impacts of a virtualization project on an organization’s people, processes, and technology, from the perspective of preparedness and relative maturity as a service provider. It’s clear that although virtualization implementations can have far-reaching consequences and do require appropriate due diligence in understanding the work at hand, with solid preparation, success can be achieved.
Organizations must consider virtualization as a strategic component of the long-term infrastructure, and not pigeon-hole it as a point solution. Because the potential benefits are so significant, having a diligent, big-picture view from the start is essential to fully realizing the benefits of the virtualized environment.
Rob Latimer is principal consultant for GlassHouse Technologies (www.glasshouse.com).