Server virtualization has proven to be a great change agent for wringing out costs and consolidating physical server infrastructure for greater operating efficiency. Server virtualization enables administrators to pool compute resources and co-host multiple operating environments within the same physical server. Given the ease and speed of creating and deploying virtual machines, organizations quickly encounter virtual machine sprawl, in which virtual servers proliferate throughout the infrastructure.

Managing all of these virtual servers, each complete with its own guest OS and applications, requires the same amount of effort as managing a physical server and its operating environment. Physical servers, virtual servers, networking, and storage must all be managed, and because they affect one another, they must be managed in concert. Rapid adoption and wide-scale use of virtual servers can impair IT’s ability to manage, troubleshoot, and optimize the overall infrastructure.

What is needed is a new approach to IT management that copes with the challenges and complexity server virtualization engenders. Toolsets for discovering and managing separate domains exist, but these traditional tools are inadequate in an interconnected and virtualized infrastructure. The lack of infrastructure-wide visibility and control in mixed physical and virtual environments is becoming a serious problem for IT.

A New Set Of Challenges

Without the tools to optimize and manage end-to-end virtualization, IT is unable to do system-wide capacity planning, lacks visibility into server and storage resource allocation and utilization, has difficulty troubleshooting and managing change, and can no longer carry out historical analysis. The impact can be severe.

Challenge #1: System-wide Capacity Planning Becomes Impossible

The data center exists to maximize application performance, and IT tracks its success against service level agreements (SLAs). Meeting service level objectives requires IT to optimize resource capacity and planning, but the growing complexity of mixed physical and virtual environments has made this increasingly difficult. In an age where virtual machines can be created in minutes, manual capacity planning across the infrastructure becomes impossible. The inability to do accurate capacity planning forces IT to over-provision resources. This is particularly true of storage, but it extends to other domains as IT provisions extra performance and bandwidth in an effort to stay ahead of application demand. Without the means to correlate critical applications with the underlying infrastructure, IT is taking shots in the dark.
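As a rough illustration of the arithmetic such planning involves, the following Python sketch compares projected application demand against pooled capacity and flags domains that need attention. The resource names, numbers, and headroom target are assumptions made for illustration, not figures from any particular product or environment.

    # Minimal capacity-headroom sketch (all figures are illustrative assumptions):
    # compare aggregate application demand against pooled capacity per domain.

    projected_demand = {                     # hypothetical per-domain demand
        "compute_ghz": 480,
        "storage_tb": 120,
        "san_bandwidth_gbps": 32,
    }
    pooled_capacity = {                      # hypothetical per-domain capacity
        "compute_ghz": 640,
        "storage_tb": 130,
        "san_bandwidth_gbps": 40,
    }
    HEADROOM_TARGET = 0.25                   # keep 25% free as a planning buffer

    for domain, demand in projected_demand.items():
        capacity = pooled_capacity[domain]
        headroom = (capacity - demand) / capacity
        status = "OK" if headroom >= HEADROOM_TARGET else "PLAN MORE CAPACITY"
        print(f"{domain}: {headroom:.0%} headroom -> {status}")

The hard part, of course, is not the arithmetic but gathering trustworthy demand and capacity numbers across every domain, which is exactly where manual methods break down.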

Challenge #2: Utilization And Relationships Become Murky

Even a purely physical network is a challenging entity, given the hundreds or thousands of computing objects that make up a complex web of relationships. Add virtualization as another layer of abstraction, and complexity grows by leaps and bounds.

Maintaining visibility into server and storage allocation and utilization in this environment is a daunting assignment. Manual correlation and root-cause analysis can become impossible, even as the need for correlation and visibility grows across the infrastructure.

Individual devices come with their own diagnostics, but device-level information cannot solve the problem of correlating computing behavior from a variety of sources.

Challenge #3: Device Failure Leads To SLA Failure

In a virtualized infrastructure, a single failed device kicks off multiple failures and alerts downstream, resulting in a diagnostic and predictive nightmare. For example, in a compact physical network a failed host bus adapter (HBA) sends out an alert, making it simple to find and replace. But in a complex network with virtualized layers, a failed HBA is not the only element alerting IT; so are the dozens (or more) of downstream devices that the failure affects.

With all the alerts coming in at once, manually tracking down the problem ranges from time-consuming to impossible. Worse, the alerts do not reach only a single server or storage administrator. Because devices and applications throughout the infrastructure are affected by the failure, specialized IT administrators and database administrators receive alerts as well. And because the affected applications are failing or slowing down, SLAs go unmet and end-user calls start pouring in.
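The value of correlating that alert storm against a dependency map can be shown with a small sketch. The topology and alert names below are hypothetical; the point is that walking upstream from any alerting element converges on the one device whose failure explains all the rest.

    # Hedged sketch: collapse an alert storm to a likely root cause by walking
    # a dependency graph upstream. Topology and alerts are invented examples.

    # child -> parent ("depends on") relationships along the workflow path
    depends_on = {
        "app_db": "vm_07",
        "vm_07": "host_03",
        "host_03": "hba_12",
        "hba_12": "fc_switch_2",
        "fc_switch_2": "array_lun_9",
    }
    alerting = {"app_db", "vm_07", "host_03", "hba_12"}  # everything downstream is screaming

    def likely_root_cause(element):
        """Follow dependencies upstream; the deepest alerting element is the suspect."""
        suspect = element
        while element in depends_on:
            element = depends_on[element]
            if element in alerting:
                suspect = element
        return suspect

    print(likely_root_cause("app_db"))   # -> hba_12, the failed adapter

Cross-domain toolsets automate this kind of traversal across thousands of elements, so one root-cause ticket replaces dozens of duplicate alerts.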

Challenge #4: Historical Analysis Withers In The Face Of Virtual Mobility

In a virtualized environment, virtual server resource allocation can change at the drop of a hat. However, if historical trending is tied to the physical machine via its World Wide Name (WWN), as most trending tools do, then the trend data is cut loose from the VM when it moves and storage usage analysis becomes impossible. This is a significant loss, leaving IT without a way to track SLA success rates across the virtualized infrastructure or to report on resource usage patterns. The lack of historical analysis makes it increasingly difficult to plan for growth and to protect SLAs.
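One way around this, sketched below with assumed record layouts, is to key historical samples by a stable VM identity (such as its UUID) rather than by the WWN of whichever physical host happens to be running it, so the trend line follows the VM as it migrates.

    # Hedged sketch: key utilization history by VM identity, not physical WWN,
    # so trend data survives VM migration. Field names and values are illustrative.
    from collections import defaultdict

    samples = [  # hypothetical collector output
        {"vm_uuid": "vm-42", "host_wwn": "50:06:0b:00:00:c2:62:00", "gb_used": 310},
        {"vm_uuid": "vm-42", "host_wwn": "50:06:0b:00:00:c2:62:00", "gb_used": 325},
        # the VM migrates to a new host; the WWN changes but the history should not
        {"vm_uuid": "vm-42", "host_wwn": "50:06:0b:00:00:9a:11:04", "gb_used": 340},
    ]

    history = defaultdict(list)
    for s in samples:
        history[s["vm_uuid"]].append(s["gb_used"])   # keyed by VM, not by WWN

    print(history["vm-42"])   # [310, 325, 340] -- one continuous trend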

Cross-Domain Correlation

The answer to these virtualization challenges is cross-domain correlation: the ability to correlate and manage (and in some cases control) relationships and workflow between multiple domains in the data center. Cross-domain correlation represents a new world of insight and control. It works by federating across multiple layers of the technology stack, layers that include not only servers and storage, the traditional virtualization targets, but also applications, networks, and data protection.

Cross-domain correlation offers deeper and broader insight into multiple layers of the technology stack, including applications, servers, switches, and storage. In addition to advanced insight, cross-domain correlation also provides fine-tuned control mechanisms to optimize the virtualized data center end to end.

We define cross-correlation toolsets as sharing five common qualities (a minimal sketch of such a model follows the list):

  1. the ability to correlate between multiple levels of data;
  2. operating at a heterogeneous level;
  3. supporting real-time to near real-time insights;
  4. the ability to do predictive analysis; and
  5. a workflow-centric design as opposed to device-centric.
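The sketch below, using invented record types and thresholds, suggests how a workflow-centric toolset might embody these qualities: heterogeneous elements from several domains are normalized into a single application path, each carrying a recent metric sample that can be correlated or checked against limits.

    # Hedged sketch of a workflow-centric, cross-domain record. Types, fields,
    # and thresholds are assumptions made for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Element:
        name: str          # e.g. "erp_app", "vm_11", "esx_host_2", "array_pool_a"
        domain: str        # application | server | network | storage
        utilization: float # latest sample, 0.0 - 1.0

    @dataclass
    class Workflow:
        application: str
        path: list         # ordered Elements from application down to storage media

        def bottlenecks(self, threshold=0.85):
            """Correlate across domains: which elements threaten the application's SLA?"""
            return [e.name for e in self.path if e.utilization >= threshold]

    wf = Workflow("erp_app", [
        Element("erp_app", "application", 0.60),
        Element("vm_11", "server", 0.72),
        Element("esx_host_2", "server", 0.91),
        Element("fc_switch_1", "network", 0.40),
        Element("array_pool_a", "storage", 0.88),
    ])
    print(wf.bottlenecks())   # ['esx_host_2', 'array_pool_a']

The same structure lends itself to predictive checks: replay the path with projected utilization figures and the bottleneck list becomes a forecast rather than a diagnosis.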

Cross-Correlation Benefits

IT can gain significant benefits from cross-domain correlation in mixed physical and virtualized infrastructures. These benefits include safeguarding SLAs for virtualized applications, balancing resource provisioning, providing clarity to cross-functional IT teams, and improving return on investment (ROI) by treating applications and infrastructure holistically rather than as a collection of discrete parts.

Provide SLA assurances for virtualized applications: Tracking SLAs in virtualized environments is challenging. Cross-domain correlation monitors physical and virtual workflows and alerts IT to poorly aligned, conflicting, or over-capacity resources. This allows IT to proactively adjust the infrastructure to maintain service level objectives, or to model the impact of infrastructure changes on applications.

Eliminate under- and over-provisioning: Without the ability to see into the virtualized infrastructure, provisioning becomes a matter of guessing at end-to-end performance, bandwidth, and capacity requirements. This shoot-from-the-hip, reactive response usually results in too little or too much compute power, storage capacity, or network bandwidth. Cross-domain correlation optimizes resource usage by replacing device-centric provisioning with application-centric provisioning across the entire workflow.

Foster cross-organization coordination in IT: Cross-disciplinary teams of storage, server, network, and database administrators can struggle to manage and troubleshoot the infrastructure. Cross-domain correlation toolsets automate root-cause analysis across the entire infrastructure and its applications, resulting in effective troubleshooting for all stakeholders. This in turn enables IT to meet SLAs with database administrators and end users.

Optimization models and automation: In mixed physical and virtual environments, the ability to model and automatically optimize resources is in its infancy. However, cross-domain correlation is a foundational technology for achieving this level of sophistication.

Improve ROI by eliminating utilization “guesstimates”: Without cross-domain correlation and visibility, IT is forced to optimize individual compute elements in a vacuum. Cross-domain workflow visibility allows IT to address the impact of resource utilization on downstream elements, such as the impact of five new VMs on a virtualized storage pool.
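A back-of-the-envelope version of that last what-if, using assumed per-VM figures, shows the kind of downstream check that cross-domain visibility makes routine:

    # Hedged sketch: project the downstream impact of adding VMs to a shared
    # storage pool. All numbers are invented for illustration.

    new_vms = 5
    iops_per_vm = 400                      # assumed steady-state load per VM
    capacity_gb_per_vm = 150               # assumed provisioned capacity per VM

    pool_iops_headroom = 1500              # what the virtualized pool has left today
    pool_capacity_gb_headroom = 1000

    added_iops = new_vms * iops_per_vm                  # 2000 IOPS
    added_capacity = new_vms * capacity_gb_per_vm       # 750 GB

    print("IOPS fit:", added_iops <= pool_iops_headroom)                 # False -> pool saturates
    print("Capacity fit:", added_capacity <= pool_capacity_gb_headroom)  # True

Without end-to-end visibility, the capacity check is often the only one that gets made, and the performance shortfall shows up only after the VMs are in production.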

Cross-Domain Management Tools

Cross-domain correlation is the new frontier in evolving management tools. It is a vital toolset for managing complex IT infrastructure applications and services. However, specific implementations vary widely among vendors.

Cross-domain management tools are not completely unknown to storage administrators and managers. Vendors such as Bocada, WysDM, Aptare, and Servergraph pioneered a new category of cross-domain tools, sometimes referred to as data-protection management (DPM), focused on delivering predictive analysis for backup operations. These tools have proven highly valuable for solving many of the operational issues around data protection.

In addition, other vendors have embraced the cross-domain concept and applied it to solve different IT management challenges. Among these are EMC Smarts, Managed Objects, HP Mercury, Onaro, and Akorri.

EMC Smarts (specifically, Storage Insight for Availability) uses a cross-correlation engine to automate fault management and root-cause analysis in Fibre Channel SANs. Managed Objects correlates business metrics from a variety of sources and presents them for business-level viewing. HP Mercury’s SiteScope monitors distributed IT infrastructures and networks.

Onaro is primarily focused on managing change in the SAN by connecting storage configuration and change management to applications and their SLAs. However, Onaro is evolving its product to meet the challenges of capacity, performance, and configuration management in a virtualized server environment.

All of these vendors’ products contain cross-correlation capabilities. However, only Akorri is presently focused on solving the visibility, performance optimization, and capacity management issues arising from server virtualization.

Akorri maps the entire topology and dependencies from application to virtual operating environment to the physical server and through the storage network, all the way back to the actual storage media. Using this detailed dependency tree, Akorri’s appliance can present a holistic view of the infrastructure and its relationships. This allows IT to manage application service levels by alleviating the visibility, troubleshooting, and capacity management challenges present in today’s virtualized infrastructure.

IT must think holistically across both server and storage domains and should not optimize one domain to the detriment of the other. As server virtualization picks up steam, cross-domain management tools become the key to optimizing performance and cost efficiency across the entire infrastructure. To balance performance and capacity across the infrastructure, IT needs visibility into and control over multiple domains. Only cross-correlation for mixed environments grants that level of control. Cross-domain correlation will increasingly become a necessity for understanding and managing virtual environments.
