Deploying virtual servers in a SAN environment

By Mark Jones

September 27, 2008 -- IT organizations are increasingly turning to server virtualization as a means to transform data centers into "service-centric" shared pools of resources that can be dynamically aggregated, tiered, provisioned, and accessed through an intelligent network. Virtualizing standardized server resources dramatically increases utilization and reduces total cost of ownership (TCO), while allowing IT organizations to rapidly deploy and scale resources on demand to match business and application requirements.
A shared storage infrastructure, most commonly a SAN, is required to implement the most compelling features of popular virtualized server environments such as VMware VI3, including VMotion, Distributed Resource Scheduler (DRS), High Availability (HA), Consolidated Backup, and ESX Server remote boot. Consolidating virtual machine storage on one or more networked storage arrays also creates opportunities for cost-effective, simplified disaster recovery and business continuity.

Virtual server connectivity
The portability and recovery capabilities of server virtualization implementations rely on external shared storage and are most effective in a SAN connectivity environment. Virtual servers typically reside in the main data center and draw on enterprise storage resources where the Fibre Channel protocol dominates. The high performance delivered by Fibre Channel serves the higher I/O requirements for multiple virtual machines running on a single server. SAN connectivity helps enable server virtualization, while server virtualization drives an increased need for SAN connectivity.


This article originally appeared on Virtual Strategy Magazine's site: www.virtual-strategy.com, an online publication dedicated to covering virtualization trends, technologies, and products. InfoStor has a content exchange agreement with VSM.
A major challenge for virtual server storage administrators has been the use of the physical Worldwide Port Name (WWPN) of the Fibre Channel host bus adapter (HBA) to define fabric zones, mask storage LUNs, and configure virtual machines. In addition, virtual server administrators have typically defined a single zone in which all disks are exposed to every virtualized server, to support virtual machine migration to new servers. Such a design raises concerns about safely isolating Raw Device Mapping (RDM) disks and requires network reconfiguration if more than one zone is defined. The creation of virtual HBA ports (VPorts) using N_Port ID Virtualization (NPIV) allows virtual server administrators to bind virtual machines to storage and define multiple zones using VPort parameters, creating a virtualized server environment that is easier to manage and better protected.
NPIV overview
NPIV is an industry standard that extends virtualization to the HBA by providing a way to assign multiple WWPNs on the same physical link. NPIV technology virtualizes the physical HBA port configured in a point-to-point SAN topology. Virtual HBA technology allows a single physical Fibre Channel HBA port to function as multiple logical ports, each with its own identity.
Each virtual machine can attach to its own VPort, which consists of a distinct Worldwide Node Name (WWNN) combined with up to four WWPNs, as shown in the figure. Storage administrators who deploy virtual machines using popular server virtualization environments such as VMware ESX Server 3.5 with RDM can create virtual machines that are easier to manage and maintain. Virtualized servers use NPIV to generate a unique VPort, to which each virtual machine can be persistently bound and which the HBA transparently registers with the Fibre Channel SAN fabric.
[Figure: Virtual port attachment using NPIV]

Virtual machine-specific boot occurs seamlessly using the defined VPort. During virtual machine migration to a new physical server, storage administrators no longer have to reconfigure network settings (e.g., zoning, masking, binding), since these are maintained in the logical port configuration.
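To make the VPort concept concrete, here is a minimal Python sketch of the structure described above: one VPort per virtual machine, consisting of a WWNN plus up to four WWPNs, persistently bound to the VM. This is an illustrative model, not vendor code; the identifier format only mimics the shape of real WWPNs, which follow IEEE NAA formats and embed a vendor OUI.

```python
import random

def make_wwn(rng: random.Random) -> str:
    # Illustrative 64-bit identifier as eight colon-separated hex octets.
    # Real WWNs follow IEEE NAA formats; this sketch only mimics the shape.
    return ":".join(f"{rng.randrange(256):02x}" for _ in range(8))

class VPort:
    """A virtual N_Port: one WWNN plus up to four WWPNs (the per-VM
    limit the article cites for VMware ESX Server 3.5)."""
    MAX_WWPNS = 4

    def __init__(self, rng: random.Random):
        self.wwnn = make_wwn(rng)
        self.wwpns = [make_wwn(rng) for _ in range(self.MAX_WWPNS)]

class VirtualMachine:
    """The VPort is created once and persistently bound to the VM,
    so it travels with the VM across physical servers."""
    def __init__(self, name: str, rng: random.Random):
        self.name = name
        self.vport = VPort(rng)

rng = random.Random(2008)
vm = VirtualMachine("vm-oracle-01", rng)   # hypothetical VM name
print(vm.name, vm.vport.wwnn, vm.vport.wwpns)
```

Because the VPort object belongs to the virtual machine rather than to any physical HBA, the same identifiers are registered with the fabric regardless of which host the VM runs on.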

NPIV use cases
IT managers are deploying NPIV in virtualized server environments to enhance storage management capabilities. NPIV is most valuable in managing storage access for mission-critical or SLA-driven virtualized environments, as opposed to the consolidation of less critical file and print servers or test and development environments. Below are some specific use cases that an NPIV-enabled virtualized server deployment makes possible.

--I/O throughput, storage traffic, and utilization can be tracked down to the virtual machine level via the WWPN, allowing for application- or user-level chargeback. Because each NPIV entity is seen uniquely on the SAN, the individual SAN usage of a virtual machine can be tracked. Prior to NPIV, the SAN and virtualized server could see only the aggregate usage of the physical Fibre Channel port by all of the virtual machines running on that server (aside from some vendor-specific LUN-based tools);

--Virtual machines can be associated with devices mapped under RDM, allowing LUN tracking and customization based on application needs. Because each NPIV entity is seen uniquely on the SAN, switch- and array-based tools that track WWPNs can report diagnostic and performance data on a per-virtual-machine basis;

--Bi-directional association of storage with virtual machines, significantly enhanced with NPIV support, gives SAN administrators the ability to trace from a virtual machine to an RDM and back from an RDM to a virtual machine;

--Storage provisioning for virtual machines can use the same methods, tools, and expertise in place for physical servers. As the virtual machine is once again uniquely related to a WWPN, traditional methods of zoning and LUN masking can continue to be used, enabling unified administration of virtualized and non-virtualized servers. Fabric zones can restrict target visibility to selected applications hosted by virtual machines. Configurations that required unique physical adapters based on an application can now be remapped onto unique NPIV instances on the virtualized server;

--Storage administrators can configure Inter-VSAN Routing (IVR) in virtualized server environments down to the individual virtual machine, enabling users to reconfigure their fabrics: aggregating islands of storage, fragmenting massive SANs into smaller, more manageable ones, and assigning resources on a logical basis;

--Virtual machine migration preserves the VPort ID when the virtual machine is moved to a new virtualized server, improving the tracking of RDMs to virtual machines. Access to storage can be restricted to the group of virtualized servers (a cluster) on which the virtual machine can run or to which it can migrate. If the virtual machine is moved to a new virtualized server, no changes in SAN configuration are required to account for the use of different physical Fibre Channel ports; and

--HBA upgrades, expansion, and replacement are now seamless. As the physical HBA WWPNs are no longer the entities upon which the SAN zoning and LUN masking are based, the physical adapters can be replaced or upgraded without changing the SAN configuration.
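The first two use cases above can be illustrated with a toy Python model: a fabric zone keyed to a VM's VPort WWPN rather than the physical HBA port, plus per-WWPN byte counters for chargeback. All WWPN values are hypothetical, and the zoning check is a deliberately simplified soft-zoning model.

```python
from collections import defaultdict

# Hypothetical WWPNs for illustration only.
VM_WWPN = "20:01:aa:bb:cc:dd:ee:01"      # the VM's VPort
ARRAY_WWPN = "50:06:01:60:12:34:56:78"   # storage array target port
PHYS_WWPN = "10:00:00:00:c9:11:22:33"    # physical HBA port

# The zone names the VPort WWPN, not the physical HBA port.
zones = {"zone_vm01": {VM_WWPN, ARRAY_WWPN}}

def can_access(initiator: str, target: str, zones: dict) -> bool:
    """Simplified soft-zoning check: initiator and target see each
    other only if they share membership in some zone."""
    return any(initiator in m and target in m for m in zones.values())

# Per-WWPN traffic counters: because each VM presents its own WWPN,
# I/O can be attributed (and charged back) per virtual machine.
io_bytes = defaultdict(int)

def record_io(wwpn: str, nbytes: int) -> None:
    io_bytes[wwpn] += nbytes

record_io(VM_WWPN, 4096)
record_io(VM_WWPN, 8192)
print(can_access(VM_WWPN, ARRAY_WWPN, zones))    # the VPort is zoned in
print(can_access(PHYS_WWPN, ARRAY_WWPN, zones))  # the physical port is not
print(io_bytes[VM_WWPN])                         # total bytes for this VM
```

Without NPIV, only `PHYS_WWPN` would be visible to the fabric, so the counter could record nothing finer than the aggregate traffic of every VM sharing that port.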
Benefits of NPIV
Virtual HBA technology enables NPIV support in virtualized server environments. Data centers choosing to deploy virtual server environments with NPIV can achieve:
--Lower TCO: Server consolidation through server virtualization lowers TCO by improving asset utilization and simplifying management. When used with Fibre Channel and NPIV-enabled HBAs, a single intelligent HBA port can relay the traffic for multiple virtual machines, offloading network processing, thereby allowing more cost-effective servers to be deployed;
--Guaranteed Quality of Service (QoS): When used in conjunction with fabric QoS, each virtual machine can be allocated its own logical HBA port, which creates multiple I/O paths for traffic prioritization;
--Higher availability: Multiple logical ports create redundant paths to virtual machines and their data. They also facilitate the use of standard storage and fabric diagnostic tools for isolating and resolving issues;
--Role-based management and security: Each virtual machine and its associated storage are completely isolated from other virtual machines, under control of the administrator in charge of protecting corporate data; and
--Simplified management: Eliminates the need to reconfigure fabric zoning and LUN masking parameters during a VMotion migration.
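The "simplified management" benefit can be sketched as a before-and-after comparison (all WWPN values are made up): when zones reference the physical HBA WWPN, a migration or HBA swap forces zone edits; when they reference the VM's VPort WWPN, which travels with the VM, no edits are needed.

```python
def zones_referencing(wwpn: str, zones: dict) -> int:
    # Number of zone definitions that would need editing if this WWPN
    # disappeared (HBA replaced, or VM migrated to a host whose HBA
    # has a different physical WWPN).
    return sum(wwpn in members for members in zones.values())

PHYS_WWPN = "10:00:00:00:c9:11:22:33"    # physical HBA port (hypothetical)
VPORT_WWPN = "20:01:aa:bb:cc:dd:ee:01"   # VM's VPort WWPN (hypothetical)
TARGET = "50:06:01:60:12:34:56:78"       # array target port (hypothetical)

# Pre-NPIV: the zone names the physical port, so it breaks on migration.
zones_pre = {"z_vm01": {PHYS_WWPN, TARGET}}
# With NPIV: the zone names the VPort, which moves with the VM.
zones_npiv = {"z_vm01": {VPORT_WWPN, TARGET}}

print(zones_referencing(PHYS_WWPN, zones_pre))   # 1 zone to edit
print(zones_referencing(PHYS_WWPN, zones_npiv))  # 0 zones to edit
```

The same property covers the HBA upgrade use case above: replacing the adapter changes only `PHYS_WWPN`, which no zone references.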
Mark Jones is director of technical marketing at Emulex.

This article was originally published on September 26, 2008