VMware vSphere 5.1 - Implementer's Lab


Deployment Guide

VMware vSphere 5.1: 16Gb Fibre Channel SANs with HP ProLiant DL380 Gen8 servers and HP 3PAR Storage

Create robust, highly available vSphere 5.1 environments with best-of-breed Fibre Channel HP 3PAR Storage and HP ProLiant DL servers


Table of contents

Emulex Solution Implementer’s Series

Executive summary

Introduction
  About this guide

Solution components
  vSphere 5.1
  HP ProLiant Gen8 servers
  Deploying the solution components

Pre-installation
  Updating firmware

Configuring network connectivity
  Planning the network environment for the host

Configuring storage
  Using the VMFS-5 filesystem for additional functionality
  Configuring Fibre Channel connectivity

Deploying vSphere 5.1

Post-installation
  Configuring the HBA
  Using NPIV to identify HBA ports
  Configuring the storage array
  Provisioning virtual LUNs

Performance comparison
  Test method

Results

Summary

Appendix A – Configuring BfS

For more information


Emulex Solution Implementer’s Series

This document is part of the Emulex Solution Implementer’s Series, providing Implementers (IT

administrators and system architects) with solution and deployment information on popular

server and software platforms. As a leader in I/O adapters – Fibre Channel (FC), Ethernet,

iSCSI and Fibre Channel over Ethernet (FCoE) – the Emulex technology team is taking the lead in providing guidelines for implementing I/O for these solutions.

Executive summary

With vSphere 5.1, VMware continues to raise the bar for hypervisor products, introducing many

new features along with support for more and larger virtual machines (VMs) that can now utilize

up to 64 virtual CPUs (vCPUs).

vSphere 5.1, like 5.0, does not include a service console operating system (OS); VMware

agents and Common Information Model (CIM) providers run directly on the hypervisor layer

(VMkernel). There are three options for communicating with VMkernel: VMware vSphere’s

enhanced command-line interface (vCLI), vSphere PowerCLI or the vSphere Management

Assistant (vMA) virtual appliance.
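Any of these interfaces can issue esxcli commands against the host. As a minimal sketch, the following vCLI invocation from a management station or the vMA appliance queries the hypervisor build (the hostname and account are placeholders; the command prompts for a password):

    # Query the ESXi version and build remotely via vCLI
    esxcli --server esxi01.example.com --username root system version get

Run locally in the ESXi shell, the same command omits the --server and --username options.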

vSphere 5.1 provides many new features and enhancements in areas such as storage and networking. Indeed, VMware highlights several new capabilities,1 claiming, for example, that vSphere 5.1 can run VMs that are twice as powerful as those supported by earlier versions, and adding support for new VM hardware formats such as VM Virtual Hardware Version 9. In storage, there is added support for 16Gb Fibre Channel (16GFC); however, as shown in Table 1, these new, larger VMs will place heavy demands on data center infrastructure.

1 http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere51.pdf


Table 1. Resources supported by various VMware hypervisors

Component | ESX 1 | ESX 2 | VMware Infrastructure 3 | VMware vSphere 4 | VMware vSphere 5 | VMware vSphere 5.1 | Scale factor
vCPUs | 1 | 2 | 4 | 8 | 32 | 64 | x 64
Memory (GB per VM) | 2 | 3.6 | 64 | 256 | 1,000 | 1,000 | x 500
Network (Gb) | < 0.5 | 0.9 | 9 | 30 | > 36 | > 36 | x 72
Fibre Channel SAN (MB/s per HBA port) | 1GFC: 100 | 2GFC: 200 | 4GFC: 400 | 8GFC: 800 | 8GFC: 800 | 16GFC: 1,600 | x 16

To help you transition to an infrastructure that is capable of supporting the storage needed by

new-generation VMs, Emulex has validated the functionality and performance of vSphere 5.1 in

conjunction with 16GFC connectivity. The proof-of-concept (POC) environment included the

following components:

- Best-of-breed HP ProLiant DL380 Gen8 server

- Dual-port 16GFC adapter produced for HP by Emulex

- HP 3PAR P10000 V400 Storage

- SANBlaze 16GFC storage emulator (16GFC connectivity end-to-end)

In addition, since 16GFC Storage Area Network (SAN) storage is better suited to the new release of vSphere, this implementer’s guide describes the process for deploying vSphere 5.1 with 16GFC connectivity and presents the results of performance tests carried out in such an environment.

The performance tests were executed by the Emulex Technical Marketing team in its labs to validate and test 16GFC connectivity end-to-end.

The performance testing demonstrated that, with a 16GFC Emulex Fibre Channel Host Bus Adapter (HBA), I/O performance was significantly higher (as much as 99%) than with 8GFC technology, without requiring additional CPU cycles. Thus, as VM density increases, migrating from 8GFC to 16GFC lets the host drive roughly twice the I/O bandwidth to the storage array.

Intended audience: This document is intended for engineers, system administrators and

VMware administrators interested in deploying vSphere 5.1 on an HP ProLiant Gen8 server


featuring an HP Ethernet 10Gb 2-port 554FLR-SFP+ Adapter (684213-B21) and HP SN1000E

16Gb 2-port PCIe Fibre Channel Host Bus Adapter (QR559A).

Testing was performed in August and September 2012.

Introduction

vSphere 5.1 supports an unprecedented number of VMs on a single host; moreover, these VMs

can reach unprecedented size depending on the application and workload for each VM. Thus,

Emulex expects to see more and more workloads being virtualized, as well as additional

resources being assigned to existing VMs in order to meet the needs of particular workloads. As

noted by VMware,2 “the VM will only get bigger and faster.”

VMware sees more VMs being deployed than ever before,3 with vSphere 5.1 allowing these

VMs to grow as much as two times larger. With this increased density, virtualized environments

must be able to provide additional network and storage resources in order to support the

increased workload.

About this guide

This implementer’s guide describes how to configure a 16GFC SAN with a DL380 Gen8 server

in a vSphere environment. Guidelines and instructions are provided for configuring servers,

adapters and storage using technical documentation provided by VMware and HP.

Information is provided on the following topics:

- Solution components

- Networking configuration

- Storage configuration, including boot from SAN (BfS)

- Performance testing to compare 8GFC and 16GFC

2 http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere51.pdf

3 Based on interim results of VMware customer surveys performed in January 2010 and April 2011


Solution components

Emulex built the POC environment shown in Figure 1 in order to validate 16GFC connectivity

with vSphere 5.1.

Figure 1. 16GFC POC


Table 2 outlines the key components deployed in this environment.

Table 2. Test environment

Component | Device | Comments
Tested server | HP ProLiant DL380 Gen8 | Virtualization host running ESXi 5.1; 10 identically-configured VMs
Management server | vCenter Server 5.1 | VM running Windows Server 2008 R2 on the ESXi 5.1 host
AD & DNS server | Generic Windows Server 2008 system | Support for Microsoft Active Directory (AD) and Domain Name System (DNS)
Storage | HP 3PAR P10000 V400 Storage | Fibre Channel connectivity
16GFC HBA | HP SN1000E 16Gb 2-port PCIe Fibre Channel Host Bus Adapter (HBA) | –
16GFC fabric switch | 2 x Brocade 6510 16GFC SAN switch | –

If this is your first time installing VMware products on a ProLiant server, it is important for you to

have a basic understanding of each of the solution components so that the terminology used in

this guide is clear.

vSphere 5.1

VMware’s latest hypervisor, vSphere 5.1, extends the core capabilities of vSphere 4.1 and 5.0

to provide the foundation for a cloud infrastructure, whether public or private. Areas where you

can expect to see improvements after deploying vSphere 5.1 include server consolidation,

performance, management, provisioning and troubleshooting.

For more information on vSphere 5.1, refer to “What’s New in VMware vSphere 5.1,” available

at http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-vsphere51.pdf.

HP ProLiant Gen8 servers

The latest DL380 Gen8 servers are based on Intel Xeon E5 processors (the Romley platform). These systems continue

to be the servers of choice for many HP shops in the VMware space and are widely used – from

small and medium-sized businesses (SMBs) to large data centers – for their high availability,

scalability, CPU horsepower and memory capacity. In addition, these 2U rack-mount servers

save space and power, making them ideal for large data centers moving to a cloud

infrastructure.


Using Fibre Channel for shared storage

When deploying vSphere on ProLiant servers, you should always consider using a SAN so that

you can take full advantage of the many hypervisor features and enhancements that require

shared storage.

While VMware supports most of the more common protocols, Fibre Channel is the predominant

choice for shared SAN storage. Thus, while storage protocols continue to evolve, introducing

options such as iSCSI and NFS storage arrays, this guide focuses on Fibre Channel

connectivity.

Server sizing

HP has developed an automated tool – HP Sizing Tool for VMware vSphere – that can help you

size and scope a server for a particular vSphere deployment. Based on your responses to a

questionnaire, the tool provides a quick, consistent method for identifying the best server for

your environment. It also creates a bill of materials.

For more information on this sizer and other HP solutions for VMware, visit

www.hp.com/go/vmware.

HP 3PAR Storage

Highly virtualized from the ground up, HP 3PAR Storage can enhance the benefits of a vSphere

environment by taking care of the demands that server virtualization places on the storage

infrastructure.

HP 3PAR Storage combines highly virtualized, autonomically managed, dynamically tiered

storage arrays with advanced internal virtualization capabilities to increase administrative

efficiency, system utilization and storage performance. As a result, HP 3PAR Storage can boost

the return on a vSphere investment by allowing you to optimize your data center infrastructure,

simplify storage administration and maximize virtualization savings.

A key HP 3PAR P10000 V400 feature that has a significant impact on performance when migrating from 8GFC to 16GFC is the wide-striping architecture. As noted in HP’s documentation, wide striping distributes each vSphere storage volume across all array resources. When you increase bandwidth to the array, the LUN does not become disk-bound as it can with traditional arrays that narrow-stripe data.

Deploying the solution components

Having introduced the key components of the POC, this implementer’s guide now describes

how to configure 16GFC connectivity. Guidelines are provided for the following areas:

- Pre-installation

- Configuring network connectivity


- Configuring storage

- Deploying vSphere 5.1

- Post-installation

Pre-installation

There are several steps to consider before applying power or installing an operating system on

a system. First, you need to ensure that sufficient rack space and appropriate power and

cooling are available; you should also verify that firmware is at the latest levels and download

any necessary patches and drivers using the HP and VMware links provided in this section.

The following pre-installation process offers guidelines for pre-configuring a network to support

an ESXi host, as well as suggestions for configuring storage systems and storage area

networking.

As a best practice, Emulex recommends verifying with HP technical support that you are

running the very latest HP firmware and drivers on components, such as HP FlexLOM4 and PCI

adapters.

Updating firmware

Before deploying vSphere 5.1 on a ProLiant server, you should determine the latest versions of

the following firmware and, if necessary, update:

- ESXi host:

  o System BIOS

  o Flexible LOMs and PCI adapters

- Storage array and controllers

- SAN switches
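As a quick sanity check before and after updating, you can inventory the hypervisor build and the installed driver packages from the host itself; adapter and system firmware revisions are reported by HP tools such as OneCommand Manager or the iLO interface rather than by esxcli. A minimal sketch:

    # ESXi version and build number
    esxcli system version get

    # All installed VIBs (drivers, CIM providers, tools) with their versions
    esxcli software vib list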

You can review the latest firmware levels recommended by HP and VMware at the following

sites:

HP: Visit www.hp.com/go/vmware and refer to the Certified ProLiants and Certified HP

Storage links under Tools/Resources.

VMware: Refer to the VMware Compatibility Guides at

http://www.vmware.com/resources/guides.html.

Note

Always contact HP support to identify the latest firmware updates and drivers.

4 Where LOM refers to LAN on motherboard


As always, plan your deployment or upgrade before installing software; read all the

documentation provided by VMware and HP before starting. Planning will speed up the process

– particularly if you intend to deploy multiple servers.

With pre-installation activities complete, you can now start configuring your network.

Configuring network connectivity

Before installing vSphere 5.1, you need to understand the network requirements for the

particular ESXi host and the connectivity supported by the physical server. For example, while

many physical servers feature LOM or integrated network interfaces, ports are typically 1Gb

Ethernet (1GbE), though newer models such as the DL380 Gen8 server offer 10GbE ports.

In order to best use 10GbE, you should understand the requirements of the traffic being carried

on the network, as outlined in Table 3.

Table 3. Typical network requirements

Traffic type | Bandwidth usage | Other requirements
Management | Low | Highly reliable, secure channel
VMware vMotion | High | Isolated
VMware Fault Tolerance (FT) | Medium to high | Highly reliable, low-latency channel
iSCSI | High | Reliable, high-speed channel
VM | Application-dependent | Application-dependent

In general, combining management port traffic, which is relatively light, with VMware vMotion

traffic is acceptable in many deployments that utilize four network interface cards (NICs). Since

vMotion traffic is heavier, it is not a good practice to combine this with VM traffic; thus, you

should consider separating such traffic on different subnets.

For simplicity, Emulex placed management and vMotion traffic on the same virtual switch in the

POC. In such an implementation, it is a good practice to use virtual LANs (VLANs) to enhance

security and isolate traffic.

Following VMware’s best practices for performance5 provides a good starting point. Remember

that, as time passes, you should regularly revisit your environment to ensure the network

configuration is still effective.

5 http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf


Planning the network environment for the host

You should plan the network environment for the host in conjunction with your network

administrator.

The DL380 Gen8 server utilized in the POC was equipped with an HP Ethernet 10Gb 2-port

554FLR-SFP+ Adapter, which was used to configure the single network interface card (NIC)

that was needed.

In vSphere deployments featuring 1GbE network adapters, it is not uncommon to use six or even eight NICs in order to meet VMware’s networking requirements for performance,

isolation and redundancy. However, for this 10GbE environment, Emulex was able to utilize just

two 10GbE ports; VLANs were used in conjunction with VMware’s networking I/O control to

manage bandwidth.

Table 4 shows VMware’s best practices for two 10GbE ports.

Table 4. VMware’s best practices, which were used in the POC

Traffic type | Port group | Teaming option | Active uplink | Standby uplink | NIOC shares
VM | PG-A | LBT | dvuplink 1, 2 | None | 20
iSCSI | PG-B | LBT | dvuplink 1, 2 | None | 20
FT | PG-C | LBT | dvuplink 1, 2 | None | 10
Management | PG-D | LBT | dvuplink 1, 2 | None | 5
vMotion | PG-E | LBT | dvuplink 1, 2 | None | 20
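The distributed switch, LBT teaming and NIOC shares shown above are configured through vCenter Server rather than on the host. As a host-local sketch of the same isolation idea on a standard vSwitch, the following commands create a VLAN-tagged port group; the port group name, vSwitch name and VLAN ID are placeholders:

    # Create a port group for vMotion traffic on an existing standard vSwitch
    esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0

    # Tag the port group with VLAN 20 to isolate vMotion traffic
    esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20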

After setting up network connectivity, you can now configure storage.


Configuring storage

This section provides information on the following topics:

- Using the new VMFS-5 filesystem6 to support additional storage functionality

- Configuring Fibre Channel connectivity

- Implementing fabric zoning

- Configuring the storage array

- Configuring BfS

Using the VMFS-5 filesystem for additional functionality

Introduced with ESXi 5.0, the VMFS-5 filesystem provides support for VMFS volumes up to

64TB in a single extent. With VMFS-3, 32 extents would have been required to achieve 64TB.

Note

The volume creation wizard for VMFS-5 uses GUID Partition Table (GPT) format rather than Master Boot Record (MBR), which allows you to create VMFS volumes that are larger than 2TB. GUID refers to a globally-unique identifier.

With the ability to create large VMFS volumes, you must now manage storage array queue

depth as well as LUN queue depth. For example, queue depth for an HP SN1000E 16Gb 2-port

PCIe Fibre Channel HBA is set by default to 30 and may be adjusted via Emulex OneCommand

Manager, the OneCommand Manager plug-in for vCenter Server, or vMA.
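If you prefer the command line, the LUN queue depth can also be inspected and changed through the Emulex driver’s module parameters. A minimal sketch, assuming the driver module is named lpfc820 (a common name on ESXi 5.x; confirm it with the first command) and that the change is followed by a host reboot:

    # Identify the loaded Emulex FC driver module
    esxcli system module list | grep lpfc

    # Set the per-LUN queue depth, then verify the parameter
    esxcli system module parameters set -m lpfc820 -p "lpfc_lun_queue_depth=30"
    esxcli system module parameters list -m lpfc820 | grep lpfc_lun_queue_depth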

Other benefits delivered by the VMFS-5 filesystem include:

- Support for up to 130,000 files rather than 30,000 as before

- Support for 64TB physical-mode RDM LUNs

- Virtual mode allows you to create snapshots, which is beneficial when a file exceeds 2TB

- For space efficiency, there can now be up to 30,000 8KB sub-blocks

- Small-file support for files of 1KB or less; in the past, such files would have occupied entire sub-blocks

As you plan the deployment of shared storage, take into consideration the benefits of VMFS-5.

For example, if you are migrating from a hypervisor that is pre-vSphere 5.1, you may also wish

to migrate to VMFS-5 to take advantage of the new features and enhancements.

6 VMFS is a VMware Virtual Machine File System format.
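If you later create a VMFS-5 datastore from the command line rather than through the vSphere Client wizard, vmkfstools can format a prepared GPT partition. A minimal sketch; the datastore label and NAA device identifier are placeholders:

    # Format partition 1 of the device with VMFS-5 and label the datastore
    vmkfstools -C vmfs5 -S Datastore_3PAR /vmfs/devices/disks/naa.60002ac0000000000000000000000001:1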


Configuring Fibre Channel connectivity

The Emulex Fibre Channel driver is in-box with vSphere, making it easy to transition to 16GFC connectivity from an earlier platform – there is no need to install a driver during the deployment of vSphere 5.1. As a best practice, however, you should update the Fibre Channel driver after installation, since the in-box driver may not be the latest. The configuration of 16GFC connectivity via HP SN1000E 16Gb 2-port PCIe Fibre Channel HBAs is thus a simple process in a vSphere 5.1 environment, with just the following stages:

- Implement fabric zoning

- Configure the storage array

- Configure BfS

Before you begin the configuration, it is a best practice to review the appropriate VMware

Compatibility Guide to ensure firmware for the storage array is at the latest levels and the array

has been certified, as shown in Figure 2.

Figure 2. Showing firmware levels specified in the VMware Compatibility Guide for HP 3PAR P10000 Storage


IMPORTANT

Whenever you need to update code or firmware, Emulex recommends backing up your data and configuration.

Implementing fabric zoning

Zoning has become a standard component of VMware deployments; indeed, most if not all

storage array vendors recommend zoning LUNs that are presented to ESXi hosts. Fabric zones

can enhance manageability while providing support for advanced features such as vMotion and

Fault Tolerance that require multiple hosts to access the same LUN.

Zones can also enhance security. For example, consider what might happen if you were to

connect a new Microsoft Windows server to the same SAN switch as an existing ESXi host.

Without zoning or some other security measure, the new server would be able to access the

same storage as the existing host and could potentially overwrite the filesystem, obliterating VM

data and files. Thus, since the POC features two 16Gb HBA ports, there should ideally be two

or more fabric switches, each configured with a zone that includes one of the ports.


Figure 3 shows the zoning used in the POC.

Figure 3. Zoning in the POC, with four paths to the LUN


The POC utilizes two Brocade 16Gb SAN switches and a total of four zones, as shown in Table

5. The zones are added to an alias zone configuration, which is then activated.

Table 5. Zone configuration for the POC

HBA | Storage controller | Zone | Alias | Zone configuration | Switch
Port 0 | Port A1 | Zone 1 | ZoneSet 1 | ZoneConfig_1 | 1
Port 0 | Port B2 | Zone 2 | ZoneSet 1 | ZoneConfig_1 | 2
Port 1 | Port A2 | Zone 3 | ZoneSet 2 | ZoneConfig_1 | 2
Port 1 | Port B1 | Zone 4 | ZoneSet 2 | ZoneConfig_1 | 1

This zone configuration gives the ESXi host four paths to a single LUN. At this stage of the

deployment, no LUNs have been created; thus, LUNs cannot yet be bound to WWN ports on

the HP SN1000E2P 16Gb HBA.
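As an illustration, the following Brocade Fabric OS commands would define and activate one of these zones on switch 1. This is a sketch only: the alias names and WWPNs are placeholders, and on a fabric with an existing active configuration you would add the zone with cfgadd rather than create a new configuration:

    alicreate "ESX_HBA_P0", "10:00:00:90:fa:00:00:01"   # HBA port 0 WWPN (placeholder)
    alicreate "3PAR_A1", "20:01:00:02:ac:00:00:01"      # 3PAR controller port A1 WWPN (placeholder)
    zonecreate "Zone_1", "ESX_HBA_P0; 3PAR_A1"
    cfgcreate "ZoneConfig_1", "Zone_1"
    cfgenable "ZoneConfig_1"
    cfgsave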

Configuring the storage array

HP 3PAR Storage is very popular in VMware environments due to its extensive virtualization

capabilities and ease of management. HP has documented deployment and tuning best

practices for this simple yet robust and highly available array; for more information, refer to

“3PAR Utility Storage with VMware vSphere.”

Emulex followed HP’s best practices when configuring HP 3PAR Storage for the POC. The

process included the following stages:

- Update storage array controller firmware and management software as needed

- Configure virtual domains

- Configure virtual LUNs

- Assign host mode

- Create hosts

- Present LUNs to the host

If the correct zoning and host mode have been applied, LUNs will be visible to the assigned

hosts. There should be four paths to each LUN for optimal performance and redundancy.
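From the ESXi host, you can confirm that all four paths are visible once zoning and LUN presentation are complete. A minimal sketch; the NAA device identifier is a placeholder:

    # Rescan all adapters, then list the paths to a specific LUN
    esxcli storage core adapter rescan --all
    esxcli storage core path list -d naa.60002ac0000000000000000000000001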


Configuring Boot from SAN (BfS)

Enterprise server manufacturers such as HP continue to offer local disk storage; however, with

the growth of virtualization and the increased use of BfS,7 server configurations are evolving.

For example, HP offers a diskless server that would allow you to deploy vSphere via a USB

flash drive or SD card.

BfS capability is often configured in a vSphere environment, where its benefits include:

- Enhanced manageability

- Faster deployment

- Easier backup8 of the hypervisor

- Enhanced disaster recovery capabilities

The process for configuring BfS via an HP SN1000E2P 16Gb HBA is simple and can be

achieved in the following stages:

- Load the latest boot code to the HBA

- Provision the boot LUNs

- Configure the ESXi host

- Specify the desired boot volume

- Place the HBA first in the boot order

This vendor-specific process is described in more detail in Appendix A – Configuring BfS.

Note

If you plan to install or upgrade ESXi 5.1 with local storage, Emulex recommends disconnecting the Fibre Channel cables from the SAN to prevent the OS from being accidentally installed on the SAN.

Once storage has been configured – and you have verified that the hardware has been certified

by VMware9 – you can deploy ESXi 5.1.

7 ESXi is installed directly on a LUN instead of local storage

8 Since the array owns the LUN, array-based copies can be made without server intervention.

9 Refer to the VMware Compatibility Guides at http://www.vmware.com/resources/guides.html.


Deploying vSphere 5.1

The main points to remember before beginning a vSphere deployment are as follows:

- Check all firmware on host and adapters and make updates as needed

- Check HBA drivers (vSphere 5.1 has in-box drivers for Emulex 16GFC)

- Plan the networking configuration

- Plan the Fibre Channel storage configuration and LUNs

- Decide on local or BfS storage

- Select the most suitable deployment option for your environment

Since vSphere 5.1 has been designed for flexibility, you have a range of options for deploying

this hypervisor on a ProLiant server. These options include the following:

- Interactive installation: Suggested for fewer than five hosts

- Scripted installation: Unattended deployment for multiple hosts (see the kickstart sketch after this list)

- vSphere Auto Deploy installation: Suggested for a large number of ESXi hosts; uses VMware vCenter Server

- Custom installation: vSphere 5 Image Builder command-line interfaces (CLIs) provide custom updates, patches and drivers
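A scripted installation is driven by a kickstart file. The following is a minimal sketch of an ESXi 5.x ks.cfg, assuming DHCP networking and installation on the first detected disk; the root password is a placeholder:

    vmaccepteula
    # Install to the first detected disk, overwriting any existing VMFS volume
    install --firstdisk --overwritevmfs
    rootpw ChangeMe123!
    network --bootproto=dhcp --device=vmnic0
    reboot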

For the POC, Emulex elected to use the interactive method, downloading a vSphere 5.1 image

from the VMware website to local storage. The deployment process is fairly straightforward and,

in many ways, identical to the deployment of earlier versions of ESXi. Since this process is

detailed in VMware’s technical documentation, it is not described in this guide.

It should not take more than a few minutes to install vSphere 5.1.

Following the installation of vSphere 5.1 you can configure the management network and, if

appropriate, enable lockdown mode via vCenter Server or the ESXi direct console user interface

(DCUI). You can then proceed with the post-installation process, which includes configuring the

vSphere 5.1 host for 16GFC SAN connectivity.


Post-installation

After vSphere 5.1 has been deployed on the host, you should review the installation and

perform any updates that are necessary – for example, a vendor may have recently released

new firmware or drivers.

Next, you should configure NIC teaming, which is not configured automatically unless you are

using a scripted installation.

If you are using local storage, remember to reconnect the Fibre Channel cables to the SAN and then verify that the host can log in to the fabric and view any LUNs that have been assigned.

You can now configure the vSphere 5.1 host and storage array for 16GFC SAN connectivity,

which may involve the following activities:

- Planning the network environment for the host

- Configuring the HBA

- Configuring the storage array with features such as multipathing

Configuring the HBA

If you are migrating to vSphere 5.1 or are installing vSphere 5.1 for the first time and have

installed the HP SN1000E 16Gb 2-port PCIe HBA in a full-length PCIe 3.0 slot, with the correct

small form-factor pluggable (SFP) transceivers, configuration is simple.

Since vSphere 5.1 already provides an in-box VMware driver for Fibre Channel, there is no

need to install a driver during the initial setup. After the vSphere 5.1 installation, however, you

should verify with VMware or Emulex that the in-box driver is the latest and, if necessary,

update it.
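A sketch of this check from the ESXi shell, assuming the newer driver has been downloaded as an offline bundle (the bundle path and name are placeholders):

    # Show the currently installed Emulex driver VIB and its version
    esxcli software vib list | grep -i lpfc

    # Apply a newer driver from an offline bundle, then reboot the host
    esxcli software vib update -d /vmfs/volumes/datastore1/emulex-fc-driver-offline-bundle.zip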


After configuring the HBA, review the Configuration tab of vCenter Server; you should see your

device listed under Storage Adapters, as shown in Figure 4.

Figure 4. In this example, ESXi has automatically recognized a 16GFC HBA

Alternatively, you can use vMA to remotely send commands to the ESXi host to verify that the

driver has been configured correctly.
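For example, from vMA or any vCLI station, the following lists the host’s storage adapters together with the driver bound to each (hostname and account are placeholders):

    esxcli --server esxi01.example.com --username root storage core adapter list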


Using NPIV to identify HBA ports

N-Port ID Virtualization (NPIV)10 is supported in vSphere 5.1. This capability allows a single

Fibre Channel HBA port to register with the fabric using multiple worldwide port names (WWNs), each representing a unique virtual port. To learn more, refer to the VMware technical note, “Configuring

and Troubleshooting N-Port ID Virtualization,” which provides specific information on Emulex

adapters.

For additional information, refer to the VMware document, “vSphere Storage Guide,” which is

available at http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.

Configuring the storage array

vSphere 5.1 introduced several new storage features and enhanced others. You should be

aware of the following:

- VMFS-5

  o Support for 32 hosts accessing a single read-only file on a VMFS volume

- vStorage API for Array Integration (VAAI)

- Storage Distributed Resource Scheduler (SDRS)

  o Datastore correlation detector

  o New I/O metric – VMobservedLatency

- Storage I/O Control (SIOC)

  o Automatic setting for threshold latency

- Storage vMotion

  o Up to four parallel migrations concurrently

- Storage protocol enhancements

  o 16GFC support

While these valuable new features are beyond the scope of this guide, be aware that you may

be facing additional steps after you install vSphere 5.1. For example, after mounting a LUN and

formatting it with VMFS-5, you may need to determine if additional, array-specific agents are

required to support features such as VAAI or vSphere APIs for Storage Awareness (VASA).

For this POC, Emulex configured the HP 3PAR Storage based on information provided in the HP document, “HP 3PAR Storage and VMware vSphere 5 best practices.”

10 Maintained by American National Standards Institute (ANSI), Technical Committee T11


Note

For more information on ESXi 5.1 features that are supported by a particular HP array, check VMware’s HCL to determine which specific array features are supported for each model, and also consult HP technical support.

Provisioning virtual LUNs

You can use the HP 3PAR InForm Management Console (shown in Figure 5) to configure and

manage HP 3PAR Storage.

Figure 5. View of the HP 3PAR InForm Management Console

For this POC, Emulex started by configuring a single ESXi Common Provisioning Group (CPG) named VMware, using all drives in the array for provisioning virtual LUNs (VLUNs). From this CPG, Emulex provisioned a single 600GB LUN to the DL380 Gen8 server to validate FC connectivity and provide storage for the VMs.
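The same provisioning can be scripted through the HP 3PAR InForm CLI. A sketch of the equivalent steps; the CPG, volume and host names and the WWPNs are placeholders, and the host persona number should be confirmed for your InForm OS release before use:

    createcpg VMware
    createvv -cpg VMware vv_vmware 600g
    createhost -persona 11 esx01 10000090FA000001 10000090FA000002
    createvlun vv_vmware 1 esx01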

The next concern, after deploying vSphere 5.1, is HP’s suggested configuration for multipathing.


Configuring multipathing

By default with vSphere 5.1, HP 3PAR Storage uses Fixed path policy for active/active storage

arrays. This policy maximizes bandwidth utilization by designating the preferred path to each

LUN through the appropriate storage controller.

According to HP documentation, HP 3PAR Storage also supports Round Robin path policy,

which can improve storage performance by load-balancing I/O requests between active paths,

sending a fixed number of requests through each in turn.

Note

The fixed number of I/Os is user-configurable. You should consult HP technical support for their recommendations.
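If you adopt Round Robin for a 3PAR LUN, both the policy and the per-path I/O count can be set with esxcli. A minimal sketch; the NAA identifier and the IOPS value of 100 are placeholders rather than an HP recommendation:

    # Switch the device to the Round Robin path selection policy
    esxcli storage nmp device set --device naa.60002ac0000000000000000000000001 --psp VMW_PSP_RR

    # Rotate to the next path after every 100 I/Os, then verify the device settings
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.60002ac0000000000000000000000001 --type iops --iops 100
    esxcli storage nmp device list -d naa.60002ac0000000000000000000000001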

You might consider enabling the array’s Asymmetric Logical Unit Access (ALUA) feature,

which can improve storage performance in some environments.

Performance comparison

As VM density increases, the burden placed on storage by applications running on these VMs

will also increase. As a result, both Emulex and VMware have carried out performance tests

designed to demonstrate the ability of 16GFC storage to sustain a significantly higher workload

than 8GFC storage without increasing CPU utilization on the host.


Figure 6 shows how storage was configured in the VMware test environment.

Figure 6. Test environment

Test method

The Iometer tool was used to compare sequential read and write throughput and CPU

effectiveness with 16GFC and 8GFC HBAs. Various block sizes were used.

A VM was configured with a single Iometer worker thread. The VM was hosted on a DL380

Gen8 server that was configured with the following:

- Two six-core Intel Xeon E5-2640 processors

- HP SN1000E dual-port 16GFC HBA

- Emulex LPe12002 dual-port 8GFC HBA

A Brocade 6510 16Gb FC switch was also part of the POC.


The VM was configured as follows:

- One virtual CPU

- Guest memory: 4,096MB

- Virtual disk: 10GB OS and 256GB RDM

- SCSI controllers: One LSI Logic and one PVSCSI virtual controller

- VM Virtual Hardware Version 9

- Guest OS: Microsoft Windows Server 2008 R2, 64-bit

The target RDM LUN for the testing was placed on a SANBlaze VirtualLUN 6.3 16GFC

appliance, which is used to emulate Fibre Channel drives in order to characterize read/write

performance. The SANBlaze device was configured as follows:

- HP DL380 G7

- 256GB RAM

- 16GFC HBA

Results

Sequential read throughput

Figure 7 shows sequential read throughput for a range of block sizes.


Figure 7. Sequential read throughput in MB/s in the test environment

With larger block sizes, the result is what was expected from 16GFC: the adapter sustains near-line-rate throughput.11 Doubling the size of the pipe also doubles the throughput the adapter can deliver.12 16GFC out-performed 8GFC by almost 100% with larger blocks.

CPU effectiveness – reads

Figure 8 shows CPU effectiveness (defined as total IOPS divided by CPU utilization) for sequential reads. This metric characterizes how many I/Os the processor completes per unit of utilization; a higher number is therefore more desirable because it indicates the processor is less stressed for a given I/O load. For example, 80,000 IOPS at 40% CPU utilization yields an effectiveness of 2,000, while the same IOPS at 80% utilization yields only 1,000.

11 1,560 MB/s for sequential reads

12 750 MB/s for sequential reads

[Chart: sequential read throughput (MB/s) versus block size from 1K to 256K, comparing 8GFC and 16GFC]


Figure 8. CPU effectiveness for sequential reads at various block sizes

CPU effectiveness was similar with 16GFC and 8GFC, despite the fact that, with 16GFC, the

CPU was completing significantly more I/Os.

[Chart: CPU effectiveness for sequential reads versus block size from 1K to 256K, comparing 8GFC and 16GFC]


Sequential write throughput

Figure 9 shows sequential write throughput for a range of block sizes.

Figure 9. Sequential write throughput in MB/s in the test environment

With larger block sizes, the result is again what was expected from 16GFC: the adapter sustains near-line-rate throughput. As before, doubling the size of the pipe also doubles the throughput the adapter can deliver; 16GFC out-performed 8GFC by almost 100% with larger blocks.

[Chart: sequential write throughput (MB/s) versus block size from 1K to 256K, comparing 8GFC and 16GFC]


CPU effectiveness – writes

Figure 10 shows CPU effectiveness for sequential writes.

Figure 10. CPU effectiveness for sequential writes at various block sizes

CPU effectiveness was similar with 16GFC and 8GFC, despite the fact that, with 16GFC, the

CPU was completing significantly more I/Os.

[Chart: CPU effectiveness for sequential writes versus block size from 1K to 256K, comparing 8GFC and 16GFC]


Summary

vSphere 5.1 clearly adds a broad range of features to the hypervisor; however, taking full

advantage of these features requires newly-supported 16GFC SAN connectivity in your

virtualization environment.

Testing carried out by Emulex indicated that I/O performance with a 16GFC implementation on HP DL380 Gen8 servers and HP 3PAR P10000 V400 storage was significantly higher than with 8GFC, without the need to stress the CPUs. This leaves HP DL380 Gen8 servers free to apply their CPU power to other, higher-priority tasks.

Planning the deployment of a vSphere 5.1 HP host – or the migration of an existing host to

vSphere 5.1 – is a relatively simple process; guidelines are provided in this guide.

Using the HP DL380 Gen8 and HP 3PAR P10000 with 16GFC connectivity, rather than 8GFC, results in an infrastructure that can meet the demands of the larger VMs you are now able to create with vSphere 5.1, thus allowing you to introduce additional business-critical applications.


Appendix A – Configuring BfS

This appendix outlines the process for configuring BfS:

1. Collaborate with your SAN administrator on provisioning a boot LUN and presenting it to the

vSphere 5.1 host.

2. Download the Universal Boot Code firmware for your adapter, available at http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareIndex.jsp?lang=en&cc=us&prodNameId=5219801&prodTypeId=12169&prodSeriesId=5219798&swLang=8&taskId=135&swEnvOID=54. For example, select HP SN1000E 16Gb 2-Port PCIe Fibre Channel Host Bus Adapter, then Cross operating system (BIOS, Firmware, Diagnostics, etc.).

3. Power on the server and, at the prompt, press Ctrl-E.

4. Install the boot code firmware.

5. Reboot the host and, at the prompt, press Ctrl-E.

6. Specify the adapter port from which the system will be booting.

7. Scan the array and select the boot LUN.

8. Save the settings and reboot the host.

9. Insert the vSphere 5.1 CD into the CD drive.

10. Initiate the install.

11. Select the appropriate disk on the SAN when asked where to install the media.


For more information

“Storage I/O Performance on VMware vSphere 5.1 over 16 Gigabit Fibre Channel”

http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-16Gb-StorageIO-Perf.pdf

“What’s New in VMware vSphere 5.1 – Performance”

http://www.vmware.com/resources/techresources/10309

“What’s New in VMware vSphere 5.1 – Storage”

http://www.vmware.com/resources/techresources/10308

HP virtualization with VMware, including a section on ProLiant servers

www.hp.com/go/vmware

HP storage solutions for VMware http://h71028.www7.hp.com/enterprise/w1/en/solutions/storage-vmware.html

“3PAR Utility Storage with VMware vSphere” http://www.vmware.com/files/pdf/techpaper/vmw-vsphere-3par-utility-storage.pdf

To help us improve our documents, please provide feedback at [email protected].

© Copyright 2012 Emulex Corporation. The information contained herein is subject to change without notice. The only warranties for Emulex products and services are set forth in the express warranty statements accompanying such products and services. Emulex shall not be liable for technical or editorial errors or omissions contained herein.

OneCommand is a registered trademark of Emulex Corporation. HP is a registered trademark in the U.S. and other countries.

VMware is a registered trademark of VMware, Inc.

World Headquarters 3333 Susan Street, Costa Mesa, California 92626 +1 714 662 5600
Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547
Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0) 89 97007 177
Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5322 1348
Wokingham, United Kingdom +44 (0) 118 977 2929