Rearchitecting Storage for Server Virtualization
Stephen Foskett
Oct 21, 2010
This is Not a Rah-Rah Session
Agenda
• First 45 minutes
  ▫ Impact of hypervisors on I/O
  ▫ VM storage approaches
  ▫ VM connectivity options
• Break
• Second 45 minutes
  ▫ Storage features for VM
  ▫ Questions and comments
Introducing Virtualization
Poll: Who Is Using VMware?
[Chart: virtualization users by hypervisor (VMware, Microsoft, Other, None). Source: a dozen analyst SWAGs]
Server Virtualization: “The I/O Blender”
• Shared storage is challenging to implement
• Storage arrays “guess” what’s coming next based on allocation (LUN), taking advantage of sequential performance
• Server virtualization throws I/O into a blender – all I/O is now random I/O! (sketched below)
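A minimal sketch of the blender effect, in plain Python with invented VM names and block ranges: each guest issues a perfectly sequential stream, but the hypervisor interleaves them, so the array sees requests hopping all over the LUN.

```python
# Each VM issues a perfectly sequential read stream against its own
# region of the shared LUN (hypothetical VM names and block ranges).
vm_streams = {
    "vm-web":  list(range(0, 8)),
    "vm-db":   list(range(10_000, 10_008)),
    "vm-mail": list(range(50_000, 50_008)),
}

# The hypervisor services the guests round-robin, so the request stream
# the array actually sees jumps between distant LBAs on every I/O.
blended = [
    (vm, lba)
    for group in zip(*vm_streams.values())
    for vm, lba in zip(vm_streams.keys(), group)
]

sequential_transitions = sum(
    1 for (_, a), (_, b) in zip(blended, blended[1:]) if b == a + 1
)
print(f"{len(blended)} requests, {sequential_transitions} sequential transitions")
# Per-VM the workload was 100% sequential; at the array it looks random.
```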
Server Virtualization Requires SAN and NAS
• Server virtualization has transformed the data center and storage requirements
  ▫ VMware is the #1 driver of SAN adoption today!
  ▫ 60% of virtual server storage is on SAN or NAS
  ▫ 86% have implemented some server virtualization
• Server virtualization has enabled and demanded centralization and sharing of storage on arrays like never before!
Server Virtualization Recoil
• Dramatically increased I/O
• “Detrimental” to storage utilization
• Patchwork of support, few standards
  ▫ “VMware mode” on storage arrays
  ▫ Virtual HBA / N_Port ID Virtualization (NPIV)
  ▫ Everyone is qualifying everyone and jockeying for position
• Befuddled traditional backup, replication, reporting
Three Pillars of VM Performance
Poll: Does Server Virtualization Improve Storage Utilization?
Hypervisor Storage Approaches
Hypervisor Storage Options: Shared Storage
• Shared storage - the common/workstation approach
  ▫ Stores VMDK image in VMFS datastores
  ▫ DAS or FC/iSCSI SAN
  ▫ Hyper-V VHD is similar
• Why?
  ▫ Traditional, familiar, common (~90%)
  ▫ Prime features (Storage VMotion, etc.)
  ▫ Multipathing, load balancing, failover*
• But…
  ▫ Overhead of two storage stacks (5-8%; sketched below)
  ▫ Harder to leverage storage features
  ▫ Often shares storage LUN and queue
  ▫ Difficult storage management
[Diagram: Guest OS on VM host; VMDK file in a VMFS datastore on DAS or SAN storage]
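To make the “two storage stacks” point concrete, here is a minimal, purely illustrative model of the double address translation a guest I/O goes through: guest filesystem block to VMDK offset, then VMDK offset to a VMFS/LUN location. The layouts and numbers are invented for the example and are not VMware’s actual on-disk format.

```python
# Hypothetical, simplified layout: every layer adds its own lookup
# before the I/O reaches the physical LUN.

GUEST_FS_BLOCK = 4096        # guest filesystem block size
VMDK_BASE_OFFSET = 2**30     # where this VMDK's data starts inside the VMFS volume
VMFS_BLOCK = 1 << 20         # VMFS allocates space in large blocks (1 MB here)

def guest_to_vmdk(guest_block: int) -> int:
    """Guest storage stack: file block -> byte offset inside the VMDK."""
    return guest_block * GUEST_FS_BLOCK

def vmdk_to_lun(vmdk_offset: int) -> tuple[int, int]:
    """Hypervisor (VMFS) stack: VMDK offset -> (VMFS block, offset within it)."""
    volume_offset = VMDK_BASE_OFFSET + vmdk_offset
    return volume_offset // VMFS_BLOCK, volume_offset % VMFS_BLOCK

for guest_block in (0, 1, 100_000):
    vmdk_offset = guest_to_vmdk(guest_block)        # first translation (guest OS)
    vmfs_block, within = vmdk_to_lun(vmdk_offset)   # second translation (hypervisor)
    print(f"guest block {guest_block:>7} -> VMDK offset {vmdk_offset:>12}"
          f" -> VMFS block {vmfs_block}, +{within} bytes")
```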
Hypervisor Storage Options: Shared Storage on NFS
• Shared storage on NFS – skip VMFS and use NAS
  ▫ The NFS export is the datastore
• Wow!
  ▫ Simple – no SAN
  ▫ Multiple queues
  ▫ Flexible (on-the-fly changes)
  ▫ Simple snap and replicate*
  ▫ Enables full VMotion
  ▫ Use fixed LACP for trunking
• But…
  ▫ Less familiar (3.0+)
  ▫ CPU load questions
  ▫ Default limited to 8 NFS datastores
  ▫ Will multi-VMDK snaps be consistent?
[Diagram: Guest OS on VM host; VMDK stored directly on NFS storage]
Hypervisor Storage Options: Raw Device Mapping (RDM)
• Raw device mapping (RDM) - guest VMs access storage directly over iSCSI or FC
  ▫ VMs can even boot from raw devices
  ▫ Hyper-V pass-through LUN is similar
• Great!
  ▫ Per-server queues for performance (see the queueing sketch below)
  ▫ Easier measurement
  ▫ The only method for clustering
• But…
  ▫ Tricky VMotion and DRS
  ▫ No Storage VMotion
  ▫ More management overhead
  ▫ Limited to 256 LUNs per data center
[Diagram: Guest OS I/O passes through the VM host directly to the SAN LUN; only a mapping file lives on VMFS]
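A minimal sketch of why per-device queues matter, with invented queue depths and request counts: on a shared VMFS datastore all VMs typically contend for one LUN queue, while with RDMs each VM’s LUN has its own queue, so one busy guest cannot starve the others of queue slots.

```python
from collections import deque

QUEUE_DEPTH = 32  # hypothetical per-LUN queue depth

def serviced_per_round(requests_by_vm, shared=True):
    """Count how many of each VM's outstanding requests fit in the queue(s)."""
    if shared:
        # One LUN queue for the whole datastore: slots are first come, first served.
        q = deque()
        for vm, n in requests_by_vm.items():
            q.extend([vm] * n)
        window = list(q)[:QUEUE_DEPTH]
        return {vm: window.count(vm) for vm in requests_by_vm}
    # One queue per RDM LUN: each VM gets its own QUEUE_DEPTH slots.
    return {vm: min(n, QUEUE_DEPTH) for vm, n in requests_by_vm.items()}

outstanding = {"vm-db": 60, "vm-web": 4, "vm-mail": 4}  # vm-db is hammering the disk
print("shared VMFS queue:", serviced_per_round(outstanding, shared=True))
print("per-RDM queues:   ", serviced_per_round(outstanding, shared=False))
```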
Physical vs. Virtual RDM
Virtual Compatibility Mode
• Appears the same as a VMDK on VMFS
• Retains file locking for clustering
• Allows VM snapshots, clones, VMotion
• Retains the same characteristics if storage is moved
Physical Compatibility Mode
• Appears as a LUN on a “hard” host
• Allows V-to-P clustering; no VMware locking
• No VM snapshots, VCB, VMotion
• All characteristics and SCSI commands (except “Report LUN”) are passed through – required for some SAN management software
Physical vs. Virtual RDM
Which VMware Storage Method Performs Best?
[Charts: mixed random I/O throughput and CPU cost per I/O for VMFS, RDM (physical), and RDM (virtual). Source: “Performance Characterization of VMFS and RDM Using a SAN”, VMware Inc., 2008]
Storage Connectivity Options
Which Storage Protocol To Use?
• Server admins don’t know or care about storage protocols and will want whatever they are familiar with
• Storage admins have preconceived notions about the merits of various options:
  ▫ FC is fast, low-latency, low-CPU, expensive
  ▫ NFS is slow, high-latency, high-CPU, cheap
  ▫ iSCSI is medium, medium, medium, medium
vSphere Protocol Performance
vSphere CPU Utilization
vSphere Latency
Microsoft Hyper-V Performance
The Upshot: It Doesn’t Matter
• Use what you have and are familiar with!
• FC, iSCSI, NFS all work well
  ▫ Most enterprise production VM data is on FC; many smaller shops use iSCSI or NFS
  ▫ Either/or? 50% use a combination
• For IP storage
  ▫ Network hardware and config matter more than protocol (NFS, iSCSI, FC)
  ▫ Use a separate network or VLAN
  ▫ Use a fast switch and consider jumbo frames
• For FC storage
  ▫ 8 Gb FC/FCoE is awesome for VMs
  ▫ Look into NPIV
  ▫ Look for VAAI
Break Time!
Stephen Foskett
[email protected] / sfoskett
+1 (508) 451-9532
FoskettServices.com
blog.fosketts.net
GestaltIT.com
VMware Storage Features
What’s New in vSphere 4
• VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage
  ▫ Lots of new features like thin provisioning, PSA, any-to-any Storage VMotion, PVSCSI
  ▫ Massive performance upgrade (400k IOPS!)
• vSphere 4.1 is equally huge for storage
  ▫ Boot from SAN
  ▫ vStorage APIs for Array Integration (VAAI)
  ▫ Storage I/O Control (SIOC), aka DRS for storage
Storage Features By License
Native VMware Thin Provisioning
• VMware ESX 4 allocates storage in 1 MB chunks as capacity is used (sketched below)
  ▫ Similar support enabled for virtual disks on NFS in VI 3
  ▫ Thin provisioning existed for block storage and could be enabled on the command line in VI 3
  ▫ Present in VMware desktop products
• vSphere 4 fully supports and integrates thin provisioning
  ▫ Every version/license includes thin provisioning
  ▫ Allows thick-to-thin conversion during Storage VMotion
• In-array thin provisioning is also supported (we’ll get to that…)
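A minimal, illustrative model of allocate-on-first-write in 1 MB chunks. The class, names, and sizes are invented for the example and are not VMware’s on-disk format: the virtual disk advertises its full size, but backing space is consumed only for chunks that have ever been written.

```python
CHUNK = 1 << 20  # allocate backing storage in 1 MB chunks

class ThinDisk:
    """Toy thin-provisioned disk: space is allocated on first write to a chunk."""

    def __init__(self, virtual_size: int):
        self.virtual_size = virtual_size
        self.allocated_chunks: set[int] = set()

    def write(self, offset: int, length: int) -> None:
        # Mark every chunk the write touches as allocated.
        first, last = offset // CHUNK, (offset + length - 1) // CHUNK
        self.allocated_chunks.update(range(first, last + 1))

    @property
    def allocated_bytes(self) -> int:
        return len(self.allocated_chunks) * CHUNK

disk = ThinDisk(virtual_size=100 << 30)   # guest sees a 100 GB disk
disk.write(0, 4096)                       # guest formats the first blocks
disk.write(10 << 30, 256 << 10)           # then writes 256 KB at the 10 GB mark
print(f"provisioned: {disk.virtual_size >> 30} GB, "
      f"actually allocated: {disk.allocated_bytes >> 20} MB")
```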
Pluggable Storage Architecture
• VMware ESX includes multipathing built in
  ▫ Basic native multipathing (NMP) is round-robin fail-over only – it will not load balance I/O across multiple paths or make more intelligent decisions about which paths to use
  ▫ E+ only: the vSphere 4 Pluggable Storage Architecture allows third-party developers to replace ESX’s storage I/O stack
• There are two classes of third-party plug-ins (a toy version is sketched below):
  ▫ Path-selection plugins (PSPs) optimize the choice of which path to use, ideal for active/passive type arrays
  ▫ Storage array type plugins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays
• EMC PowerPath/VE for vSphere does everything
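A minimal sketch of the pluggable idea, using an invented interface and class names rather than the real PSA API: the hypervisor asks whichever path-selection plugin is loaded to pick a path for each I/O, so a third-party plugin can substitute smarter, load-aware choices for simple rotation or failover.

```python
from abc import ABC, abstractmethod
import itertools

class PathSelectionPlugin(ABC):
    """Hypothetical plugin interface: given the live paths, pick one per I/O."""
    @abstractmethod
    def select(self, paths: list[str]) -> str: ...

class RoundRobinPSP(PathSelectionPlugin):
    """Built-in style policy: rotate through paths regardless of load."""
    def __init__(self):
        self._counter = itertools.count()
    def select(self, paths):
        return paths[next(self._counter) % len(paths)]

class LeastQueuePSP(PathSelectionPlugin):
    """Third-party style policy: send each I/O down the least-busy path."""
    def __init__(self):
        self.outstanding = {}
    def select(self, paths):
        choice = min(paths, key=lambda p: self.outstanding.get(p, 0))
        self.outstanding[choice] = self.outstanding.get(choice, 0) + 1
        return choice

paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]
for plugin in (RoundRobinPSP(), LeastQueuePSP()):
    picks = [plugin.select(paths) for _ in range(4)]
    print(type(plugin).__name__, "->", picks)
```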
vStorage APIs for Array Integration (VAAI)
• vSphere 4.1 only!
• Array-based “Full Copy” command offloads operations to array snapshots for Storage VMotion
• Acceleration of storage I/O – think “I/O dedupe” (not to be confused with data deduplication)
• Hardware-assisted locking on a block-by-block basis rather than the entire LUN (sketched below)
• Array-based thin provisioning integration using TRIM, zeroing, etc.
• Supposed to have thin provisioning stun, but it’s AWOL
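A minimal illustration of the locking difference, using toy data structures rather than the actual SCSI commands: reserving the whole LUN stalls every other host for the duration of a metadata update, while an atomic per-block lock only stalls hosts touching that same block.

```python
import threading

class Lun:
    """Toy LUN: either one whole-device reservation or per-block locks."""
    def __init__(self, blocks: int):
        self.device_lock = threading.Lock()                            # reserve the LUN
        self.block_locks = [threading.Lock() for _ in range(blocks)]   # per-block locks

    def update_metadata_reserve(self, block: int):
        with self.device_lock:          # every other host's I/O to this LUN waits
            self._touch(block)

    def update_metadata_per_block(self, block: int):
        with self.block_locks[block]:   # only writers to this block wait
            self._touch(block)

    def _touch(self, block: int):
        pass  # stand-in for the actual metadata write

lun = Lun(blocks=8)
# Two hosts updating different blocks: with per-block locking they proceed in
# parallel; with a whole-LUN reservation one would have to wait for the other.
hosts = [threading.Thread(target=lun.update_metadata_per_block, args=(b,)) for b in (2, 5)]
for t in hosts: t.start()
for t in hosts: t.join()
print("per-block updates completed without contending for the whole LUN")
```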
Storage I/O Control (SIOC)
• “SIOC provides a dynamic control mechanism for proportional allocation of shared storage resources to VMs running on multiple hosts”
• ESX can provide quality of service for storage access to virtual machines
  ▫ Enabled on the datastore object; when a pre-defined latency level is exceeded, it begins to throttle I/O based on the shares assigned to each VM (a toy model follows below)
  ▫ SIOC is aware of the storage array device-level queue slots as well as the latency of workloads, and decides how best to keep machines below the predefined latency tolerance by manipulating all the ESX host I/O queues
  ▫ Introduces an element of I/O fairness across a datastore
• But:
  ▫ vSphere 4.1 and Enterprise Plus only
  ▫ Only supported with block storage (FC or iSCSI)
  ▫ Does not support RDMs or datastores built from extents; only 1:1 LUN-to-datastore mapping
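A minimal sketch of proportional-share throttling with invented numbers and a deliberately crude model (this is not VMware’s algorithm): once observed datastore latency crosses the configured threshold, the host queue depth is shrunk and the remaining slots are divided among VMs in proportion to their shares.

```python
LATENCY_THRESHOLD_MS = 30     # congestion threshold configured on the datastore
MAX_QUEUE_DEPTH = 64          # uncontended host device queue depth

def queue_slots(observed_latency_ms: float, shares: dict[str, int]) -> dict[str, int]:
    """Divide the (possibly throttled) queue depth among VMs by their shares."""
    if observed_latency_ms <= LATENCY_THRESHOLD_MS:
        depth = MAX_QUEUE_DEPTH                     # no congestion: no throttling
    else:
        # Crude throttle: shrink the queue in proportion to how far over threshold we are.
        depth = max(4, int(MAX_QUEUE_DEPTH * LATENCY_THRESHOLD_MS / observed_latency_ms))
    total = sum(shares.values())
    return {vm: max(1, depth * s // total) for vm, s in shares.items()}

shares = {"vm-oltp": 2000, "vm-web": 1000, "vm-batch": 500}   # hypothetical share values
for latency in (10, 45, 90):
    print(f"{latency:>3} ms latency ->", queue_slots(latency, shares))
```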
Why NPIV Matters
• N_Port ID Virtualization (NPIV) gives each server a unique WWN
  ▫ Easier to move and clone* virtual servers
  ▫ Better handling of fabric login
  ▫ Virtual servers can have their own LUNs, QoS, and zoning (see the zoning sketch below)
  ▫ Just like a real server!
• When looking at NPIV, consider:
  ▫ How many virtual WWNs does it support? The T11 spec says “up to 256”
  ▫ OS, virtualization software, HBA, FC switch, and array support and licensing
  ▫ Can’t upgrade some old hardware for NPIV, especially HBAs
[Diagram: Without NPIV, three virtual servers share the physical HBA WWN 21:00:00:e0:8b:05:05:04; with NPIV, each virtual server logs in with its own WWN (…05:05:05, …05:05:06, …05:05:07)]
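A minimal sketch of why per-VM WWNs help, with made-up WWNs and LUN names (real NPIV assignment is handled by the HBA and fabric): with one shared WWN the fabric can only zone at whole-host granularity, while per-VM WWNs let each virtual server be zoned to just its own LUNs.

```python
# Toy model of fabric zoning: each zone maps an initiator WWN to the LUNs it may see.
zones = {
    # Without NPIV every VM logs in as the physical HBA, so one zone covers them all.
    "21:00:00:e0:8b:05:05:04": {"lun-web", "lun-db", "lun-mail"},
    # With NPIV each virtual server presents its own WWN and gets its own zone.
    "21:00:00:e0:8b:05:05:05": {"lun-web"},
    "21:00:00:e0:8b:05:05:06": {"lun-db"},
    "21:00:00:e0:8b:05:05:07": {"lun-mail"},
}

def visible_luns(initiator_wwn: str) -> set[str]:
    """What a fabric login with this WWN is allowed to see."""
    return zones.get(initiator_wwn, set())

print("shared physical WWN sees: ", sorted(visible_luns("21:00:00:e0:8b:05:05:04")))
print("vm-db's own virtual WWN sees:", sorted(visible_luns("21:00:00:e0:8b:05:05:06")))
```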
Intel VMDq, VMDc, MS RSS
• VMDq is like NPIV for network cards (sketched below)
  ▫ Hardware-assisted sorting of virtual network cards
  ▫ Uses MAC address
  ▫ Requires special driver
• Supported on ESX and Hyper-V
• Two more technologies:
  ▫ VMDc is different – Intel’s networking take on SR-IOV
  ▫ Microsoft RSS allocates work to multiple CPU cores
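A minimal sketch of the sorting idea with toy frames and queues (not a driver): the NIC classifies each incoming frame by destination MAC and steers it straight into that virtual machine’s receive queue, instead of leaving the hypervisor to demultiplex everything in software.

```python
from collections import defaultdict

# Hypothetical MAC-to-VM assignments programmed into the NIC.
mac_to_vm = {
    "00:50:56:aa:00:01": "vm-web",
    "00:50:56:aa:00:02": "vm-db",
}

def classify(frames):
    """NIC-style sort: steer each frame into its VM's queue by destination MAC."""
    queues = defaultdict(list)
    for dst_mac, payload in frames:
        queues[mac_to_vm.get(dst_mac, "hypervisor")].append(payload)
    return queues

incoming = [
    ("00:50:56:aa:00:01", "GET /index.html"),
    ("00:50:56:aa:00:02", "SELECT 1"),
    ("ff:ff:ff:ff:ff:ff", "ARP who-has 10.0.0.1"),  # broadcast falls to the hypervisor
]
for vm, queue in classify(incoming).items():
    print(vm, queue)
```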
And Then There’s VDI…
• Desktop virtualization (VDI) takes everything we just worried about and amplifies it
  ▫ Massive I/O crunches
  ▫ Huge duplication of data
  ▫ More wasted capacity
  ▫ More user visibility
  ▫ More backup trouble
Thank You!
Stephen Foskett
[email protected] / sfoskett
+1 (508) 451-9532
FoskettServices.com
blog.fosketts.net
GestaltIT.com