VMworld 2011
Transcript of VMworld 2011
1© Copyright 2010 EMC Corporation. All rights reserved.
Rick Scherer, Cloud Architect – EMC Corporation
@rick_vmwaretips – VMwareTips.com
VMworld 2011 @ The Venetian, Las Vegas
Agenda
• VMworld 2011 Highlights
• VMware Announcements
• EMC Announcements
• Hands-On Labs Details
• Q & A
VMworld 2011 Highlights
• Show Dates: August 29 – September 1
• VMworld Theme: Your Cloud. Own It.
• Attendees: 19,000+
• Audience Profile: IT Manager (44%), Architect, Sys Admin, C-Level, IT Director
A Few Statistics from Paul Maritz's Keynote
• > 50% of workloads virtualized
• A VM is created every 6 seconds – faster than babies are born in the US
• 20 million VMs running on vSphere – if these were physical, they would stretch 2X the length of the Great Wall of China
• 5.5 vMotions per second – more VMs in flight than aircraft in flight at any given time
• > 800,000 VM admins – roughly the population of San Francisco
• > 1,650 ISV partners, ~3,000 apps certified on VMware
Sessions by Tracks & Industries
• Cloud Application Platform
• Business Continuity
• Mgt. and Operations
• Security and Compliance
• Virtualizing Business Critical Applications
• vSphere
• End-User Computing
• Partner (For VMware Partners Only)
• Technology Partners/Sponsors
• Technology Exchange for Alliance Partners
Over 175 unique breakout sessions and 20+ labs
VMworld Session Highlights
VMworld 2011 Numbers…
• 200+ exhibitors in Solutions Exchange
• Over 19,000 attendees
• 7 EMC-led sessions
• NetApp had 3 sessions
• HP had 2 sessions
• Dell and IBM had 1 session each
• Over 13,500 labs attended
• Over 148,100 VMs deployed
• 175 unique breakout sessions
• Staged more than 470 lab seats
VMware Announcements
VMware Introduces Portal for Database as a Service (DBaaS)
• Reduce database sprawl
• Self-service database management
• Extend virtual resource capabilities to the data tier
VMware vCloud Accelerates Journey to Enterprise Hybrid Cloud
VMware Cloud Infrastructure Suite – The Foundation for the Enterprise Hybrid Cloud
• VMware vCenter Site Recovery Manager 5
• VMware vCloud Director 1.5
• VMware vShield 5
• VMware vCloud Connector 1.5 – fast, reliable transfer between private and public clouds
• vcloud.vmware.com – find, connect with, and test drive Service Provider vCloud offerings
• vCenter Site Recovery Manager 5 – disaster recovery to the cloud; cloud-based disaster recovery services
Together, these products will help customers transform IT to drive greater efficiency of existing investments and improve operational agility.
End User Computing in the Post-PC Era
• VMware View 5
• VMware Horizon
• Projects AppBlast and Octopus
VMware and Cisco Collaborate on Cloud Innovation
• VXLAN submitted to the IETF
• Delivers the isolation and segmentation benefits of Layer 3 networks while workloads can still move as if on a flat Layer 2 network
Technology Preview: vStorage APIs for VM and Application Granular Data Management
• Preview of VM Volumes; no date given
• Major change to the storage model – everything is at the vApp layer
• Works with block and NAS models
• 5 storage vendor implementations demonstrated
• EMC had a strong demonstration footprint
How It All Comes Together
[Diagram: an ESX server binds to VM Volumes through the VM Granular Storage API (Create/Delete/Snap/Clone/Bind) and issues reads/writes over SCSI or NFS through an IO de-mux; VM Volumes live in Capacity Pools on the storage system, alongside today's VMFS and NFS datastores]

Capacity Pool
• Storage resources
• Administrative domain
• Visible to all servers in an ESX cluster
• One or more Capacity Pools may offer the required Storage Capabilities
• Management via the VM Granular Storage API Provider

IO path (SCSI or NFS)
• Data path to VM Volumes
• Block: FC, FCoE, iSCSI
• File: NFS

VM Granular Storage API Provider
• Supports the VM Granular Storage web-service API
• Delivered by the storage vendor
• On or off array

VM Volumes and Storage Profiles
• Storage resources
• One or more Storage Profiles advertising different Storage Capabilities
• Manage VM Volumes
• One-to-one mapping of a data VM Volume to a VMDK
• Meta VM Volume for non-data VM files
EMC Announcements
What Does EMC Support with vSphere 5, and When?
• Storage platforms on the vSphere 5 HCL: VPLEX, VMAX, VMAXe, VNX, VNXe, CX4, NS
• EMC Virtual Storage Integrator 5 (VSI) – supported
• VAAI new API support
  – VMAX – already included in the latest Enginuity, but testing is still underway; not supported yet
  – VNX – beta of new operating code to support the new APIs is underway
  – VNXe – target for file-level support is Q4 2011; 2012 for block
  – Isilon – support coming in 2012
• VASA support
  – EMC general support GAs 9/23
  – Will support block protocols for VMAX, DMX, VNX, CX4, NS
  – File support for these platforms and others in 2012
• PowerPath/VE
  – Day 1 support, including an updated, simpler licensing model and support for stateless ESX
EMC Releases VSI 5, a 5th-Generation Plug-in for VMware
• Full support for VMware vSphere 5
• Provisions in minutes
• New and robust role-based access controls
EMC Breaks World Record with vSphere 5
1 Million IOPS Through a Single vSphere Host
EMC Breaks World Record with vSphere 5
New World Record: 10 GBps from vSphere 5
EMC VNX Accelerates VMware View 5.0
Boot 500 Desktops in 5 Minutes
• Boost performance during login storms
• Maximize efficiency
• Simplify management
EMC Technology Preview – Scale-Out NAS
Characteristics of a true scale-out NAS model:
• Multiple nodes presenting a single, scalable file system and volume
• N-way, scalable resiliency
• Linearly scalable IO and throughput
• Storage efficiency
Tech Preview: Storage, Compute, PCIe Flash Uber Mashup
Emerging workloads have different requirements:
• Some benefit by moving compute closer to storage
• Others benefit by moving data closer to compute
EMC demonstrated both:
• The effect of Lightning IO cards on latency-sensitive apps
• The effect of running a VM on a storage node for bandwidth-constrained apps
EMC Technology Preview: Avamar vCloud Protector
• Backing up and restoring vCloud Director is not simple
• Granular, reliable, and fast tenant-based restores are a must
• Self-service backup and restore is required in a cloud
vShield 5 and RSA Integration
Can the Virtual Be More Secure Than the Physical?
• Uses RSA DLP technology
• Checks compliance against global standards
• Catches data leakage
• Integrates with RSA enVision and Archer eGRC
VMware vShield App with Data Security
Hands On Labs
VMware Hands-on Labs
• 10 billion+ IOs served
• ~148,138 VMs created and destroyed over 4 days – 4,000+ more than VMworld 2010
• 2 x EMC VNX 5700s
• 131.115 terabytes of NFS traffic
• 9.73728 billion NFS IOs
• VNX internal average NFS read latency of 1.484 ms
• VNX internal average NFS write latency of 2.867 ms
EMC vLabs
• Infrastructure running on a pair of EMC VNX7500s
• Each loaded with SSD, FAST Cache, FAST VP, and 10GbE
• Most of the load on NFS, with 3 Data Movers (and 1 standby)

Labs by type (# of labs):
VSI 85, UIM 22, VNX 89, VNXe 32, Avamar 55, Archer 6, Isilon 49, VPLEX 46, VAAI 35, Atmos 20

Statistics on types of demos:
# of VMs: 1,718
# of Users: 402
# of Lab Stations: 15
# of Demos: 474
VM HA: Ground-Up Rewrite
VM HA Enhancement Summary
• Enhanced vSphere HA core – a foundation for increased scale and functionality
  – Eliminates common DNS issues
  – Multiple communication paths
    • Can leverage storage as well as the management network for communications
    • Enhances the ability to detect certain types of failures and provides redundancy
• Also:
  – IPv6 support
  – Enhanced error reporting
  – Enhanced user interface
  – Enhanced deployment mechanism
vSphere HA Primary Components
• Every host runs an agent, referred to as 'FDM' or Fault Domain Manager
  – One of the agents within the cluster is chosen to assume the role of the Master
    • There is only one Master per cluster during normal operations
  – All other agents assume the role of Slaves
• There is no more Primary/Secondary concept with vSphere HA
Useful for VPLEX and stretched clusters
The Master Role
• An FDM Master monitors:
  – ESX hosts and virtual machine availability
  – All Slave hosts; upon a Slave host failure, protected VMs on that host will be restarted
  – The power state of all the protected VMs; upon failure of a protected VM, the Master will restart it
• An FDM Master manages:
  – The list of hosts that are members of the cluster, updating this list as hosts are added or removed from the cluster
  – The list of protected VMs; the Master updates this list after each user-initiated power on or power off
The Slave Role
• A Slave monitors the runtime state of its locally running VMs and forwards any significant state changes to the Master
• It implements vSphere HA features that do not require central coordination, most notably VM Health Monitoring
• It monitors the health of the Master; if the Master should fail, it participates in the election process for a new Master
• It maintains the list of powered-on VMs
The Master Election Process
• The following algorithm is used for selecting the Master:
  – The host with access to the greatest number of datastores wins
  – In a tie, the host with the lexically highest moid is chosen; for example, moid "host-99" is higher than moid "host-100", since "9" is greater than "1"
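The tie-breaker above is a plain lexical string comparison, which can be counter-intuitive. A minimal sketch in Python (illustrative only, not VMware code; the `(moid, datastore_count)` tuple shape is an assumption made up for the example):

```python
# Illustrative sketch of the HA master election described above.
# Hosts are (moid, reachable_datastore_count) pairs -- a made-up shape,
# not an actual FDM data structure.
def elect_master(hosts):
    # Most datastores wins; ties are broken by the lexically highest moid.
    return max(hosts, key=lambda h: (h[1], h[0]))[0]

# "host-99" beats "host-100" lexically: comparing character by character,
# '9' sorts after '1'.
winner = elect_master([("host-100", 4), ("host-99", 4), ("host-7", 3)])
print(winner)  # host-99
```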
Agent Communications
• Primary agent communications utilize the management network
• All communication is point-to-point (no broadcast)
• The election is conducted using UDP
• Once the election is complete, all further Master-to-Slave communication is via SSL-encrypted TCP
• Each Slave maintains a single TCP connection to the Master
• Datastores are used as a backup communication channel when a cluster's management network becomes partitioned
Storage-Level Communications
• One of the most exciting new features of vSphere HA is its ability to use the storage subsystem for communication
  – The datastores used for this are referred to as 'Heartbeat Datastores'
  – This provides increased communication redundancy
• Heartbeat Datastores are used as a communication channel only when the management network is lost, such as in the case of isolation or network partitioning
Useful for VPLEX and stretched clusters
Storage-Level Communications
• Heartbeat Datastores allow a Master to:
  – Monitor the availability of Slave hosts and the VMs running on them
  – Determine whether a host has become network isolated rather than network partitioned
  – Coordinate with other Masters; since a VM can be owned by only one Master, Masters coordinate VM ownership through datastore communication
• By default, vCenter will automatically pick 2 datastores; these 2 datastores can also be selected by the user
Useful for VPLEX and stretched clusters
Storage-Level Communications
• Host availability can be inferred differently, depending on the storage used:
  – For VMFS datastores, the Master reads the VMFS heartbeat region
  – For NFS datastores, the Master monitors a heartbeat file that is periodically touched by the Slaves
• Virtual machine availability is reported by a file created by each Slave, which lists the powered-on VMs
• Multiple-Master coordination is done by using file locks on the datastore
Useful for VPLEX and stretched clusters
Storage-Related Stuff
vSphere 5 – vStorage Changes
1. vStorage APIs for Array Integration (VAAI) – expansion
2. vStorage APIs for Storage Awareness (VASA)
3. VMFS-5
4. Storage DRS
5. Storage vMotion enhancements
6. All Paths Down (APD) and Permanent Device Loss (PDL)
7. Native software FCoE initiator
8. NFS – improvements for scale-out
1. VAAI Updates
Understanding VAAI a little “lower”• VAAI = vStorage APIs for Array
Integration– A set of APIs to allow ESX to
offload functions to storage arrays
– In vSphere 4.1, supported on VMware File Systems (VMFS) and Raw Device Mappings (RDM) volumes,
– vSphere 5 adds NFS VAAI APIs. – Supported by EMC VNX, CX/NS,
VMAX arrays (coming soon to Isilon)
• Goals– Remove bottlenecks– Offload expensive data
operations to storage arrays• Motivation
– Efficiency– Scaling
VI3.5 (fsdm)
vSphere 4 ( fs3dm - software)
vSphere 4.1/5 (hardware) = VAAI
Diagram from VMworld 2009 TA3220 – Satyam Vaghani
Growing List of VAAI Hardware Offloads
• vSphere 4.1
  – For block storage:
    • HW-accelerated locking
    • HW-accelerated zero
    • HW-accelerated copy
  – For NAS storage:
    • None
• vSphere 5
  – For block storage:
    • Thin Provision Stun
    • Space Reclaim
  – For NAS storage:
    • Full Clone
    • Extended Stats
    • Space Reservation
Hardware-Accelerated Locking
• Without the API
  – Reserves the complete LUN (via a SCSI-2 reservation) so the host can update a file lock
  – Required several SCSI-2 commands
  – LUN-level locks affect adjacent hosts
• With the API
  – Commonly implemented as a vendor-unique SCSI opcode
  – Moving to the SCSI CAW (COMPARE AND WRITE) opcode in vSphere 5 (more standard)
  – Transfers two 512-byte sectors
  – Compares the first sector to the target LBA; on a match, writes the second sector, otherwise returns a miscompare
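The compare-and-write semantics above can be sketched with a few lines of Python (an in-memory dict stands in for the LUN; this is an illustration of the atomic test-and-set idea, not real SCSI plumbing):

```python
# Hedged sketch of SCSI COMPARE AND WRITE (CAW) as used by hardware-
# accelerated locking: two 512-byte sectors travel with one command,
# and the compare + write happen atomically on the array.
SECTOR = 512

def compare_and_write(lun, lba, payload):
    """payload = compare sector + write sector (2 * 512 bytes).
    Returns True on success, False on miscompare."""
    assert len(payload) == 2 * SECTOR
    compare, write = payload[:SECTOR], payload[SECTOR:]
    if lun[lba] != compare:
        return False          # miscompare: another host updated the lock first
    lun[lba] = write          # compare matched: lock record updated
    return True

lun = {7: b"\x00" * SECTOR}                     # free lock record at LBA 7
mine = b"\x01" * SECTOR
assert compare_and_write(lun, 7, b"\x00" * SECTOR + mine)      # lock acquired
assert not compare_and_write(lun, 7, b"\x00" * SECTOR + mine)  # second try fails
```

Because only the single lock sector is compared and written, adjacent hosts keep doing IO to the rest of the LUN, unlike a whole-LUN SCSI-2 reservation.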
Hardware-Accelerated Zero
[Diagram: without the API – repeated pairs of SCSI WRITE (zeros) and SCSI WRITE (data), many times over; with the API – one SCSI WRITE SAME (zeros x N) plus the SCSI WRITE (data)]
• Without the API
  – SCSI WRITE – many identical small blocks of zeroes moved from host to array for MANY VMware IO operations
  – Extra zeroes can be removed by EMC arrays after the fact by manually initiating "space reclaim" on the entire device
  – New guest IO to the VMDK is "pre-zeroed"
• With the API
  – SCSI WRITE SAME – one block of zeroes moved from host to array and repeatedly written
  – A thin-provisioned array skips the zeroes completely (pre "zero reclaim")
  – Moving to the SCSI UNMAP opcode in vSphere 5 (which is "more standard" and will always return blocks to the free pool)
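The win is easiest to see by counting what crosses the wire: without the offload, every zeroed block is a separate write; with WRITE SAME, one 512-byte pattern is sent once and the array repeats it. A toy sketch (simple counters, not real SCSI):

```python
# Hedged sketch: host-to-array traffic for zeroing N blocks,
# with and without the WRITE SAME offload described above.
SECTOR = 512

def zero_without_api(blocks):
    # One SCSI WRITE per zeroed block: every zero crosses the wire.
    return {"commands": blocks, "bytes_on_wire": blocks * SECTOR}

def zero_with_write_same(blocks):
    # One WRITE SAME carries a single 512-byte pattern; the array
    # replicates it `blocks` times (a thin array may skip it entirely).
    return {"commands": 1, "bytes_on_wire": SECTOR}

print(zero_without_api(2048))      # {'commands': 2048, 'bytes_on_wire': 1048576}
print(zero_with_write_same(2048))  # {'commands': 1, 'bytes_on_wire': 512}
```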
Hardware-Accelerated Copy
[Diagram: without the API – "let's Storage VMotion" or "give me a VM clone/deploy from template" becomes MANY SCSI READs (array to host) and SCSI WRITEs (host to array); with the API – a single SCSI EXTENDED COPY]
• Without the API
  – SCSI READ (data moved from array to host)
  – SCSI WRITE (data moved from host to array)
  – Repeat
  – Huge periods of large VMFS-level IO, done via millions of small-block operations
• With the API
  – Subset of the SCSI eXtended COPY opcode
  – Allows copies within or between LUs
  – Order-of-magnitude reduction in IO operations
  – Order-of-magnitude reduction in array IOps
• Use cases
  – Storage VMotion
  – VM creation from template
VAAI in vSphere 4.1 = Big impact
http://www.emc.com/collateral/hardware/white-papers/h8115-vmware-vstorage-vmax-wp.pdf
vSphere 5 – Thin Provision Stun
• Without the API
  – When a datastore cannot allocate in VMFS because free blocks in the LUN pool (in the array) are exhausted, VMs crash, snapshots fail, and other badness ensues
  – Not a problem with "thick" devices, as allocation is fixed
  – Thin LUNs can fail to deliver a write BEFORE the VMFS is full
  – Careful management at the VMware and array levels is needed
• With the API
  – Rather than erroring on the write, the array reports a new error message
  – On receiving this error, VMs are "stunned", giving the opportunity to expand the thin pool at the array level
[Diagram: VMDKs on a VMFS-5 extent over thin LUNs; VMFS allocations turn into SCSI WRITEs that succeed until the storage pool runs out of free blocks and a write returns an error]
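The "stun instead of crash" behavior can be simulated in a few lines (a toy pool and a Python exception standing in for the array's out-of-space error; purely illustrative, not the ESX code path):

```python
# Hedged simulation of thin-provision stun: instead of a failed write
# crashing the VM, the array's "out of space" error pauses (stuns) the VM
# until an admin grows the pool.
class ThinLunPool:
    def __init__(self, free_blocks):
        self.free = free_blocks

    def write(self, blocks):
        if blocks > self.free:
            # stand-in for the new SCSI error the array reports
            raise BlockingIOError("thin pool exhausted")
        self.free -= blocks

def guest_write(pool, blocks):
    try:
        pool.write(blocks)
        return "ok"
    except BlockingIOError:
        return "VM stunned: expand the thin pool, then resume"

pool = ThinLunPool(free_blocks=8)
print(guest_write(pool, 4))    # ok
print(guest_write(pool, 16))   # VM stunned: expand the thin pool, then resume
```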
vSphere 5 – Space Reclamation
• Without the API
  – When VMFS deletes a file, the file allocations are returned for use, and in some cases SCSI WRITE (zero) would zero out the blocks
  – If the blocks were zeroed, manual space reclamation at the device layer could help
• With the API
  – Rather than SCSI WRITE (zero), SCSI UNMAP is used
  – The array releases the blocks back to the free pool
  – Used anytime VMFS deletes (svMotion, Delete VM, Delete Snapshot, Delete)
  – Note that in vSphere 5, SCSI UNMAP is used in many other places where SCSI WRITE (zero) was previously used, and it depends on VMFS-5
[Diagram: without the API – CREATE FILE issues SCSI WRITE (data) and DELETE FILE issues SCSI WRITE (zero); with the API – DELETE FILE issues SCSI UNMAP, returning the blocks to the storage pool]
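The difference between zeroing and unmapping is what happens to the array's free pool. A toy model (illustrative only): writing zeros leaves the blocks allocated until a separate reclaim pass, while UNMAP hands them straight back.

```python
# Hedged sketch of a thin pool: write-zero keeps blocks allocated,
# UNMAP returns them to the free pool immediately.
class ThinPool:
    def __init__(self, total):
        self.total, self.allocated = total, {}

    def write(self, lba, data):
        self.allocated[lba] = data       # allocates on first write

    def write_zero(self, lba):
        self.allocated[lba] = b"\x00"    # zeros are still data: block stays used

    def unmap(self, lba):
        self.allocated.pop(lba, None)    # block goes back to the free pool

    @property
    def free(self):
        return self.total - len(self.allocated)

pool = ThinPool(total=100)
pool.write(1, b"vmdk"); pool.write(2, b"vmdk")
pool.write_zero(1)      # old-style delete: zeroed but still consuming a block
pool.unmap(2)           # VMFS-5 delete with the API: freed at once
print(pool.free)        # 99 -- only the unmapped block came back
```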
vSphere 5 – NFS Full Copy
• Without the API
  – Some NFS servers have the ability to create file-level replicas
  – This feature was not used for VMware operations, which were traditional host-based file-copy operations
  – Vendors would leverage it via vCenter plugins; for example, EMC exposed this array feature via the Virtual Storage Integrator Unified Storage Module
• With the API
  – Implemented via a NAS vendor plugin; used by vSphere for clone and deploy-from-template
  – Uses the EMC VNX OE for File file-version feature
  – Somewhat analogous to the block XCOPY hardware offload
  – NOTE – not used during svMotion
[Diagram: without the API – "let's clone this VM" becomes MANY file reads and writes between the ESX host and the NFS server; with the API – a single request to the NFS server to create a copy (snap, clone, version) of FOO.VMDK as FOO-COPY.VMDK]
vSphere 5 – NFS Extended Stats
• Without the API
  – Unlike with VMFS, with NFS datastores vSphere does not control the filesystem itself
  – With the vSphere 4.x client, only basic file and filesystem attributes were used
  – This led to challenges with managing space when thin VMDKs were used; administrators had no visibility into thin state and oversubscription of either datastores or VMDKs
    • Think: with thin LUNs under VMFS, you could at least see details on thin VMDKs
• With the API
  – Implemented via a NAS vendor plugin
  – The NFS client reads extended file/filesystem details
[Diagram: the ESX host asks "just HOW much space does this file take?"; basic stats answer "Filesize = 100GB"; extended stats answer "Filesize = 100GB, but it's a sparse file and has 24GB of allocations in the filesystem. It is deduped – so it's only REALLY using 5GB"]
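The gap the extended-stats API closes – logical size vs. actual allocation – is the same one you can see locally with a sparse file. A small demonstration (a sparse local file stands in for a thin VMDK; run on a POSIX filesystem that supports sparse files):

```python
# Sketch of the visibility gap described above: the "basic stats" view
# (st_size) vs. the actual allocation (st_blocks * 512) of a sparse file.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "foo.vmdk")
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024 - 1)   # logical size: 100 MB...
    f.write(b"\0")                   # ...but only one block is allocated

st = os.stat(path)
print("reported size:", st.st_size)        # what a basic client sees (104857600)
print("allocated:", st.st_blocks * 512)    # what extended stats would reveal
```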
vSphere 5 – NFS Reserve Space
• Without the API
  – There was no way on NFS datastores to do the equivalent of an "eagerzeroed thick" VMDK (needed for WSFC) or a "zeroed thick" VMDK
• With the API
  – Implemented via a NAS vendor plugin
  – Reserves the complete space for a VMDK on an NFS datastore
2. vSphere Storage APIs for Storage Awareness (aka "VASA")
What Is VASA?
VASA is an extension of the vSphere Storage APIs – vCenter-based extensions. It allows storage arrays to integrate with vCenter for management functionality via server-side plug-ins, or Vendor Providers.
It allows a vCenter administrator to be aware of the topology, capabilities, and state of the physical storage devices available to the cluster. Think of it as saying: "this datastore is protected with RAID 5, replicated with a 10-minute RPO, snapshotted every 15 minutes, and is compressed and deduplicated".
VASA enables several features:
• It delivers system-defined (array-defined) capabilities that enable Profile-Driven Storage
• It also provides array-internal information that helps several Storage DRS use cases work optimally with various arrays
Storage Policy
• Once the VASA Provider has been successfully added to vCenter, VM Storage Profiles display the storage capabilities from the Vendor Provider
• For EMC in Q3, this is provided for VNX and VMAX via Solutions Enabler for block storage; NFS will require user-defined capabilities
• In the future, VNX will have a native provider and will gain NFS system-defined profiles
• Isilon VASA support is targeted for Q4
Profile-Driven Provisioning
Today:
1. Identify requirements
2. Find the optimal datastore
3. Create VM
4. Periodically check compliance

Storage DRS:
1. Initial setup: identify storage characteristics, group datastores
2. Identify requirements
3. Create VM
4. Periodically check compliance

Storage DRS + Profile-Driven Storage:
1. Initial setup: discover storage characteristics, group datastores
2. Select VM Storage Profile
3. Create VM
4. Periodically check compliance
Storage Profile During Provisioning
• By selecting a VM Storage Profile, datastores are now split into Compatible and Incompatible
• The Celerra_NFS datastore is the only datastore which meets the GOLD profile (user-defined) requirements
VM Storage Profile Compliance
3. VMFS-5
VMFS-5 Versus VMFS-3 Feature Comparison
NOTE: The LUN count limit for a vSphere 5 cluster is still 256, but with larger datastores becoming the norm, this should be less of an issue…
VMFS-3 to VMFS-5 Upgrade
• The upgrade to VMFS-5 is displayed in the vSphere Client under the Configuration → Storage view
• It is also displayed in the Datastores → Configuration view
• The upgrade is non-disruptive
NOTE: Upgrading VMFS-3 to VMFS-5 non-disruptively doesn't enable all the new goodness:
• The >2TB volume size works post-upgrade
• Other features need a net-new filesystem
Recommendation:
• Create a new datastore
• Storage vMotion to the new datastore
4. Storage DRS
What Does Storage DRS Solve?
• Without Storage DRS:
  – Identify the datastore with the most disk space and lowest latency
  – Validate which virtual machines are placed on the datastore and ensure there are no conflicts
  – Create the virtual machine and hope for the best
• With Storage DRS:
  – Automatic selection of the best placement for your VM
  – An advanced balancing mechanism to avoid storage performance bottlenecks or "out of space" problems
  – VM or VMDK affinity rules
What Does Storage DRS Provide?
• Storage DRS provides the following:
  1. Initial placement of VMs and VMDKs based on available space and I/O capacity
  2. Load balancing between datastores in a datastore cluster via Storage vMotion, based on storage space utilization ("capacity")
  3. Load balancing via Storage vMotion based on I/O metrics (latency)
• Storage DRS also includes affinity/anti-affinity rules for VMs and VMDKs; affinity rules cannot be violated during normal operations
Datastore Cluster
• A group of datastores is called a "datastore cluster"
• Think:
  – Datastore Cluster - Storage DRS = simply a group of datastores (like a datastore folder)
  – Datastore Cluster + Storage DRS = a resource pool analogous to a DRS cluster
  – Datastore Cluster + Storage DRS + Profile-Driven Storage = nirvana
[Diagram: four 500GB datastores grouped into one 2TB datastore cluster]
Storage DRS – Initial Placement
• Initial placement – VM/VMDK create/clone/relocate
  – When creating a VM, you select a datastore cluster rather than an individual datastore; SDRS recommends a datastore based on space utilization and I/O load
  – By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK affinity rule), but you can choose to have VMDKs assigned to different datastore clusters
[Diagram: a datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB, and 275GB available]
Storage DRS Operations – IO Thresholds
• SDRS triggers action on either capacity and/or latency
  – Capacity stats are constantly gathered by vCenter; default threshold 80%
  – I/O load trend is evaluated (by default) every 8 hours based on the past day's history; default threshold 15 ms
• Storage DRS will do a cost/benefit analysis!
• For latency, Storage DRS leverages Storage I/O Control functionality
When using EMC FAST VP, use SDRS, but disable the I/O metric here. This combination gives you the simplicity benefits of SDRS for automated placement and capacity balancing, but adds:
• The economic and performance benefits of automated tiering across SSD, FC, SAS, and SATA
• 10x (VNX) and 100x (VMAX) higher granularity (sub-VMDK)
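The two triggers and the FAST VP recommendation above reduce to a simple decision rule. A hedged sketch (the function shape and names are made up for illustration; the real SDRS also weighs cost vs. benefit before acting):

```python
# Illustrative sketch of the SDRS triggers described above.
# Defaults from the slide: 80% space utilization, 15 ms I/O latency.
SPACE_THRESHOLD = 0.80
LATENCY_THRESHOLD_MS = 15.0

def needs_rebalance(used_gb, capacity_gb, avg_latency_ms, io_metric_enabled=True):
    if used_gb / capacity_gb > SPACE_THRESHOLD:
        return True              # capacity trigger
    # With EMC FAST VP, the slide recommends disabling the I/O metric
    # and letting the array handle performance tiering instead.
    if io_metric_enabled and avg_latency_ms > LATENCY_THRESHOLD_MS:
        return True              # latency trigger (via Storage I/O Control stats)
    return False

print(needs_rebalance(450, 500, 5))                            # True (capacity)
print(needs_rebalance(300, 500, 20, io_metric_enabled=False))  # False (FAST VP mode)
```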
Storage DRS – Affinity/Anti-Affinity
VMDK affinity (on by default for all VMs):
• Keep a virtual machine's VMDKs together on the same datastore
• Maximize VM availability when all disks are needed in order to run

VMDK anti-affinity:
• Keep a VM's VMDKs on different datastores
• Useful for separating the log and data disks of database VMs
• Can select all or a subset of a VM's disks

VM anti-affinity:
• Keep VMs on different datastores
• Similar to DRS anti-affinity rules
• Maximize availability of a set of redundant VMs
Storage DRS/EMC Considerations
Feature/Product/Use Case – SDRS Initial Placement / SDRS Migration:
• VMware Linked Clones (View) – Not supported
• VMware Snapshots – Supported
• VMware SRM – Not supported
• RDM pointer files – Supported
• RDM (physical or virtual) – Not applicable
• Pre-vSphere 5 hosts – Not supported
• NFS datastores – Supported
• Distributed Virtual Volumes (VPLEX) – Not supported
• Array-based replication (SRDF, RecoverPoint, etc.) – Supported; use Manual Mode only, as unanticipated Storage vMotions will increase WAN utilization
• Array-based snapshots – Supported; use Manual Mode only, as unanticipated Storage vMotions will increase space consumed
• Array-based compression/deduplication – Supported; use Manual Mode only, as unanticipated Storage vMotions will temporarily increase space consumed
• Array-based thin provisioning – Supported; migration supported on VASA-enabled arrays only
• Array-based auto-tiering (EMC FAST VP) – Supported; use SDRS for capacity load-balancing and FAST VP for IO load-balancing; turn off I/O metrics in SDRS, but enable SIOC on datastores to handle spikes of I/O contention
• Array-based mega-cache (EMC FAST Cache) – Supported
5. Storage vMotion Enhancements
Storage vMotion – Enhancements
• Storage vMotion in vSphere 5 works with virtual machines that have snapshots – this means coexistence with other VMware products and features such as VDR and vSphere Replication
• Storage vMotion will support the relocation of linked clones
• Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for Storage Maintenance Mode and storage load balancing (space or performance)
Storage vMotion – Enhancements
• In vSphere 4.1, Storage vMotion uses the Changed Block Tracking (CBT) method to copy disk blocks between source and destination – a huge improvement over vSphere 4 (which used snapshots)
• The main challenge in this approach is that the disk pre-copy phase can take a while to converge, and can sometimes result in Storage vMotion failures if the VM is running a very I/O-intensive load
• Mirroring I/O between the source and destination disks has significant gains compared to the iterative disk pre-copy mechanism
• In vSphere 5.0, Storage vMotion uses a new mirroring architecture to provide the following advantages over previous versions:
  – Guarantees migration success even when facing a slower destination
  – More predictable (and shorter) migration time
6. All Paths Down (APD) and Permanent Device Loss (PDL)
APD vs. PDL: Definition of Terms
• APD ("All Paths Down") is a generic term for an unexplained condition where no usable paths exist to a given device, or a device stops responding to outstanding requests
  – APD can be transient or permanent
  – The important point is that the system has no information about the possible duration; the device may resume normal operation at any time
APD vs. PDL: Definition of Terms
• PDL ("Permanent Device Loss") is indicated by the storage target using SCSI sense codes
  – Examples would be a return of 5/25h/00h (ILLEGAL REQUEST; LUN NOT SUPPORTED) or 4/3Eh/01h (HARDWARE ERROR; LUN FAILURE)
  – Absent specific indications of PDL, a non-responsive device is considered to be in APD, not PDL
  – Once a device is in PDL, no recovery is expected
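The APD-vs-PDL decision reduces to "did the target return one of the known PDL sense codes?" A minimal sketch (the sense codes are the two cited on the slide; the tuple encoding of sense key / ASC / ASCQ is an assumption for illustration, not an ESX data structure):

```python
# Illustrative classifier for the rule above: recognized PDL sense codes
# mean PDL; anything else (including no response at all) is treated as APD.
PDL_SENSE = {
    (0x5, 0x25, 0x00): "ILLEGAL REQUEST; LUN NOT SUPPORTED",
    (0x4, 0x3E, 0x01): "HARDWARE ERROR; LUN FAILURE",
}

def classify(sense):
    """sense: (key, asc, ascq) tuple, or None if the device never answered."""
    return "PDL" if sense in PDL_SENSE else "APD"

print(classify((0x5, 0x25, 0x00)))  # PDL
print(classify(None))               # APD -- silence carries no duration info
```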
vSphere 5.0: PDL
• ESX 5.0 introduces a new device state: PERMANENTLY LOST
  – When all paths to a device indicate PDL, the device state is changed to VMK_SCSI_DEVICE_STATE_PERM_LOSS
  – PDL can be planned or unplanned
  – Planned PDL could be deliberately removing a LUN that's no longer used
  – Unplanned PDL could be, for example, the result of inadvertently changing a LUN's masking to block access by a host
  – Once a device is determined to be in PDL, no recovery is expected
  – Restoring a device that's in PDL requires terminating all running VMs and closing all files on the device; if a device in PDL is encountered on an HBA rescan (with the 'delete' option specified, if done manually), it will be removed
vSphere 5.0: PDL
• Planned PDL is easy:
  – Step 1 – Unmount the filesystem
  – Step 2 – Detach the device
7. Native Software FCoE Initiator
Introduction
• The FCoE adapters that VMware supports generally fall into two categories: hardware FCoE adapters, and software FCoE adapters that use an FCoE-capable NIC
  – Note – at GA, the primary FCoE-capable NIC will be the Intel X520 series
• Note: hardware FCoE adapters have been supported since vSphere 4.0
• EMC support for native FCoE exists on both the VNX and the VMAX
Software FCoE Adapters
• A software FCoE adapter is software code that performs some of the FCoE processing
• Unlike a hardware FCoE adapter, the software adapter needs to be activated, similar to software iSCSI
• For EMC – check E-Lab for end-to-end support statements
8. NFS Client DNS Round-Robin
What Has Changed?
• A minor change in the NFS v3 client – not NFS v4, NFS v4.1, or pNFS
• If a domain name is specified in the NFS datastore configuration:
  – A DNS lookup will occur on every ESX boot
  – The client will honor DNS round-robin
• This can be useful to distribute NFS client logins for a datastore across a vSphere 5 cluster (not a single host) to different IP addresses
  – Particularly useful for scale-out storage (example: EMC Isilon)
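The mechanism above is ordinary round-robin DNS: the NAS name resolves to several A records, and each host's boot-time lookup may see them in a different order. A hedged sketch (the hostname `nas.example.com` is hypothetical; the runnable demo uses `localhost`):

```python
# Hedged sketch: when an NFS datastore is mounted by hostname, each ESX
# host's boot-time DNS lookup may land on a different A record, spreading
# NFS sessions across a scale-out cluster's nodes.
import socket

def pick_nfs_server(hostname):
    # gethostbyname_ex returns every A record for the name; a round-robin
    # DNS server rotates their order, so the "first" address differs
    # from lookup to lookup (and host to host).
    _, _, addresses = socket.gethostbyname_ex(hostname)
    return addresses[0]

# For a scale-out NAS, "nas.example.com" would return one address per node.
print(pick_nfs_server("localhost"))
```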