



VCP 6.5 Study Notes – Exam Number 2V0-622
Note: Review/Download the VMware Exam Guide for updates made by the VMware Certification Team. Reference: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/certification/vmw-vcp65-dcv-2v0-622-guide.pdf

Configure and Administer vSphere Security

• User/Group Management: you can export the displayed list of users to a file (CSV), either all items or only the selected users/groups.

• Global permissions can span multiple vCenter Servers in the same SSO domain. • Propagation of permissions is enabled by default. • Permissions hierarchy

o Root object (Global Permissions Level) § Content Library § vCenter Server (vCenter Server Instance Level) § Tag Category

• Inherited permissions are not always enforced. Permissions applied at the ‘Object’ level always win/override inherited permissions.

• Permissions applied on a child object always override permissions that are applied on the parent object. o If no explicit (per-object) permissions are applied, group permissions will be enforced. o If permissions are applied on an object, those permissions supersede all others.

• Permissions applied at the Global Level are propagated to all objects. • Permissions applied at the vCenter Level apply only to inventory objects within that vCenter (excluding tags / content library). • Roles

o Default roles cannot be modified or deleted o Sample roles can be cloned, modified and removed. o Custom roles can be created from scratch or cloned from existing roles.

• System Roles o Administrator role o No cryptography administrator role o No access role o Read-only role

• Sample Roles o VM power user role (Power on + Snapshot creation) o VM user role (Power on) o Resource pool administrator role o VMware Consolidated Backup user role o Datastore consumer role o Tagging admin role o Network administrator role o Content library administrator role

• Identity sources for vCenter: o Active Directory (Integrated Windows Authentication) - Users will be authenticated automatically using the

client integration plugin. § Requires only AD Domain Join. This is also needed for the enhanced integration plugin.

o Active Directory as an LDAP server (does not require AD join) o OpenLDAP o Local OS - Users will be authenticated using the OS of the SSO server.

§ Users are defined in the local OS: the SAM database on Windows, or /etc/passwd and /etc/shadow on the appliance. • Where possible, assign permissions to groups instead of users, and follow the principle of least privilege (PoLP). • Tagging permissions:

o Only global permissions or permissions assigned to the Tag object apply. o If you grant permissions to a user on a vCenter Server inventory object, such as a VM, that user cannot

automatically perform tag operations on that object. • Use folders to group objects; this is also helpful for setting permission boundaries on a set of clusters or hosts. • Use the 'No Access' role to mask specific areas of the hierarchy; this restricts users who are configured with the 'No Access' role.

• License propagation happens to all vCenter Servers linked to the same PSC or SSO Domain.

o License propagation happens even if the user doesn't have access to the remote vCenter Server. • User directory validation

o User directory timeout - the maximum amount of time, in seconds, that SSO allows a search to run.


o Query limit (enabled/disabled checkbox) o Query limit size - the maximum number of users and groups that vCenter displays in the selected users or

groups. Set 0 or untick Query limit for all users to appear (default 5000) o Validation (enabled/disabled checkbox) o Validation period - specifies how often, in minutes, validation is performed (default 1440 minutes / 24 hours)

• Roles and Privileges for common tasks o Create VM:

§ Virtual machine.Inventory.Create new § Virtual machine.Configuration.Add new disk § Virtual machine.Configuration.Add existing disk § Virtual machine.Configuration.Raw device § Resource.Assign virtual machine to resource pool § Datastore.Allocate space § Network.Assign network § Virtual machine.Interaction.Power on

o Power on VM § Virtual machine.Interaction.Power on (on the target folder / data center)

o Take a snapshot § Virtual machine.Snapshot management.Create snapshot

o Install a guest operating system § Virtual machine.Interaction.Answer question § Virtual machine.Interaction.Console interaction § Virtual machine.Interaction.Device connection § Virtual machine.Interaction.Power off § Virtual machine.Interaction.Power on § Virtual machine.Interaction.Reset § Virtual machine.Interaction.Configure CD media § Virtual machine.Interaction.VMware Tools install § Datastore.Browse datastore (if they need to browse to the ISO) § Datastore.Low level file operations (to upload the ISO)

• vMotion encryption is only available with Enterprise Plus licensing. o No KMS required. o vMotion encryption is enabled per VM only. o There are no certificates to manage; a one-time 256-bit key is generated for each vMotion operation.

• UEFI - unified extensible firmware interface, is a replacement for the traditional BIOS firmware. o Secure boot validates the digital signature of the operating system and its boot loader (verifies drivers and

applications) • ESXi 6.5 supports secure boot on both ESXi and VMs.

o VM requirements: § Virtual hardware version 13 or later § EFI firmware in the VM boot options § VMware tools version 10.1 or later § A guest OS that supports secure boot (W8, 2012, ESXi 6.5, Photon OS, RHEL, CentOS, Ubuntu) § Edit VM -> VM options -> Boot Options: Firmware Set to EFI + Select ‘Secure Boot’ Checkbox (EFI boot

only). § If the boot mode is already set to EFI, just tick the 'Secure Boot' checkbox.

§ Note: You cannot upgrade a VM that uses BIOS boot to a VM that uses UEFI boot. If a VM already uses UEFI boot and the OS supports UEFI secure boot, you can simply enable secure boot.

• For ESXi, secure boot verifies VIBs using their digital signatures. During boot, each VIB is verified against a firmware-based certificate.
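To confirm a host can actually run with secure boot before enabling it in the firmware, VMware's 6.5 documentation describes a validation script shipped with ESXi; a minimal sketch (path as documented; output wording may vary by build):
 /usr/lib/vmware/secureboot/bin/secureBoot.py -c # checks whether all installed VIBs would pass signature verification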

• ESXi services start up policy o Start and stop with host o Start and stop manually o Start and stop with port usage

• ESXi Roles o Administrator role o Read-only role o No access role

• ESXi local users o root user, built-in can be removed but make sure you have a backup root user.


o dcui user, the primary purpose of this user is to configure hosts for lockdown mode from the DCUI. o vpxuser user, this is used by vCenter to manage ESXi hosts after the host has been added to the vCenter inventory

(note this is an admin role). • Lockdown mode

o Lockdown mode disables direct access to ESXi. The host will only be accessible via vCenter Server or, depending on the lockdown mode used, via the DCUI (physical console).

• Lockdown mode options: o Disabled. o Normal, if this option is used the DCUI is not blocked, but the host UI, ESXi Shell and SSH are disabled.

§ users/solution users added to the exception list are not restricted. § you can only add exception users via vCenter (no function exists within the DCUI; you can only turn it

on/off…assuming you are using Normal mode). o Strict, if this option is used all local services are disabled including the DCUI. ESXi is only accessible via the

vCenter Server. • Lockdown mode exception users

o Note: this only applies if 'Normal' mode is used. Strict = no DCUI access; SSH and Shell are stopped! o Users/solutions added to the exception users list will be excluded from lockdown mode (Normal mode only) o Exception users are local or Active Directory users (only service accounts should be added).

• If there is a catastrophic failure, the DCUI.Access advanced option allows you to exit lockdown mode when you cannot access the host from vCenter.

o You can add users to the list by editing the advanced settings for the host from the Web Client. o Consider removing the root user and adding an alternate one (break-glass local user).

§ Must be a local user; you can add more (comma separated). o Adding users to the DCUI.Access option grants 'UNCONDITIONAL' access to the DCUI, even if they don't have an

administrator role on the host. • Disable MOB on ESXi hosts: the MOB in vSphere 6.0 is *disabled by default* and should not be enabled on production ESXi hosts. Advanced parameter: Config.HostAgent.plugins.solo.enableMob (see the sketch below).
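The supported way to toggle this is the host's Advanced System Settings in the Web Client; as a hedged sketch, the same option can be inspected and set from the ESXi Shell with vim-cmd (the advopt sub-commands are assumed from vSphere 6.x hardening guidance):
 vim-cmd hostsvc/advopt/view Config.HostAgent.plugins.solo.enableMob # view the current value (assumed syntax)
 vim-cmd hostsvc/advopt/update Config.HostAgent.plugins.solo.enableMob bool false # ensure the MOB stays disabled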

• The default role for new users is 'No Access'

o starting with vSphere 6.0, the local administrator of the PSC (root for the VCSA, or the Windows administrator if deployed to Windows) does not have full administrative privileges over vCenter.

• VMware vSphere 6.x supports the following certificate modes: o VMware Certificate Authority - the PSC acts as a top-level CA or intermediate CA. o Custom Certificate Authority - a third-party CA can be used. o Thumbprint Mode - older mode (vSphere 5.5); only thumbprints are validated.

• PSC / vCenter Architecture o PSC

§ SSO § Custom roles § Certificate Authority § VMware Certificate Service § VMware identity management service § VMware license service § Tags § VAMI (if embedded with VC)

o vCenter § vCenter Server § Inventory Service § Profile-Driven storage § HTML 5 vSphere Web Client § Auto Deploy § Content Library § Syslog collector § ESXi Dump collector § Optional: Embedded VUM § Optional: Embedded DB vPostgresSQL

• vCenter can be deployed to Windows or as the VCSA • The PSC uses limited resources: 2 vCPU / 4 GB RAM • In vSphere 6.5, ELM is not supported with PSC/VC deployed in embedded mode *note this changes in later releases of

vSphere. • Supported deployment topologies. Ref: https://kb.vmware.com/s/article/2147672

o note you can mix virtual/physical PSCs & vCenters between sites with ELM, provided the PSCs use an external deployment.


o vCenter HA does not support ELM and does not support PSC replication; this limits its use to non-SSO-federated solutions.

• Scalability (database considerations) o Embedded DB for VCSA : 2,000 hosts or 25,000 VMs o External DB for VCSA: 2,000 hosts or 25,000 VMs o Windows Embedded: 20 hosts, 200 VMs o Windows External DB: 2,000 hosts or 25,000 VMs

• The following authentication methods are supported: o multi-factor authentication (MFA) o two-factor authentication (2FA)

§ Starting with vSphere 6.0 U2, it is possible to use two-factor authentication as follows: § Smart card (CAC) - Common Access Card § RSA SecurID token.

o Note: vCenter SSO does not support RADIUS, only RSA SecurID. • Recommended deployment is two PSCs per site (multi-PSC deployments use a load balancer + ring replication topology) • Only writable domain controllers can be used for AD domain joins.

o For Windows, just join the Windows machine to the domain. § Unlike previous versions, the local Windows admin is not added to the SSO administrators.

o For the VCSA, join the PSC or the vCenter Server using the Web Client. § Enter domain credentials in UPN format. § Reboot the node after joining the domain (PSC or vCenter). § Join the domain first before attempting to use IWA (Integrated Windows Authentication) § Note: to prevent authentication conflicts, don't use the same username/password for OpenLDAP and Microsoft

AD. • Manage the appliances:

o PSC, https://PSC:5480 (Embedded + VCSA), login with root o PSC, https://PSC/PSC (Embedded + External PSC), login with root

§ Note smart card authentication configuration is only possible using the /PSC UI. o VCSA, https://vCenter/vsphere-client

• Configure and Manage KMS Encryption. o In vSphere 6.5 if you want to encrypt VM disks you need to deploy a KMS server (or cluster). o KMS is mandatory for VM disk encryption, note KMS is NOT needed for vMotion encryption.

• Enable/Disable VM disk encryption. o Once KMS deployed, encryption is controlled in the VM storage policies section (Home -> VM Storage Policies ->

VM Encryption Policy) • Encryption Recommendations:

o If the PSC or vCenter are implemented as VMs on the same platform, don't encrypt them (avoid circular

o Never edit the .vmx files or .vmdk files for encrypted VMs other they will become unrecoverable. • Privileges required to encrypt VM:

o Cryptographic operations.Encrypt new o Cryptographic operations.Decrypt o Cryptographic operations.Register host

• To prevent VMware Tools installs, disable CD/DVD drives or withhold the following privilege: Virtual machine.Interaction.VMware Tools install.

o Note this does not prevent someone from running the installer files locally or from a network share. • Copy/paste functions using the VM console are disabled by default.

o isolation.tools.paste.disable TRUE (default setting); set to FALSE if you want to allow paste. o isolation.tools.copy.disable TRUE (default setting); set to FALSE if you want to allow copy. o Restart the VM for changes to take effect. (See the .vmx sketch below.)
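Since these are per-VM advanced settings, they end up as plain key/value pairs in the .vmx file; a minimal sketch of the lines you would add (only if you deliberately want to allow copy/paste for this VM):
 isolation.tools.copy.disable = "FALSE"
 isolation.tools.paste.disable = "FALSE"
The same keys can be added from the Web Client under Edit Settings -> VM Options -> Advanced -> Configuration Parameters.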

• To prevent DoS attacks: o Harden the VM o Control the size of the VMX file: the default limit is 1 MB, but it can be changed with the tools.setInfo.sizeLimit VM advanced

option. o Remove any unnecessary hardware devices from the VM. o Prevent virtual disk shrinking (this can be done from within the guest OS).

§ Add the following to VM Options -> Advanced Config (when disable = TRUE, the VM disks cannot be shrunk from within the guest, e.g. when the datastore runs out of space):

§ isolation.tools.diskWiper.disable = TRUE § isolation.tools.diskShrink.disable = TRUE

o Prevent users from sending informational messages to host. § Add the following:


§ isolation.tools.setinfo.disable = TRUE o VM-VM communication on the same host: note VMCI is disabled by default. o Other forms of VM hardening - prevent hot-plug of devices:

§ devices.hotplug set to FALSE (this disables device hot plug). • VM network security policies (vSS Defaults):

o Promiscuous mode (Reject) o MAC Address Changes (Accept) o Forged Transmits (Accept)

§ If hardening, set all of these to Reject, but check the impact on certain types of workloads (traffic capture, Microsoft NLB) - these are all set to Reject by default on a vDS.

• vMotion Encryption: encrypted vMotion traffic is configured **per VM** only. When a VM is migrated, a one-time 256-bit key is created randomly by the vCenter Server (it does NOT use the KMS).

o Edit VM-> VM Options -> Encryption -> Drop down menu § Disabled, does not use encryption § Opportunistic, will try to use it if destination host supports it (ESXi 6.5 or newer) § Required, enforced. vMotion will fail if destination does NOT support it.

o Note: If the VM is encrypted (storage policy), vMotion encryption is enforced. o Note: Migration across vCenter Server systems is NOT supported for encrypted VMs. o Note: For encrypted VMs, the data moved by Storage vMotion is already encrypted on disk; for unencrypted VMs the disks are transferred as they are (Storage vMotion itself has no encryption option). o Note: When you encrypt a virtual machine, the virtual machine keeps a record of the current encrypted vSphere

vMotion setting (vMotion Encryption Required). If you later disable encryption for the virtual machine, the encrypted vMotion setting remains at ‘Required' until you change the setting explicitly.

• ESXi VIB acceptance levels (ESXi Host -> Configure -> Security Profile) - esxcli software acceptance get o VMware Certified (most stringent requirements; VMware fully supports) o VMware Accepted (VMware verifies only; partner provides support) o Partner Supported (partner performs testing and provides support) o Community Supported (no VMware-approved testing; not supported by VMware or partners) o Note: a VIB can be installed on a host only if the VIB's acceptance level is the same as, or more trusted than, the host's acceptance level.
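For reference, checking and changing the host acceptance level from the shell uses the esxcli namespace already quoted above (level names are case-sensitive):
 esxcli software acceptance get
 esxcli software acceptance set --level=PartnerSupported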

• ESXi Shell Timeout: o Web client -> Host -> Configure Tab -> System : Advanced System Settings:

§ UserVars.ESXiShellTimeOut: time in seconds before local and remote shell access is automatically disabled (takes effect after a service restart).

§ UserVars.ESXiShellInteractiveTimeOut: time in seconds before an interactive shell session is automatically logged out (requires a service restart).

o Can also be set via the DCUI.
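Because both values are ordinary advanced system settings, they can also be set from the shell; a minimal esxcli sketch (600/900 seconds are example values, not recommendations):
 esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 600
 esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900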

• ESXi Password Complexity o /etc/pam.d/passwd : retry=3 min=disabled,disabled,disabled,7,7 (default setting). • ESXi Pass Phrase (the third field: 'disabled,disabled,<pass phrase length>,7,7')

o pass phrases are disabled by default o edit the /etc/pam.d/passwd file o adding 'retry=3 min=disabled,disabled,16,7,7 passphrase=4' (pass phrases of at least 16 characters, with a minimum of 4 words separated by spaces).
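For context, these options belong to the pam_passwdqc line in that file; it looks roughly like the following (module path assumed from older ESXi builds), and on ESXi 6.x the same option string can alternatively be set via the Security.PasswordQualityControl advanced system setting:
 password requisite /lib/security/$ISA/pam_passwdqc.so retry=3 min=disabled,disabled,16,7,7 passphrase=4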

• ESXi certificate store:

o /etc/vmware/ssl § rui.crt § rui.key

• You can download the vCenter root certificate and CRL from the vCenter landing page (https://vcenter) • Configure a Host to use Active Directory:

o Hosts provisioned with Auto Deploy cannot store AD credentials. You can use the vSphere Authentication Proxy to join the host to an AD domain, because a trust chain exists between the vSphere Authentication Proxy and the host.

§ The vSphere Authentication Proxy runs as a service on vCenter (stopped by default). - You need to start it manually if you want to use it.

o An ESXi host can be placed in an OU as follows: name.tld/container/ou, e.g. domain.com/computers/vmware • ESXi log file locations

o vmkernel /var/log/vmkernel.log (records activities related to VMs and ESXi) o vmkwarning /var/log/vmkwarning.log (records VM/ESXi warnings) o ESXi host agent log /var/log/hostd.log (information about the agent that manages VMs on the ESXi host) o vCenter agent log /var/log/vpxa.log (info regarding the agent that communicates with vCenter) o Shell log /var/log/shell.log (commands entered on the ESXi host) o Authentication /var/log/auth.log (events concerning authentication)


• DCUI Menu Tree o F2 - Customise system & view logs o F12 Shutdown / Restart o System Customisation

§ Configure password § Configure Lockdown mode -> you can enable it. *you cannot add any exception users. § Configure Management Network

§ ipv4 configuration § ipv6 configuration § DNS configuration § Custom DNS suffixes

§ Restart Management Network -> F11 § Test Management Network

§ Allows you to ping 3 specified addresses and check if hostname resolves. § Network Restore Options

§ Restore Network Settings -> This will revert all network settings. Restoring network settings will stop all the running virtual machines on the host.

§ Restore Standard Switch -> the management network and associated uplinks are moved onto a new vSwitch in order to restore network connectivity to vCenter. All interfaces apart from the management interface will be disabled.

§ Restore vDS -> Reconfigure properties of a misconfigured vDS. A new vDS host local port will be created with the reconfigured properties and management will be moved to this new port.

§ Configure Keyboard § Troubleshooting Options

§ Enable ESXi Shell § Enable SSH § Modify ESXi Shell and SSH Timeouts § Modify DCUI idle timeout § Restart Management Agents

§ View System Logs (press Q to return to main menu, after reviewing logs) § 1. Syslog § 2. VMkernel § 3. Config § 4. Management Agent (hostd) § 5. VirtualCenter Agent (vpxa) § 6. VMware ESXi Observation log (vobd)

§ View Support Information -> view serial number, SSL thumbprint and keys. § Reset System Configuration -> resets all system params. (F11 to confirm!)

§ root password will be reset to blank. § requires a reboot of the host.

Configure and Administer vSphere 6.x Networking

• Adding/Removing ESXi hosts from a vSphere Distributed Switch. First:

o Create all needed distributed port groups for all VMs and for VMkernel networking if you intend to migrate. o Add enough uplinks on the vDS to match all the physical NICs (for each ESXi host).

• dvPortgroups provide connectivity to the VMs and/or VMkernel interfaces. • When creating a new dvSwitch

o Set number of uplinks (default 4) o Enable/Disable NIOC (NIOC enabled by default) o Select checkbox to create default portgroup (provide a dvPortgroup name)

o Edit dvPortgroup

§ Port binding: § Static, a port is assigned statically when the resource is connected to the vDS (e.g. a VM). § Dynamic, a port is assigned dynamically when the VM is powered on and

connected to the vDS (deprecated since 5.0).


§ Ephemeral - no port binding for VMs. § Port Allocation (by default 8 ports are allocated); port allocation defines how ports are pre-assigned.

§ Elastic (new ports added as needed) § Fixed (no new ports are added; requires admin intervention)

§ Number of dvPorts (used by VMs) § Network resource pool (default)

• Make sure you migrate VMs to another portgroup before removing a dvPortgroup. • LACP (Link Aggregation Control Protocol) is a standard that bundles several physical ports together to form a single logical

channel. • LACP is fully supported with vSphere 5.1 and later, but only with a vDS.

o A vSS supports only static link aggregation (EtherChannel), with the IP hash teaming policy. • LACP must be configured correctly both on the physical switch and on the ESXi hosts.

o Each LAG has two or more ports o You can create up to 64 LAGs. o Minimum required = 2 ports o LAG mode options:

§ Passive, in this mode LAG ports do not initiate LACP negotiations § Active, in this mode LAG ports initiate negotiations with the LACP port channel on the physical switch

side. o LAG load balancing options:

§ Source and Destination IP address TCP/UDP port and VLAN § Source and Destination IP address and VLAN § Source and Destination MAC address § Source and Destination TCP/UDP port § Source port ID § VLAN

• LAG migration: o 1. Create LAG o 2. Set the LAG as a standby uplink on the distributed port groups o 3. Reassign physical network adapters of the hosts to the LAG ports o 4. Set the LAG to be only active uplink on the distributed port groups.

• vDS Security Policies: by default all policies are set to 'Reject' o Promiscuous Mode (Reject) o MAC address changes (Reject) o Forged Transmits (Reject)

• Port blocking policies can be enabled both at the portgroup level and at the port level. o To disable all ports on a specific dvPortgroup, select Misc -> Block All Ports -> Yes. o To block a specific port, define an override (enable this at the dvPortgroup level).

• Configure Load Balancing and Failover Polices o Route based on originating virtual port ID - uplink wont change for VM, only uses single uplinks worth of

bandwidth. o Route based on source MAC hash - uses an algorithm based on the VM's MAC address; only uses a single uplink's worth of

bandwidth. o Route based on IP hash - hash algorithm uses source/destination IP address for each packet. In this case one VM

can use more physical uplinks, but this requires the physical switches to support this config, e.g. EtherChannel mode. o Route based on physical NIC load - the vDS checks the actual load of the uplinks; if an uplink's load exceeds 75% for

30 seconds, the port of the VM with the highest I/O is moved to a different uplink. o Use explicit failover.

• Network Failure Detection o Link status only o Beacon probing, needs three or more adapters for a voting majority.

§ Note: This cannot be used in conjunction with IP Hash. • Notify Switches (yes/no)

o During failover/failback of one uplink, the virtual switch sends notifications over the network to update the lookup tables on the physical switches.

• Physical switch configuration recommendations o Disable spanning tree on physical ports used by ESXi hosts o For Cisco, enable PortFast (saves about 30s during initialisation). o Enable PortFast BPDU guard. o Disable trunk negotiation.

• VLANs: untagged, tagged. o Untagged: all packets on the port are bound to a single, specific VLAN ID.


o Tagged: multiple VLANs can flow through this port; also known as trunk mode. • Tagging Options:

o External VLAN Tagging - the physical switch ports are in untagged mode on a specific VLAN ID; no config is needed on the virtual switch - it is all done at the physical switch.

o Virtual Switch Tagging - physical switch ports are in tagged mode carrying multiple VLANs, and each port group is configured with a specific VLAN.

o VM VLAN Tagging - the VM is responsible for tagging/untagging - the virtual switch is set to VLAN 4095 (vSS) or VLAN Trunking (vDS)

• VLAN type: o None: Do not use a VLAN o VLAN: In the VLAN ID field, enter a number between 1 and 4094 o VLAN Trunk: Enter VLAN trunk range (eg: 1-20, 24-30) o Private VLAN: Select a Private VLAN entry

• PVLAN o Promiscuous VLAN o Community - can communicate with each other but not between other communicate pvlans, but can talk to the

promiscuous PVLAN. o Isolated - VMs on this PVLAN can only communicate with the promiscuous PVLAN.

§ You can only have one isolated PVLAN. • Traffic Shaping

o Average bandwidth (Kbit/s) o Peak bandwidth (Kbits/s) o Burst size (KB)

• A vDS supports ingress and egress traffic shaping; a vSS supports only egress traffic shaping. • Traffic optimisations, to improve performance:

o Use TCP Segmentation Offload (TSO) on VMkernel network adapters and in VMs to improve network performance for workloads that have severe latency requirements.

§ TSO, when enabled, reduces the CPU overhead of TCP/IP network operations. § When TSO is enabled, the NIC divides larger data chunks into TCP segments instead of the CPU, allowing

the Guest CPU more cycles to run applications. § Enable TSO along the data-path (ESXi, Physical Network)

o TCP Segmentation Offload (TSO) is also known as Large Segment Offload. § By default TSO is enabled on the VMkernel of the ESXi host and on VMs using vmxnet2 and vmxnet3 virtual

NICs. § To disable on the ESXi host: esxcli system settings advanced set -o /Net/UseHwTSO -i 0 (also in adv settings

on host) § Then reload the driver: esxcli system module set --enabled false --module nic_driver_module

§ To enable on the ESXi host: esxcli system settings advanced set -o /Net/UseHwTSO -i 1 (also in adv settings on the host)

§ Then reload the driver: esxcli system module set --enabled true --module nic_driver_module § To disable on VM (https://kb.vmware.com/s/article/2055140)

o If the physical adapter DOES NOT support TSO, the VMkernel SEGMENTS large TCP packets coming from the guest OS and sends them to the adapter.

o Large Receive Offload (LRO) • IEEE 802.3 jumbo frames: if set to 9000, the MTU must be configured end-to-end (physical switch port, vNIC and VMkernel adapters), otherwise performance will NOT increase. A minimal esxcli sketch follows.
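On a standard switch the MTU is set per vSwitch and per VMkernel adapter; a minimal esxcli sketch (vSwitch0 and vmk1 are example names):
 esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
 esxcli network ip interface set --interface-name=vmk1 --mtu=9000
On a vDS the MTU is set once in the switch properties from the Web Client.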

• Auto-rollback and recovery of the Management Network

o Supported in both vSS and vDS, to fix an invalid configuration of the management network. o vSphere networking rollback

§ Updating the speed or duplex of a physical NIC § Updating DNS or routing settings § Updating teaming/failover policies, and traffic shaping of a standard portgroup which contains the

management network. § Removing the mgmt vmkernel network adapter from a standard or dvswitch. § Removing a physical NIC of a standard or distributed switch containing the mgmt vmkernel network

adapter. § Migrating the mgmt VMkernel adapter from a vSphere standard to a distributed switch. § If the network disconnects for any reason, the task fails and the host rolls back to the last valid

configuration. o vSphere Distributed Switch rollback

§ The vDS rollback happens when invalid updates are made to vDS, dvPorts, or distributed ports for one of the following:


§ Changes to the MTU of the dvSwitch § Changing the teaming and failover, VLAN and traffic shaping § Blocking all ports in the dvPortgroup WHEN containing the management vmkernel network

adapter. • Auto-rollback can be disabled: Web client -> vCenter instance -> config tab -> settings -> advanced settings -> edit ->

config.vpxd.network.rollback key, set to false. • Configure a vDS across multiple vCenters to support long-distance vMotion:

o Starting with vSphere 6.0, you can hot-migrate VMs between vCenter Servers. o Requirements:

§ minimum network bandwidth 250 Mbps, with maximum latency 150 ms § virtual network: host-to-host VMkernel, and VM-layer networking connectivity for VM port groups. § CPU compatibility: both hosts must have the same CPU generation family (the EVC baseline must match). § Version: source and destination vCenter Server and ESXi instances must be 6.0 or higher. § Time sync: both vCenter Server instances must be time-synchronized, for SSO token verification. § uses the 'Provisioning TCP/IP stack' § Important: cross-vCenter or long-distance vMotion can only be performed from the vSphere Web Client if the

vCenters are in ELM and in the same SSO domain. § There is a fling for migration between non-shared SSO domains via the SDK.

• ERSPAN (port mirroring; Encapsulated Remote Switched Port Analyser - ERSPAN) o Port mirroring is used on a switch to send a copy of the packets seen on one port (or an entire VLAN) to a

monitoring connection on another switch port. Port mirroring is based on ERSPAN standards. o Configure at dvSwitch level. o Session types:

§ Distributed Port Mirroring - Mirrors network traffic from a set of distributed ports to other distributed ports

§ Remote Port Mirroring - Mirror network traffic from a set of distributed port to specific uplink ports § Remote Mirroring Destination - Mirror network traffic from a set of VLANs to distributed ports § Encapsulation Remote Mirroring (L3) Source - Mirror network traffic from a set of distributed ports to

remote agents IP addresses § The only mirroring option which provides encapsulation. You can use:

§ GRE § ERSPAN Type II § ERSPAN Type III

§ Distributed Port Mirroring (Legacy) - mirror network traffic from a set of distributed ports to a set of distributed ports and/or uplink ports.

§ Note: status of mirror is disabled by default, you need to enable it. § Advanced Properties:

§ Set packet length (bytes): e.g. the minimum is 60 § Sample rate: 1

• TCP/IP stacks o Default TCP/IP Stack - provides networking support for management traffic (and usually also for vMotion, IP

storage, FT.) o vMotion TCP/IP Stack - can be used by vMotion to provide better isolation or when vMotion adapters require a

different gateway. If you use this stack then any configured vMotion vmkernels on the default stack are disabled.

o Provisioning TCP/IP Stack - Supports cold vMotion, cloning, snapshot migration and long-distance vMotion. § If you plan to transfer high volumes of virtual machine data that the management network cannot

accommodate, redirect the cold migration traffic on a host to the TCP/IP stack that is dedicated to cold migration and cloning of powered-off virtual machines.

o To create a custom stack: esxcli network ip netstack add -N=StackName (see the sketch below)
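A minimal sketch of creating a custom stack and attaching a new VMkernel adapter to it (the stack and portgroup names are examples; the --netstack flag is per the esxcli network ip interface namespace):
 esxcli network ip netstack add -N "coldMigration"
 esxcli network ip interface add --interface-name=vmk3 --portgroup-name="ProvisioningPG" --netstack=coldMigration
The built-in vMotion and Provisioning stacks are simply selected in the Web Client when you create the VMkernel adapter.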

• Configuring NetFlow: VMware supports IPFIX (Internet Protocol Flow Information eXport) o Configured at the dvSwitch level. o Enter collector IP address (IPv4/IPv6) o Collector Port o Observation Domain ID (vDS identifier) o Switch IP address (identifies the vDS as a single network device in the NetFlow collector) o Advanced Settings:

§ Active flow export timeout (seconds) § Idle flow export timeout (seconds) § Sample rate § Process internal flow only (disable)

• NetFlow use case: SolarWinds; monitor IP network traffic.


• Network I/O Control (NIOC) is used as a means to guarantee network bandwidth. It can be used to resolve situations where several traffic types compete for common network resources.

• NIOC o NIOC v3 enhancements:

§ Enables bandwidth to be guaranteed at the virtual network interface of the VM (edit VM -> Network Adapter -> Set: Share, Limit or Reservation)

o NIOC use cases: § Allows you to reserve bandwidth for system traffic based on the capacity of the physical adapters. § Offers fine-grained resource control at the VM network adapter level (like you do for CPU/RAM). § Use NIOC to allocate network bandwidth to business-critical applications and to resolve situations where

several types of traffic compete for common resources. § Configure shares, limits and reservations (edit VM -> Network adapter -> Set: Share, Limit or

Reservation). § When NIOC is enabled, resource pools are automatically applied to their corresponding traffic types. § The capacity of the physical adapter determines the bandwidth that you can guarantee (max 75%)

§ So for example: on a vDS that is connected with 10GbE adapters. You could configure a reservation to guarantee min:

§ 1 Gbps to Management § 1 Gbps to vSphere FT § 1 Gbps to vMotion § 0.5 Gbps to virtual machine traffic. § You can reserve no more than 7.5 Gbps (75%). You might leave the remaining

bandwidth for the host to allocate dynamically o NIOC Requirements:

§ Requires NIOC to be enabled on vDS § Requires Enterprise Plus licensing

o NIOC Architecture § System Traffic, is strictly associated with the ESXi host (Set Share: Low, Normal, High & Custom or

reservation in Mbit/s) § Management § Fault Tolerance § NFS § vSAN § vMotion § vSphere Replication § vSphere Data Protection § iSCSI

§ Virtual Machine Traffic (user-defined pools are removed in NIOC v3) § using shares, reservations and limits.

§ Shares: § High = 100 § Normal = 50 § Low = 25

o NIOC Setup: § Enable NIOC Control on a vDS § Configure Bandwidth Allocation for System Traffic

§ vDS -> Configure -> Resource Allocation -> Select 'System Traffic' § Set the share value as required between 1-100 (High = 100, Normal = 50, Low = 25) § Set (optional): Reservation in Mbit/s § Set (optional): Limit in Mbit/s

§ Configure Bandwidth Allocation for VM traffic § NIOC lets you configure bandwidth requirements for individual VMs. § vDS -> Configure -> Resource Allocation -> System Traffic -> select 'Virtual Machine'

Traffic. § Configure with a reservation (0.2 Gbit/s = 200 Mbit/s)

§ vDS -> Configure -> Resource Allocation -> Network Resource Pools (you now have 0.2 Gbit/s = 200 Mbit/s to use).



§ Add new Resource Pools (e.g. Tenant A and Tenant B); give each tenant a reservation of 100 Mbit/s.

§ Edit the dvPort Group for Tenant A § Under network resource pool set resource pool to Tenant A. All VMs on this

dvPortgroup will now use the Tenant A RP - 100Mbit/s reservation.

§ Alternatively you can edit the bandwidth by editing the VM and setting: Shares (High-

Normal-Low-Custom), Reservation (Mbit/s) or Limit (Mbit/s) § Move a Physical Adapter Out of the Scope of NIOC

§ Under some circumstances you may need to exclude an adapter (physical uplink), say a 1 GbE adapter.

§ Use the advanced system setting Net.IOControlPnicOptOut (e.g. value: vmnic0,vmnic3) o Upgrade: NIOC v2 vs v3

§ In vSphere 6.0, NIOC v2 and v3 coexisted. The two versions implement different models for allocating bandwidth to virtual machines and system traffic.

§ In NIOC v2, you configure bandwidth allocation for VMs at the physical adapter level § In NIOC v3, you configure bandwidth allocation for VMs at the level of the entire distributed

switch. § Note: when you upgrade a dvSwitch to 6.5, NIOC is also upgraded to version 3 ** unless you are using

CoS tagging or user-defined network resource pools. In that case, the resource allocation models of version 2 and version 3 do NOT allow for a non-disruptive upgrade.

§ You can continue to use version 2 to preserve your bandwidth allocation settings for VMs or you can switch to NIOC v3 and tailor your bandwidth policy across the switch hosts.

§ Note: SR-IOV is NOT available for virtual machines configured to use NIOC v3 § Note: The upgrade from version 2 to version 3 IS DISRUPTIVE. Certain functionality is only available in

NIOC v2 and is removed in NIOC v3 (as mentioned above). § The following functionality is removed:

§ user-defined network resource pools, including all associations between them and existing dvPort

§ existing associations between ports and user-defined network resource pools § CoS tagging of the traffic that is associated with a network resource pool.

§ During the upgrade to NIOC v3, the system reports where CoS tagging, user-defined RPs or resource allocation policy overrides have been configured.

Configure and Administer vSphere 6.x Storage

• vSphere 6.5 supports: o Direct attached storage (DAS) - SCSI, SATA, SAS and NVMe o Network attached storage (NAS) - NFSv3, NFSv4.1 *note the Content Library also supports the SMB protocol, but with limited

functionality. o Storage Area Networks (SAN) - FC, FCoE and iSCSI.

• Features/Capabilities o DAS: VMFS - vADP, RDM, vSphere HA, DRS o SAN FC: VMFS & vVOLs - vADP, RDM, vSphere HA, DRS, vSphere FT o SAN iSCSI: VMFS & vVOLs - vADP, RDM, vSphere HA, DRS, vSphere FT o NAS - NFS: NFS & vVOLs - vADP, vSphere HA, DRS, vSphere FT

• Starting with vSphere 6.0, NFS 3 and NFS 4.1 are supported. • Creating NFS volume for use by virtual machines

o Create a new VMkernel adapter to connect to the storage network (separate switch or portgroup). o Add new datastore -> select NFS version -> /volume/nfs (export). A shell equivalent follows.
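The same mount can be scripted from the shell; a minimal sketch (the server, export and datastore labels are examples):
 esxcli storage nfs add --host=nas01.lab.local --share=/vol/nfs1 --volume-name=NFS3-DS1
 esxcli storage nfs41 add --hosts=nas01.lab.local --share=/vol/nfs41 --volume-name=NFS41-DS1
The nfs41 namespace accepts a comma-separated list of server addresses (--hosts) to support session trunking.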

• Storage filters (storage protection filters) help avoid storage/LUN corruption by filtering out devices after they are marked for use by ESXi.

• Turn Off Storage Filters : vCenter Server Object -> Configure Tab -> Settings : Advanced Settings -> Edit (search for one of the below).


o config.vpxd.filter.vmfsFilter - VMFS filter (filters out VMFS volumes that are already in use) o config.vpxd.filter.rdmFilter - RDM filter (filters out LUNs already used by RDM) o config.vpxd.filter.sameHostsAndTransportsFilter - Same Hosts and Transports Filter (Filters out LUNs ineligible

for use as VMFS extents). o config.vpxd.filter.hostRescanFilter - Host rescan filter (if you turn off this filter, hosts will still perform a rescan

each time you present a new LUN to a host or cluster). § By nature, this setting provides a consistent view of all the VMFS datastores managed by vCenter.

o You do NOT need to restart vCenter after making storage filter changes. • By default, when performing VMFS datastore management operations, such as creating a VMFS datastore, or increasing

or deleting a VMFS datastore, vCenter Server will automatically rescan and update your storage on all hosts. o This can be performed manually at the adapter, host, cluster and datacenter level. o Using the new HTML5 client (vSphere Client), it is possible to rescan all adapters at the same time.

• Rescan choices: o Scan for new Storage Devices - performs a rescan of all adapters to discover new storage devices o Scan for new VMFS volumes - performs a rescan of all storage devices to discover new datastores that have

been added since the last scan. • vMotion migrations may be impacted during lengthy rescan operations.

o Rescan progress can be monitored in /var/log/vmkernel.log • You need to perform a manual rescan each time you perform one of the following operations (see the esxcli sketch after this list):

o Change in SAN fabric zoning o Creation of new LUNs o Change of path mask on a host o Reconnecting a SAN/SAS cable o Changing iSCSI CHAP settings o Adding a single host to the vCenter Server after you have changed some shared storage settings.
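From the shell, a manual rescan looks like this (the adapter name is an example):
 esxcli storage core adapter rescan --all # rescan every HBA for new devices
 esxcli storage core adapter rescan --adapter=vmhba33 # rescan a single adapter
 vmkfstools -V # refresh/rescan for new VMFS volumes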

• Boot from SAN requires that each host has access to a dedicated boot LUN o Enable boot from SAN in the host BIOS and adapter (dependent on the adapter) o Multipathing on the boot LUN is not supported o Choose LUN ID 0 o FCoE boot from LUN requires:

§ FCoE Boot Firmware Table (FBFT) § FCoE Boot Parameter Table (FBPT) § FCoE Limitations:

§ you can't change the boot config within ESXi (do this at the controller) § Core dump is not supported on any software FCoE LUN, including the boot LUN. § Multipathing is not supported pre-boot § Intel adapters with Cisco networks require the switch port trunk native VLAN to be set.

o iSCSI boot from LUN § Requires: iSCSI Boot Firmware Table (iBFT)

• NFS v3 supports a single TCP connection between the client (ESXi host) and the storage target. For this reason ESXi does not support multiple paths; your only option is more IPs - different subnets or link aggregation.

• NFS v4.1 supports multipathing (requires session trunking) • NFS datastores can be mounted as read-only datastores (e.g. for ISOs). • Virtual disks created on NFS datastores are thin provisioned by default. • iSCSI

o Software iSCSI initiator - ESXi manages the entire iSCSI stack (no special h/w required), uses standard NICs. o Dependent hardware iSCSI initiator - iSCSI management and configuration managed by ESXi, but adapter offers

some offloading capabilities. o Independent hardware iSCSI initiator - All networking stack managed by the adapter, on the ESXi side you just

see one or more vmhbas, all network configuration must be performed at the card level. • Software and Dependent hardware iSCSI initiators depend on VMkernel networking

o VMkernel adapters must be on the same subnet as the storage targets; multiple subnets = multiple switches (see vendor recommendations)

o requirements for port bindings: § All iSCSI ports of the array target must reside in the same broadcast domain and IP subnet as the

VMkernel adapters § All VMkernel adapters used for iSCSI port binding must reside in the same broadcast domain and IP

subnet • iSCSI supports the following discovery methods:

o Dynamic (Send Targets): the admin enters one address, and a 'send targets' request is then sent to the iSCSI array. The iSCSI array returns a list of available targets; these then show up in the Static Discovery tab.


o Static: all targets are pre-defined by the administrator. • Port bindings do NOT support routing (see the network requirements above) • Independent hardware iSCSI adapters DO NOT require VMkernel networking • Hardware iSCSI adapters ARE enabled by default • Software iSCSI initiators need to be created -> Storage Adapters -> Add -> Software iSCSI adapter.

o You need to reboot the host to disable and remove the software iSCSI adapter. • To configure the software iSCSI adapter: ESXi host -> Storage: Storage Adapters -> select the iSCSI software adapter: vmhba64

… o Add Dynamic Discovery o Static Discovery o Configure CHAP o Enable/Disable adapter o Change the iSCSI IQN name o Network Port Bindings (VMkernel 1:1 mapping) o Advanced Options tab

§ DelayedAck, Header Digest, ErrorRecoveryLevel, ARP Redirect
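The equivalent shell workflow for enabling the software initiator and pointing it at a target (the adapter name and target address are examples):
 esxcli iscsi software set --enabled=true
 esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.20:3260
 esxcli storage core adapter rescan --adapter=vmhba64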

• iSCSI CHAP o ESXi supports unidirectional CHAP for all types of iSCSI initiators o ESXi supports bidirectional CHAP for software and dependent hardware adapters ONLY!

§ This means independent adapters CANNOT use bidirectional CHAP, only unidirectional. o The CHAP name cannot exceed 511 alphanumeric characters, and the CHAP secret cannot exceed 255 alphanumeric

characters. • FC Zoning, Single initiator - Single Target recommended. • LUN Provisioning:

o Thick Provisioning - The entire storage is allocated o Thin Provisioning - Only used storage is allocated.

• Virtual Disk Provisioning o Thick Provisioning Eager Zeroed

§ Create a virtual disk in default thick format § Space required for virtual disk is allocated when the disk is created. § Data is zeroed out on creation (required by VMware FT)

o Thick Provisioning Lazy Zeroed § Create a virtual disk in default thick format § Space required for virtual disk is allocated when the disk is created. § Data remaining on physical device is NOT erased during creation, but is zeroed out on demand.

o Thin Provisioning § VMware will report the provisioned space (configured) and used space (which is what the VM is using).

• Storage Provisioning: o Thin at storage level - Thin at VM level, resulting provisioning = Thin o Thin at storage level - Thick Lazy Zeroed at VM level, resulting provisioning = Thin o Thin at storage level - Thick Eager Zeroed at VM level, resulting provisioning = Thick o Thick at storage level - Thin at VM level, resulting provisioning = Thin o Thick at storage level - Thick Lazy/Eager Zeroed at VM level, resulting provisioning = Thick.

• SPBM - Storage Policy Based Management is a framework that provides a single control plane across various data services and storage solutions, including vSAN and VVOLs.

• Enable VSAN (requires separate licensing) o Hosts & Clusters -> Datacenter -> right click -> New Cluster -> Enter Cluster Name -> Enable vSAN by selecting

'vSAN Turn On’ - checkbox • vSAN

o VM Storage Policy - Determines component placement and provisioning redundancy. § The default storage provision policy is thin with vSAN. - ‘object space reservation = 0'

o Failures to Tolerate - Number of hosts, disks or network failures a VM can tolerate. o Disk Group, A disk group is a unit of physical storage capacity on a host which groups physical devices to provide

performance and capacity to the vSAN cluster. § Each Disk Group must contain ONE flash and one or multiple capacity devices. § The devices used for caching are NOT shared among disk groups. 1 Diskgroup = 1 Flash disk and one or

multiple capacity disks. § Only capacity drives contribute to capacity available (cache disks do not contribute)

o vSAN Monitoring: use the RVC (Ruby vSphere Console), and you can monitor vSAN from the vSphere Client Monitor - tab.

§ vSAN Observer output: https://kb.vmware.com/s/article/2064240 § From console:


§ rvc username@localhost (enter password when prompted) § cd to the vSAN-DC (data center object) § run this command to enable live monitoring: vsan.observer ~/computers/vSAN-Cluster --run-

webserver --force § To view 'live' stats go to https://vCenterIPorFQDN:8010 § To generate a performance stats bundle, for example for 1 hour:

§ vsan.observer ~/computers/vSAN --run-webserver --force --generate-html-bundle /tmp --interval 30 --max-runtime 1

o vSAN observer is a web-based tool that runs on RVC and is used to provide in-depth performance analysis of vSAN.

o vSAN iSCSI Target § vSAN can be configured as an iSCSI Target for External Workloads! where part of the vSAN datastore is

exported as an iSCSI LUN. § Supported OS: Windows 10, 2016, 2012 R2, 2012, 2008 R2, 2008. RHEL 5, 6, 7. SUSE 11/12

(check SP level). § Important: using vSAN iSCSI target to provide storage for ESXi is NOT supported.

o vSAN Fault Domains: § A fault domain is a set of elements that can fail together/at the same time without causing an issue

due to redundancy being in place. § By default with vSAN each ESXi host is a fault domain, but you can group one or more hosts according

to their location in the datacentre. § using vSAN Fault Domain feature, you can protect against rack, chassis failure.. A minimum of

three fault domains is required for a vSAN cluster (2 hosts plus a witness). § Fault Domain and Stretched Cluster options:

§ Do not configure § Configure two host vSAN cluster § Configure stretched cluster § Configure Fault Domains.

• Virtual Volumes (VVOLs) o Virtual Volumes encapsulate virtual machine files, virtual disks and their derivatives. o With VVOLs an individual virtual machine, not the datastore, becomes the unit of storage management. o VVOLs help improve granularity; they help you differentiate virtual machine services on a per-application

level. o VVOLs arrange the storage around the needs of the virtual machines, making storage virtual-machine-centric. o Storage Providers

§ A VVOL storage provider, also called a VASA provider, is a software component that acts as a storage awareness service for vSphere.

§ The provider mediates out-of-band communication between vCenter, ESXi and the storage system. o Storage Containers, VVOLs do not require LUNs, instead Virtual Volumes use a storage container, which is a raw

pool of storage capacity or an aggregation of storage capabilities. § A single Storage Container CANNOT span multiple storage arrays.

o Protocol Endpoint § ESXi has no direct access to virtual volumes on the storage side. Instead ESXi uses a logical I/O proxy,

called the protocol endpoint, to communicate with the virtual volumes and the virtual disk files that virtual volumes encapsulate.

§ ESXi uses protocol endpoints to establish a data path on demand from virtual machines to their respective virtual volumes.

o Virtual Volumes Datastore, represent a storage container in vCenter Server and the vSphere Web Client. § After vCenter discovers storage containers exported by storage systems you MUST mount them as

virtual volume datastores. Virtual volume datastores are not formatted in a traditional way like, for example, VMFS datastores. You must still create them, as they are a construct required for vSphere functionality (FT, HA and DRS).

§ use the datastore creation wizard to map a storage container to a virtual volume datastore. § The VV datastore acts like any other, you can browse virtual machines, the datastore also supports

unmounting and mounting. § The size of the datastore is configured by the storage administrator outside of vSphere.

o The system creates the following types of virtual volumes for the core elements that make up the virtual machine (note: these reside on, and are visible on, the VVOL array; multiple per VM):

§ Data-VVol, this corresponds to each virtual disk (.vmdk); these can be either thick or thin provisioned. § Config-VVol, a config virtual volume, or home directory (which contains metadata). This includes the

.vmx, log files, etc. § Swap-VVol, created when a VM is first powered on; holds copies of VM memory pages.


§ Snapshot-VVol, a virtual machine volume that holds the content of the virtual machine memory for a snapshot (this is thick provisioned). This is only created on the array when a user performs a snapshot (a new VVol is created on the array); the main point being that this is offloaded to the array.

§ Other, virtual volumes for specific features; for example, a digest virtual volume is created for Content-Based Read Cache (CBRC)

o Typically, a VM creates a minimum of three virtual volumes (Data-VVol, Config-VVol and Swap-VVol). o Virtual Volumes require VM storage Policies

§ A VM storage policy is a set of rules that contains placement and quality of service requirements for a virtual machine.

§ If the VM's policy requirements change, specify a new policy for the VM (apply to all); the array will automatically move the VM objects to comply with the new policy, based on storage capabilities. When complete, the VM storage policy should show 'Compliant'.

§ Note: if you do not create a VM storage policy compatible with the Virtual Volume datastore, the system uses the default 'No Requirements' policy.

§ The No requirements policy is a generic policy that contains no rules or storage specifications. o Virtual Volumes Supports the following storage protocols:

§ NFS version 3, NFS version 4.1 § iSCSI § FC and FCoE

o VVOLs Architecture (diagram not reproduced):

§ Setup VVOL (vSphere Admin)

§ 1. Register storage provider in the vCenter Server (assumes you are using a supported VASA provider)

§ 2. Create new datastore -> select Virtual Volume datastore type § 3. Create VMs in this new datastore. Use default storage VVOL policy.

o Dell Workflow (iSCSI) - Block (diagram not reproduced)

o Dell Workflow (NFS) - File (diagram not reproduced)


• SPBM (Storage Policy Based Management) - in General

o You apply the storage policy when you create, clone, or migrate the virtual machine. After you apply the storage policy, the SPBM mechanism assists you with placing the virtual machine in a matching datastore.

o Multiple vCenter Servers in Enhanced Linked Mode (ELM) each maintain their own set of policies. o A maximum of 1024 policies can exist per vCenter. o Storage policies can comprise many rules (performance, availability and space). o A storage policy can be applied to a group of VMs, a single VM, or even a single VMDK within a VM. o Storage policies are not additive; only one policy (which can contain one or more rules) can be applied per object.

• Pluggable Storage Architecture o Native Multipathing Plugin (NMP):

§ Storage Array Type Plugin (SATP) - Recognises the type of storage array architecture § Path Selection Policy (PSP) - Responsible for selecting a physical path for I/O requests § Multipathing Plugin (MPP) - Third-party plugin that can replace the NMP and supply its own multipathing rules.

o Note that I/O could be disrupted for up to 60 seconds during path failover. Therefore it may be necessary to increase the disk timeout on VMs to prevent issues with the guest OS. This is done automatically when VMware Tools is installed.

• SATP/PSP Table:

Type of Storage | Generic SATP | Default PSP (Path Selection Policy)
Active-Active storage array | VMW_SATP_DEFAULT_AA | Fixed
Active-Passive storage array | VMW_SATP_DEFAULT_AP | MRU (Most Recently Used)
Asymmetrical (ALUA) storage array | VMW_SATP_ALUA | MRU (Most Recently Used)

• Path Optimised / Unoptimised o Optimised: These are paths on the controller that owns the LUN (for Active-Active storage) o Unoptimised: These are paths on the controller that does not own the LUN (Active-Passive storage) o Active Unoptimised: These are paths on a controller that does not own the LUN, on ALUA storage.

• MPP - Multi-pathing Claim Rules: o When you start an ESXi host or perform a rescan of an adapter, the host discovers all physical paths to storage devices available to the host. Based on a set of claim rules, the host determines which multipathing plugin (MPP) should claim the paths to a particular device and become responsible for managing multipathing support for that device.

o By default ESXi performs a path evaluation every 5 minutes, causing any unclaimed paths to be claimed by the appropriate MPP.

• Paths can appear as: o Active - Paths available for issuing I/O to a LUN. o Standby - if the active path fails, this path can quickly become operational and can be used for I/O. o Disabled - The path is disabled and no data can be transferred. o Dead - The software cannot connect to the disk through this path.

• Path Selection Policy (PSP) o Fixed (VMware) - Host selects the first working path discovered at boot. The host uses the preferred path (if configured). If the preferred path fails, an alternate path is selected. If the failed path returns, the original preferred path is used again. (Generally used with Active-Active arrays)

o Most Recently Used (VMware) - Host selects the path that it used most recently. When the path becomes unavailable, the host selects an alternate path. The host DOES NOT revert to the original path when it becomes available again. (Generally used with Active-Passive arrays)

o Round Robin (VMware) - Host uses automatic path selection (rotating through all active paths). (Can be used for both Active-Passive and Active-Active arrays).

• Paths can be changed under storage -> storage devices or protocol endpoints (vvols). o select the ‘Edit Multipathing’ button.
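o The same change can be scripted from the ESXi CLI (a minimal sketch; the naa ID below is a placeholder for your device):
§ run: esxcli storage nmp device list (shows the current SATP/PSP per device)
§ run: esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR (sets Round Robin on one device)
§ run: esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR (changes the default PSP the SATP assigns to newly claimed devices)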


• VMFS 6 features o Support up to 64 TB datastores o VMFS hot extend o VMDKs larger than 2 TB o Unified block size (1MB) o ATS/VAAI o Uses sub-blocks (64KB, dynamic) for space efficiency o Small file support (1KB) o Physical block size (512n and 512e) o VMFS space reclamation (automatic) o VMDK space reclamation (yes, for virtual machines running HW version 13)

• Note: You cannot upgrade from VMFS 5 to VMFS 6 o During installation process, the local datastore (if it exists) will be formatted as VMFS 5.

• Note: You cannot create VMFS 3 datastores, but you can use them. • RDM (Raw Device Mapping) - Special mapping file in a VMFS volume that manages metadata for its mapped device.

o Local disk cannot be used. o RDM in physical mode cannot use snapshots at VM level (but could leverage snapshots at array level). o vMotion of RDM in physical mode is supported, but the destination host needs to have access to the LUN. o Flash Read Cache is not supported with RDM in physical mode; virtual mode is supported. o RDM is NOT supported on NFS o Compatibility modes:

§ Physical Mode - allows the guest OS to access the hardware directly § Virtual Mode - RDM is managed as a VMDK (*snapshots supported).

• NFS v3 vs NFS v4 - capabilities

Feature | NFS v3 | NFS v4.1
ESXi compatibility | Since ESX version 3 | Since ESXi 6.x
NFS Security | AUTH_SYS | AUTH_SYS / Kerberos
Hardware acceleration | VAAI NAS (since vSphere 5.0) | VAAI NAS
Multipath | No | Yes
IPv6 Support | Yes | Yes
File locking | .lck-file_id | Share reservations

• NFS / vSphere Feature Support

vSphere Feature | NFS v3 | NFS v4.1
vMotion and Storage vMotion | Yes | Yes
vSphere HA | Yes | Yes
vSphere FT | Yes | Yes (only new FT)
DRS | Yes | Yes
Host Profiles | Yes | Yes
Storage DRS | Yes | No
Storage I/O Control | Yes | No
Site Recovery Manager (SRM) | Yes | No
Virtual Volumes | Yes | Yes
vSphere Replication | Yes | Yes
vROps | Yes | Yes

• VM SCSI Bus Sharing - a setting on a VM's SCSI controllers that allows multiple VMs simultaneous access to the same virtual disks connected to the selected controller.

o None - virtual disks cannot be shared by other VMs o Virtual - Virtual disk can be shared by VMs on the SAME ESXi host o Physical (when using RDM in P-mode) - Virtual disks can be shared by VMs on any ESXi host.

• Multi-writer locking (new) https://kb.vmware.com/s/article/1034165 o Another way to share a virtual disk across two or more VMs is by using the multi-writer option. o Allows virtual disk to be shared amongst multiple VMs o This option is used by vSphere FT o This option is also used by Oracle RAC o Limitations:

§ Suspend VM not supported

Page 18: VCP 6.5 Study Notes – Exam Number 2V0-622 Note: Review ... · o starting with vSphere 6.0, the local administrator of the PSC (root for VCSA) or the windows administrator if deployed

§ Snapshot not supported § Cloning not supported § Storage vMotion not supported § CBT not supported § vSphere Flash Read Cache - Not supported.
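o Per KB 1034165 the multi-writer flag is set per virtual disk; a minimal .vmx sketch (scsi1:0 and the path are example values only):
§ scsi1:0.fileName = "/vmfs/volumes/shared-ds/shared-disk.vmdk"
§ scsi1:0.sharing = "multi-writer"
o The same setting is also exposed in the Web Client as the disk's 'Sharing' drop-down when editing VM hardware.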

• NFS 4.1 / Kerberos o With NFS 4.1 you can add multiple IP addresses or server names if the NFS server supports trunking (4.1) in

order to achieve multipathing. o In NFSv3, remote files are accessed with root permissions (no_root_squash), also known as AUTH_SYS. In NFS 4.1 you can use Kerberos authentication to secure communication between the NFS server and ESXi. o There are two different options for Kerberos:

§ Kerberos for authentication only (krb5) § Kerberos for authentication and data integrity (krb5i) - helps check packets for any modifications.
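o Mounting an NFS 4.1 datastore with multiple server addresses can also be done from the ESXi CLI (a sketch; addresses, share and volume name are examples - check esxcli storage nfs41 add --help on your build for the Kerberos security flag):
§ run: esxcli storage nfs41 add -H 192.168.1.10,192.168.1.11 -s /export/ds01 -v nfs41-ds01
§ run: esxcli storage nfs41 list (verify the mount)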

• Unmounting datastores. Before unmounting datastores ensure that: o there are no registered or running VMs on that datastore o check that the datastore is not managed by storage DRS o verify that storage I/O control is disabled on the datastore o in vSphere HA, make sure the datastore is not being used for datastore heartbeats.
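o After the checklist passes, the unmount itself can be done in the Web Client or from the ESXi CLI (a sketch; the datastore label is an example):
§ run: esxcli storage filesystem list (confirm the datastore and its mount state)
§ run: esxcli storage filesystem unmount -l datastore1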

• Enabling/disabling vStorage APIs for Array Integration (VAAI) o VAAI is enabled by default, but you can control it by modifying the following advanced settings:

§ VMFS3.HardwareAcceleratedLocking - ATS, used during the creation and locking of files on the VMFS volume § DataMover.HardwareAcceleratedMove - Clone Blocks/Full Copy/XCOPY, which is used to copy data § DataMover.HardwareAcceleratedInit - Zero Blocks/Write Same, which is used to zero out disk regions.
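o These can be checked or toggled per host from the ESXi CLI (a sketch; 1 = enabled, 0 = disabled):
§ run: esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
§ run: esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0
§ run: esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1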

• SIOC - Storage I/O Control o Comes into force only during I/O congestion. o Provides QoS to VM disks. o Right click datastore -> General -> Edit: tick ‘Enable Storage I/O Control’.

o In vSphere 6.5 there are two different SIOC versions:

§ SIOC v1 - disabled by default and can be enabled per datastore (as above). Latency threshold is 30ms, with peak throughput at 90%. If SIOC is triggered disk shares (aggregated from all VMDKs using the datastore) are used to assign I/O queue slots on a per host basis to that datastore.

§ SIOC v2 - This can now be managed from SPBM; VM storage policies in vSphere 6.5 have a new 'common rules' section, which is used for configuring data services provided by hosts.

§ Note: SIOC v1 and v2 can coexist in vSphere 6.5. § SIOC limitations:

§ Enterprise plus licensing required § SAN with auto-tiering must be certified for SIOC § NFS 4.1 is not supported and RDM is not supported § VMFS datastores with multiple extents - not supported. § Datastores must be managed by single vCenter.

§ SIOC v2 - Shares: (can edit at VM level) § Low = 500 § Normal = 1000 § High = 2000

§ Monitor SIOC -> Datastores -> Select Datastore -> Monitor tab: Performance. The following graphs are available:

§ Storage I/O Control Normalized Latency § Storage I/O Control Aggregated IOPs § Storage I/O Control Activity

• Checking Metadata Consistency with VOMA o You can use VOMA (vSphere On-disk Metadata Analyser) to identify incidents of metadata corruption that affect file systems or underlying logical volumes.


o You can run VOMA from the ESXi CLI; there isn't any remediation/correction you can perform. Documentation mentions contacting VMware support. o You can run VOMA only against a single-extent volume. o Power off any VMs that are running on the datastore or move them to an alternate datastore. o Run: esxcli storage vmfs extent list to get the device ID (naa.id) o Run: voma -m vmfs -f check -d /vmfs/devices/disks/naa.ID

§ -m : module (vmfs, lvm or ptbl) § -f : function (query, check) § -d : device (naa.xxxx) § -s : logfile output § -v : version of VOMA § -h : help.

Upgrade a vSphere Deployment to 6.x

• vSphere 6.5 Update 1 supports migration from vSphere 6.0 Update 3 and 5.5 Update 3b. • vSphere 6.5 comes with two versions of update manager

o Windows based o Built into the VCSA

• Database support o Both the Windows and VCSA versions use separate databases; the embedded vPostgres can be used for smaller environments of up to 20 hosts / 200 VMs (on Windows). o Windows - larger environments use SQL Server or Oracle. o VCSA - larger environments use Oracle.

• Download sources: o You can download from .vmware.com or import patches. o Default download sources can be disabled only, they cannot be deleted. o You can configure the download source to look at a shared repository *such as UMDS via HTTP. o Network Share is not supported. o Importing patches into VUM requires them in .zip format. o Note download sources using both ‘internet’ and ‘shared repository’ not possible - you can only configure one

or the other. • UMDS can be used if VCSA/vCenter does not have direct connection to the internet.

o UMDS supports the ability to recall a patch if the released patch has a potential issue. • Setup UMDS

o Specify UMDS updates to download § vmware-umds -S --enable-host --enable-va (enables host and virtual appliance updates to be downloaded) § vmware-umds -S --patch-store your_new_patchstore_folder (sets UMDS patch repo) § vmware-umds -S --add-url https://host_URL/index.xml --url-type HOST (download URL) § vmware-umds -D (download selected patches) § vmware-umds -E --export-store repository_path (export downloaded data)

• Upgrading ESXi host to 6.5 can be performed using the host upgrade baseline with update manager. o H/W software requirements for ESXi 6.5 are as follows:

§ 64-bit processor § min two cores § NX/XD bit to be enabled in CPU / BIOS § a minimum of 4GB of RAM § Intel-VT, AMD-RVI enabled on x64 CPUs. § 1 GBe or faster ethernet

o Check hardware supportability (Realtek NICs have compatibility issues). o You can download the ESXi installer .iso or use Image Builder to create your own. o To import an image -> ESXi Images -> 'Import ESXi Image'; the file will be uploaded to the VCSA (*VUM).

• Baselines o Static Baselines - content doesn't change over time, even if new patches are added. This is typically used to deploy a specific patch. o Dynamic Baselines - content is updated automatically, e.g. when new patches are released or removed.

§ Upgrade Baselines - these are used to perform version upgrades of ESXi hosts. § Host Extension Baselines - these are used to upgrade drivers/third party software.


• Predefined baselines cannot be edited, these are the: o Host Baselines

§ Non-Critical Host Patches (Predefined) § Critical Host Patches (Predefined)

o VMs/VAs Baselines § VA Upgrade to Latest (Predefined) § VM Hardware Upgrade to Match Host (Predefined) § VMware Tools Upgrade to Match Host (Predefined)

o Baseline Types § Host Patch -> Choose between Fixed and Dynamic § Host Extension (no additional choice as above) § Host Upgrade (no additional choice as above) § VA Upgrade (in the VMs/VAs Baselines)

• Consider staging patches to hosts if the hosts connect to VUM over a slow WAN. • Troubleshooting update errors (hosts) - See /var/log/esxupdate.log • If you upgrade a cluster of hosts, some cluster features, such as HA, Distributed Power Management (DPM) and Fault

Tolerance (FT) must be temporarily disabled. • Before upgrading an ESXi host backup the configuration - https://kb.vmware.com/s/article/2042141

o Note: when restoring an ESXi host the build number needs to match. o use vicfg-cfgbackup, or § from an ESXi host: vim-cmd hostsvc/firmware/backup_config o use PowerCLI - Get-VMHostFirmware -VMHost ESXi_host_IP_address -BackupConfiguration -DestinationPath output_directory
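o The matching restore (per KB 2042141; a sketch, the bundle path is an example) requires maintenance mode and reboots the host:
§ run: vim-cmd hostsvc/maintenance_mode_enter
§ run: vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz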

• dvSwitch upgrade:

o before upgrading a dvSwitch to 6.5, make sure vCenter and all participating hosts are at the required level. o you can 'export' the dvSwitch configuration: Networking -> dvSwitch -> Settings -> Export

§ Distributed switch and all port groups § Distributed switch only

o The dvSwitch configuration export is saved as a .zip to the desktop. • Upgrade VMware Tools

o a yellow warning triangle denotes that VMware Tools is outdated on the selected virtual machine. o you can edit the VM options and set an upgrade of VMware Tools on next power on. o you can also manually upgrade VMware Tools by right clicking the VM -> Guest OS -> Upgrade VMware Tools o upgrading via Update Manager is the simplest.

§ attach the VMware tools upgrade to match host (predefined) baseline or create your own. • Upgrading virtual machine hardware

o vSphere 6.5 uses virtual machine hardware version 13. o ESXi 5.5 and 6.0 DO NOT SUPPORT hardware version 13. o You can also attach the VM Hardware to Match Host baseline. o You can also edit the cluster -> Configuration tab -> Configuration : General -> ‘Default VM Compatibility’ ->

§ Use datacenter setting and host version § Choose ESXi host version (eg ESXi 6.5 and later) § Or right click the Data Center object -> Edit Default VM Compatibility.

• Upgrade ESXi o CD-ROM o USB drive o VUM - With update manager parallel remediation can be performed. o Auto-Deploy via PXE boot from central auto-deploy server. o esxcli commands

§ download the update bundle from vmware.com in .zip format. § copy the update bundle to a datastore visible by the host. § run: esxcli system maintenanceMode set --enable true § run: esxcli software vib update -d "/vmfs/volumes/datastore1/patch-directory/ESXi500-201111001.zip" § reboot the host. § run: esxcli system maintenanceMode set --enable false

o installation script * ks.cfg § The script can be saved on a USB flash drive, or network location accessible through NFS, HTTP, HTTPS

or FTP. § example: boot host (Shift + O) § enter: ks=http://ip_addressWhereScriptLocated/kickstart/ks.cfg
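§ a minimal ks.cfg sketch (values are examples only, not a recommended build):
§ vmaccepteula
§ rootpw VMware1!
§ install --firstdisk --overwritevmfs
§ network --bootproto=dhcp --device=vmnic0
§ reboot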


• vSphere 6.5 Update 1 - VUM has been integrated into the VCSA, providing an automated update process to ensure that a vSAN cluster is up to date with the recommended release.

• You cannot upgrade to vCenter 6.5 from vCenter 5.1, upgrade to 5.5 first. vCenter 6.5 can manage ESXi version 5.5 or 6.0 hosts.

• Starting with version 6.0, vCenter Server is deployed with two core components: o Platform Services Controller (PSC) o vCenter Server (VC)

• Platform Services Controller (PSC) is responsible for the following services o Single Sign-on (SSO) o VMware License Service o Certificate Management o VMware Appliance Management Service (if deployed as an appliance) o VMware Component Manager o VMware Identity Management Service o VMware HTTP reverse proxy o VMware Service Control Agent o VMware Security Token Service o VMware syslog health service o VMware Authentication framework o VMware Directory Service.

• The PSC can be deployed embedded or external. o Embedded (both PSC and vCenter reside on same server) o External when PSC is separated from vCenter.

• PSC upgrade is dependent on implementation. The upgrade can be performed from the following versions: o vCenter 5.5 with an embedded vCenter SSO, on Windows o vCenter 6.0 with an embedded PSC instance, on Windows o vCenter SSO 5.5 on Windows o PSC 6.0 on Windows o vCenter 5.5 on Windows o vCenter 6.0 on Windows

• Deployment Methods o vSphere 5.5 rules

§ If all vCenter Server 5.5 components are installed on the same server, the vCenter 6.5 upgrade will upgrade your system with an embedded platform services controller.

§ You cannot change the deployment type during upgrade in this instance. § if you have vCenter Server 5.5 configured with an external vCenter SSO, the upgrade will deploy a

separate PSC and a separate vCenter (external instances) § if you have an auto-deploy server for host provisioning configured, the upgrade will also upgrade this

component. The upgrade will migrate auto-deploy to the vCenter running 6.5. You CANNOT continue to use the old auto-deploy server.

§ if vSphere web client server is deployed on a 5.5 system with separate SSO, during the upgrade only the SSO remains remote as an external PSC. All other components moved to separate vCenter Server.

o vSphere 6.0 rules § if you have deployed an external PSC, the upgrade will keep an external PSC § upgrade the PSC first, then upgrade vCenter. § for multiple vCenter Server instances sharing the same PSC or vCenter SSO, you have to upgrade the

vCenter SSO or PSC first, then you can upgrade the vCenter Server instances concurrently. • Before upgrading, backup the vCenter configuration, database and certificate store.

o backup the VCSA: § Connect to the VAMI https://ipOfVC:5480 -> Backup -> Protocols:

§ HTTP, HTTPs, FTP, FTPs, SCP. § Optionally you can select ‘Encrypt backup data’ § Select which parts to backup:

§ select common parts (inventory and configuration) § stats, events, alarms and tasks (additional historical data)


• Note: to restore the backup using the above process, you MUST use the VCSA GUI Installer.

o This restore method will only work if the backup was taken using the VAMI (file backup method). • To restore the VCSA using the GUI Installer

o extract the VCSA ISO -> run the VCSA installer -> select the 'Restore' option. o this is a two stage process:

§ Stage 1: A new VCSA appliance is deployed to the target host. This will be used to replace the failed VCSA.

§ Stage 2: Data is copied from the restore file to the new appliance. o When the restore is completed the following configurations will be restored:

§ Virtual Machine Resource Settings § DRS configuration and rules. § Cluster-host membership § Resource pool hierarchy and settings.

• Standard upgrade procedure: o PSC o vCenter o ESXi Hosts o VMware Tools o Virtual Hardware

• vCenter Server Migration to VCSA o in version 6.5 the Client Integration Plugin is now deprecated. o the Migration Tool can be used to migrate from a Windows vCenter Server to the VCSA o vCenter must be version 5.5 Update 3b or later. o Migration from Windows vCenter 6.5 to VCSA 6.5 is NOT supported. o The upgrade method uses two stages

§ Stage 1 VCSA Deployment (uses a temporary IP) § Stage 2 Copying of configuration from the source VC to the new VCSA.

o If using DRS fully automated, set to manual (the pre-upgrade check will show a warning unless you do this). o Verify port 22 is open on the appliance that you want to upgrade. o if using windows update manager it is required to also install the migration agent on the update manager or

uninstall Update Manager. § copy the migration-assistant directory from the VCSA installer ISO to the Update Manager server. § run VMware-Migration-Assistant.exe § enter the SSO information. § some pre-checks are performed; leave the utility running and start the migration of the vCenter Server.

o Copy VMware-Migration-Assistant.exe to the Windows vCenter Server and start the process. o If you have a VCSA, from the UI install menu just select 'Upgrade'. The migration facility is only available for

windows (source) implementations. § During the second stage of the upgrade the following is copied:

§ FQDN § IP Address § UUID § Certificates § MoRefIDs

o The migration doesn't delete the old vCenter Server, which can be reinstated if there is an issue (hosts will need to be reconnected).

o Note: regarding vCenter sizing, if you want to add ESXi hosts with more than 512 LUNs and 2048 paths to the vCenter Appliance inventory, you must deploy a vCenter sized for a large or x-large environment.

§ Required privileges add hosts: § Host.Inventory.add host to cluster § Resource.Assign virtual machine to resource pool § System.view

Deployment Size | Number of vCPUs | Memory
Tiny (up to 10 hosts and 100 VMs) - not supported in production environments | 2 | 10 GB
Small (up to 100 hosts and 1,000 VMs) | 4 | 16 GB
Medium (up to 400 hosts and 4,000 VMs) | 8 | 24 GB
Large (up to 1,000 hosts and 10,000 VMs) | 16 | 32 GB
X-Large (up to 2,000 hosts and 35,000 VMs) | 24 | 48 GB

• The H/W requirements for the PSC are the same: 2 vCPU, 4GB RAM. • Certificates: If you upgrade an ESXi host to 6.0 or later, the upgrade process replaces the *self-signed (thumbprint) certificates* with VMCA-signed certificates. o if the ESXi host already uses a custom certificate, the certificate is retained (even if the certificates are expired or

invalid). o if you decide not to upgrade the hosts, they will continue to use thumbprint verification (you cannot provision hosts with VMCA certificates; they must be upgraded first). • Upgrade and impact to deployment

Before Upgrade | After Upgrade
vCenter Server 5.5 with embedded SSO on Windows | vCenter Server 6.5 with an embedded PSC on Windows
vCenter Server 6.0 with an embedded PSC instance on Windows | vCenter Server 6.5 with an embedded PSC on Windows
vCenter Single Sign-On 5.5 on Windows | Platform Services Controller 6.5 on Windows
Platform Services Controller 6.0 on Windows | Platform Services Controller 6.5 on Windows
vCenter Server 5.5 on Windows | vCenter Server 6.5 on Windows
vCenter Server 6.0 on Windows | vCenter Server 6.5 on Windows

• To keep your current SSL certificates that are on your Windows vCenter, backup the certificates before you upgrade to vCenter Server 6.5.

• Before upgrading vCenter, if you are using vSphere HA Clusters, SSL certificate checking must be enabled. o if certificate checking is not enabled when you upgrade, vSphere HA fails to configure on the hosts.

§ Configure tab -> General -> verify that the SSL field is set to 'vCenter Server requires verified host SSL certificates'.

• Downtime: When you upgrade vCenter Server, expect the service to be unavailable for a minimum of 40-50 minutes; it can take much longer depending on the size of the environment. o vSphere DRS does not work during this time, but vSphere HA will, as it is NOT dependent on vCenter (vCenter is needed only for management).

• Patching and upgrading *vCenter in HA* mode o High Level Process: Maintenance Mode (cluster) -> Patch Witness -> Patch Passive -> Failover -> Patch new Passive -> Exit Maintenance. o Place the vCenter HA cluster into maintenance mode o Login to the Active Node -> SSH to Witness Node.

§ Patch Witness Node o SSH to Passive Node.

§ Patch Passive Node § Initiate failover.

o Login to new Active Node -> SSH to Passive Node. § Patch Passive Node

o Exit Maintenance mode. § ** optionally, before exiting maintenance mode, fail over back to the original Active node to 'preserve known state'. • Detailed:

o Place the vCenter HA cluster in maintenance mode: Configure -> settings -> vCenter HA -> edit -> Select ‘Maintenance Mode’.

o On the [Active] node login as root and establish a connection to the Witness Node. o ssh root@witnessIP-address o From the appliance shell of the witness node, patch the [Witness Node].

§ use software-packages utility o exit the SSH session to the witness node.

§ exit o Patch the [Passive Node]

§ From the active node ssh to passive node. § use the software-packages utility. § exit

o Initiate a Manual failover from Active Node (unpatched) to Passive Node (patched).


§ Configure -> Settings -> vCenter HA -> click ‘Initiate Failover’ -> Yes to start the failover. (perform synchronisation)

o Login to the appliance shell of the NEW active node. § Patch the new passive node.

o Exit maintenance mode.
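o The software-packages utility used above runs in the appliance shell; a hedged sketch of a typical sequence (assumes the patch ISO is attached to the node as a CD-ROM):
§ run: software-packages stage --iso
§ run: software-packages list --staged (review what will be installed)
§ run: software-packages install --staged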
• Certificate Management (https://kb.vmware.com/s/article/2097936)
o vSphere 6.x cert manager can be used to: § Implement Default Certificates - Option 4 § Replace VMCA Certificate with Custom CA Certificate - Option 2 (can be in-house CA or Commercial

CA) § Replace all vSphere Certificates with Custom CA Certificates and Keys - Option 5

§ the VMCA will no longer be responsible for issuing certificates § Machine and Solution User Certificates will be replaced.


Administer and Manage vSphere 6.x Resources

• When resource demands exceed the available capacity, attributes such as shares, reservations and limits can be used to determine the amount of CPU, RAM and storage resources provided to VMs.

• Shares o Shares, specify the priority of a VM to get resources during a period of contention. The VM configured with the

highest shares will have the highest priority to access more the the hosts resources. Shares set at High, Normal and Low (4:2:1). By default shares are set to Normal for both VMs and Resource Pools.

Setting | CPU Shares Value | Memory Shares Value
High | 2,000 shares per virtual CPU | 20 shares per MB of configured virtual memory
Normal | 1,000 shares per virtual CPU | 10 shares per MB of configured virtual memory
Low | 500 shares per virtual CPU | 5 shares per MB of configured virtual memory
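o Worked example (illustrative arithmetic based on the table above): a 2-vCPU VM at High gets 2 x 2,000 = 4,000 CPU shares, while a 2-vCPU VM at Normal gets 2 x 1,000 = 2,000; under CPU contention in the same host or resource pool, the first VM is entitled to roughly twice the CPU time of the second (4,000:2,000 = 2:1).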

• Reservation: o Specifies the minimum allocation guarantee to a VM. o The VM cannot power on if it cannot receive its reservation requirements.

• Limit: o Limits, specify the maximum amount of resources a VM can use. o A VM cannot exceed this limit.

• In VMware vSphere a maximum tree depth of 8 resource pools is supported. • Don't use resource pools to group VMs; use folders instead. • To review resource availability of a resource pool object -> navigate to: Select Resource Pool -> Monitor Tab -> Resource

Reservation -> ‘see available reservation’ • To create Resource Pools in a vSphere Cluster, DRS must be enabled first.

o Resource pools can be created on stand-alone hosts (which are not in a cluster). o Set share values in a 4:2:1 ratio. o deleting a resource pool doesn't delete any VMs contained within; VMs will be moved to the root/parent

resource pool. o When a virtual machine is removed from a resource pool the associated total number of shares with the

resource pool decreases. • DRS affinity/anti-affinity rules.

o VM-VM affinity rule - used to specify that selected VMs should run on the same host. The anti-affinity rule behaves in the opposite way, instead ensuring that VMs are kept on different hosts.


o VM-Host affinity rule - Allows you to control which VMs in the cluster can run on which Hosts and requires that at least one VM DRS group and at least one host DRS group are created before managing host affinity rules.

• Note: VM-Host rules require both a VM group and a host group; VM-VM rules are defined directly between VMs and do not require DRS groups. • DRS Automation Levels

o Manual - vCenter will only suggest placement (for both power on and general moves) o Partially automated - vCenter will perform auto-placement for powered on VMs, but will make

recommendations for migrating VMs. o Fully automated - vCenter will perform auto-placement for power on, and automatically migrate VMs. o The migration threshold (aggressiveness) can be used. There are Five levels (Conservative = 1, Most Aggressive

= 5) § 1. vCenter Server will only apply recommendations that must be taken to satisfy cluster constraints like

affinity rules and host maintenance. § 2. vCenter Server will apply recommendations that promise a significant improvement to the cluster's

load balance. § 3. vCenter Server will apply recommendations that promise at least good improvements to the

cluster's load balance. § 4. vCenter Server will apply recommendations that promise even a moderate improvement to the

cluster's load balance. § 5. vCenter Server will apply recommendations that promise even a slight improvement to the cluster's

load balance. • To save a snapshot of a resource pool

o Edit DRS -> untick ‘Turn ON vSphere DRS’. o Click ‘yes’ to save snapshot. (cluster name.snapshot)

• To restore resource pool o Edit DRS -> tick ‘Turn ON vSphere DRS’ o Right click cluster -> restore resource pool -> browse.

§ Important: VMs that were in the RP are reinstated. • If you want to retain the RP tree without exporting it, just set the DRS automation level to manual. Automatic actions are not

performed, just recommendations. • You can set a DRS automation level on a per-VM basis. Edit DRS -> 'Enable individual virtual machine automation levels' o to disable the option just uncheck it. If you recheck this box, all original automation levels are restored. • HA, DRS and DPM will NOT violate affinity rules. This may include:

o VMs not evacuated to place a host in maintenance mode o VMs not placed for power-on or load balance VMs o vSphere HA does NOT perform failover o vSphere DPM does not optimise power management by placing hosts into standby mode.

• When there are conflicting DRS rules, the oldest rule wins and the newest rule is disabled (disabled state). • Network-aware DRS

o This is a new feature in vSphere 6.5, where DRS now also considers network utilisation when it generates migration recommendations. If a host's connected physical uplinks have Tx and Rx utilisation greater than 80%, the virtual machine won't be placed on that host.

• A vMotion is NOT triggered by network load alone (DRS still triggers moves only on CPU/RAM). o If the host members of the cluster are all network saturated, DRS will NOT migrate VMs, to avoid further

performance degradation. • Predictive DRS is a new feature introduced in vSphere 6.5, that combined with vROPS allows you to balance workload for

certain VMs before resource utilisation spikes. o The information is based on metric history pulled from vCenter, vROPS computes and forecasts virtual machine

CPU/Memory utilisation. o The vROPS forecasts metrics are sent to DRS which acquires the information in advance (default 60 minutes

starting from the current time) and balances the cluster based on forecasted utilisation. • Storage DRS Clusters, this service allows you to keep space and I/O resources balanced in the datastore cluster, providing

recommendations for best virtual machine disk placement and migration. o Space load balancing amongst datastores - you can set a custom threshold o I/O load balancing among datastores o Initial placement of virtual disks based on space and I/O workloads.

• Storage DRS placement is evaluated every 8 hours or when one or more datastores exceeds the thresholds of space utilisation and I/O latency set by user.

• Storage I/O Control automatically sets the latency threshold that corresponds to the estimated latency when the datastore is operating at 90% of its peak throughput.


• Storage DRS Automation Levels

o Manual - Recommendations are made for placement and migrations only o Fully Automated - placement and migrations are automatic.

• If the storage DRS feature is disabled on a datastore cluster, all settings are preserved. When you re-enable Storage DRS, all settings are restored to the point when Storage DRS was disabled.

Backup and Recover a vSphere Deployment

• vSphere 6.5 comes with a new feature introduced in the VCSA that allows a file-based backup of the VCSA and the PSC (embedded & external deployments are supported) through the VAMI. This new capability allows you to backup core vCSA configuration, inventory and historical data.

• VAMI, file based backup supports the following targets: o HTTP, HTTPs, FTP, FTPs, SCP.

• When backing up the VCSA, common parts are included (inventory + configuration), optionally select ‘Stats + Events + Alarms + tasks’.

• The backup is NOT stored in the VCSA but streamed to the backup target.

o Restoring the VCSA from this file-based backup: mount the GUI installer and select restore.

§ This is a two step process. § 1. A new VCSA is deployed (this is done automatically as part of the restore operation).

§ Note: for a restore the Failed appliance is replaced. § 2. Copy the data from the file-based backup to the appliance. § The Restore only works if the backup was taken using the file-based method. § if you have enabled encryption you MUST enter an encryption password.

• A file-based restore of the PSC should be performed only when the last PSC installed in the domain fails. If a PSC fails you must decommission the failed PSC first, redeploy a new one, and join the existing SSO domain. The multi-master model of the PSC will allow replication to update the new PSC.

• vSphere Data Protection (vDP) Appliance o managed via the vSphere Web Client (plugin) o the VM is deployed with H/W version 7 and can be stored on VMFS, NFS and vSAN datastores. o requires port 902 between the appliance and the ESXi host to be open.

• Data Protection Appliance Deployment o Deploy OVF o You can deploy up to 20 vDP appliances per vCenter Server (each appliance by default is deployed with 4 vCPUs,

4GB RAM). § Storage deployment options: 0.5TB, 1 TB, 2TB, 6TB, 8TB.

o In addition you can also deploy up to 8 external proxies, used for SCSI hot-add when vDP cannot access the VM datastores directly.

• vDP agents are used to support granular guest-level backup and recovery. o MS Exchange, Sharepoint, SQL server 32 + 64bit versions available.

§ ‘Microsoft Exchange Server 64bit’


§ ‘Microsoft Sharepoint Server 64bit’ § ‘Microsoft SQL Server 32bit’ § ‘Microsoft SQL Server 64bit'

o the agent is in .msi format, and can be obtained from the vDP appliance o Installing the agent: from command prompt, run as: msiexec /i VmwareVDP<servertype>-windows-x86_64-

<version>.msi • You can back up 8 virtual machines simultaneously, and up to 24 with proxies deployed. • CBT is used, providing incremental backups.

o CBT can also be used for restore, BUT it requires restoring to the same datastore, and the original VM folder and virtual disks must be present.

• Virtual machines can be renamed as part of the restore process. • Using vDP to back up the VCSA and PSC is supported. • Note: when backing up the VCSA and PSC, if the filesystem cannot be quiesced, a successful restore is not guaranteed. • Due to the way vDP leverages vSphere snapshot functionality the following is not supported:

o Independent RAW Device Mappings o Independent - Virtual compatibility Mode, and RDM physical compatibility mode. o Virtual Volumes NOT supported.

• The following factors influence size of vDP appliance / storage requirements: o Number and type of VMs to be backed up o Amount of data o Retention periods o Typical change rate

• vDP should not be deployed to the same datastore as the virtual machines that it needs to protect. • vDP appliance management interface: https://ipOfvDP:8543/vdp-configure

o username: root / changeme • vDP does not support Flash Read Cache (remember the vDP appliance is deployed at H/W version 7). During backups, if a VM uses Flash Read Cache, vDP will use Network Block Device (NBD) mode backup, which could impact backup performance.

• vDP backup limitations: vDP cannot be used to back up: o vDP appliances o vSphere Storage Appliances o Templates o Secondary fault tolerant nodes o Proxies o Avamar VE servers o virtual machines with special characters in their inventory names (dash and underscore are okay).

• vDP restore operations could fail for the following reasons: o Restores to VMs with SCSI bus sharing configured are not supported o if a virtual machine contains snapshots, snapshots must be removed for a successful restore. If the target VM

contains snapshots the restore job fails. • The restore to original location is not allowed if the original virtual disk of the VM to restore has been removed. In this

case you need to restore the virtual disk to an alternate location. • You need a backup job scheduled for a VM; otherwise you cannot back it up. • vSphere Replication

o vSphere Replication uses the FastLZ compression library, providing minimal CPU overhead. o compression is configured during replication setup, but performed by the ESXi host. o to support end-to-end compression, both source and target site must be running version 6.x; if an earlier version 6.0 source host is used, data compression is not supported ('enable compression for VR data' is disabled). o vSphere Replication uses CBT to replicate changed blocks from the source site to the destination site. Enabling

compression improves performance and saves bandwidth. o vSphere Replication supports an RPO of 5 minutes to 24 hours; if a file-system-quiesced copy is taken, the minimum RPO is 15 minutes. o any configured point-in-time instances (snapshots) will be visible at the destination site. When the replicated

VM is recovered, the replication instances are converted to snapshots. § At the secondary site, select the desired snapshot from the point in time instance (basically just a

snapshot). § vSphere Replication supports a maximum of 24 snapshot instances. § If the retention period is set to 15 days the snapshot will be kept for that period, at the end of the 15

days the snapshots are deleted by the garbage collection process. o Deploy vSphere Replication:

§ deploy OVF template, select following files: § vSphere_Replication_OVF10.ovf


§ vSphere_Replication-system.vmdk § vSphere_Replication-support.vmdk

§ if deploying add-on (vSphere Replication Server) § vSphere_Replication_AddOn_OVF10.ovf § vSphere_Replication-system.vmdk § vSphere_Replication-support.vmdk

o Maximum of 9 VR servers supported (vSphere Replication) + 1 VRM which can also act as VR server. o a VRM server is needed for each site and registered with vCenter and paired. o Upgrading vSphere Replication.

§ This can be done via the VAMI of the vSphere Replication appliance using an update .iso. § Update Manager can also be used with the update .iso.

o Don't use quiescing if you don't need it. o VSS quiescing with vSphere Replication (vREP) and Virtual Volumes is not supported. o Configuring Reverse Replication

§ vSphere Replication can use the original source disk as seeds to minimise bandwidth used. § to do this, you first need to make sure the original VM is unregistered from vCenter Inventory.

Deploy and Customise ESXi Hosts

• Host Profiles, create. o 1. Extract Profile from hosts (reference host, select one) o 2. Enter name for Host Profile (host profile will be created, using reference host information). o 3. Attach Host Profile to cluster (or host) o 4. Customise Host Profile (optional: you can skip this). o 5. Check Host Profile Compliance o 6. Remediate (Host must be in Maintenance Mode)

• Use the Edit Host Customisations (equivalent to the old answer file) • Host profiles can be exported, and will use the Profile.vpf extension. Passwords are NOT exported. • The Auto-Deploy Service must be started (disabled by default in the VCSA) • Auto-Deploy

o Auto-deploy installation is a way to PXE boot your ESXi hosts from a central Auto Deploy Server. o This method relies on the use of master images, with a set of rules to deploy ESXi. o Auto-deploy can be used in conjunction with Host Profiles to ensure a consistent configuration. o Auto-deploy requires Enterprise Plus - licensing.

• Auto-Deploy Architecture / Components o PXE Boot - This must be configured in the hosts to connect to the TFTP server. o TFTP Server - Used to retrieve the files from the boot server. You retrieve the TFTP Boot Zip file and copy it to

the root of the TFTP folder. § To download the image: vCenter -> Configure -> Auto-Deploy -> ‘Download TFTP Boot Zip'

o DHCP Server - This provides an IP address to newly booted hosts. § Requires two options: § 66 - Specifies the boot server hostname § 67 - Specifies the Bootfile name: undionly.kpxe.vmw-hardwired
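§ A hedged sketch of the equivalent ISC dhcpd.conf entries (subnet and addresses are examples; next-server and filename carry options 66/67):
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.5;                # TFTP server (option 66)
  filename "undionly.kpxe.vmw-hardwired"; # boot file (option 67)
}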

o Auto Deploy Server - Which provides the ESXi images and Host Profile. o DNS needs to be configured and Hosts need forward/reverse records.

• Auto-Deploy Boot Process o ESXi host boots and retrieves DHCP address

§ options 66/67 on the DHCP server provide the boot server IP (TFTP server) and the boot file name o Once connected to the TFTP server, the host downloads the undionly.kpxe.vmw-hardwired boot loader. o An HTTP request is made to the Auto-Deploy Server (VCSA). o Once the connection is acquired, the rules engine is used to get host info, then the components are streamed to the ESXi host. o The ESXi host boots using the assigned image profile - if Host Profiles are used, the host profile is applied. o When the host boots it is added to the vCenter where Auto-Deploy was registered. o Virtual machines can be migrated to the new ESXi host as soon as the host is added to vCenter, if it is part of a

DRS-enabled cluster. • Image Profiles - contain VIBs; VIBs contain drivers, patches and 3rd-party software specific to the hardware type.

o To create an image profile you need to first create a software depot. o download an offline bundle from vmware.com, create a new software depot, and upload the .zip.

§ Online - via HTTP § Offline - via .zip

• Three different installation types can be used depending on the business requirements o Stateless - The ESXi image is loaded directly into host memory as it boots.


o Stateless caching - The ESXi image is cached to local disk, remote disk or USB. If no Auto-Deploy servers are available the ESXi host boots from local cache. When the host has booted successfully the auto-deploy image is loaded to memory. This is a good use case in the event the Auto-Deploy server is experiencing congestion, or had a short outage.

o Stateful - The image is cached to local disk, remote disk or USB. Similar to above with the exception that the host first boots from local cache, then talks to auto-deploy. Within the host BIOS set boot order to local disk.

o Caching is set under Advanced Configuration -> System Image Cache Configuration -> System Image Cache Profile Settings.
• Note: offline or un-presented devices (LUNs) are not captured by the host profile.

Configure and Administer vSphere and vCenter Availability Solutions

• Enable HA, new cluster -> turn on required features based on business requirement o vSphere HA - 'Turn on’ checkbox

• vSphere Availability (formerly vSphere HA)

• APD, PDL States (part of a new component called VMCP) - these are disabled by default o Permanent Device Loss (PDL) - Occurs when the storage array issues a SCSI sense code, indicating that a specific device is unavailable (a typical case is a failed LUN). § A condition that occurs when a storage device permanently fails or is administratively removed. It is

NOT expected to return. § Recovery:

§ Power off / unregister VMs § Unmount datastore § Rescan all ESXi hosts that had access to the device § If rescan is not successful ensure there is no existing I/O to the device.

§ How vSphere can deal with PDL datastore failures: (UI) § Disabled (no action will be taken on affected VMs) § Issue Events (no action will be taken, but events will be generated) § Power off and restart VMs (all affected VMs will be terminated and vSphere HA will attempt

to restart the VMs on hosts that still have connectivity to the datastore). o All Paths Down (APD) - Usually related to SAN networking where there is NO active path to target storage

device. § Occurs then the storage device is not accessible and no paths are available § This may only be a temporary condition. § How vSphere can deal with APD datastore failures:

§ Disabled (no action will be taken on affected VMs) § Issue Events (no action will be taken, but events will be generated)


§ Power off and restart VMs - **Conservative** restart policy (all affected VMs will be powered off and vSphere HA will attempt to restart VMs if another host has connectivity to the datastore). § Power off and restart VMs - **Aggressive** restart policy (all affected VMs will be powered off and vSphere HA will always attempt to restart VMs).

• HA will NOT violate DRS affinity rules • Note: if VM Monitoring is unchecked, you cannot enable Application Monitoring.

o VM Monitoring ‘Resets’ individual virtual machines if their VMware tools heartbeats are not received within a set time.

o Application Monitoring ‘Resets’ individual virtual machines if their in-guest heartbeats are not received within a set time.

• vSphere HA Host Failure States: o Failed - Host has failed, VMs are restarted on alternate hosts o Isolated - Host is considered isolated when it cannot communicate with master host and it cannot ping its

default isolation address. § Host will then carry out its isolation response:

§ Leave Powered On § Power Off - when network isolation occurs, all VMs are powered off and restarted on another host (this is a hard stop). This is initiated on the 14th second, with a restart on the 15th second. § Shut down - when network isolation occurs, all VMs are shut down via VMware Tools and restarted on another host. If shutdown doesn't succeed within 5 minutes the VMs are powered off.

o Partitioned - Host is considered partitioned when it loses connectivity with the master host. The host is not isolated in this case; the master HA host cannot communicate with the host over the network, but can still do so using the heartbeat datastores.

• vSphere Availability Settings o Failures and Responses - Provides settings for host failure responses, host isolation, VM monitoring and VM Component Protection o Proactive HA failures and responses - Provides settings for how Proactive HA responds when a provider has

notified its health degradation to vCenter, indicating a partial failure of that host o Admission Control - Enable or disable admission control for the vSphere HA cluster and choose a policy for how

it is enforced. o Heartbeat Datastore - Specify preferences for the datastores that vSphere HA uses for datastore heartbeating o Advanced Options - Customise vSphere HA behaviour.

• Proactive HA o You can configure how Proactive HA responds when a provider has notified its health degradation to vCenter,

indicating a partial failure of that host. o Must have DRS enabled o Proactive HA Failure Responses

§ Automation Level § Manual - vCenter Server suggests migration recommendations § Automated - Virtual Machines are migrated to healthy hosts and degraded hosts enter into

‘quarantine’ or maintenance mode (depending on the configured Proactive HA automation level).

§ Remediation: § Quarantine mode for all failures - Balances performance and availability. § Quarantine mode for moderate and Maintenance mode for severe failures (Mixed) -

Balances performance and availability by avoiding usage of moderately degraded hosts, provided VM performance is not affected, and ensures that VMs DO NOT run on severely degraded hosts.

§ Maintenance mode for all failures - Ensures that all virtual machines do not run on partially failed hosts.

• Admission Control ensures that there is enough failover capacity in the cluster to support VM power-on. o vSphere HA uses the admission control policy to determine how many host failures the cluster can tolerate. Admission control therefore reserves an amount of resources for virtual machines in the event of a host failure. o Powering on, migrating, or increasing the CPU/RAM reservation of a VM may be prevented by admission control. o There are three (excluding disabled) ways to define host failover capacity: o Disabled (allows VM power-ons to violate availability constraints) o Slot Policy (Powered-on VMs)

§ Slot size policy calculates the slot size based on the maximum CPU/memory reservation + overhead of all powered-on VMs. § or set a fixed slot size (CPU: 32MHz + Memory: 100MB).


o Cluster Resource Percentage § The total resource requirements for powered on virtual machines are calculated by summing up the

CPU reservations (by default this is 32MHz + 0MB RAM, plus overhead). You can adjust the CPU reservation with das.vmcpuminmhz.

§ In 6.5 this is done automatically. 4 host cluster = 25%, 5 host cluster = 20% etc... § You can override calculated failover capacity *this then ignores the ‘Host Failures Cluster

Tolerates’ setting (greyed out)

§ To calculate cluster failover capacity:
§ CPU: (Total Cluster CPU - Total Required CPU for VMs) / (Total Cluster CPU) = %
§ Memory: (Total Cluster RAM - Total Required *reserved* RAM for VMs) / (Total Cluster RAM) = %
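§ Worked example (illustrative numbers, not from the source): a 4-host cluster with 10 GHz CPU and 64 GB RAM per host totals 40 GHz / 256 GB. If powered-on VMs reserve 8 GHz and 96 GB (including overhead): CPU failover capacity = (40 - 8) / 40 = 80%; memory failover capacity = (256 - 96) / 256 = 62.5%. With a reserved failover capacity of 25%, both values exceed the threshold, so admission control allows further power-ons.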

o Dedicated failover host.

§ you can't use this host; it is reserved. You cannot power on VMs or vMotion workloads to this host. o You can also set 'Performance degradation VMs tolerate %'.

§ 0% raises a warning if there is insufficient failover capacity to guarantee the same performance after VMs restart

§ 100% disables warning. o vSphere Availability - Advanced Options

§ das.isolationaddress[x] - sets addresses used to detect isolation, 0-9 (max of 10) § das.ignoreRedundantNetWarning : true (default false - but not set). § das.useDefaultIsolationAddress - determines if the default gateway is used. § das.ignoreInsufficientHbDatastore

• vCenter HA - vCenter HA protects vCenter Server Appliance against host and hardware failures, using an active-passive architecture.

o Architecture: § vCenter Server appliance. § Deployed in three node configuration, Active Node, Passive Node + Witness Node

§ The active node is cloned twice. § Three ESXi hosts / Separate datastores recommended. § Important: Limitations, Does NOT support ELM, Does NOT support PSC replication. § Only the active Node has an active management interface (public IP). The three nodes communicate

over a private network, known as the vCenter HA network. § The active node and passive node continuously replicate data.

§ The Active Node:

§ Runs the active vCenter Server Instance § Uses a public IP address for the management interface § Uses the vCenter HA network for replication of data to the Passive node. § Uses the vCenter HA network to communicate with the witness node.

§ The Passive Node: § Is initially cloned from the Active Node § Constantly receives updates from the Active Node over the vCenter HA network. § Automatically takes over the role of the Active node if a failure occurs.

§ The Witness Node: § Is a lightweight clone of the Active Node § Provides quorum to protect against split brain situations.


o Requirements: § ESXi 5.5 or later § Three hosts recommended and separate datastores (*use DRS). § Deployment size 'small' - tiny not recommended. Do not use tiny in production environments. § vCenter HA is supported on VMFS, vSAN and NFS only (Virtual Volumes NOT supported) § Networking: latency on the vCenter HA network between Active, Passive and Witness nodes must be less than 10 ms.

§ the vCenter HA network must be on a separate subnet from the management network. § vCenter HA requires vCenter Standard licensing (only 1 license required). § vCenter HA can be set up with an embedded PSC or an external PSC.

§ with embedded PSC, data synchronisation includes PSC data. o Deployment:

§ Right click vCenter Inventory Object -> vCenter HA Settings -> § Basic Configuration: The system creates and configures the clones. § Advanced Configuration: The user creates and configures the clones.

o Failover: § You can manually initiate a failover and have the Passive node become the Active Node. § vCenter HA supports two types of failover:

§ Automatic Failover - The passive node attempts to take over the active role in case of Active Node failure.

§ Manual Failover - The user can force a Passive node to take over the Active role by using the 'Initiate Failover' action.

o Cluster Modes § Maintenance Mode

§ When vCenter HA cluster is placed in maintenance mode, automatic failover is DISABLED. Only manual failover is possible.

§ Data is still synchronised § Disabled Mode

§ Failover is disabled. § Synchronisation disabled.

§ Enable vCenter HA - Mode § Failover is automatic, and data synchronised.

• vCenter HA shutdown sequence: o Passive Node o Active Node o Witness Node

• Note: if all nodes lose connectivity, the Active Node STOPS servicing client requests. o If connectivity cannot be restored (isolated nodes will auto-join), it may be necessary to destroy the vCenter HA cluster. o Power off and delete the Passive and Witness Nodes. o SSH to the Active Node, run: destroy-vcha -f o Reboot the Active Node (now standalone). o Perform the HA configuration again.

• VMware FT o When FT is turned on for a powered-on Primary VM, the entire state of the Primary VM is copied and the Secondary VM is created. o The Fault Tolerance status displayed for the virtual machine is 'Protected'. o If the Primary VM is powered off:

§ The Secondary VM is immediately created and registered to a host in the cluster. § The Secondary VM is not powered on until the Primary VM is powered on. § The Status for the VM is displayed as ‘Not Protected, VM not running’.

o Requirements § vSphere HA enabled § FT VMkernel and vMotion network required. § VT enabled in BIOS § Intel EPT / AMD RVI

§ Intel Sandy Bridge or Later, AMD Bulldozer or later. § Use 10GbE logging network. § VM is either primary (protected) or secondary § FT enabled VMs cannot run on the same ESXi host. § vCenter SSL certificate checking must be enabled.

o Limitations: § The maximum number of fault tolerant VMs allowed on a host in the cluster. Both Primary and

Secondary VMs count toward this limit (The default is 4) § das.maxftvmsperhost


§ das.maxftvcpusperhost - the maximum number of vCPUs aggregated across all fault tolerant VMs on a host. vCPUs from both Primary and Secondary VMs count toward this limit. The default is 8.
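A minimal sketch of overriding these limits via the cluster's HA advanced options (option names as above; values are illustrative only):

Cluster -> Configure -> vSphere Availability -> Edit -> Advanced Options:
das.maxftvmsperhost = 8
das.maxftvcpusperhost = 16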

o vSphere Features
§ Snapshots NOT supported: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.avail.doc/GUID-F5264795-11DA-4242-B774-8C3450997033.html
§ VADP supported (snapshots via VADP only).
§ vMotion of both Primary and Secondary VM supported.
§ DRS supported, but you MUST use EVC mode.
§ Storage vMotion NOT supported. To migrate storage, temporarily turn off FT, perform the Storage vMotion action, then turn FT back on.
§ Virtual Volumes datastores NOT supported.
§ SPBM NOT supported.
§ I/O filters NOT supported.
§ Linked clones NOT supported.
§ VMCP: if the cluster has VMCP enabled, overrides are created for FT VMs that turn this feature off.

o Incompatible Devices
§ Physical mode RDMs.
§ CD-ROM/floppy devices.
§ USB / sound devices.
§ NPIV.
§ NIC passthrough.
§ Hot-plug devices.
§ Virtual EFI (the VM must use BIOS firmware).
§ VMDKs of 2 TB or greater.

o Licensing:
§ vSphere Standard & Enterprise allow up to 2 vCPUs.
§ vSphere Enterprise Plus allows up to 4 vCPUs.

o Turning On FT § Right click VM -> Fault Tolerance -> Turn On Fault Tolerance.

§ Select a datastore for the secondary VM placement. § Select a host on which to place the secondary VM.

o Turning Off FT § Right click VM -> Fault Tolerance -> Turn Off Fault Tolerance.

§ The Secondary VM is automatically deleted (optionally use ‘Suspend Fault Tolerance’ instead if you don't want to lose the Secondary).

§ If the Secondary VM resides on a host that is in maintenance mode or not responding, you cannot turn off FT; use suspend instead.

o Migrate Secondary § Right click VM -> Fault Tolerance -> Migrate Secondary

o Test Failover § Right click VM -> Fault Tolerance -> Test Failover

§ The VM is failed over to the Secondary, and a new Secondary VM is started, placing the VM back in a Protected state.

o Test Restart Secondary
§ Right click VM -> Fault Tolerance -> Test Restart Secondary

Administer and Manage vSphere Virtual Machines

• USB devices: only one virtual machine can access a USB device at a time; the connected USB device will NOT be available to other virtual machines.

• To access a USB device from a virtual machine, the VM requires a USB controller.
• USB passthrough technology allows the virtual machine direct access to USB devices that are attached to the hosting ESXi host, and requires the following:
o USB arbitrator - enabled by default; manages the scanning of the host to detect new devices and is responsible for routing USB devices to the correct VM.
o USB controller - each VM requires a controller to access a USB device. vSphere 6.5 supports up to 8 virtual controllers per VM. The controller must be present before adding the USB device. A maximum of 15 USB controllers can be managed by the USB arbitrator component.
o USB devices - up to 20 USB devices can be added to a VM.
§ When adding a USB controller to a VM you can choose between USB 2.0 and USB 3.0 only. USB 2.0 is used by default.
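For reference, a sketch of the .vmx entries behind the two controller types (standard .vmx option names, shown for illustration; they are normally added via the UI step below):

usb.present = "TRUE"         # USB 2.0 (EHCI+UHCI) controller
usb_xhci.present = "TRUE"    # USB 3.0 (xHCI) controller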

Page 34: VCP 6.5 Study Notes – Exam Number 2V0-622 Note: Review ... · o starting with vSphere 6.0, the local administrator of the PSC (root for VCSA) or the windows administrator if deployed

§ Right click VM -> Edit Settings -> New Device -> USB Controller.
• The following vSphere features DO NOT support USB passthrough:
o vSphere DPM
o vSphere FT
• The following vSphere features ARE supported:
o vSphere vMotion
o vSphere DRS

• When a virtual machine with an attached USB device is migrated with vMotion to a different host, the VM remains connected to the USB device until it is powered off or suspended. To restore connectivity after that, you need to migrate the VM back to the host where the USB device is attached.

• To avoid data loss when a VM is connected to a USB device:
o Remove any USB devices before hot adding memory, CPU or PCI devices.
o Make sure no data transfers are in progress before suspending a VM.
o Make sure USB devices are not attached to VMs before changing the state of an ESXi host's USB arbitrator.

• Note: hot adding USB CD/DVD-ROM devices is not supported.
• VMs can take advantage of high-performance graphics cards such as NVIDIA GRID vGPUs.
o The host requires both the card and the driver to be installed.
o Edit VM -> add ‘Shared PCI Device’.
o Expand the device -> select a GPU profile.
o Click ‘Reserve All Memory’.
§ Note: some operations are unavailable; you cannot snapshot, vMotion, or suspend a VM with some shared PCI devices.
o Note: the VM must be powered off to add the device, and VM compatibility must be 6.0 or later.
• Up to 6 PCI devices can be connected to a VM.
• PCI passthrough allows direct connectivity to a PCI device, bypassing the VMkernel and reducing CPU demand.

o A VM with device passthrough CANNOT be suspended, vMotioned (DRS), or have snapshots taken (except with Cisco VM-FEX).

o DirectPath I/O is a 1:1 mapping that cannot be shared with more than one VM, unlike Single Root I/O Virtualization (SR-IOV), which can be shared with multiple VMs.

• SR-IOV also bypasses the VMkernel, reducing latency and improving CPU efficiency.
• SR-IOV is mainly used when a VM has high network demand; the configuration is a two-stage process (a CLI sketch follows this list):
o 1. Configure the physical adapter on the ESXi host: Physical adapters -> edit the VMNIC adapter -> SR-IOV -> set the status to Enabled, then specify the number of virtual functions (changes do not take effect until reboot).
o 2. Edit the VM, add a new network adapter -> select ‘SR-IOV Passthrough’ -> select the adapter configured in step 1.
§ Set the guest OS MTU / reserve all memory.
o Note: some features such as vMotion are not supported.
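A hedged CLI sketch of step 1 (the driver module name ixgbe and the VF counts are assumptions for an Intel 10GbE NIC; substitute the driver and values for your hardware):

# Set the number of virtual functions per port via the NIC driver module parameter:
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"
# Reboot the host for the change to take effect, then verify (if the namespace exists in your build):
esxcli network sriovnic list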

• The maximum number of vCPUs that can be assigned to a VM is 128, but the number of assigned vCPUs cannot exceed the number of physical cores in the ESXi host if hyper-threading is disabled.

• If you assign more than 8 vCPUs to a VM, vNUMA is enabled and the ESXi host distributes the virtual machine across multiple NUMA nodes if one does not suffice.

• Note: if CPU hot add is enabled, vNUMA is disabled and uniform memory access with interleaved memory access is used instead.
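A minimal sketch of the related .vmx advanced setting (numa.vcpu.min is a documented option; the value shown is its usual default):

numa.vcpu.min = "9"    # vNUMA is exposed to the guest once the VM has at least this many vCPUs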

• Content Library
o You can mount an ISO image directly from a Content Library.
o Content Libraries can be used to store VM templates, vApp templates, OVF and ISO files.
o There are two types of Content Libraries:
§ Local - items are stored on a single vCenter Server and can be published to allow users from a different vCenter Server to access them via a subscription.
§ Subscribed - subscribing to a published Content Library creates a subscribed library. The content of a subscribed library is kept up to date through an automatic or on-demand synchronisation with the source published library. Subscribers CANNOT edit or modify the contents; they can only use them.

o New features in vSphere 6.5 § Mount ISO files directly from the library § Update existing templates § Apply guest OS customisation.

• When publishing a Content Library, select ‘Optimise for syncing over HTTP’ if the library will be consumed by a remote vCenter Server not using Enhanced Linked Mode.
o Example subscription URL: https://IPofVC/cls/vcsp/lib/f3189b14-124c-442d-b583-d1a203de8c76/lib.json
• You cannot unpublish a library if the option to optimise syncing over HTTP is used.
o You also cannot use this type of library to publish virtual machines.


o The library can still be deleted.

• Once a Content Library has been created, other vCenter Servers in the same SSO domain can subscribe to it. If a remote vCenter Server (not in the same SSO domain) is used, use the ‘Optimise for syncing over HTTP’ option.

• Subscribed library:
o Add the https:// path to the .json file.
o Enable authentication (if used).
o Download all library content immediately, or choose ‘Download library content only when needed’.

• Content libraries are not direct children objects of the vCenter Server. Content libraries are under Global Root (adjacent to vCenter). Permissions set at vCenter level are not applied to content libraries (or Tags). Permissions to Content Libraries must be granted to users as Global Permission.

• To manage a library, a user needs to be granted the Content Library Administrator (sample) role.
• Using VMware vCenter Converter

o This stand-alone tool is used to P2V / V2V servers.
o Components: stand-alone server, agent and client.
§ Installation can be Local or Client-Server.
§ Client-Server allows you to manage the conversion remotely.
§ Local assumes the local machine will be converted.

• The following volumes/disk types are unsupported: o RAID, GPT/MBR Hybrid Disks, RDM Disks

• The following volumes/disk types are supported: o BASIC Volumes, All types of dynamic disks, GPT/MBR

• The source machine can be powered on or powered off (no agent is deployed to a powered-off VM).
• vCenter Converter 6.2 doesn't support virtual hardware versions above 11.
• Partitions can be resized during the conversion process, but this is slower than block-level conversion.

o Synchronisation is only supported with block-level conversions (volume-based cloning).

vSphere Troubleshooting

• FT o Unable to enable FT, check HV enabled in the system BIOS, check that servers support Hardware Virtualisation. o Unable to power on secondary VM, there are no compatible hosts that can accommodate it.

§ Check hosts have HV enabled, and that they support Hardware MMU virtualisation. Check that all datastores are accessible and that there is available capacity. Check no hosts in maintenance mode.

o Secondary VMs can degrade the performance of the Primary VM. Check that the supporting host is not overcommitted (CPU).

§ For FT networking contention, use vMotion to move the Secondary VM to another host with fewer FT VMs contending on the FT logging network.

§ Verify storage access; validate that VM access to storage is not asymmetric.
§ For CPU contention, set an explicit CPU reservation for the Primary VM. The reservation will be applied to both the Primary and Secondary VM.
§ vSphere DRS does NOT load balance FT VMs (unless they are using legacy FT); DRS is only used for placement during power-on of the Secondary VM.
§ Storage vMotion NOT supported (turn off FT, then migrate the virtual disk). In legacy FT mode, Storage vMotion is supported when the workload is powered off.
• USB Passthrough

o Unable to vMotion the VM: you must enable vMotion support for all USB devices that are connected to the virtual machine from a host. If one or more devices are not enabled for vMotion, the migration will fail.

§ Make sure that the devices are not in the process of transferring data before removing them § Re-add and enable vMotion for each affected USB device.

o You are unable to connect a USB device to a host § Check that the USB arbitrator is not being used for USB passthrough from an ESXi host to a VM.


§ You may need to restart the usbarbitrator service (a command sketch follows): /etc/init.d/usbarbitrator stop -> disconnect the USB device and reconnect it, then start the service again.
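The restart sequence as a shell sketch (commands taken from the note above):

/etc/init.d/usbarbitrator stop
# Disconnect and reconnect the USB device, then:
/etc/init.d/usbarbitrator start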

• Orphaned VMs
o Possibly due to an unsuccessful failover, or the VM being unregistered directly on the host.
§ Try moving the VM to another host, or remove the VM from inventory, browse to the .vmx file and re-register the VM (a CLI sketch follows).
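A hedged CLI alternative for re-registering the VM directly on a host (the datastore and VM names are placeholders):

# List the VMs currently registered on the host:
vim-cmd vmsvc/getallvms
# Register the orphaned VM from its .vmx file:
vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vmname>/<vmname>.vmx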

• VM does not power on after cloning
o Possibly due to lack of swap file space.
§ Either create a memory reservation, or increase the amount of space available for the swap file by moving other virtual machine disks off the datastore (migrate VMs using ‘change storage only’).
§ Alternatively, change the location where swap files are created: Cluster -> Configure tab | Configuration -> General: Swap file location.
§ Swap file location options:
§ Virtual Machine Directory - store the swap files in the same directory as the VM.
§ Datastore specified by host - using a datastore that is not visible to both hosts during vMotion might affect the vMotion performance of the affected virtual machines.
• Troubleshooting vSphere HA on Hosts

o vSphere HA Agent is in the Agent Unreachable state
§ Possible networking condition; try 'Reconfigure vSphere HA' on the host (right click the host in the cluster -> select 'Reconfigure vSphere HA').
o vSphere HA Agent is in the Uninitialised state
§ vSphere HA cannot monitor the state of VMs; a possible cause is the host not having access to any datastores.
§ Check the ESXi firewall / check that another service is not using port 8182.
o vSphere HA Agent is in the Initialisation Error state
§ vCenter was unable to connect to the host while the vSphere HA agent was being installed or configured.
§ The host may not have entered the Master or Slave state in the specified period.
§ Check for sufficient free space on the host's local datastore (otherwise the agent install cannot complete).
§ Check host free memory.
§ Check for a pending reboot (5.x or later).

o vSphere HA Agent is in the Host Failed state
§ The host might have failed; check the state of the host (check datastores, specifically those used as heartbeat datastores).
o vSphere HA Agent is in the Network Partitioned state
§ A host reports this state when the following conditions are met:
§ The vSphere HA master host to which vCenter Server is connected is unable to communicate with the host by using the management (or vSAN) network.
§ The host is not isolated.
§ Possible causes: incorrect VLAN tagging, or failure of the physical NIC or switch.
o vSphere HA Agent is in the Network Isolated state

§ When a host is in the Network Isolated state, there are two things to consider: the isolated host and the vSphere HA agent that holds the master role.
§ On the isolated host, the vSphere HA agent applies the configured isolation response to VMs.
§ If the vSphere HA master agent can access one or more datastores, it monitors the VMs that were running on the host when it became isolated and attempts to restart any that were powered off or shut down.
§ The host is network isolated when:
§ The host cannot ping its configured isolation address.
§ The vSphere HA agent on the host is unable to access any of the agents running on the other cluster hosts.
§ If the host is a member of a vSAN cluster, it is determined isolated if it cannot communicate with the other HA agents in the cluster and cannot reach the isolation addresses. Although vSphere HA agents use the vSAN network for inter-agent communication, the default isolation address is still the gateway of the host. In the default configuration, both networks must fail for the host to be declared isolated.

§ Resolve the networking problem impacting the host(s). A sketch of tuning the isolation addresses follows.
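A minimal sketch of overriding the isolation address via HA advanced options (the IP shown is a placeholder):

Cluster -> Configure -> vSphere Availability -> Edit -> Advanced Options:
das.isolationaddress0 = 192.168.1.1        # additional address to ping, e.g. a vSAN network gateway
das.usedefaultisolationaddress = false     # skip the default gateway if it is not a meaningful check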

• Auto Deploy
o Main issues revolve around connectivity to either DHCP (unable to get an IP address) or the TFTP server. Check the core architectural components first.

§ Ensure core services are running / healthy (a command sketch follows).
§ Check that Auto Deploy is configured (check the Auto Deploy service is running - built-in on the VCSA, deployed separately on Windows).
§ Check that image profiles exist and that the rules are active.
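A hedged sketch of checking the Auto Deploy service from the VCSA shell (vmware-rbd-watchdog is the Auto Deploy service name on the appliance):

service-control --status vmware-rbd-watchdog
# If it is stopped:
service-control --start vmware-rbd-watchdog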

• vCenter Server and vSphere Web Client


o Upgrade fails with the error ‘unable to delete VC Tomcat service’ - the service may be unable to release locked files. Reboot the vCenter Server.

o The DB User entered does not have the required permissions needed to install and configure vCenter Server with the selected DB. Error(s): %s

§ The Microsoft SQL database is set to an unsupported compatibility mode.
o Unable to start the virtual machine console: HTTP ERROR 404 Problem accessing /. Reason: Not Found
§ Check that another program is not using port 9443, the default port used by the HTML5 VM console.
o vCenter Server / ESXi Host Certificates

§ vCenter unable to connect to managed hosts:
§ Reconnect the host to vCenter (may require you to log in with root credentials).

§ Cannot configure vSphere HA when using custom SSL certificates: vSphere HA cannot be configured on this host because its SSL thumbprint has not been verified.

§ Reconnect the host to vCenter, accept the host SSL certificate, then enable vSphere HA on the host.
• Troubleshooting Availability

o Admission Control (cluster turns red)
§ Insufficient failover resources.
§ Hosts can be in maintenance mode or disconnected (and therefore are not contributing to HA).
§ Admission control only considers resources from healthy hosts.
o Unable to power on a VM due to insufficient failover resources
§ Same as above; check hosts are in a healthy state.
§ Check there are sufficient resources; there may not be enough slots to support your new workload.
§ View the Advanced Runtime Info pane that appears in the vSphere HA section of the cluster's Monitor tab. This shows the slot size; if it is too high, consider reviewing the admission control policy, or cap the slot size if results are skewed by VMs with large reservations (a sketch follows).
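A minimal sketch of capping the slot size via HA advanced options (values are illustrative only):

Cluster -> Configure -> vSphere Availability -> Edit -> Advanced Options:
das.slotcpuinmhz = 500     # upper bound for the CPU component of the slot size
das.slotmeminmb = 1024     # upper bound for the memory component of the slot size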

• Troubleshooting Heartbeat Datastores
o When the master host in a vSphere HA cluster can no longer communicate with a subordinate host over the management network, the master host uses datastore heartbeating to determine whether the subordinate host has failed or is in a network partition state. If the subordinate host has stopped datastore heartbeating, that host is considered to have failed and its VMs are restarted elsewhere.

• Troubleshooting Resource Management
o Storage DRS
§ Storage DRS is disabled on a virtual disk when:
§ The virtual machine's swap file is host-local and cannot be moved.
§ The home disk of a VM is protected by vSphere HA and relocating it would cause loss of vSphere HA protection.
§ The disk is a CD-ROM/ISO file.
§ The disk is an independent disk.
§ The virtual machine is a template.
§ The virtual machine has FT enabled.
§ The virtual machine is sharing files between disks.
§ Datastore cannot enter Maintenance Mode (entering maintenance mode remains at 1% status) when:
§ Storage DRS is disabled on the disk.
§ Storage DRS rules prevent Storage DRS from making migration recommendations for the disk.
§ Workaround: set the advanced parameter IgnoreAffinityRulesForMaintenance to 1.
§ Storage DRS cannot operate on a datastore when:
§ The datastore is shared across multiple datacentres.
§ The datastore is connected to an unsupported host; Storage DRS is not supported on ESXi 4.1 and earlier (use ESXi 5.0 or later).
§ The datastore is connected to a host that is not running SIOC.

o Storage I/O Control
§ Unable to view performance charts for a datastore:
§ Storage I/O Control is disabled.
o Cannot enable SIOC:
§ A host connected to the datastore is unsupported (running ESXi 4.1 or earlier).
§ You do not have the required license (Enterprise Plus).

• Troubleshooting Storage
o Excessive SCSI reservations cause slow host performance
§ Serialise the operations of the shared LUNs; if possible, limit the number of operations on different hosts that require SCSI reservations at the same time.
§ Increase the number of LUNs and limit the number of hosts accessing the same LUN.
§ Reduce the number of snapshots.
§ Reduce the number of VMs per LUN.


§ Update HBA firmware.
§ Update the host BIOS.
§ Ensure the correct host mode setting on the SAN array.

o Path Thrashing (causes slow LUN access)
§ Check for path state changes in the logs.
§ Ensure that all hosts that share the same set of LUNs on active-passive arrays use the same storage processor.
§ Check for cabling or masking inconsistencies.
§ Ensure the claim rules defined on all hosts that share the LUNs are exactly the same.
§ Configure the path selection policy to use MRU (Most Recently Used).

o Increased latency for I/O requests slows virtual machine performance
§ Inadequate LUN queue depth (note: setting the queue depth higher than the default can decrease the total number of LUNs supported).
§ The Disk.SchedNumReqOutstanding (DSNRO) parameter must match the queue depth of the adapter (a CLI sketch follows).

o Failure to mount an NFS datastore
§ Use of non-ASCII characters for directory and file names on NFS storage.

o Unable to use flash devices (vSAN, host swap cache, and flash read cache)
§ The flash device may already be in use / VMFS volumes may be present; the device may have already been claimed by another feature. Flash devices can't be shared between features.
§ vSAN may have claimed the flash device.
§ Avoid formatting a disk with VMFS if the intended use is host swap cache or flash read cache.

• Troubleshooting Networking
o Virtual machines have duplicate MAC addresses
§ Two vCenter Server instances with identical IDs generate overlapping MAC addresses for virtual machine adapters.

o Unable to remove a host from a vSphere Distributed Switch (‘resources still in use’ message)
§ There are VMkernel adapters on the switch that are in use.
§ There are virtual machine network adapters connected to the switch.

o Hosts on a vDS 5.1 and later cannot connect to vCenter Server after a port group configuration change
§ Auto-rollback is disabled, and the port group may have contained management VMkernel adapters.

o Unable to add a physical adapter to a vDS that has Network I/O Control enabled
§ Network I/O Control aligns the bandwidth that is available for reservation to the 10-Gbps speed of the individual adapters that are already connected to the distributed switch. After you reserve part of this bandwidth, adding a physical adapter whose speed is less than 10 Gbps might not meet the potential needs of the system traffic types.
§ You can exclude such adapters from being used by NIOC via the Net.IOControlPnicOptOut parameter in the host advanced settings.
§ Example value: vmnic2, vmnic3 (a CLI sketch follows).
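A hedged CLI equivalent of setting this host advanced option (assuming the option accepts a comma-separated string, as the example value above suggests):

esxcli system settings advanced set -o /Net/IOControlPnicOptOut -s "vmnic2,vmnic3"
# Verify:
esxcli system settings advanced list -o /Net/IOControlPnicOptOut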