VCP Club
Transcript of VCP Club
VCP CLUB
Teaser Questions & Ice Breaker. Just for fun.
Iwan ‘e1’ [email protected] | virtual-red-dot.blogspot.com
Introduction
A. There are only 10+ questions.
B. For each, think of the answer. A lot of dummy/funny/wrong answers are provided. The correct answer may or may not be there.
C. At the end, ask yourself: if the questions are easy and you know the answers, you are ready for the VCP Club session. If the questions are hard, then you should consider attending the vSphere Design Workshop.
On a serious note, I attended the workshop and got a major surprise on something I thought was 101.
How is Slot Size determined in HA?
A. What Slot Size? vSphere 4.1 no longer uses slot size.
B. Let me call my manager.
C. Slot size is based on actual usage or configured/allocated (say you configure the VM with 2 vCPU).
D. In vSphere 4.1, we now have Storage IO Control and take Disk into account in slot size. See picture below.
E. Hmm… good question.
How do you explain this?
A. Hmm… 3 Total Slots, but 14 are used. Strange…
B. A bug. Must be, man.
C. Sorry, I was sleeping during the ICM class.
D. My trainer’s fault. Never explained it properly.
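For reference, here is a minimal Python sketch of the slot-size math. In vSphere 4.x, HA derives the slot from the largest CPU and memory *reservations* among powered-on VMs, not from configured vCPU/vRAM or actual usage. The 256 MHz default, the flat overhead figure, and the VM/host numbers below are illustrative assumptions, not VMware's exact algorithm:

```python
# Sketch of vSphere 4.x HA slot-size math (illustrative, not VMware's real code).

def slot_size(vms, default_cpu_mhz=256, mem_overhead_mb=100):
    """vms: list of (cpu_reservation_mhz, mem_reservation_mb) for powered-on VMs."""
    cpu = max([r[0] for r in vms] + [default_cpu_mhz])      # largest CPU reservation
    mem = max([r[1] for r in vms] + [0]) + mem_overhead_mb  # overhead value is a guess
    return cpu, mem

def total_slots(hosts, slot):
    """hosts: list of (cpu_capacity_mhz, mem_capacity_mb)."""
    cpu, mem = slot
    return sum(min(h[0] // cpu, h[1] // mem) for h in hosts)

# One VM with a big 2000 MHz / 4096 MB reservation inflates the slot for everyone:
vms = [(0, 0), (0, 0), (2000, 4096)]
hosts = [(9600, 32768), (9600, 32768)]
print(slot_size(vms))                        # (2000, 4196)
print(total_slots(hosts, slot_size(vms)))    # 8
```

The point of the sketch: one heavily reserved VM shrinks the cluster's total slot count for every VM, which is why slot-size questions trip people up.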
VM performance chart (1 VM)
What does the previous chart tell us?
A. Nothing, man. See no evil, hear no evil.
B. Oh, oh. What’s that darn spike?
C. I’m not clear what CPU Ready means. I thought the CPU is always ready!
D. This looks like the ECG diagram from when I did my health check.
How do you get 2 GE performance in NFS?
A. It just gets it. ESX is smart, right?
B. I got 2 GE when I configured NIC Teaming properly.
C. NFS performance is bad, so I don’t use it.
D. By praying really hard.
E. By getting NetApp/EMC to do it. That’s their job, right?
Your ESX has 2 ports. Each is 1 GE.
They are dedicated to NFS traffic, not shared with anything else.
They are connected to 1 NFS array.
The array has 2 GE ports per SP. Total is 4 GE from the array.
The array has 10 datastores.
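Behind answers A and B: each NFS datastore is mounted over a single TCP session, and IP-hash teaming pins a given source/destination IP pair to one uplink, so using both 1 GE links means spreading the 10 datastores across multiple array IP addresses. A rough Python sketch of the idea (the real ESX hash computation differs; this XOR-and-mod is a simplification, and all IPs are hypothetical):

```python
# Simplified IP-hash uplink selection (real ESX hashes differently; this
# XOR-and-mod version only illustrates that the choice is per IP pair).
import ipaddress

def iphash_uplink(src, dst, n_uplinks):
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % n_uplinks

# One ESX VMkernel IP, datastores mounted against two array IPs (one per SP port):
esx = "10.0.0.10"
array_ips = ["10.0.0.21", "10.0.0.22"]
for ds in range(10):
    target = array_ips[ds % 2]          # alternate datastores across array IPs
    print(ds, target, iphash_uplink(esx, target, 2))
```

A single src/dst IP pair always lands on the same uplink, so with only one array IP you would never exceed 1 GE no matter how the teaming is set.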
VMware HA Cluster
You have the above setup: a cluster of 4 ESXi hosts spread across 2 blade chassis in 2 racks. For some reason, the blade in Chassis 1 lost connection to the Default Gateway. That’s all it lost; all other connections are intact.
From the previous slide, what happens next?
A. We are dead. The entire blade in Chassis 1 will go down.
B. No problem, my friend. Life goes on.
C. This problem can never happen. Not in my environment.
D. Oh, oh, split brain will happen. ESX will start the isolation response, like the one below.
When you vMotion a 16 GB VM…
A. All 16 GB will travel on 1 port. The other cable is idle.
B. Whoever answers A is a fake VCP.
C. Only active pages are copied. Not all 16 GB are copied.
D. 16 GB, but on both wires.
E. I don’t use 16 GB. vSphere cannot handle big RAM, as boot takes too long.
F. Darn… now I’m confused.
You have 10 VM and 2 ESX.
You want to vMotion 1 VM.
It has 16 GB of RAM.
It used 16 GB of RAM but has gone idle.
It has no TPS.
You have a dedicated vMotion LAN: 2x 1 GE with IP-Hash teaming.
You use vSphere 4.1.
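One moving part behind these answers is vMotion's iterative pre-copy: memory is copied while the VM keeps running, and each subsequent pass re-sends only the pages dirtied during the previous one, so an idle VM transfers very little after the first pass. A toy Python sketch (the threshold, pass limit, and dirty rate are invented numbers, not VMware's):

```python
# Toy sketch of iterative memory pre-copy (illustrative numbers only,
# not VMware's real algorithm or thresholds).
def precopy(total_pages, dirty_rate, threshold=1000, max_passes=10):
    """Return the number of pages sent in each pass.
    dirty_rate: fraction of sent pages re-dirtied while a pass runs."""
    to_send, passes = total_pages, []
    while to_send > threshold and len(passes) < max_passes:
        passes.append(to_send)
        to_send = int(to_send * dirty_rate)   # pages dirtied during the copy
    passes.append(to_send)                     # final pass with the VM stunned
    return passes

# Idle 16 GB VM (4 KB pages): almost nothing gets re-dirtied, so later
# passes shrink rapidly.
print(precopy(16 * 1024**3 // 4096, dirty_rate=0.001))  # [4194304, 4194, 4]
```

A busy VM (high dirty rate) would keep re-sending pages for many passes, which is where the wire speed of the vMotion network starts to matter.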
vSwitch is a Layer 2 switch
A. Hei, what nonsense. It is a Layer 3 switch too.
B. It’s Layer 3, because traffic from different VMs on the same vSwitch gets short-circuited. That’s why we need a virtual firewall.
C. No, it’s not short-circuited, unless they are on the same segment.
D. Well, it depends if a port group is used. Same segment, but different port group: no short-circuit.
E. Wait, if you forget to specify a VLAN tag in the port group, they get short-circuited.
F. Enough, enough! What do you mean by Layer 2 anyway? This is not a layered cake.
Can we overcome the 2 TB vmdk limit?
A. With Physical Compatibility Mode, you can exceed 2 TB.
B. It does not matter what mode; you can’t. The standard 10-byte CDB of SCSI limits it.
C. Answer B is correct, but if you use the Para-virtualised SCSI driver in the VM, you can overcome it.
D. The guy who answered B is drunk.
E. Wait a minute! You can use extents!
F. I know, I know! Mount it directly via the software iSCSI initiator inside the VM. Then use a special driver from Intel to load balance it, as VMware Tools does not provide NIC teaming.
Storage Multi-Pathing
A. Round Robin is still active-passive. At any given time, IO only goes via 1 path.
B. The guy who answers A is a Hyper-V administrator.
C. A is incorrect when used with an Active/Active Array, as all paths are active at the same time.
D. You can load multi-pathing from NetApp or HDS and replace the baby one from VMware.
Enhanced vMotion Compatibility
A. Don’t use it, man. Your CPU will be clocked down to the lowest common denominator.
B. Use it. It helps you future-proof.
C. It is rather limited to 1 generation, so you can’t skip a generation (say from Xeon 5300 to 5500).
D. It is also limited to the class of CPU, so you can’t vMotion from Xeon 5000 to Xeon 7000.
E. It does not really work, as it’s just a mask. A badly written app can still use the instruction set.
F. Let me toss a coin… This is how it works in production anyway.
Does DRS move powered-off VM?
A. Yes. If you put the underlying ESX into maintenance mode, DRS will move all VMs (powered off or on).
B. Hei, what are you smoking? DRS is about load balancing based on actual usage. A VM that is off does not consume any CPU/RAM, so it will never be moved.
MSCS and HA/DRS
A. MS Clustering is not supported in 4.1. Coming in Update 1.
B. Of course it is, you bozo. You must turn on a VM-VM anti-affinity rule so the 2 MSCS VMs are always kept apart.
C. The guy who answered B is the bozo. VMware HA does not obey VM-VM affinity rules.
D. In vSphere 4.1, you can set 2 VM-Host groups. Put MSCS VM 1 in Host group 1, and MSCS VM 2 in Host group 2. So you need at least 4 nodes in the HA cluster.
E. You must disable DRS for the MSCS VMs.
HOW DID YOU GO?
You may find the following doc useful as follow up: http://communities.vmware.com/docs/DOC-13850