Performance Tuning Xen

Performance tuning Xen
Roger Pau Monné [email protected]
Antwerp – 8th of April, 2013

Description

How to tune your Xen deployment for performance: Xen has several options and different kinds of guests; knowing when to use each kind of guest, and how to tune its parameters for optimal performance, can make a big difference. This talk covers the types of guests that can be deployed on Xen, and the different options you can use to obtain the best performance.

Transcript of Performance Tuning Xen

Page 1: Performance Tuning Xen

Performance tuning Xen

Roger Pau Monné [email protected]

Antwerp – 8th of April, 2013

Page 2: Performance Tuning Xen

Xen Architecture · Xen virtualization modes · Support in OSes · Dom0 tuning · Specific VM options · Conclusions

Xen Architecture

[Architecture diagram: the Xen hypervisor runs directly on the hardware (I/O devices, CPU, memory). The control domain (NetBSD or Linux) holds the hardware drivers, the toolstack and the device model (QEMU), and exposes netback/blkback backends to the netfront/blkfront frontends of paravirtualized (PV) domains (NetBSD or Linux) and fully virtualized (HVM) domains (Windows, FreeBSD...).]

Antwerp – 8th of April, 2013 Performance tuning Xen 2 / 27

Page 3: Performance Tuning Xen


Paravirtualization

- Virtualization technique developed in the late 90s
- Designed by:
  - the XenoServer research project at Cambridge University
  - Intel
  - Microsoft labs
- x86 instructions behave differently in kernel and user mode; the options for virtualization were full software emulation or binary translation.
- Design a new interface for virtualization:
  - Allow guests to collaborate in virtualization
  - Provide new interfaces for virtualized guests that reduce the overhead of virtualization
- The result of this work is what we know today as paravirtualization

Page 4: Performance Tuning Xen

Paravirtualization

- All these changes led to the following interfaces being paravirtualized:
  - Disk and network interfaces
  - Interrupts and timers
  - Booting directly in the mode the kernel wishes to run (32 or 64 bits)
  - Page tables
  - Privileged instructions

Page 5: Performance Tuning Xen

Full virtualization

- With the introduction of hardware virtualization extensions, Xen is able to run unmodified guests.
- This requires emulated devices, which are handled by QEMU.
- Makes use of nested page tables when available.
- Allows the use of PV interfaces if the guest has support for them.

Page 6: Performance Tuning Xen

The full virtualization spectrum

Legend: VS = software virtualization (poor performance), VH = hardware virtualization (room for improvement), PV = paravirtualized (optimal performance).

                      Disk and   Interrupts   Emulated      Privileged instructions
                      network    and timers   motherboard   and page tables
HVM                   VS         VS           VS            VH
HVM with PV drivers   PV         VS           VS            VH
PVHVM                 PV         PV           VS            VH
PV                    PV         PV           PV            PV

Page 7: Performance Tuning Xen

Guest support

- List of OSes and virtualization support:

                PV    PVHVM   HVM with PV drivers   HVM
Linux (PVOPS)   YES   YES     YES                   YES
Windows         NO    NO      YES                   YES
NetBSD          YES   NO      NO                    YES
FreeBSD         NO    NO      YES                   YES
OpenBSD         NO    NO      NO                    YES
DragonflyBSD    NO    NO      NO                    YES

Page 8: Performance Tuning Xen

Kernbench

Results: percentage of native, the lower the better.

[Bar chart comparing PV on HVM 64 bit, PV on HVM 32 bit, HVM 64 bit, HVM 32 bit, PV 64 bit and PV 32 bit; the y-axis runs from 90% to 140% of native.]

Page 9: Performance Tuning Xen

SPECjbb2005

Results: percentage of native, the higher the better.

[Bar chart comparing PV 64 bit and PV on HVM 64 bit; the y-axis runs from 0% to 100% of native.]

Page 10: Performance Tuning Xen

Iperf

Iperf TCP
Results: Gbit/sec, the higher the better.

[Bar chart comparing PV 64 bit, PV on HVM 64 bit, PV on HVM 32 bit, PV 32 bit, HVM 64 bit and HVM 32 bit; the y-axis runs from 0 to 8 Gbit/sec.]

Page 11: Performance Tuning Xen

What virtualization mode should I choose?

- Linux supports several virtualization modes; which one is better?
- It depends on the workload.
- Generally PV mode will provide better performance for I/O, but when using 64-bit guests PV can be slower.
- There isn't a fixed rule here; the best way to find out is to evaluate the workload on the different kinds of guests.

Page 12: Performance Tuning Xen

Dom0

- Dom0 is the most important guest in the Xen infrastructure.
- It can easily become a bottleneck if not configured correctly.
- Dom0 is in charge of creating the guests, but usually also provides the backends and device models for guests.
- Xen provides some options to tune the performance of Dom0.

Page 13: Performance Tuning Xen

dom0_mem boot option

- dom0_mem tells Xen how much memory Dom0 can use.
- If not set, all memory is assigned to Dom0, and ballooning is used when launching new guests, reducing the memory used by Dom0.
- The value should be set depending on usage; HVM guests consume more memory in Dom0 because they need a QEMU instance.
- If dom0_mem is set, make sure to disable ballooning in the toolstack by setting autoballoon=0 in /etc/xen/xl.conf.
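As a sketch, the matching toolstack setting would look like this (assuming the xl toolstack reads /etc/xen/xl.conf, as on most installations):

```
# /etc/xen/xl.conf
# Dom0 memory is already fixed with dom0_mem on the hypervisor
# command line, so stop xl from ballooning Dom0 down for new guests.
autoballoon=0
```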

Page 14: Performance Tuning Xen

dom0_max_vcpus and dom0_vcpus_pin

- dom0_max_vcpus: maximum number of CPUs Dom0 will see; the right value depends on the utilization of Dom0 and the type of guests.
- dom0_vcpus_pin: pinning Dom0 VCPUs to physical CPUs is a good idea for systems running I/O-intensive guests.
- Setting up the serial cable: although not important for performance, a serial console is really important when debugging. For more info: http://wiki.xen.org/wiki/Xen_Serial_Console

Page 15: Performance Tuning Xen

Dom0 boot tuning options

- For example, to set up a Dom0 on a machine with 8 CPUs and 8GB of RAM, I would use the following boot line: com1=115200,8n1 console=com1 dom0_mem=1024M dom0_max_vcpus=2 dom0_vcpus_pin
- More info about boot parameters can be found at: http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
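On a distribution that configures GRUB 2 through /etc/default/grub, those options go on the hypervisor line; a sketch, assuming your distribution provides the GRUB_CMDLINE_XEN hook:

```
# /etc/default/grub (run update-grub afterwards)
# Serial console plus Dom0 memory and VCPU limits for an 8-CPU/8GB box
GRUB_CMDLINE_XEN="com1=115200,8n1 console=com1 dom0_mem=1024M dom0_max_vcpus=2 dom0_vcpus_pin"
```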

Page 16: Performance Tuning Xen

General performance notes

- Always try to use physical disks as backends. Xen has two main ways of connecting disks to a guest, depending on the format of the image; if it's a block device, it is attached through blkback, which lives inside the Linux kernel and is faster.
- Take into account the number of CPUs your physical box has, and avoid using more VCPUs than PCPUs if running performance-intensive applications.
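A sketch of the two kinds of disk attachment in an xl guest configuration (device paths are illustrative):

```
# Block device backend: served by blkback inside the Dom0 kernel (faster)
disk = [ 'phy:/dev/vg0/guest-disk,xvda,w' ]

# File-backed image: served by a userspace backend instead
#disk = [ 'file:/var/lib/xen/images/guest.img,xvda,w' ]
```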

Page 17: Performance Tuning Xen

Pinning CPUs

- You can pin VCPUs to PCPUs to obtain better performance, or to distribute the workload across your CPUs to suit your needs. For example, low-latency VMs can be exclusively pinned to dedicated PCPUs.
- cpus: selects which CPUs the guest can run on. The list can also contain specific CPUs where the guest is not allowed to run: specifying "0-3,5,^1" allows the guest to run on CPUs 0, 2, 3 and 5.
- If Dom0 is pinned to certain PCPUs, avoid running guests on those PCPUs to obtain better performance. If Dom0 is pinned to CPU 0, use the CPU mask "^0" to prevent other guests from running on CPU 0.
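In the guest configuration file this pinning looks like the following sketch:

```
# Let the guest run on CPUs 0-3 and 5, but never on CPU 1
cpus = "0-3,5,^1"

# If Dom0 is pinned to CPU 0, keep this guest off CPU 0 instead:
#cpus = "^0"
```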

Page 18: Performance Tuning Xen

Scheduler options

- The Xen scheduler has several options that can also be tuned from the guest configuration file, in order to give a certain guest a larger share of the processor or to schedule it more frequently.
- cpu_weight: weight of the domain in terms of CPU utilization. For example, a domain with a weight of 512 will get twice as much CPU as a domain with a weight of 256. Values range from 1 to 65535.
- cap: fixes the maximum amount of CPU a domain is able to consume, expressed as a percentage of one physical CPU: 100 is one CPU, 50 is half a CPU, 400 is four CPUs.
- More info can be found at http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html
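A guest configuration sketch combining both knobs (the values are examples, assuming the default credit scheduler):

```
# Twice the CPU share of a default-weight (256) domain...
cpu_weight = 512
# ...but never more than two physical CPUs' worth of time
cap = 200
```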

Page 19: Performance Tuning Xen

Driver Domains I

[Architecture diagram, as on page 2: the control domain hosts the hardware drivers and the netback/blkback backends serving the PV domain's netfront/blkfront frontends.]

Page 20: Performance Tuning Xen

Driver Domains II

[The same diagram with a separate driver domain: netback/blkback now also run in the driver domain, so guest I/O no longer has to go through the control domain.]

Page 21: Performance Tuning Xen

Driver Domains III

- Driver domains allow offloading work normally done in Dom0 to other domains.
- They also provide better security: less attack surface for exploits in Dom0.
- This is currently a work in progress.

Page 22: Performance Tuning Xen

HVM specific I

- HVM guests require the use of assisted paging, so that the guest sees its memory area as contiguous when it is not.
- HAP (Hardware Assisted Paging) is used by default, since it tends to perform better under most workloads.
- shadow: was introduced before HAP, and can provide better performance under certain workloads with low TLB locality (for example databases or Java applications).
- Again, the best way to know is to try the workload yourself.
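To experiment with shadow paging, the xl guest configuration exposes a toggle; a sketch:

```
# HVM guest config: disable Hardware Assisted Paging and fall
# back to shadow page tables (HAP is the default when available)
hap = 0
```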

Page 23: Performance Tuning Xen

HVM specific II

- HVM domains require a QEMU instance in Dom0 to perform the necessary device emulation.
- This can become a bottleneck when running a lot of HVM domains on the same node, since each one requires a QEMU instance in Dom0 that uses both Dom0 CPU and memory.
- To avoid this, we can launch the QEMU process in a different domain called a "stubdomain".
- This allows offloading work from Dom0.
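A sketch of enabling this in an HVM guest's xl configuration (it requires a Xen build that ships the MiniOS stubdomain images):

```
# Run the device model (QEMU) in a stub domain instead of Dom0
device_model_stubdomain_override = 1
```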

Page 24: Performance Tuning Xen

HVM specific III

[Architecture diagram: the device model (QEMU) runs in the control domain, performing device emulation for the fully virtualized (HVM) domain (Windows, FreeBSD...).]

Page 25: Performance Tuning Xen

HVM specific IV

[The same diagram with a stubdomain: QEMU runs on MiniOS in a dedicated stub domain, so device emulation no longer consumes control-domain CPU and memory.]

Page 26: Performance Tuning Xen

Conclusions

- Xen offers a wide variety of virtualization modes.
- The best way to know which mode will bring better performance is to try it, although there are several tips that apply to all guests.
- We are constantly working on performance improvements, so stay updated in order to get the best performance.

Page 27: Performance Tuning Xen

Q&A

Thanks

Questions?

http://wiki.xen.org/wiki/Xen_Best_Practices

http://wiki.xen.org/wiki/Xen_Common_Problems
