Tuning VIM performance for unikernels


Transcript of Tuning VIM performance for unikernels

Page 1: Tuning VIM performance for unikernels

Performance Evaluation and Tuning of Virtual Infrastructure Managers for (Micro) Virtual Network Functions

Pier Luigi Ventre (1), Claudio Pisa (2), Stefano Salsano (1,2), Giuseppe Siracusano (1,3), Florian Schmidt (3), Paolo Lungaroni (4), Nicola Blefari-Melazzi (1,2)

(1) Univ. of Rome Tor Vergata, Italy; (2) CNIT, Italy; (3) NEC Laboratories Europe, Germany; (4) Consortium GARR, Italy

November 8th 2016 – IEEE NFV-SDN conference, Palo Alto, USA, 7-9 Nov. 2016

A super-fluid, cloud-native, converged edge system

Page 2: Tuning VIM performance for unikernels

Outline

• The SUPERFLUIDITY project – goals and approach
– Towards sub-10 ms service instantiation

• A Unikernel primer
– Memory footprint and boot time results

• Orchestration of Unikernels
– Virtual Infrastructure Managers (VIMs): analysis of performance
– Tuning of VIM performance


Page 3: Tuning VIM performance for unikernels

SUPERFLUIDITY project - http://superfluidity.eu/

Goal: a superfluid NFV approach

• Instantiate network functions and services on-the-fly

• Run them anywhere in the network (core, aggregation, edge), across heterogeneous infrastructure environments (computing and networking), taking advantage of specific hardware features, such as high performance accelerators, when available

Approach

• Decomposition of network components and services into elementary and reusable primitives (“Reusable Functional Blocks – RFBs”)

• Platform-independent abstractions, permitting reuse of network functions across heterogeneous hardware platforms


Page 4: Tuning VIM performance for unikernels

SUPERFLUIDITY project - http://superfluidity.eu/

Project consortium

• The SUPERFLUIDITY Consortium includes 18 partners from 12 countries: 4 universities, 1 inter-university research consortium, 9 industrial partners and 4 SMEs

Project timeline

• July 2015 – March 2018

Disclaimer

• The work presented here is a (small) subset of the work performed in the project


Page 5: Tuning VIM performance for unikernels

Heterogeneous composition/execution environments

• Classical NFV environments (i.e. following the ETSI NFV standards)
– VNFs are composed/orchestrated to realize Network Services
– VNFs can be decomposed into VNFCs (VNF Components)

[Figure: «Big» VNFs decomposed into VNFCs, each running in its own VM]

Page 6: Tuning VIM performance for unikernels

Heterogeneous composition/execution environments

• Towards more «fine-grained» decomposition…

• E.g. modular software routers (like Click)
– Click elements are combined into configurations (Directed Acyclic Graphs)

• E.g. XFSM-based (eXtended Finite State Machine) decomposition of traffic forwarding / flow processing tasks, with HW support for wire-speed execution


Page 7: Tuning VIM performance for unikernels

Towards sub 10 ms service instantiation

Why a superfluid NFV

• Quick provisioning of services: JIT proxies, firewalls, on-the-fly monitoring

• Quick migration of services

• Hosting a large number of services on the same server: e.g., vCPE

• Optimized use of resources thanks to dynamic sharing

• General investment and operating cost reductions


Page 8: Tuning VIM performance for unikernels

We need superfluid virtualization: the use of Unikernels

• Containers (e.g. Docker)
– Lightweight
– Poor isolation

• Hypervisors (traditional VMs), e.g. Xen, KVM, VMware…
– Strong isolation
– Heavyweight

• Unikernels: specialized VMs (e.g. MiniOS, ClickOS…)
– Strong isolation
– Very lightweight
– Very good security properties

They break the “myth” of VMs being heavyweight…

Page 9: Tuning VIM performance for unikernels

Outline

• The SUPERFLUIDITY project – goals and approach
– Towards sub-10 ms service instantiation

• A Unikernel primer
– Memory footprint and boot time results

• Orchestration of Unikernels
– Virtual Infrastructure Managers (VIMs): analysis of performance
– Tuning of VIM performance


Page 10: Tuning VIM performance for unikernels

A Unikernel Primer

• Specialized VM: a single application + a minimalistic OS

• Single address space and a cooperative scheduler, hence low overheads

• Unikernel virtualization platforms extend existing hypervisors (e.g. Xen)

[Figure: a general-purpose OS (e.g., Linux, FreeBSD) runs multiple applications and drivers across separate kernel space and user space; a Unikernel – a minimalistic OS (e.g., MiniOS, OSv) – runs a single application and its drivers in a single address space]

Page 11: Tuning VIM performance for unikernels

Example Unikernel Memory Footprint (ClickOS)

• Hello world guest VM: 296 KB

• Ponger (ping responder) guest VM: ~700 KB
– 350 KB come from lwip and newlibc
– this is with minor optimizations to MiniOS (e.g., reducing the threads’ stack size)


Page 12: Tuning VIM performance for unikernels

Unikernel boot times

Guest configuration: MiniOS, 1 VCPU, 8MB RAM, 1 VIF

• State of the art result: 87.77 ms

• Recent results (from SUPERFLUIDITY), obtained by redesigning the toolstack:
– without libxl: 6.67 ms
– without xenstore: 1.43 ms

Page 13: Tuning VIM performance for unikernels

Outline

• The SUPERFLUIDITY project – goals and approach
– Towards sub-10 ms service instantiation

• A Unikernel primer
– Memory footprint and boot time results

• Orchestration of Unikernels
– Virtual Infrastructure Managers (VIMs): analysis of performance
– Tuning of VIM performance


Page 14: Tuning VIM performance for unikernels

ETSI MANagement and Orchestration (MANO) Model


Page 15: Tuning VIM performance for unikernels

VM instantiation and boot time


[Figure: instantiation timeline – the process starts with the Orchestrator request]

Page 16: Tuning VIM performance for unikernels

VM instantiation and boot time

[Figure: VM instantiation timeline for a traditional VM – Orchestrator request → VIM operations (1-2 s) → Virtualization Platform (5-10 s) → Guest OS (VM) boot time (~1 s)]

Page 17: Tuning VIM performance for unikernels

VM instantiation and boot time

[Figure: VM instantiation timeline with Unikernels – Orchestrator request → VIM operations (1-2 s) → Virtualization Platform (~1 ms) → Guest OS (VM) boot time (~1 ms)]

• Unikernels can provide low-latency instantiation times for “Micro-VNFs”

• What about VIMs (Virtual Infrastructure Managers)?

Page 18: Tuning VIM performance for unikernels

Performance analysis and Tuning of VIMs for Micro VNFs

• General model of the VNF instantiation process

• Modifications to VIMs to instantiate Micro-VNFs based on the ClickOS Unikernel

• Methodology to evaluate performance

• Performance Evaluation


Page 19: Tuning VIM performance for unikernels

Virtual Infrastructure Managers (VIMs)

We considered the performance of two VIMs:

• OpenStack Nova
– OpenStack is composed of subprojects
– Nova: orchestration and management of computing resources -> the VIM
– 1 Nova node (scheduling) + several compute nodes (which interact with the hypervisor)
– Not tied to a specific virtualization technology

• Nomad by HashiCorp
– Minimalistic cluster manager and job scheduler
– Nomad server (scheduling) + Nomad clients (which interact with the hypervisor)
– Not tied to a specific virtualization technology


Page 20: Tuning VIM performance for unikernels

Reference Model of the VNF instantiation process


Page 21: Tuning VIM performance for unikernels

Mapping of the reference model to the considered VIMs


Page 22: Tuning VIM performance for unikernels

VIM instantiation model for OpenStack Nova


Page 23: Tuning VIM performance for unikernels

VIM instantiation model for Nomad


Page 24: Tuning VIM performance for unikernels

VIM modifications to instantiate (ClickOS) Micro VNFs


A regular VM can boot its OS from an image or a disk snapshot that can be read from an associated block device (disk). The host hypervisor instructs the VM to run the boot loader, which reads the kernel image from the block device.

ClickOS-based Micro-VNFs are shipped as a tiny kernel without a block device. These VMs need to boot from a so-called diskless image: the host hypervisor reads the kernel image from a file or a repository and directly injects it into the VM memory.

[Figure: the interface between the Virtual Infrastructure Manager and the Virtualization Platform (hypervisor) needs to be modified to support the boot of “diskless images”]
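At the hypervisor level, a diskless boot is simple to express with Xen’s xl toolstack: the guest configuration points directly at the kernel image and declares no disks. The sketch below only illustrates this mechanism (the helper function, file paths and bridge name are assumptions, not the project’s actual driver code):

```python
# Illustrative sketch: booting a diskless ClickOS guest through Xen's xl
# toolstack. The kernel is injected directly into guest memory; no block
# device is ever attached.
import subprocess
import tempfile

XL_CFG = """\
name   = "{name}"
kernel = "{kernel}"          # path to the ClickOS unikernel image (assumed)
memory = 8                   # MB; unikernels need very little RAM
vcpus  = 1
vif    = ['bridge=xenbr0']   # one virtual interface; bridge name is an assumption
"""

def start_clickos(name, kernel_path):
    """Write a minimal diskless xl config and create the guest."""
    with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as cfg:
        cfg.write(XL_CFG.format(name=name, kernel=kernel_path))
        cfg.flush()
    subprocess.run(["xl", "create", cfg.name], check=True)
```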

Page 25: Tuning VIM performance for unikernels

VIM modifications to instantiate (ClickOS) Micro VNFs

• OpenStack
– Xen supported out of the box, using the Libvirt toolstack
– We considered the boot of diskless images targeting only one component (Nova Compute) and a specific toolstack, Libvirt
– Libvirt talks with Xen using libxl, the default Xen toolstack API
– We modify the XML description of the guest domain provided by the driver, changing it on the fly before the creation of the VM (see the sketch below)

• Nomad
– Xen not supported out of the box
– We developed a new Nomad driver for Xen, called XenDriver
– The new driver communicates with the XL Xen toolstack and is also able to instantiate a ClickOS VM
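The on-the-fly rewrite can be pictured as follows. This is a simplified sketch of the idea, not the actual Nova Compute modification; the element names follow libvirt’s domain XML schema, while the function itself is hypothetical:

```python
# Simplified sketch: rewrite a libvirt guest-domain XML so that Xen boots a
# diskless unikernel, i.e. drop the <disk> devices and boot the kernel image
# directly via <os><kernel>. Not the actual Nova patch.
import xml.etree.ElementTree as ET

def make_diskless(domain_xml, kernel_path):
    root = ET.fromstring(domain_xml)
    # Boot straight from the unikernel image: no boot loader, no root disk.
    kernel = ET.SubElement(root.find("os"), "kernel")
    kernel.text = kernel_path
    devices = root.find("devices")
    for disk in devices.findall("disk"):
        devices.remove(disk)
    return ET.tostring(root, encoding="unicode")
```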

Page 26: Tuning VIM performance for unikernels

VIM performance evaluation approach

• We evaluate the VM scheduling and instantiation phases, combining message trace analysis and timestamps in the code

• Message traces (coarse information: beginning and end of the different phases)
– VIM Message Analyzer, capable of analyzing Nova and Nomad message exchanges

• Detailed breakdown with timestamps in the code (Nomad Client, Nova Compute) – see the sketch below

• Workload generators:
– OpenStack: the Rally benchmarking tool
– Nomad: we developed the “Nomad Pusher”, a utility written in the Go language which programmatically submits jobs to the Nomad Server

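The timestamp instrumentation amounts to bracketing each phase of the instantiation pipeline and recording its duration. A minimal sketch of the idea (phase names are illustrative; the actual work inserts timestamps directly into Nova Compute and the Nomad client):

```python
# Minimal sketch of per-phase timestamping for an instantiation pipeline.
import time
from contextlib import contextmanager

breakdown = {}  # phase name -> duration in seconds

@contextmanager
def phase(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        breakdown[name] = time.perf_counter() - start

# Usage around the main phases of the reference model (names illustrative):
#   with phase("scheduling"): schedule_vm()
#   with phase("spawning"):   spawn_vm()
#   for name, secs in breakdown.items():
#       print(f"{name}: {secs * 1000:.1f} ms")
```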

Page 27: Tuning VIM performance for unikernels

Results – ClickOS instantiation times


[Figure: box plots of ClickOS instantiation times (in seconds) for OpenStack Nova and for Nomad]

Page 28: Tuning VIM performance for unikernels

There is no comparison implied…

• NB: the purpose of the work is NOT to compare OpenStack vs. Nomad. The goal is to understand how both behave and find ways to reduce instantiation times.

• A direct comparison makes little sense: OpenStack is a much more complete framework in terms of offered functionality and of the types of hypervisors it supports. Moreover, a comparison would also be unfair because, for the Nomad case, we developed a driver targeted only at the Xen/ClickOS case.


Page 29: Tuning VIM performance for unikernels

VIM Tuning

• OpenStack
– Diskless VMs -> we can skip most of the actions performed during image creation
– Unikernels are special-purpose VMs:
• Is SSH really needed?
• A full IP stack?
– We were able to reduce the spawning time by about 70%
– Looking at the overall instantiation time, the relative reduction is about 45% (see the sketch below)

• Nomad
– Not much room for optimization: we implemented only the necessary functionality
– We introduced further improvements assuming a local store for the Micro-VNFs, reducing the Driver operations by about 30 ms

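The flavor of the OpenStack tuning can be conveyed with a short sketch: when an image is tagged as a unikernel, the spawn path skips root-disk preparation entirely. Everything below (the vm_mode tag, helper names, XML strings) is hypothetical, not actual Nova code:

```python
# Hypothetical sketch of the tuning idea: short-circuit the spawn steps that
# make no sense for a diskless unikernel guest. Not actual Nova code.

def prepare_root_disk(instance, image_meta):
    """Placeholder for the expensive path: image download, conversion, resize."""

def build_domain_xml(instance):
    return "<domain>...</domain>"  # placeholder for the regular-VM XML

def build_diskless_domain_xml(instance, kernel_path):
    return f"<domain><os><kernel>{kernel_path}</kernel></os></domain>"

def spawn(instance, image_meta):
    if image_meta.get("vm_mode") == "unikernel":  # hypothetical tag
        # No root disk, no config drive, no SSH key injection:
        # boot the unikernel image directly.
        return build_diskless_domain_xml(instance, image_meta["kernel_path"])
    prepare_root_disk(instance, image_meta)       # the expensive path
    return build_domain_xml(instance)
```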

Page 30: Tuning VIM performance for unikernels

Results – OpenStack details and tuning

[Figure: box plots (in seconds) of the overall OpenStack Nova instantiation time and of the Nova spawn phase, before and after tuning]

Page 31: Tuning VIM performance for unikernels

Results – Nomad details and tuning

[Figure: box plots (in seconds) of the overall Nomad instantiation time and of the Nomad spawn phase, before and after tuning]

Page 32: Tuning VIM performance for unikernels

VIM performance - Ongoing & Future Work

• Consider the impact of system load on performance
– Measure the average instantiation times considering batches of incoming requests with a given rate (requests/s) and arrival pattern
– Analyze the impact of the number of already allocated VMs and of the number of target nodes to be deployed

• Keep improving the performance of the considered VIMs
– e.g. trying to replace the lazy notification mechanism of Nomad with a reactive approach (see the sketch below)

• Extend the analysis to another VIM
– OpenVIM from the OSM project

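One way to make the notification reactive is to exploit Nomad’s blocking queries: a GET request carries the last seen index and returns only when the watched data changes. A hedged sketch of the idea (the job ID and server address are illustrative; the endpoint and header names come from Nomad’s HTTP API):

```python
# Sketch: watch a job's allocations with Nomad blocking queries instead of
# polling on a timer. The request blocks server-side until the state changes
# (or the wait interval expires).
import requests

NOMAD = "http://127.0.0.1:4646"  # illustrative server address

def watch_allocations(job_id):
    index = 0
    while True:
        resp = requests.get(
            f"{NOMAD}/v1/job/{job_id}/allocations",
            params={"index": index, "wait": "60s"},  # blocking query
            timeout=70,
        )
        resp.raise_for_status()
        index = int(resp.headers.get("X-Nomad-Index", index))
        yield resp.json()  # react as soon as the allocations change
```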

Page 33: Tuning VIM performance for unikernels

Unikernel virtualization in the SUPERFLUIDITY vision

• We have considered the optimization of Unikernel virtualization and the enhancements needed in Virtual Infrastructure Managers to support Unikernels.

• In the SUPERFLUIDITY vision, Unikernels are interesting because they support the decomposition of network services into “smaller” components that can be deployed on the fly (NB: Unikernels are complementary to other approaches!)

• The NFV Infrastructure should be extended to support Unikernel virtualization in addition to traditional VMs. This way it becomes possible to design services that exploit the most efficient virtualization solution for each component, depending on factors such as isolation, memory footprint and boot time.


Page 34: Tuning VIM performance for unikernels

Conclusions

• Unikernel virtualization can provide VM instantiation and boot times on the order of milliseconds
– ongoing: consolidation of results; a generic and automatic optimization process for the hypervisor toolstack and for guests

• Work is still needed at the level of Virtual Infrastructure Managers
– e.g. OpenStack (~1 s), Nomad (~300 ms)

• VIMs are currently designed for generality; the challenge is to specialize them in a flexible way while keeping compatibility with the mainstream versions


Page 35: Tuning VIM performance for unikernels

References & paper download

• SUPERFLUIDITY project Home Page http://superfluidity.eu/

• G. Bianchi, et al. “Superfluidity: a flexible functional architecture for 5G networks”, Transactions on Emerging Telecommunications Technologies 27, no. 9, Sep 2016

• P. L. Ventre, C. Pisa, S. Salsano, G. Siracusano, F. Schmidt, P. Lungaroni, N. Blefari-Melazzi, “Performance Evaluation and Tuning of Virtual Infrastructure Managers for (Micro) Virtual Network Functions”, IEEE NFV-SDN Conference, Palo Alto, USA, 7-9 November 2016
http://netgroup.uniroma2.it/Stefano_Salsano/papers/salsano-ieee-nfv-sdn-2016-vim-performance-for-unikernels.pdf


Page 36: Tuning VIM performance for unikernels

References – Speed up of Virtualization Platforms / Guests

• J. Martins, M. Ahmed, C. Raiciu, V. Olteanu, M. Honda, R. Bifulco, F. Huici, “ClickOS and the art of network function virtualization”, NSDI 2014, 11th USENIX Conference on Networked Systems Design and Implementation, 2014.

• F. Manco, J. Martins, K. Yasukata, J. Mendes, S. Kuenzer, F. Huici, “The Case for the Superfluid Cloud”, 7th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 15), 2015

• Recent unpublished results are included in this presentation:
S. Salsano, F. Huici, “Superfluid NFV: VMs and Virtual Infrastructure Managers speed-up for instantaneous service instantiation”, invited talk at the EWSDN 2016 workshop, 10 October 2016, The Hague, Netherlands
http://www.slideshare.net/stefanosalsano/superfluid-nfv-vms-and-virtual-infrastructure-managers-speedup-for-instantaneous-service-instantiation


Page 37: Tuning VIM performance for unikernels

Thank you. Questions?

Contacts
Stefano Salsano
University of Rome Tor Vergata / CNIT
[email protected]

The tools we developed are available on GitHub:
https://github.com/netgroup/vim-tuning-and-eval-tools

Please find this presentation on SlideShare:
https://www.slideshare.net/stefanosalsano/tuning-vim-performance-for-unikernels


Page 38: Tuning VIM performance for unikernels

The SUPERFLUIDITY project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 671566 (Research and Innovation Action).

The information given is the author’s view and does not necessarily represent the view of the European Commission (EC). No liability is accepted for any use that may be made of the information contained herein.
