Enabling Fast, Dynamic Network Processing with ClickOS
Joao Martins*, Mohamed Ahmed*, Costin Raiciu§, Felipe Huici*
*NEC Europe, Heidelberg, Germany
§University Politehnica of Bucharest
The Idealized Network
[Diagram: two end hosts implementing the full five-layer stack (Physical, Datalink, Network, Transport, Application), connected by intermediate nodes that implement only the lower layers (Physical, Datalink, and in some cases Network)]
A Middlebox World
[Diagram: a network cloud filled with middleboxes — carrier-grade NAT, load balancer, DPI, QoE monitor, ad insertion, BRAS, session border controller, transcoder, WAN accelerator, DDoS protection, firewall, IDS]
Hardware Middleboxes - Drawbacks
▐ Middleboxes are useful, but…
  - Expensive
  - Difficult to add new features; vendor lock-in
  - Difficult to manage
  - Cannot be scaled with demand
  - Cannot share a device among different tenants
  - Hard for new players to enter the market
▐ Clearly, shifting middlebox processing to a software-based, multi-tenant platform would address these issues
  - But can it be built using commodity hardware while still achieving high performance?
▐ ClickOS: tiny Xen-based virtual machine that runs Click
Xen Background - Overview
[Diagram: Xen architecture — the hypervisor runs directly on the hardware; dom0, a paravirtualized control domain with interfaces to the hypervisor, manages multiple paravirtualized guest domains (domUs), each running a guest OS and its apps]
Xen Background – Split Driver Model
ClickOS - Contributions
[Diagram: a standard Xen domU (paravirtualized guest OS running apps) alongside a ClickOS domain (paravirtualized MiniOS running Click)]
▐ Work consisted of
  - Build system to create ClickOS images (5 MB in size)
  - Emulating a Click control plane over MiniOS/Xen
  - Optimizations to reduce boot times (30 milliseconds)
  - Optimizations to the data plane (10 Gb/s for larger packet sizes)
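For context on what such a VM actually runs, a ClickOS image boots directly into a Click configuration. A minimal sketch is shown below; `FromNetfront` and `ToNetfront` are the ClickOS elements that attach Click to the paravirtualized network interface, while the element arguments and the counting stage are illustrative stand-ins for real middlebox logic.

```click
// Minimal ClickOS-style pipeline (sketch, illustrative arguments):
// read packets from the paravirtualized NIC, count them, send them back out.
FromNetfront(0) -> Counter -> ToNetfront(0);
```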
Xen I/O Subsystem and Bottlenecks
[Diagram: standard Xen I/O path — in the driver domain (e.g., dom0), the NW driver feeds a Linux/OVS bridge and a vif, which connects through netback, the Xen ring API (data), the event channel, and the Xen bus/store to netfront in the ClickOS domain, where Click's FromNetfront and ToNetfront elements pick packets up. Measured bottlenecks along this path: 300 Kp/s, 350 Kp/s, and 225 Kp/s]
pkt size (bytes)   10Gb/s line rate
64                 14.8 Mp/s
128                 8.4 Mp/s
256                 4.5 Mp/s
512                 2.3 Mp/s
1024                1.2 Mp/s
1500                810 Kp/s
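The line rates above follow from Ethernet framing: each frame carries roughly 20 extra bytes on the wire (7 B preamble, 1 B start-of-frame delimiter, 12 B inter-frame gap), so the packet rate at 10 Gb/s is about 10^10 / ((size + 20) × 8). A quick sketch of that arithmetic — the 20-byte overhead figure is our assumption, and the 1500-byte result lands slightly above the table's 810 Kp/s:

```python
# Approximate packets-per-second at 10 Gb/s for a given Ethernet frame size,
# assuming ~20 bytes of per-frame wire overhead (preamble + SFD + inter-frame gap).
LINK_BPS = 10e9
OVERHEAD_BYTES = 20

def line_rate_pps(frame_bytes: int) -> float:
    """Packets per second needed to saturate the link at this frame size."""
    wire_bits = (frame_bytes + OVERHEAD_BYTES) * 8
    return LINK_BPS / wire_bits

for size in (64, 128, 256, 512, 1024, 1500):
    print(f"{size:5d} B -> {line_rate_pps(size) / 1e6:5.2f} Mp/s")
```

The gap this exposes is stark: the best throughput measured on the unmodified Xen path (350 Kp/s) is below even the 810 Kp/s needed for 1500-byte packets, and far below the 14.8 Mp/s needed for minimum-sized ones.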
Optimized Xen I/O
[Diagram: optimized I/O path — the Linux/OVS bridge in the driver domain (e.g., dom0) is replaced by the VALE software switch, and data transfers between netback and the ClickOS netfront use the netmap API instead of the standard Xen ring API; the Xen bus/store and event channel remain for control and notification]
Throughput – One CPU Core
[Experimental setup: a ClickOS VM running a rate-meter configuration, connected over a direct 10Gb/s cable]
Boot times
[Chart: boot times — roughly 30 milliseconds vs. 220 milliseconds]
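For context, a Xen guest such as a ClickOS VM is launched through the standard toolstack (`xl create clickos.cfg`). A minimal guest config of the sort used for a tiny MiniOS-based image might look like the sketch below; all names, paths, and values are illustrative, not the project's actual defaults.

```
# clickos.cfg -- minimal Xen guest configuration (illustrative values)
name   = "clickos1"
kernel = "/root/clickos_x86_64"   # the ~5 MB ClickOS image
memory = 8                        # MiniOS needs only a few MB
vif    = ['bridge=xenbr0']        # one paravirtualized network interface
```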
Conclusions
▐ Presented ClickOS
  - Tiny (5 MB) Xen VM tailored to network processing
  - Can be booted in 30 milliseconds
  - Can run a large number of ClickOS VMs concurrently (128)
  - Can achieve 10 Gb/s throughput using only a single core
▐ Future work
  - Implementation and performance evaluation of ClickOS middleboxes (e.g., firewalls, IDSes, carrier-grade NATs, software BRASes)
  - Work to adapt the Linux netfront to the netmap API
  - Service chaining