VINI: Virtual Network Infrastructure
Nick Feamster (Georgia Tech)
Andy Bavier, Mark Huang, Larry Peterson, Jennifer Rexford (Princeton University)
VINI Overview
• Runs real routing software
• Exposes realistic network conditions
• Gives control over network events
• Carries traffic on behalf of real users
• Is shared among many experiments
[Diagram: spectrum from simulation and emulation through small-scale experiments to live deployment, with VINI filling the gap between them]
Bridge the gap between “lab experiments” and live experiments at scale.
Goal: Control and Realism
• Control
  – Reproduce results
  – Methodically change or relax constraints
• Realism
  – Long-running services attract real users
  – Connectivity to real Internet
  – Forward high traffic volumes (Gb/s)
  – Handle unexpected events
• Topology: actual network (realism) vs. arbitrary, emulated (control)
• Traffic: real clients and servers (realism) vs. synthetic or traces (control)
• Network events: observed in operational network (realism) vs. injected faults and anomalies (control)
Overview
• VINI characteristics
  – Fixed, shared infrastructure
  – Flexible network topology
  – Expose/inject network events
  – External connectivity and routing adjacencies
• PL-VINI: prototype on PlanetLab
• Preliminary experiments
• Ongoing work
Fixed Infrastructure
Shared Infrastructure
Arbitrary Virtual Topologies
Exposing and Injecting Failures
Carry Traffic for Real End Users
[Diagram: client (c) and server (s) exchanging traffic across the VINI overlay]
Participate in Internet Routing
[Diagram: VINI nodes maintaining BGP sessions with neighboring networks while carrying traffic between client (c) and server (s)]
PL-VINI: Prototype on PlanetLab
• First experiment: Internet In A Slice
  – XORP open-source routing protocol suite (NSDI ’05)
  – Click modular router (TOCS ’00, SOSP ’99)
• Clarify issues that VINI must address
  – Unmodified routing software on a virtual topology
  – Forwarding packets at line speed
  – Illusion of dedicated hardware
  – Injection of faults and other events
PL-VINI: Prototype on PlanetLab
• PlanetLab: testbed for planetary-scale services
• Simultaneous experiments in separate VMs
– Each has “root” in its own VM, can customize
• Can reserve CPU, network capacity per VM
[Diagram: PlanetLab node running VM1…VMn on a Linux-based virtual machine monitor, alongside Node Manager and Local Admin VMs]
XORP: Control Plane
• BGP, OSPF, RIP, PIM-SM, IGMP/MLD
• Goal: run real routing protocols on virtual network topologies
User-Mode Linux: Environment
• Interface ≈ network
• PlanetLab limitation: a slice cannot create new interfaces
• Run routing software in UML environment
• Create virtual network interfaces in UML
[Diagram: XORP routing protocols running inside UML with virtual interfaces eth0–eth3]
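PlanetLab slices cannot create kernel network interfaces, which is why PL-VINI runs routing software inside UML. On a host where you do hold the privilege, a virtual interface like the ones UML provides can be created through the Linux TUN/TAP driver. A minimal sketch, assuming Linux and treating the interface name `tap0` as illustrative:

```python
import fcntl
import os
import struct

# Constants from Linux's <linux/if_tun.h>
TUNSETIFF = 0x400454CA
IFF_TAP = 0x0002    # Ethernet-level (TAP) device
IFF_NO_PI = 0x1000  # no extra packet-info header

def create_tap(name: bytes = b"tap0"):
    """Try to create a TAP interface; return its file descriptor,
    or None when running unprivileged (as a PlanetLab slice would be)."""
    ifr = struct.pack("16sH", name, IFF_TAP | IFF_NO_PI)
    try:
        fd = os.open("/dev/net/tun", os.O_RDWR)
    except (FileNotFoundError, PermissionError):
        return None  # no TUN/TAP device or no access
    try:
        fcntl.ioctl(fd, TUNSETIFF, ifr)
    except OSError:  # EPERM without CAP_NET_ADMIN
        os.close(fd)
        return None
    return fd
```

Reads and writes on the returned descriptor then carry raw Ethernet frames, which is the kind of hook a user-space data plane needs.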
Click: Data Plane
• Performance
  – Avoid UML overhead
  – Move to kernel, FPGA
• Interfaces ↔ tunnels
  – Click UDP tunnels correspond to UML network interfaces
• Filters
  – “Fail a link” by blocking packets at tunnel
[Diagram: XORP in UML exchanges control traffic with Click; Click’s packet forward engine carries data via a UmlSwitch element, a tunnel table, and filters]
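Click’s role here, user-space UDP tunnels plus filters that “fail a link” by blocking its packets, can be illustrated with a toy forwarder. This is our own sketch, not Click’s API; the `Link` class and its methods are hypothetical names:

```python
import socket

class Link:
    """A user-space point-to-point tunnel: packets are relayed over
    UDP to the remote endpoint unless the link is marked down."""

    def __init__(self, remote_addr, up=True):
        self.remote = remote_addr
        self.up = up
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def fail(self):
        """Filter: blackhole the virtual link, as in a real failure."""
        self.up = False

    def restore(self):
        self.up = True

    def send(self, packet: bytes) -> bool:
        """Forward one packet; returns False when the link is 'down'."""
        if not self.up:
            return False  # dropped at the tunnel, routing must react
        self.sock.sendto(packet, self.remote)
        return True
```

Because the failure is injected at the tunnel rather than in the kernel, the routing software above it (XORP, in PL-VINI’s case) observes it as a genuine loss of connectivity.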
Intra-domain Route Changes
[Diagram: virtual network topology annotated with per-link weights, connecting client (c) and server (s)]
Ping During Link Failure
[Figure: ping RTT (70–120 ms) over a 50-second window, annotated with link down, link up, and the period while routes converge]
Close-Up of TCP Transfer
[Figure: close-up of megabytes received in the stream vs. time (17.5–20 s), showing packets received, TCP slow start, and retransmission of a lost packet]
PL-VINI enables a user-space virtual networkto behave like a real network on PlanetLab
Challenge: Attracting Real Users
• Could have run experiments on Emulab
• Goal: operate our own virtual network
  – Carrying traffic for actual users
  – We can tinker with routing protocols
Conclusion
• VINI: Controlled, Realistic Experimentation
• Installing VINI nodes in NLR, Abilene
• Download and run Internet In A Slice
http://www.vini-veritas.net/
TCP Throughput
[Figure: megabytes transferred vs. time (0–50 s), annotated with link down, link up, and a zoomed-in region]
Ongoing Work
• Improving realism
  – Exposing network failures and changes in the underlying topology
  – Participating in routing with neighboring networks
• Improving control
  – Better isolation
  – Experiment specification
Resource Isolation
• Issue: forwarding packets in user space
  – PlanetLab sees heavy use
  – CPU load affects virtual network performance
Property     Depends on              Solution
Throughput   CPU share received      PlanetLab provides CPU reservations
Latency      CPU scheduling delay    PL-VINI boosts priority of the packet-forwarding process
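The latency fix, boosting the forwarding process’s scheduling priority, can be sketched with POSIX nice values. Raising priority (lowering nice) requires privileges, so this sketch falls back gracefully; the target value of −10 is illustrative, not PL-VINI’s actual setting:

```python
import os

def boost_forwarder_priority(target_nice: int = -10) -> int:
    """Try to raise this process's scheduling priority, as PL-VINI
    does for its packet-forwarding process. Returns the resulting
    nice value (unchanged when we lack the privilege to lower it)."""
    try:
        os.setpriority(os.PRIO_PROCESS, 0, target_nice)
    except (PermissionError, OSError):
        pass  # unprivileged: keep the current priority
    return os.getpriority(os.PRIO_PROCESS, 0)
```

A lower nice value means the scheduler runs the forwarder sooner when the node is busy, which is what keeps per-packet latency down under contention.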
Performance Limitations
• User-space Click forwards only ~200 Mb/s
• Future work: move VINI to Xen for better forwarding performance
Experimental Results
• Is a VINI feasible?
  – Click in user space: ~200 Mb/s forwarded
  – Latency and jitter comparable between the real network and IIAS on PL-VINI
Low latency for everyone?
• PL-VINI provided IIAS with low latency by giving it high CPU scheduling priority
Internet In A Slice
• XORP: run OSPF, configure FIB
• Click: FIB, tunnels, inject faults
• OpenVPN & NAT: connect clients and servers
[Diagram: clients (C) and servers (S) at the edge, connected through the IIAS overlay]
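The OpenVPN-plus-NAT arrangement means the egress node must map return traffic back to the right client inside the overlay. A toy version of such a NAT table follows; the class name, port range, and address tuples are illustrative (the real data plane does this inside Click):

```python
class NatTable:
    """Maps (client_ip, client_port) to an external port on the egress
    node, so return traffic can be steered back into the overlay."""

    def __init__(self, base_port: int = 20000):
        self.next_port = base_port
        self.out = {}   # (client_ip, client_port) -> external port
        self.back = {}  # external port -> (client_ip, client_port)

    def translate_out(self, client):
        """Allocate (or reuse) an external port for an outbound flow."""
        if client not in self.out:
            port = self.next_port
            self.next_port += 1
            self.out[client] = port
            self.back[port] = client
        return self.out[client]

    def translate_back(self, ext_port):
        """Find the overlay client for a returning packet, if any."""
        return self.back.get(ext_port)
```

Keeping both directions of the mapping in one place is what guarantees the slide’s claim that “return traffic flows through VINI” rather than leaking onto the native Internet path.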
PL-VINI / IIAS Router
• Blue: topology
  – Virtual net devices
  – Tunnels
• Red: routing and forwarding
  – Data traffic does not enter UML
• Green: enter and exit IIAS overlay
[Diagram: IIAS router internals: XORP in UML with eth1–eth3 attached via a UmlSwitch element; Click holds the FIB and encapsulation table, with tap0 and eth0 carrying control and data traffic]
PL-VINI Summary

Flexible Network Topology
• Virtual point-to-point connectivity: tunnels in Click
• Unique interfaces per experiment: virtual network devices in UML
• Exposure of topology changes: upcalls of layer-3 alarms

Flexible Routing and Forwarding
• Per-node forwarding table: separate Click per virtual node
• Per-node routing process: separate XORP per virtual node

Connectivity to External Hosts
• End hosts can direct traffic through VINI: connect to OpenVPN server
• Return traffic flows through VINI: NAT in Click on egress node

Support for Simultaneous Experiments
• Isolation between experiments: PlanetLab VMs and network isolation; CPU reservations and priorities
• Distinct external routing adjacencies: BGP multiplexer for external sessions
PL-VINI / IIAS Router
• XORP: control plane
• UML: environment
  – Virtual interfaces
• Click: data plane
  – Performance: avoid UML overhead; move to kernel, FPGA
  – Interfaces ↔ tunnels
  – “Fail a link”
[Diagram: XORP in UML exchanges control traffic with Click; Click’s packet forward engine carries data via a UmlSwitch element, a tunnel table, and filters]