
Transcript of A-exam talk.

Page 1: A-exam talk.

18th December 2007 1

Priority layered approach to Transport protocol for Long fat networks

Vidhyashankar Venkataraman

Cornell University

Page 2

TCP: Transmission Control Protocol

TCP: ubiquitous end-to-end protocol for reliable communication

Networks have evolved over the past two decades; TCP has not

TCP is inadequate for current networks

NSFNet, 1991 (1.5 Mbps) vs. Abilene backbone, 2007 (10 Gbps)

Page 3

Long Fat Networks (LFNs)

Bandwidth-delay product: BW × delay = max. amount of data ‘in the pipe’, i.e., the max. data that can be sent in one round-trip time

High value in long fat networks: optical (e.g., Abilene/I2) and satellite networks. E.g., between 2 satellites with 0.5 s delay, a 10 Gbps radio link can send up to 625 MB/RTT
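The 625 MB figure follows directly from the bandwidth-delay product; a small sketch of the arithmetic, using the link parameters from the slide:

```python
def bdp_bytes(bandwidth_bps: float, delay_s: float) -> float:
    """Bandwidth-delay product: max bytes 'in the pipe' per round trip."""
    return bandwidth_bps * delay_s / 8  # bits -> bytes

# Satellite example from the slide: 10 Gbps link, 0.5 s delay.
bdp = bdp_bytes(10e9, 0.5)
print(bdp / 1e6)  # 625.0 MB per RTT
```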

Page 4

TCP: Basics

Reliability, in-order delivery

Congestion-aware. Slow Start (SS): increase window size (W) exponentially from 1 segment

Additive Increase Multiplicative Decrease (AIMD). AI: conservative increase by 1 segment/RTT. MD: drastic cutback of the window by half on loss

AIMD ensures fair throughput share across network flows

[Figure: window size vs. time, showing the SS phase followed by the AIMD sawtooth.]

Page 5

TCP’s AIMD revisited (adapted from Nick McKeown’s slide)

Only W packets may be outstanding

Rule for adjusting W. AI: if an ACK is received, W ← W + 1/W. MD: if a packet is lost, W ← W/2

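A minimal sketch of this per-ACK/per-loss update rule (pure arithmetic, not a real TCP stack):

```python
def on_ack(w: float) -> float:
    """Additive increase: +1/W per ACK, i.e., roughly +1 segment per RTT."""
    return w + 1.0 / w

def on_loss(w: float) -> float:
    """Multiplicative decrease: halve the window on a loss."""
    return w / 2

w = 10.0
for _ in range(10):   # one window's worth of ACKs arrives (~1 RTT)
    w = on_ack(w)
w = on_loss(w)        # then a loss is detected
print(round(w, 2))
```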

[Figure: a source-to-destination flow through a bottleneck link, and window size vs. time showing the SS, AI, and MD phases, an early cutback, multiple cutbacks, and a timeout; maxW and 2maxW are marked on the window axis.]

Page 6

TCP’s inadequacies in LFNs

W ~ 10^5 KB or more in LFNs

Two problems:
- Sensitivity to transient congestion and random losses
- Ramping back up to a high W takes a long time (AI)

Detrimental to TCP’s throughput. Example: 10 Gbps link, 100 ms RTT; a loss rate of 10^-5 yields only 10 Mbps throughput!
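That example is an instance of the inverse-square-root loss law for TCP throughput (Throughput ∝ MSS/(RTT·√p), discussed on the analysis slide). A sketch of the estimate; the constant C = 1.22 and the 1500-byte segment size are textbook assumptions, not from the slide, so treat the result as order-of-magnitude:

```python
import math

def tcp_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float,
                       c: float = 1.22) -> float:
    """Mathis-style steady-state TCP throughput estimate: C*MSS/(RTT*sqrt(p))."""
    return c * mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate))

# Slide's scenario: 100 ms RTT, loss rate 1e-5 -> only tens of Mbps,
# orders of magnitude below the 10 Gbps pipe.
print(tcp_throughput_bps(1500, 0.1, 1e-5) / 1e6)
```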

Another problem: slow start makes short flows take longer to complete

Page 7

Alternate Transport Solutions

Congestion control in LFNs: a taxonomy based on the congestion signal delivered to the end host.

Explicit:
- Explicit notification from routers: XCP

Implicit, end-to-end (like TCP):
- Loss as the congestion signal: CUBIC, HS-TCP, STCP
- Delay: RTT increase as the congestion signal (queue builds up): FAST

General idea: a window growth curve ‘better’ than AIMD

Page 8

Problems with existing solutions

These protocols strive to achieve both:
- Aggressiveness: ramping up quickly to fill the pipe
- Fairness: friendly to TCP and to other flows of the same protocol

Issues:
- Unstable under frequent transient congestion events
- Achieving both goals at the same time is difficult
- Slow-start problems still exist in many of the protocols

Examples: XCP needs new router hardware; FAST TCP and HS-TCP stability is scenario-dependent

Page 9

A new transport protocol

Need: “good” aggressiveness without loss in fairness; “good” means near-100% bottleneck utilization

Strike this balance without requiring any new network support

Page 10

Our approach: Priority Layered Transport (PLT)

Separate aggressiveness and fairness: split the flow into 2 subflows.
- Subflow 1 (fair): send TCP (SS/AIMD) packets
- Subflow 2 (aggressive): blast packets to fill the pipe

Requirement: Aggressive stream ‘shouldn’t affect’ TCP streams in network

[Figure: Src1 to Dst1 through a bottleneck; subflow 1 carries legacy TCP, subflow 2 carries the aggressive stream.]

Page 11

Prioritized Transfer

Sub-flow 1 strictly prioritized over sub-flow 2

Meaning: sub-flow 2 fills the pipe whenever sub-flow 1 cannot, and does so quickly

Routers can support strict priority queuing (DiffServ); deployment issues are discussed later

[Figure: window size vs. time; sub-flow 2 fills the troughs between W (pipe capacity) and W+B (pipe capacity plus buffer).]
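Strict prioritization only requires that the two subflows be distinguishable in the network. One standard way an end host can request DiffServ treatment is by setting the DSCP bits on each subflow's socket; this is a sketch, not part of PLT's specified design, and the particular DSCP values are illustrative:

```python
import socket

# Illustrative DSCP code points: high priority for subflow 1,
# low priority for subflow 2. IP_TOS carries DSCP in its top 6 bits.
DSCP_HIGH = 46  # Expedited Forwarding
DSCP_LOW = 8    # CS1, a low-priority class

def make_marked_socket(dscp: int) -> socket.socket:
    """UDP socket whose outgoing packets carry the given DSCP marking."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s

subflow1 = make_marked_socket(DSCP_HIGH)  # fair, TCP-like traffic
subflow2 = make_marked_socket(DSCP_LOW)   # aggressive filler traffic
```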

Page 12

Evident benefits from PLT

Fairness:
- Inter-protocol fairness: TCP-friendly
- Intra-protocol fairness: as fair as TCP

Aggression: overcomes TCP’s limitations with slow start.

Requires no new network support.

Congestion-control independence at subflow 1: subflow 2 supplements the performance of subflow 1.

Page 13

PLT Design

Scheduler assigns packets to sub-flows.

High-priority Congestion Module (HCM): TCP; the module handling subflow 1.

Low-priority Congestion Module (LCM): the module handling subflow 2.

LCM is lossy: packets can get lost or starved when HCM saturates the pipe. The LCM sender learns from the receiver which packets were lost and which were received.
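A toy sketch of this split; the HCM/LCM names follow the slide, but the fill-then-spill send logic is an invented illustration, not PLT's specified scheduler:

```python
from collections import deque

class PLTScheduler:
    """Toy packet scheduler: fill HCM's window first, spill over to LCM."""

    def __init__(self, hcm_cwnd: int):
        self.hcm_cwnd = hcm_cwnd      # window of the fair, TCP-like subflow
        self.hcm_outstanding = 0
        self.hcm_queue = deque()      # subflow 1 (high priority)
        self.lcm_queue = deque()      # subflow 2 (low priority, lossy)

    def enqueue(self, packet):
        """Prefer the high-priority subflow while its window has room."""
        if self.hcm_outstanding < self.hcm_cwnd:
            self.hcm_outstanding += 1
            self.hcm_queue.append(packet)
        else:
            self.lcm_queue.append(packet)

sched = PLTScheduler(hcm_cwnd=2)
for pkt in ["p1", "p2", "p3", "p4"]:
    sched.enqueue(pkt)
print(list(sched.hcm_queue), list(sched.lcm_queue))
# -> ['p1', 'p2'] ['p3', 'p4']
```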

Page 14

The LCM

Is naïve, no-holds-barred sending enough? No!
- Can lead to congestion collapse
- Wastes bandwidth on non-bottleneck links
- Outstanding windows could get large and simply cripple the flow

Congestion control is necessary…

Page 15

Congestion control at LCM: simple, loss-based, aggressive

Multiplicative Increase, Multiplicative Decrease (MIMD)

Loss-rate based: the sender keeps ramping up as long as it incurs tolerable loss rates; more robust to transient congestion

The LCM sender monitors the loss rate p periodically, against a max. tolerable loss rate μ:
- p < μ ⇒ cwnd ← α·cwnd (MI, α > 1)
- p ≥ μ ⇒ cwnd ← β·cwnd (MD, β < 1)
A timeout also results in MD.
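A minimal sketch of this loss-rate-driven MIMD rule; the factors α, β and threshold μ are free parameters of the scheme, and the values below are placeholders:

```python
def lcm_update(cwnd: float, loss_rate: float, mu: float = 0.01,
               alpha: float = 1.5, beta: float = 0.5) -> float:
    """MIMD update: multiply cwnd up while loss is tolerable, down otherwise."""
    if loss_rate < mu:
        return cwnd * alpha   # MI: loss rate tolerable, keep ramping up
    return cwnd * beta        # MD: loss rate too high, back off multiplicatively

w = 100.0
w = lcm_update(w, loss_rate=0.001)  # below mu: window grows to 150.0
w = lcm_update(w, loss_rate=0.05)   # above mu: window cut to 75.0
print(w)  # 75.0
```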

Page 16

Choice of μ

Too high: wastage of bandwidth. Too low: LCM is less aggressive and less robust.

Decide from the expected loss rate over the Internet: preferably kernel-tuned in the implementation; predefined in simulations

Page 17

Sender throughput in HCM and LCM

[Figure: sender throughput over time for HCM and LCM; LCM fills the pipe in the desired manner, and LCM cwnd = 0 when HCM saturates the pipe.]

Page 18

Simulation study

Simulation study of PLT against TCP, FAST and XCP

250 Mbps bottleneck

Window size: 2500

Drop Tail policy

Page 19

FAST TCP

Delay-based congestion control for LFNs; popular. Congestion signal: increase in delay.

Ramps up much faster than AI; if queuing delay builds up, the increase factor reduces.

Uses a parameter to decide the reduction of the increase factor; its ideal value depends on the number of flows in the network.

TCP-friendliness is scenario-dependent. Though an equilibrium exists, convergence is difficult to prove.

Page 20

XCP: Baseline

Requires explicit feedback from routers

Routers equipped to provide cwnd increment

Converges quite fast

TCP-friendliness requires extra router support

Page 21

Single bottleneck topology

Page 22

Effect of Random loss

PLT: near-100% goodput if loss rate < μ

TCP, FAST and XCP underperform at high loss rates


Page 23

Short PLT flows

Frequency distribution of flow completion times.

Most PLT flows finish within 1 or 2 RTTs.

Flow sizes are Pareto-distributed (max size = 5 MB).
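Such a workload can be sampled directly from the standard library; only the 5 MB cap comes from the slide, while the Pareto shape parameter and the one-segment minimum are assumptions for illustration:

```python
import random

def pareto_flow_size(max_bytes: int = 5_000_000, shape: float = 1.2) -> int:
    """Draw a heavy-tailed (Pareto) flow size in bytes, capped at max_bytes.

    The 1.2 shape and the ~1-segment (1500 B) minimum are illustrative
    assumptions; only the 5 MB cap is from the slide.
    """
    size = int(1500 * random.paretovariate(shape))
    return min(size, max_bytes)

random.seed(0)
sizes = [pareto_flow_size() for _ in range(5)]
print(sizes)  # mostly small flows, occasionally very large ones
```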

Page 24

Effect of flow dynamics: 3 flows in the network.

When flows 1 and 2 leave, the remaining flow ramps up quickly.

Congestion in the LCM due to another flow’s arrival.

Page 25

Effect of cross traffic

Page 26

Effect of Cross traffic

Aggregate goodput of flows

FAST yields poor goodput even with low UDP bursts

PLT yields 90% utilization even with 50 Mbps bursts

Page 27

Conclusion

PLT: a layered approach to transport.
- Prioritizes fairness over aggressiveness
- Supplements a legacy congestion control with aggression

Simulation results are promising:
- PLT is robust to random losses and transient congestion
- We have also tested PLT-FAST, and the results are promising!

Page 28

Issues and Challenges ahead

Deployability challenges:
- PEPs in VPNs
- Applications over PLT
- PLT-shutdown

Other issues:
- Fairness issues
- Receive-window dependencies

Page 29

Future Work: Deployment (figure adapted from Nick McKeown’s slides)

How could PLT be deployed? In VPNs and wireless networks, with Performance Enhancing Proxy (PEP) boxes sitting at the edge.

Different applications? LCM traffic could be a little jittery: performance of streaming protocols/IPTV.

[Figure: two PEPs at the network edge with a PLT connection between them.]

Page 30

Deployment: PLT-SHUTDOWN

In the wide area, PLT should be disabled if there is no priority queuing; otherwise it is unfriendly to fellow TCP flows!

We need methods to detect priority queuing at the bottleneck in an end-to-end manner.

To be implemented and tested on the real Internet.

Page 31

Receive Window dependency

PLT needs larger outstanding windows. The LCM is lossy (aggression and starvation), and there is waiting time for retransmitting lost LCM packets.

The receive window could be the bottleneck; the LCM should cut back if the HCM is restricted. This should be explored more.

Page 32

Fairness considerations

Inter-protocol fairness: TCP friendliness Intra-protocol fairness: HCM fairness

Is LCM fairness necessary?
- LCM is more dominant in loss-prone networks
- Can provide relaxed fairness
- Effect of queuing disciplines

Page 33

EXTRA SLIDES

Page 34

Analyses of TCP in LFNs

Some known analytical results (Padhye et al. and Lakshman et al.):
- At loss rate p, p·(BW·RTT)^2 > 1 ⇒ small throughputs
- Throughput ∝ 1/RTT
- Throughput ∝ 1/√p

Several solutions proposed for modified transport

Page 35

Fairness

Average goodputs of PLT and TCP flows with small buffers

Confirms that PLT is TCP-friendly

Page 36

PLT Architecture

[Figure: PLT sender and receiver stacks. On each side the application talks to PLT through a socket interface. The PLT sender contains an input buffer, the HCM, the LCM, and an LCM retransmit buffer; the PLT receiver contains HCM-R and LCM-R. Arrows show HCM packets, LCM packets, HCM ACKs, strong ACKs, and dropped packets.]

Page 37

Other work: Chunkyspread

Bandwidth-sensitive peer-to-peer multicast for live-streaming

Scalable solution:
- Robustness to churn, latency and bandwidth awareness
- Heterogeneity-aware random graph
- Multiple trees provided: robustness to churn

Balances load across peers

IPTPS ’06, ICNP ’06