Load Sharing in Networking Systems


Transcript of Load Sharing in Networking Systems

Page 1: Load Sharing in Networking Systems

Load Sharing in Networking Systems

Lukas Kencl, Intel Research Cambridge

Weiguang Shi, University of Alberta/Kiyon
Jean-Yves Le Boudec, EPFL
Patrick Droz, IBM Research

With citations from D. Thaler, R. Ravishankar and K. Ross.

MFF UK, Praha, 9. 10. 2006

Page 2: Load Sharing in Networking Systems


Outline

Load sharing in networking systems
• Problem statement
• Motivation
• Imbalance due to traffic properties
………… Break …………
• Inspiration: distributed Web caching
• Solutions and properties:

– Adaptive HRW hashing
– Adaptive Burst Shifting
– Integrated: Hashing with Adaptive Burst Shifting

Page 3: Load Sharing in Networking Systems


Load Sharing in Networking Systems

Problem Statement

Page 4: Load Sharing in Networking Systems


Map packets to processors

Assumptions:
• Data arrives in packets.
• Any processor can process any packet.
• Heterogeneous processor capacity μj.

[Diagram: multiple (N) inputs deliver incoming packets to multiple (M) processors.]

Task:
• Map packets to processors so that the load is within some measure of balance.

Page 5: Load Sharing in Networking Systems


Map packets to multiple processors
Flows: avoid remapping or mis-sequencing!

Assumptions:
• Data arrives in ordered, packetized flows.
• Any processor can process any packet.
• Heterogeneous processor capacity μj.

[Diagram: multiple (N) inputs deliver packets of flows 1 and 2 to multiple (M) processors.]

Task:
• Load on processors within some measure of balance.
• Sequence-preserving:
  • flow order – minimize reordering.
  • same flow to same processor (context) – minimize remapping.

Advantage: system optimization. Drawback: complexity, overhead.

Page 6: Load Sharing in Networking Systems


Measures of Balance

Processing load on processor j: λj(t)
Capacity of processor j: μj

Workload intensity on processor j: ρj(t) = λj(t) / μj

Total system workload intensity: ρ(t) = Σj λj(t) / Σj μj

“No single processor is overutilized if the system in total is not overutilized, and vice versa.”

Acceptable load sharing:
if ρ(t) ≤ 1 then ∀j, ρj(t) ≤ 1;
if ρ(t) > 1 then ∀j, ρj(t) > 1.

• Packet drops
• Acceptable load sharing
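The acceptable-load-sharing condition can be checked directly from the per-processor loads λj(t) and capacities μj. A minimal sketch in Python; the function name and example numbers are illustrative, not from the slides:

```python
# Sketch: check the acceptable-load-sharing condition defined on this slide.
# lambdas[j] = processing load on processor j, mus[j] = capacity of processor j.

def acceptable_load_sharing(lambdas, mus):
    rho_j = [l / m for l, m in zip(lambdas, mus)]   # per-processor workload intensity
    rho = sum(lambdas) / sum(mus)                   # total system workload intensity
    if rho <= 1:
        return all(r <= 1 for r in rho_j)           # no processor overutilized
    return all(r > 1 for r in rho_j)                # all processors overutilized

# Example: the same total load, split evenly vs. unevenly over 3 unit-capacity processors.
print(acceptable_load_sharing([0.5, 0.5, 0.5], [1.0, 1.0, 1.0]))     # True
print(acceptable_load_sharing([1.4, 0.05, 0.05], [1.0, 1.0, 1.0]))   # False: one processor overloaded
```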

Page 7: Load Sharing in Networking Systems


Coefficient of Variation (CV) as a measure of balance

Useful, as it takes the scale of measurements out of consideration.
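CV is the standard deviation of the per-processor loads divided by their mean, which is why the scale of the measurements drops out. A minimal sketch with hypothetical load values:

```python
import statistics

def coefficient_of_variation(loads):
    """CV = standard deviation / mean; a scale-free measure of imbalance."""
    return statistics.pstdev(loads) / statistics.mean(loads)

# The same imbalance pattern at two different scales gives the same CV (~0.707).
print(coefficient_of_variation([8, 2, 2]))
print(coefficient_of_variation([800, 200, 200]))
```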

Page 8: Load Sharing in Networking Systems


Reduce state maintenance of Flow-to-Processor Mapping

[Diagram: an incoming packet carrying flow identifier vector v arrives at one of multiple (N) inputs and is dispatched to one of multiple (M) processors.]

Upon packet arrival, a decision is made where to process the packet, based on the flow identifier. A flow-to-processor mapping f is thus established.

f(v) = 3
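A minimal sketch of such a stateless flow-to-processor mapping f, assuming the flow identifier vector v is the usual 5-tuple; the names and the choice of hash function are illustrative, not prescribed by the slides:

```python
import hashlib

M = 4  # number of processors

def f(v):
    """Stateless flow-to-processor mapping: the same flow identifier always maps to the same processor."""
    key = "|".join(str(field) for field in v).encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % M

flow = ("10.0.0.1", "192.168.1.7", 12345, 80, "TCP")   # hypothetical 5-tuple
print(f(flow))
```

Because the mapping is a pure function of v, no per-flow state has to be stored, and every packet of a flow lands on the same processor.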

Page 9: Load Sharing in Networking Systems


Minimizing Disruption

Possible goal: acceptable load sharing without maintaining flow state information, and yet minimizing the probability of mapping disruption (flow remapping or reordering).

Special 0-1 Integer Programming Problem:

max Σv Δv(t) · Σj 1{f(t−Δt)(v) = j} · 1{f(t)(v) = j},
while Σv av(t) · 1{f(t)(v) = j} = λj(t) ≤ μj, ∀ j.

v – flow identifier vector in the packet header,
f(t)(v) – function mapping flows to processors, changing over time,
Δv(t) ∈ {0,1} – indicator whether v has appeared in the intervals (t−2Δt, t−Δt) and (t−Δt, t),
av(t) – how many times v has appeared in the interval (t−Δt, t),
Δt – iteration interval.

We suspect an NP-complete problem – use heuristics.

Page 10: Load Sharing in Networking Systems


Summary

Balance load

Avoid remapping and out-of-sequence packets

Minimize overhead

Page 11: Load Sharing in Networking Systems


Load Sharing in Networking Systems

Motivation

Page 12: Load Sharing in Networking Systems


Practical Motivation: Multiprocessor Networking Systems

Distributed Router, or Parallel Forwarding Engine

Server Farm Load Balancer

Network Processor

Multicore Processor

[Diagram: a server farm load balancer computes a packet-to-server mapping for each incoming packet and forwards it to one of multiple (M) servers S1, ..., SM.]

Page 13: Load Sharing in Networking Systems


Network Processor

[Diagram: a network processor (NP) with a pool of microengines (ME), a GPP, SRAM and DRAM, with packets flowing through.]

Network processor:
• Pool of multi-threaded forwarding engines
• Integrated or attached GPP
• Memories of varying bandwidth and capacity

Large parallel programming problem:
• Concurrency
• Packet order
• Resource allocation
• Load balancing

At 10 Gbps, packets arrive every 36 ns!!!
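A quick sanity check of that figure (the packet size is an assumption here): a B-byte packet occupies 8·B / R seconds of line time, so at R = 10 Gb/s a 45-byte packet occupies 45 · 8 / 10^10 s = 36 ns, and a minimum-size 40-byte TCP/IP packet leaves only 32 ns of processing budget.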

Page 14: Load Sharing in Networking Systems


Load Sharing in Networking Systems

Traffic properties make balancing difficult

Page 15: Load Sharing in Networking Systems


Traffic Properties

Address distribution – the 32-bit address space is very unevenly assigned and populated

Burstiness

Power law

Page 16: Load Sharing in Networking Systems


TCP Traffic is Bursty

TCP behaviour: ideal and typical

Packets arrive in periodic bursts!

Page 17: Load Sharing in Networking Systems


Networking Traffic Exhibits Power-Law (Zipf-law) Properties

Flow popularities in traffic traces

P(R) ~ 1/R^a

Frequency of an event as a function of its rank obeys power-law.

Network flows: a is close to 1 (from above).

Page 18: Load Sharing in Networking Systems


Power law leads to imbalance under static mapping (Shi et al., 2004)

m - number of processors
K - number of distinct object addresses or flow IDs
pi (0<i<K) - popularity of object i
qj (0<j<m) - number of distinct addresses or flow IDs mapped to processor j
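The effect is easy to reproduce: draw object popularities from a power law with a = 1 and assign objects to processors with a static, popularity-blind mapping. The object counts per processor come out nearly equal, yet the load fractions spread noticeably, because a handful of top-ranked objects carries a large share of the traffic. A small simulation sketch (all parameters are illustrative):

```python
import random

m, K, a = 8, 100_000, 1.0
popularity = [1.0 / rank ** a for rank in range(1, K + 1)]   # Zipf popularities, P(R) ~ 1/R^a
total = sum(popularity)

rng = random.Random(42)
load = [0.0] * m        # q_j-style load fraction on processor j
objects = [0] * m       # number of distinct objects mapped to processor j
for p in popularity:
    j = rng.randrange(m)            # static mapping, blind to popularity
    load[j] += p / total
    objects[j] += 1

print("objects per processor:", objects)                        # nearly equal, roughly K/m each
print("load fraction per processor:", [round(q, 3) for q in load])  # visibly unequal
```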

Page 19: Load Sharing in Networking Systems


Conclusion Part I

Complex problem

Traffic properties work against static solutions

“Technology that just works”


Concealing Complexity

Page 20: Load Sharing in Networking Systems


Break 5 minutes

Page 21: Load Sharing in Networking Systems


Load Sharing in Networking Systems

Inspiration: Distributed Web Caching and Web Server Load Balancing

Page 22: Load Sharing in Networking Systems


What is a Distributed Web Cache

Intranet

Internet

Proxy Cache stores Web objects locally

Collection of distributed Web caches

Page 23: Load Sharing in Networking Systems


HRW Mapping

Def.: Object-to-Processor mapping function f, f(v): V → M:
f(v) = j  ⇔  xj · g(v, j) = maxk xk · g(v, k),

where v is the object identifier vector, x = (x1, ..., xm) is a weights' vector and g(v, j) ∈ (0,1) is a pseudorandom function of uniform distribution.

Highest Random Weight (HRW) Mapping, Thaler, Ravishankar, 1997; Ross, 1998; CARP Protocol, Windows NT LB.

Minimal disruption of mapping in case of processor addition or removal/failure.

Load balancing over heterogeneous processors: the weights' vector x is in a 1-to-1 correspondence to p = (p1, ..., pm), the vector of object fractions received at each processor.
The pseudorandom function g(v, j) ∈ (0,1) can be implemented as a fast-computable hash function.

Example (3 processors): map v to the maximum of x1 · g(v, 1), x2 · g(v, 2), x3 · g(v, 3).
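A minimal sketch of the HRW mapping, with a keyed hash standing in for the pseudorandom function g(v, j) and processors indexed 0..m-1; the hash choice and names are illustrative assumptions:

```python
import hashlib

def g(v, j):
    """Pseudorandom weight in (0, 1), determined by the pair (v, j)."""
    h = hashlib.sha256(f"{v}|{j}".encode()).digest()
    return (int.from_bytes(h[:8], "big") + 0.5) / 2.0 ** 64

def hrw_map(v, x):
    """f(v) = argmax_j x_j * g(v, j); x is the weights' vector."""
    return max(range(len(x)), key=lambda j: x[j] * g(v, j))

x = [1.0, 1.0, 1.0]                                  # three equally weighted processors
print(hrw_map("10.0.0.1->192.168.1.7:80", x))        # deterministic choice for this identifier
```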

Page 24: Load Sharing in Networking Systems


HRW Minimal Disruption

Example: Add processor No. 4, vectors mapped either (i) as before addition or (ii) to the newly added processor – minimal number of vectors change mapping.

[Diagram: minimal disruption of mapping in case of processor addition, contrasting partitioning into contiguous sets with HRW; after adding processor 4, each v is mapped to the maximum of g(v, 1), ..., g(v, 4).]
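The minimal-disruption property can be checked with the same kind of sketch: going from 3 to 4 processors with unit weights, only the identifiers whose maximum becomes g(v, 4) move, roughly a quarter of them, and every moved identifier goes to the new processor. A small self-contained check under those assumptions (names are illustrative; the new processor is index 3 because indices start at 0):

```python
import hashlib

def g(v, j):
    h = hashlib.sha256(f"{v}|{j}".encode()).digest()
    return (int.from_bytes(h[:8], "big") + 0.5) / 2.0 ** 64

def hrw_map(v, m):
    return max(range(m), key=lambda j: g(v, j))   # unit weights

flows = [f"flow-{i}" for i in range(10_000)]
before = [hrw_map(v, 3) for v in flows]           # 3 processors
after = [hrw_map(v, 4) for v in flows]            # processor No. 4 added

moved = [(b, a) for b, a in zip(before, after) if b != a]
print(f"{len(moved) / len(flows):.2%} of flows moved")   # expected around 25%
print(all(a == 3 for _, a in moved))                      # True: moved flows land only on the new processor
```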

Page 25: Load Sharing in Networking Systems


HRW Load Balancing

m - number of servers, S1, ..., Sm
K - number of distinct object addresses or flow IDs, from the set of objects O
pi (0<i<K) - popularity of object i
qj (0<j<m) - number of distinct addresses or flow IDs mapped to processor j, e.g. the sum of the popularities of the objects mapped to Sj

Page 26: Load Sharing in Networking Systems


Load Sharing in Networking Systems

Adaptive Methods
– Adaptive HRW Hashing
– Adaptive Burst Shifting

Page 27: Load Sharing in Networking Systems


Adaptive HRW Hashing (Kencl, Le Boudec 2002)

Page 28: Load Sharing in Networking Systems


Adaptive HRW mapping

Page 29: Load Sharing in Networking Systems


Adaptation through Feedback

[Diagram: multiple (N) input cards and multiple (M) processors, with a control point (CP) closing a feedback loop.]

1. Filtered workload intensity ρj(t) is reported for each processor j.
2. Evaluate ρ(t) = (ρ1(t), ρ2(t), ..., ρm(t)) (compare against the threshold).
3. Compute new x(t) = (x1(t), ..., xm(t)).
4. Download new x := x(t).

The trigger definition targets preventing overload if the system in total is not overloaded, and vice versa. A threshold triggers adaptation when close to the load sharing bounds. The flow-to-processor mapping f becomes a function of time, f(t)(v). Adaptation may cause flow remapping! How to minimize the amount remapped?

Problem: incoming requests are packets, not flows! Packets are not evenly distributed over flows → not evenly distributed over the request object space → HRW mapping alone is not sufficient for acceptable load sharing bounds → need to adapt!

Page 30: Load Sharing in Networking Systems


Adaptation Algorithm

Start
↓
Compute filtered processor workload intensity ρ(t)
↓
Trigger adaptation? (Triggering Policy)
• No → wait time Δt, then repeat.
• Yes → adapt weights' vector x and upload (Adaptation Policy), then wait time Δt and repeat.
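A minimal sketch of this control loop, with the triggering and adaptation policies passed in as callbacks; all names are illustrative, and the concrete policies are the ones on the following slides:

```python
import time

def control_loop(measure_rho, triggered, adapt, upload, x, delta_t=1.0):
    """Adaptation algorithm: measure, decide, adapt, wait, repeat."""
    while True:
        rho = measure_rho()        # filtered per-processor workload intensities rho_j(t)
        if triggered(rho):         # Triggering Policy
            x = adapt(x, rho)      # Adaptation Policy: compute new weights' vector x(t)
            upload(x)              # install x(t) into the flow-to-processor mapping
        time.sleep(delta_t)        # wait time Delta t before the next iteration
```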

Page 31: Load Sharing in Networking Systems


Triggering Policy

Dynamic workload intensity threshold:
θ'(t) = 1/2 (1 + ρ(t))

Triggering policy:
(i) if ρ(t) ≤ 1 and maxj ρj(t) > θ(t), then adapt;
(ii) if ρ(t) > 1 and minj ρj(t) < θ(t), then adapt.

Example:
(ρ1, ρ2, ρ3)(t) = (0.8, 0.2, 0.2), ρ(t) = 0.4 ⇒ θ(t) = 0.7; ρ1(t) > θ(t) ⇒ adapt.

(A code sketch of this trigger, using the hysteresis bound defined below, follows the plots.)

[Plot: ρ(t), the system in total, and the workload intensity threshold over time.]

Triggering threshold:
θ(t) = max(θ'(t), upper hysteresis bound), or vice versa (min with the lower bound).

[Plot: ρ(t), the system in total, and the triggering threshold over time.]

Hysteresis bound:
upper: (1 + H(t)) · ρ(t)
lower: (1 − H(t)) · ρ(t)

[Plot: ρ(t), the system in total, with the max and min hysteresis bounds over time.]
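A sketch of the triggering policy combining the dynamic threshold and the hysteresis bound. It assumes a small constant hysteresis margin H and reads "or vice versa" as taking the min with the lower bound when ρ(t) > 1; both are assumptions of this sketch, and the names are illustrative:

```python
def should_adapt(rho_j, mu, H=0.05):
    """Triggering policy: adapt when some processor strays past the triggering threshold."""
    rho = sum(r * m for r, m in zip(rho_j, mu)) / sum(mu)   # system workload intensity
    theta_prime = 0.5 * (1 + rho)                           # dynamic threshold
    if rho <= 1:
        theta = max(theta_prime, (1 + H) * rho)             # upper hysteresis bound
        return max(rho_j) > theta
    theta = min(theta_prime, (1 - H) * rho)                 # lower hysteresis bound
    return min(rho_j) < theta

# The slide's example: rho_j = (0.8, 0.2, 0.2), equal capacities -> rho = 0.4, theta = 0.7.
print(should_adapt([0.8, 0.2, 0.2], [1.0, 1.0, 1.0]))       # True: processor 1 exceeds the threshold
```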

Page 32: Load Sharing in Networking Systems


Adaptation Policy: Minimal Disruption

• A, B – mutually exclusive subsets of M = {1, ..., m}, M = A ∪ B.
• α ∈ (0, 1).
• f, f' – two HRW mappings with the weights' vectors x, x':
  x'j = α · xj, ∀ j ∈ A,
  x'j = xj, ∀ j ∈ B.
• pj, p'j – fraction of objects mapped to node j using f, f'.

1) p'j ≤ pj, j ∈ A,
   p'j ≥ pj, j ∈ B.
2) The fraction of objects mapped to a different node by each mapping is MINIMAL, that is, equal to |p'j − pj| · |V| at every node j.

Page 33: Load Sharing in Networking Systems


Adaptation Policy: Minimal Disruption Example (3 processors)

• The reduced weights receive less, the unaltered weights receive more, if the reduction is by a single, invariable multiplier.
• Minimal disruption of the mapping.

[Diagram: originally map v to the maximum of x1 · g(v, 1), x2 · g(v, 2), x3 · g(v, 3); after multiplying x1 and x2 by 2/3, map to the maximum of 2/3 · x1 · g(v, 1), 2/3 · x2 · g(v, 2), x3 · g(v, 3).]

Page 34: Load Sharing in Networking Systems


Adaptation Policy

Let ρ(t) ≤ 1. Then:

xj(t) := c(t) · xj(t − Δt), if ρj(t) > θ(t) (ρj exceeds threshold θ(t)),
xj(t) := xj(t − Δt), if ρj(t) ≤ θ(t) (ρj does not exceed threshold θ(t)).

If ρ(t) > 1, the adaptation is carried out in a symmetrical manner.

The weights' multiplier coefficient c(t):

c(t) = ( θ(t) / min{ ρj(t) | ρj(t) > θ(t) } )^(1/m)

Factor c(t) is proportional to the minimal error and to the number of nodes.
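A sketch of this adaptation step for the case ρ(t) ≤ 1, with c(t) written as (θ(t) / min overloaded ρj(t))^(1/m) as reconstructed above; treat the exact form of c(t) as an assumption of the sketch, and the names as illustrative:

```python
def adapt_weights(x, rho_j, theta):
    """Adaptation policy for rho(t) <= 1: shrink weights of processors above the threshold."""
    over = [r for r in rho_j if r > theta]
    if not over:
        return list(x)                         # nothing exceeds the threshold
    m = len(x)
    c = (theta / min(over)) ** (1.0 / m)       # multiplier coefficient c(t) < 1
    return [xj * c if rj > theta else xj for xj, rj in zip(x, rho_j)]

# Continuing the earlier example: only processor 1 exceeds theta = 0.7.
print(adapt_weights([1.0, 1.0, 1.0], [0.8, 0.2, 0.2], theta=0.7))
```

On that running example c(t) = (0.7 / 0.8)^(1/3) ≈ 0.96, so only the first processor's weight is reduced and, by the minimal-disruption property, only part of its objects move elsewhere.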

Page 35: Load Sharing in Networking Systems


Adaptive Burst Shifting (Shi, MacGregor, Gburzynski 2005)

Page 36: Load Sharing in Networking Systems


Adaptive Burst Shifting

Insight: the periodicity of bursts may allow shifting flows in-between bursts, without affecting order within flows.

Page 37: Load Sharing in Networking Systems


Adaptive Burst Shifting

Burst Distributor Algorithm

Flow Table monitors packets in system
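A rough sketch of the burst-distributor idea as read from the slides, not the published algorithm verbatim: the flow table tracks how many packets of each flow are still in the system; while packets are in flight the flow stays on its current processor to preserve order, and once its burst has drained it may be shifted to the least-loaded processor. Names and data structures are illustrative:

```python
flow_table = {}            # flow id -> (assigned processor, packets still in the system)
queue_len = [0, 0, 0, 0]   # per-processor backlog (illustrative)

def dispatch(flow_id):
    """Burst distributor sketch: a flow may only be shifted between its bursts."""
    proc, in_flight = flow_table.get(flow_id, (None, 0))
    if proc is None or in_flight == 0:       # no packets in the system: safe to (re)assign
        proc = min(range(len(queue_len)), key=queue_len.__getitem__)
    flow_table[flow_id] = (proc, in_flight + 1)
    queue_len[proc] += 1
    return proc

def departed(flow_id):
    """Called when a packet of flow_id leaves the system."""
    proc, in_flight = flow_table[flow_id]
    flow_table[flow_id] = (proc, in_flight - 1)
    queue_len[proc] -= 1
```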

Page 38: Load Sharing in Networking Systems


Performance evaluation

Page 39: Load Sharing in Networking Systems


Adaptive HRW on generated traffic

Page 40: Load Sharing in Networking Systems


Expectations

• Workload intensity on individual processors close to that of the system in total (acceptable load sharing);
• Packet loss probability lowered (acceptable load sharing);
• Persistent flows (appearing in two consecutive iterations) seldom remapped (minimize disruption).

Page 41: Load Sharing in Networking Systems


AHH Keeps Per-processor Workload Intensity Close to Ideal

[Four plots of workload intensity over time: "Naive, no LS", "Static HRW", "Adaptive HRW", and "Max and min of all". Each shows ρ(t), the system in total; the last compares the max and min ρj(t) under no LS, static LS and adaptive LS.]

Page 42: Load Sharing in Networking Systems


Packet Loss Significantly Reduced with the Adaptive Control Loop

[Two plots of packets dropped over time. Left: packet loss, system in total, for Naive (no LS), Static LS, Adaptive LS and Ideal. Right: packet loss in excess of Ideal, for Static LS and Adaptive LS.]

Adaptive HRW saves on average 60% of the packets dropped by static load sharing.

Page 43: Load Sharing in Networking Systems


Minimal Disruption Property Ensures Few Flow Remappings

[Log-scale plot of flows per iteration over time: flows appearing, flows persistent and flows remapped.]

The adaptive control loop leads on average to:
• less than 0.05% of the appearing flows remapped per iteration;
• less than 0.2% of the persistent flows remapped per iteration.

Page 44: Load Sharing in Networking Systems


AHH and ABS on traffic traces

Page 45: Load Sharing in Networking Systems


Adaptive HRW and Adaptive Burst Shifting

[Plots on traffic traces: packet drop rates, packet reordering, packet remapping.]

Page 46: Load Sharing in Networking Systems


Conclusion: AHH and ABS

• ABS is better at preventing packet drops.
• AHH is better at preventing remapping and reordering, although a larger table would improve ABS.
• ABS works on a much smaller time-scale and can address packet bursts.
• AHH converges to the optimal allocation, but its timescale is too long for bursts.
• Hybrid solution?

Page 47: Load Sharing in Networking Systems


Just in: HABS – a hybrid solution (Kencl, Shi - ANCS 2006)

Good performance in all three metrics of drop rates, remapping and reordering.

Page 48: Load Sharing in Networking Systems


The End

Thank you!

Seminar @ 10:40