Outline
• Background
• Motivation
• Implementation alternatives and constraints
• Case Study: Technical constraints in multiple-TCP-flow based measurement
• Conclusions
Background
• Internet Speed measurements (for End Users)
– De-facto standards
– Lack of unified approach
– Lack of replicability/reliability
– Fulfilment of advertised speed/SLA
• ITU-T Q15/11
– Framework + methodology
• More info: http://www.itu.int/en/ITU-T/C-I/Pages/IM/Internet-speed.aspx
Background: Motivation
[Figure: Measured speed (Mbps) for download and upload across different QoS measurement platforms]
Implementation alternatives
• Typically 2 approaches
  – A) Operator/Regulator driven Large-Scale Measurement Platforms
    • Coordinated
    • Active
    • Statistical value only
  – B) End Users
    • Unmanaged
    • Active/Passive
• For both approaches
  – HW (probe/STB)
  – SW (web based/app)
  – Measurement mechanism
    • Single/multiple TCP flows
    • More sophisticated metrics
• No clear winner
http://www.oecd.org/internet/speed-tests.htm
Implementation alternatives
Stakeholders: Regulator, Operator, End Users
Criteria:
– Availability (“time to market”)
– Simplicity/understandable to users
– Technically feasible to deploy in multi-technology UEs (including web browsers, apps, etc.)
– Comparison with “de facto” standards
– Controllable
– Capability to incorporate multiple metrics
– Technically sound
– Ability to identify network problems and their causes
– Reliable
– Comparable tests
– SLAs and customer protection
– Temperature of the broadband market
Approach A: Large Scale Platforms (examples: SamKnows, RIPE Atlas, Bismark, Dasu)
Approach B: End Users (examples: Speedtest, RTR)
Selection criteria/constraints
• Depending on the final target
  – Accuracy
  – Replicability
  – Adoptability
  – Deployment
  – Cost
• Criteria or Constraint?
  – No accuracy & replicability => no SLA check
  – Usable by End Users without buying or installing anything
[Figure: radar chart comparing Web based, App, End User HW probe, and Operator/Regulator platform along the Cost, Adoptability, Deployment, Replicability, and Accuracy axes]
Case study: technical constraints in
multiple TCP stream based measurement
• Current de-facto standard
• Constraints
  – Conditions that may prevent achieving maximum speed
• Technical “hard” constraints
  – Sliding window =>
    max. goodput < (max. in-flight data / RTT)
    » Limitations due to Maximum Effective Window Size
  – Measurement interval
    • Time to reach maximum speed
    • Significant period
      – TCP dynamics
      – Network evolution
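The sliding-window bound above can be sketched numerically. This is a minimal illustration, assuming a 64 KB window and a 100 ms RTT (both example values, not measurements from the deck):

```python
# Sketch of the "hard" constraint above: goodput is bounded by at most
# one full window in flight per RTT. Example values are assumptions.

def max_goodput_mbps(window_bytes: float, rtt_s: float) -> float:
    """Window-limited goodput ceiling in Mbps: one window per RTT."""
    return window_bytes * 8 / rtt_s / 1e6

# 64 KB window (the cap when Window Scale is unavailable) over a 100 ms RTT:
print(max_goodput_mbps(64 * 1024, 0.100))  # ~5.24 Mbps
```

Note how the ceiling is independent of the actual link capacity: a 100 Mbps access link would still measure only ~5 Mbps per connection under these conditions.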
Constraint analysis:
Maximum Effective TCP Window Size
[Figure: evolution of the TCP window — speed vs. time]
• MEWS = min{STATIC, DYNAMIC variables}
  – STATIC (OS dependent, configurable, Flow Control related)
    • min(negotiated WS, MaxSenderTxBuffer, MaxReceiverRxBuffer)
  – DYNAMIC (TCP flavour dependent, Congestion and Flow Control)
    • min(CWND(t), AWND(t))
      – Not only due to capacity
        » CWND(t) = f(t | e2e available capacity, TCP dynamics, channel errors, competing flows)
[Figure: MEWS limitation for a certain BDP vs. the actual maximum available e2e capacity]
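The MEWS definition on this slide can be written directly as a min over the static and dynamic terms. A minimal sketch; all parameter names and sample values are illustrative assumptions:

```python
# Sketch of MEWS = min{STATIC, DYNAMIC} as defined above.
# Sample values are assumptions for illustration, not measured data.

def mews(negotiated_ws_bytes, max_snd_buf, max_rcv_buf, cwnd_t, awnd_t):
    static = min(negotiated_ws_bytes, max_snd_buf, max_rcv_buf)  # OS/config (flow control)
    dynamic = min(cwnd_t, awnd_t)                                # congestion + flow control at time t
    return min(static, dynamic)

# Buffers allow 256 KB, but CWND(t) is still ramping up at 48 KB,
# so the effective window is congestion-limited:
print(mews(256 * 1024, 256 * 1024, 256 * 1024, 48 * 1024, 128 * 1024))  # 49152 bytes
```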
Constraint analysis:
MEWS => WS limitation
[Figure: different Window Scale values observed within 24 hours (Client => Server) — hours of day vs. Window Scale value]
• High percentage of WS=0, resulting in a maximum achievable capacity of 64 KB in the receiving buffer.
• Essential assessment to ensure measurement reliability.
Constraint analysis:
MEWS => WS limitation
[Figure: aggregated goodput vs. number of parallel connections, up to 58 connections]
• Option a) Use 58 connections for the test => close to e2e capacity, but unrealistic (no user uses 58 conns)
• Option b) Use a realistic #conns => the user will never be able to get the advertised speed from his connection
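The trade-off between options a) and b) follows from the per-connection ceiling: with each flow capped at window/RTT, reaching an advertised speed needs many parallel flows. A sketch with assumed example values (300 Mbps advertised, 64 KB effective window, 100 ms RTT):

```python
# Sketch of the option a)/b) dilemma: how many window-limited parallel
# flows are needed to reach an advertised speed? All inputs are assumptions.
import math

def conns_needed(advertised_mbps, window_bytes, rtt_s):
    per_conn_mbps = window_bytes * 8 / rtt_s / 1e6  # window-limited ceiling per flow
    return math.ceil(advertised_mbps / per_conn_mbps)

# 300 Mbps advertised, 64 KB effective window (WS=0), 100 ms RTT:
print(conns_needed(300, 64 * 1024, 0.100))  # 58 connections
```

With these assumed numbers the test would need tens of connections, which no real user opens for a single download, so either the test is unrealistic or the user never measures the advertised speed.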
• WS and Tx/Rx buffer sizes result in a fixed and unsolvable constraint on the maximum achievable goodput per connection for a given RTT
• Although most modern OSs have either big-enough or autotuning buffers, access to remote resources may still become a bottleneck for very high RTTs (not that unusual)
• The considered Access and Internet Resources tests show very different constraints
Constraint analysis:
TCP Window evolution & multiple flows
[Figure: speed vs. time for connections 1-4 and their resulting aggregated speed vs. single-threaded speed — multiple flows show a quicker Slow Start and a more stable evolution (and resistance to competing flows)]
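The figure's point, that parallel flows fill the pipe faster than a single flow, can be reproduced with a toy slow-start model. This is a simplification under stated assumptions (every flow doubles its rate each RTT; capacity, initial rate, and flow counts are illustrative):

```python
# Toy model: N parallel flows ramp the aggregate up faster than one flow.
# Each flow doubles its rate every RTT (slow start) until the flows
# jointly fill the bottleneck. All numeric values are assumptions.

def aggregate_ramp(n_flows, capacity_mbps=100.0, initial_mbps=0.5, rtts=10):
    rates = [initial_mbps] * n_flows
    history = []
    for _ in range(rtts):
        total = sum(rates)
        history.append(min(total, capacity_mbps))  # bottleneck caps the aggregate
        rates = [r * 2 for r in rates]             # exponential growth in slow start
    return history

single = aggregate_ramp(1)
four = aggregate_ramp(4)
print(single)  # reaches the 100 Mbps cap later
print(four)    # reaches the 100 Mbps cap in fewer RTTs
```

The model deliberately ignores congestion avoidance and loss; it only illustrates the "quicker Slow Start" effect shown in the figure.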
Constraint analysis:
TCP Window evolution & measurement period
[Figure: speed vs. time, with measurement samples Speed1-Speed5 taken at different points of the TCP window evolution]
Constraint analysis:
TCP Window evolution & measurement period
[Figure: speed vs. time, with a single speed value averaged over the whole measurement period]
Constraint analysis:
Measurement period (example 1)
[Figure: samples of goodput (Mbps) evolution for single TCP connections, Amsterdam => Bilbao, during 24 hours; measurement interval 5-10 secs]
• Once the pipe is full, the obtained goodput is stable regardless of CWND evolution.
Constraint analysis:
Measurement period (example 2)
[Figure: samples of goodput (Mbps) evolution for single TCP connections, USA => EUR, during 24 hours]
• Goodput follows CWND evolution
• Measurement period proportional to epoch time = f(TCP, e2e delay, network dynamics, cross traffic)
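The measurement-period issue in the two examples above can be sketched as interval averaging: starting the average after the ramp-up ("once the pipe is full") yields a stable figure, while including slow start drags it down. The trace below is synthetic, for illustration only:

```python
# Sketch of the measurement-interval choice discussed above.
# The 1-second goodput trace is synthetic (an assumption), mimicking
# ~3 s of slow-start ramp-up followed by a stable ~90 Mbps plateau.

def mean_goodput(samples_mbps, start_s, end_s):
    """Average 1-second goodput samples over [start_s, end_s)."""
    window = samples_mbps[start_s:end_s]
    return sum(window) / len(window)

trace = [10, 40, 70, 90, 91, 89, 90, 90, 91, 90]

print(mean_goodput(trace, 0, 10))  # whole test: ramp-up lowers the average
print(mean_goodput(trace, 5, 10))  # 5-10 s interval: stable plateau value
```

On long paths (example 2), where goodput keeps following CWND, no short fixed interval is stable, which is why the deck ties the measurement period to the "epoch time" of the path.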
Conclusions
• Technical and non-technical constraints
  – Stakeholder dependent
  – Criteria become constraints as soon as accuracy and replicability are mandatory. For example:
    • Avoid dependence on user equipment/cross traffic => HW probe or STB based
    • Publicly available + no need to install + usable on your own connection => WEB/TCP based
• Key findings, multiple TCP case
  – Relevance of End User’s OS/SW configuration
  – Problem of the number of concurrent connections
    • High enough but unrealistic vs. realistic but under-performing (bug or feature?)
  – Warning upon suspicious tests
    » Not big enough MEWS
  – Reasonable “epoch-time” related test duration
  – Statistical post-processing
• Pending issues
  – Effect of competing flows
  – Mobile network effects
  – Other metrics