
Umeå University
Department of Computing Science
SE-901 87 UMEÅ
SWEDEN

Data transfer over high latency and high packet loss networks

Robin Ödling

August 2016
Master's thesis in Computing Science, 30 credits

Supervisor at CS-UmU: Stefan Johansson
Examiner: Henrik Björklund


Abstract

This thesis aims to pin down which aspects affect data transfer speed over the Internet, especially with regard to latency and packet loss. What is required from a technical point of view in order to efficiently mitigate the effects of latency and packet loss on network transfer? What are the difficulties when testing network protocols? The approach is to test four different protocols: TCP CUBIC, TCP Westwood+, Tsunami, and UDT, while gathering metrics such as throughput, the size of the congestion window, and the slow start threshold, and then to analyse the results. The analysis of the results shows that latency has the most impact on throughput: it decreases the number of times the congestion window is able to grow in RTT-dependent algorithms, effectively decreasing the throughput. Packet loss affects network transfer because protocols interpret the loss as a sign of congestion and therefore decrease the sending rate. The size of this decrease is shown to impact the throughput, where an aggressive decrease gives poorer performance under packet loss. The study can aid anyone who seeks to develop a network transport protocol.


Acknowledgements

I would like to thank my internal supervisor Stefan Johansson at the Department of

Computing Science at Umea University for being a huge help with the thesis layout and

for being meticulous when proofreading the thesis. I also want to thank Stefan for keeping

my hopes up when things have been going bad.

I would like to thank my external supervisor Isak Jonsson at Vidispine for helping me

with the idea for the subject of the thesis and for being understanding and helpful during

discussions about the study.

I would like to thank Codemill for providing me with a workspace and a computer at the

office, for the endless coffee, and of course for the popcorn every Friday.

Finally I would like to thank my girlfriend Josefine for always being there for me when I

had my bad days, and for sharing the good days with me.


Contents

Abstract

Acknowledgements

1 Introduction
  1.1 Motivation
  1.2 Problem Statement
  1.3 Purpose and Goals
  1.4 Thesis organization

2 Methodology
  2.1 Research
  2.2 Data collection
  2.3 Analysis and discussion

3 Background
  3.1 Data transfer
  3.2 Latency
  3.3 Packet loss
  3.4 Flow Control
  3.5 Network Congestion

4 Transport protocols and Congestion Control
  4.1 TCP
    4.1.1 TCP CUBIC
    4.1.2 TCP Westwood+
  4.2 UDP
  4.3 Open source transport protocols
    4.3.1 UDT
    4.3.2 Tsunami UDP

5 Testing
  5.1 Testing methodology and metrics
    5.1.1 TCP tests
    5.1.2 UDP tests
  5.2 Environment setup
    5.2.1 Equipment configuration
  5.3 TCP settings
    5.3.1 TCP Window scaling
    5.3.2 Buffer sizes
    5.3.3 SACK
    5.3.4 No metrics save
    5.3.5 Autotuning
  5.4 Tsunami settings
  5.5 Network topology
  5.6 Tools
    5.6.1 iPerf3
    5.6.2 TCP Probe
    5.6.3 netem

6 Results
  6.1 TCP tests
    6.1.1 CUBIC
    6.1.2 Westwood+
    6.1.3 CUBIC and Westwood+
  6.2 UDP tests
    6.2.1 Tsunami block sizes
    6.2.2 Tsunami and UDT

7 Analysis and Discussion
  7.1 Analysis
    7.1.1 Westwood+ and CUBIC
    7.1.2 Tsunami and UDT
    7.1.3 10 Mbps, 100 Mbps, and 1000 Mbps
    7.1.4 TCP and UDP
  7.2 Discussion
    7.2.1 Fairness
    7.2.2 Problems during the thesis

8 Conclusion
  8.1 Summary of Thesis Achievements
  8.2 Applications
  8.3 Future Work

Appendices
  Appendix A Network Settings

Bibliography

List of Tables

4.1 Available settings of Tsunami.
5.1 Host system specification.
5.2 Host system configuration.
5.3 VM configuration.


List of Figures

4.1 CUBIC window growth curve during CUBIC mode.
5.1 The network topology used during the tests.
6.1 Graphs showing TCP CUBIC throughput test results using default and optimized buffer sizes.
6.2 Graphs showing TCP Westwood+ throughput test results using default and optimized buffer sizes.
6.3 Graphs showing TCP Westwood+ and TCP CUBIC throughput test results with 100 Mbps bandwidth.
6.4 Graphs showing TCP Westwood+ and TCP CUBIC throughput tests with 1000 Mbps bandwidth.
6.5 Graphs showing throughput test results for Westwood+ (a) and CUBIC (b) run for 200 seconds with 1000 Mbps bandwidth for different latency and packet loss.
6.6 Graphs showing throughput test results for Westwood+ (a) and CUBIC (b) run for 200 seconds with 10 Mbps bandwidth for different latency and packet loss.
6.7 Graphs showing Tsunami UDP throughput tests using different block sizes for latency and packet loss.
6.8 Graphs showing Tsunami UDP and UDT throughput tests at 100 Mbps with different latency and packet loss.
6.9 Graphs showing Tsunami UDP and UDT throughput tests at 1000 Mbps with different latency and packet loss.
7.1 Graphs showing Westwood+ (a) and CUBIC (b) congestion window (cwnd) and slow start threshold (ssthresh) during a latency test with optimized buffer sizes, 1000 Mbps bandwidth, and 350 ms latency.
7.2 Graphs showing a closer look at Westwood+ (a) and CUBIC (b) congestion window (cwnd) and slow start threshold (ssthresh) during a latency test with optimized buffer sizes, 1000 Mbps bandwidth, and 350 ms latency.
7.3 Graphs showing Westwood+ (a) and CUBIC (b) congestion window (cwnd) and slow start threshold (ssthresh) during a packet loss test with optimized buffer sizes and 1000 Mbps bandwidth.
7.4 Graphs showing a closer look at Westwood+ (a) and CUBIC (b) congestion window (cwnd) and slow start threshold (ssthresh) during a packet loss test with optimized buffer sizes and 1000 Mbps bandwidth.
7.5 Graphs showing Westwood+ (a) and CUBIC (b) utilization of the available bandwidth for 10, 100, and 1000 Mbps with optimized buffer sizes during packet loss tests.

Chapter 1

Introduction

1.1 Motivation

Media content management within larger producers of media has become a necessity with the increasing amount of content being produced today. The quality of the media is also increasing, and as a result the size of the content grows. The media content is rarely stored only locally, but is shared with parties which can be located anywhere across the world.

A solution called Vidixplore1 has been developed by a company named Vidispine. This solution helps content producers to share and view folders of media files in the cloud. Vidixplore is accessible from a web interface where the user can upload media files (maximum file size 2 GB), view media files, and run various jobs on the media files. There is also a client which can be installed on Windows and Mac. This client allows for synchronisation of local folders to Vidixplore. Vidixplore also allows for synchronisation with other types of cloud storage providers, like Dropbox, Amazon S3, and Azure, for easy access between different cloud services.

The viewable media in the web interface are not the original uploaded files. Instead the media is made available for viewing by creating so-called proxies, which are transcoded low-resolution copies of the original file. These proxies are the files which are uploaded to the cloud as a first step. The owner or admin of the content can upload the whole original full-size media file. The proxy and the original file (if uploaded) can be downloaded through the web interface.

In Vidixplore the media files are transferred by the client by opening an SSH tunnel and sending the data using the TCP protocol. A known fact is that TCP uses flow control and congestion control, which decrease the transfer speed as the packet loss increases and increase the transfer speed as the packet loss decreases.

1https://www.vidixplore.com


Sharing a large amount of data over a long distance on a potentially unstable network, without losing any data integrity, can take a long time. A solution for transferring data from the client to the cloud that is faster than the current one is therefore sought for Vidixplore.

1.2 Problem Statement

Since the proxies, explained in the motivation, could be uploaded from anywhere in the world, possibly over an unreliable and unstable network, a problem presents itself. How do we transfer the data fast and reliably? Can we pin down the important aspects of network communication and data transfer which affect the way data is transferred in different network environments? What is required in order to transfer data fast and reliably over a long and unreliable network path? How do existing solutions handle these kinds of problems, and what makes the different solutions behave the way they do?

1.3 Purpose and Goals

The goal of this thesis is to analyse different transport protocols and their congestion control in order to examine and test how the protocols behave in different network environments with latency and packet loss. A more general discussion about what affects the transfer speed and which solutions seem to work in certain network environments is also included, in order to aid future attempts at creating a new network protocol. The tests, as well as the testing procedure itself, are analysed in order to find problems and improvements applicable when doing these kinds of tests.

1.4 Thesis organization

The thesis consists of eight chapters. In this section every chapter is briefly explained.

Chapter 1 — Introduction

In the introduction the motivation, purpose, and goals of the thesis are explained.

Chapter 2 — Methodology

The methodology chapter explains the methodology that was used during the thesis.

Chapter 3 — Background

The background chapter gives an introduction to the background of the thesis problem.

Chapter 4 — Transport protocols and Congestion Control

This chapter explains different transport protocols and congestion control algorithms

in detail.


Chapter 5 — Testing

This chapter explains how the testing was conducted and what configuration and tools were used.

Chapter 6 — Results

This chapter presents the results from the tests.

Chapter 8 — Conclusion

This chapter concludes the thesis. The chapter discusses the results and answers the questions that were asked before the work on the thesis began.


Chapter 2

Methodology

This chapter explains the three major parts of the thesis methodology.

2.1 Research

Facts and information are retrieved and gathered about technical details and previous work done in the field prior to this thesis. The research is done initially through various platforms which provide written articles and theses. Information is also gathered using Internet search engines.

2.2 Data collection

To analyse different existing transport protocols and congestion control algorithms, data is collected during several tests. The tests collect several different metrics for each test case. This data collection is conducted in order to be able to analyse and compare existing solutions and to reach the conclusions of the thesis.

2.3 Analysis and discussion

The method of analysis is mainly quantitative, comparing the collected data. Qualitative analysis is also conducted by reflecting on the gathered information and collected data. This analysis and discussion is an attempt to find the reasons for different network transport behaviour in different network environments, and an attempt to tie the thesis together, discussing how the problem statement has been treated in the thesis and how the thesis has contributed new knowledge to the field.


Chapter 3

Background

In order to understand the underlying problem, some basic background information has to be provided. This chapter explains the basics of data transfer and what affects the transfer speed.

3.1 Data transfer

When transferring data over a network, the time it takes to transfer the data depends on

a number of factors like:

• Bandwidth

• Router configuration and hardware

• Transport media

• Distance from source to destination

• Protocol implementation

Bandwidth is a measure of how much data can pass through a given channel per unit of time, typically bits per second (bps). The router configuration and hardware introduce limitations on how much data the router can handle and pass on to the destination. The transport medium, such as copper cable, fiber optics, WiFi, cellular networks, or satellite links, has an impact on the transfer rate. The time taken for data to be transferred from a source to a destination increases with the distance between the two endpoints. The last factor is the protocol implementation and the algorithms used: the flow control, congestion control, and other mechanisms used internally in the transport protocol implementation affect the speed of the transfer.


3.2 Latency

Latency is a measure of the time it takes for data to travel from the source to the destination.

On the path the data travels from the source to the destination, it may pass through multiple routers. Every router on the path introduces a small delay, and these delays add up and contribute to the latency.

The distance from the source to the destination also adds to the latency. Over a regular land-based fiber optic network connection, data travels slower than the speed of light in vacuum due to the refractive index of the glass. The speed of light in glass fiber is approximately 206 856.48 km/s. For data that has to travel to the other side of the earth, a distance of 20 037.50 km, this alone would introduce a latency of 96.86 ms one way. The router delay and the distance delay add up to the final latency.
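As a small worked example, the one-way propagation delay used above follows directly from the distance and the propagation speed (a sketch using the figures from this section, not part of the test setup):

    # Sketch: one-way propagation delay over glass fiber, using the figures
    # from this section. Not part of the thesis test environment.
    SPEED_IN_FIBER_KM_S = 206_856.48   # approximate speed of light in glass fiber

    def one_way_latency_ms(distance_km: float) -> float:
        return distance_km / SPEED_IN_FIBER_KM_S * 1000.0

    print(one_way_latency_ms(20_037.50))   # roughly 96.9 ms one way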

3.3 Packet loss

Packet loss is the event of data, bundled in so-called packets, not reaching its destination. This event can have multiple causes, but there are two major ones.

The first major cause is network congestion. Network congestion happens when a router or a switch receives more packets than it can handle. Routers buffer the incoming data, and when the buffer is full, any incoming packet is dropped, causing a packet loss.

The second major cause is transmission errors (bit errors). This means that during the transmission of a packet from the source to the destination, some bits of the data are flipped from 0 to 1 or 1 to 0. When the packet arrives at the destination, the data is checked and the packet is marked as faulty. A faulty packet is dropped, resulting in a packet loss. The transmission error ratio is extremely small for fixed networks, but when using, for example, WiFi or cellular networks, the ratio is much higher.

3.4 Flow Control

Flow control in data communications manages the rate of data transfer between two nodes in order to prevent a fast sender from overwhelming a slow receiver. This can be done in different ways, and flow control can be classified by whether or not the receiving node sends feedback to the sending node.

TCP uses a sliding window flow control where the receiver allocates buffer space for n frames. The sender can send and the receiver can accept n frames without having to wait for an acknowledgement. Every frame is numbered, and the receiver acknowledges by sending an acknowledgement with the number of the next frame it expects. The sender then knows that the receiver can receive n frames from the frame number specified in the acknowledgement and onward. The receive window "slides" to the next frames to receive, hence the name sliding window.
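The sender-side bookkeeping of such a sliding window can be sketched as follows (a simplified illustration of the idea described above, not TCP's actual implementation):

    # Simplified sliding-window sender bookkeeping (illustration only,
    # not TCP's actual implementation).
    class SlidingWindowSender:
        def __init__(self, window_size: int):
            self.window = window_size   # n frames the receiver can buffer
            self.base = 0               # oldest unacknowledged frame number
            self.next_seq = 0           # next frame number to send

        def can_send(self) -> bool:
            # At most `window` frames may be outstanding (sent but not yet acknowledged).
            return self.next_seq < self.base + self.window

        def send_frame(self):
            assert self.can_send()
            self.next_seq += 1

        def on_ack(self, next_expected: int):
            # The receiver acknowledges with the number of the next frame it expects;
            # the window "slides" forward to that frame.
            self.base = max(self.base, next_expected)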

3.5 Network Congestion

Network congestion is a state where a node or link carries more data than it can handle, causing a deterioration of the network service quality. This can result in queuing delays, frame or data packet loss, and blocking of new connections. Network congestion is a global phenomenon and affects every router and host on the subnet.

Network congestion, and in turn congestive collapse, is usually countered by something called congestion control. Congestion control works by reducing the rate at which packets are transferred to a rate that fits the network, and it ensures that other nodes across the network get a fair amount of access to the available network resources. Congestion control exists in many different implementations with many different algorithms.


Chapter 4

Transport protocols and Congestion Control

In order to handle data communications between different terminals on the Internet, transport protocols are used. Two of the most common are TCP and UDP, which are "Transport Layer" protocols in the Open Systems Interconnection (OSI) model. TCP and UDP are very different in how they handle data communication and data transfer. Several open source transport protocols have been developed in order to improve the performance and speed of data transfer on the Internet. Most of them are based on UDP for data transfer with a custom reliability layer on top of it. Congestion control algorithms are commonly used in order to protect the network from network congestion, which happens when the network is carrying more data than it can handle. This could potentially lead to delay, packet loss, and by extension decreased throughput.

This chapter explains two different transport protocols, TCP and UDP, and different congestion control algorithms which aim to protect the network from congestion.

4.1 TCP

In its core version, TCP uses an additive increase/multiplicative decrease (AIMD) sliding window algorithm for congestion control[2]. AIMD combines linear growth of the congestion window with an exponential reduction when congestion takes place.

The size of the congestion window (called cwnd) is used in TCP to determine the number of bytes that can be outstanding (in flight between the source and destination and not yet acknowledged) at any time. The size of cwnd is usually set to 1, 2, or 10 packets in the beginning and is then increased in two different ways depending on which phase the transmission currently is in, either slow start or congestion avoidance[12].

Another variable, the slow start threshold (called ssthresh), is used to determine whether the slow start or the congestion avoidance algorithm is used to control data transmission. If cwnd < ssthresh the slow start algorithm is used, and if cwnd ≥ ssthresh the congestion avoidance algorithm is used.

TCP starts with the slow start phase where the cwnd is increased with every incoming

acknowledgement (ACK), effectively doubling the window size each round trip time (RTT)

until a loss is detected, or the receiver’s window is the limiting factor.

After the slow start phase the transmission enters the congestion avoidance phase. In this phase cwnd is, as AIMD states, increased by an additive factor a, typically every RTT or at every ACK reception, such that the window at time t + 1 is w(t + 1) = w(t) + a. The additive increase parameter a is typically one MSS (maximum segment size).

When an ACK has not been received by the sender before a set timeout for that segment, the non-acknowledged segment is retransmitted, ssthresh is set to half of cwnd, and cwnd is set to ssthresh.
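A minimal sketch of the slow start and AIMD behaviour described above, in units of segments (a simplified illustration following this description; real TCP implementations track these values in bytes and handle many more cases):

    # Simplified slow start + AIMD congestion window update, in MSS units.
    # Illustration of the description above, not a real TCP implementation.
    class BasicTcpCwnd:
        def __init__(self, initial_cwnd=10, initial_ssthresh=64):
            self.cwnd = initial_cwnd
            self.ssthresh = initial_ssthresh

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += 1               # slow start: +1 per ACK (doubles each RTT)
            else:
                self.cwnd += 1 / self.cwnd   # congestion avoidance: roughly +1 per RTT

        def on_timeout(self):
            # As described in the text: ssthresh is halved and cwnd follows it.
            self.ssthresh = self.cwnd / 2
            self.cwnd = self.ssthresh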

Basic TCP’s slow start and AIMD algorithm congestion control are too conservative to

fully utilize the high-speed network that is present at this time.

A major problem with basic TCP is that it is biased against connections with long RTTs [3]. Connections with long RTTs will receive ACKs more slowly than connections with short RTTs, and therefore their congestion windows will grow more slowly than those of connections with shorter RTTs. This gives a connection with a long RTT a smaller portion of the bandwidth.

Another major problem with TCP is that it does not distinguish between packets lost due to real congestion and packets lost due to transmission errors. All packet loss is considered to be caused by congestion, and the sending rate is therefore aggressively decreased, even though there might not actually be congestion.

Several modifications have been made to basic TCP in the form of different congestion avoidance algorithms, each trying to improve the performance of TCP in different environments.

One modification that has been made is called fast retransmit [12]: when the sender receives three duplicate acknowledgements (DUPACKs), the lost or out-of-order segment is retransmitted immediately, without waiting for the timeout for that segment. Another modification is called fast recovery, where the TCP sender goes into fast recovery[12] after fast retransmit. This sets cwnd to ssthresh plus 3 times the segment size. Each time a DUPACK is received, the size of cwnd is increased by one segment size. Then a new packet is sent if allowed by the size of cwnd. When the next ACK that acknowledges new data is received, cwnd is set back to ssthresh, the value computed when fast recovery was entered. This received ACK acknowledges all the intermediate segments sent between the packet that was lost and the receipt of the first DUPACK.

Fast retransmit and fast recovery are described in [12]. The fast retransmit algorithm first appeared in the 4.3BSD Tahoe release, where it was followed by slow start. The fast recovery algorithm appeared in the 4.3BSD Reno release.

Changes to the TCP congestion control have been made by two algorithms called TCP

CUBIC and TCP Westwood+, which are described in the following sections.

4.1.1 TCP CUBIC

CUBIC[7] is a congestion control protocol for TCP developed to improve the performance of basic TCP on connections with high bandwidth and high latency. The protocol modifies the linear window growth function of TCP to be a cubic function in order to improve scalability. CUBIC also improves fairness among flows with different RTTs (round trip times) by making the window growth function independent of the RTT. Instead, the window growth function is a cubic function of the elapsed time since the last congestion event.

TCP CUBIC features two different modes while in congestion avoidance, TCP mode and CUBIC mode. Each mode corresponds to a different way of increasing the window size. The window size of standard TCP can be calculated in terms of the elapsed time t as

Wtcp = Wmax × (1 − β) + (3β / (2 − β)) × (t / RTT)    (4.1)

where Wmax is the maximum window size, β is a multiplicative decrease factor and RTT

is the calculated round-trip-time. When an ACK is received, if cwnd < ssthresh then

cwnd = cwnd + 1. This is the regular TCP slow start. If the current cwnd is less than

Wtcp when an ACK is received, TCP mode is active and the cwnd is set to Wtcp. If this

is not the case, CUBIC mode is active. When CUBIC mode is active, upon each ACK

reception the current cwnd is set to

W (t) = C(t−K)3 +Wmax (4.2)

where C is a cubic parameter, t is the elapsed time from the last congestion event. K is

the time period that the above function takes to increase cwnd to Wmax when there are

no further loss events and is

K = (Wmax × β / C)^(1/3).    (4.3)

W(t) is designed to quickly converge to Wmax and then plateau for a while before increasing again to probe for an increase in available bandwidth and a new Wmax.
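The CUBIC-mode window target of equations (4.2) and (4.3) can be sketched as follows (C and β are example values roughly corresponding to the Linux implementation; this is an illustration, not the kernel's fixed-point code):

    # Sketch of the CUBIC window target from equations (4.2) and (4.3).
    # C and BETA are example constants (roughly the values used by the
    # Linux implementation); illustration only.
    C = 0.4      # cubic scaling parameter
    BETA = 0.3   # multiplicative decrease factor

    def cubic_window(t: float, w_max: float) -> float:
        """Target cwnd t seconds after the last congestion event (eq. 4.2)."""
        k = (w_max * BETA / C) ** (1.0 / 3.0)   # time to climb back to Wmax (eq. 4.3)
        return C * (t - k) ** 3 + w_max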


Figure 4.1: CUBIC window growth curve during CUBIC mode.

Figure 4.1 shows CUBIC's window growth curve during CUBIC mode. When cwnd < Wmax

the growth is a concave curve called the steady state behavior. In this state the window

grows rapidly in the beginning and slows down closer to Wmax. When cwnd > Wmax

the growth is a convex curve called max probing, where CUBIC has passed the limit of

where the last congestion happened, and is probing for any possible increase in available

bandwidth.

If a packet loss occurs, i.e., in a congestion event, Wmax is set to the current cwnd, and then ssthresh and cwnd are set to cwnd × (1 − β). By default in the Linux kernel, CUBIC applies a feature called fast convergence which is used to improve the speed with which CUBIC converges to a fair bandwidth share when a new flow joins the network. When fast convergence is enabled and a packet loss occurs, Wmax is set to cwnd × (2 − β)/2 instead of cwnd.

In recent Linux kernels, CUBIC adopts something called HyStart (Hybrid Slow Start).

HyStart is a modification of the standard TCP slow start. Instead of waiting for a packet

loss to occur before exiting slow start, two other indicators are used. The first indicator

works by the sender, S, sending N back-to-back probe packets to the receiver, R. If

N > 2 the probe packets are called a packet train. The length of the train can be written

as ∆(N) =∑N−1

k=1 δk, where N is the number of packets in a train and δk is the inter-arrival

time between packet k and k + 1. Using this length, R can measure the bandwidth b(N)

by

b(N) =(N − 1)L

∆(N). (4.4)

The ACK train measurement allows S to calculate the dispersion roughly by accumulation

the inter-arrival times of corresponding ACKs. For simplicity, suppose the receiver doesn’t

delay an ACK. Then the ACK train length can be written as Λ(N) =∑N−1

k=1 λk where

λk is the inter-arrival time between an ACK k and k + 1. S can from (4.4) measure the

ACK train length Λ(N). When Λ(N) is the same as the minimum forward delay of the

link, b(N)Λ(N) is the bandwidth delay product (BDP) of the forward link which holds

Page 26: Data transfer over high latency and high packet ... - umu.se · Ume a University Department of Computing Science SE-901 87 UME A SWEDEN Data transfer over high latency and high packet

12 Chapter 4. Transport protocols and Congestion Control

(N − 1)L packets in flight. Λ(N) can be used to infer whether the packets in flight is

approaching the BDP of a path, and this is when slow start is exited.

The other indicator is based on detection of increasing packet delays during slow start.

A TCP sender S measures an increase in packet delays of the first packets in each RTT

round during slow start. When the increase in delays is over a certain threshold, it implies

that other flows are building up the bottleneck queues and the slow start is exited.
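A rough sketch of the two exit conditions described above (purely illustrative; the thresholds and the exact way the Linux kernel samples delays differ in detail):

    # Rough sketch of the two HyStart exit indicators described above.
    # Thresholds and sampling are illustrative, not the Linux kernel's values.
    def ack_train_exit(ack_interarrival_times, min_forward_delay):
        # Indicator 1: the ACK train length approaches the minimum forward
        # delay, i.e. the packets in flight approach the BDP of the path.
        train_length = sum(ack_interarrival_times)   # corresponds to Λ(N)
        return train_length >= min_forward_delay

    def delay_increase_exit(first_rtt_samples_this_round, base_rtt, threshold=0.125):
        # Indicator 2: the delay of the first packets in an RTT round has grown
        # noticeably above the smallest RTT seen so far.
        current = min(first_rtt_samples_this_round)
        return current > base_rtt * (1 + threshold)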

4.1.2 TCP Westwood+

TCP Westwood+ is an improvement of TCP Westwood and is a sender-side modification

of TCP Reno. TCP Westwood was developed to optimize the performance of the basic

TCP congestion control over both wired and wireless networks. It relies on end-to-end

bandwidth estimation to discriminate the cause of packet loss in order to improve transfer

over wireless networks with lossy links.

TCP Westwood continuously measures, at the TCP source, the rate of the connection by monitoring the rate of returning acknowledgements. This estimate of the rate is then used to compute the congestion window and slow start threshold after a congestion event, that is, either three duplicate ACKs or a timeout. The congestion window and slow start threshold selected are consistent with the effective bandwidth used at the time congestion is experienced. This has been named faster recovery.

The bandwidth estimation algorithm of TCP Westwood presented in [9] was shown not to work properly in the presence of ACK compression [4], and therefore Westwood+ was developed, with a different bandwidth estimation algorithm as the only difference.

The bandwidth estimation algorithm proposed with Westwood+ relies on bandwidth samples collected every RTT, instead of after each congestion event as in Westwood. The amount of data Dk which is acknowledged during the last RTT = Tk is counted, and the bandwidth sample Bk is calculated as Bk = Dk/Tk. A minimum value of 50 ms has been set for Tk in order to avoid potential numerical instabilities due to small RTT values.

Congestion depends on low frequency components of the available bandwidth [8], and therefore the Bk samples are filtered through a discrete time low-pass filter in order to obtain an estimate of the used bandwidth as follows:

BWEk = (7/8) × BWEk−1 + (1/8) × Bk    (4.5)

The following pseudo-code explains the algorithm of Westwood+.


Algorithm 1 Westwood+ algorithm

procedure On ACK received
    if cwnd < ssthresh then
        cwnd = cwnd + 1
    end if
    if cwnd > ssthresh then
        cwnd = cwnd + 1/cwnd      (approximately 1 per RTT)
    end if
end procedure

procedure On 3 DUPACKs received by sender
    ssthresh = (BWE × RTTmin)/MSS
    if ssthresh < 2 then
        ssthresh = 2
    end if
    cwnd = ssthresh
end procedure

procedure On coarse timeout expiry
    ssthresh = (BWE × RTTmin)/MSS
    if ssthresh < 2 then
        ssthresh = 2
    end if
    cwnd = 1
end procedure

In the Linux implementation, Westwood+ modifies the rate-halving mechanism of Reno by exploiting the graceful cwnd-decreasing phase of the mechanism, while not allowing cwnd to become smaller than the bandwidth estimate times the minimum round-trip time measured by the sender: BWE × RTTmin. This means that after a congestion event the cwnd is reduced by one every two ACKs, down to BWE × RTTmin.
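The bandwidth estimation and the resulting ssthresh can be sketched as follows (a minimal illustration of equation (4.5) and Algorithm 1, with amounts in bytes and times in seconds; not the Linux kernel code):

    # Minimal sketch of Westwood+ bandwidth estimation (eq. 4.5) and the
    # ssthresh rule from Algorithm 1. Illustration only.
    class WestwoodPlusEstimator:
        def __init__(self):
            self.bwe = 0.0        # filtered bandwidth estimate BWE (bytes/second)
            self.rtt_min = None   # smallest RTT observed so far (seconds)

        def on_rtt_sample(self, acked_bytes, rtt):
            t_k = max(rtt, 0.050)                    # 50 ms floor on Tk
            sample = acked_bytes / t_k               # Bk = Dk / Tk
            self.bwe = 7.0 / 8.0 * self.bwe + 1.0 / 8.0 * sample   # eq. (4.5)
            self.rtt_min = rtt if self.rtt_min is None else min(self.rtt_min, rtt)

        def ssthresh_after_congestion(self, mss):
            # Algorithm 1: ssthresh = (BWE x RTTmin) / MSS, never below 2 segments.
            return max(self.bwe * self.rtt_min / mss, 2)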

4.2 UDP

User Datagram Protocol (UDP) as described in [11] is a transport protocol that uses a

simple connectionless transmission model. It does not provide any handshake functionality

and does not provide any guarantee of delivery, ordering, duplicate protection or congestion

control. It does however provide checksums for data integrity.

UDP is generally faster than TCP due to UDP’s lack of flow control and error correction.

UDP does not handle lost or reordered data and uses whatever data arrives. This is

useful for streaming media, for example, where it usually does not matter if some data is

lost, as long as the receiver does not have to wait for data to arrive.


Since UDP does not provide any congestion control, it does not share the available bandwidth fairly, as TCP's congestion control intends to do.

4.3 Open source transport protocols

This section describes two open source implementations of transport protocols, UDT and

Tsunami UDP.

4.3.1 UDT

UDT is an open source reliable UDP data transport protocol developed for distributed data-intensive applications over wide-area high-speed networks. In [5] from 2007, the creators of UDT, Yunhong Gu and Robert L. Grossman, describe the protocol and state that TCP underutilizes the network bandwidth over high-speed connections with long delays. This underutilization is the reason for the development of UDT.

UDT employs congestion control and provides users with an application-level protocol with support for user-configurable control algorithms, APIs, and reliable data streaming. The native congestion control algorithm implemented by the creators of UDT uses rate-based congestion control (rate control) and window-based flow control to regulate the outgoing traffic, and it makes use of acknowledgements for congestion control and data reliability. The protocol uses timer-based selective acknowledgement, which generates acknowledgements at a fixed interval as long as new data packets are continuously received. This means that the faster the transfer speed, the smaller the share of the bandwidth consumed by acknowledgement overhead, since more packets will be transferred for every acknowledgement. With very low bandwidth UDT behaves like other protocols which send acknowledgements for every packet.

In order to handle packet loss UDT uses negative acknowledgements, NAKs, which are

generated when the receiving side detects a packet loss. A NAK is used to report missing

packets back to the sender, so that the sender can react as quickly as possible. This

mechanism is similar to TCP Selective Acknowledgements (SACK)[10]. The complete

algorithm UDT uses upon packet loss can be seen in [5].

For congestion control UDT uses a custom additive increase multiplicative decrease (AIMD) algorithm, called DAIMD (decreasing AIMD). For each rate control interval, which controls the rate of transmission, if there is no negative feedback from the receiver but there is positive feedback in the form of acknowledgements, the packet-sending rate x (packets/second) is increased by a(x):

x = x + a(x)    (4.6)


where a(x) is non-increasing and approaches 0 as x increases.

The fixed rate control interval of UDT is SYN, or the synchronization time interval, which is set to 0.01 seconds. UDT rate control is a special DAIMD algorithm obtained by specifying a(x) as

a(x) = 10^⌈log(L − C(x)) − τ⌉ × (1500 / S) × (1 / SYN)    (4.7)

where x has the unit of packets/second, L is the link capacity measured in bits/second, S is the UDT packet size (in terms of IP payload) in bytes, C(x) is a function that converts the unit of the current sending rate x from packets/second to bits/second (C(x) = x × S × 8), and τ is a protocol parameter which is 9 in the current protocol specification. The factor 1500/S balances the impact of flows with different packet sizes; UDT treats 1500 bytes as a standard packet size.

For any negative feedback the sending rate is decreased by a constant factor 0 < b < 1 :

x = (1 − b) × x. (4.8)

This strategy is proven to be globally asynchronously stable and will converge to fairness

equilibrium[6].
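The rate update in equations (4.6)-(4.8) can be sketched as follows (the base-10 logarithm and the example decrease factor are assumptions; this is an illustration of the formulas above, not UDT's actual implementation):

    # Sketch of UDT's DAIMD rate control following equations (4.6)-(4.8).
    # The logarithm is assumed to be base 10 and the decrease factor b is
    # just an example value; illustration only.
    import math

    SYN = 0.01               # rate control interval (seconds)
    TAU = 9                  # protocol parameter
    STANDARD_PACKET = 1500   # bytes

    def increase(x_pkts_per_s, link_capacity_bps, packet_size_bytes):
        """a(x): additive increase per rate control interval (eq. 4.7)."""
        c_x = x_pkts_per_s * packet_size_bytes * 8       # C(x) in bits/second
        headroom = max(link_capacity_bps - c_x, 1.0)     # guard against log of <= 0
        return (10 ** math.ceil(math.log10(headroom) - TAU)
                * STANDARD_PACKET / packet_size_bytes * (1 / SYN))

    def on_positive_feedback(x, link_capacity_bps, packet_size_bytes):
        return x + increase(x, link_capacity_bps, packet_size_bytes)   # eq. (4.6)

    def on_negative_feedback(x, b=0.125):
        # b is a constant in (0, 1); 0.125 is only an example value.
        return (1 - b) * x                                             # eq. (4.8)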

The UDT congestion control is only enabled once the first NAK is received or the flow window has reached its maximum size. This is the slow start period of the UDT congestion control. The initial flow window size is 2 and it is updated to the number of acknowledged packets every time an ACK is received. Slow start only occurs at the beginning of the UDT connection, and once the congestion control is enabled it will not occur again.

UDT does not react to the first packet loss during a congestion event. It will only decrease the sending rate if there is more than one packet loss in one congestion event. This is done to handle noisy links, where the reason for lost packets is noise rather than congestion.

4.3.2 Tsunami UDP

The Tsunami UDP protocol[1] was created in 2002 by the Pervasive Technology Labs of

Indiana University. Since the release several people have improved the code and have

added more functionality.

Tsunami was developed for high speed transfer over network paths that have a high bandwidth-delay product. It was created to address the problem that TCP is not very well suited for network paths with a large bandwidth-delay product. It is built with a client-server architecture and a command line interface which allows the client to configure settings and retrieve files from the server. Tsunami allows the user to specify several settings which affect the performance and behaviour of Tsunami; Table 4.1 shows some of these settings.

blocksize   how large UDP blocks to use
rate        the target transfer rate
error       maximum error rate to maintain by rate throttling
slowdown    how fast to start throttling the rate
speedup     how fast to recover and move up towards the target rate again
history     weighting of past error rates in the throttling algorithm

Table 4.1: Available settings of Tsunami.

Tsunami performs a file transfer by sectioning the file into blocks with a default size of 32768 bytes. The communication between server and client is performed through TCP and the data is transferred over UDP. The client holds most of the protocol intelligence and the server simply sends the blocks that the client requests. The client keeps track of which blocks it has received with a bitfield containing one bit per block, where 0 means not received and 1 means received. If a block with a higher block number than expected is received, the missing intervening blocks are queued up for retransmission. At regular intervals the retransmission list is processed. Blocks that have been received in the meantime are removed from the list, and the remaining blocks in the list are requested as normal block transmission requests to the server. When a new pending retransmission is added to the client's list of retransmissions and the list size exceeds a hard-coded limit, the entire transfer is re-initiated from the first block in the list, that is, the earliest block of the file that has not been successfully delivered.
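The client-side block bookkeeping described above can be sketched roughly as follows (the names and the retransmission limit are made up for illustration and are not taken from the Tsunami source code):

    # Simplified sketch of the Tsunami client's block bookkeeping described
    # above. Names and the retransmission limit are illustrative only.
    RETRANSMIT_LIMIT = 2048   # stand-in for Tsunami's hard-coded limit

    class TsunamiClientState:
        def __init__(self, num_blocks: int):
            self.received = [False] * num_blocks   # bitfield: one flag per block
            self.retransmit_queue = []             # blocks queued for retransmission
            self.next_expected = 0

        def on_block(self, block_no: int):
            # A block that skips ahead leaves gaps; queue the missing blocks.
            for missing in range(self.next_expected, block_no):
                if not self.received[missing] and missing not in self.retransmit_queue:
                    self.retransmit_queue.append(missing)
            self.received[block_no] = True
            self.next_expected = max(self.next_expected, block_no + 1)

        def process_retransmissions(self):
            # Drop blocks received in the meantime; the rest are re-requested.
            self.retransmit_queue = [b for b in self.retransmit_queue
                                     if not self.received[b]]
            if len(self.retransmit_queue) > RETRANSMIT_LIMIT:
                return "restart_from", min(self.retransmit_queue)
            return "request", list(self.retransmit_queue)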

Tsunami uses a rate-based congestion control mechanism in which the client regularly sends error rate information to the server. The error rate is based on the number of blocks placed in the retransmission queue in the last period, the current size of the ring buffer, and the previous value of the error rate. This information is used by the server to adjust the inter-packet delay, which is directly linked to the transmission rate.

Page 31: Data transfer over high latency and high packet ... - umu.se · Ume a University Department of Computing Science SE-901 87 UME A SWEDEN Data transfer over high latency and high packet

4.3. Open source transport protocols 17

Algorithm 2 Inter-packet delay adjustment

procedure On error rate reception
    if errorrcvd > errormax then            (error rate is higher than the maximum: decrease the rate)
        factor1 = (slowernum/slowerden) − 1
        factor2 = (1 + errorrcvd − errormax)/(100000 − errormax)
        ipd = ipd × (1 + (factor1 × factor2))
    else                                    (increase the rate)
        ipd = ipd × (fasternum/fasterden)
    end if
end procedure

Algorithm 2 shows how the inter-packet delay is adjusted when an error rate report is received by the server, where ipd is the inter-packet delay, errorrcvd is the error rate received from the client, errormax is the maximum error rate to maintain by rate throttling, slowernum and slowerden are the numerator and denominator of the slowdown setting, and fasternum and fasterden are the numerator and denominator of the speedup setting.

The server can gradually slow down the rate when the client reports that it is too slow in

receiving and processing UDP packets.

Tsunami allows the user to limit the transfer rate in order not to overflow the network

link, and due to the simplicity of the algorithm Tsunami will flood the network if the

transfer rate is set too high. There is no way for Tsunami to reach fairness equilibrium

automatically.

Page 32: Data transfer over high latency and high packet ... - umu.se · Ume a University Department of Computing Science SE-901 87 UME A SWEDEN Data transfer over high latency and high packet

Chapter 5

Testing

One goal of this thesis is to experimentally investigate how different protocols and congestion control algorithms behave and perform under different network conditions with respect to latency, packet loss, and bandwidth. Therefore an environment which allows these different network conditions to be simulated and measured has been set up.

This chapter explains how the testing was conducted and describes the details of the tests.

5.1 Testing methodology and metrics

The tests are conducted using the different transport protocols and congestion control algorithms explained in Chapter 4.

In the tests the packet loss is specified as the probability of packet loss, not the actual packet loss that was observed during the tests. This makes the tests correspond better to real-world network environments. As a side effect, randomness is introduced into the tests. This randomness is handled by performing multiple tests: every test case is run several times and the average result for each test case is the result presented. The main metric that is captured is throughput. The throughput is measured over 100% of the data sent, including overhead and control messages.

5.1.1 TCP tests

The TCP tests are conducted by sending a stream of data with the tool iPerf3, see Section 5.6. The streaming of data is run for 64 seconds or 204 seconds, with the first 4 seconds omitted from the throughput result in order to eliminate slow start from the result.


When conducting the TCP tests secondary metrics are captured. These metrics are cwnd

(congestion window) size and ssthresh (slow start threshold) size.
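A single TCP test of this kind could be driven as in the sketch below (the iPerf3 flags and the JSON fields used here are assumptions for illustration; the thesis's actual test scripts are not reproduced):

    # Sketch of driving one TCP throughput test with iPerf3 from Python.
    # Flag choices and JSON fields are assumptions for illustration.
    import json, subprocess

    def run_tcp_test(server_ip, duration_s=64, omit_s=4, algorithm="cubic"):
        result = subprocess.run(
            ["iperf3", "-c", server_ip,
             "-t", str(duration_s),   # total streaming time (64 or 204 s in the tests)
             "-O", str(omit_s),       # omit the first seconds to exclude slow start
             "-C", algorithm,         # congestion control: e.g. cubic or westwood
             "-J"],                   # JSON output for easy parsing
            capture_output=True, text=True, check=True)
        data = json.loads(result.stdout)
        return data["end"]["sum_received"]["bits_per_second"]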

5.1.2 UDP tests

The UDP tests are conducted by sending a 1GB file between the sender and receiver.

Disk writes are deactivated in order to eliminate any bottlenecks that may occur from

disk limitations.

5.2 Environment setup

This section describes how the testing environment is configured.

5.2.1 Equipment configuration

The tests were conducted using a host system running two virtual machines (VM). These

VMs were used to test the different transport protocols and congestion control algorithms

by sending data between them while measuring the throughput achieved during different

simulated network conditions.

5.2.1.1 Host

The host system specification is listed in Table 5.1 and the host configuration is listed in

Table 5.2.

Computer            Aspire V3-772
CPU                 Intel(R) Core(TM) i7-4702MQ CPU @ 2.20GHz
Memory              8 GiB DDR3 1600 MHz
Ethernet Interface  NetLink BCM57780 Gigabit Ethernet PCIe
ATA Disk            TOSHIBA MQ01ABD0 500GB

Table 5.1: Host system specification.

Operating system   Architecture   Version   Linux kernel
Ubuntu Desktop     64-bit         14.04.1   3.19.0-59-generic

Table 5.2: Host system configuration.

5.2.1.2 Guest VM

For the conducted experiment, two virtual machines were set up using Oracle VM VirtualBox1, with both VMs using the configuration described in Table 5.3.

1https://www.virtualbox.org/


Operating system   Ubuntu Server
Architecture       64-bit
Version            14.04.4
Linux kernel       4.2.0-27-generic
Memory             768 MB
Network hardware   Intel PRO/1000 MT Desktop (82540EM)

Table 5.3: VM configuration.

VirtualBox allows the VM’s to run in different network modes. The network mode used

in the tests is Internal Network. This internal network mode allows the virtual machines

to communicate with each other but not with anything else, not even the host system.

This network mode was chosen to minimise any external effects on the tests.

5.3 TCP settings

The Linux kernel allows for heavy modification and optimization of the TCP/IP stack.

The settings used affect the throughput in different ways when using TCP as the transport

protocol. This section explains the most important settings used in the tests. The complete

list of settings, except buffer size settings, can be found in Appendix A.

5.3.1 TCP Window scaling

net.ipv4.tcp_window_scaling = 1

This option, if enabled, increases the receive window size that is allowed in TCP above

65,535 bytes to a maximum of 1,073,725,440 bytes. It increases the throughput of networks

with a high bandwidth and high latency with a bandwidth delay product above 65,535

bytes.

5.3.2 Buffer sizes

The buffer sizes are the most crucial settings when tuning throughput. They decide how much unacknowledged data the receive and send buffers can hold, and this limits the maximal theoretical throughput.

The Bandwidth Delay Product (BDP) of a connection can be described as how much data must be in flight in order to efficiently use the bandwidth. For example, if transmitting data over a network with a bandwidth of 100 Mbps and an RTT delay of 500 ms, the BDP is calculated as 12500 [KBps] × 500 [ms] = 6250000 [bytes]. This means that in order to efficiently use the bandwidth of 100 Mbps with 500 ms delay, the maximal size of the send and receive buffers must be set to at least 6250000 bytes.
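The calculation generalizes directly; a small helper like the following can be used to compute the minimum buffer size for a given bandwidth and delay (a sketch; the exact sysctl values used in the tests are listed in the following subsections):

    # Sketch: bandwidth-delay product as a lower bound for the TCP buffer size.
    def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
        bytes_per_second = bandwidth_mbps * 1_000_000 / 8
        return int(bytes_per_second * rtt_ms / 1000)

    print(bdp_bytes(100, 500))    # 6250000 bytes, as in the example above
    print(bdp_bytes(1000, 500))   # 62500000 bytes, close to the 1000 Mbps settings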


During the tests the buffer size settings have been deliberately set to different values in order to test how these settings affect the test results.

5.3.2.1 Default

net.ipv4.tcp_rmem = 4096 87380 5456000
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_max = 212992
net.core.rmem_max = 212992

These are the default settings for buffer sizes for the configuration described in Section 5.2.

These settings are tested in order to see what is achievable without tuning these settings,

and to compare against different settings.

5.3.2.2 100 Mbps

net.ipv4.tcp_rmem = 4096 87380 6295520
net.ipv4.tcp_wmem = 4096 87380 6295520
net.core.wmem_max = 6295520
net.core.rmem_max = 6295520

This setting is used for testing 100 Mbps bandwidth.

5.3.2.3 1000 Mbps

net.ipv4.tcp_rmem = 4096 87380 62505520
net.ipv4.tcp_wmem = 4096 87380 62505520
net.core.wmem_max = 62505520
net.core.rmem_max = 62505520

This setting is used for testing 1000 Mbps bandwidth.

5.3.3 SACK

net.ipv4.tcp_sack = 1

Selective Acknowledgement (SACK)[10] is a setting especially developed for handling lossy connections. It allows the receiver to acknowledge specific data, which in turn allows the sender to retransmit only the specific parts of the TCP window with lost data, and not the whole TCP window as core TCP does.


5.3.4 No metrics save

net.ipv4.tcp_no_metrics_save = 1

TCP is able to save several different metrics from a connection when it is closed. These

metrics are then used when a new TCP connection is established in order to set the initial

conditions for that new connection.

In the tests conducted, no_metrics_save is enabled so that the TCP test results are independent of any previous test.

5.3.5 Autotuning

net.ipv4.tcp_moderate_rcvbuf = 1

This setting allows the receive buffer size to be updated dynamically for each connection (up to the maximum in tcp_rmem). The buffer is automatically sized to match what the path requires for full throughput.

net.ipv4.tcp_mem = 7992 10656 15984

This setting defines how the TCP stack should behave at different levels of memory usage. If the memory page usage is below the first value, the TCP stack does nothing. If the memory page usage exceeds the second value, the TCP stack moderates its memory consumption until the memory page usage drops below the first value. The third value specifies the maximum number of pages that TCP will allocate.

These values are left at their default values, which are calculated at boot time from the amount of available memory.
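As an illustration of how the three thresholds are interpreted, a simplified sketch is given below (it mirrors the description above, not the actual kernel implementation):

# Simplified sketch of the tcp_mem thresholds (values are in memory pages).
TCP_MEM_LOW, TCP_MEM_PRESSURE, TCP_MEM_HIGH = 7992, 10656, 15984

under_pressure = False  # the stack remembers that it has entered memory pressure

def update_memory_state(pages_in_use: int) -> str:
    # Return what the TCP stack does at the given page usage.
    global under_pressure
    if pages_in_use > TCP_MEM_HIGH:
        return "hard limit: no more pages are allocated"
    if pages_in_use > TCP_MEM_PRESSURE:
        under_pressure = True        # start moderating memory consumption
    elif pages_in_use < TCP_MEM_LOW:
        under_pressure = False       # pressure ends below the first value
    return "moderating memory use" if under_pressure else "no regulation"

print(update_memory_state(12000))   # moderating memory use
print(update_memory_state(9000))    # still moderating (usage has not dropped below 7992)
print(update_memory_state(7000))    # no regulation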

5.4 Tsunami settings

Tsunami allows the user to specify several settings to adjust how Tsunami performs and behaves. In the tests conducted these are the specified settings:

buffer = 20000000

blocksize = 10240

rate = 650000000

error = 10%

slowdown = 25/24

speedup = 5/6

history = 25%

lossless = yes


Different block sizes were tested in order to find out how much of a difference the block size makes, and which block size would be most suitable for the throughput tests. A block size of 10240 bytes turned out to be optimal and is thus used in the tests.

5.5 Network topology

In the tests conducted, a network topology was created in order to simulate various network

conditions including bandwidth, latency, and packet loss.

Figure 5.1: The network topology used during the tests.

As seen in Figure 5.1, a sender and a receiver are connected to each other with a network emulator between the sender's outgoing interface and the receiver's incoming interface. The network emulator allows for configuration of the bandwidth, latency, and packet loss. The bottleneck bandwidth is set to 10, 100, or 1000 Mbps for all connections, depending on which test is conducted.

When testing different TCP algorithms, both the sender and receiver are configured to

use the same algorithm.

5.6 Tools

The tests were conducted using several tools for measurements and network emulations.

This section describes these tools briefly and how they are used in this thesis.

5.6.1 iPerf3

iPerf3 (https://iperf.fr/) is a tool for active measurements of the maximum achievable bandwidth on IP networks. It provides a client and a server, and measurements are taken by sending data


from the client to the server.

iPerf3 is used in this thesis to measure throughput for the different TCP congestion control

algorithms.

5.6.2 TCP Probe

TCP Probe (http://www.linuxfoundation.org/collaborate/workgroups/networking/tcpprobe) is a Linux module that records the state of a TCP connection in response to

incoming packets. Using this tool the congestion window and sequence numbers can be

captured over time.

TCP Probe is used in this thesis to capture the size of the congestion window during TCP

tests.

5.6.3 netem

netem (http://www.linuxfoundation.org/collaborate/workgroups/networking/netem) provides network emulation functionality for testing protocols by emulating the

properties of wide area networks. netem emulates variable latency, loss, duplication, and

re-ordering.

netem is used to emulate the network used in the tests as discussed in Section 5.5.



Chapter 6

Results

In this chapter the results from the conducted tests are presented. The tests were done for 10, 100, and 1000 Mbps. However, the results are only presented for 100 Mbps and 1000 Mbps. For an explanation of why 10 Mbps is omitted, see Section 7.1.3.

6.1 TCP tests

The TCP tests conducted show results for both default and optimized buffer settings, as explained in Section 5.3 under Buffer sizes.

6.1.1 CUBIC

Figures 6.1a and 6.1b show the throughput tests using CUBIC at 100 Mbps for default

buffer sizes and optimized buffer sizes with different latency and packet loss. As seen in

Figure 6.1a the throughput begins at about 95 Mbps and drops at 200 ms latency with

default buffers, and at 250 ms with optimized buffers and continues to drop after that. In

Figure 6.1b similar throughput for both default buffers and optimized buffers is shown.

The throughput drops significantly from 1.5% packet loss and continues to decrease gradually up to 10% packet loss, where the decline seems to stop.

Figures 6.1c and 6.1d show the throughput tests using CUBIC at 1000 Mbps for default buffer sizes and optimized buffer sizes with different latency and packet loss. As seen in Figure 6.1c the throughput drops from around 950 Mbps to around 230 Mbps between 0 ms and 50 ms latency for both buffer settings. After 50 ms latency the optimized buffer throughput drops a little more, to about 180-200 Mbps. The default buffer size throughput drops even further, down to 20 Mbps, where it stays.


Figure 6.1: Graphs showing TCP CUBIC throughput tests with default and optimized buffer sizes. (a) CUBIC 100 Mbps latency tests. (b) CUBIC 100 Mbps packet loss tests. (c) CUBIC 1000 Mbps latency tests. (d) CUBIC 1000 Mbps packet loss tests. Axes: latency [ms] or packet loss [%] against throughput [Mbps].


6.1.2 Westwood+

Figure 6.2: Graphs showing TCP Westwood+ throughput tests with default and optimized buffer sizes. (a) Westwood+ 100 Mbps latency tests. (b) Westwood+ 100 Mbps packet loss tests. (c) Westwood+ 1000 Mbps latency tests. (d) Westwood+ 1000 Mbps packet loss tests. Axes: latency [ms] or packet loss [%] against throughput [Mbps].

Figures 6.2a and 6.2b show the throughput tests using Westwood+ at 100 Mbps for default buffer sizes and optimized buffer sizes with different latency and packet loss. Figure 6.2a shows an almost identical throughput to the CUBIC latency test for 100 Mbps seen in Figure 6.1a. The default buffer size drops in throughput at 200 ms while the optimized buffer drops in throughput at 250 ms.

Figures 6.2c and 6.2d show the throughput tests using Westwood+ at 1000 Mbps for default buffer sizes and optimized buffer sizes with different latency and packet loss. In Figure 6.2c one can see that the throughput for both default and optimized buffer sizes drops aggressively when the latency increases, and at around 100 ms the throughput reaches around 100 Mbps for optimized buffers and around 40 Mbps for default buffer sizes.


6.1.3 CUBIC and Westwood+

Figure 6.3: Graphs showing TCP Westwood+ and TCP CUBIC throughput tests performed with 100 Mbps bandwidth. (a) Packet loss tests with optimized buffer sizes. (b) Packet loss tests with default buffer sizes. (c) Latency tests with optimized buffer sizes. (d) Latency tests with default buffer sizes. Axes: latency [ms] or packet loss [%] against throughput [Mbps].

Figures 6.3a and 6.3b show the throughput tests using Westwood+ and CUBIC at 100 Mbps for different packet loss rates with optimized buffer sizes and default buffer sizes. Both graphs show a similar result, with the throughput starting at 95 Mbps at 0% packet loss and dropping at around 1.5-2% packet loss. The throughput of Westwood+ is higher than that of CUBIC, with the biggest difference around 6% packet loss, where the throughput differs by about 40 Mbps and Westwood+ is ∼188% faster than CUBIC.

Figures 6.3c and 6.3d show the throughput tests using Westwood+ and CUBIC at 100 Mbps with different latency and with optimized buffer sizes and default buffer sizes. Both optimized and default buffer sizes show a similar result. Both graphs show that Westwood+ and CUBIC perform with the same throughput under the same network conditions.

Figures 6.4a and 6.4b show the throughput tests using Westwood+ and CUBIC at 1000 Mbps for different latency with optimized buffer sizes and default buffer sizes. Figure 6.4a shows a big loss of throughput already between 0 and 50 ms of latency.


Figure 6.4: Graphs showing TCP Westwood+ and TCP CUBIC throughput tests with 1000 Mbps bandwidth. (a) Latency tests with optimized buffer sizes. (b) Latency tests with default buffer sizes. (c) Packet loss tests with optimized buffer sizes. (d) Packet loss tests with default buffer sizes. Axes: latency [ms] or packet loss [%] against throughput [Mbps].

At latencies from 50 ms to 500 ms CUBIC gives a throughput starting at 279 Mbps which drops down to 153 Mbps. Westwood+ follows the same curve, but at a lower throughput. Westwood+ drops in throughput even more after 50 ms latency, with a throughput of 100 Mbps at 100 ms latency. At latencies between 50 ms and 500 ms Westwood+ shows a throughput starting at 235 Mbps at 50 ms which drops to 72 Mbps at 500 ms. In Figure 6.4b an aggressive drop of throughput is shown, and at 100 ms latency Westwood+ reaches a throughput of ∼10 Mbps and CUBIC a throughput of ∼20 Mbps. These two throughputs remain all the way up to 500 ms. From the graphs we can see that both Westwood+ and CUBIC perform with a higher throughput when using optimized buffer sizes, and that CUBIC benefits the most.

Figures 6.4c and 6.4d show the throughput difference between Westwood+ and CUBIC at 1000 Mbps with different packet loss, with optimized buffer sizes and default buffer sizes. Both graphs show a difference between Westwood+ and CUBIC. Westwood+ has a higher throughput than CUBIC throughout the test. CUBIC shows a curve which declines faster in the beginning, while Westwood+ shows a more linear decrease of throughput. At 10% packet loss the throughputs of Westwood+ and CUBIC get close to each other, at ∼10 Mbps.


Figure 6.5: Graphs showing throughput test results for Westwood+ and CUBIC run for 200 seconds with 1000 Mbps bandwidth and optimized buffer sizes, for different latency (a) and packet loss (b). Axes: latency [ms] or packet loss [%] against throughput [Mbps].

Figure 6.5 shows throughput tests conducted for 200 seconds instead of 64 seconds, with 1000 Mbps bandwidth and optimized buffer sizes, for different latency and packet loss. These tests were run to check how the results were linked to the duration of the tests. The graphs show that the behaviour of Westwood+ and CUBIC in tests run for 200 seconds is similar to that in tests conducted for 64 seconds.

Figure 6.6: Graphs showing throughput test results for Westwood+ and CUBIC run for 200 seconds with 10 Mbps bandwidth and optimized buffer sizes, for different latency (a) and packet loss (b). Axes: latency [ms] or packet loss [%] against throughput [Mbps].

In Figure 6.6, results from running throughput tests at 10 Mbps with Westwood+ and CUBIC are shown. Figure 6.6a shows that the throughput behaviour is almost identical to the results gathered at 100 Mbps when both transport protocols are affected by different latencies. Figure 6.6b shows that when the bandwidth is 10 Mbps, Westwood+ and CUBIC perform very much alike, with CUBIC slightly outperforming Westwood+ up to 8% packet loss.


6.2 UDP tests

This section shows the results from tests run with Tsunami and UDT. When the Tsunami tests were conducted, a CPU utilization of 100% was observed at every setting other than zero latency and packet loss, with both 100 Mbps and 1000 Mbps. UDT showed a high CPU utilization at 1000 Mbps. This means that the results obtained from the UDP protocols at the test settings with high CPU utilization could be misleading and might not reflect the behaviour that would be seen if the test set-up had a CPU which could cope with the large CPU demands of these protocols.

6.2.1 Tsunami block sizes

Figure 6.7: Graphs showing Tsunami UDP throughput tests using different block sizes (1024, 2048, 5120, 10240, 20480, 30720, and 51200 bytes) for latency and packet loss. Axes: latency [ms] or packet loss [%] against throughput [Mbps].

The graphs in Figure 6.7 show throughput tests using different Tsunami block sizes, with different latency and different packet loss rates. As seen in the latency test, the larger block sizes perform better than the smaller sizes when it comes to latency. In the packet loss test, on the other hand, the opposite can be seen, where larger block sizes perform worse. When viewing both figures, the block size 10240 is among the top performing block sizes for both the latency and the packet loss tests. In the latency test a value for 1024 bytes at 500 ms is missing. This is because Tsunami could not handle such a small block size with that high latency. As explained in Section 4.3.2 the Tsunami client regularly sends error rate information to the server. This is also used as a heartbeat message, indicating whether the client is alive or not. If the server has not received the heartbeat from the client for 15 consecutive seconds (by default) the connection is terminated, and this is what happened while performing the tests with the stated settings. It is unclear whether the client cannot send the heartbeat or the server cannot process it.


Figure 6.8: Graphs showing Tsunami and UDT throughput tests at 100 Mbps with different latency (a) and packet loss (b). Axes: latency [ms] or packet loss [%] against throughput [Mbps].

6.2.2 Tsunami and UDT

Figure 6.8 shows throughput tests of Tsunami and UDT at 100 Mbps with different latencies and packet loss rates. Figure 6.8a shows that the throughputs of Tsunami and UDT are close to each other, with UDT showing a higher throughput especially between 0-100 ms latency and between 300-500 ms. Figure 6.8b shows that Tsunami performs better than UDT in the packet loss tests. The UDT throughput drops drastically already at 0.01% packet loss, down to 65 Mbps. At 1% packet loss the UDT throughput is down to 8 Mbps. Tsunami shows a higher throughput than UDT, with a throughput of 64 Mbps at 5% packet loss. Between 5% and 7% the throughput drops more rapidly, down to 18 Mbps.

Figure 6.9: Graphs showing Tsunami and UDT throughput tests at 1000 Mbps with different latency (a) and packet loss (b). Axes: latency [ms] or packet loss [%] against throughput [Mbps].

Figure 6.9a shows a similar result as the test using 100 Mbps bandwidth. Tsunami and UDT perform similarly, but UDT has around 46 Mbps higher throughput between 100 ms and 500 ms, which is ∼170% higher than Tsunami. UDT's behaviour under packet loss at 1000 Mbps is similar to the results at 100 Mbps, with a rapidly decreasing throughput at low packet loss rates, see Figure 6.9b. Tsunami has a higher throughput, not dropping as


rapidly as UDT.


Chapter 7

Analysis and Discussion

In this chapter the background theory and results are analysed in order to draw conclusions about which variables affect data transport on networks and how they do so in different environments.

7.1 Analysis

From the results several different behaviours can be observed for the different transport

protocols in the different network environments. This section explains these behaviours.

7.1.1 Westwood+ and CUBIC

This section analyses how and why latency and packet loss affect the Westwood+ and CUBIC transport protocols.

7.1.1.1 Latency

The results show a particular characteristic in how the throughput of TCP is affected by latency. The throughput rapidly decreases as the latency increases, both with default buffer sizes and with optimized buffer sizes.

As explained in Section 4.1, the congestion window (cwnd) of core TCP is increased upon every ACK reception. With higher latency the ACKs will be received more slowly, the congestion window will grow more slowly, and hence the throughput will be lower. In other words, the sender will spend more time idle, waiting to send new data, and therefore the data transfer suffers a decrease in throughput.
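The effect can be made concrete with the common approximation that a window-limited TCP flow delivers roughly one congestion window of data per RTT. A minimal illustration, assuming a 1448-byte segment payload (an assumption made for the example, not a measured value):

# Rough throughput of a window-limited TCP flow: about one cwnd of data per RTT.
MSS = 1448  # assumed segment payload in bytes

def throughput_mbps(cwnd_segments: int, rtt_ms: float) -> float:
    bytes_per_rtt = cwnd_segments * MSS
    return bytes_per_rtt * 8 / (rtt_ms / 1000.0) / 1_000_000

# The same window delivers far less throughput as the RTT grows.
for rtt in (10, 50, 200, 500):
    print(rtt, round(throughput_mbps(1000, rtt), 1))  # 1158.4, 231.7, 57.9, 23.2 Mbps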


In Figure 6.4a, a difference can be seen in the throughput between Westwood+ and CU-

BIC. CUBIC performs with an average of more than double the throughput in comparison

to Westwood+ from 50ms up to 500ms. This result can be explained by looking at how

the congestion window varies between the two algorithms.

Figure 7.1: Graphs showing Westwood+ (a) and CUBIC (b) congestion window (cwnd) and slow start threshold (ssthresh) during a latency test with optimized buffer sizes, 1000 Mbps bandwidth, and 350 ms latency. Axes: time [seconds] against segments (cwnd, ssthresh).

In Figure 7.1a you can see the size of the cwnd of Westwood+ increase exponentially during the slow start period and then drastically drop at around 8 seconds. This drop happens due to congestion. After the congestion event the cwnd and ssthresh are set to 1300, which is the value calculated for cwnd and ssthresh to fit the estimated bandwidth. After 13 seconds the cwnd stays stable, increasing from 2853 at 13 seconds to 2954 at 56 seconds, with 8 congestion events during that time. During these flat episodes between the congestion events the cwnd increases linearly by 1 every RTT, that is, every 350 ms.

In Figure 7.1b the cwnd and ssthresh are shown during a test of CUBIC with 350 ms latency and 1000 Mbps bandwidth. The figure shows the slow start period in the same way as for Westwood+, but after the slow start the two algorithms start to differ. CUBIC experiences a congestion event and drops the ssthresh and cwnd down to cwnd × (1 − β). In the implementation tested (CUBIC Linux v2.3) the multiplicative window decrease constant is β ≈ 0.3.
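After such a decrease, CUBIC grows the window again along the cubic curve described in [7]. A minimal sketch of that growth curve, using β ≈ 0.3 from the tested implementation and the scaling constant C = 0.4 from the CUBIC paper (assumed here, not taken from the kernel configuration):

# CUBIC window growth after a loss event, following Ha, Rhee, and Xu [7]:
#   W(t) = C * (t - K)^3 + W_max, where K = cbrt(W_max * beta / C)
C = 0.4      # scaling constant from the CUBIC paper (assumed default)
BETA = 0.3   # multiplicative decrease used by the tested Linux implementation

def cubic_window(t: float, w_max: float) -> float:
    k = (w_max * BETA / C) ** (1.0 / 3.0)   # time needed to climb back to W_max
    return C * (t - k) ** 3 + w_max

# The window rises quickly towards W_max, plateaus around it, then probes beyond it.
for t in (0, 5, 10, 15, 20):
    print(t, round(cubic_window(t, w_max=5000)))   # 3500, 4532, 4932, 5000, 5036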

Looking at these figures we can see a part of the explanation of why our results show that CUBIC performs with a higher throughput than Westwood+ at high latency. Figure 7.1b shows that the congestion window of CUBIC decreases gradually after the slow start, while Figure 7.1a shows that the Westwood+ cwnd decreases immediately down to a stable size. During the gradual decrease CUBIC has a higher throughput than Westwood+, and this increases the total throughput measured and plotted in Figure 6.4a. Because of this, additional tests were conducted for 200 seconds instead of 64 seconds to see how the total throughput is affected by this factor.

The results of these tests can be seen in Figure 6.5. By looking at the throughput measured

we see that the behavior is similar to tests run for 64 seconds but with different numbers.

CUBIC performs better on average than Westwood+ during latency tests and Westwood+


performs better than CUBIC during packet loss tests.

Figure 7.2: Graphs showing a closer look at Westwood+ (a) and CUBIC (b) congestion window (cwnd) and slow start threshold (ssthresh) during a latency test with optimized buffer sizes, 1000 Mbps bandwidth, and 350 ms latency. Axes: time [seconds] against segments (cwnd, ssthresh).

Looking closer at the cwnd and ssthresh during congestion avoidance for both Westwood+ and CUBIC, it is clear that they differ from each other. In Figure 7.2a Westwood+'s linear increase during congestion avoidance is shown. In Figure 7.2b CUBIC's cubic growth mode with cwnd < Wmax is clearly visible. CUBIC's fast convergence is also shown: when a congestion event occurs and cwnd has not reached Wmax, Wmax is reduced even more, making the function plateau earlier in the next congestion avoidance period than it would have if cwnd had reached Wmax.

From analysing the performance and behaviour of the two TCP algorithms Westwood+ and CUBIC in an environment with varying latency, it is clear that latency severely affects throughput. From comparing Westwood+ and CUBIC we can draw the conclusion that an important aspect for improving the throughput in environments with high latency and high BDP is the rate at which the congestion window is increased and decreased. Westwood+ uses an additive linear increase of the cwnd size, where it is increased by 1 every RTT. As the latency increases, the throughput of Westwood+ suffers since the window grows more slowly. During a short transfer with high latency the cwnd might not have time to reach the size which corresponds to the available and potential bandwidth. CUBIC on the other hand increases the size of cwnd more aggressively. The CUBIC algorithm increases the size of cwnd towards Wmax rapidly during the steady state behaviour, which allows CUBIC to quickly approach the recently measured maximum throughput. After that, CUBIC quickly increases the cwnd size again during the max probing phase until a loss is detected. This approach allows CUBIC to increase the size of cwnd faster than Westwood+, and therefore CUBIC is not as limited by the RTT.
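To see how limiting a one-segment-per-RTT increase is, consider the 1000 Mbps, 350 ms case from Figure 7.1. A rough calculation, assuming a 1448-byte segment payload and starting from the cwnd of roughly 2900 segments where Westwood+ settles (both numbers are taken for illustration only):

# How long a purely linear increase (1 segment per RTT) would need to fill the pipe.
MSS = 1448                          # assumed segment payload in bytes
RTT = 0.350                         # round-trip time in seconds
BANDWIDTH = 1000 * 1_000_000 / 8    # 1000 Mbps expressed in bytes per second

pipe_segments = BANDWIDTH * RTT / MSS   # cwnd needed to fill the path, roughly 30,200 segments
start_segments = 2900                   # roughly where Westwood+ settles in Figure 7.1a

rtts_needed = pipe_segments - start_segments
print(round(rtts_needed))                   # about 27,300 RTTs
print(round(rtts_needed * RTT / 3600, 1))   # about 2.7 hours of growth at 1 segment per RTT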

7.1.1.2 Packet loss

When congestion is detected, Westwood+ and CUBIC use different ways of adjusting the size of cwnd in order to adapt to the network limit. Westwood+ sets cwnd


to the size calculated to fit the estimated bandwidth. This makes Westwood+ keep a stable cwnd with minimal fluctuations. CUBIC on the other hand decreases cwnd to cwnd × 0.7 (CUBIC v2.3). This becomes a problem when the packet loss rate increases. With packet loss occurring often, CUBIC frequently lowers the size of cwnd, and even more so with CUBIC's fast convergence feature. This of course lowers the throughput. Westwood+, which does not blindly lower the size of cwnd in a multiplicative decreasing manner, does not suffer from this problem.
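The two reactions to a loss event can be summarised as in the sketch below. The Westwood+ rule follows the bandwidth estimation idea described in [9]; the variable names and the example numbers are chosen for illustration and are not taken from the kernel source:

# Reaction to a loss event: Westwood+ versus CUBIC (simplified sketch).
def westwood_after_loss(bw_estimate_bps: float, rtt_min_s: float, mss: int) -> float:
    # cwnd/ssthresh are set to match the estimated bandwidth-delay product,
    # so the window only shrinks as far as the estimated capacity allows.
    return bw_estimate_bps * rtt_min_s / (8 * mss)

def cubic_after_loss(cwnd_segments: float, beta: float = 0.3) -> float:
    # Multiplicative decrease: the window is cut regardless of why the loss happened.
    return cwnd_segments * (1 - beta)

# Illustrative numbers: a 1 Gbps estimate over a 350 ms path versus a 330-segment window.
print(round(westwood_after_loss(1_000_000_000, 0.350, 1448)))   # about 30,214 segments
print(round(cubic_after_loss(330), 1))                          # 231.0 segments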

Figure 7.3: Graphs showing Westwood+ (a) and CUBIC (b) congestion window (cwnd) and slow start threshold (ssthresh) during a packet loss test with optimized buffer sizes, 1000 Mbps bandwidth, and 1% packet loss. Axes: time [seconds] against segments (cwnd, ssthresh).

Figure 7.3 shows the size of cwnd and ssthresh during tests of Westwood+ and CUBIC with 1% packet loss, optimized buffer sizes, and 1000 Mbps bandwidth. The cwnd of Westwood+ is very stable. The cwnd of CUBIC is, by contrast, very unstable and fluctuates over a wide range of sizes.

Figure 7.4: Graphs showing a closer look at Westwood+ (a) and CUBIC (b) congestion window (cwnd) and slow start threshold (ssthresh) during a packet loss test with optimized buffer sizes, 1000 Mbps bandwidth, and 1% packet loss. Axes: time [seconds] against segments (cwnd, ssthresh).

Looking closer at the size of cwnd and ssthresh in Figure 7.4 you can clearly see the

difference between Westwood+ and CUBIC. The size of cwnd of Westwood+ is stable

around 330 while the size of cwnd of CUBIC fluctuates between 5 and 180.

The congestion window of CUBIC does reach sizes around 300 twice during the test,

in the beginning and at 47 seconds, see Figure 7.3b, which is close to the size which


Westwood+ constantly holds. But since CUBIC uses a multiplicative decrease of cwnd, the window quickly shrinks again, which lowers the throughput. This is reflected in the measured throughput of Westwood+ and CUBIC, see Figure 6.4c, where Westwood+ performs with a higher throughput under packet loss.

7.1.1.3 Buffer size

In the results presented in Section 6.1, a difference in transport protocol performance can be seen between tests conducted with default buffer sizes and with optimized buffer sizes. In every test the performance with optimized buffers was equal to or better than with the default values set upon a fresh install of Ubuntu Server. These results strengthen what can be expected from a theoretical point of view, where the buffer size is very important as it has to match the BDP in order for TCP to theoretically be able to fully utilize the available bandwidth.

The difference between optimized and default buffer sizes was bigger with a bandwidth of 1000 Mbps than with 100 Mbps. With 100 Mbps and 500 ms the default buffer size would give a theoretical maximum throughput of 68.72 Mbps, which is 68.72% of the maximum bandwidth. With the default buffer size at 1000 Mbps the theoretical maximum throughput is the same, which corresponds to 6.872% of the maximum bandwidth. Hence the potential increase is much bigger from 6.872% up to 100% than from 68.72% up to 100%, and therefore the biggest improvement is achieved at 1000 Mbps.
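The limit follows from the relation maximum throughput ≈ buffer size / RTT, which is independent of the link rate; the same absolute cap is therefore a much smaller fraction of a 1000 Mbps link than of a 100 Mbps link. A small sketch of that relation (the buffer value of 4,295,000 bytes is chosen only so that the sketch reproduces the 68.72 Mbps figure above, and is an assumption):

# Maximum throughput of a window-limited connection: buffer size divided by RTT.
def max_throughput_mbps(buffer_bytes: float, rtt_ms: float) -> float:
    return buffer_bytes * 8 / (rtt_ms / 1000.0) / 1_000_000

BUFFER = 4_295_000   # assumed effective default buffer in bytes (illustrative)
cap = max_throughput_mbps(BUFFER, 500)
print(round(cap, 2))                 # 68.72 Mbps, regardless of the link rate
print(round(100 * cap / 100, 2))     # 68.72% of a 100 Mbps link
print(round(100 * cap / 1000, 3))    # 6.872% of a 1000 Mbps link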

7.1.2 Tsunami and UDT

This section analyses how and why latency and packet loss affect the Tsunami and UDT transport protocols. In 2011 an article [14] was published which evaluates UDP-based high-speed transport protocols, among them UDT and Tsunami. The results in this article differ from the results we obtained. The tests were performed over a 1000 Mbps link, transferring a 2 Gb file and testing both latency and packet loss. In their results, Tsunami performs with a higher throughput during both latency and packet loss tests.

In the book "Big Data: Storage, Sharing, and Security" a chapter [13] is written by Se-young Yu, Nevil Brownlee, and Aniket Mahanti in which results from tests made on UDT and Tsunami with 10 Gb/s bandwidth are presented. The results show that UDT performs with a higher throughput than Tsunami up until around 150 ms latency, where Tsunami starts performing with a higher throughput. They also show that the Tsunami throughput did not change much with increasing latencies.

Neither of these results can validate the results obtained from the tests made in this thesis, which could mean that the performance of Tsunami and UDT is highly affected by the hardware setup used to perform the tests. While running tests with Tsunami at 100 Mbps and 1000 Mbps bandwidth and UDT at 1000 Mbps bandwidth, the CPU utilization was


at 100% in every test case except at zero latency and packet loss. This shows that these UDP transport protocols are sensitive to the hardware used.

7.1.2.1 Latency

From the results of the UDP tests conducted, see Section 6.2, we can see that UDT performs better than Tsunami when exposed to latency, with both 100 Mbps and 1000 Mbps bandwidth. The two UDP transport protocols follow the same behaviour, but UDT shows a higher throughput than Tsunami. This behaviour cannot be explained within the time limits of this thesis. More tests have to be conducted in order to pin down the reason for the obtained results.

7.1.2.2 Packet loss

A clear difference can be seen between Tsunami and UDT when affected by packet loss. Figure 6.8b shows that Tsunami performs with a much greater throughput than UDT.

UDT uses a custom congestion control algorithm, explained in Section 4.3.1, which reacts to congestion (packet loss) by decreasing the sending rate in order to avoid network congestion and to reach fairness in the network. The results show that UDT decreases the sending rate drastically; more specifically, UDT decreases the sending rate for each congestion period by a random factor between 1/8 and 1/2. This means that the sending rate can be heavily decreased when packet loss occurs, and as the packet loss rate increases the sending rate will be decreased even more. Tsunami on the other hand is not affected by packet loss as

much as UDT according to the test results. Tsunami uses a rate-based congestion control

where the rate is decreased by increasing the inter-packet delay and increased by decreasing

the inter-packet delay. The algorithm is explained in Algorithm 2. This algorithm means

that if an error rate of 10.001% was received by the server, which is bigger than the set

maximum error rate of 10%, the inter-packet delay would be increased by the following:

factor1 = 25/24 − 1

factor2 = (1 + 10001 − 10000) / (100000 − 10000)

ipd = ipd ∗ (1 + (factor1 ∗ factor2))

ipd = ipd ∗ 1.000000925

(7.1)

This increase of the inter-packet delay, and the corresponding decrease of the sending rate, is extremely small in comparison to UDT's decrease of between 1/8 and 1/2, and allows Tsunami to perform better in an environment rich in packet loss.
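A minimal sketch of this rate adjustment, built directly from the worked example in Equation 7.1 (the function name and the representation of the error rate in thousandths of a percent are assumptions made for illustration):

# Tsunami-style inter-packet delay (ipd) adjustment, following Equation 7.1.
SLOWDOWN_NUM, SLOWDOWN_DEN = 25, 24   # the "slowdown = 25/24" setting from Section 5.4
ERROR_MAX = 10000                     # 10% expressed in thousandths of a percent
ERROR_CEILING = 100000                # 100% in the same unit

def adjust_ipd(ipd: float, error_rate: int) -> float:
    # Increase the inter-packet delay when the reported error rate exceeds the maximum.
    if error_rate <= ERROR_MAX:
        return ipd                    # no slowdown needed
    factor1 = SLOWDOWN_NUM / SLOWDOWN_DEN - 1
    factor2 = (1 + error_rate - ERROR_MAX) / (ERROR_CEILING - ERROR_MAX)
    return ipd * (1 + factor1 * factor2)

print(adjust_ipd(100.0, 10001))   # 100.0000925..., i.e. scaled by about 1.000000925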


7.1.3 10 Mbps, 100 Mbps, and 1000 Mbps

Tests have been conducted for three different bottleneck bandwidths: 10 Mbps, 100 Mbps,

and 1000 Mbps.

By looking at the results and comparing them, one can see that lower bandwidths mean less extreme relative decreases in throughput when the TCP protocols are affected by latency and packet loss. For example, when comparing the packet loss results for TCP at 10, 100, and 1000 Mbps it is clear that the protocols were least affected by packet loss at 10 Mbps and most affected at 1000 Mbps, see Figure 7.5. This comparison is not of the absolute throughput but of the throughput relative to the maximum possible for each test.

Figure 7.5: Graphs showing Westwood+ (a) and CUBIC (b) utilization of the available bandwidth for 10, 100, and 1000 Mbps with optimized buffer sizes during packet loss tests. Axes: packet loss [%] against utilization [%].

The UDP protocols were only tested for 100 Mbps and 1000 Mbps. They show the same behaviour as the TCP protocols: the utilization for 100 Mbps is higher than for 1000 Mbps.

The reason for this behaviour could be that the hardware setup used in the tests affects the speed at which packets are sent, received, and processed, and therefore affects the achieved throughput. When the protocols send at a faster rate, the CPU utilization increases as well, which could affect the throughput.

7.1.4 TCP and UDP

The tests conducted on the TCP and UDP protocols cannot be compared to each other directly from the test results, because the UDP protocols required more CPU power than was available, which the TCP protocols did not.


7.2 Discussion

In this section the research methodology, results, and problems which arose during the

project are discussed.

7.2.1 Fairness

Something that is not part of this thesis is the fairness aspect of congestion control. TCP protocols in particular try to achieve fairness between several flows of the same kind. Since the thesis does not consider this aspect, it is not a recipe for the best transport protocol. It only focuses on throughput, what affects it, and how it is affected during one single flow. With several flows, the throughput is ideally shared equally among the flows. The analysis in this thesis is based on one flow only.

7.2.2 Problems during the thesis

During the work on this thesis a lot of problems were encountered. Below, these problems are presented, together with explanations of why they happened and how they were solved, if they were solved.

7.2.2.1 Changes to the aim of the thesis

In the beginning of the thesis the aim was to perform tests on different TCP and UDP protocols in order to compare them and see if UDP would be a good choice of transport protocol for VidiXplore to upload and download media to/from the cloud. After the tests were done an implementation would be built. This implementation was at first going to be an integration of a UDP protocol into VidiXplore. Tests of the new integration would be made, and from the test results a conclusion would be drawn on whether UDP would be suitable as a replacement for the transport protocol currently in use. As the work on the thesis progressed, and since the initial tests of the different protocols took much longer than anticipated, it became clear that there would not be enough time for an implementation. The goal of the thesis switched at that point to an experimental analysis of the different protocols in order to understand them and figure out what affects their performance and how they are affected by latency and packet loss.

Initially the focus was planned to be on high-speed transfer, that is, bandwidths equal to or greater than 1000 Mbps. Tests were therefore initially planned to be performed with a 1000 Mbps bandwidth bottleneck, but after some further testing and discussion with the external supervisor it was found that 100 Mbps was a more common bandwidth used with their system. Therefore 10, 100, and 1000 Mbps were tested.


7.2.2.2 Testing

While testing, several problems were encountered, mostly due to me not understanding how complicated network testing really is. I did not read enough in advance about network testing and the different methodologies and tools for it, which forced me to perform new tests several times.

7.2.2.2.1 Bandwidth usage

In the beginning, tests were planned to be performed only with a 1000 Mbps bandwidth bottleneck. Out of curiosity, tests were also conducted for 100 Mbps, and noticeable changes in performance were discovered. Therefore tests had to be made for 100 Mbps as well in order to discuss what these changes were. 10 Mbps was then also tested to get a full overview of how the throughput was affected by the available bandwidth.

7.2.2.2.2 High CPU utilisation

As stated in the thesis, there were problems with the CPU utilization while running the UDP tests. The utilization was at 100% for most of the tests, which was discovered while running them. Attempts were made to increase the available CPU power in the virtual machines, but this did not work. This resulted in the tests being run several more times to get some kind of conclusive result. An attempt was also made to measure the CPU utilization during the tests in order to relate the throughput results to the CPU utilization. The time resources available for the thesis project were not enough to complete these tests, and they are listed in Section 8.3 as future work. CPU utilization has already been tested elsewhere and those results are presented in [14].

7.2.2.2.3 TCP tuning

Many times during the testing phase of the thesis, new facts about different TCP tuning settings emerged, which resulted in several re-tests with the new settings. The most important setting was the buffer size, which is documented to have a direct impact on the throughput of TCP. Many more of the settings discussed in Section 5.3 were discovered over time, which also resulted in re-tests. This has been the largest consumer of time during this project. Had enough research been done prior to the testing phase on different testing methodologies and existing results from TCP testing, the re-tests might have been avoided, saving a lot of time.

7.2.2.3 Virtual machines

VirtualBox allows the user to specify different network settings for the virtual machines. Some of the settings include the network hardware and the network mode. The emulated network hardware was shown to have an impact on the results; some settings did not produce the same results as others. This was not considered in the beginning and was noticed


when performing tests. In order to be able to ssh into the virtual machines, the mode Host Only was first chosen as network mode. This network mode was shown to affect the results when the network was used on the host machine while tests were running. This is obviously not optimal, and therefore Internal Network was finally chosen to minimize the effect of the host machine on the guest machines.

During the tests it was noticed that the CPU of the guest machines was affected by activities performed on the host machine. In retrospect this is obvious, but it was not thought of when the tests were initiated.

All of these problems led to several re-tests. A solution would be to use dedicated software for testing. To eliminate the virtual machine problems, the virtual machines could be removed and the tests could be run between two dedicated stand-alone hardware stations.

7.2.2.3.1 Omit the slow start period

When the tests were made, the first 4 seconds of the transfer were not included in the test results in order to remove the slow start period of the transfer. As seen in Figure 7.1, the slow start period is actually longer than 4 seconds, which means that part of the slow start period is included in the test results. The slow start period is an important factor in any transfer and the results are therefore not considered faulty. The slow start behaviour is also the same for both TCP protocols.

7.2.2.3.2 Test duration

When the tests were completed, the results showed that the duration for which the tests were conducted, 60 seconds, could be too short for optimal results. The captures of the congestion window increases and decreases showed that the values had not stabilized completely after 60 seconds. Tests were made with optimized buffer sizes for 200 seconds and these tests showed similar results. Even though the values had not fully stabilized, they had stabilized enough for me to consider the results valid. If the tests were to be made again, the duration of all the test runs should be increased to 200 seconds.

7.2.2.3.3 Disk read and write

In the middle of the testing phase, information gathered from various sources pointed to the fact that disk reads and writes could limit the transfer speed of UDT and Tsunami, since these protocols were measured by sending a file. After this was discovered, disk reads and writes were disabled in both implementations and the tests were re-run.


Chapter 8

Conclusion

8.1 Summary of Thesis Achievements

The goal of the thesis was to explore how data can be transferred fast and reliably over the Internet under conditions suffering from latency and packet loss. What is it that affects the transfer speed, more precisely the throughput, on the Internet, and how do existing solutions handle this problem? The experiments and analysis conducted in this thesis revealed that several factors have an impact on how network transfer performs. Latency has the biggest impact on throughput on the Internet, both when transferring data with TCP and with UDP. Latency is a problem whose root cause is the distance between the two parties exchanging data, and it is best mitigated by minimizing that distance. In every implementation with a sending rate adjusting mechanism, the latency is the factor which limits how fast the sending rate is increased. An implementation which allows the congestion window to increase quickly will be less affected by latency. Packet loss affects the throughput most in implementations where the sending rate is decreased aggressively upon detection of loss. Packet loss is mistaken for a congestion problem and the sending rate is decreased to avoid congestion and congestion collapse. Solutions which do not blindly decrease the sending rate perform better than solutions that do. Packet loss which originates from transmission errors or wireless links has to be separated from the packet loss that is caused by congestion. A lost packet that is not caused by congestion does not have to lower the sending rate.

Network testing is complicated and there is a huge number of variables affecting the performance of network transfer. When attempting to test network transport protocols it is important to read up on what variables exist and how they affect the performance and, in the end, the results of the tests.


8.2 Applications

The results from this thesis can guide anyone who seeks to implement a new transport protocol. They provide guidelines on what to consider and which approach to take.

8.3 Future Work

The research presented in this thesis shows that there are several things that can be done to improve the results in order to further understand the mechanisms of network transfer.

Firstly, new tests should be conducted on the UDP protocols in a better testing environment. The results gathered are not reliable enough, due to the fact that the testing environment could not handle high-speed protocols. More metrics should be gathered while performing tests on Tsunami and UDT in order to analyse the protocols in more detail, especially the variables which control the sending rate of the protocols.

Secondly, the TCP tests should be complemented with tests performed in a more reliable and static testing environment to minimize randomness. The tests performed show results closer to the real world, which is not necessarily optimal for testing how different congestion control mechanisms behave. Additional tests on Westwood+ and CUBIC should be made with a focus on differentiating packet loss caused by congestion from packet loss with other causes, such as transmission errors or WiFi links. A suggestion for a testing tool is Network Simulator (NS) (http://www.isi.edu/nsnam/ns/). This environment allows more control over the testing environment.

Finally, a new implementation of a transport protocol should be done and tested to verify the findings in the thesis.


Appendices


Appendix A

Network Settings

net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_adv_win_scale = 1
net.ipv4.tcp_allowed_congestion_control = cubic reno
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_autocorking = 1
net.ipv4.tcp_available_congestion_control = cubic reno
net.ipv4.tcp_base_mss = 1024
net.ipv4.tcp_challenge_ack_limit = 100
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_early_retrans = 3
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_ecn_fallback = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_fastopen = 1
net.ipv4.tcp_fastopen_key = 00000000-00000000-00000000-00000000
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_frto = 2
net.ipv4.tcp_fwmark_accept = 0
net.ipv4.tcp_invalid_ratelimit = 500
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_limit_output_bytes = 262144
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_max_orphans = 4096
net.ipv4.tcp_max_reordering = 300
net.ipv4.tcp_max_syn_backlog = 128
net.ipv4.tcp_max_tw_buckets = 4096


net.ipv4.tcp_mem = 7992 10656 15984
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_notsent_lowat = -1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_probe_interval = 600
net.ipv4.tcp_probe_threshold = 8
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_syn_retries = 6
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_thin_dupack = 0
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_workaround_signed_windows = 0
net.core.bpf_jit_enable = 0
net.core.busy_poll = 0
net.core.busy_read = 0
net.core.default_qdisc = pfifo_fast
net.core.dev_weight = 64
net.core.flow_limit_cpu_bitmap = 0
net.core.flow_limit_table_len = 4096
net.core.message_burst = 10
net.core.message_cost = 5
net.core.netdev_budget = 300
net.core.netdev_max_backlog = 1000
net.core.netdev_tstamp_prequeue = 1
net.core.optmem_max = 20480


net.core.rmem_default = 212992
net.core.rps_sock_flow_entries = 0
net.core.somaxconn = 128
net.core.tstamp_allow_data = 1
net.core.warnings = 0
net.core.wmem_default = 212992
net.core.xfrm_acq_expires = 30
net.core.xfrm_aevent_etime = 10
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_larval_drop = 1


Bibliography

[1] Sourceforge tsunami udp protocol, http://tsunami-udp.sourceforge.net/, Ac-

cessed: 2016-03-23.

[2] M. Allman, V. Paxson, and W. Stevens, Tcp congestion control, RFC 2581, April

1999, http://www.rfc-editor.org/rfc/rfc2581.txt, Accessed: 2016-08-11.

[3] E. Altman, C. Barakat, E. Laborde, P. Brown, and D. Collange, Fairness analysis of

tcp/ip, (2000), In proceedings of IEEE Conference on Decision and Control (CDC).

[4] L.A. Grieco and S. Mascolo, Westwood tcp and easy red to improve fairness in high speed networks, In proceedings of the IFIP/IEEE Seventh International Workshop on Protocols For High-Speed Networks (2002), pp. 130–146.

[5] Y. Gu and Robert L. Grossman, Udt: Udp-based data transfer for high-speed wide

area networks, Computer Networks 51 (2007), pp. 1777–1799.

[6] Y. Gu, X. Hong, and Robert L. Grossman, An analysis of aimd algorithms with

decreasing increases, (2004).

[7] S. Ha, I. Rhee, and L. Xu, Cubic: A new tcp-friendly high-speed tcp variant, SIGOPS

Oper. Syst. Rev. 42 (2008), no. 5, pp. 64–74.

[8] S. Q. Li and C. Hwang, Link capacity allocation and network control by filtered input

rate in high speed networks, IEEE/ACM Transactions on Networking 3 (1995), no. 1,

pp. 744–750.

[9] S. Mascolo, C. Casetti, M. Gerla, M.Y. Sanadidi, and R. Wang, Tcp westwood: Bandwidth estimation for enhanced transport over wireless links, In proceedings of ACM Mobicom 2001 (2001).

[10] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, Tcp selective acknowledgment

options, RFC 2018, October 1996, http://www.rfc-editor.org/rfc/rfc2018.txt,

Accessed: 2016-08-11.

[11] J. Postel, User datagram protocol, STD 6, August 1980, http://www.rfc-editor.org/rfc/rfc768.txt, Accessed: 2016-08-11.


[12] W. Richard Stevens, Tcp slow start, congestion avoidance, fast retransmit, and fast recovery algorithms, RFC 2001, January 1997, http://www.rfc-editor.org/rfc/rfc2001.txt, Accessed: 2016-08-11.

[13] S. Yu, N. Brownlee, and A. Mahanti, Performance evaluation of protocols for big data transfers, "Big Data: Storage, Sharing and Security" (F. Hu, ed.), Taylor & Francis Group, 2016, pp. 43–96.

[14] Z. Yue, Y. Ren, and J. Li, Performance evaluation of udp-based high-speed transport protocols, In proceedings of the IEEE 2nd International Conference on Software Engineering and Service Science (2011).