João Pedro Fontinha Poupino - ULisboa


End-to-end congestion control algorithms for the Internet

João Pedro Fontinha Poupino

Dissertation for obtaining the Master's Degree in
Engenharia Informática e Computadores

Jury

President: Professora Doutora Maria dos Remédios Vaz Pereira Lopes Cravo
Supervisor: Professor Doutor António Carlos Cristovão Matias de Almeida
Members: Professor Doutor Rui Jorge Morais Tomaz Valadas

October 2010


Resumo

The Transmission Control Protocol (TCP) has played a decisive role in the success of the Internet. In particular, TCP congestion control has been responsible for ensuring the stability of the Internet, whose traffic has grown at an exponential rate.

The classical perspective on congestion control in the Internet is the so-called end-to-end perspective, based on the detection of packet losses and delay variations. However, next-generation networks have made evident the limitations of TCP's conservative congestion control design. Meanwhile, mechanisms based on the router-assisted perspective have been proposed, in which sources may receive explicit information from routers, promoting better performance and solving many existing challenges. However, this strategy is still under discussion, since its deployment in the Internet presents several challenges, both technical and political.

More recently, new variants of end-to-end algorithms have been proposed for deployment on high-speed links. Practically all the new proposals allow a more efficient use of the network. However, they may create new problems, such as reduced fairness and increased load on the networks.

In this work, we propose and study new end-to-end mechanisms, incorporating them into one of the new TCP variants. From the results obtained, at least for the simulated network scenarios, we conclude that it is possible to have adequate end-to-end congestion control mechanisms. At the same time, with these mechanisms, we transfer the complexity to the edges of the network and maintain compatibility with existing TCP versions.


Palavras-chave

TCP

congestion control

end-to-end control

CUBIC

delay-based control


Abstract

The Transmission Control Protocol (TCP) has played a major role in the success of the Internet. In particular, its congestion control has been critical, ensuring stability in an exponentially growing Internet.

The classical approach to Internet congestion control is called the end-to-end approach. TCP sources adapt their congestion window using network signals, e.g., packet losses and delay variations. However, the advent of next-generation networks has shown the limitations of TCP's conservative congestion control design, creating several challenges such as reduced link efficiency in heterogeneous networks. At the same time, mechanisms based on the router-assisted approach have also been proposed. Sources may use explicit information from routers in the network, achieving superior performance and solving many existing issues. Nonetheless, this strategy is still under discussion, as its implementation in the current Internet is challenging due to both technical and political aspects.

More recently, researchers have proposed several high-speed end-to-end congestion control schemes. Virtually all proposals enable good efficiency, but they can create new problems, such as reduced fairness and increased network strain.

In this work we propose and study new end-to-end mechanisms, incorporating them in a popular loss-based, high-speed, end-to-end algorithm. Through simulation we obtained encouraging results, which means that, at least for the simulated network environments, it is possible to have reasonable end-to-end congestion control algorithms. At the same time, with these mechanisms, the complexity is transferred to the edge of the network and compatibility with existing TCP versions is maintained.


Keywords

TCP

congestion control

end-to-end control

CUBIC

delay-based control


Contents

1 Introduction
1.1 Current TCP challenges
1.2 Current solutions
1.3 Contributions of this work
1.4 Thesis structure

2 Related Work
2.1 Loss-based congestion control algorithms
2.2 Delay-based congestion control algorithms

3 Proposed mechanisms
3.1 Introduction
3.2 Design challenges and considerations
3.3 Supporting procedures and mechanisms
3.4 Max-delay CUBIC
3.4.1 The core algorithm
3.4.2 Max-delay CUBIC with NewReno
3.4.3 Max-delay CUBIC with improved short-term fairness
3.5 Min-delay CUBIC

4 Simulation results
4.1 Methodology
4.2 Single flow efficiency
4.3 RTT fairness improvement
4.4 Short-term behavior
4.4.1 Results for CUBIC
4.4.2 Results for Max-delay CUBIC
4.4.3 Results for Min-delay CUBIC
4.5 Dynamic scenario
4.5.1 Results for CUBIC
4.5.2 Results for Max-delay CUBIC
4.5.3 Results for Min-delay CUBIC
4.6 Scalability
4.6.1 Efficiency
4.6.2 Fairness
4.6.3 Queue occupancy and packet loss

5 Conclusions and future work

A OTcl script example

B Example benchmark scripts
B.1 Jain's Fairness Index calculation
B.2 Link Efficiency calculation

Bibliography


List of Figures

1.1 TCP end-to-end congestion control architecture

2.1 Reno's congestion window. Based on [74].
2.2 HighSpeed TCP's congestion window
2.3 BIC's and CUBIC's congestion window dynamics
2.4 TCP-Vegas' congestion window
2.5 TCP-Illinois' congestion window

3.1 Packet Arrival Time of Two Competing Flows [71]
3.2 Throughput of 5 FAST TCP flows
3.3 Throughput of 32 FAST TCP flows
3.4 Queue of the bottleneck router with the dynamics of 32 FAST TCP flows
3.5 Packets dropped at the router with 32 FAST TCP flows
3.6 RTT fairness improvement curve

4.1 Network topology used in NS-2 simulations
4.2 Average efficiency of the five tested algorithms
4.3 2 Flows with RTT fairness improvement mechanism disabled
4.4 2 Flows with RTT fairness improvement mechanism enabled
4.5 Throughput of 5 CUBIC flows
4.6 Utilization ratio of the bottleneck link with 5 CUBIC flows
4.7 Fairness of 5 CUBIC flows
4.8 Bottleneck router's queue occupancy with 5 CUBIC flows
4.9 Throughput of 5 Max-delay CUBIC flows (100 ms threshold)
4.10 Utilization ratio of the bottleneck link with 5 Max-delay CUBIC flows (100 ms threshold)
4.11 Fairness of 5 Max-delay CUBIC flows (100 ms threshold)
4.12 Bottleneck router's queue occupancy with 5 Max-delay CUBIC flows (100 ms threshold)
4.13 Throughput of 5 Max-delay CUBIC flows (10 ms threshold)
4.14 Utilization ratio of the bottleneck link with 5 Max-delay CUBIC flows (10 ms threshold)
4.15 Fairness of 5 Max-delay CUBIC flows (10 ms threshold)
4.16 Bottleneck router's queue occupancy with 5 Max-delay CUBIC flows (10 ms threshold)
4.17 Throughput of 5 Max-delay CUBIC flows with short-term fairness improvement (100 ms threshold)
4.18 Utilization ratio of the bottleneck link with 5 Max-delay CUBIC flows (100 ms threshold)
4.19 Fairness of 5 Max-delay CUBIC flows with short-term fairness improvement (100 ms threshold)
4.20 Bottleneck router's queue occupancy with 5 Max-delay CUBIC flows with short-term fairness improvement (100 ms threshold)
4.21 Throughput trajectory of 5 Min-delay CUBIC flows
4.22 Utilization ratio of the bottleneck link with 5 Min-delay CUBIC flows
4.23 Fairness of 5 Min-delay CUBIC flows
4.24 Bottleneck router's queue occupancy with 5 Min-delay CUBIC flows
4.25 Throughput of 50 CUBIC flows
4.26 Utilization ratio of the bottleneck link with 50 CUBIC flows
4.27 Fairness of 50 CUBIC flows
4.28 Bottleneck router's queue occupancy with 50 CUBIC flows
4.29 Packets dropped at the queue of the bottleneck router with 50 CUBIC flows
4.30 RTT of 50 CUBIC flows
4.31 Throughput of 50 Max-delay CUBIC flows
4.32 Utilization ratio of the bottleneck link with 50 Max-delay CUBIC flows
4.33 Fairness of 50 Max-delay CUBIC flows
4.34 Bottleneck router's queue occupancy with 50 Max-delay CUBIC flows
4.35 Packets dropped at the queue of the bottleneck router with 50 Max-delay CUBIC flows
4.36 RTT of 50 Max-delay CUBIC flows
4.37 Throughput of 50 Min-delay CUBIC flows
4.38 Utilization ratio of the bottleneck link with 50 Min-delay CUBIC flows
4.39 Fairness of 50 Min-delay CUBIC flows
4.40 Bottleneck router's queue with 50 Min-delay CUBIC flows
4.41 Packets dropped at the bottleneck router with 50 Min-delay CUBIC flows
4.42 RTT of 50 Min-delay CUBIC flows
4.43 Link efficiency ratio with varying number of concurrent flows
4.44 Jain's Fairness Index with varying number of concurrent flows
4.45 Queue occupancy ratio with varying number of concurrent flows
4.46 Packet error rate with varying number of concurrent flows


List of Tables

2.1 Simulation parameters for analyzing the window dynamics of several end-to-end congestion control algorithms

4.1 Simulation parameters for testing the efficiency of a single flow in a network with a large BDP
4.2 Total data transferred between the two endpoints during the simulation
4.3 Simulation parameters for testing the RTT fairness improvement mechanism
4.4 Simulation parameters for testing the algorithm short-term dynamics
4.5 Simulation parameters for testing the algorithm behavior in a dynamic network scenario
4.6 Simulation parameters for testing the algorithm scalability with increasing flow number


Acronyms

ABR Available Bit Rate

ACK Acknowledgement

AIMD Additive Increase Multiplicative Decrease

ATM Asynchronous Transfer Mode

BDP Bandwidth-Delay Product

DOCSIS Data Over Cable Service Interface Specification

ECN Explicit Congestion Notification

IP Internet Protocol

MIMD Multiplicative Increase Multiplicative Decrease

MSS Maximum Segment Size

MTU Maximum Transmission Unit

NS-2 Network Simulator 2

RED Random Early Detection

RTO Retransmission Timeout

RTT Round-Trip Time

TCP Transmission Control Protocol

UDP User Datagram Protocol

VoIP Voice over IP

WLAN Wireless Local Area Network

XCP Explicit Control Protocol


Chapter 1

Introduction

The Internet has grown in a significant way in the last twenty years. Today, it connects millions of people and machines, and it has impacted numerous aspects of our lives. Next-generation consumer networks, such as residential fiber optic networks [29] and DOCSIS 3.0 [38], are pushing the boundaries of what was considered to be impossible just a few years ago. For example, new services with ever-increasing bandwidth requirements, such as high-definition video-on-demand, on-line music, and cloud-based storage services, are quickly changing the way we use the Internet. High-speed transatlantic networks are enabling laboratories, like CERN and Fermilab, to exchange massive amounts of experimental data with science research facilities across the globe every day.

The Transmission Control Protocol (TCP) [54] plays a critical role in the Internet infrastructure. By providing a connection-oriented, reliable, byte-stream service, TCP has enabled the Internet to thrive on many types of networks, ranging from high-speed, highly reliable fiber optic links to error-prone wireless links. Additionally, TCP provides a fundamental service to the stability of the Internet, called congestion control.

Congestion control is a key issue in any network. It is intrinsically related to network and router capacity. The Internet, being composed of many heterogeneous networks with distinct capacities, is particularly susceptible to congestion events. Routers must buffer packets in queues before and after processing. If the arrival rate of packets is larger than the capacity of the network, a router must store them in a queue until it can forward the packets to the next hop. Eventually, if the arrival rate continues to be larger than the departure rate, the queue will grow to a size where the router has no alternative but to drop packets, disrupting network flows. So, as the packet loss rate increases, throughput will decrease. The solution is to decrease router queue occupancy, and consequently packet loss rates. To achieve this, sources must lower their sending rate. However, this was not the behavior of early TCP versions. TCP, being a reliable transport protocol, is expected to retransmit lost segments, but its original retransmission behavior was too aggressive, putting the network at risk of congestion collapse. A congestion collapse happens when a severely congested network reaches a steady state in which the actual useful throughput is very low.
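The queue dynamics described above can be sketched with a toy model (illustrative only; the function name and parameter values are ours, not taken from this work):

```python
def simulate_queue(arrival_rate, service_rate, buffer_size, steps):
    """Toy router queue: packets arrive, are forwarded at link capacity,
    and are dropped once the buffer overflows."""
    queue, dropped = 0, 0
    for _ in range(steps):
        queue += arrival_rate               # packets arriving this step
        queue -= min(queue, service_rate)   # packets forwarded downstream
        if queue > buffer_size:             # overflow: no choice but to drop
            dropped += queue - buffer_size
            queue = buffer_size
    return queue, dropped

# Arrival persistently above capacity: the queue fills, then losses begin.
queue, dropped = simulate_queue(arrival_rate=12, service_rate=10,
                                buffer_size=50, steps=100)
```

With these numbers, the 2-packet-per-step excess first fills the 50-packet buffer and is then dropped on every subsequent step, mirroring the throughput collapse described above.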


In October of 1986, the Internet suffered its first congestion collapse, and more would soon follow. These events fascinated researchers, prompting them to investigate the underlying causes of the collapses [30]. It was shown that TCP lacked the fundamental control mechanisms to prevent congestion collapses. Several algorithms were proposed, along with critical improvements to TCP timers.

The TCP congestion control algorithms have been successful since their inception in 1988, and have allowed the Internet to scale multiple times. However, the proliferation of new-generation networks, such as links with high bandwidth-delay products or wireless networks, has made the limitations of traditional TCP congestion control more apparent. Even with its long history, TCP congestion control remains a hot topic to this day, as TCP is still the only standard Internet protocol capable of end-to-end reliable transmission and congestion control.

1.1 Current TCP challenges

Link Efficiency

A major challenge currently faced by TCP is link efficiency: TCP should be able to fully use the capacity provided by a network. In fact, many recent contributions to TCP congestion control come as a side effect of the need to enhance TCP's performance on links with high bandwidth-delay products.

To gain insight into this problem, one must understand the dynamics involved in congestion window control. TCP uses a sliding window mechanism as a form of flow control. This window is called the congestion window. A TCP sender adjusts the window size when an acknowledgement arrives from the receiver, or when congestion is detected in the network. The time between sending a segment and receiving its acknowledgement is called the round-trip time (RTT). Therefore, the agility of a TCP sender (i.e., the speed at which it can adapt its window) is bounded by its RTT. This dependence biases TCP towards flows with smaller RTTs, and makes dramatic differences in window growth. For instance, a sender with an RTT of 10 ms can increase its congestion window to 100 segments in only 1 second; a sender with an RTT of 100 ms will take 10 seconds.
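The arithmetic behind this example is simple. As a hypothetical helper (not from the thesis), assuming the congestion-avoidance growth of roughly one segment per RTT:

```python
def time_to_reach_window(window_segments, rtt_ms):
    """In congestion avoidance, the window grows by roughly one segment
    per RTT, so reaching W segments takes about W RTTs."""
    return window_segments * rtt_ms / 1000.0  # seconds

# The example from the text: same 100-segment target, ten times the RTT.
fast = time_to_reach_window(100, rtt_ms=10)    # 1 second
slow = time_to_reach_window(100, rtt_ms=100)   # 10 seconds
```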

Advances in physical network technology have made long-distance, multi-gigabit networks commonplace. However, although network capacity has been continually increasing, latency remains constant. Additionally, a TCP sender adjusts its window according to the Additive Increase Multiplicative Decrease (AIMD) algorithm, with fairly conservative parameters. This design, while safe, leads to very poor link utilization on networks with large bandwidth-delay products. This condition is aggravated as the bandwidth-delay product of the link increases.
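One AIMD step with Reno's standard parameters (an increase of one segment per RTT, halving on loss) can be sketched as follows; the function name is of our choosing:

```python
def aimd_update(cwnd, congestion_detected, alpha=1.0, beta=0.5):
    """One AIMD step: additive increase of `alpha` segments per RTT,
    multiplicative decrease by factor `beta` when congestion is detected."""
    if congestion_detected:
        return max(cwnd * beta, 1.0)   # never shrink below one segment
    return cwnd + alpha                # cautious probing for more bandwidth
```

On a loss, a 42,000-segment window collapses to 21,000 in a single step, but regaining it takes one additive step per RTT.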

To illustrate, consider a link with 5 Gb/s of available bandwidth, an MTU of 1500 bytes, and an RTT of 100 ms. TCP will need a window of approximately 42,000 segments to fully use the link.


After detecting the first congestion event, the congestion window will be reduced to 21,000 segments. Assuming no other loss event occurs, which is unlikely due to the dynamics of the link itself, such as other TCP flows competing for bandwidth and the error rate of the link, TCP will need 35 minutes to reach full link utilization. In conclusion, TCP's design is not sufficient for high-speed, high-latency networks.
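The numbers in this example can be reproduced directly (a sketch under the example's assumptions: MSS-sized segments of 1500 bytes, one-segment-per-RTT recovery):

```python
def window_for_full_link(bandwidth_bps, rtt_s, mss_bytes=1500):
    """Segments in flight needed to fill the pipe: BDP divided by MSS."""
    return bandwidth_bps * rtt_s / (8 * mss_bytes)

def recovery_minutes(segments_to_regain, rtt_s):
    """Congestion avoidance regains one segment per RTT."""
    return segments_to_regain * rtt_s / 60.0

w = window_for_full_link(5e9, 0.1)   # ~41,667 segments (the ~42,000 above)
t = recovery_minutes(w / 2, 0.1)     # ~35 minutes to refill the link
```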

Fairness

Fairness is also an important issue. We consider two different types of fairness: TCP fairness, which concerns how fair a new TCP proposal is to standard TCP, and intra-protocol fairness, which concerns the fair distribution of bandwidth among flows running the same TCP version. Regarding legacy TCP fairness, flows running TCP variants should not dominate standard TCP flows, to allow for a less disruptive deployment in the Internet. However, we do not consider this requirement as important as intra-protocol fairness, as it restrains the development of new protocols. Regarding intra-protocol fairness, any TCP algorithm that aims to provide efficient link utilization on high-speed networks must eventually converge to a fair distribution of bandwidth among competing flows. The algorithm should also account for the various aspects that may influence individual flow behavior, and consequently its overall fairness. For example, it has been observed that flow properties like round-trip time, flow starting time, and aggressiveness of the increase factor can severely impair the fairness of many new TCP proposals [49]. Finally, the transient period, during which the distribution of bandwidth among flows is not fair, should be as short as possible without compromising the stability and efficiency of the algorithm.
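Intra-protocol fairness is later quantified in this work with Jain's Fairness Index (Appendix B.1). The standard formula, J = (Σxᵢ)² / (n · Σxᵢ²), is easy to make concrete:

```python
def jains_index(throughputs):
    """Jain's Fairness Index: 1.0 for a perfectly equal allocation,
    approaching 1/n when one flow monopolizes the bottleneck."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

jains_index([10.0, 10.0, 10.0, 10.0])   # equal shares: index 1.0
jains_index([30.0, 10.0])               # one flow dominates: index 0.8
```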

Heterogeneous networks

Wireless networks have become very popular in the last few years. With the advent of mobile computing, the number of devices using TCP over wireless links has seen explosive growth. Nowadays, mobile phones with Internet access, wireless broadband Internet [1], and WLAN [28] residential networks are very common. They serve as a testimony of TCP's flexibility to adapt to new physical media. However, it is important to note that TCP was originally designed with wired networks in mind, where packet error rates are small and the network characteristics are fairly constant. This, of course, contrasts with the reality of wireless networks. In wireless networks, error rates can spike due to interference, or frame collisions caused by multiple senders trying to transmit simultaneously. Available bandwidth can suddenly drop due to signal degradation. TCP is designed to interpret packet losses as a sign of congestion. So, in a wireless network, when TCP detects that a segment is lost, it will frequently and erroneously assume that the loss was caused by congestion. In response to the loss event, TCP will halve its congestion window, unnecessarily hurting throughput. This is perhaps the most challenging problem of TCP, as it has no means to distinguish between losses caused by network congestion and losses caused by link errors.


1.2 Current solutions

The limitations in TCP congestion control urged the research community to put forward new solutions. Many contributions have been proposed, but all can be framed in one of two approaches: end-to-end congestion control, and router-assisted congestion control.

End-to-end algorithms

The end-to-end strategy is the classical approach to congestion control. In this model, it is the responsibility of the TCP sender to detect congestion and act upon it. End-to-end algorithms have a black-box perspective of the network. Therefore, all end-to-end congestion mechanisms can only rely on implicit congestion signals, i.e., packet losses and delay variations. This approach has proved successful for many years. Nevertheless, it has some limitations. Loss-based schemes cause network congestion by design, as inducing losses is their only way to probe for available bandwidth, making them reactive rather than proactive. Delay-based schemes can proactively respond to network congestion, and thus are more network friendly, in the sense that they can keep packet loss rates and queueing delay low. By inspecting fluctuations in measured RTT values, they can infer router queue occupancy, but they face many challenges with accurate RTT estimation and suffer link under-utilization in the presence of loss-based algorithms. Moreover, it is yet to be determined whether pure delay-based algorithms can work correctly in many network scenarios. More recently, hybrid approaches (i.e., combining loss and delay signals) have been proposed as possible solutions to some of the inherent problems of pure loss-based and pure delay-based mechanisms. Hybrid algorithms show improved results in some scenarios, but they still retain most problems of the loss-based and delay-based algorithms, because they still have limited awareness of the true network state. Despite its limitations, the proponents of the end-to-end approach argue that only an end-to-end algorithm adheres to the “golden principle” that the complexity of a network should be at its edge and not at its core [21], allowing the network to scale [58]. Another important argument is that the end-to-end approach allows for an incremental deployment of such algorithms. A very attractive characteristic of an end-to-end congestion control algorithm is that only the sender side must be modified.
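The delay-based idea of inferring queue occupancy from RTT fluctuations, as TCP-Vegas does, reduces to comparing each RTT sample against the smallest (propagation-only) RTT seen so far. A minimal sketch, with names of our choosing:

```python
def estimated_queueing_delay(rtt_samples_ms):
    """Queueing delay inferred end-to-end: current RTT minus the minimum
    observed RTT, which approximates the propagation delay."""
    base_rtt = min(rtt_samples_ms)        # best-case (empty-queue) RTT
    return rtt_samples_ms[-1] - base_rtt  # extra delay attributed to queues

estimated_queueing_delay([102, 100, 131])  # 31 ms of inferred queueing
```

The sketch also hints at the estimation challenges noted above: if the queue never empties, `base_rtt` overestimates the propagation delay and the inferred queueing delay is too small.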

Router-assisted algorithms

At the same time, a different strategy for congestion control has been proposed. The router-assisted approach makes routers an active component of the congestion control architecture.

One early proposal is Random Early Detection (RED) [23] alongside Explicit Congestion Notification (ECN) [57]. RED aims to mitigate some of the problems with drop-tail queues. In drop-tail queues, packets are dropped at a bottleneck router after a queue buffer overflow occurs. In RED-managed queues, packets are dropped earlier, based on the output of a drop-probability algorithm. If the queue occupancy is low, almost no packets are dropped; conversely, if the queue starts to fill, the drop probability rises proportionally. This approach has shown good results in controlling packet drops, queuing delay, and lock-out behavior [9]. However, TCP senders still detect network congestion by observing packet losses, and inherit all the problems associated with implicit congestion signaling. Thus, ECN was proposed. ECN uses the output of RED's probability function, but instead of dropping the packet, it explicitly signals the TCP sender, allowing it to adapt to congestion before it occurs and to avoid unnecessary retransmissions.
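The drop-probability behaviour described for RED is the classic piecewise-linear function (parameter names are ours; "gentle" RED variants differ above the upper threshold):

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Classic RED: no early drops below min_th, forced drops at or above
    max_th, and a linearly rising probability in between."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

In the actual algorithm, `avg_queue` is an exponentially weighted moving average of the instantaneous queue length, which is omitted here for brevity.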

More sophisticated algorithms for Internet congestion control, called explicit-rate feedback algorithms, have also been proposed. They were inspired by ATM's ABR protocol [16]. Examples of such proposals are [5] and XCP [17], even though XCP uses windows instead of rates. In essence, explicit-rate feedback algorithms allow routers to tell sources how much data they should be sending at a given time. Therefore, they exhibit advantages over their end-to-end counterparts, solving most of their problems: they achieve efficient utilization of high bandwidth-delay links and flow fairness, and resolve the ambiguity of segment loss in heterogeneous networks. The main drawbacks are that, in addition to the sources, all routers in the network are required to participate in the protocol, and that more powerful routers are needed to cope with the increased complexity of such algorithms.

The router-assisted approach, while arguably superior, is still under discussion, since its implementation in the current Internet is very challenging due to both technical and political aspects. These and other factors have so far hindered the adoption of the router-assisted approach.

Our work will focus on the end-to-end approach.

1.3 Contributions of this work

In this work we explore current, and introduce new, delay-based techniques in order to improve an existing loss-based, end-to-end congestion control algorithm. We believe that such techniques can bring significant performance advantages in numerous network scenarios. We run simulations with the Network Simulator 2 (NS-2) [50] tool to verify the performance improvements of our proposals. The simulation results focus on key performance metrics such as link efficiency, fairness, router queue occupancy, latency, and packet loss rate. Finally, we compare our proposals with several other high-speed congestion control algorithms.

A TCP end-to-end congestion control algorithm is the simplest and most scalable form of congestion control in the Internet. Except for the sender, and possibly the receiver [48, 61], the end-to-end congestion control approach treats all components of the Internet architecture as black boxes. Congestion is inferred only through two implicit signals: packet losses and delay variations. Packet losses are detected with the help of TCP timers and duplicate acknowledgements. Packet delay is detected through RTT measurements and by mining the acknowledgement stream. Under these constraints, we develop and evaluate two new end-to-end congestion control algorithms. We illustrate the end-to-end congestion control architecture in Figure 1.1. Note that some components are optional; e.g., in a loss-based algorithm, the RTT measurement component does not influence the congestion window size. We also compare the existing proposals with the candidate algorithms.

Figure 1.1: TCP end-to-end congestion control architecture

In order to evaluate our proposals, we extend and improve the Hamilton Institute benchmark TCP suite [7] to fulfill our requirements. The test suite has three different purposes: first, to assess the performance of existing TCP congestion control algorithms and establish a baseline; next, to test the new candidate algorithms and modifications under identical test scenarios; finally, to compare the results obtained for the new algorithms with the current proposals.

We chose the Network Simulator Version 2 (NS-2) [50] tool to implement our proposals and perform the benchmarks. NS-2 is a discrete-event network simulator that is widely used in the network research community. It is considered by many a non-commercial "standard" for the evaluation of networks and networking algorithms. NS-2 supports a wide range of simulation scenarios, including wired and wireless networks, and protocols such as TCP. NS-2's core is written in C++, with an OTcl interface to specify various network scenarios, ranging from simple dumbbell to more complex topologies. Because of its open nature and very flexible architecture, NS-2 has been constantly developed and improved. Therefore, choosing NS-2 for the implementation and evaluation of a new proposal gives other researchers the opportunity to validate the results, and improves the acceptance of such a proposal.

1.4 Thesis structure

The remainder of this work is organized as follows.

Chapter 2 describes the related work carried out by the network research community, with emphasis on the end-to-end approach to Internet congestion control. The three major end-to-end approaches are discussed: loss-based congestion control, delay-based congestion control, and hybrid-based congestion control.

Chapter 3 presents two novel end-to-end congestion control algorithms, based on an existing loss-based congestion control proposal. It discusses the main design decisions and considerations behind the two proposals, as well as several implementation issues faced during the course of our work.

Chapter 4 presents an in-depth simulation study using the Network Simulator 2 (NS-2) tool. It compares the performance of existing high-speed, end-to-end congestion control algorithms with our proposals in several next-generation network scenarios.

Chapter 5 concludes the thesis, summarizing the main findings of our work.


Chapter 2

Related Work

End-to-end algorithms propose an attractive approach to Internet congestion control, both in simplicity and scalability. In this chapter, we present some of the most relevant work on TCP end-to-end congestion control. Recall that the only signals of network congestion available to an end-to-end algorithm are packet losses and latency variations. Therefore, research has been focused on three types of algorithms:

• Loss-based algorithms rely on packet losses alone to react to network congestion;

• Delay-based algorithms use delay measurements alone to infer router queue occupancy and act before heavy congestion occurs;

• Hybrid-based algorithms use techniques from both loss-based and delay-based algorithms; their rationale is that with more information, an end-to-end algorithm can infer the state of the network more accurately and make better decisions.

We start by describing the principles of loss-based congestion control algorithms, focusing on both legacy and high-speed variants. Next, we describe the major delay-based congestion control proposals, and how they differ from the more traditional loss-based approaches. Finally, we describe the hybrid-based congestion control proposals.

2.1 Loss-based congestion control algorithms

Traditionally, TCP has used packet losses to detect network congestion. Many new algorithms now use delay information, but pure loss-based algorithms are still, by far, dominant in the Internet. We present the key concepts of the classic TCP congestion control algorithms, and their significance to Internet stability.

Tahoe was the first TCP version designed with congestion control [30]. One of the main principles of its operation is that the rate at which new packets are sent into the network must be close to the rate at which the receiver returns acknowledgements. In versions earlier than Tahoe, sources send multiple segments into the network until the advertised receiver window is full. This overly aggressive behavior may cause problems when there are slower links between the sender and receiver. Upon reaching an upstream router connected to a lower-capacity link, a packet must be buffered and wait for transmission. Since the buffer space is finite, it is possible that the router overflows its queue, and multiple packets will be dropped. It was shown in [30] how this approach could severely affect the throughput of TCP connections.

Tahoe includes three new algorithms: slow-start, congestion avoidance, and fast retransmit. Tahoe also adds a new window to the sender's TCP state, called the congestion window. A TCP sender must not transmit more than the minimum of the congestion window and the advertised receiver window, i.e.

window = min(cwnd, rwnd),

where cwnd is the congestion window, and rwnd is the advertised receiver window.

When a new connection is established with another host, the congestion window is set to one segment size. Each time an ACK is received, the sender's congestion window is increased by one segment, resulting in an exponential increase of the window. This is slow-start. Some time after, if the network capacity is reached, an intermediate router will start dropping packets. Tahoe detects packet loss using two different mechanisms: a retransmission timeout (RTO), and the arrival of three duplicate acknowledgements. If packet loss is detected through three duplicate ACKs, Tahoe enters the fast retransmit phase, where the (presumably) lost packet is resent. Eventually, one of the two loss-detection mechanisms will be activated. When this happens, the sender will realize that its congestion window has become too large, and will react accordingly. Note that one of TCP Tahoe's fundamental assumptions is that the packet loss rate is very small; therefore, packet loss can be inferred as a strong indicator that congestion has occurred on the network, and the sender should decrease its sending rate.

Upon detecting congestion on the network, half of the current congestion window is saved in a variable called the slow-start threshold (ssthresh). The sender sets its congestion window to one segment size and begins slow-start, increasing the window exponentially. After the sender's congestion window reaches ssthresh, cwnd is increased by approximately one segment per RTT (1/cwnd each time an acknowledgement is received). This is congestion avoidance. The congestion avoidance algorithm tries to increase the congestion window in a more conservative manner than slow-start. This results in an additive increase, as opposed to slow-start's exponential increase.
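The slow-start and congestion avoidance dynamics above can be summarized in a small sketch, with the window and threshold measured in segments (the helper names are ours):

```python
def tahoe_on_ack(cwnd, ssthresh):
    """Window growth on each new ACK, in segments: exponential per RTT
    during slow-start (one segment per ACK), roughly one segment per
    RTT during congestion avoidance (1/cwnd per ACK)."""
    if cwnd < ssthresh:
        return cwnd + 1.0          # slow-start
    return cwnd + 1.0 / cwnd       # congestion avoidance

def tahoe_on_loss(cwnd):
    """On detecting loss: halve the window into ssthresh and restart
    slow-start from one segment."""
    ssthresh = max(cwnd / 2.0, 2.0)
    return 1.0, ssthresh           # (new cwnd, new ssthresh)
```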

Tahoe, even with its limitations, was a major breakthrough in congestion control. It has played a critical part in the prevention of congestive collapses on the Internet, and has set the guidelines and principles that virtually every new TCP implementation should respect or, at least, consider.

Tahoe was followed by Reno [4]. Reno retains most of Tahoe's characteristics, but introduces two changes that improve Tahoe's performance considerably. Reno changes how Tahoe reacts to the detection of three duplicate acknowledgements, and how it recovers from it. The key idea is that Reno should not fall back to slow-start when packet loss is detected through duplicate ACKs. Instead, it should continue sending data, albeit at a slower pace. The main insight is that since the sender is still receiving ACKs, the network is still delivering packets, and so the congestion experienced is not yet heavy. Therefore, the sender should be able to take advantage of this situation and proceed carefully, but it should not stop sending packets altogether. As such, Reno introduced a new mechanism called fast recovery.

Figure 2.1: Reno's congestion window. Based on [74].

Upon receiving three duplicate ACKs, Reno will activate fast retransmit and resend the lost segment right away. This behavior is identical to Tahoe. However, after fast retransmit, Reno will activate fast recovery. In fast recovery, half the value of the congestion window is saved in ssthresh, and the new value of the congestion window is set to ssthresh plus three times the segment size. This is safe to do, because three duplicate ACKs mean that three packets have left the network, and the principle of conservation of packets [30] is respected. Whenever a duplicate ACK arrives, the congestion window is inflated by the segment size (again, meaning that another packet has left the network). If the new value of the congestion window allows it, the sender will transmit a packet. The algorithm proceeds until an ACK arrives that acknowledges new data. When fast recovery finishes, the congestion window is set to ssthresh, and Reno resumes normal congestion avoidance. To illustrate, we show a possible evolution of the congestion window under TCP Reno in Figure 2.1. Note that this figure only illustrates the slow-start and congestion avoidance phases; the effects of the fast retransmit and fast recovery algorithms on the congestion window are not shown. Additionally, note that at any time, if the retransmission timer expires, Reno will enter slow-start, exactly like Tahoe.
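The fast retransmit and fast recovery window arithmetic just described can be sketched as follows, in segments (the function names are ours):

```python
def reno_enter_fast_recovery(cwnd):
    """On three duplicate ACKs: halve cwnd into ssthresh, then inflate
    the window by three segments, since three dupACKs mean three
    packets have already left the network."""
    ssthresh = max(cwnd / 2.0, 2.0)
    return ssthresh + 3.0, ssthresh    # (inflated cwnd, new ssthresh)

def reno_on_dup_ack(cwnd):
    """Each additional duplicate ACK inflates the window by one more
    segment (one more packet has left the network)."""
    return cwnd + 1.0

def reno_exit_fast_recovery(ssthresh):
    """An ACK for new data deflates the window back to ssthresh and
    resumes normal congestion avoidance."""
    return ssthresh
```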


Reno is a major improvement over Tahoe regarding a single packet loss in a window. However, like Tahoe, it does not behave well when multiple packet losses occur within a single window of data.

NewReno is proposed in [22] to enhance Reno's fast retransmit and fast recovery algorithms. Reno assumes that only one segment is lost from a window of data. However, packets frequently arrive in bursts at the routers [43], and therefore packet drops will also occur in bursts. This is common on routers with drop-tail queues. So, when a bottleneck router queue overflows, often more than one segment is lost from a window of data. In this setting, Reno will not handle multiple packet drops adequately.

A fundamental problem with Reno, which makes it unsuitable to handle multiple packet drops, is that it will leave fast recovery every time an ACK that acknowledges new data arrives. In a scenario where multiple packets are dropped, Reno will call fast retransmit and fast recovery multiple times. Each time this happens, the congestion window and slow-start threshold will be further reduced. This leads to poor throughput and can sometimes trigger an RTO due to ACK starvation, forcing the sender back to slow-start.

The novel idea in NewReno is the introduction of partial acknowledgements. A partial acknowledgement acknowledges only some of the segments that were outstanding when the sender detected packet loss. In Reno, the first partial acknowledgement will result in the termination of fast recovery. NewReno, on the other hand, will use the partial acknowledgement as a signal that another packet was lost, and react accordingly.

Upon detecting three duplicate acknowledgements, NewReno will save the sequence number of the last transmitted segment in a variable called recover. What follows is a behavior nearly identical to Reno: NewReno will enter fast retransmit, send the lost packet, and invoke fast recovery. However, when a partial acknowledgement arrives at the sender, NewReno will retransmit the segment matching the partial acknowledgement, but it will remain in fast recovery. Only after the sequence number in recover is acknowledged will fast recovery end.
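A minimal sketch of this decision, using sequence numbers (the function name and return convention are ours):

```python
def newreno_ack_in_recovery(ack_seq, recover):
    """Classify an ACK received during fast recovery.

    An ACK below 'recover' (the highest sequence number sent when loss
    was detected) is a partial ACK: retransmit the segment it points at
    and stay in fast recovery. Once 'recover' itself is acknowledged,
    fast recovery ends.
    """
    if ack_seq < recover:
        return "retransmit-and-stay"   # partial ACK: another loss
    return "exit-fast-recovery"        # full ACK: recovery complete
```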

NewReno is still one of the most popular TCP variants today. Together with Reno and SACK [48], which extends TCP's capabilities to selectively acknowledge individual segments, and therefore improves its ability to recover from multiple segment losses within a window, it is commonly regarded as the de facto TCP version.

High-speed TCP congestion control algorithms

It is widely recognized that TCP's conservative AIMD behavior scales poorly in networks with high bandwidth-delay products. One possible strategy to improve link efficiency is to make TCP more aggressive; that is, increase the congestion window faster and, upon packet loss, use a smaller decrease factor. Several new high-speed TCP variants were proposed as simple changes to TCP's AIMD mechanism. Many high-speed TCP variants have two distinct modes of operation. In high-speed mode, a more aggressive behavior is used to achieve better efficiency. In compatibility mode, standard TCP's behavior is copied to maintain fairness with legacy TCP flows. The change between high-speed and legacy modes is commonly controlled by the congestion window size. Unless stated otherwise, we only describe the high-speed modes of TCP variants.

HighSpeed TCP for Large Congestion Windows [19] modifies TCP's AIMD mechanism to make it more suitable for high-speed networks. HighSpeed TCP is designed to have a dynamic behavior that depends on the state of the network. In low packet loss conditions, HighSpeed TCP behaves more aggressively than regular TCP to achieve high network utilization. However, when the packet loss rate is higher than 10⁻³, HighSpeed TCP's behavior is identical to regular TCP. This way, HighSpeed TCP can maintain fairness to regular TCP and does not increase the risk of congestion collapse in networks with moderate to high packet loss rates. Also, if the current congestion window is smaller than a predefined threshold, Low_Window, then HighSpeed TCP falls back to the regular TCP behavior.

The HighSpeed TCP response function is defined by new additive increase and multiplicative decrease parameters, a(cwnd) and b(cwnd), where cwnd is the congestion window. In the congestion avoidance phase, the growth of the congestion window is determined by the following equations:

When an ACK is received,

cwnd ← cwnd + a(cwnd)/cwnd

When congestion is detected,

cwnd ← cwnd − b(cwnd) × cwnd

If the value of cwnd is less than Low_Window, HighSpeed TCP behaves exactly like TCP, so a(cwnd) = 1 and b(cwnd) = 0.5. Conversely, when cwnd is larger than Low_Window, a(cwnd) becomes larger and b(cwnd) becomes smaller as cwnd increases. This behavior allows HighSpeed TCP to grow its congestion window and recover from losses faster than TCP, making it more suitable for high-speed links. HighSpeed TCP is now an Internet RFC, but it has not yet seen significant adoption. In Figure 2.2, we show the evolution of a congestion window under HighSpeed TCP. Notice the more aggressive increase behavior of HighSpeed TCP's congestion window. Also, notice that when a packet drop occurs, HighSpeed TCP's congestion window reduces from approximately 1300 packets to around 950 packets, instead of 650 packets, as would be expected with NewReno. The simulation parameters are described in Table 2.1.
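The update rules can be sketched generically. The actual a(cwnd) and b(cwnd) values come from HighSpeed TCP's tabulated response function, so here they are simply passed in as parameters:

```python
def hstcp_on_ack(cwnd, a):
    """Increase on each ACK: a(cwnd)/cwnd, i.e. roughly a(cwnd)
    segments per RTT."""
    return cwnd + a / cwnd

def hstcp_on_loss(cwnd, b):
    """Decrease on congestion: shrink the window by the fraction b(cwnd)."""
    return cwnd - b * cwnd

# In the standard-TCP regime (cwnd < Low_Window): a = 1 and b = 0.5,
# recovering regular TCP's AIMD behavior.
```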

Scalable TCP [34] builds upon the work of HighSpeed TCP. Its congestion window follows a Multiplicative Increase Multiplicative Decrease (MIMD) rule. Like HighSpeed TCP, Scalable TCP's window will grow at a faster rate after it has reached a certain threshold. Unlike HighSpeed TCP, however, Scalable TCP uses static increase and decrease parameters. Its high-speed congestion window growth is defined as follows:


Simulation parameters
Sources                 1
Link capacity (Mb/s)    100
RTT (ms)                128
Router queue            100% BDP
MSS (bytes)             1460

Table 2.1: Simulation parameters for analyzing the window dynamics of several end-to-end congestion control algorithms

Figure 2.2: HighSpeed TCP's congestion window


When an ACK is received,

cwnd ← cwnd + 0.01

When congestion is detected,

cwnd ← cwnd − [0.125 × cwnd]
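Ignoring the rounding implied by the brackets, Scalable TCP's MIMD update can be sketched as:

```python
def scalable_on_ack(cwnd):
    """Multiplicative increase: 0.01 segments per ACK received, which
    compounds into exponential growth per RTT."""
    return cwnd + 0.01

def scalable_on_loss(cwnd):
    """Multiplicative decrease: give back 12.5% of the window."""
    return cwnd - 0.125 * cwnd
```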

HighSpeed and Scalable TCP can both achieve good scalability and link utilization. However, they present some fairness concerns [49, 73] when flows with different RTTs compete for network bandwidth. Recall that standard TCP is already biased towards flows with smaller RTTs. A disadvantage of both protocols is that their more aggressive nature makes them more prone to RTT unfairness [44].

H-TCP [41] seeks to maintain compatibility with the versions of TCP already deployed in the Internet. It builds upon the principles of HighSpeed TCP, but introduces several ideas to reduce the RTT unfairness that HighSpeed TCP suffers from. Like HighSpeed TCP, it adjusts its α and β parameters to better utilize the available network capacity. Its unique idea, however, consists of making the congestion window growth function depend on the time since the last packet drop, instead of the traditional method of increasing the congestion window every RTT. By making the congestion window growth independent of the RTT, H-TCP significantly improves the throughput fairness of flows with heterogeneous RTTs.

TCP-Westwood+ is proposed in [12]. It uses the idea of bandwidth estimation from earlier proposals such as [10] and [35]. The key insight is to use the arrival rate of acknowledgements to infer the current network bandwidth. Unlike standard TCP, which halves the congestion window on packet loss, TCP-Westwood uses the bandwidth estimate to set the congestion window and slow-start threshold to more appropriate values. This makes TCP-Westwood perform significantly better than Reno in heterogeneous networks. More recently, the authors of TCP-Westwood proposed a new variant called TCP-Westwood with agile probing and persistent non-congestion detection (TCPW-A) [65]. The proposed changes make TCP-Westwood more suitable for links with dynamic, high bandwidth-delay products. Moreover, its authors state that TCPW-A can achieve good efficiency without compromising fairness and stability. An interesting characteristic of this version is that it retains the desirable properties of the original Westwood, i.e., good performance in heterogeneous networks. Therefore, it can show better behavior compared to other high-speed loss-based TCP variants, which treat packet loss exclusively as a sign of network congestion.

Another algorithm, called Binary Increase Congestion control (BIC), is proposed in [73]. It uses a novel congestion window growth mechanism that views congestion control as a searching problem. The basic idea of the algorithm is as follows. A minimum window is defined as a congestion window at which packet loss is not yet detected. A target window is defined as the midpoint between the minimum window (i.e., the current congestion window, provided no loss has occurred yet) and a maximum window. Initially, the maximum window is a large predefined constant; otherwise, it is the window at which packet loss was previously detected. If the current congestion window is far from the target window, BIC will increase the congestion window faster. This is called the additive increase phase. When the congestion window approaches the target window, BIC will enter the binary search phase, where the window has a logarithmic growth. If the congestion window reaches the maximum window, BIC will enter the max probing phase, and the window will grow exponentially until the new maximum window is found. This behavior sets BIC apart from other scalable protocols, such as HighSpeed TCP and Scalable TCP, where the growth rate is always fastest near the maximum window. Its authors argue that, compared to TCP, BIC's design improves link efficiency, TCP friendliness, fairness, and protocol stability. BIC was made the default TCP congestion control algorithm in the 2.6.8 Linux kernel.

CUBIC is a less complex and more network-friendly variant of BIC, proposed after concerns were raised about the aggressiveness of BIC in networks with low round-trip times or low link speeds [26]. CUBIC eliminates BIC's three distinct window growth phases. Instead, it uses a single cubic function that behaves in a similar way. Also, inspired by the ideas proposed in [41] and [27], CUBIC makes its congestion window growth rate independent of the RTT, and instead dependent on the time since the last congestion event.

More specifically, CUBIC increases its congestion window according to

Wcubic = C(t − K)³ + Wmax,

where C is a scaling factor, t is the elapsed time since the last window reduction (i.e., the last congestion event), Wmax is the window size just before the last window reduction, and K = ∛(Wmax β/C), where β is the congestion window decrease factor applied at the time of the last congestion event. In Figure 2.3, we compare the evolution of BIC's and CUBIC's congestion windows. Note that the evolution of CUBIC's congestion window seeks to approximate that of BIC. However, it achieves this by using a simple cubic function to control its behavior, as opposed to BIC, which is a more complex algorithm due to its three distinct phases. The simulation parameters are described in Table 2.1.
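The cubic function can be evaluated directly. The constants below (C = 0.4 and β = 0.2) are commonly cited example values for CUBIC, not taken from this text:

```python
def cubic_window(t, w_max, c=0.4, beta=0.2):
    """CUBIC congestion window as a function of the time t (seconds)
    since the last congestion event."""
    k = (w_max * beta / c) ** (1.0 / 3.0)   # time at which W returns to w_max
    return c * (t - k) ** 3 + w_max

# At t = 0 the window sits at (1 - beta) * w_max; it then grows
# concavely back towards w_max (reached at t = K) and convexly beyond it.
```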

CUBIC’s authors state that its cubic window function ensures intra-protocol fairness, andthat its RTT-independent congestion window increase function improves RTT fairness. Sincethe window growth rate is dominated by t, eventually all flows will converge to the samevalue, thus improving RTT fairness. Note, however, that this still doesn’t fully solve theRTT fairness challenge, as is discussed in section 3.2.

Figure 2.3: BIC's and CUBIC's congestion window dynamics

CUBIC distinguishes itself from many end-to-end congestion control algorithms proposed in the literature by the fact that it has been the default congestion control algorithm in the Linux kernel since version 2.6.19. As a consequence, it is one of the few proposals that has seen extensive real-world testing. Nevertheless, CUBIC is not without its critics. Some researchers have pointed out that CUBIC can be overly aggressive towards cross-traffic (e.g., VoIP calls) [71], and that it can have very slow convergence times, or not converge at all, showing significant throughput unfairness in several network scenarios [42].

Satellite links

Several contributions have been proposed to address TCP's specific problems on satellite links. The proposal in [3] uses dummy segments to probe for available network resources; it also introduces two new algorithms to optimize link utilization. In [11], along with several mechanisms to improve link efficiency, it is proposed that, to achieve better fairness, the window of flows with higher RTTs should increase faster than that of flows with smaller RTTs.

2.2 Delay-based congestion control algorithms

Delay-based algorithms mark a significant departure from the traditional loss-based approach. They are based on the premise that there are warning signs before network congestion occurs. They therefore use network delay information, usually in the form of RTTs, to infer the state of the queues at the routers. One advantage of the delay-based approach is that it is much more network-friendly than its loss-based counterpart, in the sense that network congestion can be proactively avoided. Therefore, delay-based schemes can avoid the typical congestion window oscillation of loss-based algorithms, e.g., TCP NewReno's familiar "saw-tooth" pattern, and enable better link bandwidth utilization. However, as doubts still persist about the effectiveness of delay-based strategies, several authors argue that more research is needed [55].


Pure delay-based algorithms

There are several early proposals for congestion control algorithms based on delay information. In [67], RTT variations are used to control the congestion window size. The window grows according to Reno, but if the current measured RTT is higher than the average of the minimum and maximum RTTs measured so far, then the window size is decreased. In [66], variations in the sending rate are used to control the window size. The window is increased every RTT by one segment. The current throughput is then compared to the throughput achieved with the previous window. If the difference in the sending rate is less than a certain threshold, then the window is decreased by one segment. In [31], the window size is adjusted based on both the RTT and the window size, according to the sign of the following product:

(WindowSize_current − WindowSize_old) × (RTT_current − RTT_old)

If the result is positive, the window is decreased. If the result is zero or negative, the window is increased by one segment size.
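A sketch of this rule (the function name is ours; units are arbitrary):

```python
def delay_window_product_adjust(cwnd, cwnd_old, rtt, rtt_old):
    """Adjust the window based on the sign of the product of the
    window change and the RTT change: if growing the window also grew
    the delay, back off; otherwise keep probing for bandwidth."""
    if (cwnd - cwnd_old) * (rtt - rtt_old) > 0:
        return cwnd - 1.0
    return cwnd + 1.0
```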

Vegas [10] is a seminal work on end-to-end delay-based TCP congestion control that has directly influenced many delay-based proposals. Inspired by [66], Vegas measures the sending rate to control the congestion window size. Additionally, Vegas presents new ideas to improve packet loss recovery, and introduces changes to the slow-start mechanism to reduce packet loss.

More specifically, Vegas adjusts its congestion window by comparing the expected sending rate to the actual sending rate. First, it obtains the expected sending rate through the following formula:

Expected = WindowSize / BaseRTT,

where WindowSize is the current congestion window size, and BaseRTT is the minimum RTT measured for the connection so far (i.e., the RTT of the network when it is not yet congested). Second, Vegas calculates the actual sending rate per RTT. It records the sending time of a distinguished segment, at time T_I. When the acknowledgement of the distinguished segment arrives, at time T_F, the actual sending rate is calculated with

Actual = SentData / (T_F − T_I),

where SentData is the amount of data sent between T_I and T_F. Third, Vegas defines two thresholds, α and β (α < β), which are the minimum and maximum number of packets that each flow should keep queued in the routers. Finally, the difference between the expected and actual sending rates,

Diff = Expected − Actual,

is compared with α and β. During congestion avoidance, if Diff < α, the window is increased by one segment; if Diff > β, the window is decreased by one segment; if α < Diff < β, the window remains unchanged. Like Reno, Vegas starts the connection by calling slow-start, even though Vegas uses a less aggressive variant. Vegas exits slow-start when Diff falls below a determined threshold.

Figure 2.4: TCP-Vegas' congestion window
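One congestion-avoidance step of this controller can be sketched as follows. Here we express Diff as an estimated backlog in packets by scaling the rate difference by BaseRTT, which is the usual reading of Vegas' thresholds; the α/β defaults are illustrative:

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=2.0, beta=4.0):
    """Vegas-style congestion avoidance: estimate how many of our
    packets sit in router queues and steer that backlog into [alpha, beta]."""
    expected = cwnd / base_rtt                  # rate with empty queues
    actual = cwnd / rtt                         # rate actually achieved
    diff = (expected - actual) * base_rtt       # estimated queued packets
    if diff < alpha:
        return cwnd + 1.0   # queues nearly empty: claim more bandwidth
    if diff > beta:
        return cwnd - 1.0   # too much backlog: back off
    return cwnd
```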

Vegas can significantly reduce network congestion, and maintain the congestion window around the optimal value for a given link. Its linear growth, however, makes Vegas unsuitable for networks with high bandwidth-delay products. Furthermore, Vegas' performance is hindered when it co-exists with more aggressive loss-based TCP variants, such as Reno. We show the evolution of Vegas' congestion window in Figure 2.4. The simulation parameters are described in Table 2.1. Note the absence of the characteristic "saw-tooth" pattern of loss-based AIMD protocols, such as NewReno. Also notice that, since TCP Vegas is not designed for networks with high bandwidth-delay products, it takes some time until it fully utilizes the link (approximately at t = 130 s).

FAST TCP, which can be considered a high-speed descendant of Vegas, is proposed in [69]. Even though FAST TCP builds upon the principles of Vegas, it increases the congestion window more aggressively to achieve good efficiency in high-speed networks. FAST TCP updates its congestion window according to the equation:

cwnd ← min{2 × cwnd, (1 − γ)cwnd + γ((baseRTT/RTT)cwnd + α)}

where γ is an algorithm-specific parameter, baseRTT is the minimum RTT measured so far, RTT is the average RTT, and α is the number of packets that should be queued in the routers along the network path. In essence, FAST TCP converges quickly to near the maximum window, and then, as the RTT increases, converges more slowly to the target


20 CHAPTER 2. RELATED WORK

window. Moreover, FAST TCP implements a rate pacing technique to solve the burstiness effects observed with very large congestion windows. The authors claim that FAST TCP can achieve excellent link utilization in high-speed networks, converges to fairness quickly, and is stable. Nevertheless, it has been observed that FAST TCP loses its fairness properties in some scenarios [49, 44]. FAST can also be problematic when there are many flows competing for bandwidth. Since each flow aims to queue α packets in the bottleneck router, the buffer may not be sufficient to hold all necessary n × α packets as n increases.
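The update rule above can be written directly in code. This is a sketch of the equation only; the γ = 0.5 value is illustrative, and α = 100 packets matches the standard NS-2 FAST TCP implementation.

```python
def fast_tcp_update(cwnd, base_rtt, avg_rtt, alpha=100, gamma=0.5):
    """One FAST TCP window update.

    base_rtt: minimum RTT measured so far; avg_rtt: average RTT;
    alpha: packets the flow tries to keep queued along the path;
    gamma: smoothing parameter in (0, 1].
    """
    target = (base_rtt / avg_rtt) * cwnd + alpha
    # Move a fraction gamma toward the target, never more than doubling.
    return min(2 * cwnd, (1 - gamma) * cwnd + gamma * target)
```

At equilibrium the window satisfies cwnd = (baseRTT/RTT)·cwnd + α, i.e., exactly α packets are queued: with baseRTT = 100 ms and RTT = 110 ms, the fixed point is cwnd = α·RTT/(RTT − baseRTT) = 1100 packets.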

Sync-TCP [72] is a novel delay-based congestion control algorithm, exploiting the effects of network synchronization. The key idea is to keep the network congested long enough that all flows can receive the congestion signal. Therefore, after congestion is detected, Sync-TCP sources will keep the congestion window unchanged for a short period of time, allowing all sources to coordinate their behavior. Sync-TCP aims to address the issues of high-speed network efficiency, accurate RTT measurements, and RTT fairness.

Hybrid-based algorithms

Hybrid-based TCP congestion control algorithms have become an active topic of research in the last decade. Their mixed approach promises increased link efficiency, while retaining some of the fairness and network friendliness properties of pure delay-based proposals. Moreover, their loss-based component allows hybrid-based algorithms to remain aggressive enough while competing for bandwidth with pure loss-based algorithms.

TCP-Illinois [45] follows an AIMD algorithm, but uses delay estimates to set the congestion window increase and decrease parameters. If TCP-Illinois does not detect queuing delay (i.e., network congestion), the increase parameter is set to its maximum value, making the congestion window grow quickly. As the queuing delay starts to build up, the increase parameter is then gradually decreased, making the congestion window grow more slowly. On packet loss, the congestion window is updated according to the equation:

cwnd← (1− β)cwnd

If the current RTT is close to the maximum RTT measured so far, TCP-Illinois infers that network congestion caused the loss, and β ≈ 1/2. If the RTT is small, β will have a smaller value, meaning that the packet loss was probably due to a link error. Nevertheless, some concerns have been raised regarding TCP-Illinois’ scalability in networks with very high bandwidth-delay products, due to its fixed maximum window increase parameter (10 packets per RTT), and in the presence of reverse path traffic [39]. In Figure 2.5, we show an example of the congestion window for TCP-Illinois. We observe that the initial higher growth rate, followed by the slower growth rate (due to the increase in latency), creates the characteristic concave growth curve of TCP-Illinois. The simulation parameters are described in table 2.1.
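The delay-to-β mapping can be sketched as below. Note that TCP-Illinois' actual parameter functions are specific curves of the average queuing delay; the linear interpolation and the bounds used here are only an illustration of the idea.

```python
def illinois_beta(rtt, min_rtt, max_rtt, beta_min=0.125, beta_max=0.5):
    """Map the RTT observed at loss time to a decrease factor beta.

    rtt near max_rtt (full queues) => loss attributed to congestion,
    so beta approaches 1/2; rtt near min_rtt => loss attributed to a
    link error, so beta stays small.
    """
    if max_rtt <= min_rtt:
        return beta_max
    frac = (rtt - min_rtt) / (max_rtt - min_rtt)  # 0 = idle, 1 = full queue
    frac = min(max(frac, 0.0), 1.0)
    return beta_min + frac * (beta_max - beta_min)

def illinois_on_loss(cwnd, rtt, min_rtt, max_rtt):
    """Apply cwnd <- (1 - beta) * cwnd on a packet loss."""
    return (1 - illinois_beta(rtt, min_rtt, max_rtt)) * cwnd
```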

Compound TCP is proposed in [63] to improve link efficiency and RTT fairness. It maintains two auxiliary congestion control windows: a traditional loss-based congestion window that


Figure 2.5: TCP-Illinois’ congestion window

follows Reno’s AIMD behavior, and a scalable delay-based window inspired by Vegas. The resulting congestion window is the sum of the loss-based and the delay-based windows. If the network link is under-utilized, the delay-based component will quickly increase the congestion window to use the available bandwidth, using a behavior similar to [19]. When congestion starts to build up, the delay-based component will reduce the window. Under heavy network congestion, Compound TCP reverts to Reno behavior. Compound TCP’s authors claim that the delay-based component allows it to achieve good link efficiency and RTT fairness. Moreover, since the throughput is lower bounded by the loss-based component, it solves Vegas’ unfairness issues with loss-based schemes. It has been observed, however, that Compound TCP can revert to a Reno-like scaling behavior, even with light reverse traffic [39], and can suffer from fairness and scalability issues in links with very high bandwidth-delay products [70]. Even though it is disabled by default, Compound TCP is present in Windows Server 2008, Windows Vista, and Windows 7.
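The core of the scheme can be sketched as follows. The window-combination rule (effective window = loss window + delay window) is from the text; the delay-window update shown is a simplified Vegas-style step, not Compound TCP's published polynomial rule, and the γ and ζ values are illustrative.

```python
def compound_window(loss_wnd, delay_wnd):
    """Compound TCP's effective window is the sum of its two components."""
    return loss_wnd + delay_wnd

def compound_delay_step(delay_wnd, diff, gamma=30, zeta=0.5):
    """Simplified update of the delay-based component.

    diff: Vegas-style estimate of packets queued by this flow;
    gamma: queuing threshold; zeta: decrease factor on congestion.
    """
    if diff < gamma:
        # Link under-utilized: grow. (Compound TCP grows this component
        # polynomially; a linear step keeps the sketch short.)
        return delay_wnd + 1
    # Queuing detected: shrink toward zero, so the flow gradually
    # reverts to the behavior of its loss-based (Reno) component.
    return max(0.0, delay_wnd - zeta * diff)
```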

The proposal in [33] is similar to Compound TCP in the sense that it also maintains a delay-based and a loss-based window to achieve good link efficiency. The approach in [6] uses a Vegas-inspired, delay-based, MIMD mechanism while in high-speed mode. In [36], the algorithm switches between Reno and the delay-based HighSpeed TCP according to the measured delay. The mechanism described in [62] aims to improve RTT fairness, and friendliness towards Reno, using delay measurements to control Reno’s AIMD parameters. The authors of [46] propose an algorithm that combines the delay measures from [62] and the rate estimates from [12] to create a high-speed, RTT-fair, and TCP-friendly protocol. TCP-LP [37], a low-priority TCP variant, introduces a novel one-way delay estimation technique. The proposal in [24], while not aiming for high-speed link efficiency, combines ideas from Reno and Vegas to


improve Reno’s behavior in heterogeneous networks.

Summary

In this chapter, we have introduced the three major approaches to Internet end-to-end congestion control. The loss-based approach is the most common form of congestion control in the current Internet. It uses packet loss to control its congestion window. However, it can be overly aggressive, since it actively causes congestion in the network. The delay-based approach aims to solve this issue by proactively avoiding congestion, using delay information in the form of round-trip time estimates. However, when it competes with loss-based proposals in the same link, it can receive a much smaller bandwidth portion than loss-based sources. The hybrid-based approach utilizes both packet loss and delay information, leading to proposals that are less aggressive to the network, while improving on the fairness issues faced by pure delay-based approaches.


Chapter 3

Proposed mechanisms

In this chapter, we discuss the design challenges and considerations faced by delay-based and hybrid-based congestion control proposals. In addition, we present the supporting procedures and mechanisms devised during the course of our work, to be used within our proposals. Finally, we present two new congestion control mechanisms, detailing their rationale, implementation issues, and expected results in terms of performance.

3.1 Introduction

In this work we explore current, and introduce new, delay-based techniques with the aim of improving existing loss-based, end-to-end congestion control algorithms. We believe that such techniques can bring significant performance advantages in numerous network scenarios.

After reviewing the state of the art, several loss-based candidate algorithms were considered. One such choice is H-TCP [41], which introduces an RTT-independent window increase rule, and has predictable, window-dependent congestion epoch durations. This last characteristic makes H-TCP an interesting choice to mitigate the well-known delay-based versus loss-based throughput allocation fairness issues. This is further discussed in [40]. Another interesting algorithm is CUBIC [26]. As described before, CUBIC also has an RTT-independent congestion window increase rule and, most importantly, is a very popular algorithm that has seen extensive real-world testing. Due to its popularity, and also to the fact that no research has, to date, been carried out in exploring the application of delay-based techniques to this algorithm, we chose CUBIC as a testbed to explore our ideas. Nonetheless, H-TCP remains an interesting topic for future work.

Accordingly, we propose two new end-to-end congestion control algorithms: Max-delay CUBIC, a hybrid-based congestion control algorithm, and Min-delay CUBIC, a pure delay-based congestion control algorithm. Note that, like most delay-based algorithms, Min-delay CUBIC will also decrease its congestion window when a packet drop is detected. Both algorithms build upon CUBIC, with the purpose of improving CUBIC’s performance in the metrics of fairness and network friendliness, while maintaining very good efficiency. A


key aspect of both proposals is their ease of implementation and deployment in the current Internet. Recall that a very attractive characteristic of the end-to-end approach is that only the sender side must be upgraded. Therefore, both algorithms maintain backwards compatibility with all TCP stacks currently deployed in the Internet. Additionally, we also present the design challenges faced by delay-based congestion control algorithms and how they can be tackled.

3.2 Design challenges and considerations

Delay-based congestion control algorithms, while theoretically a very attractive proposition, face additional design and implementation hurdles when compared with their loss-based counterparts. Thus, care must be taken when designing an algorithm that uses delay information.

Enabling competing flows to consistently detect congestion

In a network with many competing flows, and therefore significant statistical multiplexing, there may be low correlation between the actual queuing delay and the queuing delay measured by a flow [55, 8, 60]. Thus, at a given instant, some flows may detect that the network is congested while others may not, resulting in several flows decreasing their congestion window and others increasing it. In other words, not all competing flows may detect the same queue dynamics at the bottleneck router. This results in reduced fairness, as flows are not seeing the network in a consistent state. Some authors attribute this to sampling issues [55, 47], and to the bursty nature of window-based congestion control algorithms, such as TCP [71].

To gain insight on the sampling problem, consider a network with many competing flows. Recall that, usually, a delay sample consists of measuring the round-trip time of a given packet. Since the congestion window of each competing flow can become quite small, and therefore there are few packets in transit, the number of delay samples gathered by each flow may not be sufficient to detect queueing delay build-up (this is fundamentally an aliasing phenomenon). The issue is made worse by TCP versions that only measure queue delay at the end of an RTT, such as [32, 63].

One key insight to deal with this problem is given in a recent proposal [72]. Its authors argue that flows, upon detecting congestion in the network, should not immediately reduce their congestion window. Instead, they should wait for a pre-determined amount of time, and only then reduce the congestion window. The effect of this seemingly simple idea in the scenario above is that, while a given flow is freezing its congestion window, thus keeping the network congested, the remaining flows will also detect that congestion is building up. Therefore, most flows will detect congestion and will reduce their congestion window. In essence, flows keep the network congested long enough so that competing flows can have a consistent view of the network. Further, in hybrid-based congestion control algorithms, it


Figure 3.1: Packet Arrival Time of Two Competing Flows [71]

is generally considered good practice to switch to loss-based mode when the congestion window is below a certain threshold. This way, a flow will activate delay-based mode only when a sufficient number of RTT samples is available to detect queue delay build-up.

Additionally, recall that TCP, being a window-based congestion control algorithm, will typically send data in bursts. This behavior promotes the aggregation of packets from the same flow in the queues of bottleneck routers. In figure 3.1, the arrival of packets of two competing flows with the same base round-trip time is illustrated. Figure 3.1 (a) shows that, without pacing, the two flows will have different views of the queue dynamics at the bottleneck router. As discussed earlier, this behavior will ultimately cause unfairness between flows.

This problem can be mitigated with the introduction of pacing [71, 2]. Pacing promotes flow synchronization, and even though synchronization can have undesirable effects on legacy TCP, it improves fairness in high-speed TCP variants and reduces data transmission bursts [68]. These are two very important characteristics when developing a high-speed TCP protocol. Therefore, pacing can play a major role in improving fairness and enabling consistent round-trip estimation, which is crucial in delay-based algorithms. In figure 3.1 (b), it is visible that, with pacing, the packets of the two competing flows become less clustered. This is important because it enables both flows to have a consistent view of the network. Nonetheless, the only two high-speed TCP variants implementing pacing are FAST TCP and Sync-TCP.
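A minimal pacing rule simply spreads a window's packets evenly over one RTT, instead of emitting them back to back:

```python
def pacing_schedule(cwnd, rtt, start=0.0):
    """Return the send times for one paced window.

    Each of the cwnd packets is delayed by rtt / cwnd relative to the
    previous one, so the window is spread over a full RTT rather than
    sent as a single burst.
    """
    interval = rtt / cwnd
    return [start + i * interval for i in range(cwnd)]
```

With cwnd = 4 and rtt = 100 ms, packets leave at 0, 25, 50, and 75 ms.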

Correctly estimating the round-trip propagation delay

One major challenge, faced by virtually all delay-based proposals, is how to determine the baseline propagation delay of each flow. Since all flows depend on the value of the base round-trip time for their operation, incorrectly estimating this value will cause unfairness.


Figure 3.2: Throughput of 5 FAST TCP flows (BW 250 Mbps, RTT [25, 160] ms, buffer 100% BDP)

This is a very common issue faced by delay-based proposals such as Vegas and FAST TCP. As Vegas-based congestion control algorithms strive to keep a number of packets queued at the bottleneck router, later flows entering the network will measure a higher base round-trip time, causing them to acquire a bigger portion of the available bandwidth, i.e., causing significant unfairness. We illustrate this problem in figure 3.2. Notice that, compared to earlier flows, later flows have a higher throughput rate.

Later proposals, such as [72] and [15], introduce queue draining techniques, so all flows can eventually detect the correct base round-trip time. In [72], a waiting period after decreasing the congestion window is proposed. The aim of such a waiting period is to give all flows the chance to estimate their propagation delay. The authors argue that if no such waiting period is introduced, i.e., if a flow starts increasing its congestion window immediately after decreasing it, the effectiveness of the queue draining technique can be affected, as some flows may still be decreasing their congestion windows while others may already be injecting new packets into the network.

Regardless of the design decisions of the queue draining technique, a seemingly unavoidable consequence is reduced link efficiency, as flows must decrease their congestion windows more aggressively to empty the queues. Moreover, it is not guaranteed that these techniques will work when many competing flows, or even competing protocols, coexist in the link, as is the case of UDP-based traffic such as VoIP and on-line gaming. Nonetheless, such techniques remain useful and can serve as a component of any delay-based algorithm that aims to be very network friendly, and is willing to sacrifice some efficiency to achieve it.

Additionally, we propose a new approach to this problem. We believe that using the maximum measured RTT, as opposed to the minimum RTT, has a number of interesting advantages.


Firstly, the maximum RTT is considerably easier to measure by newer flows, as it does not require any kind of queue draining mechanism. Secondly, it is indifferent to the presence of cross-traffic, such as UDP. Finally, it allows the algorithm to keep packets queued at all times in the bottleneck router, allowing for 100% link efficiency. A potential drawback of this approach is an initial short-term unfairness transient period, caused by a necessary link probing phase. However, this short-term unfairness can, in some scenarios, be a desirable characteristic, as new flows are given temporarily increased priority. This will, for instance, speed up short-lived flows, such as those found in Web traffic. Moreover, note that the opposite problem exists with the original CUBIC protocol: in some scenarios, long-lived flows can prevent short-lived flows from achieving a fair throughput rate for an extended period of time. Some authors consider this behavior of CUBIC very undesirable, comparable to a form of denial of service, since the majority of real-world TCP flows are short to medium sized, and therefore a single long-lived flow can affect the performance of a large number of users [42]. Nonetheless, this short-term unfair transient period can be mitigated with the use of several mechanisms, as discussed in sections 3.4.2 and 3.4.3.
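The contrast between the two baselines can be seen in a small sketch. A late-arriving flow never observes an empty queue, so its minimum-RTT estimate is biased upward, while the maximum RTT only requires the queue to fill once, which any flow can provoke by probing the link. The sample values below are illustrative.

```python
class RttTracker:
    """Track both candidate baselines from a flow's RTT samples."""
    def __init__(self):
        self.base_rtt = float("inf")   # minimum RTT seen so far
        self.max_rtt = 0.0             # maximum RTT seen so far

    def sample(self, rtt):
        self.base_rtt = min(self.base_rtt, rtt)
        self.max_rtt = max(self.max_rtt, rtt)

# An early flow sees the empty queue (25 ms); a late flow never does.
early, late = RttTracker(), RttTracker()
for r in (0.025, 0.120, 0.160):
    early.sample(r)
for r in (0.110, 0.140, 0.160):
    late.sample(r)
# The two flows disagree on base_rtt, but agree on max_rtt.
```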

Scalability

Traditionally, delay-based algorithms aim to keep a number of packets in the queue of the bottleneck router. This behavior has two interesting properties. First, it significantly improves RTT fairness, because a constant number of packets is queued, independently of the flow’s RTT. Second, it can maintain good link efficiency at all times, provided a high-speed variant is used. Such an approach comes at a price, however. This procedure scales linearly with the number of flows, and as the number of flows increases, so will the queue occupancy and delay. High-speed Vegas variants, such as FAST TCP, keep a large number of packets queued per flow (100 packets in the standard NS-2 FAST TCP implementation) in order to achieve good efficiency. An unwanted side effect is that, even with a small number of flows, the router queue may overflow. Thus, significant network instability and unfairness may ensue. Note that, to some extent, this issue also affects many hybrid-based congestion control algorithms, such as [36, 6, 63, 33], as their delay-based component is usually Vegas-based.

To illustrate this issue, in figure 3.3, we show the throughput of 32 competing FAST TCP flows, in a network with 250 Mb/s of link capacity. The bottleneck router queue is sized to 100% of the BDP. The MSS is set to 1460 bytes, and the RTT is uniformly distributed between 25 ms and 160 ms. Additionally, in figure 3.4, we show the state of the queue at the bottleneck router, where it can be seen that each new flow contributes additional queue occupancy and delay. Finally, in figure 3.5, we show the packet drops caused by the competing FAST TCP flows. At t = 120 s, with only 24 flows, the router queue overflows and packets start being dropped, reducing efficiency. Moreover, throughput instability and unfairness persist, even after the transient period. Note, however, that the scalability of FAST TCP is essentially tied to the size of the router queues and its α parameter, and not to a particular number of concurrent flows.
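Quick arithmetic, using the simulation parameters above, shows why the queue is on the edge of overflowing: the α packets that each flow tries to keep queued consume, on their own, almost the entire buffer, leaving little headroom for transients.

```python
# Buffer sized to 100% of the BDP, computed with the largest RTT (160 ms)
link_bps = 250e6        # 250 Mb/s bottleneck
rtt_s = 0.160           # 160 ms
mss_bytes = 1460

buffer_pkts = link_bps * rtt_s / 8 / mss_bytes   # ~3425 packets

flows = 32
alpha = 100             # packets each FAST TCP flow keeps queued
demand_pkts = flows * alpha                      # 3200 packets

headroom = buffer_pkts - demand_pkts             # ~225 packets left over
```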

To work around this issue, a novel, scalable mechanism is proposed in [40]. The key insight of


Figure 3.3: Throughput of 32 FAST TCP flows (BW 250 Mbps, RTT [25, 160] ms, buffer 100% BDP)

this technique consists in using a generic AIMD algorithm (e.g., TCP NewReno), but upon reaching a pre-determined delay threshold, the algorithm reduces the congestion window. This method greatly increases scalability, but re-introduces the RTT unfairness problems of traditional AIMD-based algorithms. Moreover, the algorithm must use a queue draining technique to estimate its propagation delay, and therefore cannot achieve full link efficiency.

RTT fairness

It is a well-known fact that TCP’s congestion window increase rule, by increasing the congestion window on each received ACK, introduces poor RTT fairness by design, as discussed in section 1.1. Some recent proposals aim to improve this aspect of TCP’s design. In loss-based algorithms, and in AIMD delay-based algorithms, RTT fairness can be greatly improved by adopting a time-dependent congestion window increase rule, as is the case with H-TCP and CUBIC, both loss-based proposals. In fact, this was one important factor in the choice of CUBIC as our base algorithm. Essentially, this technique makes a flow’s congestion window growth independent of its RTT. Therefore, flows with different RTTs will converge to the same congestion window. Even though this is a considerable improvement over NewReno, note that a flow’s throughput is given approximately by Throughput = cwnd/RTT. Thus, RTT fairness is not completely achieved, since flows with shorter RTTs will still attain higher throughput rates than flows with longer RTTs.
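The residual unfairness follows directly from Throughput ≈ cwnd/RTT: even after two flows converge to the same congestion window, their rates differ by the ratio of their RTTs. For example, with 25 ms and 160 ms RTTs:

```python
def throughput(cwnd, rtt):
    """Approximate flow throughput in packets per second."""
    return cwnd / rtt

# Two flows that converged to the SAME window of 1000 packets:
short = throughput(1000, 0.025)   # 25 ms RTT
long_ = throughput(1000, 0.160)   # 160 ms RTT
ratio = short / long_             # 160/25 = 6.4x advantage for the short RTT
```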

We propose an additional strategy to tackle this problem, which is described in section 3.3.


Figure 3.4: Queue of the bottleneck router with the dynamics of 32 FAST TCP flows

Figure 3.5: Packets dropped at the router with 32 FAST TCP flows


Adopted mechanisms

To summarize, we propose the adoption of the following techniques:

• A pacing mechanism to reduce packet burstiness, improve flow synchronization, and improve fairness;

• A simple delay-based synchronization mechanism, inspired by [72], to improve fairness;

• For Min-delay CUBIC, a queue draining technique, inspired by [40] and [72], to estimate the correct propagation delay;

• For Max-delay CUBIC, a novel technique where the maximum RTT is used, removing the need to estimate the propagation delay;

• A threshold-based congestion window decrease rule, in order to improve scalability;

• An empirically obtained, RTT-dependent congestion window decrease rule, in order to improve RTT fairness.

3.3 Supporting procedures and mechanisms

In addition to identifying several design challenges and considerations that our proposals must take into account, we have also devised and explored a number of procedures that seek to improve specific performance aspects of the proposed algorithms, which are discussed below. Note, however, that these procedures are orthogonal to the algorithms themselves. Several of the proposed mechanisms could be adapted to other algorithms with some effort. Other mechanisms, such as limited slow-start, can be adapted to any end-to-end congestion control algorithm.

Starvation detection

Starvation detection enables a CUBIC delay-based flow to detect that it is competing with non-cooperative flows (e.g., loss-based flows), allowing it to compete more fairly for bandwidth. After extensive simulations, we observed that, in steady state, a Max-delay CUBIC flow follows a very predictable congestion window increase and decrease pattern. After a flow decreases its congestion window, queuing delay will also predictably decrease. Shortly after the window decrease event, it is expected that the flow will increase its congestion window. However, if queuing delay remains high, the flow will reduce its congestion window twice. This is a strong indicator that something in the network is not responding to queuing delay signals, usually a sign of the presence of loss-based flows.

Using this information, during delay-based mode, a flow may monitor its congestion window increase and decrease behavior. If a flow detects that it has reduced its congestion window


twice, without increasing it at least once in between, it will enter loss-based mode for a given period of time, as this is a very strong indicator of the presence of loss-based flows in the network. After this time, the flow reverts back to delay-based mode. Note that, in order to prevent false positives and unnecessary flow aggressiveness, a flow will only activate this mechanism after it has detected that it is in steady state. To detect whether a flow is in steady state, we draw inspiration from the “fast convergence” procedure described in [26]. Steady state can be easily confirmed by observing two variables present in the original CUBIC protocol: the current congestion window, and the congestion window at the end of the last epoch (i.e., when it reached its maximum value). If the current congestion window is larger than, or equal to, the previous epoch’s maximum congestion window, then the flow has reached steady state. This is further illustrated in the Starvation detection algorithm.

Starvation detection

Initialization:
  increasedCwnd ← true
  stablePhase ← false

For each ACK:
  if cwnd > prevMaxCwnd and stablePhase = false then
    /* we have reached steady state */
    stablePhase ← true
  end if

When increasing cwnd:
  increasedCwnd ← true

When decreasing cwnd:
  if increasedCwnd = false and stablePhase = true then
    /* starvation detected - do not decrease cwnd, and switch to loss-based mode */
    mode ← lossBased
    stablePhase ← false
  else
    /* decrease cwnd normally */
    increasedCwnd ← false
  end if
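The pseudocode above translates directly to an executable form. The sketch below tracks only the mode flag and the decrease-twice-without-increase pattern, leaving the actual window arithmetic to the surrounding CUBIC code.

```python
class StarvationDetector:
    """Executable sketch of the starvation detection procedure."""

    def __init__(self):
        self.increased_cwnd = True
        self.stable_phase = False
        self.mode = "delayBased"

    def on_ack(self, cwnd, prev_max_cwnd):
        # Steady state: the window caught up with the last epoch's maximum.
        if cwnd > prev_max_cwnd and not self.stable_phase:
            self.stable_phase = True

    def on_increase(self):
        self.increased_cwnd = True

    def on_decrease(self):
        """Return True if cwnd should be decreased normally."""
        if not self.increased_cwnd and self.stable_phase:
            # Two decreases with no increase in between: starvation.
            self.mode = "lossBased"
            self.stable_phase = False
            return False
        self.increased_cwnd = False
        return True
```

After the loss-based period expires, the flow would reset the flags and return to delay-based mode (not shown here).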

RTT fairness improvement

With the aim of further improving on the well-known RTT fairness challenge discussed in section 3.2, we propose a simple procedure. Essentially, the procedure consists of an RTT-based normalization function that enables improved RTT fairness among competing flows. The key insight stems from the observation that a flow’s throughput is given approximately by Throughput = cwnd/RTT. Therefore, if flows with longer RTTs converge to a larger congestion window than flows with shorter RTTs, RTT fairness can be improved. Note that this procedure can only be used with the max-delay approach, as it conflicts with the queue draining mechanism needed for the min-delay approach.


After considering possible strategies to tackle this issue, we devised two tentative solutions. The first approach consists in changing the congestion window increase rule, so that flows with shorter RTTs increase their congestion window more slowly, and flows with longer RTTs increase their congestion window faster. The second approach looks at the problem from the opposite angle: instead of trying to manipulate the window during the increase phase, we do it during the decrease phase, by simply manipulating the value of the decrease parameter, β. By default, CUBIC uses a β parameter with a value of 0.2, meaning that the congestion window is decreased by 20% on each congestion event. Our goal consists of dynamically manipulating this value. Essentially, in order to improve fairness, flows with longer RTTs shall have a smaller decrease factor, and flows with shorter RTTs shall have a larger decrease factor.

We pursue the second approach. The main reason is that the first approach implies changing the core CUBIC protocol, i.e., its congestion window increase rule. By doing this, we would effectively create a different algorithm that would cease to be CUBIC. By choosing the second approach, the CUBIC algorithm remains unchanged, as only the decrease parameter is adjusted.

Note that this procedure was obtained empirically. We begin by defining a dynamic range of sensible RTT values in which our procedure operates. We chose to study the range between 25 ms and 160 ms of base round-trip propagation delay, as we believe it describes the type of RTTs found in many networks with high bandwidth-delay products. The reasons that lead us to believe these values are adequate for these types of networks are as follows. First, 160 ms is the value chosen in the FAST TCP website [18] to characterize a high-speed, high-latency, transatlantic link. Further, we adopt the Hamilton Institute TCP benchmark suite [7], used to perform simulation studies of congestion control algorithms in networks with high bandwidth-delay products. This suite defines 25 ms and 160 ms as a low RTT and a high RTT, respectively. Therefore, we chose to keep the existing values to perform our studies.

After numerous experiments with several parameters and extensive simulations, we arrived at an equation that, according to our tests, best improves the observed RTT unfairness. The equation is as follows.

adjust(x) = 1− x2

x2 + 10

Recall that, in order to improve RTT fairness, the decrease parameter, β, must be dynamically calculated according to the RTT. Therefore, we calculate β as follows.

β = adjust(scale(γ)),

where γ is the maximum round-trip time measured so far by each flow. Note that γ = QdelayMax + baseRTT, where QdelayMax is the maximum queueing delay of the bottleneck router. Assuming the router's buffers are provisioned according to the "well-known" buffer size estimation rule of thumb [64], then QdelayMax = 160 ms, since the buffer size is set to 100% of the BDP, with D = 160 ms. baseRTT ∈ [25, 160] ms is the baseline round-trip propagation delay of flows. As such, γmin = 185 ms and γmax = 320 ms. β is constrained between 0.15 and 0.40, for the reasons explained below. scale(...) is an auxiliary function that converts the γ parameter into a meaningful value for the adjust(...) function. Essentially, it scales the γ parameter so that the resulting value, β = adjust(scale(γ)), is contained within the designed range, [0.15, 0.40].

Figure 3.6: RTT fairness improvement curve (β as a function of the propagation RTT, in ms)

Let x1 and x2 be defined by adjust(x1) = βmax and adjust(x2) = βmin. Then

scale(γ) = (x2 − x1) / (γmax − γmin) · (γ − γmin) + x1,

where βmax is the desired maximum value of β, and βmin is the desired minimum value of β. Since the mechanism dynamically adjusts the value of β between 0.15 and 0.40, according to γ, flows with shorter RTTs can become less aggressive than the original CUBIC protocol (β = 0.4). Conversely, flows with considerably longer RTTs can become slightly more aggressive than the original CUBIC protocol (β = 0.15). According to our observations, this range of values keeps the protocol aggressive enough to maintain good link efficiency, in the case of flows with shorter RTTs, while at the same time not making the protocol overly aggressive towards the network in the case of flows with longer RTTs.
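The full computation can be sketched in Python (a minimal illustration using the constants above; the function and variable names are ours, and x1, x2 are obtained by inverting adjust(...) analytically rather than taken from the text):

```python
import math

# Constants from the text: gamma in [185, 320] ms, beta in [0.15, 0.40].
GAMMA_MIN, GAMMA_MAX = 185.0, 320.0
BETA_MIN, BETA_MAX = 0.15, 0.40

def adjust(x):
    """RTT fairness improvement equation: adjust(x) = 1 - x^2 / (x^2 + 10)."""
    return 1.0 - x * x / (x * x + 10.0)

def inverse_adjust(beta):
    """Solve adjust(x) = beta for x >= 0; used to derive x1 and x2."""
    return math.sqrt(10.0 * (1.0 - beta) / beta)

# x1, x2 chosen so that adjust(x1) = BETA_MAX and adjust(x2) = BETA_MIN.
X1 = inverse_adjust(BETA_MAX)
X2 = inverse_adjust(BETA_MIN)

def scale(gamma):
    """Linearly map gamma in [GAMMA_MIN, GAMMA_MAX] onto [X1, X2]."""
    return (X2 - X1) / (GAMMA_MAX - GAMMA_MIN) * (gamma - GAMMA_MIN) + X1

def beta(gamma):
    """Decrease parameter for a flow whose maximum measured RTT is gamma (ms)."""
    return adjust(scale(gamma))
```

A 25 ms flow (γ = 185 ms) obtains β = 0.40 and a 160 ms flow (γ = 320 ms) obtains β = 0.15, with β decreasing monotonically in between, as in figure 3.6.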

Limited slow-start

Limited slow-start is proposed in [20] in order to improve performance for TCP connections with large congestion windows. Its authors argue that a flow's congestion window, in networks with huge bandwidth-delay products, can be increased too aggressively during slow-start, by several thousand packets in one round-trip time. For this reason, several thousand packets can be lost from a window of data, resulting in reduced performance for the flow and increased strain on the network itself. Limited slow-start is essentially a gentler slow-start that aims to reduce the number of dropped packets during the slow-start procedure. Its main idea is to increase TCP's congestion window more slowly during the slow-start phase, thus improving performance by reducing dropped packets. We consider it a useful addition, especially in protocols trying to keep a 0% packet loss rate, as is the case of the Min-delay CUBIC approach, which is proposed in section 3.5.
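The damped growth rule can be sketched as follows (a Python sketch based on our reading of the mechanism; the window is counted in segments with per-ACK accounting, and max_ssthresh with its default of 100 segments is an illustrative assumption, not a value from the text):

```python
def limited_slow_start_ack(cwnd, ssthresh, max_ssthresh=100.0):
    """Per-ACK slow-start growth, damped above max_ssthresh (sketch).

    Below max_ssthresh, the window grows by one segment per ACK, i.e., it
    doubles every RTT as in standard slow-start. Above it, growth is capped
    at roughly max_ssthresh/2 segments per RTT, so a huge window cannot
    overshoot by thousands of packets in a single round trip.
    """
    if cwnd >= ssthresh:
        return cwnd                  # slow-start over; congestion avoidance
    if cwnd <= max_ssthresh:
        return cwnd + 1.0            # classic exponential growth
    k = cwnd / (0.5 * max_ssthresh)  # damping factor grows with the window
    return cwnd + 1.0 / k
```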

Loss-based mode for small congestion windows

To address the sampling concerns that may affect delay-based congestion control algorithms, as discussed in section 3.2, we propose the usage of a mechanism that enables an algorithm to switch to loss-based mode after its congestion window has fallen below a certain threshold. We adopt the value of the low_window parameter of HighSpeed TCP. This means that a flow will behave like the original CUBIC protocol whenever its congestion window is smaller than 40 segments. An additional benefit of this mechanism is that, by keeping a minimum window threshold, flows can avoid total throughput starvation by competing loss-based flows, as delay signals are ignored when the congestion window falls below low_window.

Note, however, that when a large number of competing flows coexist in a link, the value of the congestion window eventually falls below low_window for all flows. Under these conditions, flows control their congestion windows using the loss-based CUBIC algorithm. We enable this mechanism for all max-delay variants, with the exception of Max-delay CUBIC with NewReno, as this variant only uses delay information to improve efficiency and is fundamentally a loss-based algorithm. Further, we do not enable this mechanism for the min-delay variant, because we intend it to be purely delay-based.
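The switch itself reduces to a simple guard on the congestion window (Python sketch; the function name is ours, the 40-segment value is HighSpeed TCP's low_window):

```python
LOW_WINDOW = 40  # segments; HighSpeed TCP's low_window parameter

def congestion_mode(cwnd):
    """Pick the control mode for the next ACK: below LOW_WINDOW, delay
    samples are too sparse to be reliable, so fall back to loss-based CUBIC."""
    return "delay-based" if cwnd >= LOW_WINDOW else "loss-based"
```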

3.4 Max-delay CUBIC

3.4.1 The core algorithm

Rationale

The max-delay approach consists of a novel delay-based technique, where the maximum measured RTT value is used as a threshold to coordinate the algorithm's behavior. This contrasts with the traditional method of using the minimum measured RTT, making it an interesting approach. We argue that the maximum RTT value is easier to obtain than the minimum RTT, as discussed earlier. Another interesting characteristic is that the algorithm can maintain packets queued at the bottleneck router at all times, thus enabling high link utilisation.


The main insight behind this approach is to make CUBIC react to earlier signs of congestion than packet loss (i.e., increases in router queue occupancy) and thus avoid overflows in router queues. However, in order to measure the RTT at which a drop occurs, the algorithm must initially probe the network for the maximum RTT. It achieves this by keeping the standard, loss-based, slow-start mechanism, similarly to Microsoft's Compound TCP [63], combined with an additional transient period where it remains in loss-based mode. After the initial loss-based mode, the algorithm changes to delay-based mode. While in delay-based mode, the algorithm adjusts the congestion window according to the rules described in the Max-delay CUBIC algorithm. δ is set to 500 ms, according to the recommendations in [72].

Max-delay CUBIC algorithm

Initialization:
    δ ← 500    /* Waiting period, in milliseconds */
    reduce ← false    /* If true, then reduce cwnd */
    threshold ← 100    /* Delay threshold, in milliseconds */

For each ACK:
    if now > waitTime then
        if reduce then
            cwnd ← β · cwnd    /* β is the decrease parameter */
            ssthresh ← cwnd
            waitTime ← now + δ
            reduce ← false
        else if rtt > maxRTT − threshold then
            reduce ← true
            waitTime ← now + δ
        else
            /* increase congestion window using CUBIC rule */
        end if
    else
        /* leave cwnd untouched */
    end if

On packet loss:
    cwnd ← β · cwnd    /* β is the decrease parameter */
    ssthresh ← cwnd

For each acknowledgement packet, Max-delay CUBIC inspects the value of the current RTT. If the value of the RTT is above the defined threshold, Max-delay CUBIC will decrease its congestion window, albeit not immediately. Instead, it holds its current congestion window value for δ ms, following the guidelines discussed in section 3.2. After this time, Max-delay CUBIC finally decreases its congestion window. After decreasing the congestion window, it waits an additional δ ms before it starts inspecting the RTT again. This is done in order to allow the routers to reduce their queues and the queue-induced delay. Apart from the additional delay-based control component, Max-delay CUBIC's behavior is identical to CUBIC's.

In essence, the algorithm maintains CUBIC's window dynamics, albeit reacting earlier to signs of congestion in order to prevent packet loss and reduce network delay.
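The pseudocode can be expressed in runnable form as follows (a Python sketch under our own naming; the CUBIC growth rule is replaced by a NewReno-style placeholder, the maximum RTT is assumed to be known from the probing phase, and β = 0.8 is merely illustrative):

```python
class MaxDelayCubic:
    """Sketch of Max-delay CUBIC's delay-based control loop.

    Times are in seconds, the window in segments. `max_rtt` is assumed
    to have been learned during the initial loss-based probing phase.
    """
    DELTA = 0.5       # waiting period (500 ms)
    THRESHOLD = 0.1   # delay threshold (100 ms)
    BETA = 0.8        # illustrative decrease multiplier

    def __init__(self, max_rtt, cwnd=10.0):
        self.max_rtt = max_rtt
        self.cwnd = cwnd
        self.ssthresh = float("inf")
        self.wait_time = 0.0
        self.reduce = False

    def on_ack(self, rtt, now):
        self.max_rtt = max(self.max_rtt, rtt)
        if now <= self.wait_time:
            return                            # hold cwnd during the wait period
        if self.reduce:                       # scheduled reduction is due
            self.cwnd *= self.BETA
            self.ssthresh = self.cwnd
            self.wait_time = now + self.DELTA
            self.reduce = False
        elif rtt > self.max_rtt - self.THRESHOLD:
            self.reduce = True                # reduce only after DELTA seconds
            self.wait_time = now + self.DELTA
        else:
            self.cwnd += 1.0 / self.cwnd      # placeholder for the CUBIC rule

    def on_loss(self):
        self.cwnd *= self.BETA
        self.ssthresh = self.cwnd
```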

Impact of the threshold parameter

The threshold parameter is an absolute delay value that signals when a flow should decrease its congestion window. It represents the minimum distance that a flow should maintain from the maximum measured round-trip time. For instance, if the maximum measured RTT is 300 ms, and the threshold value is set to 100 ms, then a flow reduces its congestion window when the RTT value reaches 200 ms.

Its value should be chosen according to the algorithm's desired mode of operation. If a larger value is used, e.g., 100 ms, the queues remain smaller, improving the performance of delay-sensitive applications, such as VoIP and interactive remote terminal sessions. However, a larger threshold value degrades short-term fairness. When new flows enter the network and probe for the maximum RTT value, older flows, which are already in delay-based mode (assuming no short-term fairness improvement mechanism is being used), will react to network congestion sooner, and decrease their congestion windows before the new flows do. Conversely, if a smaller value is used, short-term fairness is improved, as the algorithm will mimic more closely the behavior of the original CUBIC algorithm, while still attaining considerably better long-term fairness than CUBIC. Some packet loss may occur if the chosen threshold is too small. Essentially, choosing the threshold value involves a compromise between short-term fairness, which may or may not be important, and network friendliness, i.e., queue occupancy and delay.

Implementation issues

Initially, the main idea of the Max-delay CUBIC algorithm consisted in using the estimated maximum queuing delay. That is, calculating the difference between the maximum and minimum measured RTTs, and reducing the congestion window when a percentage of the maximum queuing delay value was reached. Soon, it became apparent that such an approach would result in unnecessary effort, as finding the actual propagation delay requires a queue draining technique to work properly. Such an implementation of a queue draining technique would defeat the purpose of high link utilization, for the reasons explained before.

After this, we arrived at a threshold-based technique using only the maximum RTT value. The initial approach consisted of measuring the maximum RTT, and then reducing the congestion window after the current RTT reached a percentage of the maximum RTT, e.g., 90%. The key idea was to reduce the congestion window before the maximum RTT was reached, and therefore avoid packet loss. This approach works very well with flows with the same propagation delay. However, it has poor fairness performance with flows with heterogeneous RTTs. The key to understanding this issue lies in knowing that, if a percentage of the maximum RTT is used as a threshold value, flows with different propagation delays will estimate different target queueing delays. This will make flows react to different values of network queueing delay, causing unfairness, as flows reduce their congestion windows at different times.

To illustrate, consider the following example. In a network with a maximum round-trip propagation delay of 160 ms, router queues sized at 100% of the BDP (i.e., 160 ms of maximum queuing delay), and two competing flows, let γ1 = 320 ms be the maximum measured round-trip time of flow 1, with 160 ms of propagation delay, and γ2 = 185 ms be the maximum measured round-trip time of flow 2, with 25 ms of propagation delay. Therefore, the calculated thresholds for flows 1 and 2 would be T1 = 0.9 · γ1 = 288 ms and T2 = 0.9 · γ2 = 166.5 ms, respectively. This means that flow 1 will reduce its congestion window when Qdelay1 = QdelayMax − (γ1 − T1) = 128 ms, and flow 2 will reduce its congestion window when Qdelay2 = QdelayMax − (γ2 − T2) = 141.5 ms, where Qdelayi is the queue delay at which flow i will reduce its congestion window, and QdelayMax is the maximum queuing delay at the bottleneck router (i.e., 160 ms in this scenario). It becomes apparent that the flow with the longer RTT will reduce its congestion window earlier than the flow with the shorter RTT, as Qdelay1 < Qdelay2.

We understood that, in order to achieve fairness, flows must use the same target queuing delay, regardless of their RTT. However, it is not desirable to estimate the queuing delay using the minimum RTT, as this results in decreased efficiency. Therefore, the target threshold shall be an absolute value, subtracted from the maximum RTT, i.e., Ti = γi − TQD, where TQD is a global reference queue delay threshold.

To illustrate this, using the previous example, consider that we now choose 10 ms as the reference threshold value. This means that a flow will reduce its congestion window 10 ms before reaching the maximum measured RTT. Accordingly, T1 = γ1 − 10 ms = 310 ms, and T2 = γ2 − 10 ms = 175 ms. Therefore, Qdelay1 = QdelayMax − (γ1 − T1) = 150 ms and Qdelay2 = QdelayMax − (γ2 − T2) = 150 ms. Consequently, both flows will reduce their congestion windows when the queue delay exceeds 150 ms. This greatly improves fairness among competing flows with different RTTs, as Qdelay1 = Qdelay2.
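The arithmetic of these two threshold policies can be checked with a short script (Python; function names are ours):

```python
Q_DELAY_MAX = 160.0  # ms; maximum queueing delay of the bottleneck router

def qdelay_pct(gamma, pct):
    """Queueing delay at which a flow reduces cwnd, percentage threshold."""
    t = pct * gamma                       # T_i = pct * gamma_i
    return Q_DELAY_MAX - (gamma - t)      # Qdelay_i = QdelayMax - (gamma_i - T_i)

def qdelay_abs(gamma, tqd):
    """Queueing delay at which a flow reduces cwnd, absolute threshold."""
    t = gamma - tqd                       # T_i = gamma_i - TQD
    return Q_DELAY_MAX - (gamma - t)      # reduces to QdelayMax - TQD

# Percentage threshold (90%): flows react at different queueing delays.
unfair = (qdelay_pct(320.0, 0.9), qdelay_pct(185.0, 0.9))
# Absolute threshold (10 ms): both flows react at the same queueing delay.
fair = (qdelay_abs(320.0, 10.0), qdelay_abs(185.0, 10.0))
```

With the percentage threshold the two flows react at roughly 128 ms and 141.5 ms of queueing delay, whereas with the absolute threshold both react at 150 ms, independently of their propagation delay.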

Expected results

Max-delay CUBIC does not rely on the base round-trip time, which can be challenging to determine, as previously discussed in section 3.2, to coordinate its behavior. Instead, it uses the maximum round-trip time, eliminating the need to estimate the actual propagation delay of flows. This way, it does not need to use any queue draining mechanism, making the algorithm more robust in the presence of cross-traffic, such as UDP. Since the algorithm does not need to drain router queues, it can keep packets queued at the bottleneck router at all times, thus achieving very good link utilization. Additionally, after the initial network probing phase ends, it can maintain a 0% packet loss rate, and therefore be very network friendly, by avoiding packet drops and reducing delays. This is a worthwhile characteristic, as (overly) delayed and dropped packets must be retransmitted, needlessly increasing the load on the network. Finally, long-term fairness is significantly improved over CUBIC.

3.4.2 Max-delay CUBIC with NewReno

Rationale

A different approach, inspired by Compound TCP [63] and TCP-Africa [36], is to quickly ramp up the congestion window using an aggressive window increase rule (i.e., CUBIC's) while the latency remains below a certain delay threshold. Once the threshold is reached, the network is considered to be fully utilized (100%), and the algorithm changes its mode of operation to mimic NewReno. This dual mode of operation allows the algorithm to use the network efficiently, and to maintain backwards compatibility with legacy flows. This point is debatable, considering that several de facto standard variants are currently deployed in the Internet. The key difference of this approach, compared to similar proposals, is that, to our knowledge, this is the first hybrid algorithm that does not use a Vegas-inspired delay-based increase rule. It uses, instead, a delay-based AIMD rule (i.e., CUBIC's), independent of the number of flows, making it theoretically more scalable. Contrary to the other Max-delay CUBIC variants, this version does not require the use of a pacing mechanism, as it is essentially a loss-based proposal which only uses delay signals to assess whether the network is currently under-utilized. Therefore, it does not need the more accurate delay information provided by pacing.

Essentially, this proposal maintains compatibility with legacy TCP flows, while keeping very good network efficiency. The algorithm is presented as the Max-delay CUBIC with NewReno algorithm.

Max-delay CUBIC with NewReno algorithm

For each ACK:
    if rtt > maxRTT − threshold then
        cwnd ← cwnd + 1/cwnd    /* NewReno rule */
    else
        /* increase congestion window using CUBIC rule */
    end if

On packet loss:
    cwnd ← β · cwnd    /* β is the decrease parameter */
    ssthresh ← cwnd

Expected results

Like the standard Max-delay CUBIC algorithm, this algorithm achieves very good network efficiency. Moreover, an interesting characteristic is that, after the more aggressive delay-based phase ends, it switches its mode of operation to NewReno TCP, leading to a TCP-friendly behavior. Additionally, it eliminates the delay-based versus loss-based challenge of Max-delay CUBIC, without the need for additional mechanisms and procedures. This is because the algorithm immediately switches to (loss-based) legacy TCP whenever the network shows signs of increased delay, and therefore increased utilization. Finally, when paired with the round-trip time fairness improvement mechanism, it can show improved fairness behavior, even during the legacy TCP loss-based phase. Note, however, that this variant inherits the issues of TCP NewReno, such as slow convergence in links with high bandwidth-delay products, and RTT unfairness.

3.4.3 Max-delay CUBIC with improved short-term fairness

Rationale

This is yet another approach using the core max-delay algorithm. Its main benefit is improved short-term fairness during the period where new flows (i.e., flows in loss-based mode) enter and probe the network. Recall that, when choosing the Max-delay CUBIC delay threshold parameter, there is a compromise between improved short-term fairness and long-term aggressiveness. If a larger value is used, Max-delay CUBIC will keep queues smaller, thus reducing queue occupancy and latency in the network. This is achieved at the cost of reduced short-term fairness. If, instead, a smaller value is used, then Max-delay CUBIC has better short-term fairness, but it will keep larger queues at the routers.

This variant strives to reduce the short-term fairness issue when new flows enter the network, while, at the same time, enabling the use of a threshold value that keeps queues small and improves the performance of cross traffic, such as VoIP and Web traffic. In essence, this approach, while identical to Max-delay CUBIC, uses an additional mechanism that can detect non-cooperative (i.e., loss-based) flows entering the network and switch to loss-based mode.

When the router queue occupancy is at its maximum, and therefore the maximum RTT is seen by a flow, the flow will switch its operation to loss-based mode, using NewReno's congestion window increase rule. During loss-based mode, a more conservative congestion window increase rule is preferred over CUBIC's, because the more aggressive behavior of CUBIC would shorten congestion epochs. A congestion epoch is the period between two consecutive window reductions. Shorter congestion epochs impair coordination among flows, as the flows will remain longer in the loss-based phase, thus creating difficulties for the loss-to-delay mode transition mechanism. After switching to loss-based mode, the flow will immediately schedule its re-entry into delay-based mode, after a pre-determined amount of time. After this time, the flow will freeze its congestion window and then slowly reduce it until the measured RTT is below the window decrease threshold. This way, all flows can cooperatively schedule the re-activation of the delay-based mode. If, during this time, a flow detects that the maximum RTT has been reached again, indicating there are still uncooperative flows in the network, it will remain in loss-based mode and reschedule its entry into delay-based mode yet again. After the non-cooperative flows leave the network, all remaining flows will eventually change back to delay-based mode. The algorithm adjusts the congestion window according to the rules described in the Max-delay CUBIC algorithm with improved short-term fairness.

For each received acknowledgment, Max-delay CUBIC with improved short-term fairness checks if the current RTT is equal to or greater than the maximum RTT seen so far. If this condition is true, then the algorithm switches to loss-based mode for a given period of time, which is 10 seconds in this case. The consequences of this behavior are twofold. First, it allows a flow to probe the network for its initial capacity when it enters the network, until it reaches a maximum RTT value. Second, it allows the algorithm to switch to loss-based mode whenever loss-based flows (e.g., NewReno flows) are present in the network because, eventually, the maximum RTT will be reached, since loss-based algorithms ignore delay information. After the loss-based period expires, the algorithm enters the congestion window freeze state for another 10 seconds. During this period, flows will not increase their congestion windows, nor will they reduce them if the measured RTT is above the delay threshold value; they will, however, react to loss signals in order to avoid excessive network congestion. The freeze period is performed so that all competing flows can coordinate their behavior and eventually switch back to delay-based mode. This is achieved by preventing the RTT from reaching its maximum value (by not increasing the congestion window during the freeze period) and by improving fairness with loss-based flows (by ignoring delay signals during this period). After the freeze period expires, all flows enter the probe decrease phase. During the probe decrease phase, all flows slowly decrease their congestion windows until the measured RTT value is below the target delay threshold. When the RTT reaches the target value, the algorithm follows the standard Max-delay CUBIC algorithm rules. Note, however, that whenever the maximum RTT is reached or surpassed during the delay-based or probe decrease phases, the algorithm switches back to loss-based mode, because this is indicative that there are loss-based flows in the link, and therefore it should revert to a more aggressive behavior.
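The phase transitions just described can be summarised as a small state machine (Python sketch; the state names and function signature are ours, and timer handling is abstracted into a boolean):

```python
from enum import Enum

class Phase(Enum):
    OFF = 0         # normal delay-based Max-delay CUBIC operation
    PROBING = 1     # loss-based (NewReno-style) growth for 10 s
    FREEZING = 2    # window held fixed, delay signals ignored, for 10 s
    DECREASING = 3  # window slowly reduced until RTT falls below threshold

def next_phase(phase, rtt, max_rtt, threshold, timer_expired):
    """One transition of the short-term-fairness state machine (sketch).

    rtt, max_rtt, and threshold are in seconds; timer_expired tells
    whether the current phase's 10-second timer has run out.
    """
    if rtt >= max_rtt:                     # loss-based flows detected: (re)probe
        return Phase.PROBING
    if phase is Phase.PROBING and timer_expired:
        return Phase.FREEZING
    if phase in (Phase.FREEZING, Phase.DECREASING):
        if rtt <= max_rtt - threshold:
            return Phase.OFF               # queue drained: back to delay-based
        if phase is Phase.FREEZING and timer_expired:
            return Phase.DECREASING
    return phase
```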

Expected results

This approach, like the core max-delay algorithm, has very good efficiency, improves fairness, and also shows improved network friendliness after the initial probing phase. Additionally, we consider this a very interesting approach, since it greatly improves two additional fairness concerns: first, the short-term unfairness period caused by new flows probing the network; second, the well-known unfairness challenge caused by the coexistence of delay-based and loss-based algorithms in the same link. This last characteristic is comparable to the Max-delay CUBIC with NewReno variant; however, contrary to the NewReno variant, it can revert its operation back to delay-based mode. This enables considerably better network friendliness and fairness, since we break from the constraints of legacy TCP.


Max-delay CUBIC with improved short-term fairness

Initialization:
    δprobe ← 10    /* Network probe waiting time, in seconds */
    probeChangeTimer ← 0    /* Probe state transition timer */
    probeEndTimer ← 0    /* Probe shutdown timer */

For each ACK:
    /* Change to loss-based mode if the maximum RTT is reached */
    if rtt ≥ maxRTT then
        maxRTT ← rtt
        if probe ≠ on or rtt > prevMaxRTT then
            probe ← on
            probeChangeTimer ← now + δprobe    /* assess network state in δprobe seconds */
            prevMaxRTT ← rtt
        end if
    end if

    /* Continue in loss mode, until probe timer expires */
    if probe = on then
        if now < probeChangeTimer then
            cwnd ← cwnd + 1/cwnd    /* NewReno increase rule */
        else
            probe ← freezing
            probeEndTimer ← now + δprobe
        end if
    end if

    /* Do the actual network state assessment and see if we can enter full delay-based mode */
    if probe = freezing or probe = decreasing then
        if rtt ≤ maxRTT − threshold then
            probe ← off
            /* from this point on, follow Max-delay CUBIC algorithm */
        else if now > probeChangeTimer then
            if probe = freezing then
                probe ← decreasing
                probeChangeTimer ← now + δprobe
            else
                /* probe = decreasing */
                cwnd ← cwnd · decreaseFactor
                probeChangeTimer ← now + δprobe
            end if
        else
            /* leave cwnd untouched */
        end if
    else
        /* probe is off, follow Max-delay CUBIC algorithm */
    end if


3.5 Min-delay CUBIC

Rationale

The min-delay approach draws inspiration from the proposals in [41, 72]. Note that these approaches are not Vegas-based, such as [69] and the delay components of [36, 6, 63, 33], among others. Instead, a delay threshold is used to control the congestion window dynamics. The use of a threshold, contrasted with the traditional approach of keeping a number of packets queued at the bottleneck router, allows these proposals to scale, independently of the number of flows present in the network, up to a theoretical limit [40]. Moreover, a queue draining technique is used in order to assess the actual value of the propagation delay (base RTT), as this approach requires it for correct operation.

Contrary to the max-delay approach, this method does not need to probe for the maximum RTT. This allows the algorithm to be purely delay-based, and to be capable of achieving a 0% packet loss rate. The main drawback of this approach is a slight reduction in link efficiency, caused by the queue draining mechanism. The algorithm adjusts its congestion window according to the rules described in the Min-delay CUBIC algorithm. δ is set to 500 ms, as per the recommendations in [72].

Apart from the queue draining technique, which is detailed next, Min-delay CUBIC follows the same rules as Max-delay CUBIC. The major difference is that Min-delay CUBIC uses the minimum measured RTT (base RTT) to control its operation, whereas Max-delay CUBIC uses the maximum RTT instead.

Implementation issues

Contrary to Max-delay CUBIC, the Min-delay CUBIC algorithm follows a more standard approach and, as a result, there were fewer issues to solve. Note, however, that like Max-delay CUBIC, Min-delay CUBIC should use an absolute queueing delay threshold value, as discussed before. Accordingly, the one issue that required more attention was the choice of the queue draining technique. In [15], a very simple queue draining technique is proposed, but this method can be problematic with a large number of flows (e.g., more than 100 simultaneous flows). In [72], a more sophisticated technique is presented that, according to its authors, should be more scalable. We decided to create a hybrid approach, consisting of ideas from both proposals. The basic principle of our approach is to combine the technique proposed in [15] while, at the same time, exploring the effects of flow synchronization through delay signals, as proposed in [72]. The main challenge that affects the scalability of the proposal in [15] is that, when a larger number of flows exist in the link, they will not see the network in a consistent state. This is because their operation is not synchronized, and consequently some flows will be increasing their congestion windows, while others will be decreasing theirs. Due to this lack of synchronization, it is likely that the queue will never be properly drained when many flows compete in the link. On the other hand, by synchronizing flows, we can increase the chances of properly draining the queues, because


Min-delay CUBIC algorithm

Initialization:
    δ ← 500    /* Waiting period, in milliseconds */
    reduce ← false    /* If true, then reduce cwnd */
    threshold ← 100    /* Delay threshold, in milliseconds */

For each ACK:
    baseRTT ← min(baseRTT, rtt)
    β ← 0.8 · baseRTT / rtt    /* β calculated in order to drain the queue [15] */
    if now > waitTime then
        if reduce then
            cwnd ← β · cwnd    /* β is the decrease parameter */
            ssthresh ← cwnd
            waitTime ← now + δ
            reduce ← false
        else if rtt − baseRTT > threshold then
            reduce ← true
            waitTime ← now + δ
        else
            /* increase congestion window using CUBIC rule */
        end if
    else
        /* leave cwnd untouched */
    end if

On packet loss:
    cwnd ← β · cwnd    /* β is the decrease parameter */
    ssthresh ← cwnd


flows will reduce and increase their congestion windows more closely together. In essence, we create a queue draining mechanism that, while remaining fairly simple, explores flow synchronization in order to enable the proper draining of queues, even with a large number of competing flows.
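The queue draining decrease factor from [15], β = 0.8 · baseRTT/rtt, can be illustrated as follows (Python sketch; names are ours, times in seconds):

```python
def drain_beta(base_rtt, rtt):
    """Decrease factor from [15]. With an empty queue (rtt == base_rtt) this
    is a plain 0.8 multiplicative decrease; as queueing delay grows, the
    reduction deepens so that the post-reduction window fits the pipe and
    the bottleneck queue can actually drain."""
    return 0.8 * base_rtt / rtt

# Empty queue: the window is multiplied by 0.8, as in plain CUBIC-style decrease.
# 100 ms of queueing delay on a 100 ms path: the window is halved instead.
```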

Expected results

This variant presents very good network friendliness, being able to maintain a 0% packet loss rate, including during the more aggressive slow-start phase, even with many competing flows. Recall that a congestion control algorithm that strives to avoid packet loss presents interesting advantages, as (overly) delayed and dropped packets must be retransmitted, needlessly increasing the load on the network. Furthermore, it can be considerably more scalable than Vegas-based congestion control proposals, which usually require significantly large queues to maintain network friendliness with a large number of simultaneous flows. Further, queues are kept small at all times, thus greatly improving the performance of delay-sensitive traffic, such as VoIP, interactive remote terminal sessions, and even Web sessions, as their performance is strongly affected by network delay [59].

Moreover, it significantly improves the fairness performance of the original CUBIC protocol. It also shows improved short-term fairness compared to the Max-delay CUBIC algorithm. Recall that, without additional mechanisms, Max-delay CUBIC favors new flows for a defined period of time, as discussed in section 3.2. Since Min-delay CUBIC always remains in delay-based mode, this is solved by design.

Finally, it shows significantly improved efficiency over TCP NewReno, albeit with a slight loss of efficiency when compared with the original CUBIC protocol. We believe such a small loss of efficiency is worthwhile, considering the significant gains in all other performance metrics. Min-delay CUBIC can also show improved efficiency scalability when compared with other delay-based proposals, such as FAST TCP, which can overflow the queues once a number of concurrent flows share the link, causing self-induced packet loss. This is illustrated in figure 3.5, where competing FAST TCP flows overflow the router's queue, causing packet drops and reduced efficiency.

Summary

In this chapter, we have detailed the major challenges faced by delay-based and hybrid congestion control algorithms. We have described several new and existing mechanisms to improve network performance in metrics such as efficiency, fairness, network friendliness, and scalability. Finally, we have incorporated the described techniques into an existing loss-based congestion control protocol: CUBIC. The result is two novel congestion control proposals: Max-delay CUBIC, a hybrid congestion control algorithm, and Min-delay CUBIC, a pure delay-based congestion control algorithm.


Chapter 4

Simulation results

In this chapter, we study, through simulation, the CUBIC, Max-delay CUBIC, and Min-delay CUBIC algorithms. Using the NS-2 simulator, we observe several performance aspects of the different proposals, namely link efficiency, fairness, queue occupancy, delay, packet loss, and scalability. Throughout the chapter, we compare and discuss the obtained results.

4.1 Methodology

Simulated network topology and parameters

We use the Network Simulator 2 (NS-2) to create a number of network scenarios. NS-2 network simulations are defined using an OTcl script, which is then interpreted and run by NS-2. Using the network topology described in figure 4.1, we can define several parameters, such as link capacity, propagation delay, and the arrival and departure of flows in the network, according to the scenario being simulated. Some of these parameters can also be changed during the simulations in order to emulate dynamic network conditions. In Annex A, we show an example of an OTcl script used to define our network experiments.

In the defined network topology, Router 0 is the router to which all source nodes connect, Router 1 is the bottleneck router, and Router 2 is the router to which the sink nodes connect. During a simulation, each source node transmits data to its corresponding sink node. C0, the bandwidth capacity between Router 0 and Router 1, is set to twice the value of C1, the bottleneck router's bandwidth. Unless stated otherwise, Delay0 and Delay1 are set to 1 ms. In the network scenarios where heterogeneous propagation delays are required, we set the value of the delay for each source node according to a uniform distribution.

NS-2 allows us to create an arbitrarily complex network, limited only by CPU and RAM. However, increasing the complexity of the network results in increased hardware requirements and longer simulation times. Given our hardware constraints, we consider that the chosen network topology, while simple, is adequate to simulate all network scenarios.


Figure 4.1: Network topology used in NS-2 simulations

Performance metrics

In order to evaluate the performance of our algorithms, we measure six key performance indicators, in one-second intervals: throughput, fairness, link efficiency, packet drop rate, queue occupancy, and round-trip time.

First, the efficiency of the algorithms is assessed. Efficiency is concerned with link utilization; therefore, a high-speed TCP variant must be able to use the available capacity of networks with high bandwidth-delay products. Link efficiency is measured at the bottleneck router (Router 1) by observing the link's output rate. Second, fairness is measured. We expect that flows competing for bandwidth in a bottleneck link eventually converge to a fair share of the bandwidth. Flow fairness is assessed under different scenarios, such as flows with different RTTs. Fairness is measured according to Jain's fairness index [13]:

F(x) = (∑ xi)² / (n ∑ xi²),

where xi is the measured transmission rate of flow i in a time interval. The fairness index is bounded between 0 and 1. A totally fair allocation (all xi's are equal) has a fairness of 1; as the bandwidth allocation becomes more unfair, the fairness value decreases accordingly. Finally, network friendliness is studied by inspecting packet loss rates, queueing delay, and router queue occupancy. Network friendliness is a desirable characteristic of a congestion control algorithm, as it reduces network strain and improves the performance of delay-sensitive applications, such as VoIP and interactive sessions. The packet drop rate is calculated at the bottleneck router using Lrate = Pd/Pa, where Pd is the total number of packets dropped at the bottleneck router, and Pa is the total number of packet arrivals at the bottleneck router. Queue occupancy is calculated at the bottleneck router by observing the number of packets currently buffered for transmission. Finally, round-trip time is measured at each sender node.
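As an illustration, the two per-interval metrics defined above can be computed from measured rates as follows (a minimal sketch; the function names are ours and are not part of the benchmark suite):

```python
def jain_fairness(rates):
    """Jain's fairness index: F(x) = (sum x_i)^2 / (n * sum x_i^2).

    Returns 1.0 for a perfectly fair allocation and decreases toward
    1/n as a single flow monopolizes the bandwidth.
    """
    n = len(rates)
    total = sum(rates)
    if n == 0 or total == 0:
        return 0.0
    return total ** 2 / (n * sum(x * x for x in rates))


def loss_rate(dropped, arrived):
    """Packet drop rate at the bottleneck router: Lrate = Pd / Pa."""
    return dropped / arrived if arrived else 0.0


# A fair allocation yields index 1.0; a starved allocation yields 1/n.
print(jain_fairness([50.0, 50.0, 50.0, 50.0]))  # 1.0
print(jain_fairness([100.0, 0.0, 0.0, 0.0]))    # 0.25
```

Both functions operate on one measurement interval; in our setting they would be evaluated once per second over the per-flow rates and the bottleneck router's counters.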

Evaluated algorithms


We mainly observe the performance aspects of our two proposals, Max-delay CUBIC and Min-delay CUBIC. In some scenarios, we test additional Max-delay CUBIC sub-variants in order to highlight certain key design aspects and how the choice of parameters influences the behavior of the algorithms. In addition, since we chose CUBIC as our testbed for improvement, we also compare the performance of CUBIC with our proposals whenever relevant. Finally, FAST TCP's performance is also assessed in several scenarios, in order to contrast the behavior of a delay-based AIMD congestion control algorithm, such as Max- and Min-delay CUBIC, with a popular Vegas-inspired congestion control algorithm, namely FAST TCP.

Implementation aspects

In order to assess our proposals, we have implemented the Max-delay CUBIC and Min-delay CUBIC algorithms in the NS-2 simulator. Furthermore, pacing was implemented by porting an earlier NS-2 patch [52], by David X. Wei, to the most recent NS-2 version. CUBIC's NS-2 implementation was taken from CUBIC's official page [14]. Finally, FAST TCP's NS-2 implementation was taken from [51].

Benchmark suite

We use the benchmark suite available in [7] as the basis for our simulations. The suite is essentially used to generate the simulation results, and to parse and plot the resulting data. This benchmark suite consists of a collection of shell scripts and an example NS-2 dumbbell network scenario, in the form of an OTcl script [53], which is run by NS-2. The suite uses the Gnuplot graphing utility [25] to plot the simulation results. It can generate reports for several metrics, such as throughput, queue occupancy, and round-trip time. We chose this test suite mainly because it has been used in peer-reviewed papers, e.g., [42], which gives us more confidence in the obtained results. Unfortunately, the tool is somewhat limited and, without further improvements, is not suitable for our objectives. Some of its limitations include being able to simulate only two concurrent flows, not supporting fairness measurements (using Jain's fairness index), not being able to assess link efficiency, and not being able to dynamically change the network conditions, among other issues. Therefore, we have extended and modified the suite to make it more useful for achieving our goals. This was done in the form of several new shell, Python [56], and Gnuplot scripts. In Annex B, we show some example scripts created to extend the functionality of the benchmark suite.
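The kind of post-processing added to the suite can be sketched as follows. The snippet bins the received bytes of a standard NS-2 ASCII trace into one-second throughput samples for a given link; the trace lines and the function are illustrative, not the actual scripts from Annex B:

```python
from collections import defaultdict


def throughput_per_second(trace_lines, link=("1", "2")):
    """Aggregate received bytes per one-second interval on one link of
    an NS-2 ASCII trace. Each trace line has the shape:
      r 1.84375 1 2 tcp 1500 ------- 0 0.0 3.0 225 610
    (event, time, from-node, to-node, type, size-in-bytes, flags, ...).
    Only receive ('r') events on the requested link are counted."""
    bytes_per_bin = defaultdict(int)
    for line in trace_lines:
        f = line.split()
        if len(f) >= 6 and f[0] == "r" and (f[2], f[3]) == link:
            bytes_per_bin[int(float(f[1]))] += int(f[5])
    # Convert each one-second byte count to Mb/s.
    return {t: b * 8 / 1e6 for t, b in sorted(bytes_per_bin.items())}


trace = [
    "r 0.5 1 2 tcp 1500 ------- 0 0.0 3.0 1 1",
    "r 0.9 1 2 tcp 1500 ------- 0 0.0 3.0 2 2",
    "r 1.2 1 2 tcp 1500 ------- 0 0.0 3.0 3 3",
]
print(throughput_per_second(trace))  # {0: 0.024, 1: 0.012}
```

Per-second series produced this way feed directly into the fairness and efficiency computations described in the previous subsection.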


Parameters                      Events
Sources: 1                      t = 0 s: start 1 source
C0 (Mb/s): 2000                 t = 3600 s: simulation ends
C1 (Mb/s): 1000
RTT (ms): 160
RTR1 queue: 100% BDP
MSS (bytes): 1460

Table 4.1: Simulation parameters for testing the efficiency of a single flow in a network with a large BDP

4.2 Single flow efficiency

A fairly popular efficiency benchmark for congestion control algorithms consists in observing how much data can be transferred, by a single flow, over a link with a significantly high bandwidth-delay product. Following this idea, in order to assess the efficiency of our proposals, we simulate a 1 Gb/s transcontinental network pipe with a round-trip propagation delay of 160 ms. Further parameters are specified in table 4.1.
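To make the scale of this scenario concrete, the bandwidth-delay product of the simulated pipe, and hence the size of the 100% BDP bottleneck queue in MSS-sized packets, can be worked out as follows (a small illustrative helper, not part of our simulation scripts):

```python
def bdp_packets(capacity_mbps, rtt_ms, mss_bytes=1460):
    """Bandwidth-delay product of a link, expressed in MSS-sized packets."""
    bdp_bits = capacity_mbps * 1e6 * rtt_ms / 1e3  # capacity * RTT
    return bdp_bits / (mss_bytes * 8)


# A 1 Gb/s link with 160 ms RTT holds 160 Mbit (20 MB) in flight, so a
# 100% BDP router queue corresponds to roughly this many 1460-byte packets:
print(round(bdp_packets(1000, 160)))  # 13699
```

A flow must therefore sustain a congestion window of this order of magnitude to fill the pipe, which is what makes the scenario demanding for legacy TCP.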

In this scenario, we start a single file transfer between the two endpoints, with a simulation time of one hour. We assess the link efficiency results for Max-delay CUBIC and Min-delay CUBIC. Further, we also test the Max-delay CUBIC with NewReno (MaxD CUBIC+NR) sub-variant, in order to observe whether it can achieve its primary purpose: very good link utilization. We then compare the results obtained for our proposals with CUBIC, our testbed algorithm, and FAST TCP, a very popular congestion control algorithm designed to be very efficient in networks with large bandwidth-delay products. The amount of data that successfully arrives at the receiver is used as the key performance indicator, presented in table 4.2. Additionally, in figure 4.2, we show the average link efficiency obtained during each of the five tests.

Figure 4.2 shows that CUBIC and Max-delay CUBIC have excellent link efficiency, fully utilizing the bottleneck link. Min-delay CUBIC has, as expected, lower efficiency performance. Nevertheless, we still regard Min-delay CUBIC's efficiency as very acceptable (97%), considering that it must periodically drain the router queues in order to estimate its baseline propagation delay.

FAST TCP shows slightly inferior performance to what was expected (table 4.2 and figure 4.2). We strongly believe that this is related to its α parameter. In order to achieve very good link efficiency, FAST TCP's α parameter may need manual tuning, depending on the network conditions and the number of concurrent flows. We observe that NS-2's default α parameter of 100 packets is not sufficient for FAST TCP to fully utilize the link in this single-flow scenario.

CUBIC and Max-delay CUBIC with NewReno were able to transfer the largest amount of data during the one hour of simulation (table 4.2). This is because neither proposal uses a pacing mechanism, and therefore their slow-start behavior is more aggressive than


Figure 4.2: Average efficiency of the five tested algorithms. Bar chart: efficiency (0.9 to 1) per algorithm (CUBIC, FAST, MaxD CUBIC, MaxD CUBIC+NR, MinD CUBIC); BW 1000 Mbps, RTT 160 ms, Buffer 100% BDP.

CUBIC        MaxD CUBIC+NR   MaxD CUBIC   FAST TCP    MinD CUBIC
418.57 GB    418.57 GB       418.53 GB    415.82 GB   406.86 GB

Table 4.2: Total data transferred between the two endpoints during the simulation

the slow-start behavior of a variant that adopts a pacing mechanism, as is the case for Max-delay CUBIC, Min-delay CUBIC, and FAST TCP. Recall that the reason why Max-delay CUBIC with NewReno does not use pacing is that it only uses delay information to switch between its aggressive cubic growth phase and the legacy NewReno-based phase, and therefore does not need very accurate delay information. Because of their more aggressive behavior, both CUBIC and Max-delay CUBIC with NewReno were able to transfer 40 MB more than Max-delay CUBIC (a difference of less than 0.01%).
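The quoted difference can be checked directly against the totals in table 4.2:

```python
# Totals from table 4.2 (decimal GB, as reported).
cubic_gb = 418.57
maxd_cubic_gb = 418.53

diff_mb = (cubic_gb - maxd_cubic_gb) * 1000   # GB -> MB
ratio = (cubic_gb - maxd_cubic_gb) / cubic_gb # fraction of the total

print(f"{diff_mb:.0f} MB, {ratio * 100:.4f}% of the total")  # 40 MB, 0.0096% of the total
```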

4.3 RTT fairness improvement

To show the results of the RTT fairness improvement mechanism, proposed in section 3.3, we devise the following experiment. Two competing flows with a significant RTT difference share a 250 Mb/s link. Flows 1 and 2 have 160 ms and 25 ms of two-way propagation delay, respectively. Both flows use the Max-delay CUBIC algorithm with the starvation detection mechanism enabled, in order to mitigate short-term unfairness. The queue of the bottleneck router is set to 100% of the bandwidth-delay product. At t = 0 s in the simulation, flow 1 starts transmitting at full speed. At t = 60 s, flow 2 enters the network and also starts transmitting at full speed. The total simulation time is 10 minutes. The simulation parameters are summarized in table 4.3.

Figure 4.3 shows the results of the simulation with the RTT fairness improvement mechanism disabled. In figure 4.3 (a), it can be seen that, after the transient period finishes, the congestion windows of both flows converge to the same value. This is the result of CUBIC's RTT-independent congestion window increase rule. This design, while an improvement over legacy


Parameters                      Events
Sources: 2                      t = 0 s: start 1 source
C0 (Mb/s): 2000                 t = 60 s: start 1 source
C1 (Mb/s): 1000                 t = 300 s: simulation ends
RTT (ms): 25; 160
RTR1 queue: 100% BDP
MSS (bytes): 1460

Table 4.3: Simulation parameters for testing the RTT fairness improvement mechanism

Parameters                      Events
Sources: 5                      t = 0 s: start 1 source
C0 (Mb/s): 500                  t = 10 s: start 1 source
C1 (Mb/s): 250                  t = 20 s: start 1 source
RTT (ms): [25, 160]             t = 30 s: start 1 source
RTR1 queue: 100% BDP            t = 40 s: start 1 source
MSS (bytes): 1460               t = 300 s: simulation ends

Table 4.4: Simulation parameters for testing the algorithm short-term dynamics

TCP, still has issues achieving RTT fairness, as discussed in section 3.2. For this reason, reduced throughput fairness is observed in figure 4.3 (b).

Figure 4.4 shows the results with the RTT fairness improvement mechanism enabled. In figure 4.4 (a), it is visible that the congestion window of flow 2 is penalized due to its shorter RTT. This results in improved RTT fairness, as observed in figure 4.4 (b).

4.4 Short-term behavior

We simulate the progressive entry of five flows into the network. The purpose of this experiment is to observe how the several algorithms react during a period of network instability, and whether, following this transient period, they eventually converge to a fair share of the available bandwidth. The parameters used for this experiment are specified in table 4.4.

4.4.1 Results for CUBIC

Figure 4.5 shows the throughput attained by five concurrent CUBIC flows. Some researchers have pointed out that CUBIC can have slow convergence behavior [42]. Our results confirm this concern. In this experiment, we observe that it takes approximately 2 minutes, starting from the entry of Flow 5, at t = 40 s, for all five flows to converge to a reasonably fair throughput. During this time, Flow 1 receives an unfair throughput advantage over flows that enter the network later. Towards the end of the simulation, the flows show some difficulty maintaining a fair allocation of bandwidth. Jain's fairness index, in figure 4.7, reflects this. Regarding link efficiency performance, shown in figure 4.6, we see that CUBIC has very good performance. Regarding queue occupancy, in figure 4.8, it can be observed


Figure 4.3: 2 Flows with RTT fairness improvement mechanism disabled. (a) Congestion window, cwnd (packets) vs. time (s); (b) throughput (Mb/s) vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 2 nodes.


Figure 4.4: 2 Flows with RTT fairness improvement mechanism enabled. (a) Congestion window, cwnd (packets) vs. time (s); (b) throughput (Mb/s) vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 2 nodes.


Figure 4.5: Throughput of 5 CUBIC flows. Throughput (Mb/s) vs. time (s), flows 1-5. BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.6: Utilization ratio of the bottleneck link with 5 CUBIC flows. Efficiency vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

that CUBIC leads to high queue occupancy during the entire simulation. This may be an undesirable characteristic, as high queue occupancy and delay can have adverse effects on delay-sensitive applications, such as web traffic and VoIP, as discussed before.
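The convergence time reported above (roughly 2 minutes after Flow 5's arrival) can be extracted mechanically from the per-second fairness series of figure 4.7, for instance as the first instant after the last arrival from which the index stays above a chosen threshold. A sketch, where the threshold value and function name are our own choices:

```python
def convergence_time(fairness, start=40, threshold=0.95):
    """Return the first time t >= start such that the per-second Jain
    index stays at or above `threshold` until the end of the series,
    or None if the flows never settle.
    `fairness` maps time (s) -> fairness index."""
    times = sorted(t for t in fairness if t >= start)
    settled = None
    for t in times:
        if fairness[t] >= threshold:
            if settled is None:
                settled = t  # candidate start of the settled period
        else:
            settled = None   # dipped below threshold; restart
    return settled


# Toy series: the index first stays above 0.95 from t = 160 s onwards.
series = {40: 0.6, 80: 0.8, 120: 0.9, 160: 0.97, 200: 0.98, 240: 0.99}
print(convergence_time(series))  # 160
```

Applying the same rule to every algorithm's fairness series makes the convergence comparisons in this chapter reproducible rather than read off the plots by eye.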

4.4.2 Results for Max-delay CUBIC

Max-delay CUBIC can have different short-term behaviors, depending on the chosen delay threshold value. To show this, we test Max-delay CUBIC with thresholds of 100 ms and 10 ms. Additionally, we test Max-delay CUBIC with short-term fairness improvement, using a delay threshold of 100 ms, to assess whether it is possible to keep a larger threshold value while mitigating the short-term unfairness caused by newer flows entering the network.

Max-delay CUBIC with 100 ms threshold

In figure 4.9, it is clearly seen that new flows acquire bandwidth at the expense of older flows. This is the opposite of what happens with CUBIC (figure 4.5), where the first flow


Figure 4.7: Fairness of 5 CUBIC flows. Fairness index vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.8: Bottleneck router's queue occupancy with 5 CUBIC flows. Router queue (packets) vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.


Figure 4.9: Throughput of 5 Max-delay CUBIC flows (100 ms threshold). Throughput (Mb/s) vs. time (s), flows 1-5. BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

slowly releases bandwidth to newer flows. The behavior of Max-delay CUBIC is expected, as it is essentially a design decision (with its benefits and drawbacks) where newer flows are given temporarily increased priority. We note that this behavior is, in part, shared with other hybrid-based congestion control proposals that keep the standard slow-start mechanism, such as Compound TCP [63]. This is further discussed in section 3.4.1. Nonetheless, after the last flow enters the network, at t = 40 s, the flows converge to a fair share of the bandwidth in less than 40 seconds, as shown in figure 4.11. This is an improvement over CUBIC. However, we do note that it takes an additional 50 seconds to converge to a very fair share of the bandwidth. We believe the reason for this is that, since we inherit CUBIC's congestion window control rule, we also inherit its slower convergence behavior. Regarding link efficiency performance, we can observe that, like CUBIC, Max-delay CUBIC has very good link efficiency. Furthermore, in figure 4.12, we see that, after the transient phase, Max-delay CUBIC can maintain very good link efficiency while having lower queue occupancy than CUBIC (figure 4.8). We note that this is usually a desirable characteristic, since it enables high link utilization while remaining friendly towards delay-sensitive applications.

Max-delay CUBIC with 10 ms threshold

Figure 4.13 shows the throughput of five Max-delay CUBIC flows using a 10 ms delay threshold. We observe that decreasing the value of the delay threshold to 10 ms results in different short-term fairness dynamics, with flows converging faster to a fair share of the link's bandwidth, as figure 4.15 indicates. The reason is that, when using a delay threshold of 10 ms, delay-based flows react later to queueing delay signals. On the other hand, when the delay threshold is set to 100 ms, flows react earlier. The obtained results indicate that a smaller delay threshold results in a short-term fairness improvement when older, delay-based flows compete for bandwidth with newer, loss-based flows. Additionally, we note that the achieved long-term fairness performance is slightly inferior to that achieved when the delay threshold is set to 100 ms. The reason for this is that, as explained before, the RTT fairness improvement mechanism was obtained empirically. When


Figure 4.10: Utilization ratio of the bottleneck link with 5 Max-delay CUBIC flows (100 ms threshold). Efficiency vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.11: Fairness of 5 Max-delay CUBIC flows (100 ms threshold). Fairness index vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.12: Bottleneck router's queue occupancy with 5 Max-delay CUBIC flows (100 ms threshold). Router queue (packets) vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.


Figure 4.13: Throughput of 5 Max-delay CUBIC flows (10 ms threshold). Throughput (Mb/s) vs. time (s), flows 1-5. BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

performing the experiments that led to the RTT fairness improvement procedure, the delay threshold was set to 100 ms, and consequently the mechanism is optimized for this value. Therefore, in this experiment, we can observe that the value of the delay threshold parameter also influences the long-term fairness performance. We have performed additional simulations and verified that long-term fairness improves when the delay threshold value is set closer to 100 ms. However, note that, even though the RTT fairness improvement mechanism appears to require a delay threshold parameter close to 100 ms to operate optimally, its long-term and short-term fairness performance is visibly improved over CUBIC, independently of the chosen delay threshold parameter.

Regarding efficiency performance, in figure 4.14, we can observe that high utilization is achieved during the entire simulation. Figure 4.16 shows that, since the delay threshold value is smaller, Max-delay CUBIC reduces its congestion window closer to the queue limit, resulting in higher queue occupancy. These results indicate that tuning the delay threshold value does indeed affect short-term fairness dynamics, as well as long-term network friendliness metrics, such as queue occupancy.

Max-delay CUBIC with short-term fairness improvement and 100 ms threshold

We have seen that using a larger delay threshold value with Max-delay CUBIC, e.g., 100 ms, results in newer flows acquiring increased throughput at the expense of older flows. This may, or may not, be a desirable characteristic. One possible solution is to use a smaller delay threshold value, as previously shown. However, this results in higher queue occupancy, which may not be wanted. Max-delay CUBIC with short-term fairness improvement tries to enable the use of a higher delay threshold value in Max-delay CUBIC, while keeping compatibility with loss-based flows and improving short-term fairness. In figure 4.17, we can see that older flows do not immediately reduce their throughput in the presence of new flows, despite having a delay threshold value of 100 ms. This results in improved short-term fairness, as observed in figure 4.19, when compared to the short-term fairness of Max-delay CUBIC with the same threshold value (figure 4.15). Figure 4.18 shows that this variant


Figure 4.14: Utilization ratio of the bottleneck link with 5 Max-delay CUBIC flows (10 ms threshold). Efficiency vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.15: Fairness of 5 Max-delay CUBIC flows (10 ms threshold). Fairness index vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.16: Bottleneck router's queue occupancy with 5 Max-delay CUBIC flows (10 ms threshold). Router queue (packets) vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.


Figure 4.17: Throughput of 5 Max-delay CUBIC flows with short-term fairness improvement (100 ms threshold). Throughput (Mb/s) vs. time (s), flows 1-5. BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.18: Utilization ratio of the bottleneck link with 5 Max-delay CUBIC flows (100 ms threshold). Efficiency vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

can also present very good efficiency performance. Regarding queue occupancy, figure 4.20 shows that, at approximately t = 50 s, flows start coordinating their behavior in order to bring the measured round-trip time down below the 100 ms delay threshold. Initially, flows freeze their congestion windows, and then slowly reduce them, resulting in the queue slope from 50 s to 60 s. After this period, the core Max-delay CUBIC algorithm is enabled. This can be confirmed by observing that, after t = 80 s, the queue dynamics are identical to those of Max-delay CUBIC (figure 4.12).

4.4.3 Results for Min-delay CUBIC

Figure 4.21 shows that Min-delay CUBIC's throughput convergence is considerably improved over CUBIC (figure 4.5), as confirmed by the fairness index shown in figure 4.23. In addition, figure 4.21 also shows that its slow-start mechanism is less aggressive than the slow-start


Figure 4.19: Fairness of 5 Max-delay CUBIC flows with short-term fairness improvement (100 ms threshold). Fairness index vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.20: Bottleneck router's queue occupancy with 5 Max-delay CUBIC flows with short-term fairness improvement (100 ms threshold). Router queue (packets) vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.


Figure 4.21: Throughput trajectory of 5 Min-delay CUBIC flows. Throughput (Mb/s) vs. time (s), flows 1-5. BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.22: Utilization ratio of the bottleneck link with 5 Min-delay CUBIC flows. Efficiency vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

used by CUBIC and the Max-delay CUBIC variants. This is due to the adoption of limited slow-start, used to avoid packet loss. Recall that this algorithm strives to keep queue occupancy and delay low. Therefore, in figure 4.22, we see that this variant does indeed sacrifice some efficiency performance, due to its queue draining mechanism. This is confirmed in figure 4.24, where periods of zero queue occupancy are visible. Note that queues are kept small at all times, including during the initial transitory period, resulting in zero dropped packets during the entire simulation.

4.5 Dynamic scenario

In this experiment, we observe the behavior of the CUBIC, Max-delay CUBIC, and Min-delay CUBIC algorithms in a dynamic network scenario. To achieve this, we simulate a number of events that change the state of the network, namely the arrival and departure of flows, and changes to the capacity of the bottleneck link. The main objective of this


Figure 4.23: Fairness of 5 Min-delay CUBIC flows. Fairness index vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.

Figure 4.24: Bottleneck router's queue occupancy with 5 Min-delay CUBIC flows. Router queue (packets) vs. time (s). BW 250 Mbps, RTT [25, 160] ms, Buffer 100% BDP, 5 nodes.


Parameters                      Events
Sources: 50                     t = 0 s: start 25 sources
C0 (Mb/s): 2000                 t = 300 s: start 25 sources
C1 (Mb/s): 1000                 t = 600 s: C1 = 500 Mb/s
RTT (ms, uniform): [25, 160]    t = 900 s: C1 = 1000 Mb/s
RTR1 queue: 100% BDP            t = 1200 s: stop 25 sources
MSS (bytes): 1460               t = 1500 s: simulation ends

Table 4.5: Simulation parameters for testing the algorithm behavior in a dynamic network scenario

test is to assess how the different algorithms adapt to changing network conditions. Further simulation details are specified in table 4.5.

4.5.1 Results for CUBIC

Figure 4.25 shows that CUBIC flows have difficulty reaching a stable throughput rate, resulting in impaired fairness performance. This is confirmed by Jain's fairness index, in figure 4.27. We believe the reason for CUBIC's poor performance in this experiment is essentially the existence of flows with uniformly distributed RTTs, paired with unsynchronized losses at the bottleneck router. An unsynchronized loss is a congestion event that happens when the router queue overflows, but only a subset of flows is affected by it. Recall that, without pacing, packets tend to be clustered in the queues of routers. Thus, when a queue overflows, the dropped packets may belong to only a small sample of the flows existing on the link. This phenomenon is more likely to occur in routers with drop-tail queues. This means that some flows reduce their congestion windows, while others do not, decreasing fairness performance. Furthermore, recall that, even though CUBIC has an RTT-independent congestion window increase rule, flows with shorter RTTs still acquire a higher throughput rate, for the reasons discussed in section 3.2. Also, during the initial slow-start phase, flows with shorter RTTs are much more aggressive than flows with longer RTTs, resulting in very different initial throughput rates, which aggravates the situation further. We have verified that if we run the simulation with all flows having identical RTTs, they eventually converge to a fair state. However, a scenario where all flows have identical RTTs is not the most realistic one.

On the other hand, CUBIC's aggressiveness, by keeping queues fully utilized at all times (figure 4.28), does result in very good efficiency, as can be observed in figure 4.26. The drawback is increased network strain, resulting in many dropped packets (figure 4.29) and high latency (figure 4.30), which may impair the performance of short-lived flows and other types of cross-traffic, as discussed before.
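For context, CUBIC's RTT-independent increase rule (section 3.2) grows the window as a cubic function of the time elapsed since the last loss, W(t) = C(t − K)³ + W_max, with K = (W_max·β/C)^(1/3). A minimal sketch using the constants from the CUBIC paper (β = 0.2, C = 0.4; real implementations may tune these differently):

```python
def cubic_window(t, w_max, beta=0.2, c=0.4):
    """CUBIC congestion window (in segments) t seconds after the
    last loss event. w_max is the window size just before the loss;
    beta and c are the constants from the CUBIC paper."""
    k = (w_max * beta / c) ** (1.0 / 3.0)  # time to climb back to w_max
    return c * (t - k) ** 3 + w_max

w_max = 1000.0
k = (w_max * 0.2 / 0.4) ** (1.0 / 3.0)   # ~7.9 s to return to w_max
print(round(cubic_window(0.0, w_max)))   # 800: window right after the beta reduction
print(round(cubic_window(k, w_max)))     # 1000: plateau at w_max
```

Because growth depends on wall-clock time rather than on RTTs elapsed, two flows with different RTTs follow the same curve between losses; the residual RTT unfairness discussed above comes from loss timing and slow-start, not from this rule.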

4.5.2 Results for Max-delay CUBIC

Figure 4.31 shows that, initially, some flows acquire greater throughput rates than others. This is because Max-delay CUBIC uses the regular slow-start mechanism. As

Figure 4.25: Throughput of 50 CUBIC flows (throughput in Mb/s vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.26: Utilization ratio of the bottleneck link with 50 CUBIC flows (efficiency vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.27: Fairness of 50 CUBIC flows (fairness index vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.28: Bottleneck router's queue occupancy with 50 CUBIC flows (router queue in packets vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.29: Packets dropped at the queue of the bottleneck router with 50 CUBIC flows (dropped packets vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.30: RTT of 50 CUBIC flows (RTT in ms vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.31: Throughput of 50 Max-delay CUBIC flows (throughput in Mb/s vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

previously mentioned, slow-start causes flows with shorter RTTs to be much more aggressive than flows with longer RTTs. Nevertheless, after this initial transient period, it can be seen that flows eventually stabilize around a fair throughput rate. The same behavior is again observed at t = 300 s, when 25 additional flows enter the network. Overall, we can see that flows oscillate considerably less than CUBIC's flows, adapting to the changing network conditions without introducing too much network instability. Fairness is improved over CUBIC due to the use of delay as an additional congestion signal and the introduction of pacing. Both mechanisms contribute to the increased synchronization among flows, improving fairness, as discussed in section 3.2. In addition, by activating the RTT fairness improvement mechanism, we can bring the Fairness Index very close to 1 during most of the simulation (figure 4.33).

In figures 4.34 and 4.36, we may observe the dual mode of operation of Max-delay CUBIC. The entry of new flows into the network coincides with spikes in both queuing delay and occupancy. After the initial probing phase, flows switch to delay-based mode, becoming less aggressive and avoiding further packet loss, as observed in figure 4.35. Recall that Max-delay CUBIC, contrary to CUBIC, strives to keep queue occupancy low, achieving a friendlier behavior towards cross-traffic applications and improving their performance.

Regarding network efficiency, figure 4.32 shows that Max-delay CUBIC has good efficiency. We can observe that, since a 100 ms threshold value was chosen, Max-delay CUBIC keeps smaller queues than CUBIC. Therefore, some efficiency dips are visible in this simulation. This is most evident at t = 900 s, when the link capacity is restored to its original value (1 Gb/s), and at t = 1200 s, when 25 flows leave the network simultaneously. During these periods, the remaining flows need a few seconds to increase their sending rate until the link is fully utilized again, resulting in a slight loss of efficiency. This efficiency reduction can be mitigated by decreasing the value of the delay threshold, as discussed in section 3.4.1.
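The mode switch governed by the 100 ms threshold can be reduced to a simple decision on the estimated queuing delay. The following is an illustrative sketch only (the full rules are in chapter 3); the delay estimate and function names are our own:

```python
THRESHOLD = 0.100  # 100 ms queuing-delay threshold used in this simulation

def queuing_delay(srtt, base_rtt):
    """Queuing delay estimate: smoothed RTT minus the minimum RTT seen."""
    return srtt - base_rtt

def mode(srtt, base_rtt, threshold=THRESHOLD):
    """Hypothetical mode selector: probe like loss-based CUBIC while
    queuing delay is below the threshold, back off (delay-based mode)
    before the router queue overflows once it is reached."""
    return "delay" if queuing_delay(srtt, base_rtt) >= threshold else "loss"

print(mode(0.070, 0.025))  # prints "loss": only 45 ms of queuing delay
print(mode(0.160, 0.025))  # prints "delay": 135 ms, queue building up
```

A lower threshold would make flows back off earlier, trading the efficiency dips discussed above against even smaller queues.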

Figure 4.32: Utilization ratio of the bottleneck link with 50 Max-delay CUBIC flows (efficiency vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.33: Fairness of 50 Max-delay CUBIC flows (fairness index vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.34: Bottleneck router's queue occupancy with 50 Max-delay CUBIC flows (router queue in packets vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.35: Packets dropped at the queue of the bottleneck router with 50 Max-delay CUBIC flows (dropped packets vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.36: RTT of 50 Max-delay CUBIC flows (RTT in ms vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.37: Throughput of 50 Min-delay CUBIC flows (throughput in Mb/s vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

4.5.3 Results for Min-delay CUBIC

Figure 4.37 shows that Min-delay CUBIC flows have a significantly more stable throughput trajectory than CUBIC (figure 4.25). This results in improved fairness, as can be observed in figure 4.39. Its long-term fairness performance is, however, inferior to Max-delay CUBIC's (figure 4.33). This is because Min-delay CUBIC must use a queue draining procedure, and is therefore unable to use the RTT fairness improvement mechanism. We observe the effect of the queue draining technique in figure 4.40, which shows short periods of queue occupancy reaching zero packets. We also see its effect in figure 4.38, where many spikes corresponding to reduced efficiency are visible. Nevertheless, its fairness performance is significantly improved over CUBIC. In addition, the short-term convergence behavior, after new flows enter the network, is improved over both CUBIC and Max-delay CUBIC. This is essentially due to the limited slow-start mechanism and to remaining in delay-based mode, as explained next. Recall that, during slow-start, flows with shorter RTTs obtain an unfair share of the available bandwidth, because they are able to increase their congestion window faster than flows with longer RTTs. With limited slow-start, all flows increase their congestion window more slowly than with the regular exponential slow-start. In addition, by always remaining in delay-based mode, flows can leave (limited) slow-start before overflowing the router queues, and start to converge earlier. The effect of Min-delay CUBIC on the queue of the bottleneck router is shown in figures 4.40 and 4.41. Both figures confirm that Min-delay CUBIC maintains small queues and zero packet loss during the entire simulation.
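To make the limited slow-start idea concrete, here is a sketch of the per-ACK increase rule from RFC 3742 limited slow-start, a representative variant of the mechanism; the exact rule used by Min-delay CUBIC is the one described in chapter 3, and the max_ssthresh value below is illustrative:

```python
def limited_slow_start_ack(cwnd, max_ssthresh=100):
    """Congestion window (in segments) after one ACK under RFC 3742
    limited slow-start. Below max_ssthresh it behaves like regular
    slow-start (+1 segment per ACK, i.e. doubling per RTT); above it,
    growth is capped at max_ssthresh/2 segments per RTT."""
    if cwnd <= max_ssthresh:
        return cwnd + 1
    k = int(cwnd / (0.5 * max_ssthresh))
    return cwnd + 1.0 / k

# Regular slow-start would turn 50 segments into tens of thousands
# after 200 ACKs; limited slow-start flattens the growth instead:
cwnd = 50.0
for _ in range(200):
    cwnd = limited_slow_start_ack(cwnd)
print(round(cwnd))
```

Because the per-RTT increase is bounded, short-RTT flows can no longer race far ahead of long-RTT flows during start-up, which is the convergence improvement observed above.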

4.6 Scalability

In this test, we assess the scalability of Max-delay CUBIC and Min-delay CUBIC with an increasing number of simultaneous flows. For the Max-delay CUBIC algorithm, we set the delay threshold to 100 ms, and test two sub-variants. One sub-variant enables the low_window recommend-

Figure 4.38: Utilization ratio of the bottleneck link with 50 Min-delay CUBIC flows (efficiency vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.39: Fairness of 50 Min-delay CUBIC flows (fairness index vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.40: Bottleneck router's queue with 50 Min-delay CUBIC flows (router queue in packets vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.41: Packets dropped at the bottleneck router with 50 Min-delay CUBIC flows (dropped packets vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)

Figure 4.42: RTT of 50 Min-delay CUBIC flows (RTT in ms vs. time in s; BW 1000 Mbps, RTT [25, 160] ms, buffer 100% BDP, 50 nodes)


Parameters                      Events
Sources          [1, 2000]      t = 0 s      start n sources
C0 (Mb/s)        2000           t = 1000 s   simulation ends
C1 (Mb/s)        4000
RTT (uniform)    [25, 160]
RTR1 queue       100% BDP
MSS              1460

Table 4.6: Simulation parameters for testing the algorithm scalability with increasing flow number

ation (section 3.3), which can improve delay sampling, but forces the algorithm to operate in loss-based mode when a large number of flows share the link. The second sub-variant does not enable this mechanism. This way, we can observe how the low_window mechanism affects the scalability of Max-delay CUBIC.

In addition, we also test the CUBIC and FAST TCP algorithms in order to establish a baseline for comparison. Since concurrent flow scalability can be a challenge for delay-based congestion control proposals inspired by Vegas, we try to assess whether our proposals, which adopt a threshold-based mechanism, show improved scalability with a large population of flows.

We run several simulations, on a link with 2 Gb/s of bandwidth, for 1000 seconds. Further simulation details are described in table 4.6. We observe the metrics of link efficiency, fairness, packet loss rate, and queue occupancy. With the exception of the packet loss rate, all metrics are calculated in one-second intervals, which are then averaged. The packet loss rate is given by L_rate = P_d / P_a, where P_d is the total number of packets dropped at the bottleneck router, and P_a is the total number of packet arrivals at the bottleneck router. We consider the simulation interval of [100, 1000] s. This allows the algorithms to stabilize and avoids counting packets dropped during slow-start; this way, we only account for the steady-state behavior of the algorithms. The maximum number of flows is 2000, which is approximately the maximum number of flows that can be simulated on our hardware.
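The loss-rate computation over the measurement window can be sketched as follows. The sample data and the shape of the monitor output (lists of (time, cumulative count) pairs) are illustrative assumptions, not the actual NS-2 trace format:

```python
def packet_loss_rate(drops, arrivals, t_start=100.0, t_end=1000.0):
    """L_rate = P_d / P_a over the measurement interval [t_start, t_end].
    drops and arrivals are (time, cumulative_count) samples taken at the
    bottleneck router; using cumulative counters means slow-start losses
    before t_start are excluded by differencing."""
    def delta(samples):
        in_window = [c for (t, c) in samples if t_start <= t <= t_end]
        return in_window[-1] - in_window[0]
    p_a = delta(arrivals)
    return delta(drops) / p_a if p_a > 0 else 0.0

# Hypothetical counters: 900 drops out of 90000 arrivals in [100, 1000] s
drops    = [(0, 0), (100, 40), (500, 400), (1000, 940)]
arrivals = [(0, 0), (100, 10000), (500, 50000), (1000, 100000)]
print(packet_loss_rate(drops, arrivals))  # 0.01
```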

4.6.1 Efficiency

Figure 4.43 shows that all algorithms, with the exception of Min-delay CUBIC, have excellent link efficiency scalability, being able to fully utilize the link in every test. However, we observe a slight reduction of efficiency in the FAST TCP test with five flows. The reason is that FAST TCP can sometimes show unstable behavior, of which this test is an example, which can result in loss of efficiency and fairness. This is further discussed in section 4.6.2. Finally, as expected, Min-delay CUBIC sacrifices some efficiency due to its queue draining mechanism, resulting in lower efficiency than the other proposals. We observe that Min-delay CUBIC seems to lose some efficiency as the number of flows grows. We believe this may be related to the fact that, since there is a very large number of flows in the link, queuing delay will reach the threshold value

Figure 4.43: Link efficiency ratio with varying number of concurrent flows (efficiency vs. number of flows, 1 to 2000; BW 2000 Mbps, RTT [25, 160] ms, buffer 100% BDP; curves: CUBIC, FAST, MaxD CUBIC, MaxD CUBIC+lw, MinD CUBIC)

very quickly. This, in turn, forces flows to reduce their congestion windows more frequently in order to keep queues small, leading to the observed reduction in link efficiency.

4.6.2 Fairness

In figure 4.44, it can be observed that CUBIC's fairness index decreases as the number of flows increases, stabilizing around 0.7. This is essentially due to the effect of unsynchronized losses and to CUBIC not using an RTT fairness improvement mechanism. Even though this value is not very good, we note that CUBIC appears to be more scalable than FAST TCP, in the sense that its fairness stabilizes around 0.7 independently of the number of concurrent flows.

The fairness of FAST TCP, with up to 100 concurrent flows and with the exception of the five-flow experiment, is always very good. This is because flows are started simultaneously, and therefore the actual base RTT is known to all flows. If flows had been started at different times, the unfairness issues related to base RTT estimation, described in section 3.2, would arise. We see that with five concurrent flows, the fairness of FAST is very low. This is because one flow acquires a much larger throughput rate than the remaining four. We have observed this behavior of FAST TCP in several other simulations, but we have not found a satisfactory explanation for this issue. With 500 flows, FAST TCP's fairness performance is significantly reduced. The reason is that, in this scenario, FAST TCP needs approximately 270 flows to overflow the queues; recall that each FAST TCP flow aims to keep α packets in the queue of the bottleneck router. From this point on, FAST's performance appears to decrease as the number of flows increases.
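The ~270-flow figure can be checked with a back-of-envelope calculation. The assumptions below are ours (buffer sized to 100% of the BDP computed with the maximum 160 ms RTT, MSS-sized packets, and α = 100 packets per flow as in figure 4.45's discussion); the text does not state them explicitly:

```python
# Rough check of the "~270 flows needed to overflow the queue" figure.
link_bps = 2_000_000_000   # 2 Gb/s bottleneck
rtt_s = 0.160              # assumed: maximum RTT of the [25, 160] ms range
mss_bytes = 1460           # MSS from table 4.6
alpha = 100                # assumed: packets each FAST flow keeps queued

buffer_pkts = link_bps * rtt_s / (mss_bytes * 8)   # 100% BDP, in packets
flows_to_overflow = buffer_pkts / alpha
print(round(flows_to_overflow))  # ~274, consistent with the ~270 above
```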

Regarding Max-delay CUBIC, both variants show very good fairness performance up to 500 simultaneous flows. With 1000 concurrent flows, we can see that the fairness of Max-delay CUBIC with the low_window mechanism is slightly decreased. As the number of flows increases, the congestion window of each flow decreases. At some point, due to the existence of many flows in the link, some congestion windows will be smaller than low_window segments.


So, the loss-based congestion window control rule will override the delay-based rule, reducing fairness, because delay signals are ignored. However, fairness is still improved over CUBIC. Firstly, because of the fairness improvement mechanism; secondly because, if a flow starts behaving too aggressively, its congestion window will eventually grow larger than low_window, re-activating the delay-based component. This, in turn, prevents the flow from increasing its congestion window arbitrarily, because its congestion window increase rule now reacts to queuing delay signals. With 2000 flows and the low_window rule, Max-delay CUBIC operates almost entirely in loss-based mode. Since a large number of flows exist in the link, the probability of their congestion windows being smaller than low_window is greatly increased, causing flows to ignore delay signals. We can observe that the error rate increases, confirming that Max-delay CUBIC is working in loss-based mode (figure 4.46). However, fairness is still improved over CUBIC, for the same reasons given for the 1000-flow experiment. This also results in a reduced packet error rate compared to CUBIC, because flows with congestion windows larger than low_window immediately reduce them after enabling delay-based mode. In contrast, we can see that the fairness performance of Max-delay CUBIC without the low_window mechanism is consistently very good, confirming that, despite its usefulness, the low_window mechanism can result in reduced fairness performance with a large number of flows. Still, we note that, although the low_window mechanism can reduce fairness, the resulting performance is still an improvement over CUBIC.
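The low_window gating described above reduces to a simple predicate on the congestion window. This is a hypothetical sketch (function names are ours); the 40-segment value is the one mentioned later for Max-delay CUBIC without the mechanism disabled:

```python
LOW_WINDOW = 40  # segments; below this, delay samples are unreliable

def delay_rule_active(cwnd, low_window=LOW_WINDOW):
    """Hypothetical sketch of the low_window recommendation: the
    delay-based component only overrides the loss-based rule when
    the congestion window is large enough for reliable delay
    sampling."""
    return cwnd >= low_window

# With 2000 flows sharing a 2 Gb/s link, per-flow windows shrink
# below low_window and behavior degenerates to loss-based mode:
print(delay_rule_active(12))   # False: delay signals ignored
print(delay_rule_active(120))  # True: delay component re-enabled
```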

Min-delay CUBIC, with up to 100 simultaneous flows, shows lower fairness performance than Max-delay CUBIC and FAST TCP. This is because Min-delay CUBIC cannot use the RTT fairness improvement mechanism used by Max-delay CUBIC, and is not designed to be RTT-independent, like FAST TCP. Recall, however, that one of the reasons for the good performance of FAST TCP in this experiment is that flows were started simultaneously. Nonetheless, Min-delay CUBIC's fairness performance is improved over CUBIC's in every test. Additionally, Min-delay CUBIC shows a more consistent fairness performance than FAST TCP, indicating that it is more scalable in the fairness metric. This is observed in the 500 concurrent flows test and in subsequent tests. We observe that the fairness performance of Min-delay CUBIC decreases with 2000 concurrent flows. We believe that, with such a large number of flows, and since Min-delay CUBIC strives to keep queues small, congestion windows may become too small for reliable delay sampling. This would cause some flows not to detect all congestion signals, leading to a decrease in fairness.

4.6.3 Queue occupancy and packet loss

Figure 4.45 shows that CUBIC, being a high-speed loss-based congestion control algorithm, keeps the queue of the bottleneck router highly occupied. This more aggressive behavior is expected, but it has several drawbacks. High queue occupancy deteriorates the performance of numerous cross-traffic applications, such as VoIP, Web, and many types of interactive

Figure 4.44: Jain's Fairness Index with varying number of concurrent flows (fairness vs. number of flows, 1 to 2000; BW 2000 Mbps, RTT [25, 160] ms, buffer 100% BDP; curves: CUBIC, FAST, MaxD CUBIC, MaxD CUBIC+lw, MinD CUBIC)

traffic, by increasing delay and jitter. Additionally, figure 4.46 shows that CUBIC eventually overflows the queue, leading to an increase in packet loss. This is especially detrimental to interactive applications, since lost packets must be retransmitted, degrading the experience of users, who may perceive the network applications as "unresponsive".

FAST TCP, also as expected, shows that the occupancy of the bottleneck queue increases with the number of concurrent flows (figure 4.45). Recall that each flow aims to keep 100 packets queued at the bottleneck router. We notice that the test with one flow is an exception, in the sense that the queue occupancy is approximately equal to that of the test with 50 concurrent flows. This is explained by FAST TCP's α parameter, which appears to be insufficient to keep a stable queue usage behavior in this network scenario with a small number of flows. With only one concurrent flow, FAST TCP appears to remain in its multiplicative increase phase, and is therefore not able to reach a stable behavior. This behavior is eventually rectified as the number of concurrent flows increases, as observed in the queue usage pattern (figure 4.45). Finally, in figure 4.46, we observe that, with many concurrent flows, FAST TCP shows a concerning packet loss rate. With such a high packet loss rate, many types of network applications will be significantly degraded, due to the constant need to retransmit lost packets, which may even result in retransmission timeouts.

Max-delay CUBIC, with the low_window mechanism, keeps the queues small with up to 500 concurrent flows (figure 4.45). In subsequent tests, the bottleneck router's queue grows until it is fully occupied. This effect of the low_window mechanism is detailed in section 4.6.2. In figure 4.46, we observe that, in the 2000-flow experiment, packet loss eventually occurs. We note, however, that the packet loss rate is still lower than CUBIC's, and considerably lower than FAST TCP's.

Max-delay CUBIC, without the low_window mechanism, is able to keep queues small during all tests (figure 4.45). This is because it is not constrained by the low_window rule, and therefore still responds to delay signals when its congestion window falls below 40 segments.

Figure 4.45: Queue occupancy ratio with varying number of concurrent flows (occupancy vs. number of flows, 1 to 2000; BW 2000 Mbps, RTT [25, 160] ms, buffer 100% BDP; curves: CUBIC, FAST, MaxD CUBIC, MaxD CUBIC+lw, MinD CUBIC)

Figure 4.46: Packet error rate with varying number of concurrent flows (loss rate vs. number of flows, 1 to 2000; BW 2000 Mbps, RTT [25, 160] ms, buffer 100% BDP; curves: CUBIC, FAST, MaxD CUBIC, MaxD CUBIC+lw, MinD CUBIC)

Because of this, and even though it is not very apparent in figure 4.46, Max-delay CUBIC is able to keep a 0% packet loss rate during all tests. These results are interesting, because they indicate that Max-delay CUBIC can achieve excellent network efficiency (figure 4.43), while being very friendly to cross-traffic applications, even with a large number of concurrent flows.

Figure 4.45 shows that Min-delay CUBIC fulfils its primary goal of being very network friendly, even with a very large number of concurrent flows. This is achieved by keeping the bottleneck router's queue small during all tests. These results indicate that, by adopting a delay-based AIMD rule, as opposed to a Vegas-based mechanism, a congestion control proposal can show improved flow scalability. This results in a 0% packet loss rate in all tests, as observed in figure 4.46. Similarly to Max-delay CUBIC, Min-delay CUBIC shows that it can be very friendly to cross-traffic.

Summary

In this chapter, we have provided a detailed simulation study of the CUBIC, Max-delay CU-


BIC, and Min-delay CUBIC algorithms in different network scenarios with high bandwidth-delay products. The tests assessed the performance of the different algorithms, mainly in the metrics of efficiency, fairness, queue occupancy, packet loss rates, and scalability. The results show that, in the simulated scenarios, Max-delay CUBIC and Min-delay CUBIC can significantly improve the performance of the original CUBIC algorithm.


Chapter 5

Conclusions and future work

TCP's congestion control algorithms are widely recognized to perform poorly in next-generation networks. More specifically, TCP's major current challenges include efficiency in networks with high bandwidth-delay products, bandwidth allocation fairness, and performance in heterogeneous networks. In this context, it is easy to understand why better congestion control algorithms, able to respond to the challenges presented by next-generation networks, are very important.

In this thesis we give an overview of the two major approaches to network congestion control, the end-to-end and the router-assisted approaches, with a greater focus on the end-to-end approach. The end-to-end approach treats the network as a black box and can only use implicit network signals, i.e., packet loss and delay variations, to adjust the congestion window size. The main advantages of the end-to-end approach are simplicity and scalability, but these schemes are unable to know the true state of the network, and thus have limited performance compared to router-assisted proposals. On the other hand, in the router-assisted approach, sources can receive explicit feedback signals from routers in the network. Thus, these algorithms achieve superior performance and stability compared to end-to-end algorithms. Although the router-assisted approach can show superior results, network researchers have mainly focused on end-to-end algorithms, due to the complexity of deploying router-assisted algorithms in the Internet.

Due to its ease of implementation, deployability, and compatibility with existing TCP congestion control proposals in the current Internet, we have focused our work on the end-to-end approach. Accordingly, we explore existing, and introduce new, delay-based techniques with the aim of improving an existing loss-based, end-to-end congestion control algorithm. We have chosen CUBIC, the default congestion control algorithm in the Linux operating system, as a testbed to explore our ideas, due to its popularity and to the fact that no research to date has explored the application of delay-based techniques to this algorithm.

We have proposed two novel end-to-end congestion control algorithms: Max-delay CUBIC, a hybrid congestion control algorithm, and Min-delay CU-


BIC, a pure delay-based congestion control algorithm. Both algorithms build upon CUBIC, with the purpose of improving CUBIC's performance in the metrics of throughput fairness, queue occupancy and delay, and packet loss rates, while maintaining very good efficiency. Additionally, we have implemented our proposals in the Network Simulator 2 (NS-2) in order to evaluate their performance and compare them to other end-to-end congestion control proposals.

Using the NS-2 tool, we have provided an in-depth study of the performance of the CUBIC, Max-delay CUBIC, and Min-delay CUBIC algorithms in several next-generation network scenarios. Our simulation results confirm that an existing loss-based, end-to-end algorithm can leverage delay information to achieve superior performance in next-generation networks, especially in terms of fairness and scalability.

As future work, we would like to conduct further testing of our algorithms in additional network scenarios. For instance, we would like to simulate networks with more complex topologies, such as parking lot topologies, and with more complex network traffic patterns.

Additionally, we would also like to apply the devised procedures and mechanisms to other algorithms. One such algorithm is H-TCP, which introduces an RTT-independent window increase rule, and has predictable, window-dependent congestion epoch durations. This last characteristic makes H-TCP an interesting choice for mitigating the well-known fairness issues between delay-based and loss-based throughput allocation. This could possibly lead to a hybrid algorithm that mainly uses delay information, but could significantly improve fairness when sharing network resources with pure loss-based algorithms.

As a final topic of future work, we would like to implement our algorithms in the Linux kernel. We have already conducted a preliminary study of the congestion control architecture in Linux and believe that it would be possible to implement our ideas. This would allow us to test our algorithms in real-world scenarios and observe whether our results hold in a more concrete setting.


Appendix A

OTcl script example

# modified to use David Wei's Linux code
# D.J. Leith, Hamilton Institute, 2007.
# J. Poupino, Instituto Superior Tecnico, 2010.

# local functions

proc printtcp {fp tcp sink lastbytes rate T} {
    global ns dt tcpTick flowIsActive

    set now [$ns now]
    if {$T >= 1.0} {
        set rate [expr ([$sink set bytes_] - $lastbytes) * 8.0 / (1000000 * $T)]
        set lastbytes [$sink set bytes_]
        set T 0.0
    }

    set cubic_rtt [expr [$tcp set cubic_rtt_] * $tcpTick * 1000]
    set variance [expr [$tcp set rttvar_] / 4 * $tcpTick * 1000]
    set cwnd [$tcp set cwnd_]
    set id [$tcp set fid_]

    # variance fix
    if {$variance > 10000.0} {
        set variance 0.0
    }

    if {$flowIsActive($id) == 1} {
        puts $fp "[format %.2f $now] $cwnd [expr [$tcp set srtt_] / 8 * $tcpTick * 1000] [$sink set bytes_] $rate $cubic_rtt $variance"
        $ns at [expr $now + $dt] "printtcp $fp $tcp $sink $lastbytes $rate [expr $T + $dt]"
    }
}


proc print-tcpstats {tcp sink starttime label} {
    global ns dt

    set fp [open "$label.out" w]
    set lastbytes 0
    $ns at $starttime "printtcp $fp $tcp $sink 0 0 0"
}

proc printqueue {fp_bytes fp_drops fp_queue_err numNodes} {
    global qmon fmon ns dt

    set fcl [$fmon classifier] ;# flow classifier

    for {set i 1} {$i <= $numNodes} {incr i} {
        set drops$i 0
        set occupancy$i 0
        set bytes$i 0
        set loss$i 0
        set lossrate$i 0

        set flow [$fcl lookup auto 0 0 $i]
        if {$flow != ""} {
            set drops$i [$flow set pdrops_]
            set occupancy$i [expr [$flow set parrivals_] - [$flow set pdepartures_] - [$flow set pdrops_]]
            set bytes$i [$flow set bdepartures_]
            set loss$i [$flow set pdrops_]
            if {[$flow set parrivals_] > 1} {
                set lossrate$i [expr ("[$flow set pdrops_].0") / [$flow set parrivals_]]
            }
        }
    }

    set now [$ns now]
    set l [$qmon set pdrops_]
    set arrivals [$qmon set parrivals_]
    if {$numNodes == 1} {
        set loss2 $loss1
        set bytes2 $loss2
    }
    set s_drops "[format %.2f $now] $l $loss1 $loss2"
    set s_bytes "[format %.2f $now] [$qmon set pkts_] [$qmon set bdepartures_] $bytes1 $bytes2"
    set s_queue_err "[format %.2f $now] $l $arrivals"

    for {set i 3} {$i <= $numNodes} {incr i} {
        set s_drops [format "%s [set drops$i]" $s_drops]
        set s_bytes [format "%s [set bytes$i]" $s_bytes]
    }

    puts $fp_drops $s_drops
    puts $fp_bytes $s_bytes
    puts $fp_queue_err $s_queue_err

    $ns at [expr $now + $dt] "printqueue $fp_bytes $fp_drops $fp_queue_err $numNodes"
}

proc print-qstats {fname} {
    global ns
    global numNodes

    set fp_bytes [open ${fname}_bytes.out w]
    set fp_drops [open ${fname}_drops.out w]
    set fp_queue_err [open ${fname}_qError.out w]

    $ns at 0.0 "printqueue $fp_bytes $fp_drops $fp_queue_err $numNodes"
}

# get parameters ...
set alphaopt_1 [lindex $argv 0]
set alphaopt_2 [lindex $argv 1]
set TCP_Name1 [lindex $argv 0]
set TCP_Name2 [lindex $argv 1]
set bw [lindex $argv 2]
set rtt_1 [lindex $argv 3]
set rtt_2 [lindex $argv 4]
set qmax [lindex $argv 5]
set endtime [lindex $argv 7]
set stagger1 [lindex $argv 8]
set stagger2 [lindex $argv 9]
set iter [lindex $argv 10]
set label [lindex $argv 11]
set nslinux [lindex $argv 12]
set scale [lindex $argv 14]
set lambda [lindex $argv 15]
set numclients [lindex $argv 16]
set numNodes [lindex $argv 17]
set dt 1 ;# sampling interval (in seconds) used when logging cwnd etc.
set tcpTick 0.001
remove-all-packet-headers ;# removes all except common
add-packet-header Flags IP TCP ;# headers required for TCP

set ns [new Simulator]
# TODO: evaluate this expression
#ns-random 0
Agent/TCP set ecn_ 0
Agent/TCP set window_ 200000 ;# receiver's max window size
Agent/TCP set packetSize_ 1460 ;# in bytes
Agent/TCP set windowOption_ 1
Agent/TCP set tcpTick_ $tcpTick
Agent/TCP set timestamps_ 1
Agent/TCPSink set timestamps_ 1


# Limited slow-start - do not set this for min-delay variant!
#Agent/TCP set max_ssthresh_ 100

# Delay threshold
Agent/TCP set cubic_delay_threshold_ 100

$defaultRNG seed [expr $iter * 5000]
#$defaultRNG seed [expr 3 * 5000]

# set up topology ...

# routers
set r1 [$ns node]
set r2 [$ns node]
set r3 [$ns node]

# link r1-r2-r3
$ns duplex-link $r1 $r2 [expr $bw*4]Mb 1ms DropTail
#$ns duplex-link $r2 $r3 [expr $bw*4]Mb 1ms DropTail

# sink node
#set nsink [$ns node]

# add source nodes
for {set i 1} {$i <= $numNodes} {incr i} {
    set n($i) [$ns node]
    set n_sink($i) [$ns node]

    # delay is RTT/2 - 3; 3 is used because each hop adds 1ms latency

    # Random RTT (Uniform)
    # TODO: check RNG seed status
    set rtts($i) [expr int([$defaultRNG uniform [expr $rtt_2 - 1] [expr $rtt_1 + 1]])]

    # Deterministic RTT
    #if {$i <= [expr $numNodes / 2] || $i == 1} {
    #    set rtts($i) $rtt_1
    #} else {
    #    set rtts($i) $rtt_2
    #}

    # For long distance file transfer
    #set rtts(1) 160

    $ns duplex-link $n($i) $r1 [expr $bw*4]Mb [expr $rtts($i)/2 - 3]ms DropTail
    $ns duplex-link $n_sink($i) $r3 [expr $bw*4]Mb 1ms DropTail
}

$ns duplex-link $r2 $r3 [expr $bw]Mb 1ms DropTail


# error model
set em [new ErrorModel]
$em unit pkt
# Fiber optic link error rate
$em set rate_ 0.0000001 ;# 10^-7
$em ranvar [new RandomVariable/Uniform]
$em drop-target [new Agent/Null]
$ns link-lossmodel $em $r2 $r3

# Reduce bottleneck link capacity
#$ns at 600 {
#    puts "Changing link capacity."
#    set BDP_reduce [expr $bw*0.5*1000*160/1500/8]
#    set q_reduce [expr $qmax*$BDP_reduce+1]
#    $ns bandwidth $r2 $r3 [expr $bw*0.5]Mb duplex
#    $ns queue-limit $r2 $r3 $q_reduce
#    $ns queue-limit $r3 $r2 $q_reduce
#}

#$ns at 900 {
#    puts "Changing link capacity."
#    $ns bandwidth $r2 $r3 [expr $bw]Mb duplex
#    $ns queue-limit $r2 $r3 $q
#    $ns queue-limit $r3 $r2 $q
#}

#set BDP [expr $bw*1000*$rtt_1/1500/8]
# BDP assumes D = 160 ms
set BDP [expr $bw*1000*160/1500/8]
puts "BDP: $BDP"
set q [expr $qmax*$BDP+1]
puts "qmax: $q"
if {$q < 2} {set q 2}
puts "queue size: $q"
$ns queue-limit $r2 $r3 $q
$ns queue-limit $r3 $r2 $q

set qmon [$ns monitor-queue $r2 $r3 ""]
set fmon [$ns makeflowmon Fid]
$ns attach-fmon [$ns link $r2 $r3] $fmon
print-qstats "q_$label"

# source nodes TCP parameters
for {set i 1} {$i <= $numNodes} {incr i} {
    if {$nslinux < 1} {
        Agent/TCP set windowOption_ $alphaopt_1

        set tcp [new Agent/TCP/Sack1]
        #set tcp [new Agent/TCP/Fast]
    } else {
        set tcp [new Agent/TCP/Linux]


        $ns at 0 "$tcp select_ca $TCP_Name1"
    }
    set sink [new Agent/TCPSink/Sack1/DelAck]
    #set sink [new Agent/TCPSink/Sack1]

    #set tcp [new Agent/TCP]
    set tcp [new Agent/TCP/Sack1]
    #set tcp [new Agent/TCP/Fast]
    #set tcp [new Agent/TCP/Vegas]
    #set tcp [new Agent/TCP/Linux]
    #$ns at 0 "$tcp select_ca illinois"
    $tcp set fid_ $i

    # Specify RTTs
    $tcp set cubic_real_baseRTT_ $rtts($i)

    $ns attach-agent $n($i) $tcp
    $ns attach-agent $n_sink($i) $sink
    $ns connect $tcp $sink
    set ftp [[set tcp] attach-app FTP]

    set starttime 0

    if {$i <= [expr $numNodes / 2] || $i == 1} {
        set starttime $stagger1
    } else {
        set starttime $stagger2
    }

    $ns at $starttime "[set ftp] start"
    $ns at $starttime "set flowIsActive($i) 1"
    #if {$i <= [expr $numNodes / 2] || $i == 1} {
    #    #$ns at [expr 1200 + $i - 1] "[set ftp] stop"
    #    #$ns at [expr 1200 + $i] "set flowIsActive($i) 0"
    #    $ns at 1200 "[set ftp] stop"
    #    $ns at [expr 1200 + 1] "set flowIsActive($i) 0"
    #} else {
    $ns at $endtime "[set ftp] stop"
    $ns at $endtime "set flowIsActive($i) 0"
    #}

    print-tcpstats [set tcp] [set sink] $starttime tcp${i}_$label ;# dump cwnd to file

    $ns at $starttime "puts \"flow $i starting...\""

    if {$alphaopt_1 >= 14 && $alphaopt_1 != 15} {
        $tcp set pace_packet_ 1
        #$tcp set cubic_fast_convergence_ 0
    }
}


for {set i 0} {[expr 0 + ($i*10)] < $endtime} {incr i} {
    set printtime [expr $i*10]
    $ns at [expr 0 + $printtime] "puts $printtime"
}

$ns at [expr $endtime + 2.0] "exit 0"

puts "Starting Simulation..."
$ns run


Appendix B

Example benchmark scripts

B.1 Jain’s Fairness Index calculation

#!/usr/bin/env python

import os, math

def findFiles(path):
    files = os.listdir(path)
    flowFiles = []
    for f in files:
        if (f.startswith('tcp') and f.endswith('.out')):
            flowFiles.append(f)
    return flowFiles

def getLabel(flows):
    labelStart = flows[0].find('rtt')
    labelEnd = flows[0].find('.out')
    return flows[0][labelStart:labelEnd]

def parseFlows(flows):
    flowInfo = dict()
    for flow in flows:
        flowId = int(flow[3:flow.find('_rtt')])
        f = open(flow, "rb")
        for line in f.readlines():
            explodedLine = line.split()
            time = float(explodedLine[0])
            # TODO: check for precision and rounding errors
            rate = float(explodedLine[4])
            # if this time entry doesn't yet exist, create it
            if (not flowInfo.has_key(time)):
                flowInfo[time] = {flowId: rate}
            else:
                flowInfo[time][flowId] = rate
        f.close()
    return flowInfo

def jainFairnessIndex(time, flowInfo):
    # calculate numerator
    numRes = 0.0
    # SUM(xi)
    for key in flowInfo[time].keys():
        numRes += flowInfo[time][key]
    # SUM(xi)^2
    numRes = math.pow(numRes, 2)

    # calculate denominator
    n = float(len(flowInfo[time].keys()))
    denRes = 0.0
    # SUM(xi^2)
    for key in flowInfo[time].keys():
        denRes += math.pow(flowInfo[time][key], 2)
    # n * SUM(xi^2)
    denRes *= n

    if (denRes == 0):
        fairness = 0
    else:
        fairness = numRes / denRes

    return fairness

def createData(outFile, flowInfo):
    for key in sorted(flowInfo.iterkeys()):
        outFile.write("%0.2f %0.6f\n" % (key, jainFairnessIndex(key, flowInfo)))

# find out how many flow files exist
flowFiles = findFiles('.')
numFlows = len(flowFiles)

# parse flow info
flowInfo = parseFlows(flowFiles)

# get label
label = getLabel(flowFiles)

outFile = open("fairness_%s.out" % label, "wb")
createData(outFile, flowInfo)
outFile.close()
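As a quick sanity check of the metric this script computes, recall that Jain's index J = (SUM xi)^2 / (n * SUM xi^2) equals 1 when all n flows receive the same rate and approaches 1/n when a single flow monopolizes the link. A minimal standalone illustration (Python 3, independent of the Python 2 script above; the sample rates are made up):

```python
def jain_index(rates):
    # J = (sum x_i)^2 / (n * sum x_i^2); bounded by 1/n <= J <= 1
    n = len(rates)
    sq_of_sum = sum(rates) ** 2
    sum_of_sq = sum(r * r for r in rates)
    return sq_of_sum / (n * sum_of_sq) if sum_of_sq else 0.0

print(jain_index([10.0, 10.0, 10.0, 10.0]))  # 1.0 (perfectly fair)
print(jain_index([30.0, 10.0]))              # 0.8
```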

B.2 Link Efficiency calculation

#!/usr/bin/env python

import os, math, sys

def findFiles(path):
    files = os.listdir(path)
    flowFiles = []
    for f in files:
        if (f.startswith('q_') and f.endswith('_bytes.out')):
            flowFiles.append(f)
    return flowFiles

def getLabel(flows):
    labelStart = flows[0].find('rtt')
    labelEnd = flows[0].find('_bytes.out')
    return flows[0][labelStart:labelEnd]

def getBW(label):
    return label[label.find('bw')+2:label.find('_', label.find('bw'))]

def getIter(label):
    return label[-1]

def parseQueueStats(queueStats):
    queueInfo = dict()
    f = open(queueStats[0], "r")
    for line in f.readlines():
        explodedLine = line.split()
        time = float(explodedLine[0])
        totalBytes = int(explodedLine[2])
        queueInfo[time] = totalBytes
    f.close()
    return queueInfo

def totalGoodput(queueInfo, time):
    return queueInfo[time]

def totalRate(queueInfo, time):
    if (not queueInfo.has_key(time)):
        return 0

    if (time == 0):
        return queueInfo[0]

    rate = queueInfo[time] - queueInfo[time - 1]  # B/s
    rate = rate * 8 / 1000000  # Mbit/s

    return rate

def createGoodputSummary(outFile, queueInfo, bw, i, simTime):
    maxTeoGoodputI = float(bw) * 10**6 / 8.0
    maxTeoGoodput = float(maxTeoGoodputI * simTime)
    goodput = totalGoodput(queueInfo, simTime - 1)
    print goodput
    print maxTeoGoodput
    efficiency = float(goodput / maxTeoGoodput)
    goodputM = goodput / 1024.0 / 1024.0
    goodputG = float(goodputM / 1024.0)
    outFile.write("bw: %s iter: %s efficiency: %0.2f %d B - %d MB - %0.2f GB (theoretical: %d)\n" % \
        (bw, i, efficiency, goodput, goodputM, goodputG, maxTeoGoodput))

def adjustTeoGoodput(simTime, time, bw):
    maxTeoIGoodput = float(bw)
    return maxTeoIGoodput

def createInstantGoodputSummary(outfile, flowInfo, bw, i, time):
    rate = 0
    for t in range(1, int(time)):
        rate = totalRate(queueInfo, t)
        result = "%i %0.2f #%0.2f\n" % (t, min(1.0, float(rate / adjustTeoGoodput(time, t, bw))), rate)
        print(result),
        outfile.write(result)

# find out how many flow files exist
queueFiles = findFiles('.')
numFiles = len(queueFiles)

# find SimTime
simTime = float(sys.argv[1])
# parse flow info
queueInfo = parseQueueStats(queueFiles)

# get label
label = getLabel(queueFiles)
#fileName = label[0:len(label)-2] + ".txt"

totalStatsOutFile = open("router_goodput-stats.txt", "ab")
createGoodputSummary(totalStatsOutFile, queueInfo, getBW(label), getIter(label), simTime)
totalStatsOutFile.close()
instantStatsOutFileName = "router-efficiency_" + label + ".out"
instantStatsOutFile = open(instantStatsOutFileName, "wb")
createInstantGoodputSummary(instantStatsOutFile, queueInfo, getBW(label), getIter(label), simTime)
instantStatsOutFile.close()
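The per-second efficiency reported by this script is simply the rate derived from the cumulative departed-bytes counter, divided by the nominal link capacity and clamped at 1. A small standalone sketch of the same computation (Python 3; the byte counters below are made up for illustration):

```python
def instantaneous_efficiency(bytes_by_second, t, capacity_mbit):
    # rate over [t-1, t] in Mbit/s, from cumulative byte counters
    rate = (bytes_by_second[t] - bytes_by_second[t - 1]) * 8 / 1e6
    # efficiency is the fraction of capacity in use, at most 1
    return min(1.0, rate / capacity_mbit)

# cumulative bytes departed at t = 0, 1, 2 s on a 10 Mbit/s link
counters = {0: 0, 1: 1250000, 2: 2250000}
print(instantaneous_efficiency(counters, 1, 10.0))  # 1.0 (10 Mbit/s)
print(instantaneous_efficiency(counters, 2, 10.0))  # 0.8 (8 Mbit/s)
```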


Bibliography

[1] 3GPP Technical Specification, TS 25.308, Version 5.9.0, Release 5. UE Radio Access capabilities definition.

[2] A. Aggarwal, S. Savage, and T. Anderson. Understanding the Performance of TCP Pacing. In INFOCOM, pages 1157–1165, 2000.

[3] I. Akyildiz, G. Morabito, and S. Palazzo. TCP-Peach: A New Congestion Control Scheme for Satellite IP Networks. IEEE/ACM Transactions on Networking, 9(3):307–321, June 2001.

[4] M. Allman, V. Paxson, and W. Stevens. TCP Congestion Control. IETF RFC 2581, April 1999.

[5] A. Almeida and C. Belo. Analysis and Implementation of a Stateless Congestion Control Algorithm. In SPECTS'05, July 2005.

[6] A. Baiocchi, A. Castellani, and F. Vacirca. YeAH-TCP: Yet Another Highspeed TCP. In proc. International Workshop on Protocols for Fast Long-Distance Networks, Marina del Rey, California, USA, February 2007.

[7] Hamilton Institute TCP benchmark suite. http://www.hamilton.ie/net/tcptesting.zip.

[8] S. Biaz and N.H. Vaidya. Is the Round-trip Time Correlated with the Number of Packets in Flight? In Proc. Internet Measurement Conference, pages 273–278. ACM Press, 2003.

[9] B. Braden, D. Clark, and J. Crowcroft. Recommendations on Queue Management and Congestion Avoidance in the Internet. IETF RFC 2309, April 1998.

[10] L. Brakmo and L. Peterson. TCP Vegas: End to End Congestion Avoidance on a Global Internet. IEEE Journal on Selected Areas in Communications, 13(8):1465–1480, October 1995.

[11] C. Caini and R. Firrincieli. TCP Hybla: a TCP enhancement for heterogeneous networks. International Journal of Satellite Communications and Networking, 22(5):547–566, September 2004.


[12] C. Casetti, M. Gerla, S. Mascolo, M.Y. Sanadidi, and R. Wang. TCP Westwood: End-to-End Congestion Control for Wired/Wireless Networks. Wireless Networks Journal, 8:467–479, 2002.

[13] D. Chiu and R. Jain. Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks. Computer Networks and ISDN Systems, 17(1):1–14, June 1989.

[14] BIC and CUBIC Web page. http://netsrv.csc.ncsu.edu/twiki/bin/view/Main/BIC.html.

[15] D.J. Leith, R.N. Shorten, G. McCullagh, L. Dunn, and F. Baker. Making Available Base-RTT for Use in Congestion Control Applications. Communications Letters, IEEE, 12(6):429–431, June 2008.

[16] J. Kenney (editor). Traffic Management Specification, Version 4.1. ATM Forum Technical Committee, AF-TM-121.000, March 1999.

[17] A. Falk and D. Katabi. Specification for the Explicit Control Protocol (XCP). IETF Internet Draft, November 2006.

[18] FAST SOFT Web page - Acceleration Demo. http://www.fastsoft.com/dynamic-site-acceleration-demo/.

[19] S. Floyd. HighSpeed TCP for Large Congestion Windows. IETF RFC 3649, December 2003.

[20] S. Floyd. Limited Slow-Start for TCP with Large Congestion Windows. IETF RFC 3742, March 2004.

[21] S. Floyd and K. Fall. Promoting the Use of End-to-End Congestion Control in the Internet. IEEE/ACM Transactions on Networking, 7:458–472, 1999.

[22] S. Floyd and T. Henderson. The NewReno Modification to TCP's Fast Recovery Algorithm. IETF RFC 2582, April 1999.

[23] S. Floyd and V. Jacobson. Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking, 1(4):397–413, August 1993.

[24] C.P. Fu and S.C. Liew. TCP Veno: TCP Enhancement for Transmission over Wireless Access Networks. Selected Areas in Communications, IEEE Journal on, 21(2):216–228, February 2003.

[25] Gnuplot Web page. http://www.gnuplot.info/.

[26] S. Ha, I. Rhee, and L. Xu. CUBIC: a New TCP-Friendly High-Speed TCP Variant. SIGOPS Operating Systems Review, 42(5):64–74, 2008.


[27] T. Hatano, M. Fukuhara, H. Shigeno, and K. Okada. TCP Congestion Control Realizing Fairness with TCP Reno in High Speed Networks. IEIC Technical Report (Institute of Electronics, Information and Communication Engineers), 103(198):19–24.

[28] IEEE. ANSI/IEEE 802.11, Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, 1999.

[29] International Telecommunication Union (ITU-T). Recommendation G.984.1 "Gigabit-capable Passive Optical Networks (GPON): General characteristics", March 2003.

[30] V. Jacobson. Congestion Avoidance and Control. In proc. ACM SIGCOMM, pages 314–329, Stanford, CA, August 1988.

[31] R. Jain. A Delay-Based Approach for Congestion Avoidance in Interconnected Heterogeneous Computer Networks. ACM Computer Communication Review, 19(5):56–71, October 1989.

[32] C. Jin, D. Wei, and S. Low. FAST TCP: Motivation, Architecture, Algorithms and Performance. In proc. IEEE INFOCOM, Hong Kong, March 2004.

[33] K. Kaneko, T. Fujikawa, Z. Su, and J. Katto. TCP-Fusion: A Hybrid Congestion Control Algorithm for High-Speed Networks. In proc. International Workshop on Protocols for Fast Long-Distance Networks, Marina del Rey, California, USA, February 2007.

[34] T. Kelly. Scalable TCP: Improving Performance in Highspeed Wide Area Networks. Computer Communication Review, 33(2):83–91, April 2003.

[35] S. Keshav. Packet-Pair Flow Control. IEEE/ACM Transactions on Networking, 1995.

[36] R. King, R. Baraniuk, and R. Riedi. TCP-Africa: An Adaptive and Fair Rapid Increase Rule for Scalable TCP. In proc. IEEE INFOCOM, Miami, FL, March 2005.

[37] A. Kuzmanovic and E.W. Knightly. TCP-LP: Low-Priority Service Via End-Point Congestion Control. Networking, IEEE/ACM Transactions on, 14(4):739–752, August 2006.

[38] Cable Television Laboratories. Data-Over-Cable Service Interface Specifications 3.0 - SP-MULPIv3.0, October 2009.

[39] D. Leith, L. Andrew, T. Quetchenbach, and R. Shorten. Experimental Evaluation of Delay/Loss-based TCP Congestion Control Algorithms. In proc. International Workshop on Protocols for Fast Long-Distance Networks, Manchester, UK, March 2008.

[40] D. Leith, J. Heffner, R. Shorten, and G. McCullagh. Delay-Based AIMD Congestion Control. In Proc. Workshop on Protocols for Fast Long Distance Networks, Los Angeles, USA, 2007.

[41] D. Leith and R. Shorten. H-TCP Protocol for High-Speed Long Distance Networks. In proc. International Workshop on Protocols for Fast Long-Distance Networks, Argonne, Illinois, USA, February 2004.


[42] D. Leith, R. Shorten, and G. McCullagh. Experimental Evaluation of Cubic TCP. In proc. International Workshop on Protocols for Fast Long-Distance Networks, Los Angeles, USA, 2007.

[43] W. Leland, M. Taqqu, W. Willinger, and D. Wilson. On the Self-Similar Nature of Ethernet Traffic (Extended Version). IEEE/ACM Transactions on Networking, 2(1):1–15, February 1994.

[44] Y.T. Li, D. Leith, and R.N. Shorten. Experimental Evaluation of TCP Protocols for High-Speed Networks. IEEE/ACM Transactions on Networking, 15(5):1109–1122, 2007.

[45] S. Liu, T. Başar, and R. Srikant. TCP-Illinois: A Loss and Delay-Based Congestion Control Algorithm for High-Speed Networks. In proc. International Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS), Pisa, Italy, October 2006.

[46] C. Marcondes, M.Y. Sanadidi, M. Gerla, and H. Shimonishi. TCP Adaptive Westwood – Combining TCP Westwood and Adaptive Reno: A Safe Congestion Control Proposal. In Communications, 2008. ICC '08. IEEE International Conference on, pages 5569–5575, May 2008.

[47] J. Martin, A. Nilsson, and I. Rhee. Delay-Based Congestion Avoidance for TCP. IEEE/ACM Transactions on Networking, 11(3):356–369, June 2003.

[48] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP Selective Acknowledgment Options. IETF RFC 2018, October 1996.

[49] S. Molnár, B. Sonkoly, and T. A. Trinh. A Comprehensive TCP Fairness Analysis in High Speed Networks. Computer Communications, 32(13-14):1460–1484, August 2009.

[50] The Network Simulator 2 (NS-2) Web page. http://www.isi.edu/nsnam/ns/.

[51] FAST TCP simulator module for NS-2 Web page. http://www.cubinlab.ee.unimelb.edu.au/ns2fasttcp/.

[52] A TCP pacing implementation for NS-2. http://netlab.caltech.edu/projects/ns2tcplinux/ns2pacing/index.html.

[53] OTcl Web page. http://otcl-tclcl.sourceforge.net/.

[54] J. Postel. Transmission Control Protocol. IETF RFC 793, September 1981.

[55] R.S. Prasad, M. Jain, and C. Dovrolis. On the Effectiveness of Delay-Based Congestion Avoidance. In Second International Workshop on Protocols for Fast Long-Distance Networks, 2004.

[56] The Python Programming Language Web page. http://www.python.org/.


[57] K. Ramakrishnan, S. Floyd, and D. Black. The Addition of Explicit Congestion Notification (ECN) to IP. IETF RFC 3168, September 2001.

[58] D. Reed, J. Saltzer, and D. Clark. Active networking and end-to-end arguments, 1998.

[59] M. Belshe (Google Research). More Bandwidth Does not Matter (much). http://sites.google.com/a/chromium.org/dev/spdy, 2010.

[60] S. Rewaskar, J. Kaur, and D. Smith. Why Don't Delay-Based Congestion Estimators Work in the Real-World. Technical report, Department of Computer Science, UNC Chapel Hill, July.

[61] K. Shi, Y. Shu, O. Yang, and J. Luo. Receiver Assistant Congestion Control in High Speed and Lossy Networks. In Real Time Conference, 2009. RT '09. 16th IEEE-NPSS, pages 91–95, May 2009.

[62] H. Shimonishi, T. Hama, and T. Murase. TCP-Adaptive Reno for Improving Efficiency-Friendliness Tradeoffs of TCP Congestion Control Algorithm. In proc. International Workshop on Protocols for Fast Long-Distance Networks, Nara, Japan, February 2006.

[63] K. Tan, J. Song, Q. Zhang, and M. Sridharan. A Compound TCP Approach for High-speed and Long Distance Networks. In proc. IEEE INFOCOM, Barcelona, Spain, April 2006.

[64] C. Villamizar and C. Song. High Performance TCP in ANSNET. ACM SIGCOMM Computer Communications Review, 24(5):45–60, October 1994.

[65] R. Wang, K. Yamada, M. Sanadidi, and M. Gerla. TCP With Sender-Side Intelligence to Handle Dynamic, Large, Leaky Pipes. IEEE Journal on Selected Areas in Communications, 23(2):235–248, February 2005.

[66] Z. Wang and J. Crowcroft. A New Congestion Control Scheme: Slow Start and Search (Tri-S). ACM Computer Communication Review, 21(1):32–43, January 1991.

[67] Z. Wang and J. Crowcroft. Eliminating Periodic Packet Losses in 4.3-Tahoe BSD TCP Congestion Control Algorithm. ACM Computer Communication Review, 22(2):9–16, April 1992.

[68] D. Wei. TCP Pacing Revisited. In INFOCOM, 2006.

[69] D. Wei, C. Jin, S. Low, and S. Hegde. FAST TCP: Motivation, Architecture, Algorithms, Performance. IEEE/ACM Transactions on Networking, 14(6):1246–1259, 2006.

[70] X. Wu. A Simulation Study of Compound TCP. Technical report, National University of Singapore, July 2008.

[71] X. Wu. Safely Speeding Up Bandwidth-greedy and Elastic Applications of the Internet. Technical report, National University of Singapore, April 2009.


[72] X. Wu, M.C. Chan, A.L. Ananda, and C. Ganjihal. Sync-TCP: A New Approach to High Speed Congestion Control. In Network Protocols, 2009. ICNP 2009. 17th IEEE International Conference on, pages 181–192, October 2009.

[73] L. Xu, K. Harfoush, and I. Rhee. Binary Increase Congestion Control (BIC) for Fast Long-Distance Networks. In INFOCOM 2004. Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies, volume 4, pages 2514–2524, March 2004.

[74] G. Xylomenos, G. Polyzos, and P. Mahonen. TCP Performance Issues over Wireless Links. IEEE Communications Magazine, 39(4):52–58, April 2001.