Impact of TCP Congestion Control on Bufferbloat in Cellular Networks

Stefan Alfredsson 1, Giacomo Del Giudice 2, Johan Garcia 1, Anna Brunstrom 1, Luca De Cicco 2,3, Saverio Mascolo 2
1 Karlstad University, Sweden, 2 Politecnico di Bari, Italy, 3 École Supérieure d'Électricité, France
{Stefan.Alfredsson, Johan.Garcia, Anna.Brunstrom}@kau.se, dggiacomo@gmail.com, ldecicco@gmail.com, mascolo@poliba.it

Abstract—The existence of excessively large and overly full network buffers, known as bufferbloat, has recently gained attention as a major performance problem for delay-sensitive applications. One important network scenario where bufferbloat may occur is cellular networks. This paper investigates the interaction between TCP congestion control and buffering in cellular networks. Extensive measurements have been performed in commercial 3G, 3.5G and 4G cellular networks, with a mix of long and short TCP flows using the CUBIC, NewReno and Westwood+ congestion control algorithms. The results show that the completion times of short flows increase significantly when concurrent long-flow traffic is introduced. This is caused by increased buffer occupancy from the long flows. In addition, for 3G and 3.5G the completion times are shown to depend strongly on the congestion control algorithm used for the background flows, with CUBIC leading to significantly larger completion times.

I. INTRODUCTION

Long queues and additional buffering in the network can be used to increase link utilization and reduce download times. Recently, however, there has been growing awareness within the networking community that too much buffering may cause problems for delay-sensitive applications. Excessively large and often full buffers, referred to as "bufferbloat", are now recognized as a serious problem in the Internet [1]. Widespread severe over-buffering has also been reported for several parts of the Internet [2], [3], [4], [5].
Bufferbloat results in significantly reduced responsiveness of applications because of excess buffering of packets within the network. It causes high latency and can also introduce appreciable jitter [6]. This is particularly problematic for short TCP flows such as Web traffic, and for real-time interactive UDP traffic such as VoIP. When such traffic shares resources with greedy TCP transfers, its packets end up at the tail of a full transmission buffer and experience an increased delay that can severely degrade user performance [7].

Cellular networks are becoming an increasingly important Internet access technology. To accommodate varying data rates over time-varying wireless channels, they are also normally provisioned with large buffers [8], [9]. The fact that cellular networks typically employ individual buffer space for each user [9], [10], in combination with a low level of user multitasking over cellular connections, has in the past limited the impact of these buffers on user performance. However, with the emergence of more and more powerful smartphones, as well as the increasing use of cellular broadband connections for residential Internet access, multitasking over cellular connections is today becoming common. This makes bufferbloat in cellular networks an increasingly important problem. The recent study by Jiang et al. [5] also confirms that bufferbloat can lead to round-trip times (RTTs) on the order of seconds in cellular networks.

The extent of buffer buildup is determined by the rate of incoming packets versus the rate of outgoing packets. Standard TCP congestion control probes the available bandwidth by injecting packets into the network until there is packet loss, which for tail-drop queuing happens when buffers are full. The way buffers fill up is thus highly dependent on the transport protocol behavior and varies between different TCP congestion control algorithms.
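The buildup mechanism just described can be illustrated with a minimal drop-tail queue model. This is only a sketch: the link rate, buffer size, and additive probing policy are hypothetical illustrative parameters, not values measured in this paper.

```python
# Minimal drop-tail queue sketch: a sender keeps increasing its rate until
# the first loss; the bottleneck buffer fills up, and queuing delay grows
# with buffer occupancy. All parameters are illustrative assumptions.

LINK_RATE = 10      # packets drained per time step (hypothetical)
BUFFER_SIZE = 100   # packets the bottleneck buffer can hold (hypothetical)

def simulate(steps):
    queue, send_rate, drops = 0, 1, 0
    delays = []
    for _ in range(steps):
        space = BUFFER_SIZE - queue
        accepted = min(send_rate, space)
        drops += send_rate - accepted        # tail drop once the buffer is full
        queue = max(0, queue + accepted - LINK_RATE)
        delays.append(queue / LINK_RATE)     # queuing delay ~ occupancy / rate
        if drops == 0:
            send_rate += 1                   # probe upward until the first loss
    return delays, drops

delays, drops = simulate(60)
print(f"final queuing delay: {delays[-1]:.1f} steps, total drops: {drops}")
```

The sketch shows the key point of the paragraph above: with tail-drop queuing, loss only occurs once the buffer is full, so by the time the sender backs off, the queue (and hence the delay) is already at its maximum.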
For example, TCP CUBIC [11] aggressively probes for the available bandwidth, leading to a high average buffer utilization, whereas TCP Westwood+ [12] clears the buffers when congestion episodes occur, leading, on average, to a reduced buffer occupancy.

In this paper we examine the interaction between the TCP congestion control algorithms used and bufferbloat in 3G/4G cellular networks. Three congestion control algorithms are considered: TCP NewReno, TCP CUBIC and TCP Westwood+. We present an extensive measurement study performed within the 3G (UMTS), 3.5G (HSPA+) and 4G (LTE) networks of one of the leading commercial providers in Sweden, involving more than 1800 individual measurements. In our measurements we study how the response time of a Web transfer is affected by varying levels of competing background traffic, and how the congestion control algorithms used affect performance.

Our results indicate that the 3G and 3.5G networks suffer from severe bufferbloat. When background traffic is introduced, the Web response time sometimes increases by more than 500%. Furthermore, the congestion control algorithm used for the background flow has a significant impact on the Web response time: the more aggressive congestion control used by CUBIC roughly doubles the Web response time compared to Westwood+. For low bandwidths (i.e., 3G) the congestion control algorithm used by the Web flow also has a significant impact on performance. In the studied 4G network, bufferbloat is less of a problem.

The remainder of the paper is organized as follows. Section II gives further background on the congestion control algorithms used and details what sets our work apart from related measurement studies.
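The contrast between CUBIC's aggressive probing and Westwood+'s buffer-clearing loss response, described at the start of this section, can be sketched as follows. The CUBIC constants are taken from RFC 8312; the packet-based units and the example bandwidth and RTT figures are illustrative assumptions, not the paper's measurement settings.

```python
# CUBIC constants from RFC 8312: C scales the cubic growth, and after a
# loss the window is reduced to BETA * w_max (a 30% reduction).
C, BETA = 0.4, 0.7

def cubic_window(t, w_max):
    """CUBIC congestion window (packets) t seconds after a loss at w_max."""
    k = (w_max * (1 - BETA) / C) ** (1 / 3)  # time to climb back to w_max
    return C * (t - k) ** 3 + w_max          # concave up to w_max, then convex

def westwood_cwnd_after_loss(bwe_pkts_per_s, rtt_min_s):
    """Westwood+ sets cwnd to the estimated bandwidth-delay product after a
    loss, which tends to drain the bottleneck queue rather than refill it."""
    return bwe_pkts_per_s * rtt_min_s

w_max = 100
print(f"CUBIC window right after loss: {cubic_window(0, w_max):.0f} packets")
print(f"CUBIC window 10 s after loss: {cubic_window(10, w_max):.0f} packets")
print(f"Westwood+ cwnd for 1000 pkt/s and 50 ms RTTmin: "
      f"{westwood_cwnd_after_loss(1000, 0.05):.0f} packets")
```

The cubic function flattens near the previous loss point `w_max` and then grows past it, which keeps the buffer near full, while the Westwood+ rule targets the bandwidth-delay product, leaving the queue roughly empty after each congestion episode.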