Why Multipath TCP Degrades Throughput Under Insufficient Send Socket Buffer and Differently Delayed Paths

Toshihiko Kato, Adhikari Diwakar, Ryo Yamamoto, and Satoshi Ohzahata
Graduate School of Informatics and Engineering, University of Electro-Communications, Tokyo, Japan
e-mail: kato@net.lab.uec.ac.jp, diwakaradh@net.lab.uec.ac.jp, ryo-yamamoto@uec.ac.jp, ohzahata@uec.ac.jp

Abstract— Recently, the Multipath Transmission Control Protocol (MPTCP) has come into wide use. It allows more than one TCP connection, established over different paths, to compose a single MPTCP communication. Our previous papers pointed out that an insufficient send socket buffer makes the throughput worse than that of single-path TCP when the subflows have different transmission delays. Although those papers gave a detailed analysis of the throughput degradation, focusing on the relationship between the send socket buffer size and the delay, they did not clarify the cause of the degradation. This paper investigates the Linux MPTCP software and the details of MPTCP communication, and clarifies why an insufficient send socket buffer degrades MPTCP throughput.

Keywords- multipath TCP; send socket buffer; head-of-line blocking.

I. INTRODUCTION

Recently, mobile terminals with multiple interfaces have come into wide use. For example, most smartphones are equipped with interfaces for 4G Long Term Evolution (LTE) and Wireless LAN (WLAN). In order for applications to use multiple interfaces effectively, Multipath TCP (MPTCP) is being introduced in several operating systems, such as Linux, Apple macOS/iOS, and Android.

MPTCP is defined in three RFC documents by the Internet Engineering Task Force. RFC 6182 [1] outlines the architectural guidelines for developing MPTCP. It defines the concepts of an MPTCP connection and subflows (the TCP connections associated with an MPTCP connection). RFC 6824 [2] presents the details of the extensions to traditional TCP that support multipath operation.
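As a concrete illustration of how an application obtains an MPTCP connection (not part of this paper's evaluation setup, which uses the out-of-tree Linux MPTCP implementation): on recent upstream Linux kernels, an application requests MPTCP by passing IPPROTO_MPTCP when creating the socket, and the kernel manages subflow establishment transparently. A minimal sketch, assuming a Linux 5.6+ kernel; the fallback path and the helper name are illustrative:

```python
import socket

# IPPROTO_MPTCP is protocol number 262 on Linux;
# socket.IPPROTO_MPTCP is exposed directly in Python >= 3.10.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def make_stream_socket():
    """Try to create an MPTCP socket; fall back to plain TCP if the
    kernel lacks MPTCP support (or net.mptcp.enabled is 0)."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             IPPROTO_MPTCP), "mptcp"
    except OSError:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM), "tcp"

sock, kind = make_stream_socket()
print(kind)  # "mptcp" on an MPTCP-capable kernel, otherwise "tcp"
sock.close()
```

From the application's point of view the socket then behaves like an ordinary TCP socket; the multipath behavior discussed below happens entirely inside the kernel.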
It defines the MPTCP control information, realized as new TCP options, and the MPTCP protocol procedures. RFC 6356 [3] presents a congestion control algorithm that couples the algorithms running on different subflows.

MPTCP has some problems when subflows are established over heterogeneous paths with different delays, such as an LTE network and a WLAN. TCP ACKnowledgment (ACK) segments from a path with a longer delay return later than those from a shorter-delay path. This causes Head-of-Line (HoL) blocking, in which TCP data segments sent over a longer-delay subflow block the window from sliding while waiting for their ACKs [4]. To avoid this problem, an appropriate subflow needs to be selected for each data segment. The function that selects a subflow for transferring a data segment is called a scheduler, and several scheduler algorithms have been proposed. Originally, the MPTCP implementation adopted the lowest Round-Trip Time (RTT) first and round-robin schedulers [5]. However, both of them suffer from HoL blocking. The opportunistic Retransmission and Penalization mechanism (RP mechanism) [6] [7] is used as the default in the current MPTCP implementation. When a data sender detects that new data cannot be sent out due to HoL blocking on a specific subflow, it retransmits the oldest unacknowledged data through the subflow with the lowest RTT (opportunistic retransmission). At the same time, the subflow that caused the HoL blocking is punished by halving its congestion window (penalization). The Delay Aware Packet Scheduling (DAPS) [8] and the Out-of-order Transmission for In-order Arrival Scheduling (OTIAS) [9] take account of subflow delays and schedule data segment transmission so that segments arrive in order. The BLocking ESTimation scheduler (BLEST) [10] estimates whether a subflow will cause HoL blocking and dynamically adapts scheduling to prevent it.
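The lowest-RTT-first scheduling and the RP mechanism described above can be sketched as follows. This is a simplified model for illustration only, not the actual Linux kernel code; the Subflow class and its fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Subflow:
    name: str
    srtt_ms: float   # smoothed RTT estimate
    cwnd: int        # congestion window, in segments
    inflight: int    # unacknowledged segments in flight

    def has_window(self):
        return self.inflight < self.cwnd

def lowest_rtt_first(subflows):
    """Default scheduler: among subflows with window space, pick the
    one with the lowest RTT; return None if all windows are full."""
    ready = [sf for sf in subflows if sf.has_window()]
    return min(ready, key=lambda sf: sf.srtt_ms) if ready else None

def on_hol_blocking(blocking_sf, subflows):
    """RP mechanism: halve the congestion window of the subflow that
    caused the blocking (penalization) and return the lowest-RTT
    subflow, on which the caller retransmits the oldest
    unacknowledged data (opportunistic retransmission)."""
    blocking_sf.cwnd = max(1, blocking_sf.cwnd // 2)
    return min(subflows, key=lambda sf: sf.srtt_ms)

wlan = Subflow("wlan", srtt_ms=20.0, cwnd=10, inflight=4)
lte = Subflow("lte", srtt_ms=60.0, cwnd=10, inflight=10)  # window full
assert lowest_rtt_first([wlan, lte]) is wlan
assert on_hol_blocking(lte, [wlan, lte]) is wlan
assert lte.cwnd == 5  # penalized: congestion window halved
```

DAPS, OTIAS, and BLEST differ from this baseline mainly in replacing the simple RTT comparison with an estimate of when each segment would arrive at the receiver.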
These schedulers improve MPTCP performance compared with the original scheduler algorithm, and several studies report MPTCP performance evaluations over heterogeneous paths [11]-[15]. However, those scheduler proposals and performance evaluation reports focus only on the receive socket buffer. While an insufficient receive socket buffer invokes HoL blocking, the send socket buffer also affects TCP throughput. In our previous papers [16] [17], we pointed out that an insufficient send socket buffer provokes more serious throughput degradation than an insufficient receive socket buffer. Although those papers analyzed the detailed behavior of MPTCP by investigating the MPTCP- and TCP-level sequence numbers and windows, they did not discuss why such performance degradation happens. In this paper, we clarify the reason by investigating the Linux MPTCP software and the communication traces.

The rest of the paper consists of the following sections. Section 2 describes the MPTCP data transfer procedure and related work on MPTCP schedulers. Section 3 gives the results of a performance evaluation in a case where an MPTCP connection provides poorer throughput than a single TCP connection due to an insufficient send socket buffer. Section 4 shows the behavior of the Linux MPTCP software in the case of a limited send buffer and discusses the reason for the performance degradation. Section 5 concludes this paper.

Copyright (c) IARIA, 2020. ISBN: 978-1-61208-796-2
INTERNET 2020 : The Twelfth International Conference on Evolving Internet
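For context on what "send socket buffer size" means operationally: on Linux the buffer can be capped per socket with the SO_SNDBUF option (which also disables send-buffer autotuning for that socket), or system-wide via the net.ipv4.tcp_wmem sysctl. A minimal sketch; the 64 KiB value is an arbitrary example, and note that Linux roughly doubles the requested size to account for kernel bookkeeping overhead:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a 64 KiB send buffer for this socket.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)

# Linux reports about twice the requested size (extra space is
# reserved for kernel bookkeeping, not for application data).
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(effective)
s.close()
```

An MPTCP connection shares one send socket buffer among all of its subflows, which is why an undersized buffer interacts with differently delayed paths in the way this paper analyzes.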