Proportional Bandwidth Allocation in DiffServ Networks

Eun-Chan Park and Chong-Ho Choi
School of Electrical Engineering and Computer Science, Seoul National University, Seoul, KOREA
Email: {ecpark|chchoi}@csl.snu.ac.kr

Abstract— By analyzing the steady-state throughput of TCP flows in differentiated service (DiffServ) networks, we show that current DiffServ networks are biased in favor of flows with smaller target rates, which results in unfair bandwidth allocation. To solve this unfairness problem, we propose an adaptive marking scheme that allocates bandwidth in proportion to the target rates of the aggregate TCP flows in the DiffServ network. This scheme adjusts the target rate according to the congestion level of the network, so that each aggregate flow obtains its fair share of the bandwidth. Since it uses edge-to-edge feedback information without measuring or keeping any per-flow state, the scheme is scalable and requires neither an additional signaling protocol nor significant changes to the current TCP/IP protocol. It can be implemented in a distributed manner using only two bits of feedback information, carried in the TCP acknowledgement. Through extensive simulations, we show that the proposed scheme provides each aggregate flow with its fair share of the bandwidth, proportional to its target rate, under various network conditions.

Index Terms— Proportional bandwidth allocation, fairness, Quality of Service, DiffServ networks, scalability

I. INTRODUCTION

The differentiated service (DiffServ) architecture has been proposed to provide different levels of service, satisfying different service requirements in a scalable manner [1]. In the DiffServ architecture, IP flows are classified and aggregated into different forwarding classes, marked with different levels of priority at the edges of a network, and dropped by different dropping mechanisms at the core of a network.
Therefore, DiffServ networks can provide Quality of Service (QoS) beyond the current best-effort service.

In DiffServ networks, a customer makes a contract with the service provider to establish a service profile, called the Service Level Agreement (SLA). The service profile specifies the minimum throughput (also called the committed information rate (CIR) or target rate) that should be provided to the customer, even in the case of congestion. To assure the conditions specified in the SLA, two components are necessary: a packet marking mechanism, administered by profile meters or traffic conditioners at the edge routers, and a queue management mechanism, operated at the core routers. The packet marking mechanism monitors flows and marks packets according to the profile at the edge of the network. If the measured flow conforms to the service profile, the packets belonging to this flow are marked with high priority (e.g., marked as IN) and receive assured service. Otherwise, the packets belonging to the non-conformant part of a flow are marked with low priority (e.g., marked as OUT) and receive best-effort service. The queue management mechanism, deployed at the core routers, gives preferential treatment to high-priority packets: during times of congestion, high-priority packets are forwarded preferentially and low-priority packets are dropped with a higher probability. The most prevalent profile meters are the Token Bucket (TB) marker and the Time Sliding Window (TSW) marker, and the most widely deployed queue management algorithm is RED with In/Out (RIO) [2], [3], [4].

Many mechanisms have also been proposed to provide assured service [5]–[8], and there has been some recent research on modelling TCP behavior in DiffServ networks [9], [10]. Previous studies in this area focused simply on assuring the target rate.

This work was partially supported by the Institute of Information Technology Assessment and POSCO, Korea.
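The profile-metering step described above, marking packets IN or OUT against the committed information rate, can be illustrated with a simple two-color token-bucket meter. This is a minimal sketch under our own assumptions (class and parameter names are hypothetical), not the exact marker used in the paper:

```python
class TokenBucketMarker:
    """Two-color token-bucket profile meter (illustrative sketch).

    Tokens accumulate at the committed information rate (CIR), up to
    a maximum burst size. A packet that finds enough tokens is marked
    IN (high priority, assured service); otherwise it is marked OUT
    (low priority, best-effort service).
    """

    def __init__(self, cir_bps, bucket_bytes):
        self.rate = cir_bps / 8.0       # token refill rate in bytes/s
        self.capacity = bucket_bytes    # maximum burst size in bytes
        self.tokens = bucket_bytes      # bucket starts full
        self.last = 0.0                 # time of last packet arrival

    def mark(self, pkt_bytes, now):
        # Refill tokens for the elapsed interval, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes    # packet conforms to the profile
            return "IN"
        return "OUT"                    # non-conformant part of the flow
```

For example, with a CIR of 8 kbit/s (1000 bytes/s) and a 1000-byte bucket, a burst of two 600-byte packets at time 0 yields one IN and one OUT marking; after one second of refill, the next packet is again marked IN.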
However, this assurance alone is not sufficient to satisfy the customer. Considering that the target rate is determined by the terms of the SLA, and that the customer's fee is calculated accordingly, bandwidth should be allocated in proportion to the target rate, which we refer to as "proportional bandwidth allocation". Note that the notion of proportional allocation of bandwidth is different from that of proportional fairness [11], [12]. When the target rates of aggregate flows differ, the assurance of relative throughput, as well as the assurance of minimum throughput, must be considered. When the network is over-provisioned, the surplus bandwidth should be allocated to the aggregates in proportion to their target rates. When the network is over-subscribed, the service rates should likewise be allocated in proportion to the target rates, even if the target rates cannot be assured completely. However, the existing mechanisms [5]–[8] offer no guarantees about how surplus bandwidth or a bandwidth deficit is distributed.

Simulation-based studies [13], [14] have shown that assuring throughput in DiffServ networks depends on several factors, such as the round-trip time (RTT), the target rate, and the existence of non-responsive flows. To reduce the effects of RTT and target rate on throughput, a few mechanisms have been proposed [6], [14], [15]. The main idea behind these mechanisms is that packets belonging to flows which send packets more aggressively should be preferentially dropped. However, the mechanism in [6] requires that per-flow state be conveyed and maintained at the routers, which causes a scalability problem. The algorithm in [14] needs to measure the RTT and requires an additional signaling protocol for communication between the edge routers. Similarly, the algorithm in [15] also needs to estimate

0-7803-8356-7/04/$20.00 (C) 2004 IEEE	IEEE INFOCOM 2004
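The proportional-allocation goal stated above, covering both the over-provisioned and over-subscribed cases, can be expressed as a simple computation. This is an illustrative helper of our own (not a mechanism from the paper): each aggregate receives a share of the bottleneck capacity weighted by its target rate, so surplus and deficit are both distributed in proportion to the targets:

```python
def proportional_shares(capacity, targets):
    """Split link capacity among aggregates in proportion to their
    target rates (illustrative sketch, hypothetical helper).

    Whether the network is over-provisioned (sum(targets) < capacity)
    or over-subscribed (sum(targets) > capacity), aggregate i receives
    capacity * targets[i] / sum(targets).
    """
    total = sum(targets)
    return [capacity * t / total for t in targets]
```

For instance, two aggregates with target rates of 1 and 2 Mbit/s sharing a 30 Mbit/s link would receive 10 and 20 Mbit/s respectively, so the 27 Mbit/s surplus is divided in the same 1:2 ratio as the targets.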