Published in IET Communications
Received on 27th February 2010; Revised on 3rd April 2011
doi: 10.1049/iet-com.2010.0163
ISSN 1751-8628

On unified quality of service resource allocation scheme with fair and scalable traffic management for multiclass Internet services

G. Abbas, A.K. Nagar, H. Tawfik
Intelligent and Distributed Systems Laboratory, Liverpool Hope University, Liverpool L16 9JD, UK
E-mail: nagara@hope.ac.uk

Abstract: This study concerns the problem of controlling multiclass (elastic, inelastic and unresponsive) Internet traffic without sacrificing quality of service (QoS) by adopting a unified ‘resource allocation and traffic management’ approach. The aim is to minimise the need for relying on dedicated QoS traffic control mechanisms, in order to avoid the spiralling complicatedness that, in practice, leads to a ‘robust yet fragile’ Internet. To address this challenge, the authors first introduce an end-to-end non-convex network utility maximisation-based resource allocation algorithm to guarantee enhanced QoS to elastic and inelastic flows. Then, a pricing-based fair and scalable traffic management scheme, called Purge, is introduced to protect transmission control protocol-friendly traffic from unfairness attacks by unresponsive flows. Finally, the main contribution of this work, the unified algorithm, is developed by adapting Purge to complement the link control of the proposed resource allocation algorithm, enabling it to enforce fairness while maintaining a scalable network core. The unified approach thus delivers QoS guarantees for multiclass traffic.

1 Introduction

1.1 Motivation

The advent and phenomenal success of the Internet can be attributed to its simplicity principle: to be successful, designs and architectures must reflect the simplest possible solutions. The fundamental design philosophy behind the connectionless, packet-switched, layered architecture is the scalability argument: no protocol or mechanism should be introduced into the Internet if it does not scale well. A key corollary to the scalability argument is the end-to-end argument: to maintain scalability, the network core must be kept simple by pushing algorithmic complexity towards the edges of the network whenever possible [1–3]. Perhaps the best example of the Internet philosophy is the end-to-end best-effort congestion control of the transmission control protocol (TCP), which resides primarily at end-hosts and typically follows some additive increase multiplicative decrease (AIMD) mechanism (see, e.g. [4]; an illustrative sketch is given below).

Nevertheless, this traditional simplistic view, the end-to-end wisdom, falls short of providing fairness as well as security against malicious attacks, owing to its strict adherence to the trust model: the original Internet architecture assumes that all end-hosts can be trusted [1, 5]. Accordingly, it becomes imperative to place some control in the network core for the purpose of traffic management and policing malicious flows.

As a separate issue, user demand for audio-visual applications, such as voice over Internet protocol (VoIP) and IPTV, has been growing rapidly in recent years. Supporting the new world of converged applications requires better-than-best-effort quality of service (QoS) assurance [6]. QoS provisioning essentially entails the cooperation of several building blocks, shown in Fig. 1, at all relevant points in the Internet, which is another factor instigating a shift from the basic principles.
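As noted above, TCP's end-to-end congestion control typically follows some AIMD mechanism. The following minimal Python sketch is illustrative only and is not part of the schemes proposed in this work; the parameter names alpha and beta and the synthetic loss pattern are our own assumptions.

```python
def aimd_update(cwnd: float, loss_detected: bool,
                alpha: float = 1.0, beta: float = 0.5) -> float:
    """One AIMD step on a congestion window (in segments).

    alpha: segments added per round-trip time when no loss is seen.
    beta:  multiplicative back-off factor applied on a loss event.
    """
    if loss_detected:
        return max(1.0, cwnd * beta)   # multiplicative decrease
    return cwnd + alpha                # additive increase


# Example: nine loss-free round trips followed by a single loss event.
cwnd = 1.0
for loss in [False] * 9 + [True]:
    cwnd = aimd_update(cwnd, loss)
print(cwnd)  # 5.0: the loss halves the 10-segment window
```

Starting from one segment, nine loss-free round trips grow the window additively to ten segments, after which a single loss event halves it to five. This back-off behaviour is precisely what the unresponsive flows considered later in this work do not exhibit.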
On yet another front, various factors, such as occasional self-similar scaling in the burst patterns of Internet traffic and the scale-free structure of the interconnection topology, have led a wider scientific community to believe that the Internet represents a complex system (see, e.g. [7, 8]). Consequently, a number of high-profile technical failures have come to be regarded as possible indications that the system is suffering from complex emergent behaviour. However, these claims have recently been invalidated in [5, 9], where it has been shown that few, if any, of the failures are consequences of emergent properties in the pure technical sense, and that the fragilities are due to the Internet designs having become (unnecessarily?) complicated. That is, the Internet represents a ‘complicated system’ rather than a ‘complex system’ [9]. Yet interestingly, a common point in the prolonged and contrasting debate on complexity has been the need for simplicity, which is an important reminder to revert to the increasingly overlooked fundamental design principles.

In response to the desired simplicity, it is imperative to review a crucial, yet often disregarded, fact: the evolution of networks typically follows a spiral of ever-increasing complicatedness introduced by the essential integration of isolated QoS solutions [2, 7, 10]. For instance, each of the