Received: 4 January 2019 Revised: 18 February 2019 Accepted: 7 March 2019
DOI: 10.1002/cpe.5262
SPECIAL ISSUE PAPER
Rendering differential performance preference through
intelligent network edge in cloud data centers
Dian Shen¹ | Pengcheng Zhou² | Yidan Gao³ | Xiaolin Guo¹ | Runqun Xiong¹
¹School of Computer Science and Engineering, Southeast University, Nanjing, China
²Bytedance, Beijing, China
³College of Letters and Science, University of California, Berkeley, California
Correspondence
Runqun Xiong, School of Computer Science
and Engineering, Southeast University,
Nanjing, China.
Email: rxiong@seu.edu.cn
Funding information
National Key R&D Program of China,
Grant/Award Number: 2017YFB1003000;
National Natural Science Foundation of China,
Grant/Award Number: 61872079, 61572129,
61602112, 61502097, 61702096,
61320106007, 61632008 and 61702097;
International S&T Cooperation Program of
China, Grant/Award Number:
2015DFA10490; Natural Science Foundation
of Jiangsu Province, Grant/Award Number:
BK20160695 and BK20170689; Fundamental
Research Funds for the Central Universities,
Jiangsu Provincial Key Laboratory of Network
and Information Security, Grant/Award
Number: BM2003201; Key Laboratory of
Computer Network and Information
Integration of Ministry of Education of China,
Grant/Award Number: 93K-9; Collaborative
Innovation Center of Novel Software
Technology and Industrialization and
Collaborative Innovation Center of Wireless
Communications Technology
Summary
Because emerging distributed applications and services in data centers share the network infrastructure, their performance is directly impacted by the network. As these applications become increasingly demanding, it is challenging to satisfy their requirements of low latency, high throughput, and low packet loss rate simultaneously. Prior approaches typically resort to flow control or scheduling mechanisms that prioritize flows according to their demands. However, no single method can satisfy the varied demands of data center applications. To address this challenge, we propose tasch, a preference-aware flow scheduling mechanism deployed at the software network edge (ie, end-host networking). This mechanism maintains multiple separate queues for flows with different preferences, which guarantees low packet delay for latency-sensitive flows and provides bandwidth guarantees for throughput-sensitive flows. A coordinating algorithm is presented to share the network resources among the multiple queues with Pareto-optimality. tasch is implemented as a thin and pluggable kernel module in Linux-based hypervisors, which lies between the complicated physical network and tenants' VMs. Subsequently, based on the flow traces of real-world applications, extensive experiments were conducted to verify the effectiveness of the proposed network management mechanism.
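The two-queue idea described above can be illustrated with a toy scheduler: latency-sensitive packets are served first, while a byte-share check guarantees the throughput queue a minimum fraction of the link. This is a simplified sketch for intuition only, not the paper's coordinating algorithm; the class name, the `tput_share` parameter, and the byte-accounting policy are illustrative assumptions.

```python
from collections import deque

class PreferenceScheduler:
    """Toy preference-aware scheduler with two queues:
    strict priority for latency-sensitive packets, plus a
    byte-share guarantee for throughput-sensitive packets."""

    def __init__(self, tput_share=0.4):
        self.lat_q = deque()    # latency-sensitive packets (byte sizes)
        self.tput_q = deque()   # throughput-sensitive packets (byte sizes)
        self.tput_share = tput_share  # guaranteed byte share for tput_q
        self.sent_total = 0
        self.sent_tput = 0

    def enqueue(self, pkt_bytes, latency_sensitive):
        (self.lat_q if latency_sensitive else self.tput_q).append(pkt_bytes)

    def dequeue(self):
        # Serve the throughput queue when it has fallen below its
        # guaranteed share of sent bytes; otherwise latency packets win.
        tput_starved = (bool(self.tput_q) and self.sent_total > 0 and
                        self.sent_tput / self.sent_total < self.tput_share)
        if self.tput_q and (tput_starved or not self.lat_q):
            pkt = self.tput_q.popleft()
            self.sent_tput += pkt
            label = "tput"
        elif self.lat_q:
            pkt = self.lat_q.popleft()
            label = "lat"
        else:
            return None  # both queues empty
        self.sent_total += pkt
        return label
```

With equal-sized packets and a 50% share, the sketch alternates between the two queues, so the throughput queue is never starved by a burst of latency-sensitive traffic; with a lower share, latency packets are served more often.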
KEYWORDS
data center, flow scheduling, network edge, virtualization
1 INTRODUCTION
Cloud data centers are becoming the hosting platform for a wide spectrum of applications. Ranging from latency-sensitive applications such
as Web search to bandwidth-hungry ones such as data-parallel processing, these emerging applications are increasingly demanding of network
resources. For instance, latency-sensitive applications like Web search require latencies as low as 5 ms, and 10% additional latency can result in a 20%
revenue loss.¹ Data-parallel processing frameworks like Hadoop are throughput-intensive, with extensive data transfers of more than 1 Gbps
in the shuffle phase. However, in current data centers, applications can experience 5× or more variation, leading to unpredictable performance.
Prior approaches typically resort to flow control or network scheduling mechanisms, prioritizing flows according to their preference, eg,
latency-sensitive or throughput-sensitive. For instance, Fastpass² proposed a centralized ‘‘zero-queue’’ data center network architecture where
each sender must ask a centralized arbiter for permission before transmitting packets. Thus, the queue build-up and related
Concurrency Computat Pract Exper. 2019;e5262. wileyonlinelibrary.com/journal/cpe © 2019 John Wiley & Sons, Ltd. 1 of 14
https://doi.org/10.1002/cpe.5262