Control of Multi-Resource Infrastructures: Application to NFV and Computation Offloading

Yeongjin Kim, Hyang-Won Lee and Song Chong
Samsung Electronics; Department of Software, Konkuk University; School of Electrical Engineering, KAIST
E-mail: yj.kim@netsys.kaist.ac.kr, leehw@konkuk.ac.kr, songchong@kaist.edu

Abstract—Network function virtualization (NFV) and computation offloading (CO) are state-of-the-art technologies for flexible utilization of networking and processing resources. These two technologies are closely related in that they enable multiple physical entities to process a function provided in a service, and the service (or end host) chooses which resources to use. In this paper, we propose a generalized dual-resource system, which unifies the NFV and CO service frameworks, and formulate a multi-path problem for choosing the resources to use in NFV and CO services. The problem is reformulated as a variational inequality by using Lagrange dual theory and saddle point theory. Based on this formulation, we propose an extragradient-based algorithm that controls and splits the sending rate of a service. We prove that the algorithm converges to an optimal point where system cost minus service utility is minimized. Simulations under diverse scenarios demonstrate that our algorithm achieves high quality of service while reducing the system cost by jointly considering dual-resource coupling and service characteristics.

I. INTRODUCTION

In traditional networks, most network functions, such as deep packet inspection (DPI), intrusion detection systems (IDS), firewalls and charging, are implemented exclusively on specific hardware. For instance, data accounting in the LTE system is implemented in the P-GW, and thus all packets to be charged must pass through the P-GW.
This paradigm can result in inefficient utilization of networking resources: some nodes may be heavily congested while others sit idle, which degrades the quality of network service. Network function virtualization (NFV) addresses this issue by enabling network functions to be implemented virtually on several nodes in the network [1]. Through dynamic service chaining, a network service can be routed over any one of multiple candidate paths with different delay, throughput, and cost characteristics. Hence, NFV has several advantages in terms of resource efficiency, manageability, and scalability [2]. Network service providers in the U.S., AT&T and Verizon, launched NFV-enabled core networks for multi-protocol label switching (MPLS), wide area network (WAN) optimization and secure connectivity in 2016 [3], [4].

Meanwhile, computation offloading (CO) enables an end host with limited processing capacity to offload computation functions, such as transcoding and voice/image processing, to a more powerful machine in an edge or remote cloud, at the cost of additional network bandwidth [5]. Consequently, multiple offloading options are available to the end host, such as processing the function locally or offloading it to one of the edge or remote clouds. By utilizing processing resources in the cloud, the end host can reduce its computation delay/cost or increase its computation throughput.

The aforementioned NFV and CO services are closely related in the following sense. There can be multiple physical entities able to process a given function, e.g., a network function for an NFV service or a computation function for a CO service, and hence the service (or end host) can choose which resources to use in order to process the function it provides [6].
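The offloading trade-off sketched above can be made concrete with a back-of-the-envelope delay comparison. The workload, bandwidth, and CPU figures below are illustrative assumptions, not values from this paper; they only show how spending network bandwidth can buy a shorter computation delay:

```python
# Illustrative local-vs-offload delay comparison (all numbers hypothetical).

def local_delay(cycles, f_local):
    """Delay (s) when the end host processes the function itself."""
    return cycles / f_local

def offload_delay(data_bits, bandwidth, cycles, f_cloud):
    """Delay (s) when the input is shipped to a cloud and processed there."""
    return data_bits / bandwidth + cycles / f_cloud

cycles = 2e9        # CPU cycles required by the function
data_bits = 8e6     # input size: 1 MB
f_local = 1e9       # 1 GHz local CPU
f_cloud = 10e9      # 10 GHz cloud server
bandwidth = 20e6    # 20 Mbit/s uplink

print(local_delay(cycles, f_local))                          # 2.0 s locally
print(offload_delay(data_bits, bandwidth, cycles, f_cloud))  # 0.4 s transfer + 0.2 s compute
```

Under these hypothetical numbers offloading wins; with a slower uplink or a larger input, the transfer term dominates and local processing becomes preferable, which is exactly why the choice must be made per service.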
This problem inherently contains the traditional routing problem but poses additional complexity, since processing resources (such as CPU capacity) as well as networking resources (such as link capacity) must be considered [7]. For example, it may be hard to fully utilize a node with plenty of processing resources if the paths to that node have limited link capacities. Unlike traditional routing, where a path is determined based solely on networking resources, with NFV and CO services a path should be determined as a series of processing and networking (i.e., dual) resources. Then, by choosing an efficient path, resource utilization or quality of service (QoS) can be improved. Moreover, additional system cost reduction or QoS enhancement can be expected if the service can utilize multiple candidate paths simultaneously.

There have been various studies on dual-resource sharing. Several metrics have been proposed for efficiency and fairness of multi-resource sharing [8]–[10], which generalize existing metrics for single-resource sharing. Shin et al. [11] show that conventional TCP and active queue management (AQM) schemes can lose significant throughput and suffer unfairness in processing-constrained networks, and propose a new AQM scheme for a dual-resource environment. Li et al. [6] and Obadia et al. [12] propose virtual network function (VNF) placement and single-path routing algorithms for NFV-enabled networks. Kwak et al. [13] propose a computation offloading policy for an end user in a cloud computing environment that jointly considers the dual resources of the local device. Zhao et al. [14] propose a load balancing algorithm for computation offloading in data centers that jointly considers the dual resources of cloud servers.
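The dual-resource coupling described in the introduction, where a node with plenty of CPU is stranded behind thin links, reduces to a simple bottleneck computation over each candidate path. The path structure and capacity numbers below are hypothetical and serve only to illustrate the coupling:

```python
# A candidate "path" in the dual-resource setting is a series of link
# capacities plus the processing capacity of the node at its end.
# The achievable service rate is limited by the tightest resource.

def path_rate(link_caps, proc_cap):
    """Max sustainable rate of one candidate path (same units throughout)."""
    return min(min(link_caps), proc_cap)

# Node A: abundant CPU, but reached over thin links.
print(path_rate(link_caps=[10, 5], proc_cap=100))   # 5: links strand the CPU

# Node B: modest CPU behind fat links.
print(path_rate(link_caps=[50, 40], proc_cap=30))   # 30: CPU is the bottleneck

# Multi-path splitting lets a service use both paths at once.
print(path_rate([10, 5], 100) + path_rate([50, 40], 30))  # 35 in total
```

This toy computation also illustrates the multi-path gain claimed above: neither path alone supports a rate of 35, but splitting the sending rate across both does, which is the behavior the proposed extragradient-based algorithm automates.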