To Talk or to Work: Energy Efficient Federated Learning over
Mobile Devices via the Weight Quantization and 5G
Transmission Co-Design
Rui Chen
rchen19@uh.edu
University of Houston
Houston, Texas, USA
Liang Li
liliang_1127@outlook.com
Xidian University
Xi’an, China
Kaiping Xue
kpxue@ustc.edu.cn
University of Science and Technology
of China
China
Chi Zhang
chizhang@ustc.edu.cn
University of Science and Technology
of China
China
Lingjia Liu
ljliu@vt.edu
Virginia Tech
Blacksburg, Virginia, USA
Miao Pan
mpan2@uh.edu
University of Houston
Houston, Texas, USA
ABSTRACT
Federated learning (FL) is a new paradigm for large-scale learning
tasks across mobile devices. However, practical FL deployment over
resource-constrained mobile devices confronts multiple challenges.
For example, it is not clear how to establish an effective wireless
network architecture to support FL over mobile devices. Besides,
as modern machine learning models are more and more complex,
the local on-device training/intermediate model update in FL is
becoming too power hungry/radio resource intensive for mobile
devices to afford. To address those challenges, in this paper, we
bridge FL with another recently surging technology, 5G, and de-
velop a wireless transmission and weight quantization co-design for
energy efficient FL over heterogeneous 5G mobile devices. Briefly,
the high data rate featured by 5G helps to relieve the severe commu-
nication concern, and the multi-access edge computing (MEC) in
5G provides a perfect network architecture to support FL. Under the
MEC architecture, we develop flexible weight quantization schemes
to facilitate the on-device local training over heterogeneous 5G
mobile devices. Observing that the energy consumption of local
computing is comparable to that of the model updates via 5G
transmissions, we formulate the energy efficient FL problem as
a mixed-integer programming problem to judiciously determine
the quantization strategies and allocate the wireless bandwidth for
heterogeneous 5G mobile devices. The goal is to minimize the over-
all FL energy consumption (computing + 5G transmissions) over
5G mobile devices while guaranteeing learning performance and
training latency. Generalized Benders’ Decomposition is applied to
develop feasible solutions, and extensive simulations are conducted
to verify the effectiveness of the proposed scheme.
Permission to make digital or hard copies of all or part of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation
on the first page. Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy otherwise, or republish,
to post on servers or to redistribute to lists, requires prior specific permission and/or a
fee. Request permissions from permissions@acm.org.
Mobihoc ’21, June 03–05, 2021, Shanghai, China
© 2020 Association for Computing Machinery.
ACM ISBN 978-1-4503-XXXX-X/18/06. . . $15.00
https://doi.org/10.1145/nnnnnnn.nnnnnnn
CCS CONCEPTS
· Computing methodologies → Distributed artificial intelli-
gence; Neural networks; · Theory of computation → Mixed
discrete-continuous optimization.
KEYWORDS
5G networks, federated learning, weight quantization, optimization
ACM Reference Format:
Rui Chen, Liang Li, Kaiping Xue, Chi Zhang, Lingjia Liu, and Miao Pan. 2020.
To Talk or to Work: Energy Efficient Federated Learning over Mobile Devices
via the Weight Quantization and 5G Transmission Co-Design. In Mobihoc
’21: ACM International Symposium on Theory, Algorithmic Foundations, and
Protocol Design for Mobile Networks and Mobile Computing, June 03–05, 2021,
Shanghai, China. ACM, New York, NY, USA, 10 pages. https://doi.org/10.
1145/nnnnnnn.nnnnnnn
1 INTRODUCTION
Due to the incredible surge of mobile data and the growing com-
puting capabilities of mobile devices, it has become a trend to apply
deep learning (DL) on these devices to support fast-responding
and customized intelligent applications [7]. Recently, federated
learning (FL) has emerged as a promising DL solution to provide an
efficient, flexible, and privacy-preserving learning framework over
a large number of mobile devices. Under the FL framework [20], each
mobile device executes model training locally and then transmits
the model updates instead of raw data to an FL server. The server
then aggregates the intermediate results and broadcasts the updated
model to the participating devices. Its potential has prompted wide
applications in various domains such as keyboard predictions [12],
physical hazard detection in smart homes [35], and health event de-
tection [3]. However, deploying FL over mobile devices in practice
faces significant challenges. First, although mobile devices
are gradually equipped with artificial intelligence (AI) computing
capabilities, the limited resources (e.g., battery and storage capacity)
restrain them from training deep and complicated learning models.
Second, it is not clear how to establish an effective wireless
architecture to support FL over mobile devices. Last but not least,
the power-hungry local computing and wireless communications
arXiv:2012.11070v1 [cs.NI] 21 Dec 2020