TY - JOUR
T1 - A Crowdsourcing Framework for On-Device Federated Learning
AU - Pandey, Shashi Raj
AU - Tran, Nguyen H.
AU - Bennis, Mehdi
AU - Manzoor, Aunas
AU - Hong, Choong Seon
N1 - Funding Information:
Manuscript received May 22, 2019; revised September 7, 2019, December 17, 2019, and January 13, 2020; accepted January 28, 2020. Date of publication February 12, 2020; date of current version May 8, 2020. This work was supported in part by the Institute of Information and Communications Technology Planning and Evaluation (IITP) funded by the Korea Government (MSIT) under Grant 2019-0-01287, in part by the Evolvable Deep Learning Model Generation Platform for Edge Computing, and in part by the National Research Foundation of Korea (NRF) funded by the Korea Government (MSIT) under Grant NRF-2017R1A2A2A05000995. This article was presented at the IEEE GLOBECOM 2019. The associate editor coordinating the review of this article and approving it for publication was L. Duan. (Corresponding author: Choong Seon Hong.) Shashi Raj Pandey, Yan Kyaw Tun, Aunas Manzoor, and Choong Seon Hong are with the Department of Computer Science and Engineering, Kyung Hee University, Yongin 17104, South Korea (e-mail: shashiraj@khu.ac.kr; ykyawtun7@khu.ac.kr; aunasmanzoor@khu.ac.kr; cshong@khu.ac.kr).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - Federated learning (FL) rests on the notion of training a global model in a decentralized manner. Under this setting, mobile devices perform computations on their local data before uploading the required updates to improve the global model. However, when the participating clients implement uncoordinated computation strategies, the difficulty lies in handling communication efficiency (i.e., the number of communications per iteration) while exchanging the model parameters during aggregation. Therefore, a key challenge in FL is how users participate to build a high-quality global model with communication efficiency. We tackle this issue by formulating a utility maximization problem and propose a novel crowdsourcing framework to leverage FL that accounts for communication efficiency during parameter exchange. First, we show an incentive-based interaction between the crowdsourcing platform and the participating clients' independent strategies for training a global learning model, where each side maximizes its own benefit. We formulate a two-stage Stackelberg game to analyze such a scenario and find the game's equilibria. Second, we formalize an admission control scheme for participating clients to ensure a level of local accuracy. Simulation results demonstrate the efficacy of our proposed solution with up to 22% gain in the offered reward.
AB - Federated learning (FL) rests on the notion of training a global model in a decentralized manner. Under this setting, mobile devices perform computations on their local data before uploading the required updates to improve the global model. However, when the participating clients implement uncoordinated computation strategies, the difficulty lies in handling communication efficiency (i.e., the number of communications per iteration) while exchanging the model parameters during aggregation. Therefore, a key challenge in FL is how users participate to build a high-quality global model with communication efficiency. We tackle this issue by formulating a utility maximization problem and propose a novel crowdsourcing framework to leverage FL that accounts for communication efficiency during parameter exchange. First, we show an incentive-based interaction between the crowdsourcing platform and the participating clients' independent strategies for training a global learning model, where each side maximizes its own benefit. We formulate a two-stage Stackelberg game to analyze such a scenario and find the game's equilibria. Second, we formalize an admission control scheme for participating clients to ensure a level of local accuracy. Simulation results demonstrate the efficacy of our proposed solution with up to 22% gain in the offered reward.
KW - Decentralized machine learning
KW - federated learning (FL)
KW - incentive mechanism
KW - mobile crowdsourcing
KW - Stackelberg game
UR - http://www.scopus.com/inward/record.url?scp=85084913998&partnerID=8YFLogxK
U2 - 10.1109/TWC.2020.2971981
DO - 10.1109/TWC.2020.2971981
M3 - Journal article
AN - SCOPUS:85084913998
SN - 1536-1276
VL - 19
SP - 3241
EP - 3256
JO - IEEE Transactions on Wireless Communications
JF - IEEE Transactions on Wireless Communications
IS - 5
M1 - 8995775
ER -