{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,9]],"date-time":"2026-01-09T14:36:24Z","timestamp":1767969384431,"version":"3.49.0"},"reference-count":37,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T00:00:00Z","timestamp":1741737600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100018177","name":"Ericsson","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100018177","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100000038","name":"NSERC","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100000038","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100004489","name":"Mitacs","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100004489","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Model. Perform. Eval. Comput. Syst."],"published-print":{"date-parts":[[2025,3,31]]},"abstract":"<jats:p>Communication overhead is a major bottleneck in federated learning (FL), especially in the wireless environment, due to the limited data rate and unstable radio channels. The communication challenge necessitates holistic selection of participating clients that accounts for both the computation needs and communication cost, as well as judicious allocation of the limited transmission resource. Meanwhile, the random, unpredictable nature of both the training data samples and the communication channels requires an online optimization approach that adapts to the changing system state over time. In this work, we consider a general framework of online joint client sampling and power allocation for wireless FL under time-varying communication channels. 
We formulate it as a stochastic network optimization problem that admits a Lyapunov-type solution approach. This leads to per-training-round subproblems with a special bi-convex structure, which we leverage to propose globally optimal solutions, culminating in a meta algorithm that provides strong performance guarantees. We further study three specific FL problems covering multiple scenarios, namely, with IID or non-IID data, whether robustness against data drift is required, and with unbiased or biased client sampling. We derive detailed algorithms for each of these problems. Simulations with standard classification tasks demonstrate that the proposed communication-aware algorithms outperform their counterparts under a wide range of learning and communication scenarios.<\/jats:p>","DOI":"10.1145\/3703628","type":"journal-article","created":{"date-parts":[[2024,11,8]],"date-time":"2024-11-08T10:53:07Z","timestamp":1731063187000},"page":"1-28","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["<scp>Clipper<\/scp>: Online Joint Client Sampling and Power Allocation for Wireless Federated Learning"],"prefix":"10.1145","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-8552-5598","authenticated-orcid":false,"given":"Wen","family":"Xu","sequence":"first","affiliation":[{"name":"Electrical and Computer Engineering, University of Toronto, Toronto, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1800-1322","authenticated-orcid":false,"given":"Ben","family":"Liang","sequence":"additional","affiliation":[{"name":"University of Toronto, Toronto, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3539-9624","authenticated-orcid":false,"given":"Gary","family":"Boudreau","sequence":"additional","affiliation":[{"name":"Ericsson Canada Inc, Ottawa, 
Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9785-3530","authenticated-orcid":false,"given":"Hamza","family":"Sokun","sequence":"additional","affiliation":[{"name":"Ericsson Canada Inc, Ottawa, Canada"}]}],"member":"320","published-online":{"date-parts":[[2025,3,12]]},"reference":[{"key":"e_1_3_2_2_2","volume-title":"Proceedings of the Conference on Advances in Neural Information Processing Systems (NeurIPS\u201917)","author":"Alistarh Dan","year":"2017","unstructured":"Dan Alistarh, Demjan Grubic, Jerry Z. Li, Ryota Tomioka, and Milan Vojnovic. 2017. QSGD: Communication-efficient SGD via gradient quantization and encoding. In Proceedings of the Conference on Advances in Neural Information Processing Systems (NeurIPS\u201917)."},{"key":"e_1_3_2_3_2","volume-title":"Proceedings of the Conference on Machine Learning and Systems (MLSys\u201919)","author":"Bonawitz Keith","year":"2019","unstructured":"Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Kone\u010dn\u1ef3, Stefano Mazzocchi, Brendan McMahan, Timon Van Overveldt, David Petrou, Daniel Ramage, and Jason Roselander. 2019. Towards federated learning at scale: System design. In Proceedings of the Conference on Machine Learning and Systems (MLSys\u201919)."},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1137\/16M1080173"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9780511804441"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3369583.3392686"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2020.3042530"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2020.3024629"},{"key":"e_1_3_2_9_2","article-title":"Optimal client sampling for federated learning","author":"Chen Wenlin","year":"2022","unstructured":"Wenlin Chen, Samuel Horv\u00e1th, and Peter Richt\u00e1rik. 2022. Optimal client sampling for federated learning. Trans. Mach. 
Learn. Res. (2022).","journal-title":"Trans. Mach. Learn. Res."},{"key":"e_1_3_2_10_2","volume-title":"Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS\u201922)","author":"Cho Yae Jee","year":"2022","unstructured":"Yae Jee Cho, Jianyu Wang, and Gauri Joshi. 2022. Towards understanding biased client selection in federated learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS\u201922)."},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNET.2020.3035770"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSAC.2022.3180807"},{"key":"e_1_3_2_13_2","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201919)","author":"Horv\u00e1th Samuel","year":"2019","unstructured":"Samuel Horv\u00e1th and Peter Richt\u00e1rik. 2019. Nonconvex variance reduced optimization with arbitrary sampling. In Proceedings of the International Conference on Machine Learning (ICML\u201919)."},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1561\/2200000083"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/OJCOMS.2024.3372893"},{"key":"e_1_3_2_16_2","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201920)","author":"Karimireddy Sai Praneeth","year":"2020","unstructured":"Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. 2020. SCAFFOLD: Stochastic controlled averaging for federated learning. In Proceedings of the International Conference on Machine Learning (ICML\u201920)."},{"key":"e_1_3_2_17_2","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201919)","author":"Konstantinov Nikola","year":"2019","unstructured":"Nikola Konstantinov and Christoph Lampert. 2019. Robust learning from untrusted sources. 
In Proceedings of the International Conference on Machine Learning (ICML\u201919)."},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1145\/2640087.2644155"},{"key":"e_1_3_2_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2020.2975749"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM48880.2022.9796935"},{"key":"e_1_3_2_21_2","volume-title":"Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS\u201917)","author":"McMahan Brendan","year":"2017","unstructured":"Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Ag\u00fcera y Arcas. 2017. Communication-efficient learning of deep Networks from decentralized data. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS\u201917)."},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.5555\/1941130"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICC.2019.8761315"},{"key":"e_1_3_2_24_2","volume-title":"Proceedings of the Conference on Advances in Neural Information Processing Systems (NeurIPS\u201919)","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. 
In Proceedings of the Conference on Advances in Neural Information Processing Systems (NeurIPS\u201919)."},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM48880.2022.9796818"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSAIT.2022.3205475"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2020.3015671"},{"key":"e_1_3_2_28_2","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201919)","author":"Stich Sebastian U.","year":"2019","unstructured":"Sebastian U. Stich. 2019. Local SGD converges fast and communicates little. In Proceedings of the International Conference on Learning Representations (ICLR\u201919)."},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377454"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.5555\/3546258.3546471"},{"key":"e_1_3_2_31_2","volume-title":"Proceedings of the Conference on Advances in Neural Information Processing Systems (NeurIPS\u201918)","author":"Wangni Jianqiao","year":"2018","unstructured":"Jianqiao Wangni, Jialei Wang, Ji Liu, and Tong Zhang. 2018. Gradient sparsification for communication-efficient distributed optimization. In Proceedings of the Conference on Advances in Neural Information Processing Systems (NeurIPS\u201918)."},{"key":"e_1_3_2_32_2","unstructured":"Han Xiao Kashif Rasul and Roland Vollgraf. 2017. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. 
Retrieved from https:\/\/github.com\/zalandoresearch\/fashion-mnist"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2020.3031503"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/PIMRC56721.2023.10293927"},{"key":"e_1_3_2_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2019.2961673"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1145\/3298981"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2022.3166941"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.comnet.2024.110512"}],"container-title":["ACM Transactions on Modeling and Performance Evaluation of Computing Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3703628","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3703628","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:09:42Z","timestamp":1750295382000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3703628"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,12]]},"references-count":37,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,3,31]]}},"alternative-id":["10.1145\/3703628"],"URL":"https:\/\/doi.org\/10.1145\/3703628","relation":{},"ISSN":["2376-3639","2376-3647"],"issn-type":[{"value":"2376-3639","type":"print"},{"value":"2376-3647","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,12]]},"assertion":[{"value":"2024-05-02","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-10-17","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2025-03-12","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}