[Paper] Accelerating Wireless Distributed Learning via Hybrid Split and Federated Learning Optimization

Published: November 24, 2025 at 09:29 PM EST
2 min read
Source: arXiv

Abstract

Federated learning (FL) and split learning (SL) are two effective distributed learning paradigms in wireless networks, enabling collaborative model training across mobile devices without sharing raw data. While FL supports low‑latency parallel training, it may converge to a less accurate model. In contrast, SL achieves higher accuracy through sequential training but suffers from increased delay. To leverage the advantages of both, hybrid split and federated learning (HSFL) allows some devices to operate in FL mode and others in SL mode. This paper aims to accelerate HSFL by addressing three key questions:

  1. How does learning mode selection affect overall learning performance?
  2. How does it interact with batch size?
  3. How can these hyperparameters be jointly optimized alongside communication and computational resources to reduce overall learning delay?

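To make the hybrid setup concrete, here is a minimal, hypothetical sketch of one HSFL round in PyTorch. It is not the paper's implementation: FL-mode devices run local SGD in parallel and are averaged, while SL-mode devices train sequentially through a server-held sub-model split at an assumed cut layer. The toy model, cut point, and uniform averaging are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

def make_model():
    # Toy model; for SL-mode devices the assumed "cut layer" sits after the ReLU.
    return nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

def hsfl_round(global_model, loaders, modes, cut=2, lr=0.01):
    """One hypothetical HSFL round: FL devices train full local copies in
    parallel; SL devices train sequentially through a server-side sub-model."""
    states = []

    # FL-mode devices: full local copies, trained in parallel, then averaged.
    for loader, mode in zip(loaders, modes):
        if mode != "FL":
            continue
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for x, y in loader:
            opt.zero_grad()
            nn.functional.cross_entropy(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())

    # SL-mode devices: the client keeps layers before the cut, the server keeps
    # the rest; devices take turns, exchanging activations and gradients.
    sl_model = copy.deepcopy(global_model)
    client_part, server_part = sl_model[:cut], sl_model[cut:]
    opt = torch.optim.SGD(sl_model.parameters(), lr=lr)
    for loader, mode in zip(loaders, modes):
        if mode != "SL":
            continue
        for x, y in loader:
            opt.zero_grad()
            smashed = client_part(x)                 # uplink: cut-layer activations
            loss = nn.functional.cross_entropy(server_part(smashed), y)
            loss.backward()                          # downlink: gradients at the cut
            opt.step()
    states.append(sl_model.state_dict())

    # Server aggregates the FL copies and the SL-trained model (uniform weights
    # here for simplicity) into the new global model.
    new_state = {k: torch.stack([s[k] for s in states]).mean(0)
                 for k in states[0]}
    global_model.load_state_dict(new_state)
    return global_model
```
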
We first analyze convergence, revealing the interplay between learning mode and batch size. Next, we formulate a delay‑minimization problem and propose a two‑stage solution: a block coordinate descent method for a relaxed problem to obtain a locally optimal solution, followed by a rounding algorithm to recover integer batch sizes with near‑optimal performance. Experimental results demonstrate that our approach significantly accelerates convergence to the target accuracy compared to existing methods.
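
To illustrate the flavor of the two-stage solution, the following toy sketch alternates block-coordinate updates between continuous (relaxed) batch sizes and bandwidth shares, then greedily rounds the batch sizes back to integers. The per-device compute costs `C`, payload factors `D`, the simplified max-delay objective, and the sample budget `B_TOTAL` are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

C = np.array([2.0, 1.0, 4.0])      # hypothetical per-sample compute cost (ms)
D = np.array([5.0, 8.0, 3.0])      # hypothetical per-round payload / rate factor
B_TOTAL = 96                       # assumed total samples processed per round

def round_delay(b, w):
    # Per-round delay of the slowest device: compute time + communication time.
    return np.max(C * b + D / w)

def bcd(b0, w0, iters=50):
    """Block coordinate descent on the relaxed (continuous) problem."""
    b, w = b0.astype(float), w0.astype(float)
    for _ in range(iters):
        # Block 1: optimize batch sizes with bandwidth shares fixed.
        res = minimize(lambda x: round_delay(x, w), b,
                       constraints=[{"type": "eq",
                                     "fun": lambda x: x.sum() - B_TOTAL}],
                       bounds=[(1, B_TOTAL)] * len(b))
        b = res.x
        # Block 2: optimize bandwidth shares with batch sizes fixed.
        res = minimize(lambda x: round_delay(b, x), w,
                       constraints=[{"type": "eq",
                                     "fun": lambda x: x.sum() - 1.0}],
                       bounds=[(1e-3, 1.0)] * len(w))
        w = res.x
    return b, w

def round_batches(b, w):
    # Greedy rounding: floor everything, then hand leftover samples to whichever
    # device increases the round delay the least.
    bi = np.floor(b).astype(int)
    while bi.sum() < B_TOTAL:
        trial = [round_delay(bi + np.eye(len(bi), dtype=int)[i], w)
                 for i in range(len(bi))]
        bi[int(np.argmin(trial))] += 1
    return bi

b, w = bcd(np.full(3, B_TOTAL / 3), np.full(3, 1 / 3))
print(round_batches(b, w), np.round(w, 3))
```

In the paper, the delay model also covers learning mode selection and a convergence constraint tying accuracy to batch size; the sketch above keeps only the relax-optimize-round structure.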

Subjects

  • Machine Learning (cs.LG)
  • Distributed, Parallel, and Cluster Computing (cs.DC)

Citation

arXiv:2511.19851

DOI

https://doi.org/10.48550/arXiv.2511.19851

Submission History

  • v1, Tue, 25 Nov 2025 02:29:22 UTC (1,487 KB)