[Paper] DSD: A Distributed Speculative Decoding Solution for Edge-Cloud Agile Large Model Serving

Published: November 26, 2025 at 01:47 PM EST
1 min read
Source: arXiv


Overview

Large language model (LLM) inference often suffers from high decoding latency and limited scalability across heterogeneous edge‑cloud environments. Existing speculative decoding (SD) techniques accelerate token generation but remain confined to single‑node execution. We propose DSD, a distributed speculative decoding framework that extends SD to multi‑device deployments through coordinated draft‑target execution.
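The draft-target loop that DSD distributes can be sketched as follows. This is a minimal single-node illustration of speculative decoding, not the paper's implementation: `draft_step`, `target_verify`, and the integer-token toy models are hypothetical stand-ins for a small draft model and a large target model.

```python
def speculative_decode(draft_step, target_verify, prompt, window=4, max_new=16):
    """Generic speculative decoding sketch (hypothetical, not DSD's code).

    draft_step(tokens)            -> one token proposed by the draft model
    target_verify(tokens, props)  -> (accepted_count, correction_token)
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # Draft model speculates `window` tokens ahead of the context.
        proposals = []
        for _ in range(window):
            proposals.append(draft_step(tokens + proposals))
        # Target model verifies all proposals in a single pass; the accepted
        # prefix is kept, and the target contributes one corrected token.
        accepted, correction = target_verify(tokens, proposals)
        tokens += proposals[:accepted]
        tokens.append(correction)
    return tokens[: len(prompt) + max_new]


# Toy stand-ins (illustrative only): tokens are ints and the "true"
# continuation is consecutive integers. The draft guesses wrong whenever
# the last token is a multiple of 5.
def draft_step(tokens):
    last = tokens[-1]
    return last + 2 if last % 5 == 0 else last + 1

def target_verify(tokens, proposals):
    context = list(tokens)
    accepted = 0
    for p in proposals:
        if p == context[-1] + 1:   # matches the target's own choice
            context.append(p)
            accepted += 1
        else:
            break                  # first mismatch ends the accepted prefix
    return accepted, context[-1] + 1
```

In DSD the draft and target run on separate devices (edge and cloud), so each loop iteration additionally pays a network round trip, which is what makes the speculation window size a critical tuning knob.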

Given the lack of prior work on simulating this paradigm, we first introduce DSD‑Sim, a discrete‑event simulator that captures network, batching, and scheduling dynamics. Building on insights from DSD‑Sim, we further design an Adaptive Window Control (AWC) policy that dynamically adjusts speculation window size to optimize throughput.
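The abstract does not give AWC's exact rule, but an adaptive window controller of this kind typically tracks how much of each speculation window the target accepts and resizes the window accordingly. A minimal sketch, with illustrative thresholds and bounds that are assumptions rather than the paper's parameters:

```python
class AdaptiveWindow:
    """Hypothetical AWC-style controller: grow the speculation window when
    the draft's proposals are mostly accepted, shrink it when they are not.
    All thresholds and bounds here are illustrative assumptions."""

    def __init__(self, window=4, min_w=1, max_w=16, grow_at=0.8, shrink_at=0.4):
        self.window = window
        self.min_w, self.max_w = min_w, max_w
        self.grow_at, self.shrink_at = grow_at, shrink_at

    def update(self, accepted):
        # Fraction of the last window the target model accepted.
        rate = accepted / self.window
        if rate >= self.grow_at:
            # High acceptance: speculate further ahead to amortize
            # verification (and, in DSD, network) round trips.
            self.window = min(self.window * 2, self.max_w)
        elif rate <= self.shrink_at:
            # Low acceptance: shrink to waste less draft computation.
            self.window = max(self.window // 2, self.min_w)
        return self.window
```

After each verification step the serving loop would call `update()` with the number of accepted tokens and use the returned window for the next round of drafting.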

Experiments across diverse workloads show that DSD achieves up to 1.1× speedup and 9.7% higher throughput over existing SD baselines, enabling agile and scalable LLM serving across edge and cloud.

Authors

  • Fengze Yu
  • Leshu Li
  • Brad McDanel
  • Saiqian Zhang

Paper Information

  • arXiv ID: 2511.21669v1
  • Categories: cs.LG, cs.DC
  • Published: November 27, 2025
