[Paper] A Dynamic PD-Disaggregation Architecture for Maximizing Goodput in LLM Inference Serving

Published: November 25, 2025 at 09:27 PM EST
1 min read
Source: arXiv

Abstract

To meet strict Service-Level Objectives (SLOs), contemporary Large Language Model (LLM) serving systems decouple the prefill and decoding stages and place them on separate GPUs to mitigate the distinct bottlenecks inherent to each phase. However, the heterogeneity of LLM workloads causes producer-consumer imbalance between the two instance types in such a disaggregated architecture. To address this problem, we propose DOPD (Dynamic Optimal Prefill/Decoding), a dynamic LLM inference system that adjusts instance allocations to achieve an optimal prefill-to-decoding (P/D) ratio based on real-time load monitoring. Combined with an appropriate request-scheduling policy, DOPD effectively resolves imbalances between prefill and decoding instances and mitigates resource-allocation mismatches caused by mixed-length requests under high concurrency.
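The abstract does not spell out the allocation policy. As a minimal sketch of the idea, a controller might periodically recompute the GPU split between prefill and decoding instances in proportion to the monitored per-stage load; the function name, the proportional heuristic, and all parameters below are illustrative assumptions, not DOPD's actual algorithm.

```python
def rebalance_pd_split(prefill_load: float, decode_load: float,
                       total_gpus: int) -> tuple[int, int]:
    """Illustrative heuristic (not DOPD's actual policy): assign GPUs to
    prefill and decoding instances in proportion to their monitored load,
    keeping at least one instance of each type."""
    ratio = prefill_load / (prefill_load + decode_load)
    n_prefill = min(max(round(total_gpus * ratio), 1), total_gpus - 1)
    return n_prefill, total_gpus - n_prefill

# Example: a prefill-heavy window on an 8-GPU cluster -> (5, 3)
print(rebalance_pd_split(prefill_load=2.5, decode_load=1.5, total_gpus=8))
```

A real controller would also have to account for the cost of an instance switching roles, which is presumably why the paper pairs the ratio adjustment with a request-scheduling policy rather than relying on reallocation alone.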

Experimental evaluations show that, compared with vLLM and DistServe (representative aggregation-based and disaggregation-based approaches), DOPD improves overall system goodput by up to 1.5×, decreases P90 time-to-first-token (TTFT) by up to 67.5%, and decreases P90 time-per-output-token (TPOT) by up to 22.8%. Furthermore, our dynamic P/D adjustment technique performs proactive reconfiguration based on historical load, achieving over 99% SLO attainment while using fewer additional resources.
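The abstract describes the reconfiguration as proactive and driven by historical load, but names no predictor. As one plausible illustration, an exponentially weighted moving average (EWMA) over recent load samples could forecast the next window and trigger reconfiguration before an imbalance materializes; the EWMA choice and every name here are assumptions for illustration only.

```python
def ewma_forecast(samples: list[float], alpha: float = 0.5) -> float:
    """Forecast the next window's load as an EWMA of recent samples.
    A stand-in predictor; the paper does not specify which one DOPD uses."""
    estimate = samples[0]
    for sample in samples[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

# Rising prefill load over the last five windows suggests adding
# prefill instances before the next window begins.
history = [1.0, 1.2, 1.5, 1.9, 2.4]
print(ewma_forecast(history))  # 2.0, above the window mean of 1.6
```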
