Dynamic Local Persistent Volumes on Kubernetes via Open Service Broker

Published: January 8, 2026 at 07:59 PM EST
2 min read
Source: Dev.to

Shared storage works well for many workloads, but once latency and I/O consistency start to matter, local disks become very attractive.

Kubernetes supports Local Persistent Volumes (Local PVs), but with a big limitation: Local PVs must be statically provisioned. That makes them hard to use in dynamic environments where workloads are created on demand. We ran into this problem while trying to expose local storage through an Open Service Broker interface.

Why Static Local PVs Are a Problem

  • PVs must exist before workloads request them
  • Capacity planning becomes manual
  • Automation pipelines break down

For service brokers and self‑service platforms, this is a non‑starter. Users expect storage to be provisioned dynamically.
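For context, "statically provisioned" means an operator must hand-write a PersistentVolume object, pinned to a specific node and path, before any workload can claim it. A minimal example (node name, path, and capacity are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1        # must already exist on the node
  nodeAffinity:                  # pins the PV to one specific node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1
```

Every field above, including the node name, has to be known up front, which is exactly what breaks in an on-demand environment.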

The Approach We Took

Instead of fighting Kubernetes’ design, we worked around it. The key idea was to separate:

  • Scheduling decisions – still done by Kubernetes
  • Disk creation – done on the target node

The Provisioning Flow

At a high level, our workflow looked like this:

  1. A service broker receives a request for local storage.
  2. The broker submits a temporary “dummy” Kubernetes manifest with:
    • resource requirements
    • node affinity
  3. Kubernetes schedules the workload to a specific node.
  4. Once the node is known, the broker:
    • remotely creates the local disk
    • generates the corresponding Local PV object
  5. The real workload is deployed and bound to that PV.
  6. When the service is deleted, the local disk is cleaned up.

This gave us something that felt like dynamic provisioning, even though Local PVs remain static under the hood.

Why This Worked

  • Kubernetes still decides placement.
  • Disk creation happens only where needed.
  • No pre‑provisioning of unused capacity.
  • Storage lifecycle is tied to the service instance.

It’s not as elegant as a CSI driver, but for on‑prem and hybrid clusters, it proved to be a practical solution.

Trade‑offs and Lessons Learned

  • Requires node‑level access.
  • Cleanup must be handled carefully.
  • Failure paths need extra attention.
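On the cleanup point, ordering matters: the PV object should be deleted before the backing disk is destroyed, and each step should be idempotent so a retried deprovision doesn't fail halfway. A hypothetical sketch of that ordering (the command strings and paths are illustrative, not the project's actual tooling):

```python
def cleanup_commands(instance_id: str, node: str, path: str) -> list[str]:
    """Ordered teardown for one service instance. Delete the PV first so
    nothing can bind to a volume whose disk is about to vanish; both
    steps tolerate already-deleted resources, so retries are safe."""
    return [
        # 1. Remove the PV object; --ignore-not-found makes retries a no-op.
        f"kubectl delete pv local-pv-{instance_id} --ignore-not-found",
        # 2. Remove the disk on the target node (e.g. over SSH);
        #    rm -rf on a missing path also succeeds, keeping this idempotent.
        f"ssh {node} rm -rf {path}",
    ]

cmds = cleanup_commands("abc123", "node-1", "/mnt/disks/abc123")
```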

In exchange, we got predictable performance and a much better developer experience for stateful workloads.

When This Pattern Makes Sense

  • You control the cluster.
  • I/O performance matters.
  • Cloud block storage isn’t an option.
  • Service brokers are part of your platform.

For many internal platforms, this turned out to be “good enough” — and far better than manual PV management.

Open Source Implementation

We documented and open‑sourced this approach as part of a larger platform project:

👉 https://github.com/laoshanxi/app-mesh/blob/main/docs/source/success/open_service_broker_support_local_pv_for_K8S.md

If you’ve built dynamic storage workflows around Local PVs (or decided not to), I’d love to hear what worked — and what didn’t.
