Kubernetes v1.35: Mutable PersistentVolume Node Affinity (alpha)

Published: January 8, 2026

PersistentVolume Node Affinity

The PersistentVolume node affinity API (see the Kubernetes documentation) has been available since Kubernetes v1.10. It is commonly used to indicate that a volume may not be equally accessible from every node in the cluster.

  • Previous behavior: The nodeAffinity field was immutable.
  • New behavior (v1.35 – alpha): The field is now mutable, enabling more flexible online volume management.
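
For context, the examples later in this post show only the affected fragment. A complete PV manifest with node affinity, here an illustrative local volume pinned to a single node (all names and paths are placeholders), looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv        # illustrative name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1       # illustrative device path
  nodeAffinity:                 # the field that becomes mutable in v1.35
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1              # illustrative node name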

Why Make Node Affinity Mutable?

Kubernetes has traditionally treated the node affinity of a PersistentVolume (PV) as immutable.
For stateless workloads (e.g., Deployments) this isn’t a problem—changing a pod spec triggers a rollout that recreates the pods.
Stateful workloads, however, rely on PVs that cannot be recreated without risking data loss.

Why change it now?

  1. Evolving storage back‑ends – Many providers now offer regional disks and even support live migration from zonal to regional storage without disrupting workloads.
  2. New disk generations – Newer disks may only be attachable to a subset of nodes (e.g., newer instance types).

Both scenarios require the ability to update the PV’s node affinity so that pods are scheduled onto the correct nodes after the underlying storage changes.

Example 1 – Migrating from Zonal to Regional Disk

A PV that was originally bound to a specific zone:

spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/zone
          operator: In
          values:
          - us-east1-b

After migrating the volume to a regional disk, the affinity should be relaxed to the region level:

spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.kubernetes.io/region
          operator: In
          values:
          - us-east1

Example 2 – Switching to a New Disk Generation

A PV that only works with generation‑1 disks:

spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: provider.com/disktype.gen1
          operator: In
          values:
          - available

When the volume is upgraded to a generation‑2 disk, the affinity must be updated accordingly:

spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: provider.com/disktype.gen2
          operator: In
          values:
          - available
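
With the feature gate enabled (see Feature State below), the new affinity in either example can be applied in place. Here is a minimal sketch using a JSON merge patch, which replaces the nodeAffinity subtree wholesale; the PV name example-pv and the patch file name are illustrative:

# gen2-affinity-patch.yaml
# Apply with:
#   kubectl patch pv example-pv --type merge --patch-file gen2-affinity-patch.yaml
spec:
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: provider.com/disktype.gen2
          operator: In
          values:
          - available

Because a merge patch replaces lists rather than merging them, the patch must contain the complete set of nodeSelectorTerms the PV should end up with.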

The Bigger Picture

Making node affinity mutable is a first step toward more flexible, online volume management.
The change itself is small (removing a validation check in the API server), but it opens the door to deeper integration with the Kubernetes ecosystem, such as automated migration tools and dynamic storage class updates.

While there is still a long road ahead, this capability already enables:

  • Seamless migration of volumes across zones/regions.
  • Alignment of PVs with evolving hardware requirements.
  • Reduced operational friction for stateful workloads.

Try It Out

This feature is intended for Kubernetes cluster administrators whose storage provider supports online updates. Updating a volume can affect its accessibility, so you must:

  1. Update the underlying volume in the storage provider first.
  2. Determine which nodes can access the volume after the update.
  3. Enable the feature and keep the PersistentVolume (PV) node affinity in sync.

Note: Changing PV node affinity alone does not modify the accessibility of the underlying volume.

Feature State

  • Alpha – disabled by default and may change in future releases.
  • To try it out, enable the MutablePVNodeAffinity feature gate on the API server and edit the PV’s spec.nodeAffinity field (a configuration sketch follows this list).
  • Typically only administrators can edit PVs, so ensure you have the appropriate RBAC permissions.
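
How the gate is enabled depends on how the cluster is managed. As one hedged example, on a kubeadm-provisioned cluster the API server flag could be set through a ClusterConfiguration fragment like the one below (the kubeadm config API version and field shape vary between releases):

# kubeadm ClusterConfiguration fragment; equivalent to passing
# --feature-gates=MutablePVNodeAffinity=true to the kube-apiserver.
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:
  - name: feature-gates
    value: MutablePVNodeAffinity=true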

Race Condition Between Updating and Scheduling

PV node affinity is one of the few factors outside a Pod that can influence scheduling decisions.

  • Relaxing node affinity (allowing more nodes to access the volume) is safe.
  • Tightening node affinity (restricting access) introduces a race condition:
    • The Scheduler may still have the old PV cached.
    • It could schedule a Pod onto a node that can no longer reach the volume.
    • The Pod will remain stuck in the ContainerCreating state.

Current Mitigation (under discussion)

  • Kubelet‑level check: Fail Pod startup if the PersistentVolume’s node affinity is violated.
    • This change has not been merged yet.

Practical Guidance

  • After updating a PV, monitor subsequent Pods that use it to ensure they are scheduled onto nodes that can access the volume.
  • Do not immediately launch new Pods in a script right after the PV update; the Scheduler’s cache may be stale, leading to scheduling failures.

Summary: Enable MutablePVNodeAffinity only after you have updated the underlying storage and verified node accessibility. Keep an eye on Pod scheduling behavior, especially when tightening node affinity, until a kubelet‑level safeguard lands.

Future Integration with CSI (Container Storage Interface)

At present, the cluster administrator must manually:

  1. Update the underlying volume in the storage provider, and
  2. Modify the PersistentVolume (PV) node affinity to match.

These manual steps are error‑prone and time‑consuming.

Desired Workflow

  • Goal: Enable an unprivileged user to trigger storage‑side updates simply by modifying their PersistentVolumeClaim (PVC).
  • Mechanism: Leverage VolumeAttributesClass (or a similar CSI feature; see the sketch after this list) so that:
    • The PVC change automatically propagates to the storage provider.
    • The PV’s node affinity is updated automatically when appropriate.
  • Benefit: No cluster‑admin intervention is required, reducing operational overhead and the risk of human error.
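
As a sketch of what that flow could build on (an assumption about the eventual design, not a confirmed API for this feature), a VolumeAttributesClass names a set of provider-specific volume attributes and a PVC opts in by referencing it; the class name, driver, and parameters below are placeholders:

# Illustrative VolumeAttributesClass; driverName and parameters are
# provider-specific placeholders. On clusters older than the GA release,
# the apiVersion may still be storage.k8s.io/v1beta1.
apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: regional-disk
driverName: pd.csi.example.com
parameters:
  replication-type: regional
---
# The user-facing side: a PVC referencing the class by name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  volumeAttributesClassName: regional-disk

In the desired workflow, changing volumeAttributesClassName on the PVC would trigger the storage-side update, and a controller would then reconcile the PV’s node affinity to match.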

Next Steps

  • Investigate CSI support for dynamic updates via VolumeAttributesClass.
  • Prototype a controller that watches PVC changes and reconciles:
    • Storage‑side configuration.
    • PV node affinity.
  • Define RBAC policies that allow unprivileged users to modify PVCs while restricting direct PV edits.

We Welcome Your Feedback

As noted earlier, this is only a first step.

  • Kubernetes users – We’d like to learn how you use (or plan to use) PV node affinity. Is it beneficial to update it online in your case?
  • CSI driver developers – Would you be willing to implement this feature? How would you like the API to look?

Please share your thoughts with the SIG Storage community. For any inquiries or specific questions related to this feature, feel free to reach out there.
