NVMe Memory Tiering Design and Sizing on VMware Cloud Foundation 9 Part 4: vSAN Compatibility and Storage Considerations
Source: VMware Blog
Recap of Parts 1‑3
We’ve covered a lot of ground in the first three parts of this series:
- PART 1: Prerequisites and Hardware Compatibility
- PART 2: Design for Security, Redundancy, and Scalability
- PART 3: Sizing for Success
There’s still a lot more to learn about Memory Tiering. vSAN often comes up in these conversations because the two features look similar and raise frequent compatibility questions, so let’s dive in.
Memory Tiering vs. vSAN OSA
When we first started working with Memory Tiering, the similarities to vSAN OSA were obvious:
- Multi‑tier approach – active data lives on fast devices, dormant data on less‑expensive devices.
- Reduced TCO – you don’t need expensive devices for dormant data.
- Deep vSphere integration – both are easy to implement.
Despite these parallels, there was early confusion about compatibility, integration, and whether both features can be enabled at the same time. Below are the answers.
Can vSAN and Memory Tiering coexist?
Yes. You can enable vSAN and Memory Tiering on the same clusters at the same time.
The real limitation is that vSAN cannot provide storage to Memory Tiering – that configuration is not supported. Even though both solutions may use NVMe devices, they cannot share the same physical or logical device.
Why not?
- Sharing would create bandwidth contention.
- It could slow down memory performance just to “save” NVMe space (think of adding water to a half‑full fuel tank – don’t do it).
For production workloads, always use a dedicated physical or logical (hardware RAID) device exclusively for Memory Tiering.
Lab note: You can create several partitions for experimental purposes at your own risk. Part 5 will cover lab deployments in detail.
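To make the “coexist but don’t share” rule easy to check, below is a minimal sketch (not an official tool) that compares the devices claimed by vSAN with the devices configured for Memory Tiering on a host and flags any overlap. It assumes the esxcli commands documented for the vSphere 8.0 Update 3 Memory Tiering tech preview (`esxcli system tierdevice list`) alongside `esxcli vsan storage list`, and its output parsing is approximate; verify both commands and adjust the parsing for your VCF 9 build.

```python
#!/usr/bin/env python3
# Sketch: run on an ESXi host to confirm vSAN and Memory Tiering are not
# sharing a device. Command names follow the vSphere 8.0 U3 tech preview;
# output parsing is approximate -- adjust to your build.
import subprocess

def esxcli(*args):
    """Run an esxcli command and return its stdout as text."""
    return subprocess.run(["esxcli", *args], check=True,
                          capture_output=True, text=True).stdout

def devices_from(output):
    """Collect values from lines that look like 'Device: naa.xxx' or 'Device Name: ...'."""
    devices = set()
    for line in output.splitlines():
        key, _, value = line.strip().partition(":")
        if key.strip().lower() in ("device", "device name") and value.strip():
            devices.add(value.strip())
    return devices

vsan_devices = devices_from(esxcli("vsan", "storage", "list"))       # devices claimed by vSAN (OSA syntax)
tier_devices = devices_from(esxcli("system", "tierdevice", "list"))  # devices used for Memory Tiering

overlap = vsan_devices & tier_devices
if overlap:
    print("UNSUPPORTED: shared by vSAN and Memory Tiering:", sorted(overlap))
else:
    print("OK: vSAN and Memory Tiering use separate devices.")
```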
Summary
- vSAN and Memory Tiering CAN coexist.
- They CANNOT share drives or datastores.
- They operate independently but complement each other under the VCF umbrella.
- VMs may use a vSAN datastore and Memory Tiering simultaneously.
- Encryption can be applied at both layers, but they work at different levels.

Dedicated Device Requirement
We cannot use vSAN (or any other NAS/SAN solution) to back Memory Tiering. The device must be:
- Locally attached to the host.
- Dedicated – no other partitions or datastore usage.
There is a lab‑only scenario where a shared device might be used; this will be covered in Part 5.
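Before dedicating a device, a quick pre-check helps confirm both requirements. The sketch below (a hypothetical helper, not an official tool) uses the standard ESXi commands `esxcli storage core device list` and `partedUtil getptbl` to verify that a candidate device is locally attached and carries no existing partitions; the device identifier is a placeholder.

```python
#!/usr/bin/env python3
# Sketch: pre-check a candidate NVMe device before dedicating it to Memory Tiering.
# The device identifier is a placeholder -- substitute your own.
import subprocess
import sys

DEVICE = "naa.0000000000000000"                 # placeholder device identifier
DISK_PATH = "/vmfs/devices/disks/" + DEVICE

def run(*cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Requirement 1: the device must be locally attached (no NAS/SAN-backed devices).
info = run("esxcli", "storage", "core", "device", "list", "-d", DEVICE)
is_local = any(line.strip().lower() == "is local: true" for line in info.splitlines())

# Requirement 2: the device must be dedicated -- no existing partitions.
# partedUtil prints the label and geometry first; any further lines are partitions.
ptbl = run("partedUtil", "getptbl", DISK_PATH).splitlines()
partitions = ptbl[2:]

if not is_local:
    sys.exit(DEVICE + " is not locally attached -- not usable for Memory Tiering.")
if partitions:
    sys.exit(DEVICE + " still has partitions -- clean them up first (see the next section).")
print(DEVICE + " is locally attached and empty -- a valid candidate for Memory Tiering.")
```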
Using Existing NVMe Devices
If you lack spare NVMe devices and cannot get a new CapEx request approved, you can repurpose devices from local datastores or vSAN, provided you follow the correct procedure (a scripted sketch follows the caution below):
- Verify device suitability – it must be on the recommended list (Endurance class D, Performance class F or G; see Part 1 of this series).
- Remove the NVMe device from vSAN or the local datastore.
- Delete any leftover partitions from the previous datastore.
- Create a Memory Tiering partition on the clean device.
- Configure Memory Tiering on the host or cluster.

Caution: Ensure you can afford to lose the device from its original datastore and that it meets endurance, performance, and partition‑cleanliness requirements. Move or protect any data before reclaiming the device. Perform these steps at your own risk.
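For reference, here is a hedged sketch of the partition cleanup and configuration steps (steps 3–5 above), based on the commands documented for the vSphere 8.0 Update 3 Memory Tiering tech preview: `partedUtil` for removing leftover partitions, the `MemoryTiering` kernel setting, and `esxcli system tierdevice create`. In VCF 9 the vSphere Client offers an equivalent workflow, and removing the device from vSAN or the local datastore (step 2) should be done there first; verify each command against current documentation and remember that a host reboot is required afterward.

```python
#!/usr/bin/env python3
# Sketch of the partition cleanup and tiering configuration steps, based on the
# commands documented for the vSphere 8.0 U3 Memory Tiering tech preview.
# The device identifier is a placeholder. Remove the device from vSAN or the
# local datastore first, and verify every command against current VCF 9 docs.
import subprocess

DEVICE = "naa.0000000000000000"                 # placeholder -- your reclaimed NVMe device
DISK_PATH = "/vmfs/devices/disks/" + DEVICE

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Delete any leftover partitions from the previous datastore.
ptbl = subprocess.run(["partedUtil", "getptbl", DISK_PATH],
                      check=True, capture_output=True, text=True).stdout.splitlines()
for part_line in ptbl[2:]:                      # lines after label + geometry describe partitions
    part_num = part_line.split()[0]
    run("partedUtil", "delete", DISK_PATH, part_num)

# Enable Memory Tiering and hand the clean device to it.
run("esxcli", "system", "settings", "kernel", "set", "-s", "MemoryTiering", "-v", "TRUE")
run("esxcli", "system", "tierdevice", "create", "-d", DISK_PATH)

# Optional: set the NVMe-to-DRAM ratio (a percentage) -- sizing was covered in Part 3.
# run("esxcli", "system", "settings", "advanced", "set", "-o", "/Mem/TierNvmePct", "-i", "100")

print("Done -- reboot the host for Memory Tiering to take effect.")
```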
Deployment Considerations for VCF 9
- There is no built‑in workflow during VCF deployment that claims devices for Memory Tiering.
- vSAN, by contrast, auto‑claims devices during deployment.
If you’re deploying VCF in a greenfield environment, you may need to manually reclaim a device intended for Memory Tiering from vSAN using the steps above. We are working on improving this process in a future release.
What’s Next?
The next part of this blog series will cover different deployment scenarios, including:
- Greenfield
- Brownfield
- Lab environments
Stay tuned for Part 5, where we dive deeper into these use cases and provide hands‑on guidance.