SK hynix and SanDisk announce new High Bandwidth Flash — speedy HBF standard is targeted at inference AI servers

Published: February 26, 2026 at 03:36 PM EST
2 min read

Source: Tom’s Hardware

Data storage
Image credit: Getty Images

High Bandwidth Flash (HBF) Announcement

SK hynix and SanDisk have jointly announced High Bandwidth Flash (HBF), a new memory standard aimed at inference AI servers. The initiative is part of a broader effort to address the growing bandwidth demands of modern data centers.

Why Existing NAND Isn’t Enough

NAND-based SSDs have improved steadily, with contemporary server-grade drives capable of 28 GB/s each ([source](https://www.tomshardware.com/pc-components/ssds/worlds-first-pcie-6-0-ssd-enters-mass-production-with-28gb-s-speeds-micron-9650-series-ssds-support-air-and-liquid-cooling)). However, AI workloads require even higher throughput and lower latency, prompting the development of HBF.

Power Efficiency Concerns

Power consumption is a major consideration for large‑scale deployments. For example, a high‑end Micron 9650 SSD draws 25 W at full load. Scaling to exabyte‑level storage with tens of thousands of drives would result in substantial energy costs, making more efficient memory solutions like HBF attractive.
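As a rough illustration of that scaling, here is a minimal back-of-the-envelope sketch. The 25 W figure comes from the article; the 30 TB per-drive capacity and the decimal definition of an exabyte are assumptions made for illustration only, not figures from the announcement.

```python
# Back-of-the-envelope power estimate for an exabyte-scale flash fleet.
# Assumption (not from the article): 30 TB usable capacity per drive.
DRIVE_CAPACITY_TB = 30
DRIVE_POWER_W = 25               # full-load draw cited for the Micron 9650
TARGET_CAPACITY_TB = 1_000_000   # 1 EB = 1,000,000 TB (decimal)

drives_needed = TARGET_CAPACITY_TB / DRIVE_CAPACITY_TB
total_power_kw = drives_needed * DRIVE_POWER_W / 1_000

print(f"Drives needed: {drives_needed:,.0f}")        # ~33,333 drives
print(f"Full-load draw: {total_power_kw:,.0f} kW")   # ~833 kW, before cooling overhead
```

Even under these optimistic assumptions, an exabyte of conventional SSDs draws close to a megawatt at full load, which is why a more power-efficient tier like HBF becomes attractive at that scale.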

Possible Architecture

The announcement does not detail how HBF will integrate with existing systems, but its description of HBF as a “supporting layer” suggests a few possibilities:

  • An on‑SSD cache that is significantly larger than current implementations.
  • A fast block‑storage device similar to Intel Optane, requiring OS or application modifications to fully exploit its performance.

Timeline and Standardization

No specific release date was provided, but the companies anticipate that demand for complex memory solutions, including HBF, will increase around 2030. The standard will be overseen by the Open Compute Project ([source](https://www.opencompute.org/)).

Target Applications

HBF is being positioned for inference servers, where the outputs generated by AI models must be stored rapidly and efficiently. As AI usage expands, the storage requirements for these outputs are expected to grow exponentially.

