Helping data centers deliver higher performance with less hardware

Published: April 7, 2026
6 min read
Source: MIT News - AI

Improving Data‑Center Storage Efficiency

To improve data‑center efficiency, multiple storage devices are often pooled together over a network so many applications can share them. But even with pooling, significant device capacity remains under‑utilized due to performance variability across the devices.

MIT researchers have now developed a system that boosts the performance of storage devices by handling three major sources of variability simultaneously. Their approach delivers significant speed improvements over traditional methods that tackle only one source of variability at a time.

The system uses a two‑tier architecture:

  • Global scheduler – makes big‑picture decisions about which tasks each storage device performs.
  • Local schedulers – run on each machine and rapidly reroute data if that device is struggling.

The method can adapt in real time to shifting workloads and does not require specialized hardware. When tested on realistic tasks like AI model training and image compression, it delivered up to nearly double the performance of traditional approaches. By intelligently balancing the workloads of multiple storage devices, the system can increase overall data‑center efficiency.

“There is a tendency to want to throw more resources at a problem to solve it, but that is not sustainable in many ways. We want to be able to maximize the longevity of these very expensive and carbon‑intensive resources,” says Gohar Chaudhry, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique. “With our adaptive software solution, you can still squeeze a lot of performance out of your existing devices before you need to throw them away and buy new ones.”

Chaudhry is joined on the paper by Ankit Bhardwaj, an assistant professor at Tufts University; Zhenyuan Ruan, PhD ’24; and senior author Adam Belay, an associate professor of EECS and a member of the MIT Computer Science and Artificial Intelligence Laboratory. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation.


Leveraging Untapped Performance

Solid‑state drives (SSDs) are high‑performance digital storage devices that allow applications to read and write data. For instance, an SSD can store vast datasets and rapidly send data to a processor for machine‑learning model training.

Pooling multiple SSDs together so many applications can share them improves efficiency, since not every application needs to use the entire capacity of an SSD at a given time. However, not all SSDs perform equally, and the slowest device can limit the overall performance of the pool.

These inefficiencies arise from variability in SSD hardware and the tasks they perform.

To capture this untapped SSD performance, the researchers developed Sandook, a software‑based system that tackles three major forms of performance‑hampering variability simultaneously. “Sandook” is an Urdu word meaning “box,” a nod to storage.

  1. Device‑age and wear variability – Differences in the age, amount of wear, and capacity of SSDs that may have been purchased at different times from multiple vendors.
  2. Read‑write interference – Contention between read and write operations on the same SSD. Before writing new data, the SSD must erase previously used blocks, which can slow down concurrent reads.
  3. Garbage collection – A process of gathering and removing outdated data to free up space. This process, which slows SSD operations, is triggered at random intervals that a data‑center operator cannot control.

“I can’t assume all SSDs will behave identically through my entire deployment cycle. Even if I give them all the same workload, some of them will be stragglers, which hurts the net throughput I can achieve,” Chaudhry explains.
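
To make the straggler effect concrete, here is a toy calculation (all numbers are invented for illustration, not taken from the paper). When a request is striped across every device in a pool, it completes only when the slowest device finishes:

```python
# Toy model: a striped request completes at the pace of the slowest
# device. All numbers below are invented for illustration.
healthy_latency_ms = 1.0     # typical read latency of a healthy SSD
straggler_latency_ms = 5.0   # one SSD slowed by, e.g., garbage collection
pool_size = 10

# A request striped across all 10 devices waits for the slowest stripe.
striped_ms = max([healthy_latency_ms] * (pool_size - 1)
                 + [straggler_latency_ms])

print(f"ideal latency:        {healthy_latency_ms:.1f} ms")
print(f"with one straggler:   {striped_ms:.1f} ms")
print(f"effective throughput: {healthy_latency_ms / striped_ms:.0%} of ideal")
# One device at 5x latency drags the whole pool to 20% of its ideal
# throughput, even though 9 of 10 devices are perfectly healthy.
```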


Plan Globally, React Locally

To handle all three sources of variability, Sandook utilizes a two‑tier structure:

  • Global scheduler – optimizes the distribution of tasks for the overall pool.
  • Local schedulers – run on each SSD and react to urgent events, shifting operations away from congested devices.
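
The paper’s internal interfaces are not reproduced here, but the division of labor between the two tiers might look roughly like the following sketch. All names and thresholds (`GlobalScheduler`, `LocalScheduler`, the 3 ms congestion cutoff, the wear‑per‑capacity ranking) are illustrative assumptions, not Sandook’s actual API:

```python
from dataclasses import dataclass

@dataclass
class SSD:
    """Minimal device model; wear and capacity drive placement choices."""
    name: str
    capacity_gb: int
    wear_factor: float           # 1.0 = new; higher = more worn (slower)
    recent_latency_ms: float = 1.0

class GlobalScheduler:
    """Big-picture tier: periodically decides which role (serving reads
    vs. writes) each device plays across the whole pool."""
    def __init__(self, devices):
        self.devices = devices

    def plan(self):
        # Favor less-worn, higher-capacity devices for write-heavy roles.
        ranked = sorted(self.devices,
                        key=lambda d: d.wear_factor / d.capacity_gb)
        half = len(ranked) // 2
        return {"writers": ranked[:half], "readers": ranked[half:]}

class LocalScheduler:
    """Fast-path tier: runs alongside one device and reroutes I/O the
    moment that device looks congested, without waiting for a new
    global plan."""
    LATENCY_THRESHOLD_MS = 3.0   # invented cutoff, not from the paper

    def __init__(self, device, peers):
        self.device = device
        self.peers = peers

    def route(self, request):
        if self.device.recent_latency_ms > self.LATENCY_THRESHOLD_MS:
            # Device is struggling (e.g., mid garbage collection):
            # divert this request to the least-loaded peer.
            return min(self.peers, key=lambda d: d.recent_latency_ms)
        return self.device
```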

Key mechanisms (sketched together in code after this list):

  • Rotating read/write assignments – The system reduces read‑write interference by rotating which SSDs an application can use for reads and writes, lowering the chance that reads and writes happen simultaneously on the same device.
  • Garbage‑collection awareness – Sandook profiles the typical performance of each SSD. When it detects that garbage collection is likely slowing operations, it temporarily reduces the workload on that SSD by diverting some tasks until garbage collection finishes.

“If that SSD is doing garbage collection and can’t handle the same workload anymore, I want to give it a smaller workload and slowly ramp things back up. We want to find the sweet spot where it is still doing some work, and tap into that performance,” Chaudhry says.

  • Weighted workload assignment – The SSD profiles also allow the global scheduler to assign workloads in a weighted fashion that accounts for each device’s characteristics and capacity.
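
Taken together, the three mechanisms might be rendered roughly as follows. This is a hypothetical sketch under invented assumptions (epoch-based role rotation, a 2x latency-ratio heuristic for detecting garbage collection, shares proportional to profiled throughput); the paper’s actual policies may differ:

```python
def rotate_roles(devices, epoch):
    """Rotating read/write assignments: each epoch, shift which half of
    the pool serves writes, so reads and writes rarely collide on the
    same SSD for long. Epoch length is an illustrative knob."""
    n = len(devices)
    shift = epoch % n
    rotated = devices[shift:] + devices[:shift]
    return {"writers": rotated[: n // 2], "readers": rotated[n // 2:]}

def gc_suspected(baseline_ms, observed_ms, ratio=2.0):
    """Garbage-collection awareness: Sandook profiles each SSD's typical
    performance; here a simple heuristic flags a device as likely
    garbage collecting when its observed latency far exceeds its
    profiled baseline. The 2x ratio is an invented threshold."""
    return observed_ms > ratio * baseline_ms

def weighted_shares(profiled_throughput, gc_flags, backoff=0.25):
    """Weighted workload assignment: split incoming load in proportion
    to each device's profiled throughput, and temporarily shrink the
    share of any device suspected of garbage collecting rather than
    idling it entirely (the 'sweet spot' Chaudhry describes)."""
    raw = {dev: tput * (backoff if gc_flags[dev] else 1.0)
           for dev, tput in profiled_throughput.items()}
    total = sum(raw.values())
    return {dev: share / total for dev, share in raw.items()}

# Example: device C is mid-GC, so its share drops but stays nonzero.
throughput = {"A": 500, "B": 480, "C": 520}   # MB/s, profiled offline
flags = {"A": False, "B": False, "C": True}
print(weighted_shares(throughput, flags))
# -> roughly {'A': 0.45, 'B': 0.43, 'C': 0.12}
```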

Because the global scheduler sees the overall picture and the local schedulers react on the fly, Sandook can simultaneously manage forms of variability that play out over different time scales. For instance, delays from garbage collection strike suddenly, while latency caused by wear and tear builds up over many months.


Evaluation

The researchers tested Sandook on a pool of 10 SSDs and evaluated the system on four tasks:

  1. Running a database
  2. Training a machine‑learning model
  3. Compressing images
  4. Storing user data

Results

  • Throughput improvements: 12–94% over static methods, depending on the application.
  • Capacity utilization: overall SSD capacity utilization rose by 23%.
  • Performance close to hardware limits: SSDs reached 95% of their theoretical maximum performance without specialized hardware.

Sandook demonstrates that intelligent, software‑only coordination can unlock the latent performance of existing storage hardware, extending its useful life and reducing the environmental impact of data‑center expansions.

Future Directions and Impact

“Our dynamic solution can unlock more performance for all the SSDs and really push them to the limit. Every bit of capacity you can save really counts at this scale,” Chaudhry says.

In the future, the researchers want to incorporate new protocols available on the latest SSDs that give operators more control over data placement. They also want to leverage the predictability in AI workloads to increase the efficiency of SSD operations.

“Flash storage is a powerful technology that underpins modern datacenter applications, but sharing this resource across workloads with widely varying performance demands remains an outstanding challenge. This work moves the needle meaningfully forward with an elegant and practical solution ready for deployment, bringing flash storage closer to its full potential in production clouds,” says Josh Fried, a software engineer at Google and incoming assistant professor at the University of Pennsylvania, who was not involved with this work.

Funding

This research was funded, in part, by the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, and the Semiconductor Research Corporation.