John Carmack muses about using a long fiber line as an L2 cache for streaming AI data — programmer imagines fiber as an alternative to DRAM
Source: Tom’s Hardware

John Carmack’s Fiber‑Line L2 Cache Idea
John Carmack recently tweeted about using a long loop of single‑mode fiber as an L2‑style cache for AI model weights, trading access latency for extremely high streaming bandwidth. He notes that single‑mode fiber can achieve 256 Tb/s over 200 km; at that rate, with roughly 1 ms of light transit time through the loop, about 32 GB of data is “in flight” inside the fiber at any moment, for an effective 32 TB/s of bandwidth.
The concept relies on the largely deterministic, sequential access pattern of neural‑network weights during inference: because data circulating in the loop can only be read as it exits the fiber, the fiber works as streaming storage for predictable weight retrieval rather than as random‑access memory.
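The figures above can be checked with a quick back‑of‑envelope calculation. This sketch assumes light propagates through silica fiber at roughly 2×10⁸ m/s (about two‑thirds of c); the constants are the article's stated link rate and loop length, not measured values.

```python
# Back-of-envelope check of the article's fiber-cache numbers.
C_FIBER = 2.0e8     # m/s: assumed speed of light in silica fiber (~2/3 c)
LOOP_LEN = 200e3    # m: 200 km fiber loop
LINK_RATE = 256e12  # bit/s: 256 Tb/s stated link rate

# One-way transit time of light through the loop.
transit_s = LOOP_LEN / C_FIBER                       # -> 1.0e-3 s (1 ms)

# Bits occupying the fiber at any instant = rate * transit time.
bytes_in_flight = LINK_RATE * transit_s / 8          # -> 3.2e10 B

# Draining the whole loop each transit gives the effective bandwidth.
effective_bw = bytes_in_flight / transit_s           # -> 3.2e13 B/s

print(f"transit time:        {transit_s * 1e3:.1f} ms")
print(f"data in flight:      {bytes_in_flight / 1e9:.0f} GB")
print(f"effective bandwidth: {effective_bw / 1e12:.0f} TB/s")
```

Running this reproduces the article's numbers: ~1 ms of transit, 32 GB in flight, and 32 TB/s effective bandwidth, which is why the scheme behaves like a high‑bandwidth, millisecond‑latency cache.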
Historical Context: Delay‑Line Memory
Several commenters compared the idea to delay‑line memory, a technology from the mid‑20th century that stored data as acoustic or electromagnetic waves in a medium (e.g., mercury). Although mercury proved problematic, the principle of using a physical propagation delay as storage is analogous to Carmack’s proposal.
Practical Considerations
- Power consumption: Optical transmission consumes far less power than keeping large DRAM arrays active, potentially offering energy savings.
- Cost and infrastructure: Deploying 200 km of fiber is expensive, and the required optical amplifiers and digital signal processors (DSPs) could offset some of the energy benefits.
- Alternative approaches: Carmack also suggested wiring many flash memory chips directly to AI accelerators, which would need a standardized interface but could be more feasible given current investment in AI hardware.
Limitations Highlighted by the Community
- The need for extensive fiber length and associated hardware.
- Energy overhead from amplifiers and DSPs.
- DRAM cost trends may improve, reducing the relative advantage of fiber.
- Some speculative ideas, such as using vacuum or space‑based lasers, were mentioned but are currently impractical.
Related Research
Several research projects have explored similar concepts of using non‑volatile or optical storage for AI workloads:
- Behemoth (2021) – USENIX FAST paper
- FlashGNN (2021) – IEEE Xplore
- FlashNeuron (2021) – USENIX FAST presentation
- Augmented Memory Grid (2025) – StartupHub AI news
These works demonstrate ongoing interest in alternative memory hierarchies for AI, suggesting that concepts like Carmack’s fiber cache could see practical implementation in the near future.