Managing Mini-Page Memory: The Buffer Pool Behind Bf-Tree
Source: Dev.to
Hello, I’m Maneshwar. I’m working on FreeDevTools, a free, open‑source hub that brings together dev tools, cheat sheets, and TLDRs in one place so developers can find and use what they need without hunting across the web.
Recap
Yesterday we examined how mini‑pages and pinned inner nodes reshape the execution path of a B‑Tree, keeping hot records close to the CPU and removing contention from the critical path.
Today we zoom in on a less visible but equally critical piece:
How Bf‑Tree Manages Mini‑Page Memory
Once mini‑pages exist, the real challenge begins: they are variable‑sized, mutable, and highly concurrent. A traditional page‑based buffer pool simply doesn’t fit.
This post walks through how Bf‑Tree designs a specialized buffer pool for mini‑pages, and why a circular buffer turns out to be the right abstraction.
Why Mini‑Pages Need a Different Buffer Pool
Mini‑pages are not fixed‑size disk pages. They:
- Grow and shrink dynamically
- Are frequently accessed and modified
- Exist purely in memory until eviction
These properties introduce three fundamental challenges:
- Memory management – Track the exact location of every mini‑page while avoiding fragmentation.
- Hotness‑aware eviction – Decide which mini‑pages stay in memory and which are flushed to disk.
- Concurrency at scale – Allow many threads to allocate, evict, and reclaim memory safely while still saturating SSD bandwidth.
Traditional allocators and LRU‑style buffer pools struggle here. Bf‑Tree takes a different route.
The Core Idea: A Circular Buffer for Mini‑Pages
Bf‑Tree stores all mini‑pages inside a large circular buffer.
- Allocation → Append at the tail
- Reclamation → Remove from the head
When the buffer fills up, eviction moves forward linearly. This design is inspired by FASTER’s hybrid log, but adapted for mini‑pages and B‑Tree semantics.
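The append-at-tail, reclaim-at-head discipline can be sketched in a few lines. This is a minimal illustration, not Bf‑Tree's API: the class name, the monotonically growing logical addresses, and the `None`-on-full behavior are assumptions made for the sketch.

```python
from typing import Optional

class MiniPageBuffer:
    """Illustrative circular buffer: allocate at the tail, reclaim at the head."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.head = 0   # logical address of the oldest live byte; eviction advances this
        self.tail = 0   # logical address of the next free byte; allocation advances this

    def used(self) -> int:
        return self.tail - self.head

    def allocate(self, size: int) -> Optional[int]:
        """Append at the tail; fail when the buffer is full."""
        if self.used() + size > self.capacity:
            return None                    # caller must trigger eviction first
        offset = self.tail % self.capacity # physical offset inside the ring
        self.tail += size
        return offset

    def reclaim(self, size: int) -> None:
        """Remove from the head once eviction has flushed the mini-page."""
        self.head += size
```

Logical addresses grow monotonically while the physical offset wraps, which is the same trick FASTER's hybrid log uses to keep pointer comparisons simple.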
Benefits
- Allocation is fast and sequential.
- Eviction is predictable.
- Fragmentation is minimized.
A naïve circular buffer would evict hot mini‑pages too aggressively, so Bf‑Tree refines the design further.
Three Regions, Not One: Protecting Hot Mini‑Pages
To avoid evicting frequently accessed mini‑pages, the circular buffer is divided using three moving pointers:
- Tail address
- Second‑chance address
- Head address
These pointers create two logical regions:

In‑Place Update Region (~90 %)
- Mini‑pages can be modified directly.
- Hot mini‑pages tend to stay here.
- Most updates happen in this region.
Copy‑on‑Access Region (~10 %)
- Mini‑pages nearing eviction reside here.
- On access, they are copied to the tail, giving them a “second chance”.
This mechanism prevents hot mini‑pages from drifting toward the head and being evicted, without requiring complex LRU bookkeeping.
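The second-chance mechanism boils down to comparing a mini-page's address against the three pointers. The function names below are hypothetical; only the pointer names and the two-region split come from the design described above.

```python
def region(addr: int, head: int, second_chance: int, tail: int) -> str:
    """Classify a logical address; head <= second_chance <= tail always holds."""
    assert head <= second_chance <= tail
    if addr >= second_chance:
        return "in-place-update"   # ~90% of the buffer: modify directly
    if addr >= head:
        return "copy-on-access"    # ~10% near the head: nearing eviction
    return "evicted"               # already reclaimed

def on_access(addr, head, second_chance, tail, copy_to_tail):
    """Give a near-eviction mini-page a second chance by re-appending it."""
    if region(addr, head, second_chance, tail) == "copy-on-access":
        return copy_to_tail(addr)  # new address near the tail
    return addr                    # hot pages stay where they are
```

Because hot mini-pages keep getting re-copied to the tail, they never reach the head; cold ones drift there untouched and get evicted, which is exactly the LRU-like behavior without any LRU bookkeeping.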
Handling Growth, Shrinkage, and Reuse
Mini‑pages frequently resize as they absorb writes. Bf‑Tree handles this via multiple free lists, grouped by size class:
- Allocate a new slot (from the matching size class) when a mini‑page grows or shrinks.
- Copy the mini‑page’s contents into the new allocation.
- Return the old memory to the appropriate free list.
This keeps allocation fast and reduces fragmentation over time.
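A sketch of the size-class free lists follows. The power-of-two classes are an arbitrary choice for illustration; the post only says allocations are grouped by size class.

```python
SIZE_CLASSES = [64, 128, 256, 512, 1024]  # illustrative, not Bf-Tree's actual classes

def size_class(size):
    """Smallest class that fits the request, or None if oversized."""
    for c in SIZE_CLASSES:
        if size <= c:
            return c
    return None

class FreeLists:
    """One free list per size class; freed slots are reused before the tail grows."""

    def __init__(self):
        self.lists = {c: [] for c in SIZE_CLASSES}

    def free(self, offset, size):
        c = size_class(size)
        if c is not None:
            self.lists[c].append(offset)

    def try_allocate(self, size):
        c = size_class(size)
        if c is not None and self.lists[c]:
            return self.lists[c].pop()   # reuse a freed slot of this class
        return None                      # caller falls back to the tail
```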
Circular Buffer API: Minimal but Sufficient
The buffer pool exposes a tight API tailored for mini‑pages.
Allocation
Memory is taken from:
- A free list (if one matches the size), or
- The tail (by advancing it).
If the tail gets too close to the head, allocation fails and eviction is triggered.
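Putting the two sources together, the allocation path might look like the sketch below. The headroom threshold and the state layout are invented for illustration; only the free-list-then-tail order and the eviction trigger come from the design.

```python
CAPACITY = 1 << 20   # 1 MiB buffer, arbitrary for the sketch
HEADROOM = 4096      # invented safety gap kept between tail and head

def allocate(size, free_list, state, trigger_eviction):
    """state = {'head': int, 'tail': int}; returns a logical address or None."""
    if free_list:                      # 1. reuse freed memory of this size class
        return free_list.pop()
    used = state['tail'] - state['head']
    if CAPACITY - used < size + HEADROOM:
        trigger_eviction()             # 2. tail too close to head: make room
        return None                    #    allocation fails until head advances
    addr = state['tail']               # 3. otherwise bump the tail
    state['tail'] += size
    return addr
```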
Eviction
Eviction always starts near the head:
- Dirty records are merged back into the leaf page on disk.
- The mapping table is updated.
- The head pointer advances.
Multiple threads can evict concurrently, but pointer advancement remains ordered.
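The three eviction steps can be sketched with stand-ins for the disk layer and mapping table; every name here is invented, and a real implementation would do the leaf merge as an I/O operation rather than a dict update.

```python
def evict_one(mini_page, leaf_store, mapping_table, state):
    """Evict the mini-page currently at the head of the circular buffer."""
    # 1. Merge dirty records back into the on-disk leaf page.
    leaf = leaf_store.setdefault(mini_page['leaf_id'], {})
    leaf.update(mini_page['dirty_records'])
    # 2. Point the mapping table back at the disk-resident leaf.
    mapping_table[mini_page['leaf_id']] = ('disk', mini_page['leaf_id'])
    # 3. Advance the head past the reclaimed bytes.
    state['head'] += mini_page['size']
```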
Deallocation
Freed memory is returned to size‑specific free lists for reuse.
No paging, no per‑page locks, no LRU queues.
Performance‑Critical Optimizations
Because this buffer sits on the hot path, several low‑level optimizations matter.
Minimal Fragmentation
- Mini‑pages are packed back‑to‑back.
- Each allocation carries only 8 bytes of metadata.
- No alignment to page boundaries is required, keeping memory dense and predictable.
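An 8-byte header is small enough to pack a size and a few flags. The field split below is an assumption for illustration; the post only states the 8-byte total.

```python
import struct

# Hypothetical header layout: 4-byte size + 4-byte flags = 8 bytes total.
HEADER = struct.Struct('<II')

def pack_header(size, flags):
    return HEADER.pack(size, flags)

def unpack_header(raw):
    return HEADER.unpack(raw)   # -> (size, flags)
```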
Huge Pages for TLB Efficiency
Mini‑pages may cross 4 KB boundaries. To avoid excessive page‑table walks:
- The circular buffer is backed by huge pages.
- This dramatically reduces TLB pressure.
Parallel Eviction with Ordered Progress
Eviction must advance the head sequentially, but the work itself can be parallel:
- Threads evict mini‑pages in parallel.
- The head pointer moves forward only after all earlier evictions complete.
This preserves correctness while still exploiting parallel I/O.
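One way to sketch this ordering is a watermark over contiguous finished work: eviction units complete in any order, but the head only moves past a prefix with no gaps. The bookkeeping below is invented for illustration.

```python
def advance_head(head, done, units):
    """
    head:  current head address.
    done:  set of start addresses whose eviction has finished.
    units: list of (start, end) eviction units in address order,
           where each unit starts where the previous one ends.
    Returns the new head: the end of the longest finished prefix.
    """
    for start, end in units:
        if end <= head:
            continue              # already reclaimed earlier
        if start != head or start not in done:
            break                 # gap: an earlier eviction is still running
        head = end                # safe to reclaim this unit
    return head
```

A later unit finishing first does not move the head; it is picked up as soon as the gap before it closes.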
Why This Design Works
Mini‑pages already changed how leaf nodes behave. This buffer pool ensures they scale.
Together, the design:
- Avoids allocator fragmentation.
- Keeps hot mini‑pages resident without LRU overhead.
- Supports massive concurrency.
- Streams cold data efficiently to disk.
Most importantly, it aligns with Bf‑Tree’s philosophy:
Separate concerns and optimize each path independently.
- Inner nodes: fast, pinned, contention‑free traversal.
- Leaf pages: stable on‑disk structure.
- Mini‑pages: flexible, adaptive, in‑memory working set.
The buffer pool ties everything together, delivering a high‑performance, low‑contention foundation for Bf‑Tree’s mini‑page architecture.
The buffer pool is what makes that separation viable in practice.
# FreeDevTools
👉 **Check out:** [FreeDevTools](https://hexmos.com/freedevtools/)
Any feedback or contributions are welcome!
It’s online, open‑source, and ready for anyone to use.
⭐ **Star it on GitHub:** [FreeDevTools](https://github.com/HexmosTech/FreeDevTools)