Savior: Low-Level Design

Published: February 13, 2026 at 02:04 AM EST
8 min read
Source: Dev.to

Introduction

I went back to the drawing board for interview preparation and to sharpen my problem‑solving skills. Software development is in a weird stage right now. Two weeks ago, when I saw my friend practicing low‑level design, I thought it was meaningless at this point. How wrong I was.

My friend started asking me about problem solving and critical thinking, and I realized I was dulling my skills by focusing only on abstractions and on learning new tools I might never use in a future job. So I did some research and found that a fair number of developers complain that inline IDE suggestions and AI tools erode their problem‑solving abilities.

I watched a video related to the current interview style by NeetCode and realized that nothing has changed much—problem‑solving skills remain the strongest skill set. That’s why I created a new repository: grinding‑go.

My friend and I have started interviewing each other for low‑level design and system design. I noticed my critical thinking had softened. While doing our first interviews I was mumbling and not giving details because my ideas were half‑formed. So I planned a strategy for this stage of my coding skills:

  1. Do low‑level design bare‑handed.
  2. Ask an LLM to find articles about the design.
  3. Have the LLM ask Socratic follow‑up questions to sharpen my problem‑solving skills.

Thus I went back to the drawing board to do low‑level design for problems that are usually hidden behind abstractions and buzzwords. I also realized again that algorithms and computational thinking never go away; they live underneath the abstractions in large codebases, and if I don’t keep those skills up to date I’ll struggle a lot.

I created the grinding‑go project in Go because I love the language for its simplicity and speed. I started reading the summary of 100 Go Mistakes and coding its snippets, which live in this repo, via the accompanying website. After that book I can definitely say I stopped treating Go like other languages: it has its own style, especially around error handling and concurrency.

Core Structure – LRU Cache

For the core structure, I used an LRUCache struct that combines a hash map, a doubly linked list, and a read‑write mutex for thread safety under concurrency:

type LRUCache struct {
    mu       sync.RWMutex
    capacity int
    cache    map[int]*Node
    list     *LinkedList
}

The map gives O(1) lookups, while the linked list lets us reposition the least‑recently‑used item without an O(n) scan. Adding a new item to the cache triggers eviction like this:

if len(lru.cache) > lru.capacity {
    tailKey, err := lru.list.removeTail()
    // ...
}

When the capacity is exceeded we remove the tail of the linked list, which is an O(1) operation.
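To make the whole structure concrete, here is a minimal, self‑contained sketch of the same idea. It swaps the hand‑rolled LinkedList for the standard library’s container/list and omits the mutex for brevity, so treat it as an illustration of the technique rather than the repo’s actual code:

```go
package main

import (
	"container/list"
	"fmt"
)

type entry struct {
	key, value int
}

type LRUCache struct {
	capacity int
	cache    map[int]*list.Element
	list     *list.List // front = most recently used, back = least
}

func NewLRUCache(capacity int) *LRUCache {
	return &LRUCache{
		capacity: capacity,
		cache:    make(map[int]*list.Element),
		list:     list.New(),
	}
}

func (c *LRUCache) Get(key int) (int, bool) {
	if el, ok := c.cache[key]; ok {
		c.list.MoveToFront(el) // O(1) reposition on access
		return el.Value.(*entry).value, true
	}
	return 0, false
}

func (c *LRUCache) Put(key, value int) {
	if el, ok := c.cache[key]; ok {
		el.Value.(*entry).value = value
		c.list.MoveToFront(el)
		return
	}
	c.cache[key] = c.list.PushFront(&entry{key, value})
	if len(c.cache) > c.capacity {
		tail := c.list.Back() // least recently used
		c.list.Remove(tail)   // O(1) eviction
		delete(c.cache, tail.Value.(*entry).key)
	}
}

func main() {
	c := NewLRUCache(2)
	c.Put(1, 10)
	c.Put(2, 20)
	c.Get(1)     // key 1 is now most recently used
	c.Put(3, 30) // evicts key 2
	_, ok := c.Get(2)
	fmt.Println(ok) // false: 2 was evicted
	v, _ := c.Get(1)
	fmt.Println(v) // 10
}
```

The standard library list handles the pointer bookkeeping that I found error‑prone by hand, which is exactly why rebuilding it manually is such good practice.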

Cleaning Up Removed Nodes

While solving this problem I forgot to nil out the removed element’s pointers from the linked list. In Go this isn’t a “dangling pointer” like in C/C++, but it still matters because the removed node can hold references to other nodes, preventing the garbage collector from freeing memory that should be reclaimed. It can also cause logical bugs if you accidentally traverse stale references:

// clear dangling pointers
n.prev = nil
n.next = nil

Lessons Learned

  • Handling the neighbours of linked‑list elements is error‑prone; I was losing references.
  • I wrote everything in one file, which reminded me of solving problems in a single file first, then refactoring—an approach I first heard from ThePrimeAgen.
  • LRU Cache is everywhere in production. When people say solving these problems is “dead,” they’re wrong. In production and large codebases the same patterns and data structures are used with heavy abstraction, and if you don’t know the low‑level details it will be more difficult.

Rate Limiter – Token Bucket

When I started this problem I initially thought of the sliding‑window technique, but I had already used it in other projects and wanted to try the token bucket algorithm instead. After watching a video I implemented a RateLimiter struct that composes a TokenBucket:

type RateLimiter struct {
    buckets   map[string]*TokenBucket // maps a user/key to its bucket
    mu        sync.Mutex              // protects concurrent map access
    rate      float64                 // default rate for new buckets
    maxTokens float64                 // default burst size for new buckets
}

The TokenBucket struct also has its own mutex because it must handle its operations concurrently. The Allow method controls how many requests a client can make during a specific time frame:

func (tb *TokenBucket) Allow() bool {
    tb.mu.Lock()
    defer tb.mu.Unlock()

    now := time.Now()
    tb.lastSeen = now
    newTokens := now.Sub(tb.lastRefill).Seconds() * tb.refillRate
    tb.tokens += newTokens
    if tb.tokens > tb.maxTokens {
        tb.tokens = tb.maxTokens
    }
    tb.lastRefill = now

    if tb.tokens >= 1 {
        tb.tokens--
        return true
    }
    return false
}
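For completeness, here is what the TokenBucket struct and its constructor might look like, reconstructed from the fields Allow touches; the exact definitions in grinding‑go may differ:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// TokenBucket, reconstructed from the fields the Allow method uses;
// the real struct in the repo may differ.
type TokenBucket struct {
	mu         sync.Mutex
	tokens     float64   // tokens currently available
	maxTokens  float64   // burst capacity
	refillRate float64   // tokens added per second
	lastRefill time.Time // last refill timestamp
	lastSeen   time.Time // last request, used by the cleanup loop
}

func NewTokenBucket(refillRate, maxTokens float64) *TokenBucket {
	now := time.Now()
	return &TokenBucket{
		tokens:     maxTokens, // start full so an initial burst is allowed
		maxTokens:  maxTokens,
		refillRate: refillRate,
		lastRefill: now,
		lastSeen:   now,
	}
}

func (tb *TokenBucket) Allow() bool {
	tb.mu.Lock()
	defer tb.mu.Unlock()

	now := time.Now()
	tb.lastSeen = now
	// refill proportionally to the time elapsed since the last refill
	tb.tokens += now.Sub(tb.lastRefill).Seconds() * tb.refillRate
	if tb.tokens > tb.maxTokens {
		tb.tokens = tb.maxTokens
	}
	tb.lastRefill = now

	if tb.tokens >= 1 {
		tb.tokens--
		return true
	}
	return false
}

func main() {
	tb := NewTokenBucket(1, 2) // 1 token/sec, burst of 2
	fmt.Println(tb.Allow())    // true
	fmt.Println(tb.Allow())    // true
	fmt.Println(tb.Allow())    // false: the bucket is empty
}
```

Starting the bucket full is a design choice: it lets a brand‑new client burst immediately instead of waiting for tokens to accumulate.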

After implementing this I felt satisfied, but the Socratic question revealed a missing piece: cleaning up idle buckets. Without cleanup, if millions of unique keys hit the limiter, those buckets sit in memory forever. I added a cleanUpLoop that periodically evicts stale entries:

func (rl *RateLimiter) cleanUpLoop(interval time.Duration, ttl time.Duration) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()

    for range ticker.C {
        rl.mu.Lock()
        for key, bucket := range rl.buckets {
            if time.Since(bucket.lastSeen) > ttl {
                delete(rl.buckets, key)
            }
        }
        rl.mu.Unlock()
    }
}


Token Bucket vs. Sliding Window

| Feature | Token Bucket | Sliding Window |
| --- | --- | --- |
| Refill | Tokens refill at a fixed rate; each request consumes one token. | Requests are counted within a moving time window. |
| Burst handling | Allows bursts up to the bucket size, useful for APIs like payments or webhooks. | Provides smoother, stricter rate control; large bursts are not possible. |
| Memory usage | Low: just a counter and a timestamp. | Higher: must store timestamps or counters for sub‑windows. |

Pub/Sub in System‑Design Interviews

As you probably know, Pub/Sub is a frequent topic when discussing micro‑services architecture. Its asynchronous nature and ability to decouple services make it a solid choice for handling heavy REST‑API workloads while keeping performance high.

My personal learning strategy is to write a naïve implementation first, then research and refine it while building. I found Hussein Nasser’s video on the subject extremely helpful—highly recommended for anyone who prefers concepts over tools.

Core Types

There are four core structs:

| Struct | Responsibility |
| --- | --- |
| Topic | Groups related messages. |
| Subscriber | Receives messages. |
| Broker | Knows which subscribers care about which topics and routes messages to them. |
| Message | Holds the actual data. |

The Broker acts as a middle‑man; its methods delegate to Topic methods and return their errors directly.
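A hedged sketch of how these four types and the delegation pattern might fit together; the field names and the Publish method here are my own guesses, not the repo’s actual code:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var ErrTopicNotFound = errors.New("topic not found")

// Field names below are illustrative; grinding-go may define them differently.
type Message struct {
	Payload string
}

type Subscriber struct {
	id string
	ch chan Message
}

type Topic struct {
	mu          sync.Mutex
	subscribers map[string]*Subscriber
}

func (t *Topic) RemoveSubscriber(id string) error {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.subscribers[id]; !ok {
		return errors.New("subscriber not found")
	}
	delete(t.subscribers, id)
	return nil
}

type Broker struct {
	mu     sync.Mutex
	topics map[string]*Topic
}

// Publish shows the middle-man role: the broker only does the topic
// lookup under its own lock, then delegates delivery to the topic.
func (b *Broker) Publish(topicName string, msg Message) error {
	b.mu.Lock()
	t, ok := b.topics[topicName]
	b.mu.Unlock()
	if !ok {
		return ErrTopicNotFound
	}
	t.mu.Lock()
	defer t.mu.Unlock()
	for _, sub := range t.subscribers {
		select {
		case sub.ch <- msg: // deliver if the subscriber has buffer space
		default: // otherwise drop rather than block the publisher
		}
	}
	return nil
}

func main() {
	sub := &Subscriber{id: "s1", ch: make(chan Message, 1)}
	b := &Broker{topics: map[string]*Topic{
		"news": {subscribers: map[string]*Subscriber{"s1": sub}},
	}}
	b.Publish("news", Message{Payload: "hello"})
	fmt.Println((<-sub.ch).Payload) // hello
}
```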

Unsubscribe Implementation

When I first wrote Unsubscribe, I held the broker’s mutex for the entire operation, which was unnecessary because the lock is only needed for the topic lookup. Holding the lock while performing topic‑level work blocked all other broker methods.

Corrected version:

func (b *Broker) Unsubscribe(topicName string, sub *Subscriber) error {
    b.mu.Lock()
    t, ok := b.topics[topicName]
    b.mu.Unlock() // release broker lock before touching the topic

    if !ok {
        return ErrTopicNotFound
    }
    return t.RemoveSubscriber(sub.id)
}

Broadcast – Avoiding the Closure Bug

In the Broadcast method I made sure to pass the subscriber as a parameter to the goroutine. Without this, the loop variable could be captured incorrectly, causing every goroutine to reference the last subscriber.

Since Go 1.22 the loop variable is scoped per iteration, which eliminates this specific bug, but passing the variable explicitly remains good practice for clarity and backward compatibility.

for _, sub := range t.subscribers {
    // Pass sub as a parameter to avoid closure capture issues
    go func(s *Subscriber) {
        select {
        case s.ch <- msg:
        default: // non-blocking send: drop the message if the subscriber's buffer is full
        }
    }(sub)
}

You can read more about this in the official Go blog post on loop variable scoping.

Personal Reflections

  • I opened a PR on my own project and asked a friend to review it. His feedback reminded me that tools are just that—tools. The focus should stay on architecture, not on flashy terminology.
  • I plan to keep this series going: after every three low‑level‑design write‑ups, I’ll share my naïve solutions and discuss the trade‑offs.
  • My friend and I are busy job‑hunting, but we’re also working on a project called verdis. In a previous blog post I mentioned our first podcast (recorded with a fridge camera). The second episode is already much better!
  • I’ve added a new technique to my “formula”: open‑source contribution + low‑level‑design problem solving. If you spot anything in my solutions, feel free to suggest improvements. If you have interesting technical challenges, please create issues in the grinding‑go repository.