7 Mental Models That Made Me a Better Software Architect

Published: February 23, 2026 at 08:36 PM EST
9 min read
Source: Dev.to

Introduction

Most software architects I know have read the same books, watched the same conference talks, and absorbed the same design patterns. And yet, some consistently make better architectural decisions than others.

The difference isn’t talent. It’s not even experience. It’s how they think.

Three years ago, I was a competent backend engineer who could design systems that worked. Today, I architect systems that last. The turning point wasn’t a new framework or a certification. It was discovering Charlie Munger’s concept of a “latticework of mental models” and realizing it applies to software architecture as powerfully as it does to investing.

Munger, Warren Buffett’s longtime business partner, argues that relying on a single discipline’s thinking tools is like fighting with one hand tied behind your back.

“You’ve got to have models in your head, and you’ve got to array your experience, both vicarious and direct, on this latticework of models.”

Below are seven mental models from outside software engineering that fundamentally changed how I approach system design.

1. Second‑Order Thinking

The Model – Second‑order thinking asks “And then what?” Most people stop at first‑order consequences. Better thinkers go two or three levels deep.

In Architecture – When a team proposed adding a caching layer to fix latency issues, first‑order thinking said “Great, faster responses.”

Second‑order thinking revealed a different picture:

  1. Cache invalidation would require a new eventing system.
  2. That eventing system would create ordering guarantees we’d need to maintain.
  3. Those ordering guarantees would constrain our future sharding strategy.
  4. The sharding constraints would limit our scaling approach for the next 18 months.

We didn’t skip the cache, but we chose a cache‑aside pattern with TTL‑based expiration instead of event‑driven invalidation—simpler, with fewer ripples.
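A minimal sketch of the cache-aside-with-TTL choice, assuming a single-process in-memory store (the `loader` callable stands in for the real database read; names are illustrative):

```python
import time

class TTLCache:
    """Minimal cache-aside store: entries expire after a fixed TTL,
    so no event-driven invalidation machinery is needed."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_timestamp)

    def get_or_load(self, key, loader):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            return entry[0]                       # cache hit, still fresh
        value = loader(key)                       # miss or expired: read through
        self._store[key] = (value, now + self.ttl)
        return value

# Usage: staleness is bounded by the TTL instead of by invalidation events.
cache = TTLCache(ttl_seconds=30)
price = cache.get_or_load("sku-123", lambda k: 42)  # lambda stands in for a DB call
```

The trade-off is explicit: you accept up to `ttl_seconds` of staleness in exchange for deleting the eventing system and its ordering guarantees.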

How to apply it – Before any architectural decision, write down three levels of consequences:

| Level | What to capture |
|---|---|
| First-order | Immediate effect |
| Second-order | Reactions to the effect |
| Third-order | Reactions to the reactions |

I keep a simple template for this in my decision docs.

2. “The Map Is Not the Territory” – Alfred Korzybski

The Model – Our representations of reality are not reality itself. Every map omits details. Every abstraction leaks.

In Architecture – I once spent two weeks designing what I thought was an elegant event‑driven microservices architecture. Beautiful diagrams. Clean separation of concerns. The architecture review went smoothly.

In production, the system buckled under a pattern nobody had mapped: cascading retry storms. Our diagrams showed happy‑path message flows but didn’t show what happens when three services simultaneously retry failed messages with exponential back‑off that accidentally synchronize.
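The synchronization problem has a well-known mitigation: add randomness ("jitter") to the backoff so independent clients spread out instead of retrying in lockstep. A sketch of the "full jitter" variant (parameter values are illustrative):

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """'Full jitter' exponential backoff: pick a random delay in
    [0, min(cap, base * 2**attempt)], so three services retrying the
    same failure don't synchronize into a coordinated storm."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))

# Each client draws its own delays; no two retry schedules align.
delays = [backoff_delay(a) for a in range(5)]
```

Deterministic exponential backoff, by contrast, gives every client the identical schedule, which is exactly the accidental synchronization our diagrams never showed.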

Now I perform a “map audit.” For every architecture diagram, I ask:

  • What does this diagram NOT show?
  • What assumptions are baked into the boxes and arrows?
  • Where are the failure modes that exist in the territory but not on this map?

How to apply it – Add a “What This Diagram Doesn’t Show” section to every architecture document. List at least five items. You’ll be surprised how often the missing pieces are where production incidents live.

3. Inversion – “Invert, Always Invert” (Carl Jacobi, via Munger)

The Model – Instead of asking “How do I build a great system?” ask “How would I guarantee this system fails?” Then avoid those things.

In Architecture – Before designing our payment‑processing pipeline, I ran an inversion exercise with the team:

“How would we guarantee this system loses money?”

The answers were illuminating:

| Guaranteed failure | Resulting architectural requirement |
|---|---|
| Process the same payment twice (idempotency failure) | Idempotency keys |
| Accept payments when the ledger is down (consistency failure) | Synchronous ledger writes |
| Make it impossible to audit what happened (observability failure) | Structured audit logging |
| Deploy changes without a way to roll back (deployment failure) | Blue-green deployments with instant rollback |

This approach consistently produces more robust architectures than starting with feature requirements.
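The first requirement in the table, idempotency keys, can be sketched in a few lines. This toy version uses an in-memory dict; a real payment system would use a durable store, and the function name and result shape here are illustrative:

```python
processed = {}  # idempotency_key -> result of the original request

def charge(idempotency_key: str, amount_cents: int) -> dict:
    """Process a payment at most once per client-supplied key.
    A retried request with the same key replays the original result
    instead of charging the customer again."""
    if idempotency_key in processed:
        return processed[idempotency_key]   # duplicate: replay, don't re-charge
    result = {"status": "charged", "amount_cents": amount_cents}  # stand-in for the real charge
    processed[idempotency_key] = result
    return result

first = charge("key-abc", 500)
retry = charge("key-abc", 500)   # e.g. a network timeout caused a client retry
```

The key is generated by the client, so even a retry the server never saw before is recognized as a duplicate of the original intent.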

4. Hanlon’s Razor

The Model – “Never attribute to malice that which is adequately explained by stupidity.” In system design, extend it: never attribute to attack what can be explained by confused usage.

In Architecture – Our internal API was being “abused” by a partner team making 10× the expected calls. My first instinct was to add aggressive rate limiting. Hanlon’s Razor made me pause.

Investigation revealed their service was retrying on every non‑200 response, including 404s for resources that legitimately didn’t exist. They weren’t abusing our API; our API was returning confusing responses.

The fix wasn’t rate limiting. It was:

  1. Clearer response codes with actionable error messages.
  2. A Retry-After header on genuinely retriable errors.
  3. An X-Not-Retriable: true header on permanent failures.

Traffic normalized within a day. No rate limiting needed.
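The response scheme above might be sketched as follows. The status-code groupings are my illustrative choice, and `X-Not-Retriable` is the article's custom header, not a standard one:

```python
def error_response(status: int) -> tuple[int, dict]:
    """Attach explicit retry guidance to error responses so client
    retry loops don't have to guess which failures are transient."""
    headers = {}
    if status in (429, 503):        # transient: safe to retry after a delay
        headers["Retry-After"] = "5"
    elif status in (404, 422):      # permanent: retrying cannot succeed
        headers["X-Not-Retriable"] = "true"
    return status, headers

status, headers = error_response(404)  # the partner's case: a missing resource
```

The point is that the server, which knows whether a failure is retriable, says so explicitly instead of leaving every client to implement its own guesswork.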

How to apply it – When you see unexpected system behavior, assume confusion before malice. Design APIs and interfaces that make the right thing easy and the wrong thing obvious.

5. Margin of Safety

The Model – In investing, margin of safety means buying assets well below their intrinsic value to account for errors in your analysis. In engineering, it means building in capacity buffers for what you can’t predict.

In Architecture – I used to size systems for projected peak load plus 20 %. That’s not a margin of safety; it’s optimistic planning with a thin buffer.

Real margin of safety in architecture means:

  • Capacity: Design for a comfortable multiple of projected peak, not 1.2×. The cost difference is usually trivial compared to a re‑architecture project.
  • Complexity: If a junior developer can’t understand the system from the docs in a day, your complexity margin is gone.
  • Dependencies: If removing any single dependency breaks everything, you have no margin.
  • Time: If your deploy pipeline takes 45 minutes and your SLA requires …
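The capacity point can be made concrete with a quick headroom check (the 2× factor here is illustrative, not a universal rule):

```python
def capacity_with_margin(projected_peak_rps: float, safety_factor: float = 2.0) -> float:
    """Provision for a multiple of projected peak rather than peak + 20%.
    The safety factor absorbs forecasting error and unplanned growth."""
    return projected_peak_rps * safety_factor

# Thin buffer vs. a real margin of safety for a 1,000 rps projected peak:
thin_buffer = 1000 * 1.2                    # gone after one forecasting miss
real_margin = capacity_with_margin(1000)    # survives being wrong by 2x
```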

6. Occam’s Razor

In philosophy: Among competing hypotheses, the one with the fewest assumptions should be selected.

In architecture: Among competing designs that meet requirements, choose the one with the fewest moving parts.

Example: ML‑Based Autoscaling vs. Simple Threshold Autoscaling

| Aspect | ML‑Based Autoscaler | Simple Threshold Autoscaler |
|---|---|---|
| Components | Data pipeline to collect traffic metrics; training pipeline for the prediction model; model‑serving infrastructure; custom autoscaler that consumes predictions; fallback system for wrong predictions | Aggressive scale‑up threshold; conservative scale‑down threshold; scheduled scaling rule for known patterns (e.g., Monday mornings, end‑of‑month processing) |
| Coverage | Handles ~98% of scenarios | Handles ~95% of scenarios |
| Failure modes | More (complex) | Five fewer |
| Operational overhead | Requires ML expertise; ~2 months to ship | No ML expertise; operational in 2 days |
| Decision | Not worth the added complexity for the marginal gain | Chosen as the pragmatic solution |
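The entire threshold autoscaler fits in one function. This is a sketch with illustrative thresholds and limits; the gap between the scale-up and scale-down thresholds is the hysteresis that prevents flapping:

```python
def desired_replicas(current: int, cpu_pct: float,
                     scale_up_at: float = 70.0, scale_down_at: float = 30.0,
                     min_replicas: int = 2, max_replicas: int = 20) -> int:
    """Aggressive scale-up, conservative scale-down.
    Between the two thresholds, do nothing - that dead band
    keeps the fleet from oscillating around a single value."""
    if cpu_pct > scale_up_at:
        return min(max_replicas, current * 2)   # double quickly under load
    if cpu_pct < scale_down_at:
        return max(min_replicas, current - 1)   # shed one replica at a time
    return current

desired = desired_replicas(current=4, cpu_pct=85.0)
```

Compare the failure surface: this has no data pipeline, no model, no serving infrastructure, and no fallback path to keep healthy.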

How to Apply It

For every architectural component, ask:
What is the simplest version that solves 90 % of the problem?
Build that first. Add complexity only when you have evidence the simple version is insufficient.

7. Circle of Competence (Munger & Buffett)

Investing insight: Operate within your genuine expertise and be honest about its boundaries.

Translating to Architecture

| Zone | Competence level | Example technologies |
|---|---|---|
| Inside | High confidence | Java microservices, PostgreSQL, REST APIs, basic Kubernetes |
| Edge | Moderate confidence | Event streaming with Kafka, gRPC |
| Outside | Low confidence | Machine‑learning infrastructure, real‑time data pipelines, multi‑region active‑active deployment |

Migration Phases

  1. Phase 1 – Inside the Circle
    Migrate using technologies we already master.
    Result: High confidence, fast delivery.

  2. Phase 2 – Edge of the Circle
    Pair with learning opportunities; expand skill set.
    Result: Moderate confidence, built‑in skill development.

  3. Phase 3 – Outside the Circle
    Bring in specialists for the truly unfamiliar.
    Result: Honest about limits, reduces risk.

This sounds obvious, but teams often commit to architectures outside their competence because admitting “we don’t know how to build this” feels uncomfortable.

A Latticework of Mental Models

I’ve started exploring resources that catalog cross‑disciplinary thinking frameworks. One that resonated is KeepRule’s principles collection, which maps mental models from thinkers like Munger and Buffett to practical decision‑making contexts beyond investing.

Combining Models for Architectural Decisions

When evaluating a new proposal, I run through a quick checklist (≈ 30 minutes per major decision). It has saved months of rework.

| Checklist item | Prompt |
|---|---|
| Second‑order thinking | What are the downstream consequences through three levels? |
| Map vs. territory | What isn’t represented in this design? |
| Inversion | How would we guarantee this fails? |
| Hanlon’s Razor | Are we designing for how people will actually use this? |
| Margin of safety | Where are our buffers, and are they sufficient? |
| Occam’s Razor | Is this the simplest design that meets requirements? |
| Circle of competence | Can we actually build and maintain this? |

Why Look Outside Software Engineering?

The biggest insight isn’t any single model; it’s that the best architectural thinking comes from outside architecture. Every model above originated in philosophy, mathematics, investing, or general reasoning—not in a software‑engineering textbook.

If you only read software‑engineering content, you’ll think only in software‑engineering patterns.
The architects who consistently make the best decisions are the ones reading widely: economics, psychology, biology, history.

Munger was right. You need a latticework. Start building yours.

Your Turn

What mental models from outside software engineering have improved your technical decision‑making?
I’m always looking to expand my latticework—drop your favorites in the comments.
