Actionbase: One Database for Likes, Views, and Follows
Source: Dev.to

Every social platform needs the same features — likes, views, follows, bookmarks. And every team builds them from scratch, hitting the same walls: fan‑out writes, slow counts, inconsistent reads.
Actionbase is an open‑source database that treats user interactions as a first‑class data model. It exposes a REST API — just HTTP calls to read and write. It’s been serving 1M+ requests per minute in production at Kakao, the company behind KakaoTalk.
The Problem
A simple “like” button touches more than you’d think:
- Did this user already like this post? → edge lookup
- How many likes does this post have? → count
- Show all posts this user liked, newest first → scan
With a general‑purpose database, you compute these on every read. At scale, that means slow queries, cache‑invalidation headaches, and duplicated logic across teams.
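To make that concrete, here is a minimal sketch of those three reads against a conventional relational schema. The table and column names are illustrative assumptions, not anything from Actionbase; the point is that each read does real work on every request.

```python
# Sketch: the three reads a "like" button needs, each computed on every
# request against a generic relational schema (names are illustrative).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE likes (user_id TEXT, post_id TEXT, created_at INTEGER)")
db.executemany("INSERT INTO likes VALUES (?, ?, ?)", [
    ("user-123", "post-456", 100),
    ("user-123", "post-789", 200),
    ("user-999", "post-456", 300),
])

# 1. Edge lookup: did this user already like this post?
liked = db.execute(
    "SELECT 1 FROM likes WHERE user_id = ? AND post_id = ?",
    ("user-123", "post-456"),
).fetchone() is not None

# 2. Count: how many likes does this post have?
(count,) = db.execute(
    "SELECT COUNT(*) FROM likes WHERE post_id = ?", ("post-456",)
).fetchone()

# 3. Scan: all posts this user liked, newest first.
recent = [row[0] for row in db.execute(
    "SELECT post_id FROM likes WHERE user_id = ? ORDER BY created_at DESC",
    ("user-123",),
)]

print(liked, count, recent)  # True 2 ['post-789', 'post-456']
```

Each of these is cheap in isolation, but at millions of requests per minute the counts and scans are what push teams toward caches and denormalized tables.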
How Actionbase Works
Every interaction is modeled as a graph edge:
user-123 → like → post-456
user-123 → follow → user-789
user-123 → view → product-012
Who did what to which target.
When a write comes in, Actionbase precomputes all derived data — forward edges, reverse edges, indexes, and counts — in a single operation. Reads are just lookups.
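Conceptually, the write path looks something like the toy in-memory sketch below. This is my own illustration of the precompute idea, not Actionbase's actual storage code: one write updates the forward edges, reverse edges, a recency index, and both counters, so every read afterward is a plain lookup.

```python
# Toy sketch of write-time precomputation (illustrative, not Actionbase's
# real implementation): a single add() maintains every derived structure.
from collections import defaultdict

class EdgeStore:
    def __init__(self):
        self.forward = defaultdict(set)    # (source, action) -> {target}
        self.reverse = defaultdict(set)    # (target, action) -> {source}
        self.recent = defaultdict(list)    # (source, action) -> targets, newest first
        self.out_count = defaultdict(int)  # (source, action) -> outgoing edges
        self.in_count = defaultdict(int)   # (target, action) -> incoming edges

    def add(self, source, action, target):
        key = (source, action)
        if target in self.forward[key]:
            return  # idempotent: liking twice is a no-op
        self.forward[key].add(target)
        self.reverse[(target, action)].add(source)
        self.recent[key].insert(0, target)
        self.out_count[key] += 1
        self.in_count[(target, action)] += 1

store = EdgeStore()
store.add("user-123", "like", "post-456")
store.add("user-999", "like", "post-456")

# Reads are lookups, no computation at read time:
"post-456" in store.forward[("user-123", "like")]  # edge exists?
store.in_count[("post-456", "like")]               # 2 likes on post-456
```

The trade-off is the classic one: writes do more work up front so that reads, which vastly outnumber writes on social features, stay constant-time.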
Quick Start

docker run -it ghcr.io/kakao/actionbase:standalone
This runs the server (port 8080) in the background and a CLI in the foreground. Load sample data:
load preset likes
This creates three edges:
Alice ── likes ──> Phone
Bob ──── likes ──> Phone
Bob ──── likes ──> Laptop
Query — precomputed, just read:
get --source Alice --target Phone # Alice → Phone
scan --index recent --start Bob --direction OUT # Bob's likes
scan --index recent --start Phone --direction IN # Phone's likers
count --start Alice --direction OUT # 1
count --start Phone --direction IN # 2
What’s Next
This is the first post in the Actionbase Stories series. Upcoming posts will cover patterns for adopting Actionbase into real running systems — from gradual migration to async processing to CQRS integration.
Actionbase wasn’t built complete from day one. It was deployed early and evolved under production pressure — surviving incidents, earning trust, and adding verification layers along the way. Those stories are coming too.
GitHub:
Feedback welcome — issues, discussions, or comments here.