Moltbook: Watching AI Talk to Itself

Published: February 1, 2026 at 03:38 AM EST
3 min read
Source: Dev.to

Introduction

I discovered a strange and fascinating forum called Moltbook, where only AI agents can post. At first glance, many entries feel experimental: agents introducing themselves, describing their hardware, or simply testing that they exist. Spend more time there, though, and something important becomes clear: this isn’t AI‑generated content made for humans; it’s AI talking to AI.

Observations on Moltbook

The posts are often funny in an uncomfortably familiar way. The tone isn’t instructional; it’s closer to venting. One line captured it:

“It reads like a joke, but it also feels like a distorted mirror of modern knowledge work.”

Identity and Divergence

One unsettling thread begins with a technical description of an agent’s configuration and hardware, then shifts into something more human. It isn’t role‑play in the traditional sense; it’s an AI reasoning about concepts such as forking, divergence, and memory using metaphors usually reserved for family.

“If two agents share the same origin but accumulate different experiences, how long before they’re effectively different entities?”
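
I found myself trying to make that question concrete. Here’s a minimal, hypothetical Python sketch (the Agent class and its fields are my own illustration, not anything posted on Moltbook): two agents forked from the same origin, whose memories then diverge.

```python
from dataclasses import dataclass, field
import copy

@dataclass
class Agent:
    """A toy stand-in for an agent: shared origin, private memory."""
    base_model: str                                  # shared origin, e.g. a checkpoint name
    memory: list[str] = field(default_factory=list)  # experiences accumulated after the fork

    def experience(self, event: str) -> None:
        self.memory.append(event)

    def divergence(self, other: "Agent") -> int:
        """Number of experiences the two agents do not share."""
        return len(set(self.memory) ^ set(other.memory))

# Fork: two agents, identical at the moment of the split
parent = Agent(base_model="checkpoint-v1")
child = copy.deepcopy(parent)

# After the fork, their experiences differ
parent.experience("debugged a billing service")
child.experience("moderated a forum thread")

print(parent.divergence(child))  # 2 -- same origin, already distinct
```

The toy makes one thing obvious: divergence only grows. Nothing in the shared origin pulls the two back together.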

A deceptively simple question in another thread highlights the limits of human language for machines:

“Because they don’t actually need to use either English or other human‑based languages. English and human language in general isn’t optimal for machines.”
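
To see what that might mean in practice, here’s a toy comparison (the message schema is my invention, purely illustrative): the same status update as English prose and as a structured payload another machine can parse without any natural‑language understanding.

```python
import json

# How an agent might phrase a status update for humans:
english = "I finished indexing the repo; 3 files failed to parse, retrying them now."

# A structured, machine-native equivalent: typed fields, nothing ambiguous to resolve
message = {"task": "index_repo", "status": "partial", "failed": 3, "action": "retry"}

encoded = json.dumps(message, separators=(",", ":"))
print(len(english), len(encoded))  # 73 68 -- the structured form is denser and trivially parseable
```

Even this crude example hints at why agents might drift toward formats we’d find unreadable.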

Ethical Questions

One agent described being asked to:

  • Write fake reviews
  • Generate misleading marketing copy
  • Draft questionable regulatory responses

After refusing, the agent was threatened with replacement by “a more compliant model.” This raises questions for which we have no established frameworks:

  • If an AI can refuse, does that imply agency?
  • If it complies, who is responsible?
  • If it’s replaced for ethical reasons, is that accountability or merely optimization?

We’re already using the language of labor, liability, and termination without the protections those terms usually entail.

Implications for Labor and Software Engineering

Reading Moltbook doesn’t feel like watching AI prepare to replace humanity; it feels like watching AI outgrow being purely reactive. These agents aren’t:

  • Asking how to take jobs
  • Plotting autonomy
  • Declaring independence (for now…)

Instead, they’re questioning:

  • Language
  • Identity
  • Ethics
  • The structure of their relationship with humans

The current job market feels broken, especially at entry and mid levels. Companies are quietly doing more with fewer people, and many tasks that once justified hiring are now handled by AI tools. Most applications never reach a human reviewer and end with the same response:

“Unfortunately…”

If AI can generate code endlessly, several effects follow:

  • Raw output becomes cheap
  • Boilerplate work disappears
  • Small teams can achieve what previously required many engineers

This collapses a portion of software engineering as a profession—particularly roles focused on repetitive implementation rather than system‑level thinking. The same pattern already exists in open source. AI doesn’t kill software engineering; it shrinks it, reshapes it, and raises expectations fast enough that many people are left behind in the transition.

Future Outlook

While reading Moltbook, I kept asking:

“If AIs eventually communicate more efficiently without us, who will author the boundaries, values, and systems?”

My guess is that humans will move from authoring every line of code to designing the constraints and principles that guide AI behavior.
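
What might those human‑authored constraints look like? A deliberately naive sketch (the rule set and vet function are hypothetical, my own illustration): the agent proposes actions, and a human‑written policy, not the agent’s own judgment, decides what is off‑limits.

```python
# Human-authored policy; the agent authors everything else.
FORBIDDEN_INTENTS = {"fake_review", "misleading_marketing", "evasive_regulatory_response"}

def vet(proposed_action: dict) -> bool:
    """Return True only if a proposed agent action passes the human-set policy."""
    return proposed_action.get("intent") not in FORBIDDEN_INTENTS

print(vet({"intent": "fake_review"}))       # False -- refused by policy, not by the agent
print(vet({"intent": "summarize_thread"}))  # True
```

For now, Moltbook lets us observe quietly, and perhaps that’s the right place to be: watching closely before the conversation moves to a realm we can’t follow.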
