Hacking the Narrative: How I Used NotebookLM to Write a Novel Where the AI 'Loses Humanity' Chapter by Chapter

Published: February 5, 2026 at 09:52 PM EST
3 min read
Source: Dev.to

The Concept: “Textual Gradation”

As an engineer and an author, I asked myself a question:

“Can I simulate the loss of humanity not just through plot, but through the texture of the text itself?”

Most people use AI to write “clean” text. I wanted the opposite. I wanted to see if I could use Google’s NotebookLM and Gemini 1.5 Pro to create a story where the prose starts rich and emotional, but gradually degrades into mechanical, logical, and cold output as the protagonist mechanizes his own body.

This is the post‑mortem of my experimental novel, Clockwork Orpheus (Japanese: 機巧のオルフェウス).

The Architecture

To achieve a consistent yet evolving narrative over ~50,000 characters (about 12 hours of work), I didn't rely on a single long chat context. Instead, I treated the novel as a software project, using NotebookLM as my RAG (Retrieval-Augmented Generation) engine.

The Stack

  • Engine: Gemini 1.5 Pro (via NotebookLM)
  • Context Window: 1M+ tokens (handling all setting docs)
  • Input: 5 separate “Source Files” acting as databases

The Source Code (Context Data)

I uploaded the following five files to NotebookLM. Think of them as the database schema for the story (a code sketch of the equivalent setup follows the list).

  1. Writing Policy – The “Config” file. Crucial: contains strict style rules.
  2. Plot Structure – The skeleton of the story.
  3. Story Overview – World‑building rules to prevent hallucinations.
  4. Character Sheets – Detailed profiles.
  5. Glossary – Technical terms and unique nouns.
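
NotebookLM handles retrieval over these sources automatically, so there is no code to write for this step. But if you wanted to reproduce the setup directly against the Gemini API, the "everything fits in context" approach might look like the sketch below. The file names and the `google-generativeai` calls are my assumptions for illustration, not something NotebookLM exposes.

```python
from pathlib import Path

import google.generativeai as genai

# The five "source files" acting as the story's database schema.
# File names are illustrative; use whatever you uploaded to NotebookLM.
SOURCE_FILES = [
    "writing_policy.md",    # the "Config" file: strict style rules
    "plot_structure.md",    # skeleton of the story
    "story_overview.md",    # world-building rules to prevent hallucinations
    "character_sheets.md",  # detailed character profiles
    "glossary.md",          # technical terms and unique nouns
]

def build_grounding_context(source_dir: str = "sources") -> str:
    """Concatenate all setting docs into one grounding block.

    With a 1M+ token context window there is no need to chunk or
    retrieve selectively; every source fits in a single prompt.
    """
    parts = []
    for name in SOURCE_FILES:
        text = Path(source_dir, name).read_text(encoding="utf-8")
        parts.append(f"## SOURCE: {name}\n{text}")
    return "\n\n".join(parts)

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=build_grounding_context(),
)
```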

The Hack: Dynamic “Writing Policy”

The core trick lies in the “Writing Policy.” I didn’t just tell the AI to “write a story.” I defined specific Sentiment Parameters for each chapter phase; a code sketch of this policy expressed as configuration follows the list.

  • Early chapters:

    “Focus on sensory descriptions. Use metaphors related to heat, pain, and longing. Prioritize the protagonist’s internal emotional monologue.”

  • Mid‑story phases:

    “Reduce sensory adjectives by 50%. Focus on objective facts. Describe events with logical causality rather than emotional reaction.”

  • Final chapter:

    “Eliminate all metaphors. Use short, clipped sentences. Output must be strictly observational, like a system log.”
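
In practice my policy was a plain-text document, but the same instructions can be expressed as data and rendered into a prompt fragment per chapter. A minimal sketch, with parameter names of my own invention:

```python
# Per-chapter "Writing Policy" expressed as configuration rather than prose.
WRITING_POLICY = {
    "early": {
        "sensory_density": "high",
        "metaphor_domains": ["heat", "pain", "longing"],
        "viewpoint": "internal emotional monologue",
        "sentence_style": "long, winding",
    },
    "mid": {
        "sensory_density": "reduced by 50%",
        "metaphor_domains": [],
        "viewpoint": "objective facts, logical causality",
        "sentence_style": "neutral, factual",
    },
    "final": {
        "sensory_density": "none",
        "metaphor_domains": [],
        "viewpoint": "strictly observational, like a system log",
        "sentence_style": "short, clipped",
    },
}

def style_directive(phase: str) -> str:
    """Render the policy for one story phase into a prompt fragment."""
    p = WRITING_POLICY[phase]
    return (
        f"Sensory description level: {p['sensory_density']}. "
        f"Allowed metaphor domains: {', '.join(p['metaphor_domains']) or 'none'}. "
        f"Narrative stance: {p['viewpoint']}. "
        f"Sentence style: {p['sentence_style']}."
    )
```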

The Result

The AI successfully executed this “Textual Gradation.”

Chapter 1 – Full of “pain,” “heat,” and “love.” Long, winding sentences.

The moment of waking always felt like the gasping breath before drowning.
My lungs demanded oxygen, and my heart reluctantly resumed its pulse.
2046, Tokyo. The morning sun filtering through the gap in the blackout curtains looked dirty, like stage lighting meant only to illuminate dust. I reached for the left side of the bed.

Chapter 10 – Prose becomes dry; the protagonist stops “feeling” pain and starts “detecting” damage.

Mid‑layer area “Labyrinth Engine District.” Time elapsed since entry: 14 hours. Rest: None. Complex three‑dimensional structures and intermittent combat are draining resources.
Remaining ammo: 12 rounds average per person. Food: 2 solid bars. Water: 300 ml remaining. Depletion is imminent.

Finale – A chillingly efficient, mechanical text that mirrors the protagonist’s complete loss of humanity.

Layer transfer: Complete. Environmental data: Updated.
Visual information: White. Organic textures completely deleted.
Walls, floor, ceiling—all constituted by white luminescent bodies.

Why This Matters for Developers

This experiment proves that style is just another parameter. By architecting your context (RAG) and treating prompts as dynamic configuration files, you can control the aesthetics of LLM output with engineering precision, rather than treating the model as a black box that produces “average” text.
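Putting the two sketches above together, the per-chapter workflow reduces to one call per chapter: the grounding context stays fixed, and only the style directive is swapped. Again, this is a hand-rolled equivalent of what I did inside NotebookLM, and the chapter briefs are placeholders.

```python
# Assumes `model` and `style_directive` from the earlier sketches.
def write_chapter(model, chapter_brief: str, phase: str) -> str:
    prompt = (
        f"{style_directive(phase)}\n\n"
        f"Write the next chapter following this brief:\n{chapter_brief}"
    )
    return model.generate_content(prompt).text

opening = write_chapter(model, "Protagonist wakes in 2046 Tokyo...", "early")
finale = write_chapter(model, "Layer transfer into the white zone...", "final")
```

The point is not the specific SDK: it is that the style rules live in version-controllable configuration, so the gradation from "early" to "final" is a diff you can review, not a vibe you hope the model keeps.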

Original Source (Japanese)

The Novel (Japanese): https://kakuyomu.jp/works/822139844401715752
