From 200K to 1M: How Claude Opus 4.6 Changed My AI Development Workflow Overnight

Published: February 5, 2026 at 07:15 PM EST
7 min read
Source: Dev.to

A follow‑up to The AI Development Workflow I Actually Use

I wrote about my AI development workflow a couple of weeks ago. Task Master for structured tasks, Context7 for current docs, hand‑over documents between fresh chats, multiple AI perspectives before coding. That workflow shipped working software.

Today, a significant part of that workflow became optional.

Claude Opus 4.6 launched in Cursor on February 5, 2026, with a 1 million‑token context window. I’d been using Opus 4.5 with its 200 K‑token limit for months. The jump to 1 M isn’t an incremental improvement—it changes what’s possible in a single conversation.

Below is what happened when I tested it on a real project.


The Project: ironPad

ironPad is a local‑first, file‑based project‑management system I’ve been building with AI. It’s a real application, not a demo.

  • Backend: Rust, Axum 0.8, Tokio, git2, notify (file watching)
  • Frontend: Vue 3, Vite, TypeScript, Pinia, Milkdown (WYSIWYG editor)
  • Data: plain Markdown files with YAML front‑matter
  • Real‑time: WebSocket sync between UI and filesystem

The codebase has ≈ 80 files across backend and frontend—large enough to exceed a 200 K‑token context.


The Old Way: 200 K Context

With Opus 4.5 (200 K tokens) my workflow looked like this:

  1. Break big features into 3‑5 tasks – the AI can only hold a few files at once.
  2. Write hand‑over documents between each chat – so the next session knows what happened.
  3. Carefully select which files to show – can’t load everything, so I’d pick the 3‑5 most relevant files.
  4. Repeat context‑setting every session – paste the hand‑over, re‑explain the architecture, point to the right files.

It worked; I shipped features. But the hand‑over system added friction—a workaround forced by the context limit.

[Image: AI context]


The New Way: 1 M Context

Today I opened a fresh chat with Opus 4.6 and said:

“Load the entire codebase into your context and analyze it.”

That’s it: no selecting files, no hand‑over, no preamble. The AI proceeded to:

  • List the entire project structure – every directory, every file.
  • Read every source file – all Rust backend code, all Vue components, stores, configs, docs.
  • Hold all of it simultaneously – ~80 files, thousands of lines, two languages, multiple frameworks.

Then I asked:

“Are there any bugs or improvements we should make?”

What It Found

The AI identified 16 issues across the whole codebase, including deep, cross‑file bugs that required understanding how multiple components interact.

Real bugs

  1. Auto‑commit silently broken – a background task checks a pending_changes flag, but nothing ever sets it to true, so auto‑commits never fire. Spotting it requires reading main.rs, git.rs, and every route handler (see the sketch after this list).
  2. JavaScript operator‑precedence bug – remote.value?.ahead ?? 0 > 0 evaluates 0 > 0 first, making the push/pull buttons always show the wrong state.
  3. Port‑binding race condition – the server checks whether a port is available, drops the connection, then tries to bind again. Another process could grab the port in between.
  4. Own saves trigger “external edit” dialogs – only one of eight write paths calls mark_file_saved(). The file watcher detects the app’s own saves and pops up “File changed externally. Reload?” for task and project saves.
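
To make bug #1 concrete, here is a minimal Rust sketch of the pattern, assuming a Tokio runtime. The names AutoCommitState, pending_changes, and mark_changed() come from the article; the rest is my reconstruction for illustration, not ironPad’s actual code.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;

// Shared state created in main.rs and handed to a background task.
struct AutoCommitState {
    pending_changes: AtomicBool,
}

impl AutoCommitState {
    // Lives in git.rs, but no route handler ever calls it,
    // so pending_changes stays false forever.
    #[allow(dead_code)]
    fn mark_changed(&self) {
        self.pending_changes.store(true, Ordering::SeqCst);
    }
}

// Background loop: the check itself is fine; the bug is that the flag
// is never set on any write path, so the commit branch never runs.
async fn auto_commit_loop(state: Arc<AutoCommitState>) {
    let mut tick = tokio::time::interval(Duration::from_secs(60));
    loop {
        tick.tick().await;
        if state.pending_changes.swap(false, Ordering::SeqCst) {
            // commit_all() would be called here; unreachable in practice.
        }
    }
}
```

Wiring mark_changed() into every write path would be one fix; as described under “What It Fixed”, the fix chosen was simpler: just try committing every 60 seconds, since commit_all() already handles “no changes” gracefully.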

Architectural improvements

  • Non‑atomic writes risking data corruption in three route files.
  • confirm() blocking the UI thread.
  • WebSocket reconnect using a fixed delay instead of exponential backoff.
  • 120 lines of duplicated task‑parsing logic.
  • Missing CORS middleware.
  • No path‑traversal validation on asset endpoints (see the sketch after this list).
  • Debug console.log left in production code.
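
For the path‑traversal item above, here is a hedged Rust sketch of what validation on an asset endpoint could look like. The function name resolve_asset and its shape are assumptions for illustration, not ironPad’s actual code.

```rust
use std::path::{Path, PathBuf};

// Resolve a user-supplied asset path against the assets root and reject
// anything that escapes it (e.g. "../../etc/passwd").
fn resolve_asset(assets_root: &Path, requested: &str) -> Option<PathBuf> {
    let candidate = assets_root.join(requested);
    // canonicalize() resolves symlinks and ".." components; it also fails
    // when the file does not exist, which is a fine reason to reject.
    let resolved = candidate.canonicalize().ok()?;
    let root = assets_root.canonicalize().ok()?;
    if resolved.starts_with(&root) {
        Some(resolved)
    } else {
        None // the request tried to walk out of the assets directory
    }
}
```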

What It Fixed

I said:

“Can you fix all of these please?”

In a single session the AI:

  • Rewrote the auto‑commit system to simply try committing every 60 seconds (the existing commit_all() already handles “no changes” gracefully).
  • Fixed the port‑binding by returning the TcpListener directly instead of dropping and rebinding.
  • Made atomic_write() public and switched all write paths to use it (which also solved the mark_file_saved() problem automatically) – see the sketch after this list.
  • Added front‑matter helper functions and deduplicated the task‑parsing code.
  • Replaced the blocking confirm() with a non‑blocking notification banner.
  • Added CORS, path validation, exponential backoff for WebSocket reconnects.
  • Fixed the operator‑precedence bug.
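
Two of these backend fixes are easy to illustrate in isolation. The sketch below is a reconstruction under stated assumptions, not ironPad’s actual code: atomic_write() writes to a temporary file and renames it over the target (rename is atomic within a filesystem, so the file watcher never sees a half‑written file), and the port helper returns the bound listener instead of dropping it and rebinding. The real server presumably uses Tokio’s async TcpListener; std’s blocking one keeps the sketch self‑contained.

```rust
use std::fs;
use std::io::Write;
use std::net::TcpListener;
use std::path::Path;

// Write the whole file to a temp path, flush it, then atomically rename it
// over the target so readers (and the file watcher) never see a partial file.
pub fn atomic_write(path: &Path, contents: &str) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    {
        let mut file = fs::File::create(&tmp)?;
        file.write_all(contents.as_bytes())?;
        file.sync_all()?; // make sure the bytes hit disk before the rename
    }
    fs::rename(&tmp, path)
}

// Bind once and hand the listener back, instead of probing the port,
// dropping the socket, and binding again (the race flagged above).
pub fn bind_first_free_port(start: u16, end: u16) -> std::io::Result<TcpListener> {
    for port in start..=end {
        if let Ok(listener) = TcpListener::bind(("127.0.0.1", port)) {
            return Ok(listener);
        }
    }
    Err(std::io::Error::new(
        std::io::ErrorKind::AddrInUse,
        "no free port in the configured range",
    ))
}
```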

[Image: Bug‑fixes graph]

Result: cargo check passes. Zero lint errors on the frontend. 14 issues fixed; 2 intentionally deferred (a large library migration and a minor constant duplication across files).

All of this was done in a single conversation—something that would have required dozens of hand‑overs with the old 200 K‑token limit.

What Actually Changed

Before: The Handover Tax

With a 200 K‑token context, every larger task or change had to be split into separate sessions, and each split carried overhead: writing the hand‑over, re‑loading context, re‑explaining the architecture. That overhead was the cost of the constraint. A good hand‑over system made it manageable, but it was never free.

After: Direct Work

With a 1 M token context, the full code‑base audit looked like this:

Time for entire audit + fixes:
  Loading codebase:    ~2 min (AI reads all files)
  Analysis:            ~3 min (AI identifies 16 issues)
  Fixing all issues:   ~15 min (AI applies all fixes)
  Verification:        ~1 min (cargo check + lint)

  Total:               ~20 min
  Overhead:            ~0 min

The same work with a 200 K context would have required 5+ separate sessions, each needing its own hand‑over and limited to the files it could see at once. Some cross‑file bugs (like the auto‑commit issue) might never have been found because no single session would have had both main.rs and git.rs and all the route handlers in context simultaneously.


Does This Kill the Handover Workflow?

No. It just changes when you need it.

Still valuable

  • Collaborating with someone who needs to understand what you’ve done
  • Documenting decisions for your future self
  • Projects larger than 1 M tokens

No longer necessary

  • Splitting a feature into artificial micro‑tasks just to fit context
  • Writing hand‑overs between closely related tasks
  • Carefully curating which files the AI can see
  • Re‑explaining architecture every session

The hand‑over system moves from “required for every task” to “useful for session boundaries.” That’s a big shift.


The Broader Pattern

What I’ve noticed building ironPad is that each AI capability jump doesn’t just make existing tasks faster—it enables tasks that weren’t practical before.

  • Full code‑base audit wasn’t practical at 200 K. You could audit individual files, but finding bugs that span the entire system required a human to manually trace connections across files and then describe them to the AI. Now the AI sees everything.

  • Cross‑cutting refactors weren’t practical at 200 K. Changing how atomic writes work across six files, while also updating the file‑watcher integration and ensuring front‑matter helpers are available everywhere, is a single coherent change when you can see all the files. At 200 K it would be 3‑4 sessions with a risk of inconsistency.

  • Architecture‑level reasoning wasn’t practical at 200 K. The auto‑commit bug is a perfect example: AutoCommitState was created in main.rs, the mark_changed() method existed in git.rs, but no route handler had access to it. Understanding the full request flow from HTTP handler through the service layer is trivial when the whole code‑base is loaded.


What’s Next for ironPad

The project is open source; I released it 30 minutes ago on GitHub.

We’re also going open method—not just the code, but the process: how every feature was built with AI, what prompts worked, what didn’t, and how the workflow evolved from 200 K to 1 M context.

Because the tools keep getting better, but the process of using them well still matters. A 1 M context window doesn’t help if you don’t know what to ask for.


Try It Yourself

Core of what worked today:

  1. Load everything. Don’t curate files. Let the AI see the whole picture.
  2. Ask open questions first. “What’s wrong?” before “Fix this specific thing.” The AI found bugs I didn’t know existed.
  3. Let it work in batches. The AI fixed 14 issues in one session because it could see all the dependencies between them.
  4. Verify mechanically. cargo check and lint tools confirm correctness faster than reading every line.
  5. Keep your structured workflow for session boundaries. Hand‑overs and PRDs still matter for collaboration, documentation, and projects larger than 1 M tokens; they just aren’t needed between every micro‑task anymore.

The context window went from a limitation you worked around to a space you fill with your entire project. That changes the game.

ironPad is being built in the open. Follow the project on GitHub:

https://github.com/OlaProeis/ironPad
