The best improvement I’ve made to my Cursor workflow
Source: Dev.to
tl;dr: let coding agents access runtime output
I’m pretty sure this isn’t novel, and I’m almost certain many people already do some flavour of this, but it’s been the single biggest unlock in my local AI‑assisted development with Cursor, and it’s so simple I want to share it.
If your coding agent can’t see what your program is doing, it can’t debug it
Cursor doesn’t have some magical ability to read everything happening on your machine. Unless you explicitly expose your output, the agent is effectively blind.
When I let it see my program’s output, I found it could solve problems that used to take me 30 minutes in about 30 seconds, freeing up my time for other areas of the startup.
The problem
My local dev setup has three moving parts:
- a Python FastAPI backend
- a React frontend
- a Python FastAPI auth router that talks to WorkOS to handle authentication
All three emit logs, can fail independently, and may require debugging to get back up and running.
Cursor can run commands you ask for, or even suggest useful ones, but it cannot:
- attach to a terminal window you opened yourself
- read real‑time stdout/stderr
- watch a hot‑reloading server
- magically see your browser’s console logs
When the agent tries to “fix” something by killing your running server or client, it destroys your whole hot‑reloading setup, forcing you to restart everything and killing productivity.
The unlock
Coding agents can be more capable than you’d expect, and sometimes the best thing you can do is simply ask whether something you’re wondering about is possible. At one point I asked: “Can you read something inside a screen session?”
To my surprise, Cursor replied:
“If you run screen -X hardcopy, I can read the output.”
I then asked Cursor to check the current screens that were running using screen -ls and fetch the last 50 lines of the server’s output. It did.
This gives the agent something to look at beyond just the code.
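Spelled out, that exchange boils down to two commands. Here is a minimal sketch — the session name is a placeholder (take the real one from screen -ls), and the fallback just keeps the example safe to run if the session doesn’t exist:

```shell
# List running screen sessions (screen -ls can exit non-zero even on
# success, hence the || true).
screen -ls 2>/dev/null || true

# Dump the session's scrollback to a file without interrupting it.
# "runnem-flow-myna-server" is a placeholder session name; the fallback
# creates an empty file if the session is missing, so tail never errors.
screen -S runnem-flow-myna-server -X hardcopy /tmp/backend_logs.txt 2>/dev/null \
  || : > /tmp/backend_logs.txt

# The last 50 lines are what the agent actually reads.
tail -50 /tmp/backend_logs.txt
```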
How I set it up
I use runnem, a tiny tool I built for a different reason entirely: to remember your run setup and let you switch between multiple projects seamlessly.
It runs each service inside its own screen session, which makes this unlock possible. You can use anything else that provides similar access:
- standalone screen
- tmux
- Docker (log files or docker logs)
- logging to a file on disk
The principle is universal.
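For instance, a tmux setup can get the same effect with capture-pane, and a containerised service with docker logs. A sketch — the session/container name “backend” is hypothetical:

```shell
# tmux equivalent of screen's hardcopy: print the last 200 lines of the
# "backend" pane's scrollback into a file the agent can read. The
# || true keeps the example harmless if tmux or the session is absent.
tmux capture-pane -pt backend -S -200 > /tmp/backend_pane.txt 2>/dev/null || true
tail -20 /tmp/backend_pane.txt

# Docker equivalent:
#   docker logs --tail 200 backend > /tmp/backend_pane.txt
```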
With runnem, Cursor can:
- list sessions with screen -ls
- identify the backend and frontend sessions
- dump the relevant session’s output to disk without interrupting it
- analyse logs while keeping everything running and ready to hot‑reload on the next code change
My .cursor/rules snippet
To guide the agent, I added the following to my .cursor/rules file:
# Example command to fetch the last 200 lines of the backend logs
screen -S 94383.runnem-flow-myna-server -X hardcopy /tmp/backend_logs_200.txt && \
tail -200 /tmp/backend_logs_200.txt
- The screen with the -server suffix is the backend.
- The screen with the -app suffix is the frontend.
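Since the agent often needs to cross‑check several services, one natural extension of that snippet is to dump every session in one go. A sketch, assuming the standard screen -ls output format (adjust the awk pattern to your naming scheme):

```shell
# Dump the last 200 lines of every running screen session. awk pulls
# the "pid.name" field out of `screen -ls`; if screen is missing or no
# sessions exist, the loop simply does nothing.
screen -ls 2>/dev/null | awk '/tached/ {print $1}' | while read -r s; do
  out="/tmp/${s##*.}_logs.txt"   # e.g. /tmp/runnem-flow-myna-server_logs.txt
  # Keep going even if one dump fails.
  screen -S "$s" -X hardcopy "$out" && tail -200 "$out" || true
done
```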
Sometimes I still explicitly tell the agent to “check the logs using screen -ls”, and that’s enough for the agent to get into a loop that fixes errors until the server is running again.
Do you need runnem?
No. Use any setup you like.
runnem just makes the workflow very easy because:
- each service has its own named screen
- I run three services across two languages
- the agent needs to cross‑check all of them
- everything stays running with hot‑reload intact
I’ve been wanting an excuse to experiment with MCP, so I might try adding an MCP tool for runnem that lets Cursor talk to it more directly (e.g., exposing “get backend logs” as a single tool call). For now, this simple screen‑based setup has been more than enough.
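In the meantime, you can get most of the “single tool call” ergonomics with a plain wrapper script the agent can invoke. A sketch, with hypothetical names throughout:

```shell
#!/bin/sh
# get_backend_logs.sh -- hypothetical helper: print the last N lines of
# the backend screen session (default 200).
N="${1:-200}"
SESSION="runnem-flow-myna-server"   # placeholder; see `screen -ls`
# Dump the scrollback; fall back to an empty file if the session is
# missing, so the tail below never errors.
screen -S "$SESSION" -X hardcopy "/tmp/${SESSION}.txt" 2>/dev/null \
  || : > "/tmp/${SESSION}.txt"
tail -"$N" "/tmp/${SESSION}.txt"
```

Then a single instruction like “run ./get_backend_logs.sh 100” is all the agent needs.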
The final takeaway
Once your agent can see your program’s output, it goes from:
editing files based on guesses
to something much more powerful:
responding to reality
If you’re running multiple services locally and want the agent to pull its weight:
- Expose the output.
- Give it vision.
It’s a tiny setup change, but it has completely transformed how I work with AI.
Are other coding agents doing this already?
I don’t have the bandwidth to experiment with other coding agents right now, so there may be similar setups built in, or established ways to nudge them into a comparable workflow. If you’ve found a good way to give your agent visibility into browser console logs, I’d love to hear about it in the comments.
A more detailed post can be found here: