Level Up Code Quality with an AI Assistant
Source: Dev.to
Current State
This post follows a real codebase that has not been maintained for over five years but still powers a product in active use. It is business-critical but lacked the necessary safety nets. Let's walk through the journey (prompts included) of improving the code quality of this repository, one prompt at a time.
The project is a Django backend application that exposes APIs. A quick overview shows that there are tests and some documentation, but there is no consistent way to run and test the application.
The Journey
I am assuming you are running these commands using Claude Code (with Claude Sonnet 4 in most cases). The same approach works with any coding assistant; results will vary based on the model, prompts, and the codebase.
Setting up Basic Documentation and Some Automation
If you are using a tool like Claude Code, run /init in your repository and you will get a significant part of this documentation.
Can you analyse the code and write up documentation in README.md that
clearly summarises how to set up, run, test and lint the application.
Please make sure the file is concise and does not repeat itself.
Write it like technical documentation. Short and sweet.
Next, start setting up some automation (e.g., a just file) to make the project easier to use. This will take a couple of attempts to get right, but here is a prompt you can start with:
Please write up a just file. I would like the following commands:
`just setup` – set up all the dependencies of the project
`just run` – start up the application including any dependencies
`just test` – run all tests
If you require clarifications, please ask questions.
Think hard about what other requirements I need to fulfill.
Be critical and question everything.
Do not make code changes till you are clear on what needs to be done.
This will give you a base structure that you can modify quickly. If your README.md already describes a preferred way to run the application (locally vs. Docker), the just file will automatically use it; otherwise you’ll need to provide clarification.
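A generated just file for a Django project might look roughly like the sketch below. The recipe bodies are assumptions (virtualenv layout, `manage.py` commands); adapt them to how your project is actually set up and run:

```just
# Hypothetical justfile for a Django project; adjust paths and
# commands to match your actual setup.

setup:
    python -m venv .venv
    .venv/bin/pip install -r requirements.txt -r requirements-dev.txt
    .venv/bin/pre-commit install

run:
    .venv/bin/python manage.py migrate
    .venv/bin/python manage.py runserver

test:
    .venv/bin/python manage.py test
```

Keeping each recipe to one or two commands makes it easy for the assistant (and teammates) to extend later.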
Setting up pre‑commit for Early Feedback
Let’s start small and build on it.
Please set up pre‑commit with a single task to run all tests on every push.
Update the just script to ensure pre‑commit hooks are installed locally
during the setup process.
Keeping the context small and the tasks focused helps you move faster.
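The resulting `.pre-commit-config.yaml` could look something like this minimal sketch. The test command is an assumption, and `stages: [pre-push]` (the stage name used by recent pre-commit versions) keeps the slow test suite off every commit:

```yaml
# Sketch: run the full test suite before every push, not every commit.
repos:
  - repo: local
    hooks:
      - id: tests
        name: run test suite
        entry: python manage.py test   # assumed test command
        language: system
        pass_filenames: false
        stages: [pre-push]
```

Note that push-stage hooks only fire if they are installed with `pre-commit install --hook-type pre-push`, which is exactly the kind of detail the `just setup` recipe should handle.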
Curating Code‑Quality Tools
First, find good tools, create a plan, then execute it. Switch Claude Code to Plan mode (Shift + Tab twice) and ask:
What's a good tool to check the complexity of the Python code in this
repository and lint it to provide the team feedback as a pre‑commit hook?
The assistant will suggest a set of tools. In a large, tech‑debt‑laden codebase, you won’t get a green build immediately. Refine the request:
The list of tools you suggested sounds good.
The codebase currently has a very large number of violations.
I want the ability to incrementally improve things with every commit.
How do we achieve this?
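One common answer is to "ratchet": start with deliberately generous limits and exclusions so the build goes green today, then tighten them commit by commit. A hypothetical starting point for flake8 (all values are assumptions):

```ini
# .flake8 — a deliberately loose baseline to get a green build first.
[flake8]
max-line-length = 120
max-complexity = 20              # lower this threshold over time
extend-exclude = .venv,migrations
# Silence the noisiest legacy violations for now; delete codes from
# this list as the debt is paid down.
extend-ignore = F401,E402
```

Each `extend-ignore` entry removed (or threshold lowered) becomes a small, reviewable commit of its own.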
Creating a Plan
After iterating on the previous prompt, you’ll receive a concrete plan. Before the assistant executes it, create a “save state” (think of it as a video‑game save) so you can roll back if something goes wrong. This also clears the context because everything is dumped to markdown files on disk.
Can you create a plan that is executable in steps?
Write that plan to `docs/code-quality-improvement`.
Try to use multiple background agents if it helps speed up this process.
Give the assistant a few minutes to analyse the code. In my case, it created a set of plan files, and the generated README.md notes that “tasks within the same phase can be executed in parallel by multiple Claude Code assistants, as long as prerequisites are satisfied.”
Overview
You are ready to hit /clear and clear out the context window.
Plan as tasks
- Phase 1 – sets up the basic tools
- Phase 2 – configures them
- Phase 3 – focuses on integration and automation
- Phase 4 – adds monitoring and improves code quality
Before executing the plan, commit it under docs/code-quality-improvement so you can track any changes made to it. While the plan is being executed, do not check in the edits the assistant makes to the plan files; you can drop the plan entirely at the end of the process.
If you want to keep the plan around as an artifact, you’ll need to ask Claude Code to use relative paths (it defaults to absolute paths when asking for files to be updated in the plan).
Executing the Plan
I would like to improve code quality and I have come up with a plan to do
so under `docs/code-quality-improvement`.
Can you analyse the plan and start executing it? The `README.md` has a
quick‑start section which explains how to execute the different phases of
the plan. As you execute the plan, mark tasks as done to track state.
Note: Claude Code will add dependencies to `requirements-dev.txt` and try to run things without installing them. It may also add non‑existent dependencies.
Stop the execution (press Esc) and use the following prompt to course‑correct:
For every pip dependency you add to `requirements-dev.txt`, please run
`pip install`.
Before adding a dependency to the dependency file, please check if it is
available on pip.
After Phase 1 & Phase 2
The new configuration files are created and ready to be committed.
When the quality gates are added in Phase 3, run the command once to test that everything works and create another commit. After this, prompt Claude Code to integrate the lint steps into a simplified developer experience:
Please add `just lint` as a command to run all quality checks.
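Since pre-commit already knows about every configured check, the new recipe can simply delegate to it. A sketch (your hook setup may differ):

```just
# Hypothetical recipe: run every configured quality check on all files
lint:
    pre-commit run --all-files
```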
Test the new lint command, then commit and ask Claude Code to proceed to Phase 4.
You might see Claude Code doubt a plan that it has created. The system is functional, but if you prefer more advanced checks, request it to push on with Phase 4 implementation.
Result After Phase 4
The repository now checks for code quality every time a developer pushes code:
- Pre‑commit hooks run linting before pushes.
- Quality checks fail if the changed files contain:
- Unformatted code
- Imports in the wrong order
- flake8 lint issues
- Functions with high cyclomatic complexity
Only the files being touched are checked; we told Claude there is existing debt to be reduced, so a repository‑wide run would not pass by default.
Fixing Existing Debt
Tools like isort can highlight and fix many issues automatically. On most codebases this will touch almost every file. Issues that cannot be fixed automatically (e.g., wildcard imports) must be fixed manually.
Cost tip: If you have a large number of issues, using Claude Code may become expensive (potentially > $10 for a decent‑sized codebase). Consider switching to GitHub Copilot’s agent to reduce costs.
Suggested workflow
- Ask your coding assistant to run the lint command and fix the issues.
- If the assistant stops after a couple of attempts, tell it to keep running the task until there are no linting errors left.
- If your context file (`CLAUDE.md`) does not describe how to lint, be explicit and provide the exact command.
What Is Left?
The gradual-tightening task created a command that analyses the code and becomes progressively stricter. It can be run manually or automatically in a CI pipeline. One of its parameters, `max-complexity`, defaults to 20 and will be reduced over time.
Similarly, the complexity‑check tasks start with a lower bar and should be tightened periodically to raise the quality standards of the repository.
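The tightening logic itself is simple. A minimal sketch of the ratchet, assuming the check reports a violation count (the function name and floor value are my own, not from the generated plan):

```python
def next_max_complexity(current: int, violations: int, floor: int = 10) -> int:
    """Hypothetical ratchet: once the codebase is clean at the current
    max-complexity threshold, tighten it by one; never drop below floor."""
    if violations == 0 and current > floor:
        return current - 1
    return current

# A clean run at 20 lowers the bar; remaining violations hold it steady.
print(next_max_complexity(20, violations=0))  # → 19
print(next_max_complexity(20, violations=3))  # → 20
```

Running this in CI and committing the updated threshold keeps the standard rising without ever breaking the build for untouched code.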
To a large extent, though, the last mile has to be walked by all of our teammates.
We now have a strong feedback mechanism for bad code that will fail the pipeline and stop code from being committed or pushed.
The last bit requires team culture to be built. On one of my teams, we had a soft check in every retro to see if every member had made the codebase a little bit better in a sprint. A sprint is 10 days, and “a little bit” can include refactoring a tiny 2–3‑line function and making it better. The bar is really low, but the social pressure of wanting to make things better motivated all of us to drive positive change.
Having a high‑quality codebase with a good developer experience is not a pipe dream, and making it a reality is easier than ever with AI coding assistants like Claude Code or Copilot.
What have you been able to improve recently? 😃


