Pull Request Reviews
Should you have PR reviews?
Absolutely.
Two‑tier review process
In Software Engineering at Google, the authors describe a two‑tier review process:
- Language review – focuses on the use of the language, ensuring consistency and shared semantics.
- Feature implementation review – looks at the correctness and design of the feature.
Sometimes the same person can cover both tiers (e.g., an engineer who is also a member of the Python approvers group).
The goal is to keep language usage consistent across the codebase, complementing automated linting tools.
Human review challenges
A Google engineer once reported that a PR was held up for a month: requests for feedback during the design phase went largely unanswered, yet once the PR was open, many reviewers suddenly had very specific opinions. This highlights how timing and mismatched expectations can create friction.
Automate the hard parts that humans struggle with
Lint, Prettify, etc.
If you’re on GitHub, you can set up GitHub Actions for linting, formatting (e.g., Prettier), and type checking (for dynamically typed languages). A half‑day effort is enough to:
- Add the actions,
- Make them required checks,
- Protect the main branch.
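As a concrete starting point, here is a minimal workflow sketch. It assumes a Python project and picks ruff, black, and mypy purely as illustrative tools; swap in whatever linter, formatter, and type checker your stack uses (e.g., ESLint and Prettier for a TypeScript codebase).

```yaml
# Hypothetical .github/workflows/checks.yml for a Python project.
# Tool choices (ruff, black, mypy) are illustrative assumptions.
name: checks

on:
  pull_request:

jobs:
  lint-format-types:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff black mypy
      - run: ruff check .      # linting
      - run: black --check .   # formatting: fail if files would be reformatted
      - run: mypy .            # static type checking
```

Once this job runs on pull requests, mark it as a required status check in the branch protection rules for main; a red check then blocks merging without any human involvement.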
You may also adopt a style guide for decisions that require judgment (e.g., when to avoid list comprehensions in Python or when to split a function even if it technically does “one thing”). Documenting these conventions prevents repetitive discussions in every PR.
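For instance, a style guide entry on list comprehensions might draw the line with a short illustration like this one (a made-up sketch, not taken from any particular guide):

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    id: int
    total: float

@dataclass
class User:
    email: str
    is_active: bool
    orders: list[Order] = field(default_factory=list)

users = [
    User("a@example.com", True, [Order(1, 250.0)]),
    User("b@example.com", False, [Order(2, 80.0)]),
]

# A comprehension is fine while it stays simple and flat:
active_emails = [user.email for user in users if user.is_active]

# Once it nests and carries multiple conditions, a plain loop is often
# kinder to the next reader, even though a comprehension would "work":
large_order_pairs = []
for user in users:
    if not user.is_active:
        continue
    for order in user.orders:
        if order.total > 100:
            large_order_pairs.append((user.email, order.id))
```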
Tests and test coverage
Add PR checks for:
- Test execution – PRs must pass all tests.
- Coverage metrics – track how much of the codebase is exercised by tests.
Initially, fail PRs only for failing tests and collect coverage data. Later, introduce a low coverage threshold and raise it gradually. This automation reduces the need for manual review of obvious correctness issues.
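Sticking with the assumed Python setup, a test job for this phased approach might look like the following sketch (pytest with the pytest-cov plugin; the threshold value is arbitrary):

```yaml
# Hypothetical .github/workflows/tests.yml, assuming pytest + pytest-cov.
name: tests

on:
  pull_request:

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt pytest pytest-cov
      # Phase 1: fail the PR only on failing tests; coverage is just reported.
      - run: pytest --cov=. --cov-report=term-missing
      # Phase 2 (later): enforce a modest threshold and raise it over time.
      # - run: pytest --cov=. --cov-fail-under=60
```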
What manual review shouldn’t be
In many organizations, the lack of automated tooling leads to a rule like “the branch is protected; someone must approve the PR.” Setting up the automation often takes just a few days, but teams may skip it due to unfamiliarity or a desire to ship quickly. Adding a human gate in place of automation is the opposite of speeding up delivery.
A manual review should not be a catch‑all for everything that could have been automated. If you can automate a check, do it; code contributions are the lifeblood of your application, so prioritize flow‑enhancing automation.
What a manual review should focus on
Manual reviews belong to the realm of intangibles—they are an opportunity to act as a thinking partner for the author. As Simon (a former colleague) put it:
“You’re offering context, experience, and advice to help the author get to a great answer… it’s not about correctness but about guiding the author through trade‑offs they may not have considered, while empowering them to make the final call.”
This aligns with Dawna Markova’s concept of “Thinking Partners” in Collaborative Intelligence.
Guiding questions for reviewers
- Does the change reflect the intended trade‑offs?
- Is the code readable, maintainable, and friendly for future contributors?
- Does it strike the right balance between complexity and correctness?
These discussions are exchanges, not formal change requests. They help the author confirm or re‑evaluate their decisions.
When reviewers bring historical context—knowledge of a subsystem, user feedback, or support experience—the review becomes a powerful knowledge‑sharing moment. Conversely, a reviewer lacking business context can surface hidden assumptions (e.g., “we always did it this way”) and prompt the author to reconsider.
The broader impact
When done deliberately, PR reviews become a medium for:
- Knowledge sharing,
- Up‑skilling the entire organization,
- Building relationships and decision‑making muscle.
They shift from gatekeeping to collaborative improvement. Your experience may differ, and that’s valuable—feel free to share your perspective. The key is to treat PR reviews as a deliberate, strategic practice rather than an afterthought.