In the AI Era, Code Is Cheap. Reputation Isn’t.
Source: Dev.to
In the era of AI it’s easier than ever to be an Open Source contributor!
But, at the same time, and quite paradoxically, it’s harder than ever.
Why? Because it’s now mechanically easier, but reputationally harder.
Previously you competed against other incredibly smart volunteers, employed developers contributing their expertise in their free time, or newcomers.
Now you compete against all of that and an army of AI agents generating pull requests, issues, refactors, and vulnerability reports faster than any human can.
It’s tempting. Using an LLM can dramatically reduce the cognitive work required to understand a codebase and to produce a valuable change request. Things that were previously “very hard” are now a few prompts away, lowering the entry barrier for contributions—great! But it also creates a massive increase in volume, which is flooding repositories.
What’s the problem?
- Maintainer capacity did not increase
- Review cost did not decrease
These two truths have already forced maintainers to resort to drastic measures, as happened with curl.
How do you become a valuable Open Source contributor in the era of infinite code generation?
Here are a few tips learned from dealing with a large volume of contributions.
1. Think. Then think again.
AI can produce great code, but it can also produce the worst code ever seen by humanity. The outcome depends heavily on:
- The prompt you use
- The quality of the LLM
- The context you provide
- Your own coding abilities (the most important factor)
Why do you need to know how to code before using LLMs to contribute?
Because an LLM cannot know everything.
Example – a PR that attempted to change this:
```js
} catch (error) {
  console.error('Some error:', error);
}
```
to this:
```js
} catch (error) {
  logger.error('Some error:', error);
}
```
At first glance the change looks valid: we want to use logger instead of console.
The problem: the LLM didn’t know (and didn’t check) that we recently changed the typings on our logger instances, rendering the change invalid.
A valid change would have looked like:
```js
} catch (err) {
  logger.error({ msg: 'Some error', err });
}
```
Good LLMs (Claude, Codex, etc.) might have caught the issue, but others would not—unless you asked them. How do you ask something you don’t know exists? That’s where your coding abilities come into play.
Key point: You cannot prompt what you don’t know exists.
A contributor who trusts the LLM blindly will miss type-checking errors, ignore project conventions, and overlook caveats. A contributor who knows how to code, by contrast, can treat the LLM as a powerful mechanical aid, freeing mental bandwidth for the real work: thinking and reviewing. This protects your reputation, because maintainers notice a pattern of low-signal PRs and may stop reviewing them (or worse). LLMs make contributing faster, but they don't replace your brain unless you let them.
2. Respect volume. Review is not free.
You’re happy to contribute, and we’re happy you want to help. Open‑source work is a noble endeavor that makes repositories safer and improves your skills—a win‑win!
But maintainers have:
- Limited capacity
- Limited time
- Limited patience
Consider these realities:
- The cost of opening a PR is now near zero.
- The cost of reviewing it is not.
This asymmetry creates the real tension of the AI era.
If you:
- Comment “please review” repeatedly,
- Ping maintainers aggressively,
- Demand merging timelines,
you increase the cognitive cost of your contribution—even if the PR is valid.
Harsh truth: Maintainers are human. If interacting with you feels expensive, they may subconsciously deprioritize your work, resulting in fewer reviews, merges, and release‑note mentions.
When you use LLMs or agents, you can generate dozens or hundreds of PRs quickly. If maintainers can’t review them all, many will simply be ignored. That’s effectively a Denial of Service (DoS): you’re sending more requests than the maintainers can process.
3. Quality that doesn’t cut corners (pls don’t sue me, Wendy’s)
Maintainer capacity is low. We closed around 700 issues (stale, invalid, fixed) in the past month, while receiving 300 new issues. Some were valid, some were not. The problem is we can’t know beforehand; we have to check them all. You cannot trust an LLM to reliably filter “slop” because the LLM itself may be sloppily trained.
Related link: HackerOne Curl report – an example of maintainers dealing with AI‑generated vulnerability reports.
As you can see, maintainers won't be happy when their time is burned verifying reports that turn out to be machine-generated noise.
Respect the Maintainers’ Time
The links above and that beautiful list may look legit—until they don’t.
Realising something is “slop” requires manual checking, which wastes time and effort.
Be human. Respect the maintainers’ time and work.
How? Help them reduce the cost of reviewing your code.
Before Submitting
- Did you reproduce the issue yourself?
- Did you run CI locally?
- Did you include screenshots for UI changes?
- Did you provide benchmarks for performance claims?
- Did you include a PoC for security reports?
- Did you check if a feature request already exists?
If you claim a performance improvement, show numbers.
If you fix a bug, explain the root cause.
If you propose a feature, ask first.
High‑quality contributions get reviewed faster.
Low‑signal contributions accumulate and slow everyone down.
In the AI era, reputation compounds. When maintainers can trust you, your PRs are reviewed faster and you’ll feel happier—a win‑win!
4. Bundle Trivial Changes
Typos and small mistakes happen. For example:
Before
```js
if (!sub) {
  throw new Error('subsciption not found');
}
```
After
```js
if (!sub) {
  throw new Error('subscription not found');
}
```
Do you spot the bug? Yep—a missing r in subscription. The contribution is valid; we want correct English.
Does it deserve a PR? Not by itself—maybe if you combine a few more.
If you find 10 misspellings, what’s easier for maintainers?
- 1 PR with 10 changes
- 10 PRs with 1 change each
If you were the maintainer, what would you prefer?
Every PR has overhead: CI runs, notifications, review time, merge time, mental context switches, etc. Reduce that overhead by batching trivial work.
AI can generate an enormous list of fixes. If you tell an agent like Claude or “Clawd” “Hey, for each issue create a PR,” you’ll quickly become the most hated contributor on the project.
Don’t do that. Be mindful. Be human.
LLMs vs. Human Work
LLMs are fun, but human work is better.
We can’t fight AI—it’s here to stay. What we can control is how we use it.
- GSoC values human contributions; that’s the program’s purpose: turning more humans into open‑source contributors.
- We need a new generation of maintainers who use LLMs wisely.
- LLMs can help you do amazing things, but they’ll never replace your unique human creativity.
It’s easier to contribute, but harder to stand out.
- Reputation matters.
- Trust is earned.
- You build it by doing good work.
Note: This is personal experience. Some projects have different rules; some might even welcome 1,000 PRs a day (unlikely, but who knows).
Bottom line: AI isn’t human, but maintainers and contributors are. Remember that.
Welcome to Open Source!
Further Reading
- AI is destroying Open Source, and it’s not even good yet – Jeff Geerling
- Over the weekend Ars Technica retracted an article because the AI a writer used hallucinated quotes from an open‑source library maintainer.
- The maintainer, Scott Shambaugh, was harassed by an AI agent over “slop” code that wasn’t merged.
- The bot was likely running on a local “agentic AI” instance (perhaps OpenClaw).
- The creator of OpenClaw was recently hired by OpenAI to “work on bringing agents to everyone.”