Hacktoberfest as a First-Time Maintainer
Introduction
Earlier this year I took over maintenance duties on Cloudinary’s Community Libraries.
A few months later my team was gearing up to participate in DigitalOcean’s annual Hacktoberfest.
Reader, I was worried.
I was still familiarising myself with how the libraries worked and trying to wrap my head around their issue backlogs. I’d never participated in a Hacktoberfest before, and I had read some scary stories from other maintainers about an annual avalanche of spam.
What’s worse than an avalanche? A turbo-avalanche, powered by the recent explosion of AI coding tools. As September drew to a close, I imagined hundreds of AI-written PRs arriving: each hundreds of lines long, generated from prompts that misread the underlying issues, full of code the submitters themselves didn’t understand, and all of it ours to review.
The Good News
It wasn’t that bad. While the incentive structures of Hacktoberfest did generate some spam (and almost all of that spam smelled like AI), our team was able to get ahead of many problems with strong policies, and we could have prevented many more with a bit of additional preparation and planning.
- The worst PRs were about as bad as I expected, but there weren’t as many of them as I feared.
- The best PRs were, dare I say, really good. Hacktoberfest participants tackled some of the issues I’d been putting off for months, helping push the libraries forward.
All in all, participation as a maintainer took a lot of time (and we’re still discussing whether to participate next year), but it also delivered tangible value without overwhelming us. Hopefully it helped some contributors get more comfortable with the mechanics of open source along the way.
If you’re thinking about participating as a maintainer in future years, read on to learn what worked for us — and what didn’t.
What Worked
1. Limit participation to a specific set of existing issues
We created a Hacktoberfest tag and applied it to a curated list of issues. This addressed several concerns:
- Clear mandate – we could quickly close any drive‑by, low‑value PRs (e.g., “improve docs”).
- Focused effort – we spent our Hacktoberfest time reviewing PRs rather than reproducing new issues or defining new features.
- Straightforward fixes – we limited contributions to issues whose fixes would be reasonably simple, allowing for quick reviews.
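If you’re curious what curating by tag can look like in practice, here’s a rough sketch using the GitHub REST API. This isn’t our actual tooling: the repository names are placeholders, and it assumes the label is literally called hacktoberfest.

```python
import os
import requests

# Placeholder repo list; substitute your own community libraries.
REPOS = ["your-org/example-community-lib-js", "your-org/example-community-lib-py"]
TOKEN = os.environ.get("GITHUB_TOKEN")  # optional, but raises the API rate limit

def hacktoberfest_issues(repo):
    """Return open issues carrying the 'hacktoberfest' label for one repo."""
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/issues",
        params={"labels": "hacktoberfest", "state": "open", "per_page": 100},
        headers={"Authorization": f"Bearer {TOKEN}"} if TOKEN else {},
        timeout=30,
    )
    resp.raise_for_status()
    # The issues endpoint also returns pull requests; keep only real issues.
    return [item for item in resp.json() if "pull_request" not in item]

for repo in REPOS:
    for issue in hacktoberfest_issues(repo):
        print(f"{repo}#{issue['number']}: {issue['title']}")
```

Running something like this once a week made it easy for us to see, at a glance, which invitations were still open and which had already been picked up.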
2. Set a quota up‑front
We defined a goal for how many contributions we wanted to accept by the end of the month and tagged that many issues. The reasons:
- Success metric – a quantifiable target let us communicate success or failure. We based the quota on the number of successful submissions we received in 2024.
- Swag planning – we were offering a plush Cloudinary unicorn to contributors of accepted PRs, so we needed to know how many we’d need ahead of time.
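If you want to see how you’re tracking against a quota as the month goes on, a sketch like the one below (again, illustrative only: the quota, the org: qualifier, and the date window are stand-ins) can count merged, labelled PRs via GitHub’s search API.

```python
import requests

QUOTA = 30  # hypothetical target for the month
# Example search window for October; adjust the org and dates to your event.
QUERY = (
    "org:your-org is:pr is:merged "
    "label:hacktoberfest merged:2025-10-01..2025-10-31"
)

resp = requests.get(
    "https://api.github.com/search/issues",
    params={"q": QUERY, "per_page": 1},
    timeout=30,
)
resp.raise_for_status()
merged = resp.json()["total_count"]
print(f"{merged}/{QUOTA} accepted Hacktoberfest PRs so far")
```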
What Didn’t Work
1. Over‑tagging issues
We weren’t stingy enough with the Hacktoberfest tag. Because we set the quota first, we tagged a large number of issues before confirming they were a good fit for Hacktoberfest. The result:
- We spent time reproducing issues, specifying new features, and reviewing complex PRs.
- Review delays grew long, and the tag ended up on issues we understood only vaguely, which came back to bite us later.
2. Misaligned expectations
Knowing we had a quota motivated me to do a full docs review and file a handful of minor issues that would be easy to fix and review. While that was helpful, it also created a situation where:
- I wouldn’t have done the docs review without Hacktoberfest.
- The primary value of Hacktoberfest—introducing newcomers to open‑source mechanics—was somewhat sidelined by the quota‑driven focus.
3. Poorly defined issues
Some tagged issues were poorly defined, leading to embarrassing PRs. Example:
- A contributor submitted a “fix” that was unrelated to the problem. The issue had actually been resolved a year earlier as part of a broader update, but it was never closed. The submitter likely fed the stale issue into an AI tool (e.g., Cursor), which tried its best to solve a solved problem.
4. AI‑generated noise
Many submissions were created with AI assistance. The humans behind these PRs had varying levels of understanding of the code they were submitting:
- When contributors understood the problem and the solution, the process went smoothly.
- When understanding was low, follow‑up conversations to clarify the issue or suggest alternate paths often went nowhere, resulting in closed PRs and wasted time on both sides.
The Positive Surprises
Despite the challenges, the highest‑quality submissions far exceeded my expectations. A handful of contributors delivered well‑thought‑out, clean PRs that helped us move the libraries forward.
Lessons Learned & Recommendations
- Curate the Hacktoberfest tag carefully – only apply it to issues you fully understand and that are truly suitable for quick, low‑risk fixes.
- Set realistic goals – rather than a fixed quota, consider a flexible target based on the number of well‑defined, tag‑eligible issues.
- Leverage the educational aspect – encourage newcomers to learn the open‑source workflow, even if their PRs are small or need iteration.
- Prepare for AI‑generated submissions – have a clear policy for handling AI‑assisted PRs, including guidelines for reviewers to assess intent and correctness.
- Plan swag and rewards after the fact – estimate swag needs based on actual accepted PRs rather than a pre‑set quota.
Closing Thoughts
Participating in Hacktoberfest as a maintainer was a mixed bag of challenges and rewards. With better issue selection, more flexible goals, and a focus on the educational value for newcomers, future Hacktoberfests can be even more beneficial for both maintainers and contributors.
If you’re considering joining as a maintainer, start by defining a clear, narrow scope, communicating expectations early, and embracing the learning opportunity for the community.
Reflection on Hacktoberfest Participation
Some contributors understood the underlying frameworks better than I do, and took the time to suggest thoughtful solutions to nuanced problems that we hadn’t yet been able to tackle. I’m extremely grateful for their time and effort, which was worth far more than a unicorn plushie and a DigitalOcean T‑shirt.
Because a number of the issues turned out to be rather thorny (and because I and the other technical reviewers had more to do in October than review Hacktoberfest PRs), review times stretched to weeks, which I know was frustrating for contributors. In a couple of cases, we were not able to either accept or reject PRs before the November 15 deadline. We sent those contributors a unicorn anyway, but if the real purpose of Hacktoberfest is to familiarize folks with the mechanics of open source, learning that things are surprisingly complicated and your code might never land isn’t a great lesson. Again, I wish we’d been a little more careful about which issues we invited people to solve.
Outcomes for Cloudinary
- Our community library docs are in better shape now.
- Dozens of small fixes that would have been easy to keep putting off are now done.
- We made real progress on a handful of meaty issues that I’d been postponing for months.
- Hopefully, participants are more familiar with Cloudinary and our SDKs, and are more likely to use us as they continue to build and grow their careers.
At the same time, the slow (and in a couple of cases, unresolved) reviews may have had the opposite effect, driving participants away. The cost in time to Cloudinary was real, as we prioritized Hacktoberfest work over other projects in October and November.
Personal Takeaway
Personally, I’m definitely glad I did it, once. Whether we as a company decide to do it again next year remains to be seen.
Happy hacking!