From Sausage to Omelette
Nobody really wants to see how sausage is made, do they? The process is messy, unglamorous, and usually skipped over in the final presentation (and oh boy, are we glad of it! 🤮🤮🤮)… but if you skip the “sausage‑making”, you never get the “full omelette”!
mmm, sausage…
…But we’re not talking about breakfast foods today, we’re talking about technology! In our world, most people would prefer to focus on shiny new features or huge cost‑savings stories rather than attend to the gritty details like testing, refactoring, and tooling. Without these “sausage‑making” activities, the full DevOps “meal” (with all the acceleration and improvement we crave) is always just out of reach.
My team recently went through this firsthand, so today we’ll explore an instance of our “sausage‑making” journey to better code quality and then show how it directly maps to the “omelette” of DevOps outcomes.
How the Sausage Was Made 🐖
Like many teams, we weren’t very disciplined at first. Oh, don’t get me wrong, we were checking all the required boxes:
- Our code was stored in the company’s official version‑control system.
- Pull requests were reviewed and approved according to governance standards.
- Our changes were deployed by CI/CD.
- We had dependency‑analysis scans in our pipeline.
In company terms, our DevOps practices were on track… even a little bit progressive? BUT a closer look revealed chinks in the armor:
- Most PR reviews functioned only as a way to follow company rules. Approvals for several hundred lines of changes were sometimes granted within half a minute (old‑school “pencil‑whipping” ✏️).
- Testing? The little bit that existed wasn’t integrated into the pipeline; it ran only if you remembered to run it.
- Quality‑analysis tools? Ehhhh…
We were moving fast, but stuff broke sometimes. Every code push became a loop:
1. Get PR approval and push.
2. Uh‑oh, why did the deployment break?
3. Fix errors, open a new PR.
4. 🤞 GOTO 1
It was clear that our goal wasn’t moving faster but moving more sustainably. To get there, we needed to make some sausage first. We elected to make code quality a priority. Here’s what that meant for us.
Centralize Our Tooling 🎯
We started by researching a minimum set of tools that the whole team would adopt.
IMPORTANT – This isn’t about heavy‑handed governance of workstations; it’s about reaching a team agreement on a small handful of extensions and tools that become “our stuff”.
How we decided which tools made the cut
- The tool must be well‑accepted in the industry.
- If it’s open source, it must not be a 🧟 “zombie project” (i.e., it has recent activity).
- It must be implementable both in CI and locally in a developer’s editor (see the sketch after this list).
- It should require almost zero extra work for developers. “Don’t make seatbelts out of a cactus 🌵.”
- It must be customizable so team‑specific overrides can be reflected in the guardrails.
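To make that “CI and local” criterion concrete, here’s a minimal sketch of the idea using pre-commit as an example. The hooks shown are illustrative placeholders, not necessarily our actual list:

```yaml
# .pre-commit-config.yaml – a minimal sketch; these hooks are
# placeholders, not the tools our team actually chose.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
      - id: ruff                 # Python linting
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace  # catches editor noise
      - id: end-of-file-fixer
```

Developers get these as git hooks locally, and a `pre-commit run --all-files` step in CI applies the exact same guardrails to every PR.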
Some tools, such as Black Duck and SonarQube, are already provided by the company. Since those are required (or on the roadmap), we needed to integrate them as well.
Implementing in the Pipeline 🛠️
Once we defined our list of quality tools, we ran them in the pipeline on ephemeral copies of our code—containers created for each run and destroyed afterward. This forces awareness of every dependency and quirk. As code moves through environments, we scan it regularly and catch mistakes or security problems early.
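Here’s a hypothetical sketch of that shape in GitHub Actions (the image and commands are invented for illustration): the job runs inside a fresh container, so nothing survives between runs and any undeclared dependency fails fast.

```yaml
# Hypothetical sketch: a fresh container per run means no state leaks
# between builds, and every dependency must be declared explicitly.
name: quality-checks
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    container:
      image: python:3.12-slim    # illustrative base image
    steps:
      - uses: actions/checkout@v4
      - name: Install declared dependencies only
        run: pip install -r requirements.txt ruff pytest
      - name: Lint
        run: ruff check .
      - name: Test
        run: pytest
```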
IMPORTANT – Implemented ≠ Enforced.
The #1 mistake teams make when improving code quality is biting off more than they can chew. If you’ve lived without quality checks for a long time, the initial pain can be high. Ease the transition by waiting to block merges until the tooling has been used for a while and the obvious issues have been cleaned up.
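One way to stage that in GitHub Actions (sketched here with the sonar-scanner CLI; adapt it to whatever your gate is): run the check on every PR but mark it advisory, then remove the escape hatch once the backlog is gone.

```yaml
jobs:
  quality-gate:
    runs-on: ubuntu-latest
    env:
      SONAR_HOST_URL: ${{ vars.SONAR_HOST_URL }}  # server details come from
      SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}     # repo config and secrets
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # the scanner wants full history for accurate blame data
      # Advisory phase: the scan runs and reports on every PR, but a
      # failed quality gate doesn't block the merge yet.
      - name: SonarQube quality gate (advisory for now)
        continue-on-error: true   # delete this line when you're ready to enforce
        run: sonar-scanner -Dsonar.qualitygate.wait=true
```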
We discovered that SonarQube was already scanning (but not enforcing) every PR. The results were ugly: roughly 130 security hotspots, around 150 code smells, and no coverage being reported at all because of our monorepo structure.
- Security Hotspots – Security‑sensitive code that needs a human look; it might be exploitable (think “tornado watch”, not “tornado warning”).
- Code Smells – Code that works but could be cleaner or more maintainable.
- Code Coverage – The percentage of code executed by automated tests; high coverage suggests (but doesn’t guarantee) thorough testing.
Over the next few weeks we:
- Got the tooling to report reliable information (a custom GitHub Actions workflow for our multi‑language monorepo; a simplified sketch follows this list).
- Defined a strategy for running tests, aggregating results, and delivering final data.
- Wrote a large number of tests to cover legacy code that had been pushed to production.
- Refactored code that wasn’t designed for easy testing, eliminating many smells and hotspots.
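For the curious, here’s the rough shape of that monorepo workflow. The package names and commands are hypothetical stand‑ins, but the pattern is real: each package tests itself with coverage, uploads the report as an artifact, and a final job hands everything to SonarQube.

```yaml
# Hypothetical sketch of the multi-language monorepo matrix.
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - package: api   # stand-in for a Python service
            cmd: pytest --cov=. --cov-report=xml
          - package: web   # stand-in for a TypeScript app
            cmd: npm ci && npm test -- --coverage
    steps:
      - uses: actions/checkout@v4
      - name: Run tests with coverage
        working-directory: ${{ matrix.package }}
        run: ${{ matrix.cmd }}
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-${{ matrix.package }}
          path: ${{ matrix.package }}/coverage*

  sonar:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          pattern: coverage-*
          merge-multiple: true
      # Point the scanner at the downloaded reports, e.g. via
      # sonar.python.coverage.reportPaths and sonar.javascript.lcov.reportPaths.
      - run: sonar-scanner
```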
Our simple mantra was: Get. To. Passing. Seeing the metrics improve with each push was incredibly rewarding—pure dopamine!
We Haz Sausage Now 🐷
Today we’re proud to report:
- 5 code smells (down from ~150)
- 0 security hotspots (down from ~130)
- Test coverage >85% (up from 0%)
The “sausage‑making” work paid off, delivering a much healthier DevOps “omelette”.
