Three Lines of Code in Two Weeks — It's Not Always Laziness (or rethinking how we measure developer productivity)
Source: Dev.to
The Unseen Work of a Developer
I’ve been thinking about this topic for a long time and finally decided to write it down.
The whole business of evaluating code by the number of lines written, or other attempts to estimate the volume of work, has always bothered me.
I don’t write code on an industrial scale now—maybe just some small tools for myself—but I used to write a lot of code and did it for over 15 years.
The Daily Dance
You’d come to the office in the morning and start writing something, saving along the way, of course.
In the evening I sometimes liked to hold Ctrl + Z and watch, in fast‑forward (albeit in reverse order), how the cursor moved, how blocks of code were highlighted, appeared, and disappeared.
- First, a condition and a loop would appear in one place.
- Then a piece of code from the loop would move into a procedure, the loop would disappear entirely, etc.
It’s a mesmerizing dance of the cursor and the text beneath it.
By the end of the day I’d have a new concise procedure and a couple of insertions into existing blocks – in total 50–80 lines of code.
Often I had to refine legacy code, carefully integrating my additions in different places without breaking anything.
“Who saw all these searches and wanderings of mine?”
To an external observer, only the number of lines in the morning and the number in the evening are visible. Those 80 lines don’t even hint at what I did all day. I’m sure you understand what I’m talking about.
The Missing Telemetry
In the era of total fascination with AI, I can’t shake the thought that it would be good to legitimize this whole cognitive process.
On websites, mechanisms for tracking user behavior have long been used: where they clicked, how long they stayed on the page, how far they scrolled, etc.
Why isn’t there something like this in development?
- Switching between modules
- Hovering over methods to see tooltips
- Navigating dependencies, searching, typing text, selecting, deleting, moving
- Adding a library, fixing a config, running, compiling, test runs, re‑compilation
- Step‑by‑step debugging (stepping in/out)
All of this is a massive amount of time and cognitive work.
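As a minimal sketch of what such telemetry could look like, each tracked action could be appended to a simple JSON-lines log. The event names, fields, and file path below are invented for illustration, not a real tool's schema:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DevEvent:
    """One atomic development action: a module switch, a hover, a debug step."""
    kind: str          # e.g. "switch_module", "hover_tooltip", "debug_step"
    target: str        # what the action touched: a file, symbol, or config key
    timestamp: float   # seconds since the epoch

class EventLog:
    """Append-only JSON-lines log of development actions."""
    def __init__(self, path: str):
        self.path = path

    def record(self, kind: str, target: str) -> None:
        event = DevEvent(kind=kind, target=target, timestamp=time.time())
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(event)) + "\n")

# Each tracked action appends one line; the log grows even on days
# when the diff stays tiny.
log = EventLog("dev_events.jsonl")
log.record("switch_module", "billing.py")
log.record("hover_tooltip", "calculate_tax")
log.record("debug_step", "billing.py:42")
```

The point of the append-only format is that the log captures the volume of activity, not just the final state of the code.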
The SQL Debugging Example
Debugging a heavy SQL query can take a week:
- Look at execution plans, measure I/O statistics, add indexes.
- Pull out pieces of sub‑queries, debug them separately, then insert them back.
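To make that loop concrete, here is a toy sketch of the plan-inspection cycle using Python's built-in sqlite3 module; the table, column, and index names are invented, and a real week of this work would involve far heavier tooling than this:

```python
import sqlite3

# Toy version of the workflow: inspect the plan, add an index, inspect again.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")

query = "SELECT total FROM orders WHERE customer_id = ?"

# Before the index: the plan's detail column reports a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall()
print(before)

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# After the index: the plan's detail column reports an index search.
after = conn.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall()
print(after)
```

The diff this produces is one `CREATE INDEX` line; the plan-reading and measuring that justified it leave no trace in the commit.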
Who sees this? No one. Only you.
At the end of the week the commit looks like three short but perfect fixes in the query. What did you do all week? Added those three lines? Sometimes you feel uneasy realizing how your work’s result looks to an outside observer.
The Reality of Development Work
Everyone is used to evaluating the result, but in development, just as in science, 90% of the time goes to experimentation and error hunting. Evaluation tools are designed for the result, as if you were digging a ditch from here until lunchtime.
No one will appreciate all your struggles, trials, and errors, but that’s precisely where the result is born.
If telemetry collected all your actions into a log, then built a graph from it, or an AI evaluated how difficult it was, the business would see a transparent, demonstrable picture of your work.
The Picasso Analogy
There’s a story about Picasso. Someone asked him to draw a sketch on a napkin, and he did it in a couple of minutes. When asked how much he should be paid, he named a huge sum. The client was outraged: how could he charge so much for just two minutes?
Picasso replied that it hadn’t taken him two minutes; it had taken his whole life.
Doesn’t this resonate with developers who spend a massive amount of time on invisible work, then try to prove to the client that it wasn’t just 2 lines of code?
A Proposed Solution: Development Telemetry
We could collect thousands of honest logs of the development process (and, for contrast, logs of imitated work) and train a model on them to identify patterns.
- Raw logs aren’t suitable for direct training, but from them we could build heat maps or activity graphs.
- The graph would show the “pulse” of the process. Some people work faster, some slower, but the shape of the process is roughly similar.
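As a sketch of how that “pulse” might be rendered, events from a log could be bucketed by hour into a crude text heat map. All of the event data below is synthetic, invented purely for illustration:

```python
from collections import Counter

# Synthetic event stream: (hour_of_day, kind). In a real system these
# would come from the telemetry log; here they are made up.
events = [
    (9, "edit"), (9, "edit"), (10, "debug_step"), (10, "debug_step"),
    (10, "debug_step"), (11, "search"), (14, "edit"), (14, "edit"),
    (14, "edit"), (14, "edit"), (15, "edit"),
]

def heat_row(events, hours=range(9, 18)):
    """Render one text 'heat map' row: one glyph per hour, darker = busier."""
    counts = Counter(hour for hour, _ in events)
    glyphs = " .:*#"  # 0 events .. 4-or-more events
    return "".join(glyphs[min(counts.get(h, 0), 4)] for h in hours)

print(heat_row(events))
```

Even this crude rendering shows bursts of activity and quiet stretches, which is exactly the rhythm a line count erases.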
Implementation Ideas
- External utility that simply logs work.
- In the settings, the user defines what exactly to collect and from which applications (IDE, debugger, SQL editor, Postman, browsers, etc.).
- Title filters with regex, object‑type filters, and so on.
- The log would show, for example, a switch from the debugger to SSMS/PL/SQL, a long struggle there, then a hop to Stack Overflow and scrolling through dozens of pages.
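The title-filter idea could be as simple as a list of regular expressions matched against window titles. A sketch in Python, with patterns and titles invented for illustration:

```python
import re

# Hypothetical filter config: log only windows whose titles match
# one of these patterns (the patterns are invented examples).
TITLE_FILTERS = [re.compile(p) for p in (r"SSMS", r"Visual Studio", r"Stack Overflow")]

def should_log(window_title: str) -> bool:
    """Return True if the window title passes at least one filter."""
    return any(p.search(window_title) for p in TITLE_FILTERS)

print(should_log("query.sql - SSMS"))      # a work window: logged
print(should_log("funny cats - YouTube"))  # leisure: not logged
```

Keeping the filters user-defined matters here: the developer, not the tool, decides which applications count as loggable work.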
The result would be an incremental record of actions with context, not just the delta of the result. Not for surveillance, but for analyzing how exactly we (humans) do it, because that’s the most interesting part.
The Human Process: Incubation and Insight
It’s well known that a person can’t write code from 9 to 6, especially when it requires searching for a solution, not just typing out standard algorithms.
- Sometimes you just can’t think straight.
- You can force yourself to write all you want – it won’t help; you’ll produce messy code you can’t understand later.
The solution sits on the tip of your tongue. You feel it exists and is simple, but a clear picture doesn’t form. You’ve already tried dozens of options, but the perfect result isn’t coming together.
You need a distraction – read jokes, news, watch cat videos – to let your thoughts defragment. This is incubation, which can last from a few hours to several days.
At some point a flash happens – the solution clicks. You open your code, look at what you’ve written, and think: “My god, what is this?!” Delete it! And in five minutes you write a block that works immediately, something that hadn’t worked for days before.
Closing Thoughts
If we could capture the full, nuanced journey of a developer’s work—not just the final line count—we would have a richer, fairer way to evaluate effort, improve tooling, and perhaps even teach AI to better understand the creative process behind software. The invisible work deserves to be seen.
Originally published on Habr in Russian.
Heat‑Map Insight into Cognitive Workflows
And then there’s the feeling of awkwardness toward yourself: you spent so much time, yet if someone asked, there’s nothing to show. It’s cognitive dissonance. We’re used to measuring by volume: if something is worthwhile, there should be a lot of it, right?
How do you explain to the business why you spent one out of two weeks watching videos?
Such an incubation-and-insight period would be visible on a heat map, because it is preceded by intense trial and error, when you rewrote the same piece over and over; then comes a pause in the IDE, cat videos in the browser, and a short final solution.
Privacy concerns might arise. But a person can choose what to log and what not to, and it’s in their interest. Crucially, the logs don’t necessarily have to contain texts; they could contain only markers based on which heat maps can be built. It doesn’t matter what exactly was written or what its quality was during the search phase; the number of attempts and the number of options are much more important, because that’s where all the time went.
“Effective” managers might try to use this against you. After all, Petya kept struggling and eventually produced a monstrosity, while Vasya went off to watch cat videos and came back with three lines of perfect code. But this is precisely where statistics on such maps and the impartiality of AI are needed, much as a neural network analyzing chest X‑ray images can see markers even where an experienced specialist sees nothing special.
Why isn’t this available yet?
Consider a school math problem, finding the area of a field, for example.
In school, they teach you to apply the Pythagorean theorem, drilling dozens of problems so that you approach the same theorem from different angles. Once the material is absorbed, a new topic begins. But school teaches a ready‑made process, one that over hundreds of years has been turned into a manual.
But at work, a person is often forced to build their own process independently. Yes, based on the school one, but independently.
Maybe these heat maps could find application in science, or perhaps they could replace tests in interviews (instead of solving problems and IQ tests, you could show a “passport” of your thinking style, because that’s what everyone is hunting for in hiring, not just knowledge of algorithms)…
What do you think?