The Arithmetic of Productivity Boosts: Why Does a “40% Increase in Productivity” Never Actually Work?
Source: Towards Data Science
Introduction: False promises?
As a consultant and manager in the data sphere, I’ve sat through my fair share of slide‑deck presentations—on both sides. Any slide deck worth its salt promises something, often about efficiency or productivity. You have probably heard statements like:
- “This tool will make your data scientists 40 % more productive!”
- “You will spend 30 % less time fixing bugs by doing this. You could basically implement a six‑hour workday and still come out on top!”
- “With our solution, you could code up two projects in the time it took you previously to do only one. This halves the time to production!”
Sometimes the promises fail simply because the product is bad. But why do they often fall flat even with good products? You might switch to a tool you genuinely love and still not see the promised improvement. Why? Are the numbers you were presented with lies?
My background—a Ph.D. in mathematics—has probably scarred me for life in more ways than one. One of the deepest scars is my need to understand precisely what numbers represent. The figures in the statements above all seem to point to one thing, yet they tell a completely different story once you stop and think.
While outright lying certainly happens, a much more common practice is being misleading. This sort of marketing assumes you won’t think critically when presented with numbers. Let’s think critically together and see what we uncover.
Lies, Lies, and Marketing
What’s the problem with the productivity claim?
The core issue: the statement claims to optimise a specific part of the workflow while implicitly promising a global productivity boost.
A concrete example
You’re a major AI player and have just released a tool that helps data scientists pick model parameters. Early surveys show:
“Data scientists are 20 % more productive when selecting model parameters.”
You initially phrase it as:
Our tool has improved the productivity of model parameter selection for data scientists by 20 %.
Marketing tweaks it to:
Our tool has improved model parameter selection, making data scientists 20 % more productive.
At first glance the change seems minor—just a few word swaps. In reality it transforms a moderately impressive claim into an exaggerated one.
Why the tweak matters
The marketing version suggests that data scientists are 20 % more productive overall, not just during parameter selection.
Your survey, however, only measured productivity while they were choosing parameters.
How the numbers break down
| Activity | Approx. share of a data scientist’s time (illustrative) |
|---|---|
| Core data‑science tasks (e.g., modeling, analysis) | ~40 % |
| Model‑parameter selection (a subset of the above) | ~10 % of that 40 % → 4 % of total time |
| Other duties (prototyping, stakeholder management, meetings, debugging, pipeline maintenance, etc.) | ~60 % |
If the tool makes parameter selection 20 % faster, the impact on total work time is:
0.04 × 0.20 = 0.008, or roughly **0.8 %** of total productivity
That gain is barely noticeable in a typical week, and the initial learning curve could even cause a short‑term dip in efficiency.
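The arithmetic above can be sketched in a few lines of Python. One hedge worth making explicit: if “20 % more productive” means the task runs at 1.2× its original speed, then an Amdahl’s-law-style calculation gives a slightly smaller figure than the simple product, which acts as a rough upper bound. The function below is an illustrative sketch under that assumption, not a formula the article’s survey used.

```python
def overall_gain(task_share: float, task_speedup: float) -> float:
    """Overall productivity gain when a fraction `task_share` of total
    work time is sped up by a factor `task_speedup` (Amdahl's law)."""
    remaining_time = (1 - task_share) + task_share / task_speedup
    return 1 / remaining_time - 1

# Parameter selection: 4 % of total time, made 20 % faster (factor 1.2).
exact = overall_gain(0.04, 1.2)   # ~0.0067, i.e. about 0.67 %
rough = 0.04 * 0.20               # the 0.8 % back-of-envelope figure above
print(f"exact: {exact:.2%}, rough upper bound: {rough:.2%}")
```

Either way, the result stays well under one percent of total productivity, which is the article’s point.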
The marketing loophole
Our tool has improved model parameter selection, making data scientists 20 % more productive.
- Interpretation 1 (intended): The 20 % boost applies only to the model‑parameter‑selection step.
- Interpretation 2 (implied): Data scientists are 20 % more productive overall.
If the claim is challenged, marketing can point to the sentence structure and argue that the increase only applies to the specific task, giving them a convenient fallback.
Take‑away
The power of marketing often lies in re‑ordering the right words. A modest, task‑specific improvement can be spun into a sweeping productivity claim, even though the actual effect on total work output is negligible.
A Better Way? Focus on Cognitive Load Rather Than Productivity
What does the story I just told really tell you? If your work consists of many different, complex tasks (as a data scientist’s does), then chasing a productivity gain in any single one of them won’t move the needle very far.
Don’t get me wrong—if you have an easy opportunity to become 20 % more productive with one of your tasks, go for it! But don’t expect it to translate into more than a percent or two difference in total productivity.
What can we do instead?
When we juggle many complex tasks, we can use cognitive load as a metric and try to reduce it.
Example: A competing company builds a tool for model‑parameter selection. Instead of trying to speed up the process, the tool’s sole purpose is to reduce the data scientist’s cognitive load. The model‑selection step still takes the same amount of clock time, but the data scientist feels energized and ready for the next challenge afterward.
Most people—including me—cannot work eight hours a day while staying at the top of our game the whole time. Some days feel like six effective hours; other days feel like only two. If a process requires little cognitive load, I can work longer effectively. This often yields the same total productivity (a few percent gain) plus the added benefit of improved morale.
Next time someone claims a “40 % increase in productivity,” ask:
- How much of the total work time does this productivity increase affect?
- How much cognitive load does this take away—or introduce?
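The first of those questions can be answered with the same back-of-envelope arithmetic used earlier. The helper below is a hypothetical sketch: both parameters (`claimed_pct`, the vendor’s advertised boost, and `affected_share`, the fraction of your total time the tool actually touches) are your own estimates, not numbers the vendor will hand you.

```python
def claim_ceiling(claimed_pct: float, affected_share: float) -> float:
    """Rough upper bound on the overall productivity gain from a claim
    like '`claimed_pct` % more productive', assuming the boost only
    applies to `affected_share` of your total work time."""
    return affected_share * claimed_pct / 100

# A "40 % increase in productivity" that only touches ~5 % of your week:
print(f"{claim_ceiling(40, 0.05):.1%} overall, at best")
```

Plugging in honest estimates usually deflates a headline number to a percent or two, which is exactly the gap between the intended and the implied reading of these claims.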