GitLab CI Caching Didn’t Speed Up My Pipeline — Here’s Why
Source: Dev.to

Most DevOps guides say:
“Enable caching — it will speed up your CI pipelines.”
I’ve done that many times in my career. Here I’d like to share some thoughts on the topic, illustrated with a small experiment.
I built a GitLab CI lab and added dependency caching. Did it make the runs faster? The result might surprise you:
My pipeline didn’t get faster at all. In some cases it was slightly slower.
This isn’t a post against caching. Caching worked exactly as expected; it just didn’t translate into a faster pipeline duration in this particular setup. The article is about what actually happens after you enable it and why the outcome might not match expectations.
What I Wanted to Test
I wanted to validate a simple assumption:
- Does dependency caching really reduce pipeline duration?
- Where does the improvement come from?
- When is caching actually worth it?
So I built a small Python project with a multi‑stage GitLab CI pipeline and measured the results.
The Setup
The pipeline has three stages:
- prepare → install dependencies
- quality → compile/lint
- test → run tests
Each job installs dependencies independently—just like many real‑world pipelines. To make the effect visible, I used slightly heavier dependencies:
`pandas`, `scipy`, `scikit-learn`, `matplotlib`
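A minimal baseline for this setup could look like the following. This is an illustrative sketch, not the exact config from the lab; the image tag and the lint/test commands are assumptions:

```yaml
stages:
  - prepare
  - quality
  - test

# Each job installs dependencies on its own, so the same work
# is repeated in every stage -- the situation caching is meant to fix.
prepare:
  stage: prepare
  image: python:3.12
  script:
    - time pip install -r requirements.txt

quality:
  stage: quality
  image: python:3.12
  script:
    - time pip install -r requirements.txt
    - python -m compileall .

test:
  stage: test
  image: python:3.12
  script:
    - time pip install -r requirements.txt
    - pytest
```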
Baseline: No Cache
Each job runs:
```shell
time pip install -r requirements.txt
```
As expected:
- Dependencies are downloaded in every job.
- Work is repeated across stages.
- Every pipeline run starts from scratch.
Results (No Cache)
| Run | Duration |
|---|---|
| #1 | ~38 s |
| #2 | ~34 s |
Adding Cache
I introduced a GitLab cache:
```yaml
.cache:
  cache:
    key:
      files:
        - requirements.txt
    paths:
      - .cache/pip
    policy: pull-push
```
and configured pip:
```yaml
variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
```
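For the template to take effect, each job has to opt in to it. A sketch of how a job might consume it via GitLab's `extends` keyword (the job name and script are illustrative):

```yaml
# Hypothetical job: `extends` merges the shared `.cache`
# definition into the job, so it pulls the cache before
# running and pushes it back afterwards (policy: pull-push).
test:
  extends: .cache
  stage: test
  script:
    - pip install -r requirements.txt
    - pytest
```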
Now dependencies should be reused between jobs and runs.
The Result (With Cache)
| Mode | Run | Duration |
|---|---|---|
| No cache | 1 | ~38 s |
| No cache | 2 | ~34 s |
| With cache | 1 | ~40 s |
| With cache | 2 | ~38 s |
Almost no difference.
Why Didn’t It Get Faster?
- Fast package source – If the runner uses a nearby mirror (e.g., Hetzner), downloads are already quick.
- `pip` is efficient – Modern Python packaging uses pre‑built wheels, making installs fast.
- Cache overhead – Archive creation, upload/download, and extraction add time that can cancel any benefit.
- CI jobs spend time elsewhere – Container startup, image pulling, and repo checkout dominate the runtime.
The Real Takeaway
Dependency caching is not automatically a performance optimization. Its impact depends on:
- Dependency size
- Network conditions
- Runner configuration
- Pipeline structure
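The trade‑off behind those factors can be stated as a simple break‑even check: caching pays off only when the download time it saves exceeds the time spent restoring and saving the archive. A toy model (the numbers below are illustrative, not measured in the lab):

```python
def cache_pays_off(download_s: float, restore_s: float, save_s: float) -> bool:
    """Illustrative break-even check for a CI dependency cache.

    download_s: time jobs spend downloading packages without a cache
    restore_s:  time to fetch and extract the cache archive
    save_s:     time to create and upload the cache archive
    """
    return download_s > restore_s + save_s


# With a fast mirror the savings are small and the overhead wins:
print(cache_pays_off(download_s=6.0, restore_s=4.0, save_s=5.0))   # False
# On a slow network the same overhead is easily repaid:
print(cache_pays_off(download_s=60.0, restore_s=4.0, save_s=5.0))  # True
```

This is exactly what the experiment showed: with a nearby mirror, the saved download time never outgrew the archive overhead.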
When Caching Helps
- Large dependency trees
- Slow networks or remote mirrors
- Distributed runners
- Frequent pipeline runs
When It Might Not Help
- Small projects
- Fast mirrors
- Short pipelines
- High cache overhead
Not Just About Speed
Caching can still:
- Reduce outbound traffic
- Improve resilience
- Decrease reliance on external registries
What’s Next
Next step: testing a shared cache with S3‑compatible storage.
Repo
You can find the full lab here:
👉
Final Thought
Not every best practice gives a measurable improvement—but understanding why is where real DevOps begins.