GitLab CI Caching Didn’t Speed Up My Pipeline — Here’s Why

Published: (March 19, 2026 at 10:33 AM EDT)
3 min read
Source: Dev.to


Most DevOps guides say:

“Enable caching — it will speed up your CI pipelines.”

I’ve done that many times in my career. Here I’d like to share some thoughts on the topic, illustrated with a small experiment.

I built a GitLab CI lab and added dependency caching. Did it make the runs faster? The result might surprise you:

My pipeline didn’t get faster at all. In some cases it was slightly slower.

This isn’t a post against caching. Caching worked exactly as expected; it just didn’t translate into a faster pipeline duration in this particular setup. The article is about what actually happens after you enable it and why the outcome might not match expectations.

What I Wanted to Test

I wanted to answer a few simple questions:

  • Does dependency caching really reduce pipeline duration?
  • Where does the improvement come from?
  • When is caching actually worth it?

So I built a small Python project with a multi‑stage GitLab CI pipeline and measured the results.

The Setup

The pipeline has three stages:

  1. prepare → install dependencies
  2. quality → compile/lint
  3. test → run tests

Each job installs dependencies independently—just like many real‑world pipelines. To make the effect visible, I used slightly heavier dependencies:

  • pandas
  • scipy
  • scikit-learn
  • matplotlib

Baseline: No Cache

Each job runs:

time pip install -r requirements.txt

As expected:

  • Dependencies are downloaded in every job.
  • Work is repeated across stages.
  • Every pipeline run starts from scratch.
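As a sketch, the baseline pipeline looks roughly like this (job names and stages are from the lab above; the image tag and exact lint/test commands are assumptions):

```yaml
stages:
  - prepare
  - quality
  - test

# Every job repeats the install step; nothing is shared between stages.
prepare:
  stage: prepare
  image: python:3.12-slim   # assumed image; any recent Python image works
  script:
    - time pip install -r requirements.txt

quality:
  stage: quality
  image: python:3.12-slim
  script:
    - time pip install -r requirements.txt
    - python -m compileall .

test:
  stage: test
  image: python:3.12-slim
  script:
    - time pip install -r requirements.txt
    - pytest
```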

Results (No Cache)

| Run | Duration |
|-----|----------|
| #1  | ~38 s    |
| #2  | ~34 s    |

Adding Cache

I introduced a GitLab cache:

.cache:
  cache:
    key:
      files:
        - requirements.txt
    paths:
      - .cache/pip
    policy: pull-push

and configured pip:

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

Now dependencies should be reused between jobs and runs.
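Wired together, a cached job might look like this (a sketch; the `extends` usage and image tag are assumptions, while the `.cache` template and `PIP_CACHE_DIR` variable are as shown above):

```yaml
variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

.cache:
  cache:
    key:
      files:
        - requirements.txt   # cache key changes whenever dependencies change
    paths:
      - .cache/pip
    policy: pull-push

test:
  stage: test
  extends: .cache            # inherit the cache definition
  script:
    - time pip install -r requirements.txt
    - pytest
```

With `key:files`, GitLab derives the cache key from the checksum of `requirements.txt`, so a dependency bump automatically produces a fresh cache instead of reusing a stale one.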

The Result (With Cache)

| Mode       | Run | Duration |
|------------|-----|----------|
| No cache   | 1   | ~38 s    |
| No cache   | 2   | ~34 s    |
| With cache | 1   | ~40 s    |
| With cache | 2   | ~38 s    |

Almost no difference.

Why Didn’t It Get Faster?

  1. Fast package source – If the runner uses a nearby mirror (e.g., Hetzner), downloads are already quick.
  2. pip is efficient – Modern Python packaging uses pre‑built wheels, making installs fast.
  3. Cache overhead – Archive creation, upload/download, and extraction add time that can cancel any benefit.
  4. CI jobs spend time elsewhere – Container startup, image pulling, and repo checkout dominate the runtime.
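One way to trim the overhead from point 3 is to let only the first job upload the cache and have downstream jobs pull it read-only. This is a sketch, not part of the lab; it relies on GitLab's deep merge of `extends`, so `policy` is overridden while `key` and `paths` are inherited from the `.cache` template:

```yaml
prepare:
  extends: .cache
  cache:
    policy: pull-push   # only this job populates/uploads the cache

test:
  extends: .cache
  cache:
    policy: pull        # downstream jobs download only, skipping the upload step
```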

The Real Takeaway

Dependency caching is not automatically a performance optimization. Its impact depends on:

  • Dependency size
  • Network conditions
  • Runner configuration
  • Pipeline structure

When Caching Helps

  • Large dependency trees
  • Slow networks or remote mirrors
  • Distributed runners
  • Frequent pipeline runs

When It Might Not Help

  • Small projects
  • Fast mirrors
  • Short pipelines
  • High cache overhead

Not Just About Speed

Caching can still:

  • Reduce outbound traffic
  • Improve resilience
  • Decrease reliance on external registries

What’s Next

Next step: testing a shared cache with S3‑compatible storage.

Repo

You can find the full lab here:

👉

Final Thought

Not every best practice gives a measurable improvement—but understanding why is where real DevOps begins.
