Multi-tenant Loki on Kubernetes

Published: December 17, 2025 at 11:44 AM EST
3 min read
Source: Dev.to

What is Loki?

Loki is a horizontally scalable, highly available, multi‑tenant log aggregation system inspired by Prometheus.

If you’re using Loki for the first time, check out the official documentation:
Loki Docs

What we need

  • A Loki deployment (SimpleScalable mode via Helm)
  • Promtail
  • An S3‑compatible bucket (for object storage)

Loki setup (SimpleScalable + S3 storage)

The deployment relies on object storage, so you need an S3‑compatible bucket and its credentials to store the logs.

Helm chart structure

loki-helm-chart/
├── templates/
│   └── loki-secrets.yaml
├── Chart.yaml
└── values.yaml
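
Chart.yaml just wraps the upstream grafana/loki chart as a dependency. A minimal sketch, assuming that layout (the chart version is a placeholder – pin whatever you have tested):

apiVersion: v2
name: loki-helm-chart
description: Wrapper chart for a multi-tenant Loki deployment
version: 0.1.0
dependencies:
  - name: loki
    version: "6.x.x"   # placeholder – pin the chart version you tested
    repository: https://grafana.github.io/helm-charts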

loki-secrets.yaml – S3 credentials

apiVersion: v1
kind: Secret
metadata:
  name: loki-s3-credentials
  namespace: monitoring
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: ""
  AWS_SECRET_ACCESS_KEY: ""
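
The extraEnvFrom entries in the values below inject these variables into the Loki pods, and the S3 client reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment. If you would rather reference them explicitly, here is a sketch, assuming the chart's accessKeyId / secretAccessKey values and that environment expansion (-config.expand-env=true) is enabled for the Loki containers:

loki:
  storage:
    s3:
      # Assumption: env expansion must be enabled for ${...} to resolve.
      accessKeyId: ${AWS_ACCESS_KEY_ID}
      secretAccessKey: ${AWS_SECRET_ACCESS_KEY}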

values.yaml – Loki configuration

deploymentMode: SimpleScalable

loki:
  auth_enabled: true

  extraEnvFrom:
    - secretRef:
        name: loki-s3-credentials

  schemaConfig:
    configs:
      - from: "2024-04-01"
        store: tsdb
        object_store: s3
        schema: v13
        index:
          prefix: loki_index_
          period: 24h

  storage_config:
    tsdb_shipper:
      index_gateway_client:
        server_address: '{{ include "loki.indexGatewayAddress" . }}'

  storage:
    type: s3
    s3:
      endpoint: rook-ceph-rgw-store.rook-ceph.svc.cluster.local:80
      s3ForcePathStyle: true
      insecure: true

    bucketNames:
      chunks: loki-logs
      ruler: loki-ruler
      admin: loki-admin

  commonConfig:
    replication_factor: 3

  ingester:
    chunk_encoding: snappy

  querier:
    multi_tenant_queries_enabled: true
    max_concurrent: 4

  pattern_ingester:
    enabled: true

  limits_config:
    allow_structured_metadata: true
    volume_enabled: true

Important notes

  • auth_enabled: true tells Loki to expect a tenant identifier (via X‑Scope‑OrgID) for multi‑tenancy.
  • Retention is performed by the Compactor. If retention isn’t enabled there, logs can live forever even when retention_period is set.
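
If you do want logs to expire, enable retention on the compactor and set a retention_period in limits_config (merge it with the limits_config block above). A minimal sketch – the period and working directory are example values:

loki:
  compactor:
    retention_enabled: true
    delete_request_store: s3                # required when retention is enabled (Loki 3.x)
    working_directory: /var/loki/retention  # example path
  limits_config:
    retention_period: 744h                  # ~31 days, example value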

Remaining values (PVCs for local state)

Loki components still need local disk for WAL, cache, and working state, so you’ll see PVCs on the read/write/backend pods.

Note: ceph-block is our expandable storage class.

backend:
  replicas: 2
  persistence:
    enabled: true
    size: 2Gi
    storageClass: ceph-block
    accessModes:
      - ReadWriteOnce
  extraEnvFrom:
    - secretRef:
        name: loki-s3-credentials

write:
  replicas: 3
  persistence:
    enabled: true
    size: 2Gi
    storageClass: ceph-block
    accessModes:
      - ReadWriteOnce
  extraEnvFrom:
    - secretRef:
        name: loki-s3-credentials

read:
  replicas: 2
  extraEnvFrom:
    - secretRef:
        name: loki-s3-credentials

Gateway configuration (what it does and does not do)

gateway:
  nginx:
    customHeaders:
      - name: X-Scope-OrgID
        value: $http_x_scope_orgid

  • This does not enforce isolation; it merely forwards whatever tenant header the caller provides.
  • Grafana’s docs recommend that X‑Scope‑OrgID be set by an authenticating reverse proxy so users can’t spoof other tenants.

In my setup, clients never access Loki directly. Only Promtail (running inside the cluster) pushes logs, and I control the tenant mapping, so forwarding this header is acceptable.
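
If clients outside the cluster ever push to or query the gateway directly, put real authentication in front of it. As a starting point, the chart's gateway supports basic auth – a sketch, assuming the gateway.basicAuth values of the upstream chart (credentials are placeholders; source them from a Secret in practice). Note that this only authenticates callers; it still does not map users to tenants:

gateway:
  basicAuth:
    enabled: true
    username: push-client   # placeholder
    password: change-me     # placeholder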

Promtail setup

Promtail runs on each node and ships container logs to Loki.

daemonset:
  enabled: true
deployment:
  enabled: false

config:
  clients:
    - url: http://loki-gateway.monitoring.svc.cluster.local:80/loki/api/v1/push
      timeout: 60s
      batchwait: 1s
      batchsize: 1048576

  serverPort: 3101
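
If a single static tenant were enough, Promtail's client block accepts a tenant_id directly (it is sent as X-Scope-OrgID). A sketch with an example tenant name – we use per-namespace tenants instead, as shown next:

config:
  clients:
    - url: http://loki-gateway.monitoring.svc.cluster.local:80/loki/api/v1/push
      tenant_id: platform   # example static tenant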

Setting the tenant correctly

Use a relabel rule to add the Kubernetes namespace as a namespace label:

snippets:
  common:
    - action: replace
      source_labels: [__meta_kubernetes_namespace]
      target_label: namespace

Then, in the pipeline, set the tenant from that label:

snippets:
  pipelineStages:
    - cri: {}
    - tenant:
        label: namespace
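
In the upstream Promtail chart, both snippets live under config.snippets, so the relevant part of the values looks roughly like this:

config:
  snippets:
    common:
      - action: replace
        source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
    pipelineStages:
      - cri: {}
      - tenant:
          label: namespace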

And voilà – a multi‑tenant Loki setup is ready!

Adding Loki as a data source in Grafana

Once Loki is up and running, add it as a data source in Grafana and start exploring logs per tenant.
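
If you provision Grafana data sources as code, the tenant header can be set there as well. A sketch of a data source scoped to the monitoring tenant (the URL matches the in-cluster gateway used above):

apiVersion: 1
datasources:
  - name: Loki (monitoring)
    type: loki
    access: proxy
    url: http://loki-gateway.monitoring.svc.cluster.local
    jsonData:
      httpHeaderName1: X-Scope-OrgID
    secureJsonData:
      httpHeaderValue1: monitoring   # tenant = namespace in this setup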

Tenant filtering screenshots

Set up the X-Scope-OrgID header on the data source – I set the value to monitoring.

(Screenshot: Grafana dashboard – monitoring namespace)

Now we will only see logs from the monitoring namespace:

(Screenshot: Grafana dashboard – filtered logs)

A HUGE shout‑out to Ben Ye for his help!
GitHub profile

Hope you found this useful. I’m always up for a quick chat—connect with me on LinkedIn or Twitter.
