SwiftUI Rate Limiting & Backpressure (Protect Your Backend From Your Own App)

Published: March 8, 2026 at 01:13 PM EDT
4 min read
Source: Dev.to

Most apps call APIs like this:

try await api.fetchFeed()

That works… until something triggers many requests at once.

Examples

  • infinite refresh loops
  • aggressive background sync
  • multiple views requesting the same data
  • retry storms after network recovery
  • multiple devices syncing simultaneously

Suddenly your app becomes the biggest threat to your own backend.

🧠 The Core Principle

A healthy system controls its request rate. Even when everything is working, uncontrolled traffic can overload APIs.

🧱 1. What Is Rate Limiting?

Rate limiting controls how many requests can occur during a time window.

Example rule

10 requests per second

If more requests arrive, they must wait, queue, or be rejected. This protects both the backend and the client device.

🧬 2. Token Bucket Model

A common approach is the token bucket algorithm.

Concept

  • The bucket holds a fixed number of tokens
  • Each request consumes one token
  • Tokens refill over time

If the bucket is empty, the request must wait.
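In the classic formulation, tokens accrue gradually rather than all at once. A minimal sketch of that refill arithmetic (the `TokenBucket` name and `refillRate` parameter, in tokens per second, are mine, not a standard API):

```swift
import Foundation

struct TokenBucket {
    let capacity: Double
    let refillRate: Double      // tokens added per second
    private(set) var tokens: Double
    private var lastRefill: Date

    init(capacity: Double, refillRate: Double) {
        self.capacity = capacity
        self.refillRate = refillRate
        self.tokens = capacity
        self.lastRefill = Date()
    }

    /// Returns true if a token was available and consumed.
    mutating func tryConsume() -> Bool {
        let now = Date()
        // Accrue tokens in proportion to elapsed time, capped at capacity.
        tokens = min(capacity, tokens + now.timeIntervalSince(lastRefill) * refillRate)
        lastRefill = now
        guard tokens >= 1 else { return false }
        tokens -= 1
        return true
    }
}
```

Gradual refill smooths traffic; the simpler implementation below refills the whole bucket per interval, which is easier to follow but allows small bursts at window edges.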

🧱 3. Simple Rate Limiter Implementation

actor RateLimiter {
    private let maxTokens: Int
    private let refillInterval: TimeInterval

    private var tokens: Int
    private var lastRefill: Date

    init(maxTokens: Int, refillInterval: TimeInterval) {
        self.maxTokens = maxTokens
        self.refillInterval = refillInterval
        self.tokens = maxTokens
        self.lastRefill = Date()
    }

    /// Suspends until a token is available, then consumes it.
    func acquire() async {
        refill()
        while tokens == 0 {
            // Poll until the next refill; 0.1 s keeps latency low
            // without busy-spinning.
            try? await Task.sleep(nanoseconds: 100_000_000)
            refill()
        }
        tokens -= 1
    }

    /// Refills the whole bucket once per interval, a fixed-window
    /// simplification of the gradual token-bucket refill.
    private func refill() {
        let now = Date()
        let elapsed = now.timeIntervalSince(lastRefill)
        if elapsed >= refillInterval {
            tokens = maxTokens
            lastRefill = now
        }
    }
}

This ensures requests happen at a controlled pace.

🚦 4. Using the Rate Limiter

Wrap your API calls:

let limiter = RateLimiter(maxTokens: 5, refillInterval: 1)

func fetchFeed() async throws -> Feed {
    await limiter.acquire()
    return try await api.fetchFeed()
}

Now the app cannot issue more than 5 requests per one-second refill window; extra calls simply wait their turn.

🔄 5. What Is Backpressure?

Backpressure prevents systems from producing more work than they can handle.

Scenario

Scroll view triggers 50 image loads

Without backpressure: 50 network requests instantly.
With backpressure: Requests queue and execute gradually.

This protects CPU, memory, network, and backend APIs.
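One concrete way to add that backpressure is a bounded task group that keeps only a few operations in flight at once. This is a sketch, not the article's code; `forEachLimited` and its `width` parameter are names introduced here:

```swift
/// Runs `operation` over `items`, keeping at most `width` tasks in flight.
func forEachLimited<T: Sendable>(
    _ items: [T],
    width: Int,
    operation: @escaping @Sendable (T) async -> Void
) async {
    await withTaskGroup(of: Void.self) { group in
        var iterator = items.makeIterator()

        // Seed the group with the first `width` tasks.
        for _ in 0..<width {
            guard let item = iterator.next() else { break }
            group.addTask { await operation(item) }
        }

        // Each time a task finishes, start the next one.
        while await group.next() != nil {
            guard let item = iterator.next() else { continue }
            group.addTask { await operation(item) }
        }
    }
}
```

With `width: 4`, fifty image URLs produce at most four concurrent loads; the remaining forty-six wait their turn instead of hitting the network at once.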

📦 6. Request Queue Pattern

actor RequestQueue {
    private var queue: [() async -> Void] = []
    private var isProcessing = false

    func enqueue(_ task: @escaping () async -> Void) {
        queue.append(task)
        processNext()
    }

    private func processNext() {
        // Without this guard, every enqueue would spawn its own Task
        // and queued work would run concurrently instead of in order.
        guard !isProcessing, !queue.isEmpty else { return }
        isProcessing = true
        let task = queue.removeFirst()
        Task {
            await task()
            isProcessing = false
            processNext()
        }
    }
}

The queue executes tasks one at a time, in the order they were enqueued, instead of firing them all at once.

🔋 7. Why Mobile Apps Need Rate Limiting

Mobile environments are unpredictable. Common issues include:

  • API quotas
  • Slow cellular connections
  • Background sync bursts
  • Multiple tabs or screens refreshing the same data

Without limits, the backend gets flooded, battery drains faster, and the UI becomes unstable. Rate limiting stabilizes the system.

🌐 8. Combine With Circuit Breakers

Circuit Breakers → stop requests during failures
Rate Limiting   → control requests during success

Together they form complete network resilience.
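A minimal breaker sketch to pair with the limiter, assuming a fixed failure threshold and cooldown (the `CircuitBreaker`, `failureThreshold`, and `cooldown` names are illustrative, not a standard API):

```swift
import Foundation

struct CircuitOpenError: Error {}

actor CircuitBreaker {
    enum State { case closed, open(until: Date) }

    private var state: State = .closed
    private var failures = 0
    private let failureThreshold: Int
    private let cooldown: TimeInterval

    init(failureThreshold: Int = 3, cooldown: TimeInterval = 30) {
        self.failureThreshold = failureThreshold
        self.cooldown = cooldown
    }

    /// Runs `operation`, rejecting immediately while the breaker is open.
    func run<T: Sendable>(_ operation: @Sendable () async throws -> T) async throws -> T {
        if case .open(let until) = state {
            guard Date() >= until else { throw CircuitOpenError() }
            state = .closed          // cooldown elapsed: allow a trial request
        }
        do {
            let value = try await operation()
            failures = 0             // success closes the loop
            return value
        } catch {
            failures += 1
            if failures >= failureThreshold {
                state = .open(until: Date().addingTimeInterval(cooldown))
            }
            throw error
        }
    }
}
```

Chained together, a call site first awaits `limiter.acquire()`, then wraps the request in `breaker.run { ... }`: the limiter paces healthy traffic, the breaker cuts off failing traffic.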

🧪 9. Testing Rate Limiting

Test scenarios

  • Rapid refresh loops
  • Large scroll lists triggering network calls
  • Background sync bursts
  • Retry storms

Verify that

  • Request rate remains stable
  • The queue processes correctly
  • No backend overload occurs
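One concrete check, assuming the `RateLimiter` from section 3: ten acquisitions through a 5-per-second limiter must take at least one full refill window. The function name is mine, not a framework API:

```swift
import Foundation

func testRateLimiterPacing() async {
    let limiter = RateLimiter(maxTokens: 5, refillInterval: 1)
    let start = Date()

    for _ in 0..<10 {
        await limiter.acquire()
    }

    let elapsed = Date().timeIntervalSince(start)
    // 10 requests at 5 per window need at least one refill to pass.
    assert(elapsed >= 1.0, "limiter allowed a burst past its window")
}
```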

⚠️ 10. Common Anti‑Patterns

Avoid:

  • Unlimited parallel requests
  • Immediate retries after failure
  • Per‑view network calls without coordination
  • Ignoring server rate limits
  • Not deduplicating identical requests

These cause request storms, API bans, battery drain, and backend instability.
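Deduplication usually means sharing one in-flight `Task` per key, so ten views asking for the same feed trigger one request. A sketch (the key type and loader closure are up to your app):

```swift
actor RequestDeduplicator<Value: Sendable> {
    private var inFlight: [String: Task<Value, Error>] = [:]

    func value(for key: String,
               load: @escaping @Sendable () async throws -> Value) async throws -> Value {
        if let existing = inFlight[key] {
            // A request for this key is already running; share its result.
            return try await existing.value
        }
        let task = Task { try await load() }
        inFlight[key] = task
        defer { inFlight[key] = nil }
        return try await task.value
    }
}
```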

🧠 Mental Model

User Action
 → Request Queue
   → Rate Limiter
     → Network Request
       → API

Not: “Fire every request immediately.”

🚀 Final Thoughts

Rate limiting and backpressure give your app:

  • Predictable network behavior
  • Backend protection
  • Smoother performance
  • Better battery usage
  • Production‑grade resilience

This is the difference between an app that overwhelms its backend and a system that scales safely.
