Real-Time Proxy Monitoring: Build a Dashboard with Python and Grafana

Published: March 8, 2026 at 12:18 PM EDT
3 min read
Source: Dev.to

Key Metrics to Track

  • Success rate – Percentage of requests returning HTTP 200
  • Response time – Average and P95 latency per proxy
  • Bandwidth usage – Data consumed per proxy and total
  • Error distribution – Types of errors (timeout, 403, 429, CAPTCHA)
  • IP uniqueness – How many unique IPs you are actually using
  • Pool health – Percentage of active vs. failed proxies
  • Rotation frequency – How often IPs change
  • Geographic distribution – Where your exit IPs are located
  • Cost per successful request – Real cost accounting
  • Blacklist rate – How many IPs are currently blocked
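Several of these metrics reduce to simple ratios over raw counts. A minimal sketch of the arithmetic behind success rate and cost per successful request (the request counts and the price-per-GB figure below are hypothetical):

```python
# Hypothetical per-proxy tallies collected over some window
requests_sent = 1200
requests_ok = 1080            # HTTP 200 responses
bandwidth_bytes = 3_600_000_000
price_per_gb = 4.50           # what the provider charges, in USD

success_rate = requests_ok / requests_sent * 100
total_cost = bandwidth_bytes / 1e9 * price_per_gb
cost_per_success = total_cost / requests_ok

print(f"success rate: {success_rate:.1f}%")                     # 90.0%
print(f"cost per successful request: ${cost_per_success:.4f}")  # $0.0150
```

Note that cost is divided by *successful* requests only: failed requests still consume bandwidth, so a degrading pool shows up directly as rising cost per success.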

Architecture Overview

Your Application
    |
    v
Proxy Middleware (collects metrics)
    |
    v
Prometheus (stores time‑series data)
    |
    v
Grafana (visualizes dashboards)

Proxy Wrapper with Metrics

import time
from urllib.parse import urlsplit

import requests
from prometheus_client import Counter, Histogram, Gauge, start_http_server

# Expose metrics on port 8000 so Prometheus can scrape them
# (matches the target in prometheus.yml below)
start_http_server(8000)

# Define metrics
REQUEST_COUNT = Counter(
    "proxy_requests_total",
    "Total proxy requests",
    ["proxy", "status", "target_domain"]
)

RESPONSE_TIME = Histogram(
    "proxy_response_seconds",
    "Response time in seconds",
    ["proxy"],
    buckets=[0.1, 0.5, 1, 2, 5, 10, 30]
)

ACTIVE_PROXIES = Gauge(
    "proxy_pool_active",
    "Number of active proxies in pool"
)

BANDWIDTH = Counter(
    "proxy_bandwidth_bytes",
    "Bandwidth consumed in bytes",
    ["proxy", "direction"]
)

class MonitoredProxy:
    def __init__(self, proxy_url):
        self.proxy_url = proxy_url
        self.proxy_dict = {"http": proxy_url, "https": proxy_url}

    def request(self, url, **kwargs):
        start = time.time()
        domain = urlsplit(url).netloc

        # Default timeout without clobbering a caller-supplied one
        # (passing timeout both explicitly and via **kwargs would raise TypeError)
        kwargs.setdefault("timeout", 15)

        try:
            response = requests.get(url, proxies=self.proxy_dict, **kwargs)
            duration = time.time() - start

            # Record metrics
            REQUEST_COUNT.labels(
                proxy=self.proxy_url,
                status=str(response.status_code),
                target_domain=domain
            ).inc()

            RESPONSE_TIME.labels(proxy=self.proxy_url).observe(duration)

            BANDWIDTH.labels(
                proxy=self.proxy_url, direction="response"
            ).inc(len(response.content))

            return response

        except Exception:
            # Count failures (timeouts, connection errors, etc.)
            # under a single "error" status
            REQUEST_COUNT.labels(
                proxy=self.proxy_url,
                status="error",
                target_domain=domain
            ).inc()
            raise

Prometheus Configuration (prometheus.yml)

scrape_configs:
  - job_name: "proxy_monitor"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8000"]
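What Prometheus actually pulls from localhost:8000 is plain text in the exposition format. A small sketch of what that scrape payload looks like and how it maps to samples (the metric values below are illustrative):

```python
# Illustrative excerpt of a /metrics scrape, mirroring the metrics defined above
sample = """\
proxy_requests_total{proxy="http://p1:8080",status="200",target_domain="example.com"} 42.0
proxy_requests_total{proxy="http://p1:8080",status="error",target_domain="example.com"} 3.0
proxy_pool_active 17.0
"""

def parse_exposition(text):
    """Parse metric lines (skipping comments) into {series: value}."""
    samples = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        series, _, value = line.rpartition(" ")
        samples[series] = float(value)
    return samples

metrics = parse_exposition(sample)
print(metrics["proxy_pool_active"])  # 17.0
```

Each label combination becomes a separate time series, which is why high-cardinality labels (such as full URLs) should be avoided; this example uses only the proxy, status, and target domain.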

Core Dashboard Panels (Grafana)

  • Success rate

    sum(rate(proxy_requests_total{status="200"}[5m])) /
    sum(rate(proxy_requests_total[5m])) * 100
  • Average response time

    rate(proxy_response_seconds_sum[5m]) /
    rate(proxy_response_seconds_count[5m])
  • Error distribution

    sum by (status) (rate(proxy_requests_total{status!="200"}[5m]))
  • Bandwidth per hour

    sum(rate(proxy_bandwidth_bytes[1h])) * 3600
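The success-rate panel divides two `rate()` expressions. A quick way to sanity-check that arithmetic outside Grafana, using two hypothetical counter snapshots taken five minutes apart:

```python
WINDOW = 300  # 5 minutes, in seconds

# Hypothetical cumulative counter values at the start and end of the window
ok_before, ok_after = 10_000, 10_450      # proxy_requests_total{status="200"}
all_before, all_after = 11_000, 11_500    # proxy_requests_total (all statuses)

# rate() is roughly the counter increase divided by the window length
rate_ok = (ok_after - ok_before) / WINDOW
rate_all = (all_after - all_before) / WINDOW

success_rate = rate_ok / rate_all * 100
print(f"{success_rate:.1f}%")  # 90.0%
```

Because both sides are rates over the same window, the per-second factor cancels out; the panel is effectively "successful increase over total increase".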

Alert Rules (alert_rules.yml)

groups:
  - name: proxy_alerts
    rules:
      - alert: LowSuccessRate
        expr: |
          sum(rate(proxy_requests_total{status="200"}[5m])) /
          sum(rate(proxy_requests_total[5m])) < 0.8
        for: 5m
        annotations:
          summary: Proxy success rate below 80%
      - alert: HighLatency
        expr: |
          rate(proxy_response_seconds_sum[5m]) /
          rate(proxy_response_seconds_count[5m]) > 5
        for: 5m
        annotations:
          summary: Average proxy latency above 5 seconds

Lightweight CSV Logger (Alternative)

If Prometheus and Grafana are overkill, you can log to CSV:

import csv
from datetime import datetime

def log_request(proxy, url, status, latency, bytes_received):
    with open("proxy_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            datetime.now().isoformat(),
            proxy,
            url,
            status,
            round(latency, 3),
            bytes_received
        ])

Later, analyze the CSV with pandas (or any data‑analysis tool) to identify trends and problematic proxies.
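pandas works well for this, but even the standard library is enough for per-proxy summaries. A dependency-free sketch (the log rows below are illustrative; the column order matches log_request above):

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Illustrative log contents; columns match log_request():
# timestamp, proxy, url, status, latency, bytes_received
log_data = io.StringIO(
    "2026-03-08T12:00:00,http://p1:8080,https://example.com,200,0.412,20480\n"
    "2026-03-08T12:00:05,http://p1:8080,https://example.com,error,15.003,0\n"
    "2026-03-08T12:00:07,http://p2:8080,https://example.com,200,0.350,18432\n"
)

stats = defaultdict(lambda: {"total": 0, "ok": 0, "latencies": []})
for ts, proxy, url, status, latency, nbytes in csv.reader(log_data):
    s = stats[proxy]
    s["total"] += 1
    s["ok"] += status == "200"
    s["latencies"].append(float(latency))

for proxy, s in sorted(stats.items()):
    rate = s["ok"] / s["total"] * 100
    print(f"{proxy}: {rate:.0f}% success, avg {mean(s['latencies']):.2f}s")
```

To run it against the real file, replace the StringIO with `open("proxy_log.csv", newline="")`. Sorting the result by success rate quickly surfaces the proxies worth dropping from the pool.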

Further Reading

For more proxy monitoring setups and infrastructure guides, visit DataResearchTools.
