I Built a Self-Hosted LLM Observability Tool for AI Applications (Logmera)

Published: March 4, 2026 at 09:12 PM EST
2 min read
Source: Dev.to

The Problem: Lost Visibility in AI Applications

When building AI applications, you quickly lose visibility into what the system is doing. Common questions arise:

  • What prompts were sent to the model?
  • What responses came back?
  • How long did the request take?
  • Which model handled the request?
  • Why did a request fail?

Developers often start by logging to the console, but this becomes messy and unmanageable in production.

Introducing Logmera

Logmera is a self‑hosted observability tool for AI/LLM applications. Instead of printing logs to the console, it stores:

  • prompts
  • responses
  • model name
  • latency
  • request status

in a PostgreSQL database and presents them in a simple web dashboard.

Why Self‑Hosted?

Many LLM observability tools send data to external cloud services, which can raise concerns about:

  • privacy
  • compliance
  • data ownership

Logmera runs entirely on your own infrastructure, keeping all logs inside your PostgreSQL database.

Architecture

Your AI Application
        ↓
Logmera Python SDK
        ↓
Logmera Server (FastAPI)
        ↓
PostgreSQL Database
        ↓
Dashboard

Quick Start (≈2 minutes)

Install the SDK

pip install logmera

Run the Server

Logmera requires a PostgreSQL database. Start the server with the connection URL:

logmera --db-url "postgresql://username:password@localhost:5432/database"

The server will then be available locally (the REST API example below assumes http://127.0.0.1:8000).

Log a Request from Python

import logmera

logmera.log(
    project_id="chatbot",
    prompt="Hello",
    response="Hi there",
    model="gpt-4o",
    latency_ms=120,
    status="success"
)

After executing the code, the request appears in the dashboard.
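In a real application you would measure the latency yourself around the model call. Here is a minimal sketch of that pattern; the `timed_call` helper and the stand-in `generate` function are illustrative, not part of Logmera:

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    latency_ms = int((time.perf_counter() - start) * 1000)
    return result, latency_ms

def generate(prompt):
    # Stand-in for a real LLM call (e.g. a request through an OpenAI client).
    return f"Echo: {prompt}"

response, latency_ms = timed_call(generate, "Hello")

# With the SDK installed, the measured record would then be forwarded:
# logmera.log(project_id="chatbot", prompt="Hello", response=response,
#             model="gpt-4o", latency_ms=latency_ms, status="success")
```

Wrapping the call this way keeps the latency measurement in one place, so every model invocation gets logged consistently.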

Dashboard Features

  • Browse logs
  • Search prompts
  • Filter by project or model
  • Track latency
  • Inspect full responses

These capabilities make debugging AI systems much easier.

REST API

Logs can be sent from any language via the exposed REST endpoint.

curl -X POST http://127.0.0.1:8000/logs \
  -H "Content-Type: application/json" \
  -d '{
        "project_id":"demo",
        "prompt":"Hello",
        "response":"Hi",
        "model":"gpt-4o",
        "latency_ms":95,
        "status":"success"
      }'
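Because the endpoint is plain HTTP with a JSON body, the same request can be built without the SDK. A sketch using only Python's standard library, mirroring the curl payload above (it assumes the local server address from the Quick Start; the request is only constructed here, and the commented lines would actually send it):

```python
import json
import urllib.request

# Same payload as the curl example above.
payload = {
    "project_id": "demo",
    "prompt": "Hello",
    "response": "Hi",
    "model": "gpt-4o",
    "latency_ms": 95,
    "status": "success",
}

req = urllib.request.Request(
    "http://127.0.0.1:8000/logs",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With a Logmera server running locally, this would send the log:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
```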

Typical Use Cases

  • AI SaaS applications
  • Chatbots
  • Retrieval‑augmented generation (RAG) systems
  • AI agents
  • Automation tools powered by LLMs

Logmera provides simple, real‑time visibility into what your AI system is doing.

Resources

  • PyPI:
  • GitHub:

If you’re building AI applications, feel free to try Logmera and share your feedback.
