I Built QualityHub: AI-Powered Quality Intelligence for Your Releases

Published: February 1, 2026, 06:10 AM EST
5 min read
Source: Dev.to

🎯 The Problem

As a developer at Renault, I faced this question every day:

“Can we ship this release to production?”

We had test results, coverage metrics, SonarQube reports… but no single source of truth to answer this simple question.

So I built QualityHub – an AI‑powered platform that analyzes your quality metrics and gives you instant go/no‑go decisions.

🚀 What is QualityHub?

QualityHub is an open‑source quality intelligence platform that:

  • 📊 Aggregates test results from any framework (Jest, JUnit, JaCoCo…)
  • 🤖 Analyzes quality metrics with AI
  • ✅ Decides if you can ship to production
  • 📈 Tracks trends over time in a beautiful dashboard

The Stack

Component     Technology
----------    -----------------------------------------------
Backend       TypeScript, Express, PostgreSQL, Redis
Frontend      Next.js 14, Tailwind CSS
CLI           TypeScript with parsers for Jest, JaCoCo, JUnit
Deployment    Docker Compose (self‑hostable)
License       MIT

💡 How It Works

1. Universal Format – qa-result.json

Instead of forcing you to use specific tools, QualityHub uses an open standard format:

{
  "version": "1.0.0",
  "project": {
    "name": "my-app",
    "version": "2.3.1",
    "commit": "a3f4d2c",
    "branch": "main",
    "timestamp": "2026-01-31T14:30:00Z"
  },
  "quality": {
    "tests": {
      "total": 1247,
      "passed": 1245,
      "failed": 2,
      "skipped": 0,
      "duration_ms": 45230,
      "flaky_tests": ["UserAuthTest.testTimeout"]
    },
    "coverage": {
      "lines": 87.3,
      "branches": 82.1,
      "functions": 91.2
    }
  }
}

This format works with any test framework.
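For type-safe tooling, the example above can also be described as a TypeScript interface. This is an illustrative sketch only: the interface names and the validation helper are assumptions, not the project's published types.

```typescript
// Hypothetical TypeScript types mirroring the qa-result.json example above.
interface TestSummary {
  total: number;
  passed: number;
  failed: number;
  skipped: number;
  duration_ms: number;
  flaky_tests: string[];
}

interface CoverageSummary {
  lines: number; // percentages, e.g. 87.3
  branches: number;
  functions: number;
}

interface QAResult {
  version: string;
  project: {
    name: string;
    version: string;
    commit: string;
    branch: string;
    timestamp: string; // ISO 8601
  };
  quality: {
    tests: TestSummary;
    coverage: CoverageSummary;
  };
}

// Minimal sanity check before pushing a result (illustrative, not exhaustive).
function isValidQAResult(value: unknown): value is QAResult {
  const v = value as QAResult;
  return (
    typeof v === "object" && v !== null &&
    typeof v.version === "string" &&
    typeof v.project?.name === "string" &&
    typeof v.quality?.tests?.total === "number" &&
    typeof v.quality?.coverage?.lines === "number"
  );
}
```

A type guard like this lets the CLI reject malformed payloads before they ever reach the API.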

2. CLI Parsers

The CLI automatically converts your test results:

# Jest (JavaScript/TypeScript)
qualityhub parse jest ./coverage

# JaCoCo (Java)
qualityhub parse jacoco ./target/site/jacoco/jacoco.xml

# JUnit (Java/Kotlin/Python)
qualityhub parse junit ./build/test-results/test

3. Risk Analysis Engine

The backend analyzes your results and calculates a Risk Score (0‑100).

Risk factors analyzed

  • Test pass rate
  • Code coverage (lines, branches, functions)
  • Flaky tests
  • Coverage trends
  • Code‑quality metrics (if available)

Sample output

{
  "risk_score": 85,
  "status": "SAFE",
  "decision": "PROCEED",
  "reasoning": "Test pass rate: 99.8%. Coverage: 87.3%. No critical issues.",
  "recommendations": []
}
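The mapping from score to status and decision presumably uses fixed bands. The post only shows that 85 yields SAFE/PROCEED, so the cutoffs below are assumptions for illustration, not QualityHub's actual thresholds:

```typescript
// Hypothetical score bands: the post shows 85 → SAFE/PROCEED, but the real
// cutoffs are not documented, so these values are illustrative.
type Decision = {
  status: "SAFE" | "WARNING" | "RISKY";
  decision: "PROCEED" | "REVIEW" | "BLOCK";
};

function decide(riskScore: number): Decision {
  if (riskScore >= 80) return { status: "SAFE", decision: "PROCEED" };
  if (riskScore >= 50) return { status: "WARNING", decision: "REVIEW" };
  return { status: "RISKY", decision: "BLOCK" };
}
```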

4. Beautiful Dashboard

QualityHub dashboard

Track metrics over time, see trends, and make informed decisions.

🔧 Quick Start

Self‑Hosted (5 minutes)

# Clone repo
git clone https://github.com/ybentlili/qualityhub.git
cd qualityhub

# Start everything with Docker
docker-compose up -d

# ✅ Backend: http://localhost:8080
# ✅ Frontend: http://localhost:3000

Use the CLI

# Install
npm install -g qualityhub-cli

# Initialize
qualityhub init

# Parse your test results
qualityhub parse jest ./coverage

# Push to QualityHub
qualityhub push qa-result.json

Done! Your metrics appear in the dashboard instantly.

🎨 Why I Built This

The Pain Points

  • Fragmented tools – Jest for tests, JaCoCo for coverage, SonarQube for quality… each with its own UI and format.
  • No single answer – “Can we ship?” required checking several tools and making a gut decision.
  • No history – Hard to track quality trends over time.
  • Manual process – No automation, no CI/CD integration.

The Solution

QualityHub aggregates everything into one dashboard and automates the decision for you (rule‑based scoring today, AI‑powered analysis planned for v1.1).

πŸ—οΈ Architecture

┌─────────────┐
│     CLI     │ ← Parse test results
└──────┬──────┘
       │ POST /api/v1/results
       ↓
┌─────────────────────────────┐
│     Backend (API)           │
│  • Express + TypeScript     │
│  • PostgreSQL + Redis       │
│  • Risk Analysis Engine     │
└──────────────┬──────────────┘
               │
               ↓
┌─────────────────────────────┐
│   Frontend (Dashboard)      │
│  • Next.js 14               │
│  • Real‑time metrics        │
└─────────────────────────────┘

📊 Technical Deep Dive

1. The Parser Architecture

Each parser extends a base class:

export abstract class BaseParser {
  abstract parse(filePath: string): Promise<QAResult>; // QAResult: the qa-result.json shape

  protected buildBaseResult(adapterName: string) {
    return {
      version: '1.0.0',
      project: {
        name: this.projectInfo.name,
        commit: process.env.GIT_COMMIT || 'unknown',
        // Auto‑detect CI/CD environment
        timestamp: new Date().toISOString(),
      },
      metadata: {
        ci_provider: this.detectCIProvider(),
        adapters: [adapterName],
      },
    };
  }
}

This makes adding new parsers trivial. Want pytest support? Extend BaseParser and implement parse().

2. Risk Scoring Algorithm (MVP)

The current version uses rule‑based scoring:

let score = 100;

// Test failures
if (tests.failed > 0) {
  score -= tests.failed * 5;
}

// Coverage thresholds
if (coverage.lines < 80) {
  score -= (80 - coverage.lines) * 0.5;
}
if (coverage.branches < 70) {
  score -= (70 - coverage.branches) * 0.5;
}
if (coverage.functions < 75) {
  score -= (75 - coverage.functions) * 0.5;
}

// Flaky tests
if (flakyTests.length > 0) {
  score -= flakyTests.length * 3;
}

// Ensure 0‑100 range
score = Math.max(0, Math.min(100, score));
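
Wrapped as a function (the signature is illustrative; the project's real API may differ), the same rules can be exercised directly. Applying them to the sample run from the qa-result.json earlier (2 failures, coverage above every threshold, one flaky test) deducts 10 + 3 points:

```typescript
// Same rule-based scoring as above, wrapped for reuse.
interface Metrics {
  tests: { failed: number };
  coverage: { lines: number; branches: number; functions: number };
  flakyTests: string[];
}

function riskScore({ tests, coverage, flakyTests }: Metrics): number {
  let score = 100;
  score -= tests.failed * 5; // test failures
  if (coverage.lines < 80) score -= (80 - coverage.lines) * 0.5;
  if (coverage.branches < 70) score -= (70 - coverage.branches) * 0.5;
  if (coverage.functions < 75) score -= (75 - coverage.functions) * 0.5;
  score -= flakyTests.length * 3; // flaky tests
  return Math.max(0, Math.min(100, score)); // clamp to 0-100
}

// Example: the sample run from the qa-result.json above.
const score = riskScore({
  tests: { failed: 2 },
  coverage: { lines: 87.3, branches: 82.1, functions: 91.2 },
  flakyTests: ["UserAuthTest.testTimeout"],
});
// score === 87
```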

Future: Replace the rule‑based scoring with an AI‑powered model that learns from historical releases, plus Claude API analysis for contextual insights.

3. Database Schema

Simple and efficient:

CREATE TABLE projects (
    id UUID PRIMARY KEY,
    name VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE qa_results (
    id UUID PRIMARY KEY,
    project_id UUID REFERENCES projects(id),
    version VARCHAR(50),
    commit VARCHAR(255),
    branch VARCHAR(255),
    timestamp TIMESTAMP,
    metrics JSONB,          -- Flexible JSON storage
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE risk_analyses (
    id UUID PRIMARY KEY,
    qa_result_id UUID REFERENCES qa_results(id),
    risk_score INTEGER,
    status VARCHAR(50),
    reasoning TEXT,
    risks JSONB,
    recommendations JSONB,
    decision VARCHAR(50)
);

JSONB allows flexible metric storage without schema migrations.
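Trend detection over stored results (planned for v1.1) then reduces to pulling successive values out of the metrics JSONB column (e.g. line coverage per release, oldest first) and comparing them. A hypothetical sketch of that computation, with an assumed ±1‑point noise band:

```typescript
// Hypothetical trend check over successive line-coverage percentages,
// oldest first. The ±1-point "stable" band is an assumption.
function coverageTrend(
  lineCoverage: number[]
): "improving" | "declining" | "stable" {
  if (lineCoverage.length < 2) return "stable";
  const delta = lineCoverage[lineCoverage.length - 1] - lineCoverage[0];
  if (delta > 1) return "improving";
  if (delta < -1) return "declining";
  return "stable";
}
```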

🚀 What’s Next?

v1.1 (Planned)

  • 🤖 AI‑Powered Analysis with Claude API
  • 📊 Trend Detection (coverage dropping over time)
  • 🔔 Slack/Email Notifications
  • 🔌 GitHub App (comments on PRs)

v1.2 (Future)

  • 📈 Advanced Analytics (benchmarking, predictions)
  • 🔐 SSO & RBAC for enterprise
  • 🌍 Multi‑language support
  • 🎨 Custom dashboards

💡 Lessons Learned

  1. Open Standards Win
    Making qa-result.json an open standard was key. Now anyone can build parsers or integrations.

  2. Developer Experience Matters
    The CLI must be dead simple:

    qualityhub parse jest ./coverage  # Just works

    No config files, no setup required.

  3. Self‑Hosting is a Feature
    Many companies can’t send their metrics to external SaaS. Docker Compose makes self‑hosting trivial.

🤝 Contributing

QualityHub is 100% open‑source (MIT License).

Want to contribute?

  • 🧪 Add parsers (pytest, XCTest, Rust…)
  • 🎨 Improve the dashboard
  • 🐛 Fix bugs
  • 📚 Write docs

Check out the Contributing Guide.

  • GitHub:
  • CLI:
  • npm:

🎯 Try It Now

# Self‑host in 5 minutes
git clone https://github.com/ybentlili/qualityhub.git
cd qualityhub
docker-compose up -d

# Or just the CLI
npm install -g qualityhub-cli
qualityhub parse jest ./coverage

💬 What do you think?

Would you use this? What features would you like to see?

Drop a ⭐ on GitHub if you find this useful!

Built with ❤️ in TypeScript
