Why my AI crash reconstruction MVP isn't ready for production (and why I'm rebuilding it)

Published: March 8, 2026, 11:24 AM EDT
3 min read
Source: Dev.to

Demo Phase and Initial Prototype

I built Incident Lens AI, a forensic video analysis suite for crash reconstruction, as a frontend‑first proof of concept using React, Vite, and the Gemini 3 Pro SDK. The browser streams video frames and audio directly to the LLM, which then reasons about the crash, generates liability timelines, cites traffic laws, and outputs structured JSON for interactive dashboards. This approach let me iterate quickly on the UI and validate the multimodal concept without any backend infrastructure.
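To make "structured JSON for interactive dashboards" concrete, here is a minimal sketch of how the frontend might sanity-check a model response before rendering it. The field names (`timeline`, `liability`, `cited_laws`) are hypothetical illustrations, not the actual Incident Lens AI schema; shown in Python for consistency with the backend discussed below.

```python
# Illustrative check of a structured-JSON contract for the dashboard.
# Field names are hypothetical, not the real Incident Lens AI schema.

REQUIRED_FIELDS = {"timeline", "liability", "cited_laws"}

def validate_report(report: dict) -> list[str]:
    """Return the required fields missing from a model response, sorted."""
    return sorted(REQUIRED_FIELDS - report.keys())

missing = validate_report({"timeline": [], "liability": {}})
# missing == ["cited_laws"]
```

A check like this is also where LLM unreliability surfaces early: a response that fails schema validation is rejected before it ever reaches the UI.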

Why the Original Architecture Won’t Scale

Security Concerns

Sending raw dash‑cam or CCTV footage to a public LLM API from the client is a non‑starter for any enterprise dealing with sensitive data and personally identifiable information. No insurance pilot program would approve such a data‑exposure risk.

Hallucination and Accuracy Issues

LLMs are not physics engines. My initial documentation claimed the AI could calculate vehicle speed via photogrammetry and motion mechanics, but without precise camera calibration the model is merely guessing. In a courtroom, an “AI‑estimated” speed would be easily challenged and dismissed.

Moving to a Hybrid Architecture

Deterministic Backend Processing

I’m shifting the heavy lifting to a secure Python backend that uses deterministic computer‑vision tools (e.g., OpenCV) to extract hard data: pixel velocities, exact collision coordinates, and other measurable quantities. These numbers feed established physics formulas to compute actual speed and force.
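As a sketch of that deterministic step: once the computer-vision stage (e.g., OpenCV object tracking) has produced per-frame pixel displacements for a vehicle, converting them to real-world speed is plain arithmetic (the same `v = d/t` the original docs attributed to the LLM). The calibration values below are hypothetical placeholders; in production they would come from camera calibration, which is exactly the part an LLM cannot guess.

```python
# Deterministic speed calculation from tracked pixel displacements.
# pixels_per_meter and fps are hypothetical calibration inputs.

def speed_kmh(pixel_displacements, pixels_per_meter, fps):
    """Average speed in km/h from per-frame pixel displacements.

    pixel_displacements: per-frame displacement of the tracked vehicle, in pixels
    pixels_per_meter:    calibration factor for this camera and scene
    fps:                 video frame rate
    """
    if not pixel_displacements:
        raise ValueError("no displacement samples")
    meters = sum(pixel_displacements) / pixels_per_meter  # total distance (m)
    seconds = len(pixel_displacements) / fps              # elapsed time (s)
    return (meters / seconds) * 3.6                       # m/s -> km/h

# Example: 30 frames at 30 fps, 12 px/frame, 8 px per meter
# -> 45 m over 1 s -> 45 m/s -> 162 km/h
print(speed_kmh([12.0] * 30, 8.0, 30.0))
```

The point is that every number in the final report traces back to a formula like this, not to a model's intuition.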

Role of Gemini LLM

Once the deterministic data is ready, Gemini re‑enters the pipeline to perform what it does best: cross‑reference case law, synthesize a timeline, and generate a human‑readable dossier. This separation ensures the final report is both mathematically sound and legally persuasive.
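One way to enforce that separation is to hand the model only pre-computed findings and forbid it from estimating anything physical. A minimal sketch, assuming a hypothetical `build_dossier_prompt` helper and illustrative field names:

```python
import json

def build_dossier_prompt(findings: dict) -> str:
    """Wrap deterministic CV/physics output in a prompt for the LLM.

    The LLM never computes the numbers; it only narrates them and
    cross-references statutes. The `findings` keys are illustrative.
    """
    return (
        "You are drafting a forensic crash dossier. Use ONLY the "
        "measurements below; do not estimate any physical quantity "
        "yourself. Cite applicable statutes for each liability claim.\n\n"
        f"Measured data:\n{json.dumps(findings, indent=2)}"
    )

prompt = build_dossier_prompt({
    "vehicle_a_speed_kmh": 62.4,    # from the deterministic physics step
    "impact_point_px": [812, 440],  # from collision detection
    "impact_time_s": 14.2,
})
```

The prompt text would then go to Gemini via the official SDK; the key design choice is that the measured values are serialized verbatim, so any number in the dossier can be diffed against the backend's output.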

Next Steps and Call for Collaboration

The current repository remains available as a proof of concept to illustrate the vision of multimodal AI in forensics. The real engineering work—making the system secure, deterministic, and legally defensible—starts now. If you’re navigating the jump from AI prototype to production in a zero‑trust industry, I’d love to hear how you’re handling it.

Frontend Prototype

You can explore the frontend prototype here:

Production Vision for Incident Lens AI

Incident Lens AI is intended to become a production‑grade platform for insurance carriers, legal defense teams, and fleet safety managers. It leverages the multimodal capabilities of Google Gemini 3 Pro to turn unstructured video evidence (dash‑cam, CCTV, body‑cam) into legally admissible forensic reconstructions.

Key Features

  • Autonomous Reconstruction

    • Physics Engine: Calculates vehicle speed (`v = d/t`) using photogrammetry and motion‑blur analysis.
    • Signal Inference: Determines occluded traffic‑light states by analyzing cross‑traffic flow and pedestrian behavior.
    • Debris Field Analysis: Reconstructs impact vectors from glass‑shard trajectories and fluid spray patterns.
  • Legal Admissibility

    • Search Grounding: Uses Gemini to cite specific statutes and case law, grounding every claim in verifiable legal references.

If you’re interested in contributing or learning more, feel free to reach out or submit a pull request on the GitHub repository.
