How I ran 6 LLMs in parallel without paying a cent in API fees (Electron + DOM Injection)
Source: Dev.to
Introduction
Let’s be honest: trusting a single LLM with a complex problem is basically a coin toss right now. I got incredibly tired of my daily workflow:
- Ask ChatGPT a question → get a confident answer.
- Paste the same question into Claude to fact‑check → get a contradictory answer.
- Ask Perplexity to break the tie.
I was acting as the manual API router, and it was exhausting.
I wanted a “Peer Review” system where the AIs cross‑checked each other, but two massive roadblocks appeared:
- Cost – Running a 6‑model cross‑validation loop via official APIs (GPT‑4o, Claude 3.5, DeepSeek, etc.) for every query gets expensive fast.
- Latency (The Waterfall) – Chaining these sequentially means waiting minutes for an answer.
So I built AI Council – a local desktop app that bypasses APIs entirely and runs six AIs in parallel using their free web UIs.
The Architecture: Electron & 6 BrowserViews
The app embeds each provider's web UI in its own Electron BrowserView. The entire “API” is just DOM injection: parsing the HTML, finding the text area, simulating keystrokes, and clicking the Send button.
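Concretely, that injection boils down to building a snippet of page-side JavaScript and running it inside a BrowserView with `webContents.executeJavaScript`. Here's a minimal sketch — the `buildInjectionScript` helper and the CSS selectors are my own illustration, not the repo's actual code:

```javascript
// Build a page-side script that types a prompt and clicks Send.
// Selectors vary per provider and break when the UI changes, which is
// the fundamental fragility of this approach.
function buildInjectionScript(prompt, selectors) {
  // JSON.stringify safely escapes the prompt for embedding in the script
  const payload = JSON.stringify(prompt);
  return `
    (() => {
      const input = document.querySelector(${JSON.stringify(selectors.input)});
      const send  = document.querySelector(${JSON.stringify(selectors.send)});
      if (!input || !send) return false;
      input.value = ${payload};
      // Fire an input event so the page's framework notices the change
      input.dispatchEvent(new Event('input', { bubbles: true }));
      send.click();
      return true;
    })();
  `;
}
```

In the Electron main process this would run as something like `await view.webContents.executeJavaScript(buildInjectionScript(prompt, { input: 'textarea', send: 'button[type="submit"]' }))`, with one selector pair maintained per provider.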
Bypassing the Waterfall: Fan‑Out / Fan‑In
The Primary Draft
You ask a question. The Primary AI (e.g., ChatGPT) generates a first draft.
Fan‑Out (Parallel Review)
The app takes that draft and broadcasts it to the other five AI panels at the exact same time, hitting the Submit button in all five reviewer BrowserViews simultaneously.
Fan‑In (Compilation)
The app monitors the DOM of all windows. Once they all stop generating, it extracts the text, compiles the feedback, and feeds it back to the Primary AI.
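The article doesn't show the compilation step itself, but one plausible shape for it is simply folding the successful `Promise.allSettled` results into a single follow-up prompt for the Primary AI (the function body below is my guess, not the repo's code):

```javascript
// Fold settled review promises into one peer-review prompt.
// Rejected entries (a model timed out or the DOM scrape failed) are
// dropped rather than failing the whole round.
function compileReviews(settled) {
  const sections = settled
    .filter(r => r.status === 'fulfilled' && r.value)
    .map((r, i) => `### Reviewer ${i + 1}\n${r.value}`);
  return [
    'Here is peer feedback on your draft from the other models.',
    'Revise your answer, keeping the points the reviewers agree on:',
    '',
    ...sections,
  ].join('\n');
}
```

Dropping rejections is the key design choice here: with six flaky web UIs in the loop, one stuck panel shouldn't block the other five reviews from reaching the Primary AI.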
The Final Output
The Primary AI rewrites the answer based on the peer review.
The orchestration is essentially a giant Promise.allSettled:
```javascript
// Conceptual fan-out logic
async function runParallelReview(draft) {
  const reviewers = [
    claudeView,
    geminiView,
    deepseekView,
    grokView,
    perplexityView,
  ];

  // Fire them all at once
  const reviewPromises = reviewers.map(view =>
    injectPromptAndWaitForCompletion(view, `Review this draft: ${draft}`)
  );

  // Wait for all models to finish physically typing
  const reviews = await Promise.allSettled(reviewPromises);
  return compileReviews(reviews);
}
```

The Real Headache: Managing Web UI States
How do you know when an AI is “done” typing when you don’t have a clean API response? The solution involves watching DOM mutations and detecting when the typing indicator disappears.
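A closely related fallback, for pages where there's no reliable typing indicator to watch, is text stabilization: poll the response container and declare completion once the text hasn't changed for several consecutive polls. Here's a sketch of that idea — the tracker helper, the selector, and the timings are all assumptions, not the repo's implementation:

```javascript
// Returns an update function that reports "done" once the observed text
// has been identical for `requiredStablePolls` consecutive calls.
function makeStabilityTracker(requiredStablePolls = 3) {
  let last = null;
  let stableCount = 0;
  return function update(currentText) {
    if (currentText === last) {
      stableCount += 1;
    } else {
      stableCount = 0;
      last = currentText;
    }
    return stableCount >= requiredStablePolls;
  };
}
```

Wired into Electron, you'd poll each panel on an interval — something like `const text = await view.webContents.executeJavaScript("document.querySelector('.response')?.innerText ?? ''")` — and resolve the panel's promise once `isDone(text)` flips to true. The trade-off versus a MutationObserver is simplicity at the cost of a few seconds of added latency per panel.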
The Result
If you’re curious about the DOM injection scripts or the Electron multi‑view architecture, the entire project is open‑source.
Check out the repo here → https://github.com/MinkyuTheBuilder/ai-council
Feel free to fork it, star it, or open an issue with feedback on the DOM‑scraping logic.