Why the search for truth can never be worth more than the search to question it.
Source: Dev.to
Introduction
How I built an open‑source deep research engine that costs a fraction of what OpenAI, Gemini, and others charge, while delivering significantly better results.
Greetings, LessWrong community, developers, and anyone interested. This is my first post here, and I hope to live up to the community’s principles.
The Problem
We live in a fast‑paced society where the value of knowledge and truth scales exponentially with technological progress. In the age of AI‑generated content and “fake culture,” autonomously generated, fact‑checked knowledge is becoming increasingly important.
At the same time, we are all exposed to the pressure of “effectiveness” and “productivity.” Who still has the time to conduct real, in‑depth research, validate information, or establish facts? Virtually no one.
That’s why people turn to deep‑research engines. Google, OpenAI, Perplexity, and others promise quick, “easy” ways to conduct deeper searches. But do they meet the demands of what we really need? I don’t think so, for several reasons:
Issues with Existing Tools
- Incorrect or hallucinated citations and sources – Tools such as Perplexity list sources that sound plausible, but many either do not exist or contain inaccurate content.
- False security and cost throttling – Providers promise high‑quality searches, yet behind the scenes they cut sources or use inferior models. Only expensive subscriptions unlock the full power.
- Functional hallucinations – OpenAI’s Deep Research, for example, repeatedly generates false facts about its capabilities (e.g., claiming it can generate things or use tools it cannot).
- Gatekeeping of truth – Subscription constraints and content censorship limit access to information. A truly open search should look different.
- Lack of transparency – Methodology, source utilization, and processing are hidden behind a black box.
In short, today’s deep‑research tools fill a gap but fall short of what users truly want in a research tool.
Lutum Veritas Research Project

I am Martin, a 37‑year‑old self‑taught IT career‑changer from Germany. I wanted my own software, and I wanted to publish it as open source because truth should not be hidden behind paywalls. From the start, the core ideas of the software were clear:
- No subscriptions, no paywalls – Bring your own key and pay only for usage.
- Robust source scraping and search – Fetch not only AI‑generated SEO dossiers but also the “DIRT” and the “ESSENCE” from the internet. Hence Lutum Veritas – extracting truth from the dirt.
- No censorship – Search for what you want and receive answers without permission or compliance restrictions.
- Open source and deterministic – Transparency by design.
- Deeper, more detailed searches – Results that go far beyond what the market currently offers.
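To make the BYOK cost model concrete, here is a minimal sketch of how usage‑based pricing works when you bring your own key: each model call is billed at the provider's per‑token rates, with no subscription markup on top. The function names and the per‑1k‑token prices below are illustrative placeholders, not Lutum Veritas's actual code or any provider's actual rates.

```python
def estimate_call_cost(prompt_tokens: int, completion_tokens: int,
                       price_per_1k_prompt: float = 0.0005,
                       price_per_1k_completion: float = 0.0015) -> float:
    """Estimate the cost in USD of a single model call at the given rates."""
    return (prompt_tokens / 1000) * price_per_1k_prompt \
         + (completion_tokens / 1000) * price_per_1k_completion

def estimate_run_cost(calls: list[tuple[int, int]]) -> float:
    """A deep-research run is just the sum of its individual calls."""
    return sum(estimate_call_cost(p, c) for p, c in calls)
```

With your own key, a whole multi‑call research run costs exactly the sum of its parts; there is no flat fee to amortize, which is where the "fraction of the cost" claim comes from.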
Self‑Criticism
I am not claiming that the software is perfect or that it beats every other tool in every discipline worldwide. What I have built is a standalone BYOK (bring‑your‑own‑key) open‑source deep‑research tool that:
- Performs searches for a fraction of the cost of regular subscriptions or API deep‑research services.
- Provides significantly deeper and more detailed analysis, including an “academic deep‑research mode” that can generate reports exceeding 200,000 characters.
- Recognizes many more causal relationships than major market players, thanks to the way context transfer is implemented.
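The "context transfer" mentioned above can be pictured as an iterative loop: each research round receives a running digest of earlier findings, so a later query can connect a cause found in one source with an effect found in another. The sketch below is one plausible shape for such a loop, under my own assumptions; the function names (`search`, `summarize`, `run_research`) are hypothetical and do not reflect the project's actual API.

```python
def run_research(question: str, rounds: int, search, summarize):
    """Iterative research loop that carries a context digest between rounds.

    `search(question, context)` returns a list of findings for one round;
    `summarize(digest, findings)` folds new findings into the digest.
    Both are caller-supplied (e.g. wrappers around an LLM and a web search).
    """
    digest = ""        # accumulated context transferred between rounds
    findings = []
    for _ in range(rounds):
        # Injecting the digest lets each round build on prior results
        # instead of starting from scratch.
        results = search(question, context=digest)
        findings.extend(results)
        digest = summarize(digest, results)
    return findings, digest
```

The key design choice is that the digest, not the raw source text, crosses round boundaries: it keeps the context window small while preserving the causal threads discovered so far.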
Bugs and imperfections will exist, but development is ongoing.
Call for Testers
Further development requires testers and feedback. I invite every developer, researcher, or anyone interested to test the software, challenge it, and challenge me. Your input will help refine the tool to meet high standards and deliver on its promises.
Get Involved
GitHub: