Clean Architecture for AI Agents with Convo-Lang decoupling orchestration from reasoning
Source: Dev.to
Decoupling Orchestration from Reasoning
In this post, I’ll show how to design a clean, maintainable architecture for AI systems using Convo‑Lang.
As a concrete example, I’ll use a hallucination‑resistant AI agent that analyzes a job description, evaluates candidate fit against detailed professional experience, and generates a tailored resume only when the role is actually relevant.
In this setup, all reasoning and decision logic lives in Convo‑Lang, while Python is used strictly for orchestration — loading inputs, executing agents, and wiring the pipeline together.
The goal of the example is not the resume itself. The goal is to demonstrate how to decouple orchestration from reasoning and build an AI system that is easy to understand, extend, and maintain over time.
The full working example is available in the Convo‑Lang repository.
You can clone it, run it locally, and experiment with it by simply replacing the job description and writing your own experience profile — the sample inputs live in the `data/` folder.
What Convo‑Lang actually is
Convo‑Lang is not:
- a prompt‑template engine
- a thin wrapper around chat completions
- a “nicer way to write prompts”
Convo‑Lang is a domain‑specific language for LLM reasoning and agent workflows. It allows you to define:
- explicit agent roles
- typed input and output contracts
- deterministic logic
- schema‑enforced outputs
- multi‑agent pipelines
All of this lives in .convo files — outside of application code.
Why resumes are a good stress test
Resume generation is a hostile domain for hallucinations:
- inventing skills is unacceptable
- inventing companies or roles is unacceptable
- inventing dates is unacceptable
- decisions must be explainable
A single “smart prompt” is the worst possible approach here.
So instead of asking how to prompt, I started by asking:
How should this system be modeled?
Modeling the system as Convo‑Lang agents
The solution is built as five Convo‑Lang agents, each responsible for exactly one thing:
| Agent | Responsibility |
|---|---|
| JobDescriptionAnalyzer | Turns raw job text into structured requirements |
| CandidateProfileAnalyzer | Converts free‑form experience text into factual, structured data |
| ProfileJobMatcher | Matches experience to requirements and explicitly lists gaps |
| ResumeWriter | Generates a resume strictly from verified data |
| FitEvaluator | Decides whether applying makes sense |
Each agent:
- lives in its own `.convo` file
- has a single responsibility
- communicates only through typed contracts
This separation is not cosmetic; it is the foundation of reliability.
Typed contracts instead of “return JSON please”
In most LLM systems, structured output is a suggestion.
In Convo‑Lang, it is a contract.
Below is a real schema used by the CandidateProfileAnalyzer agent:
```
>define
ProfileData = struct(
    workExperience: array(
        struct(
            title: string
            companyName: string
            firstDate: string
            lastDate?: string
            summary: string
            experience: array(string)
        )
    )
    projects?: array(
        struct(
            title: string
            firstDate: string
            lastDate?: string
            experience: array(string)
        )
    )
)
```
This immediately changes system behavior:
- required fields must exist
- optional fields are explicit
- invented fields are invalid
- downstream agents can trust the data shape
Hallucinations don’t silently propagate—they violate the contract.
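To see what the contract buys you, the same shape can be mirrored on the consuming side as a plain Python dataclass for testing. This is a minimal illustrative sketch, not part of the Convo-Lang SDK; the class and helper names are mine:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkExperience:
    # Required fields must exist; the constructor raises TypeError otherwise.
    title: str
    companyName: str
    firstDate: str
    summary: str
    experience: list[str]
    # Optional fields are explicit, never implied.
    lastDate: Optional[str] = None

def parse_work_experience(raw: dict) -> WorkExperience:
    """Reject invented fields instead of silently accepting them."""
    allowed = set(WorkExperience.__dataclass_fields__)
    extra = set(raw) - allowed
    if extra:
        raise ValueError(f"invented fields are invalid: {extra}")
    return WorkExperience(**raw)
```

A hallucinated field like `inventedSkill` fails loudly at the boundary rather than flowing downstream.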
Validating inputs before any reasoning happens
Hallucinations often start before generation, when invalid or ambiguous input quietly enters the system.
Convo‑Lang allows agents to validate inputs explicitly, before any reasoning takes place:
```
>define
JobData = struct(
    title: string
    mustRequirements: array(string)
    niceToHaveRequirements: array(string)
    keywords: array(string)
)

>do
jobData = new(JobData job_data)
```
That single line enforces a lot:
- checks that `job_data` exists
- validates required fields
- enforces correct types
- rejects malformed input early
If the input does not match JobData, the agent does not proceed. The model never reasons over invalid data. Here, input validation is part of the agent’s contract, not an afterthought.
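The same fail-fast idea can be expressed in plain Python, which is useful for unit-testing the boundary without the runtime. This is an illustrative sketch only; in the real pipeline `new(JobData job_data)` performs the check inside the agent:

```python
def validate_job_data(job_data: dict) -> dict:
    """Fail fast: reject malformed input before any model call is made."""
    required = {
        "title": str,
        "mustRequirements": list,
        "niceToHaveRequirements": list,
        "keywords": list,
    }
    if job_data is None:
        raise ValueError("job_data is missing")
    for key, expected in required.items():
        if key not in job_data:
            raise ValueError(f"missing required field: {key}")
        if not isinstance(job_data[key], expected):
            raise TypeError(f"{key} must be {expected.__name__}")
    return job_data
```

If validation fails, nothing downstream ever sees the bad input.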
Explainable matching instead of opaque scoring
The ProfileJobMatcher agent does not produce a mysterious score. It produces:
- only relevant roles and projects
- explicit `matchReasons` for each item
- two concrete gap lists: must-have and nice-to-have
```
MatchData = struct(
    coverageProfileData: struct(
        workExperience: array(
            struct(
                title: string
                companyName: string
                firstDate: string
                lastDate?: string
                summary: string
                experience: array(string)
                matchReasons: array(string)
            )
        )
        projects?: array(...)
    )
    gaps: struct(
        mustRequirements: array(string)
        niceToHaveRequirements: array(string)
    )
)
```
Nothing is hidden. Every match and every gap is inspectable. This output becomes the single source of truth for all downstream steps.
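Because the gap lists are explicit data rather than prose, any downstream consumer can render them for review. A small hypothetical helper (the `MatchData` shape follows the schema above; the function itself is not part of the SDK):

```python
def summarize_gaps(match_data: dict) -> str:
    """Render the matcher's explicit gap lists for human review."""
    gaps = match_data["gaps"]
    lines = []
    for req in gaps["mustRequirements"]:
        lines.append(f"MISSING (must-have): {req}")
    for req in gaps["niceToHaveRequirements"]:
        lines.append(f"missing (nice-to-have): {req}")
    return "\n".join(lines) or "no gaps"
```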
Deterministic logic inside the agent (not in prose)
A key feature of Convo‑Lang is that deterministic logic lives next to reasoning. In the FitEvaluator, the final decision is not guessed; it is calculated:
```
>do
jobData = new(JobData job_data)
matchData = new(MatchData match_data)

totalConfidence = 100
jobRequirementsAmount = jobData.mustRequirements.length
requirementPos = 0

for req in jobData.mustRequirements {
    requirementPos = requirementPos + 1
    if req in matchData.gaps.mustRequirements {
        totalConfidence = totalConfidence - (100 / jobRequirementsAmount)
    }
}

# ... additional deterministic rules ...

if totalConfidence >= 80 {
    decision = "Apply"
} else {
    decision = "Do not apply"
}
```
The logic is explicit, testable, and reproducible—no heuristics hidden in natural‑language prompts.
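Since the rule is deterministic, it can be re-implemented and unit-tested in any language. Here is the same scoring rule as a Python sketch (names are mine; the thresholds and arithmetic follow the snippet above):

```python
def fit_confidence(must_requirements: list[str], must_gaps: list[str]) -> float:
    """Start at 100 and subtract an equal share for every unmet must-have."""
    total = 100.0
    share = 100.0 / len(must_requirements)
    for req in must_requirements:
        if req in must_gaps:
            total -= share
    return total

def decide(confidence: float) -> str:
    # Same threshold as the Convo-Lang agent: 80 or above means apply.
    return "Apply" if confidence >= 80 else "Do not apply"
```

Running the same inputs twice gives the same answer twice, which is exactly what a prose prompt cannot promise.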
Putting it all together
- Python orchestration loads the job description and candidate profile from files.
- It calls each Convo‑Lang agent in turn, passing validated contracts.
- Agents return typed data structures that the next agent can safely consume.
- The final `FitEvaluator` returns a clear decision (Apply / Do not apply).
- If the decision is to apply, the `ResumeWriter` builds a resume using only the verified data.
Because reasoning lives entirely in Convo‑Lang, the Python layer remains thin and focused on I/O and pipeline wiring. This separation makes the system:
- Easy to read – each `.convo` file has a single purpose.
- Easy to test – contracts can be unit-tested independently of the LLM.
- Resilient to hallucination – contracts enforce data shape and validation before any generation.
- Explainable – every step produces inspectable output.
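The "easy to test" point can be made concrete: because every stage exchanges structured data, the decision step can be exercised with hand-written fixtures and zero LLM calls. A hypothetical pytest-style sketch (the `run_decision` helper is illustrative, not repo code):

```python
def run_decision(match_data: dict, must_requirements: list[str]) -> str:
    """Deterministic decision over a MatchData-shaped fixture."""
    gaps = match_data["gaps"]["mustRequirements"]
    met = sum(1 for r in must_requirements if r not in gaps)
    confidence = 100 * met / len(must_requirements)
    return "Apply" if confidence >= 80 else "Do not apply"

def test_full_match_applies():
    match = {"gaps": {"mustRequirements": [], "niceToHaveRequirements": []}}
    assert run_decision(match, ["Python", "SQL"]) == "Apply"

def test_large_gap_skips():
    match = {"gaps": {"mustRequirements": ["Rust", "Go"], "niceToHaveRequirements": []}}
    assert run_decision(match, ["Rust", "Go", "SQL"]) == "Do not apply"
```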
Try it yourself
```shell
git clone https://github.com/convo-lang/convo-lang.git
cd convo-lang/packages/convo-lang-py/examples/02_patterns/resume_generator
python run.py  # or whatever entry point the repo provides
```
Replace the files in data/ with your own job description and experience profile, then rerun the pipeline. You’ll see how the system either produces a polished, fact‑checked resume or tells you the role isn’t a good fit—without ever hallucinating a skill, company, or date.
Code Example
```
requirementPoints = div(totalConfidence jobRequirementsAmount)
requirementGapAmount = matchData.gaps.mustRequirements.length

mainConfidence = mul(
    sub(jobRequirementsAmount requirementGapAmount)
    requirementPoints
)

decision = "apply"
if (lt(mainConfidence 70)) then (
    decision = "skip"
)
elif (lt(mainConfidence 90)) then (
    decision = "maybe apply"
)
```
This business logic is:
- readable
- reviewable
- testable
The LLM explains the decision — but it does not invent the rules.
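The tiered rule above is ordinary arithmetic, so it translates directly into Python for verification (a sketch; function names are mine, thresholds follow the snippet):

```python
def main_confidence(must_requirements: list[str], must_gaps: list[str]) -> float:
    """Points per requirement times the number of requirements actually met."""
    requirement_points = 100 / len(must_requirements)
    met = len(must_requirements) - len(must_gaps)
    return met * requirement_points

def tiered_decision(confidence: float) -> str:
    # Below 70 -> skip, below 90 -> maybe apply, otherwise -> apply.
    if confidence < 70:
        return "skip"
    if confidence < 90:
        return "maybe apply"
    return "apply"
```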
Schema‑Enforced Output with @json
Convo‑Lang does not rely on “please return JSON”.
It enforces it.
```
@json RecommendationData
>user
Help the candidate decide whether applying for this job makes sense.
```
If the output does not match RecommendationData, it is invalid.
Structured output is no longer a best‑effort promise—it is a guarantee.
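For intuition, here is the same "gate, don't hope" idea in plain Python: parse the model's text and reject anything off-contract. This is illustrative only; the field names below are assumed, and in the real system the Convo-Lang runtime enforces `RecommendationData` itself:

```python
import json

# Assumed shape for illustration -- not the actual RecommendationData schema.
RECOMMENDATION_FIELDS = {"decision": str, "reasons": list}

def accept_model_output(raw_text: str) -> dict:
    """Parse and validate model output; off-contract output is invalid."""
    data = json.loads(raw_text)  # not valid JSON -> raises immediately
    for key, expected in RECOMMENDATION_FIELDS.items():
        if not isinstance(data.get(key), expected):
            raise ValueError(f"output violates contract at field: {key}")
    return data
```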
Python as an Orchestrator, Not a Reasoning Layer
So where does Python fit into this architecture?
Python is intentionally boring. It does not:
- contain prompts
- contain business rules
- interpret free‑form model output
It only:
- loads input data
- executes agents
- passes validated JSON between them
- handles I/O
```python
job_data = convo_job_description_analyzer.complete(...)
profile_data = convo_candidate_profile_analyzer.complete(...)
match_data = convo_profile_job_matcher.complete(...)
resume_data = convo_resume_writer.complete(...)
decision = convo_fit_evaluator.complete(...)
```
All intelligence lives in .convo. Python is just the runtime. This separation is deliberate.
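A hedged sketch of what that thin orchestration layer can look like. The agents are passed in as objects exposing `.complete(...)` (mirroring the snippet above); the dict keys, argument shapes, and the `"Apply"` sentinel are assumptions for this sketch, not the repo's actual API:

```python
def run_pipeline(agents, job_text, profile_text):
    """Thin wiring: load nothing clever, decide nothing clever.
    All reasoning happens inside the Convo-Lang agents."""
    job_data = agents["analyzer"].complete(job_text)
    profile_data = agents["profiler"].complete(profile_text)
    match_data = agents["matcher"].complete(job_data, profile_data)
    decision = agents["evaluator"].complete(job_data, match_data)

    resume = None
    if decision == "Apply":  # ResumeWriter runs only for relevant roles
        resume = agents["writer"].complete(match_data)
    return {"decision": decision, "resume": resume}
```

Note that the function contains no prompts and no business rules: swap a `.convo` file and this code does not change.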
Why This Separation Matters
By keeping reasoning in Convo‑Lang and orchestration in Python:
- AI logic becomes portable
- Behavior is consistent across CLI, editor, and SDK
- Prompt changes don’t require backend redeploys
- Agent logic can be reviewed like code
The agents folder becomes the product.
The SDK becomes an implementation detail.
What This Example Actually Demonstrates
This post isn’t really about resumes. It demonstrates that Convo‑Lang lets you:
- treat LLM logic as first‑class code
- build multi‑agent systems without prompt chaos
- validate inputs and outputs explicitly
- make hallucinations visible instead of hidden
- scale reasoning without rewriting everything
That is why Convo‑Lang is worth using.
Final Takeaway
Hallucinations are rarely a model problem.
They are almost always an architecture problem.
Convo‑Lang gives you the tools to fix that at the right level.
Resources
- Convo‑Lang core:
- Convo‑Lang Python SDK:
- Resume agent example:
- Documentation: