Prompt management, RAG, and agents with HazelJS
Source: Dev.to
One starter: typed prompt templates, a live registry, FileStore persistence, RAG, supervisor agents, and AI tasks—all driven by the same prompt system
Managing LLM prompts well is hard: you want versioning, overrides without redeploys, and a single place that RAG, agents, and plain AI tasks all read from. The HazelJS Prompt Starter shows how to do exactly that. Built on @hazeljs/prompts, @hazeljs/rag, and @hazeljs/agent, it gives you:
- a PromptRegistry with typed templates,
- FileStore persistence, and
- a REST API to inspect and override any prompt at runtime.
RAG answer synthesis, the supervisor agent, worker agents, and four AI tasks (welcome, summarize, sentiment, translate) all use that same registry. In this post we walk through what’s in the starter and how to use it.
Features
| Feature | Description |
|---|---|
| PromptTemplate | Typed {variable} rendering with full TypeScript inference |
| PromptRegistry | Global prompt store — register, override, version at runtime |
| FileStore | Prompts persist to ./data/prompts.json between restarts |
| RAG integration | The RAG answer synthesis prompt is registry‑driven and overridable via REST |
| Agent integration | Supervisor system + routing prompts come from the registry |
| Worker agents | Researcher and Analyst workers use registry prompts for tool behaviour |
| AI tasks | Welcome, summarize, sentiment, translate — all backed by registry prompts |
| Live REST API | Inspect, preview, and override any prompt without restarting the server |
One server, one registry: change a prompt with PUT /api/prompts/:key, and the next RAG question, agent run, or AI task uses the new template.
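The core idea can be sketched in a few lines. This is a hypothetical minimal model, not the real @hazeljs/prompts API: a template renders `{variable}` placeholders, and a keyed registry resolves prompts at call time, so an override (which is what `PUT /api/prompts/:key` does) takes effect on the very next render.

```typescript
type Vars = Record<string, string>;

class MiniTemplate {
  constructor(public template: string) {}

  // Replace every {name} placeholder with the matching variable,
  // leaving unknown placeholders intact.
  render(vars: Vars): string {
    return this.template.replace(/\{(\w+)\}/g, (_m: string, name: string) =>
      vars[name] ?? `{${name}}`
    );
  }
}

class MiniRegistry {
  private prompts = new Map<string, MiniTemplate>();

  register(key: string, template: string): void {
    this.prompts.set(key, new MiniTemplate(template));
  }

  // A PUT /api/prompts/:key override boils down to a re-register.
  override(key: string, template: string): void {
    this.register(key, template);
  }

  render(key: string, vars: Vars): string {
    const t = this.prompts.get(key);
    if (!t) throw new Error(`Unknown prompt key: ${key}`);
    return t.render(vars);
  }
}

// Consumers always read through the registry, so the very next render
// after an override uses the new template; no redeploy needed.
const registry = new MiniRegistry();
registry.register('rag:answer', 'Context: {context}\nQuestion: {query}\nAnswer:');
registry.override('rag:answer', 'Answer in one sentence.\nContext: {context}\nQuestion: {query}\nAnswer:');
```

Because every consumer resolves the prompt by key at call time rather than capturing it at startup, there is nothing to restart when a template changes.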
Quick‑Start
git clone https://github.com/hazel-js/hazeljs-prompt-starter.git
cd hazeljs-prompt-starter
cp .env.example .env # add OPENAI_API_KEY
npm install
npm run dev
The server runs at http://localhost:3000. Try listing prompts:
curl http://localhost:3000/api/prompts
Then override the RAG answer prompt and ask a question (see examples below).
Every prompt is identified by a key (e.g. rag:answer, agent:supervisor:system, app:summarize). The REST API lets you manage them without touching code.
Prompt Registry API
| Endpoint | Description |
|---|---|
| GET /api/prompts | List every registered prompt (key, name, version, template) |
| GET /api/prompts/stores | Show configured store backends (e.g. FileStore) |
| GET /api/prompts/:key | Full details for one prompt |
| GET /api/prompts/:key/versions | List cached versions |
| POST /api/prompts/:key/preview | Render with supplied variables (see exactly what the LLM gets) |
| PUT /api/prompts/:key | Override a prompt at runtime (persisted to FileStore immediately) |
| POST /api/prompts/save | Persist entire in‑memory registry to FileStore |
| POST /api/prompts/load | Reload all prompts from FileStore |
Example: Override the RAG answer prompt
curl -X PUT http://localhost:3000/api/prompts/rag%3Aanswer \
-H "Content-Type: application/json" \
-d '{
"template": "Answer in one sentence.\nContext: {context}\nQuestion: {query}\nAnswer:",
"metadata": { "version": "2.0.0", "description": "Concise one‑sentence answers" }
}'
Example: Preview a prompt with variables
curl -X POST http://localhost:3000/api/prompts/app%3Asummarize/preview \
-H "Content-Type: application/json" \
-d '{ "variables": { "text": "HazelJS is a TypeScript framework.", "maxWords": "10" } }'
The RAG pipeline uses the rag:answer prompt from the registry. Override it via the Prompts API and the next /api/rag/ask call will use the new template.
RAG API
| Endpoint | Description |
|---|---|
| POST /api/rag/ingest | Ingest plain‑text documents into the in‑memory vector store |
| POST /api/rag/ask | Q&A using the current rag:answer prompt (response includes promptUsed) |
| POST /api/rag/ask/custom | One‑shot Q&A with a custom template (no registry change) |
| GET /api/rag/stats | Document count and current rag:answer template |
| DELETE /api/rag/clear | Wipe the vector store |
Ingest and ask
curl -X POST http://localhost:3000/api/rag/ingest \
-H "Content-Type: application/json" \
-d '{
"documents": [
{ "content": "HazelJS is a TypeScript backend framework built for scalability.", "source": "intro.txt" },
{ "content": "@hazeljs/prompts provides typed, overridable prompt templates.", "source": "prompts.txt" }
]
}'
curl -X POST http://localhost:3000/api/rag/ask \
-H "Content-Type: application/json" \
-d '{ "question": "What is HazelJS?" }'
Workflow: Override rag:answer with PUT /api/prompts/rag%3Aanswer, then run the same question again — the answer style follows the new prompt.
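The ask flow above can be sketched as: retrieve the most relevant ingested document, then fill the current rag:answer template's `{context}` and `{query}` slots before calling the LLM. This is an illustrative sketch, not the starter's actual retrieval code; a toy word-overlap score stands in for real embedding similarity.

```typescript
interface Doc {
  content: string;
  source: string;
}

// Toy relevance score: number of words shared with the question.
// The real pipeline uses embedding similarity instead.
function score(question: string, doc: Doc): number {
  const q = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  return doc.content.toLowerCase().split(/\W+/).filter((w) => q.has(w)).length;
}

// Pick the best-matching document and fill the template's
// {context} and {query} slots.
function buildRagPrompt(template: string, docs: Doc[], question: string): string {
  const ranked = [...docs].sort((a, b) => score(question, b) - score(question, a));
  const top = ranked[0];
  const context = top ? top.content : '';
  return template.replace('{context}', context).replace('{query}', question);
}
```

Since the template is a plain string resolved from the registry on every call, overriding rag:answer changes the synthesis style without touching the retrieval step at all.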
Agent API
| Endpoint | Description |
|---|---|
| POST /api/agent/run | Run the supervisor on a task (delegates to Researcher and/or Analyst) |
| GET /api/agent/workers | List workers and their prompt registry keys |
Example
curl -X POST http://localhost:3000/api/agent/run \
-H "Content-Type: application/json" \
-d '{ "task": "Research the benefits of RAG over fine‑tuning and analyse the trade‑offs." }'
The response includes supervisorSystemPrompt — the exact prompt used for the supervisor. Override agent:supervisor:system or agent:worker:researcher and run again to see different delegation and output styles.
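To make the "JSON routing decision" concrete, here is a hedged sketch of how a supervisor might parse the LLM's reply and dispatch to a worker. The field names (`worker`, `subtask`) are assumptions for illustration, not the actual @hazeljs/agent schema.

```typescript
interface RoutingDecision {
  worker: 'researcher' | 'analyst';
  subtask: string;
}

// Stand-in workers; the real ones call tools and an LLM.
const workers: Record<string, (task: string) => string> = {
  researcher: (t) => `research notes on: ${t}`,
  analyst: (t) => `analysis of: ${t}`,
};

// The routing prompt asks the LLM to reply with JSON only; the
// supervisor parses it and hands the subtask to the chosen worker.
function dispatch(llmReply: string): string {
  const decision: RoutingDecision = JSON.parse(llmReply);
  const worker = workers[decision.worker];
  if (!worker) throw new Error(`Unknown worker: ${decision.worker}`);
  return worker(decision.subtask);
}
```

This is why overriding agent:supervisor:routing changes delegation behavior: the routing prompt controls what JSON the LLM emits, and the dispatcher simply follows it.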
AI Tasks API
| Endpoint | Description |
|---|---|
| GET /api/ai/examples | Current template and sample variables for all four tasks |
| POST /api/ai/task/welcome | Personalized greeting |
| POST /api/ai/task/summarize | Word‑limited summarisation |
| POST /api/ai/task/sentiment | JSON sentiment (sentiment, confidence, reason) |
| POST /api/ai/task/translate | Language translation |
Examples
curl -X POST http://localhost:3000/api/ai/task/welcome \
-H "Content-Type: application/json" \
-d '{ "name": "Alice", "topic": "prompt engineering" }'
curl -X POST http://localhost:3000/api/ai/task/summarize \
-H "Content-Type: application/json" \
-d '{ "text": "HazelJS makes building TypeScript back‑ends fast and type‑safe.", "maxWords": 15 }'
curl -X POST http://localhost:3000/api/ai/task/sentiment \
-H "Content-Type: application/json" \
-d '{ "text": "I love how easy it is to manage prompts with HazelJS!" }'
curl -X POST http://localhost:3000/api/ai/task/translate \
-H "Content-Type: application/json" \
-d '{ "text": "Hello, world!", "targetLanguage": "Spanish" }'
All four tasks pull their prompts from the same registry, so you can update any of them at runtime via the Prompt API.
Recap
- One registry → single source of truth for every prompt.
- Live REST API → edit, preview, version, and persist prompts without a redeploy.
- Typed templates → full TypeScript inference for safer prompt construction.
- FileStore → prompts survive server restarts.
Give it a spin, tweak prompts on the fly, and watch the behavior of RAG, agents, and AI tasks change instantly!
API Examples
# Summarise a piece of text (max 30 words)
curl -X POST http://localhost:3000/api/ai/task/summarize \
-H "Content-Type: application/json" \
-d '{ "text": "HazelJS is a modular TypeScript framework...", "maxWords": "30" }'
# Get sentiment analysis
curl -X POST http://localhost:3000/api/ai/task/sentiment \
-H "Content-Type: application/json" \
-d '{ "text": "I love how easy HazelJS makes dependency injection!" }'
Prompt Registry Overview
| Key | Package | Description |
|---|---|---|
| rag:answer | @hazeljs/rag | RAG answer synthesis |
| rag:entity-extraction | @hazeljs/rag | GraphRAG entity extraction |
| rag:community-summary | @hazeljs/rag | GraphRAG community summarisation |
| rag:graph-search | @hazeljs/rag | GraphRAG search synthesis |
| agent:supervisor:system | @hazeljs/agent | Supervisor identity + worker list |
| agent:supervisor:routing | @hazeljs/agent | JSON routing decision |
| agent:worker:researcher | this starter | ResearcherAgent tool prompt |
| agent:worker:analyst | this starter | AnalystAgent tool prompt |
| app:welcome | this starter | Personalised greeting |
| app:summarize | this starter | Word‑limited summarisation |
| app:sentiment | this starter | JSON sentiment classification |
| app:translate | this starter | Language translation |
Project Structure
src/
├── main.ts # Bootstrap + startup banner
├── app.module.ts # Root HazelModule
├── prompts/ # @hazeljs/prompts integration
│ ├── prompts.service.ts
│ ├── prompts.controller.ts
│ └── prompts.module.ts
├── rag/ # @hazeljs/rag — reads rag:answer from registry
│ ├── rag.service.ts
│ ├── rag.controller.ts
│ └── rag.module.ts
├── agent/ # @hazeljs/agent — supervisor + workers from registry
│ ├── agent.service.ts
│ ├── agent.controller.ts
│ ├── workers/researcher.agent.ts
│ ├── workers/analyst.agent.ts
│ └── agent.module.ts
├── ai/ # AI tasks via registry prompts
│ ├── ai-task.service.ts
│ ├── ai-task.controller.ts
│ └── ai.module.ts
├── llm/ # OpenAI LLM provider for agents
│ └── openai-llm.provider.ts
└── health/
└── health.controller.ts # Liveness + readiness
How It Works
- Prompt Registry – A single global PromptRegistry (backed by PromptTemplate and a FileStore) holds all prompt templates.
- Runtime Overrides – Prompts can be listed, previewed, and overridden via a REST API; changes are picked up instantly by RAG, agents, and AI tasks.
- Persistence – Overrides are persisted to ./data/prompts.json (or another store) and survive restarts.
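The persistence step is conceptually just a JSON round trip. This sketch assumes the store is a flat map of key to template string; the starter's actual file format may differ.

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Write the prompt map to disk, pretty-printed so overrides are easy
// to diff and hand-edit.
function savePrompts(file: string, prompts: Record<string, string>): void {
  fs.mkdirSync(path.dirname(file), { recursive: true });
  fs.writeFileSync(file, JSON.stringify(prompts, null, 2));
}

// Load the map back; a missing file just means "no overrides yet".
function loadPrompts(file: string): Record<string, string> {
  if (!fs.existsSync(file)) return {};
  return JSON.parse(fs.readFileSync(file, 'utf8'));
}
```

Because the file is plain JSON, you can also edit ./data/prompts.json by hand while the server is stopped and reload it via POST /api/prompts/load.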
Environment Variables
| Variable | Description | Default |
|---|---|---|
| OPENAI_API_KEY | Required – OpenAI API key | — |
| EMBEDDING_MODEL | Model used for embeddings | — |
| QA_MODEL | Model for question‑answering | — |
| AGENT_MODEL | Model for agents | — |
| PROMPTS_FILE | Path to the prompts JSON file | ./data/prompts.json |
| PORT | HTTP port for the server | — |
See the starter’s .env.example and README for the full list.
Extending the Store
For production you can replace the FileStore with a RedisStore (or any other backend) by adjusting the registry configuration inside PromptsService. The REST API and all consumers remain unchanged.
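The reason the swap is cheap is that the registry talks to its store through a small interface. The shape below is an assumption for illustration, not the real @hazeljs/prompts contract; the point is that consumers depend only on the interface, so changing the backend touches nothing else.

```typescript
// Hypothetical store contract: two async methods, nothing more.
interface PromptStore {
  save(prompts: Record<string, string>): Promise<void>;
  load(): Promise<Record<string, string>>;
}

// Trivial backend for tests and local development.
class InMemoryStore implements PromptStore {
  private data: Record<string, string> = {};

  async save(prompts: Record<string, string>): Promise<void> {
    this.data = { ...prompts };
  }

  async load(): Promise<Record<string, string>> {
    return { ...this.data };
  }
}

// A RedisStore would implement the same two methods with a Redis
// client; the registry and REST API never see the difference.
```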
What the Starter Gives You
- One Registry – PromptTemplate, PromptRegistry, and a pluggable store for typed, overridable prompts.
- One REST API – List, preview, and override any prompt at runtime. All services (RAG, agents, AI tasks) react to changes instantly.
- RAG, Agents, & AI Tasks – All read from the same registry, enabling behavior tuning without code changes.
Clone the repo, set OPENAI_API_KEY, and you have a single application that demonstrates prompt management, RAG, supervisor agents, and AI tasks in one place.
For more information about HazelJS and @hazeljs/prompts, visit hazeljs.com and the HazelJS GitHub repository.