Prompt management, RAG, and agents with HazelJS

Published: March 7, 2026 at 07:33 PM EST
8 min read
Source: Dev.to

One starter: typed prompt templates, a live registry, FileStore persistence, RAG, supervisor agents, and AI tasks—all driven by the same prompt system

Managing LLM prompts well is hard: you want versioning, overrides without redeploys, and a single place that RAG, agents, and plain AI tasks all read from. The HazelJS Prompt Starter shows how to do exactly that. Built on @hazeljs/prompts, @hazeljs/rag, and @hazeljs/agent, it gives you:

  • a PromptRegistry with typed templates,
  • FileStore persistence, and
  • a REST API to inspect and override any prompt at runtime.

RAG answer synthesis, the supervisor agent, worker agents, and four AI tasks (welcome, summarize, sentiment, translate) all use that same registry. In this post we walk through what’s in the starter and how to use it.


Features

| Feature | Description |
| --- | --- |
| PromptTemplate | Typed {variable} rendering with full TypeScript inference |
| PromptRegistry | Global prompt store — register, override, version at runtime |
| FileStore | Prompts persist to ./data/prompts.json between restarts |
| RAG integration | The RAG answer synthesis prompt is registry‑driven and overridable via REST |
| Agent integration | Supervisor system + routing prompts come from the registry |
| Worker agents | Researcher and Analyst workers use registry prompts for tool behaviour |
| AI tasks | Welcome, summarize, sentiment, translate — all backed by registry prompts |
| Live REST API | Inspect, preview, and override any prompt without restarting the server |

One server, one registry: change a prompt with PUT /api/prompts/:key, and the next RAG question, agent run, or AI task uses the new template.
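The typed-template idea can be sketched independently of the framework. The following is a minimal, hypothetical renderer (not the actual @hazeljs/prompts API) that infers the {variable} names from the template string at the type level, so TypeScript rejects a call with missing or misspelled variables:

```typescript
// Illustrative sketch only — not the real @hazeljs/prompts API.
// Extract placeholder names like "maxWords" from the template string type.
type Vars<T extends string> =
  T extends `${string}{${infer V}}${infer Rest}` ? V | Vars<Rest> : never;

// render() only accepts an object whose keys exactly cover the placeholders.
function render<T extends string>(
  template: T,
  vars: Record<Vars<T>, string>,
): string {
  return template.replace(
    /\{(\w+)\}/g,
    (_, key) => (vars as Record<string, string>)[key] ?? `{${key}}`,
  );
}

const prompt = render(
  "Summarize in {maxWords} words:\n{text}",
  { maxWords: "10", text: "HazelJS is a TypeScript framework." },
);
// prompt === "Summarize in 10 words:\nHazelJS is a TypeScript framework."
```

Omitting `text` or passing an extra key would be a compile-time error, which is the kind of safety the "full TypeScript inference" feature refers to.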


Quick‑Start

git clone https://github.com/hazel-js/hazeljs-prompt-starter.git
cd hazeljs-prompt-starter
cp .env.example .env   # add OPENAI_API_KEY
npm install
npm run dev

The server runs at http://localhost:3000. Try listing prompts:

curl http://localhost:3000/api/prompts

Then override the RAG answer prompt and ask a question (see examples below).

Every prompt is identified by a key (e.g. rag:answer, agent:supervisor:system, app:summarize). The REST API lets you manage them without touching code.


Prompt Registry API

| Endpoint | Description |
| --- | --- |
| GET /api/prompts | List every registered prompt (key, name, version, template) |
| GET /api/prompts/stores | Show configured store backends (e.g. FileStore) |
| GET /api/prompts/:key | Full details for one prompt |
| GET /api/prompts/:key/versions | List cached versions |
| POST /api/prompts/:key/preview | Render with supplied variables (see exactly what the LLM gets) |
| PUT /api/prompts/:key | Override a prompt at runtime (persisted to FileStore immediately) |
| POST /api/prompts/save | Persist the entire in‑memory registry to FileStore |
| POST /api/prompts/load | Reload all prompts from FileStore |

Example: Override the RAG answer prompt

curl -X PUT http://localhost:3000/api/prompts/rag%3Aanswer \
  -H "Content-Type: application/json" \
  -d '{
    "template": "Answer in one sentence.\nContext: {context}\nQuestion: {query}\nAnswer:",
    "metadata": { "version": "2.0.0", "description": "Concise one‑sentence answers" }
  }'

Example: Preview a prompt with variables

curl -X POST http://localhost:3000/api/prompts/app%3Asummarize/preview \
  -H "Content-Type: application/json" \
  -d '{ "variables": { "text": "HazelJS is a TypeScript framework.", "maxWords": "10" } }'

The RAG pipeline uses the rag:answer prompt from the registry. Override it via the Prompts API and the next /api/rag/ask call will use the new template.


RAG API

| Endpoint | Description |
| --- | --- |
| POST /api/rag/ingest | Ingest plain‑text documents into the in‑memory vector store |
| POST /api/rag/ask | Q&A using the current rag:answer prompt (response includes promptUsed) |
| POST /api/rag/ask/custom | One‑shot Q&A with a custom template (no registry change) |
| GET /api/rag/stats | Document count and current rag:answer template |
| DELETE /api/rag/clear | Wipe the vector store |

Ingest and ask

curl -X POST http://localhost:3000/api/rag/ingest \
  -H "Content-Type: application/json" \
  -d '{
    "documents": [
      { "content": "HazelJS is a TypeScript backend framework built for scalability.", "source": "intro.txt" },
      { "content": "@hazeljs/prompts provides typed, overridable prompt templates.", "source": "prompts.txt" }
    ]
  }'

curl -X POST http://localhost:3000/api/rag/ask \
  -H "Content-Type: application/json" \
  -d '{ "question": "What is HazelJS?" }'

Workflow: Override rag:answer with PUT /api/prompts/rag%3Aanswer, then run the same question again — the answer style follows the new prompt.


Agent API

| Endpoint | Description |
| --- | --- |
| POST /api/agent/run | Run the supervisor on a task (delegates to Researcher and/or Analyst) |
| GET /api/agent/workers | List workers and their prompt registry keys |

Example

curl -X POST http://localhost:3000/api/agent/run \
  -H "Content-Type: application/json" \
  -d '{ "task": "Research the benefits of RAG over fine‑tuning and analyse the trade‑offs." }'

The response includes supervisorSystemPrompt — the exact prompt used for the supervisor. Override agent:supervisor:system or agent:worker:researcher and run again to see different delegation and output styles.


AI Tasks API

| Endpoint | Description |
| --- | --- |
| GET /api/ai/examples | Current template and sample variables for all four tasks |
| POST /api/ai/task/welcome | Personalized greeting |
| POST /api/ai/task/summarize | Word‑limited summarisation |
| POST /api/ai/task/sentiment | JSON sentiment (sentiment, confidence, reason) |
| POST /api/ai/task/translate | Language translation |

Examples

curl -X POST http://localhost:3000/api/ai/task/welcome \
  -H "Content-Type: application/json" \
  -d '{ "name": "Alice", "topic": "prompt engineering" }'

curl -X POST http://localhost:3000/api/ai/task/summarize \
  -H "Content-Type: application/json" \
  -d '{ "text": "HazelJS makes building TypeScript back‑ends fast and type‑safe.", "maxWords": 15 }'

curl -X POST http://localhost:3000/api/ai/task/sentiment \
  -H "Content-Type: application/json" \
  -d '{ "text": "I love how easy it is to manage prompts with HazelJS!" }'

curl -X POST http://localhost:3000/api/ai/task/translate \
  -H "Content-Type: application/json" \
  -d '{ "text": "Hello, world!", "targetLanguage": "Spanish" }'

All four tasks pull their prompts from the same registry, so you can update any of them at runtime via the Prompt API.


Recap

  • One registry → single source of truth for every prompt.
  • Live REST API → edit, preview, version, and persist prompts without a redeploy.
  • Typed templates → full TypeScript inference for safer prompt construction.
  • FileStore → prompts survive server restarts.

Give it a spin, tweak prompts on the fly, and watch the behavior of RAG, agents, and AI tasks change instantly!


API Examples

# Summarise a piece of text (max 30 words)
curl -X POST http://localhost:3000/api/ai/task/summarize \
  -H "Content-Type: application/json" \
  -d '{ "text": "HazelJS is a modular TypeScript framework...", "maxWords": "30" }'

# Get sentiment analysis
curl -X POST http://localhost:3000/api/ai/task/sentiment \
  -H "Content-Type: application/json" \
  -d '{ "text": "I love how easy HazelJS makes dependency injection!" }'

Prompt Registry Overview

| Key | Package | Description |
| --- | --- | --- |
| rag:answer | @hazeljs/rag | RAG answer synthesis |
| rag:entity-extraction | @hazeljs/rag | GraphRAG entity extraction |
| rag:community-summary | @hazeljs/rag | GraphRAG community summarisation |
| rag:graph-search | @hazeljs/rag | GraphRAG search synthesis |
| agent:supervisor:system | @hazeljs/agent | Supervisor identity + worker list |
| agent:supervisor:routing | @hazeljs/agent | JSON routing decision |
| agent:worker:researcher | this starter | ResearcherAgent tool prompt |
| agent:worker:analyst | this starter | AnalystAgent tool prompt |
| app:welcome | this starter | Personalised greeting |
| app:summarize | this starter | Word‑limited summarisation |
| app:sentiment | this starter | JSON sentiment classification |
| app:translate | this starter | Language translation |

Project Structure

src/
├── main.ts                     # Bootstrap + startup banner
├── app.module.ts               # Root HazelModule
├── prompts/                    # @hazeljs/prompts integration
│   ├── prompts.service.ts
│   ├── prompts.controller.ts
│   └── prompts.module.ts
├── rag/                        # @hazeljs/rag — reads rag:answer from registry
│   ├── rag.service.ts
│   ├── rag.controller.ts
│   └── rag.module.ts
├── agent/                      # @hazeljs/agent — supervisor + workers from registry
│   ├── agent.service.ts
│   ├── agent.controller.ts
│   ├── workers/researcher.agent.ts
│   ├── workers/analyst.agent.ts
│   └── agent.module.ts
├── ai/                         # AI tasks via registry prompts
│   ├── ai-task.service.ts
│   ├── ai-task.controller.ts
│   └── ai.module.ts
├── llm/                        # OpenAI LLM provider for agents
│   └── openai-llm.provider.ts
└── health/
    └── health.controller.ts   # Liveness + readiness

How It Works

  • Prompt Registry – A single global PromptRegistry (backed by PromptTemplate and a FileStore) holds all prompt templates.
  • Runtime Overrides – Prompts can be listed, previewed, and overridden via a REST API; changes are picked up instantly by RAG, agents, and AI tasks.
  • Persistence – Overrides are persisted to ./data/prompts.json (or another store) and survive restarts.
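The persistence step can be sketched in a few lines. This is a hypothetical stand-in for what a FileStore-style backend does (not the actual @hazeljs/prompts implementation): the registry's prompts are a plain key → record map serialized to a JSON file, so an override written before a restart is read back after it:

```typescript
// Illustrative sketch of FileStore-style persistence — hypothetical shape,
// not the real @hazeljs/prompts FileStore.
import * as fs from "node:fs";
import * as path from "node:path";

type PromptRecord = { template: string; version: string };

// Write the whole registry snapshot to disk as pretty-printed JSON.
function savePrompts(file: string, prompts: Record<string, PromptRecord>): void {
  fs.mkdirSync(path.dirname(file), { recursive: true });
  fs.writeFileSync(file, JSON.stringify(prompts, null, 2), "utf8");
}

// Read the snapshot back, e.g. on server startup.
function loadPrompts(file: string): Record<string, PromptRecord> {
  return JSON.parse(fs.readFileSync(file, "utf8"));
}

// Override a prompt and persist — a later load() sees the new template.
const file = "./data/prompts.json";
const prompts: Record<string, PromptRecord> = {
  "rag:answer": { template: "Context: {context}\nQ: {query}\nA:", version: "1.0.0" },
};
prompts["rag:answer"] = {
  template: "Answer in one sentence.\nContext: {context}\nQuestion: {query}\nAnswer:",
  version: "2.0.0",
};
savePrompts(file, prompts);
```

The file path here mirrors the starter's default of ./data/prompts.json.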

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| OPENAI_API_KEY | Required – OpenAI API key | |
| EMBEDDING_MODEL | Model used for embeddings | |
| QA_MODEL | Model for question‑answering | |
| AGENT_MODEL | Model for agents | |
| PROMPTS_FILE | Path to the prompts JSON file | ./data/prompts.json |
| PORT | HTTP port for the server | |

See the starter’s .env.example and README for the full list.

Extending the Store

For production you can replace the FileStore with a RedisStore (or any other backend) by adjusting the registry configuration inside PromptsService. The REST API and all consumers remain unchanged.
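Why consumers stay unchanged when the backend is swapped can be illustrated with a small interface sketch. The interface below is hypothetical (the real @hazeljs/prompts store contract may differ); a Redis-backed class would implement the same two methods, and an in-memory store stands in for it here:

```typescript
// Hypothetical store contract — the actual @hazeljs/prompts API may differ.
// Consumers depend only on the interface, so FileStore, Redis, or anything
// else can be plugged in without touching RAG, agents, or AI tasks.
interface PromptStore {
  save(prompts: Record<string, string>): Promise<void>;
  load(): Promise<Record<string, string>>;
}

// In-memory stand-in used here instead of a real Redis client.
class MemoryStore implements PromptStore {
  private data: Record<string, string> = {};
  async save(prompts: Record<string, string>): Promise<void> {
    this.data = { ...prompts };
  }
  async load(): Promise<Record<string, string>> {
    return { ...this.data };
  }
}

// Code written against PromptStore behaves identically for every backend.
async function persistAndReload(store: PromptStore): Promise<Record<string, string>> {
  await store.save({ "rag:answer": "Context: {context}\nQuestion: {query}\nAnswer:" });
  return store.load();
}
```

Swapping backends then means constructing a different PromptStore in one place (the registry configuration) while persistAndReload-style callers are untouched.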


What the Starter Gives You

  1. One Registry – PromptTemplate, PromptRegistry, and a pluggable store for typed, overridable prompts.
  2. One REST API – List, preview, and override any prompt at runtime. All services (RAG, agents, AI tasks) react to changes instantly.
  3. RAG, Agents, & AI Tasks – All read from the same registry, enabling behavior tuning without code changes.

Clone the repo, set OPENAI_API_KEY, and you have a single application that demonstrates prompt management, RAG, supervisor agents, and AI tasks in one place.

For more information about HazelJS and @hazeljs/prompts, visit hazeljs.com and the HazelJS GitHub repository.
