Why I built Heym instead of extending n8n

Published: April 28, 2026 at 05:30 AM EDT
3 min read
Source: Dev.to

Heym is a self‑hosted, source‑available AI workflow automation platform. It provides a single runtime for everything an AI workflow needs: agents, retrieval, approval steps, observability, scheduling, and the ability to expose workflows as callable tools for AI assistants. It runs on your own infrastructure via Docker Compose, so no data leaves your stack.

What Heym actually is

Heym offers a unified environment where agents can reason, call tools, retrieve documents, and pause for human review—all within the same execution flow. It’s designed specifically for AI‑native workflows, unlike deterministic automation platforms such as n8n, Zapier, or Make.

The execution model

The workflow engine builds a directed acyclic graph (DAG) from the canvas and runs independent nodes concurrently using a thread pool. In streaming mode, events are emitted as each node completes, allowing the frontend to update in real time. A scheduling sketch follows the list below.

  • Agent nodes support a full tool‑calling loop: they can run Python tools, connect to external MCP servers, delegate to sub‑agents, and invoke other workflows as tools.
  • When context usage approaches 80 % of the model window, the engine automatically compresses history to prevent long‑running agents from silently failing mid‑task.
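To make the model concrete, here's a minimal scheduling sketch in Python. It is not Heym's actual engine; the node names, the `run_node` stub, and the event format are all illustrative. Independent nodes of a toy DAG run concurrently on a thread pool, and an event is emitted as each one completes:

```python
import concurrent.futures as cf

# Toy DAG: node -> upstream dependencies. Names are illustrative.
GRAPH = {
    "fetch": [],
    "summarize": ["fetch"],
    "classify": ["fetch"],            # independent of "summarize": runs concurrently
    "report": ["summarize", "classify"],
}

def run_node(name: str) -> str:
    # Stand-in for real node work (LLM call, tool call, retrieval, ...).
    return f"result of {name}"

def execute(graph: dict) -> None:
    done: dict = {}
    pending = dict(graph)
    with cf.ThreadPoolExecutor(max_workers=4) as pool:
        futures = {}  # Future -> node name
        while pending or futures:
            # Schedule every node whose dependencies have all completed.
            for name in [n for n, deps in pending.items() if all(d in done for d in deps)]:
                futures[pool.submit(run_node, name)] = name
                del pending[name]
            # Emit a streaming event as soon as the next node finishes.
            fut = next(cf.as_completed(futures))
            node = futures.pop(fut)
            done[node] = fut.result()
            print(f"event: node_completed {node}")  # frontend updates in real time

execute(GRAPH)
```

Here `summarize` and `classify` both depend only on `fetch`, so they run in parallel, while `report` waits for both; the streaming events arrive in completion order, not canvas order.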

Human‑in‑the‑Loop as a first‑class primitive

AI output often has real consequences (e.g., drafted emails, generated reports, data transformations). The HITL node pauses execution at any point, generates a public one‑time review URL, and waits. A reviewer can:

  • Accept the output
  • Edit it
  • Refuse it

All without needing a Heym account. Execution then resumes from an exact stored snapshot, and the same run can pause multiple times. This is not a workaround; it’s a core workflow design primitive.
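For intuition, the pause/resume pattern reduces to something like the sketch below. Everything in it is illustrative rather than Heym's internals: the in-memory SNAPSHOTS dict, the URL shape, and the function names are assumptions, and a real deployment would persist snapshots (e.g., in PostgreSQL).

```python
import secrets

# Illustrative in-memory snapshot store; a real system would persist this.
SNAPSHOTS: dict = {}

def pause_for_review(run_id: str, state: dict, output: str) -> str:
    """Freeze the run and return a public one-time review URL."""
    token = secrets.token_urlsafe(16)                # unguessable, single-use
    SNAPSHOTS[token] = {"run_id": run_id, "state": state, "output": output}
    return f"https://your-heym-host/review/{token}"  # hypothetical URL shape

def resolve_review(token: str, decision: str, edited: str | None = None) -> dict:
    """Apply the reviewer's accept/edit/refuse decision and consume the token."""
    snap = SNAPSHOTS.pop(token)   # one-time: a second use raises KeyError
    if decision == "accept":
        snap["state"]["hitl_output"] = snap["output"]
    elif decision == "edit":
        snap["state"]["hitl_output"] = edited
    else:                         # refuse
        snap["state"]["hitl_output"] = None
    return snap["state"]          # engine resumes from this snapshot

url = pause_for_review("run-42", {"step": "draft_email"}, "Hi team, ...")
token = url.rsplit("/", 1)[-1]
resumed = resolve_review(token, "edit", edited="Hi all, ...")
print(resumed["hitl_output"])     # "Hi all, ..."
```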

Built‑in knowledge retrieval

Document retrieval is native to the runtime rather than a separate external service. Heym includes built‑in vector store management:

  1. Upload documents
  2. Create stores
  3. Wire semantic search directly into your workflow

The entire pipeline runs inside a single workflow and appears in a unified trace, eliminating the need for separate systems and debugging contexts.
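As a rough mental model of what that pipeline does, here's a self-contained toy in Python: a trivial letter-frequency embedding and a cosine-similarity store stand in for a real embedding model and Heym's vector store. All names here are illustrative.

```python
import math

def embed(text: str) -> list:
    # Trivial letter-frequency "embedding", only to keep the sketch runnable;
    # a real store would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    def __init__(self) -> None:           # step 2: create a store
        self.docs = []

    def upload(self, text: str) -> None:  # step 1: upload documents
        self.docs.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list:  # step 3: semantic search
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = ToyVectorStore()
store.upload("Invoices are processed every Friday.")
store.upload("The VPN config lives in the infra repo.")
print(store.search("when do we handle invoices?", k=1))
```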

MCP Server

Every Heym instance runs a built‑in MCP Server. Any workflow you build can be exposed as a tool that Claude Desktop, Cursor, or any MCP client can call directly. Agent nodes can also connect to external MCP servers as tool sources, enabling bidirectional capability flow.
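Heym ships this server built in, so you don't write it yourself, but for intuition, exposing a function as an MCP tool with the official MCP Python SDK looks roughly like this (the server name, tool, and body are placeholders):

```python
# Requires the official MCP Python SDK: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("heym-workflows")  # server name is illustrative

@mcp.tool()
def summarize_ticket(ticket_id: str) -> str:
    """Summarize a support ticket via a workflow."""
    # Placeholder body: Heym's built-in server would forward this call
    # into the workflow engine instead.
    return f"(workflow output for ticket {ticket_id})"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so MCP clients can connect
```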

Observability

  • Traces tab: logs every execution automatically.
  • Evals tab: lets you create test suites and run evaluations across multiple models simultaneously with configurable scoring.

Observability is baked into the platform, not added as an afterthought.
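Conceptually, an eval run is a cross product of models and test cases pushed through a scoring function. Here's a toy harness, with `call_model` as a hypothetical stand-in for a real client, placeholder model names, and a deliberately naive scorer:

```python
def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real model client.
    return f"[{model}] answer to: {prompt}"

def exact_match(output: str, expected: str) -> float:
    # Naive scorer; real suites would use configurable scoring.
    return 1.0 if expected.lower() in output.lower() else 0.0

SUITE = [
    {"prompt": "What is 2 + 2?", "expected": "4"},
    {"prompt": "Capital of France?", "expected": "Paris"},
]
MODELS = ["model-a", "model-b"]  # placeholder model names

for model in MODELS:
    scores = [exact_match(call_model(model, c["prompt"]), c["expected"]) for c in SUITE]
    print(f"{model}: mean score {sum(scores) / len(scores):.2f}")
```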

Stack

  • Frontend: Vue 3 with TypeScript and Vue Flow
  • Backend: Python, FastAPI, async SQLAlchemy
  • Database: PostgreSQL 16
  • Deployment: Docker Compose stack

Where it is now

Heym is at v0.0.1 and under active development. The source code is available under the MIT license with the Commons Clause.

If you’re building AI workflows and spending more time on glue code than on the actual problem, give Heym a try.

GitHub:

[Image: Heym Workflow Canvas]
