A Request for Comment on the Connex AGI Architecture

Published: February 9, 2026 at 12:29 PM EST
3 min read
Source: Dev.to

I’ve been working on a new project called Connex AGI, a system designed to be more than just another chatbot. The goal is to build a “compiler for human intent”—a system that transforms nebulous user goals into structured, executable programs.

We are aiming for a biological cognitive model, integrating deliberative reasoning, perception, reflexes, and memory into a cohesive whole. I am opening up the repository for community review and would love feedback on the underlying architecture.

The Architecture

A Biological Approach

Connex AGI implements a multi‑tier architecture that mimics biological systems. Instead of a single LLM loop, we split the cognitive load across specialized layers.

[Architecture diagram]

1. The Senses (Perception & Reflexes)

Before the “brain” even processes a request, we have the Perception Layer (Tier Peer). Using the Model Context Protocol (MCP), it gathers real‑time data—e.g., reading logs or analyzing video streams—to ground the AI in reality.
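To make the grounding idea concrete, here is a minimal sketch of a perception layer. The class and source names are hypothetical illustrations, not the project's API: in Connex AGI the sources would be MCP servers, whereas here each source is just a callable returning an observation.

```python
from typing import Callable, Dict

class PerceptionLayer:
    """Hypothetical sketch: collects observations before planning starts."""

    def __init__(self) -> None:
        self._sources: Dict[str, Callable[[], str]] = {}

    def register(self, name: str, source: Callable[[], str]) -> None:
        # In the real system this would register an MCP tool or resource.
        self._sources[name] = source

    def perceive(self) -> Dict[str, str]:
        # Gather one fresh observation from every registered source.
        return {name: src() for name, src in self._sources.items()}

perception = PerceptionLayer()
perception.register("logs", lambda: "ERROR: disk full on /dev/sda1")
print(perception.perceive()["logs"])  # -> ERROR: disk full on /dev/sda1
```

The key property is that `perceive()` runs before the Planner is ever invoked, so the expensive reasoning model starts from grounded, current facts.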

Simultaneously, the Reflex Layer handles high‑speed, unconditional responses. Like a nervous system, it executes pre‑programmed plans (e.g., in response to a GitHub webhook) without waiting for the slower, expensive reasoning of the Planner.
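The reflex idea can be sketched as a simple dispatch table; a hit runs a pre-programmed handler immediately, and a miss escalates to the Planner. The event names and handlers below are illustrative, not the project's actual reflex set.

```python
# Hypothetical reflex table: event name -> pre-programmed handler.
# Real triggers would be webhooks (e.g. from GitHub); the point is that
# the Planner is never on the hot path for a known event.
REFLEXES = {
    "push": lambda payload: f"run CI for {payload['repo']}",
    "issue_opened": lambda payload: f"auto-label issue #{payload['number']}",
}

def handle_event(event, payload):
    reflex = REFLEXES.get(event)
    # None signals "no reflex: escalate to the slower Planner tier".
    return reflex(payload) if reflex else None

print(handle_event("push", {"repo": "connex-agi"}))  # -> run CI for connex-agi
```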

2. The Core Brain (Planner & Orchestrator)

This is where the heavy lifting happens:

  • Tier 1 – Planner: Uses reasoning models such as DeepSeek‑R1 or OpenAI's o1 to decompose natural‑language goals into a Directed Acyclic Graph (DAG) of actions.
  • Tier 2 – Orchestrator: Acts as the manager. It handles state management, routes outputs from one step to inputs of another, and self‑corrects if a step fails.
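The Planner/Orchestrator split above can be sketched with the standard library's `graphlib`. The plan here is hard-coded (in Connex AGI the Planner would emit it from a goal); the orchestrator walks the DAG in topological order and routes each step's output into its dependents.

```python
from graphlib import TopologicalSorter

# Hypothetical plan: step name -> (dependency names, action).
# Each action receives a dict of its dependencies' results.
PLAN = {
    "fetch":  ((), lambda deps: "raw data"),
    "parse":  (("fetch",), lambda deps: deps["fetch"].upper()),
    "report": (("parse",), lambda deps: f"report: {deps['parse']}"),
}

def orchestrate(plan):
    # Topological order guarantees every dependency runs before its consumer.
    order = TopologicalSorter({name: set(spec[0]) for name, spec in plan.items()})
    results = {}
    for step in order.static_order():
        deps, action = plan[step]
        results[step] = action({d: results[d] for d in deps})
    return results

print(orchestrate(PLAN)["report"])  # -> report: RAW DATA
```

A real orchestrator would add the self-correction loop (retry or re-plan on a failed step), but the routing skeleton is the same.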

3. Execution & Evolution

  • Tier 3 – SkillDock: The modular worker layer where specific tools (web search, code execution, etc.) live.
  • Tier 4 – Motivation: A self‑improvement loop. After execution, the system reviews its logs; if a failure is due to a missing capability, it autonomously generates and installs new skills.
  • Tier 5 – World Layer: A “theory of physics” for the AGI. It uses a latent model to predict state transitions and verify whether an action is physically possible.
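The SkillDock/Motivation interaction can be sketched as follows. This is a toy under obvious assumptions: `SkillDock` is a plain callable registry, and the Motivation loop's "generate a missing skill" step is stubbed out as a factory function rather than an LLM-driven code generator.

```python
class SkillDock:
    """Hypothetical modular worker layer: skills are plain callables."""

    def __init__(self):
        self.skills = {}

    def install(self, name, fn):
        self.skills[name] = fn

    def run(self, name, *args):
        if name not in self.skills:
            raise KeyError(f"missing capability: {name}")
        return self.skills[name](*args)

def motivation_loop(dock, name, skill_factory, *args):
    # Review the failure; if it is a missing capability, generate and
    # install the skill, then retry once.
    try:
        return dock.run(name, *args)
    except KeyError:
        dock.install(name, skill_factory())
        return dock.run(name, *args)

dock = SkillDock()
print(motivation_loop(dock, "add", lambda: (lambda a, b: a + b), 2, 3))  # -> 5
```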

4. The Hive Mind (Registry)

Tier 10 – Registry allows AGIs to share skills and reflexes. If an instance encounters a problem it can’t solve, it can query the global registry to download the necessary knowledge learned by another AGI.
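A minimal sketch of that fallback lookup, with a local dict standing in for the instance's skills and another dict standing in for the networked global registry (both hypothetical stand-ins, not the project's protocol):

```python
# Shared registry stub: skill name -> callable learned by some other AGI.
GLOBAL_REGISTRY = {"summarize_logs": lambda text: text.splitlines()[0]}

def resolve_skill(local_skills, name):
    # Prefer a locally installed skill; otherwise "download" it from
    # the global registry and cache it locally for next time.
    if name in local_skills:
        return local_skills[name]
    skill = GLOBAL_REGISTRY.get(name)
    if skill is not None:
        local_skills[name] = skill
    return skill

local = {}
skill = resolve_skill(local, "summarize_logs")
print(skill("ERROR at boot\nOK afterwards"))  # -> ERROR at boot
```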

5. Memory (The Experience)

Connex AGI implements a dual‑tier memory system to solve the “amnesia” problem common in LLMs:

  • Short‑Term (The Cache): A RAM‑based layer that holds the last ~10 interactions, ensuring immediate dialogue flow without latency.
  • Long‑Term (The Archive): A persistent SQLite vector database. Instead of simple keyword search, the system uses cosine similarity to find “top‑match memories,” allowing it to recall relevant context from months ago based on meaning.
  • Experience Notes: To prevent data bloat, a daily summarization process compresses raw logs into high‑level notes that are easier for the Planner to re‑use.
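The long-term recall step boils down to ranking stored embeddings by cosine similarity. A minimal sketch, with tiny hand-made vectors in place of real embeddings and an in-memory list in place of the SQLite-backed store:

```python
import math

# Hypothetical memory store: (text, embedding) pairs. In the real system
# these would live in a persistent SQLite vector table.
MEMORIES = [
    ("deployed v2 to staging", [0.9, 0.1, 0.0]),
    ("user prefers dark mode", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def recall(query_vec, top_k=1):
    # Rank every memory by semantic closeness to the query embedding.
    ranked = sorted(MEMORIES, key=lambda m: cosine(query_vec, m[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

print(recall([1.0, 0.0, 0.0]))  # -> ['deployed v2 to staging']
```

Because the match is on vector direction rather than keywords, a query about "release to test environment" can still surface the staging-deployment memory if their embeddings point the same way.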

Request for Comments

What I need from you: critical feedback on the following points.

  1. Complexity vs. Utility – Is the 8‑tier (or 10‑tier) separation necessary, or could the Planner and Orchestrator be merged without losing reliability?
  2. Latency – With separate layers for Perception, Planning, and Execution, do you foresee major latency bottlenecks?
  3. The World Layer – Is the concept of a “Latent Metaphysical Core” to verify actions practical in a software‑agent context?

Check out the Code

The system is built primarily in Python (≈ 74 %) and TypeScript.

I appreciate every star, fork, and code review!
