Standardizing 'Intelligence': The 3-Layer Metadata Philosophy

Published: April 17, 2026 at 08:12 PM EDT
4 min read
Source: Dev.to

Introduction

In previous posts we explained why AI agents fail when they rely on “vibes” and why they need a “Cognitive Interface.” But what does Intelligence look like at the code level?

If you ask ten developers to describe a tool to an AI, you’ll get ten different answers—some focus on technical types, others on flowery prose, and some on security. At apcore we have standardized this “Intelligence” into a 3‑Layer Metadata Stack. By separating technical syntax from behavioral governance and tactical wisdom, we give an AI agent a 360‑degree view of your module.

The 3‑Layer Metadata Stack

We visualize the “Intelligence” of a module as a stack that moves from Required to Tactical:

| Layer | Purpose | Key Fields |
| --- | --- | --- |
| Layer 1 – Core | Minimum definition needed for a module to exist in the apcore ecosystem. | `input_schema`, `output_schema`, `description` |
| Layer 2 – Governance | Defines the "Personality" and "Safety Profile" of your code. | `readonly`, `destructive`, `requires_approval`, `idempotent` |
| Layer 3 – Tactical Wisdom | Injects human experience directly into the module's metadata. | `x-when-to-use`, `x-when-not-to-use`, `x-common-mistakes`, … |

Layer 1 – Core (Required)

  • input_schema – Exactly what the AI must send.
  • output_schema – Exactly what the AI will receive.
  • description – A short “blurb” for the AI’s search engine.

Goal: Precision. By enforcing JSON Schema Draft 2020‑12, we provide a universal language that any LLM can understand. If the AI doesn’t get the syntax right, nothing else matters.

Layer 2 – Governance (Behavioral)

  • readonly – Is it safe to call this multiple times for information?
  • destructive – Will this delete or overwrite data?
  • requires_approval – Does a human need to click “Yes” before this runs?
  • idempotent – Can the AI safely retry if the connection drops?

Goal: Governance. Security and policy move from the prompt into the protocol, reducing the chance of logical mistakes.
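To make "from the prompt into the protocol" concrete, here is a hedged sketch of how a runtime could enforce Layer 2 annotations as a hard gate. The `ModuleAnnotations` dataclass and `guard` helper below are illustrative stand-ins, not the actual apcore API; only the four flag names come from the article.

```python
# Hypothetical runtime gate driven by Layer 2 annotations. The flag names
# (readonly, destructive, requires_approval, idempotent) follow the article;
# ModuleAnnotations and guard() are illustrative, not the apcore API.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModuleAnnotations:
    readonly: bool = False
    destructive: bool = False
    requires_approval: bool = False
    idempotent: bool = False

def guard(annotations: ModuleAnnotations, human_approved: bool) -> str:
    """Decide whether a call may proceed, independent of any prompt text."""
    if annotations.requires_approval and not human_approved:
        return "blocked: awaiting human approval"
    if annotations.destructive and not human_approved:
        return "blocked: destructive call needs explicit sign-off"
    return "allowed"

# A destructive, approval-gated call is stopped before execution:
decision = guard(
    ModuleAnnotations(destructive=True, requires_approval=True),
    human_approved=False,
)
```

The point of the design: the model can phrase its plan however it likes, but the gate reads structured flags, so a persuasive prompt cannot talk its way past `requires_approval`.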

Layer 3 – Tactical Wisdom (Extensions)

  • x-when-to-use – Positive guidance for the agent’s planner.
  • x-when-not-to-use – Negative guidance to prevent common misfires.
  • x-common-mistakes – Pitfalls discovered during development.

Goal: Tactical wisdom. Human experience is encoded directly in the metadata, avoiding cognitive overload that occurs when all information is crammed into a single description string.
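Because Layer 3 lives in structured `x-` keys rather than one long description string, a planner can pull only the guidance relevant to its current step. A small sketch, assuming a plain metadata dict (the key names follow the article; the `tactical_hints` helper and phase split are illustrative):

```python
# Layer 3 extensions as a plain dict of "x-" prefixed keys. A planner can
# select targeted guidance per phase instead of re-reading one monolithic
# description string. tactical_hints() is a hypothetical helper.
metadata = {
    "x-when-to-use": "Use for outbound payments to external banks.",
    "x-when-not-to-use": "Do not use for internal account transfers.",
    "x-common-mistakes": "Ensure the IBAN includes the country code.",
}

def tactical_hints(meta: dict, phase: str) -> dict:
    """Select only the extension keys relevant to a given phase."""
    wanted = {
        "plan": ("x-when-to-use", "x-when-not-to-use"),
        "execute": ("x-common-mistakes",),
    }[phase]
    return {k: v for k, v in meta.items() if k in wanted}

plan_hints = tactical_hints(metadata, "plan")
```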

Progressive Disclosure & Agent Phases

apcore uses Progressive Disclosure to keep token usage low and reasoning reliable:

  1. Discovery phase – The agent sees Layer 1 only.
  2. Planning phase – The agent loads Layer 2 to check safety and retry rules.
  3. Execution phase – The agent loads Layer 3 to avoid known traps.

By stacking the metadata, we reduce token consumption and significantly increase the reliability of the agent’s reasoning.
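The three phases above can be sketched as a simple layered lookup: each phase assembles only the layers it is allowed to see, so token cost grows with commitment rather than with catalog size. The registry shape below is a hypothetical illustration; only the phase and layer names come from the article.

```python
# Hypothetical Progressive Disclosure: each agent phase loads only the
# metadata layers it needs. Layer/phase names mirror the article; the
# registry structure is illustrative.
MODULE = {
    "layer1": {"description": "Transfer funds to an external IBAN."},
    "layer2": {"destructive": True, "requires_approval": True,
               "idempotent": True},
    "layer3": {"x-common-mistakes":
               "Ensure the IBAN includes the country code."},
}

PHASE_LAYERS = {
    "discovery": ["layer1"],
    "planning": ["layer1", "layer2"],
    "execution": ["layer1", "layer2", "layer3"],
}

def context_for(phase: str) -> dict:
    """Assemble only the metadata layers this phase may see."""
    return {layer: MODULE[layer] for layer in PHASE_LAYERS[phase]}

discovery_ctx = context_for("discovery")   # smallest context
execution_ctx = context_for("execution")   # full 360-degree view
```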

Example: A Fully‑Realized apcore Module

class SensitiveTransferModule(Module):
    # Layer 1: Core
    input_schema = TransferInput
    output_schema = TransferOutput  # illustrative counterpart to TransferInput
    description = "Transfer funds to an external IBAN."

    # Layer 2: Annotations (Governance)
    annotations = ModuleAnnotations(
        destructive=True,
        requires_approval=True,   # Safety gate
        idempotent=True
    )

    # Layer 3: Extensions (Tactical Wisdom)
    metadata = {
        "x-when-not-to-use": "Do not use for internal account transfers.",
        "x-common-mistakes": "Ensure the IBAN includes the country code.",
        "x-preconditions": "User must be MFA authenticated."
    }

“Intelligence” in the agentic era is not a magical property of the model; it is an engineering standard of the module. When you build with the apcore 3‑Layer Philosophy, you are engineering a Skill that any AI can perceive and use with professional precision.

What’s Next

In the next article we’ll tackle the root cause of AI hallucinations: “The Death of ‘String‑Based’ Descriptions in AI Integration.”

This is Article #7 of the apcore: Building the AI‑Perceivable World series. Join us in standardizing the future of AI interaction.

Repository

GitHub: https://github.com/aiperceivable/apcore
