Behavioral Annotations: Why readonly and destructive guide LLM Planning

Published: May 3, 2026 at 09:07 PM EDT
4 min read
Source: Dev.to

Introduction

In our previous article we discussed how Schemas act as the “Postman” of the apcore ecosystem—ensuring that data is delivered in the correct format. But knowing how to deliver a message isn’t enough for an autonomous agent. The agent also needs to understand the impact of the delivery.

Imagine an agent tasked with “fixing a data inconsistency.” It discovers two modules: common.user.sync and executor.user.reset. Without behavioral context, the agent might pick the reset module because it sounds more “thorough,” not realizing it will delete the entire user profile.

This is why Behavioral Annotations are a core technical pillar of the apcore protocol. In this thirteenth article we explore how these simple boolean flags act as “Cognitive Stop Signs” for AI planners.

Behavioral Annotations Overview

  • Schemas handle the syntax (e.g., “Is it a string? Is it required?”).
  • Annotations handle the semantics (e.g., “Is it safe? Is it permanent?”).

By providing this semantic layer we move from “Code‑Calling” to “Skill‑Perceiving.” The AI agent no longer treats your modules as black boxes; it perceives their personality.
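To make the two layers concrete, here is a hypothetical module descriptor (the field names are illustrative, not the normative apcore wire format): the schema block answers "what shape must the input have?", while the annotations block answers "what happens when I call it?".

```python
# Hypothetical apcore-style module descriptor (illustrative field names).
# The "schema" layer validates syntax; the "annotations" layer declares semantics.
user_reset = {
    "id": "executor.user.reset",
    "schema": {  # syntax: what the input must look like
        "input": {"user_id": {"type": "string", "required": True}},
    },
    "annotations": {  # semantics: what calling it actually does
        "readonly": False,
        "destructive": True,        # permanently deletes profile data
        "idempotent": True,         # resetting twice equals resetting once
        "requires_approval": True,  # pause for a human "Yes" (HITL)
    },
}
```

A planner that only reads the schema sees a harmless one-string function; only the annotations reveal that this is the module from our opening scenario that wipes the user profile.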

Standardized Annotations

The apcore protocol defines a set of standardized annotations grouped into Safety, Execution, and Governance:

| Annotation | Meaning |
| --- | --- |
| `readonly` | No side effects. Safe for discovery and infinite retries. |
| `destructive` | Data will be permanently modified or deleted. |
| `idempotent` | Multiple calls with the same input have the same effect as one. |
| `pure` | Output depends only on input; no external state dependency. |
| `streaming` | The module returns a stream of events/chunks rather than a single block. |
| `cacheable` | Results can be stored for future use. |
| `cache_ttl` | How long (in seconds) the result remains valid. |
| `paginated` | The result is part of a series; requires a cursor/token to continue. |
| `requires_approval` | Pauses execution for a human “Yes” (HITL). |
| `open_world` | Interacts with non-deterministic external systems (e.g., Web, Email). |
| `internal` | Hidden from standard discovery; used for system-to-system calls. |
| `extra` | A catch-all map for surface-specific or custom behavioral hints. |
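One practical use of these flags is filtering a tool catalog before handing it to an agent. The sketch below treats "safe for discovery" as `readonly` and not `destructive`; that reading follows the table above, but the exact policy is an assumption, not a normative apcore rule:

```python
# Sketch: reduce a tool catalog to modules an agent may probe freely.
# Descriptor shape is illustrative, not the apcore wire format.

def safe_for_discovery(modules):
    """Return IDs of modules whose annotations permit side-effect-free calls."""
    safe = []
    for mod in modules:
        ann = mod.get("annotations", {})
        # Missing flags default to "not declared safe": annotate or be excluded.
        if ann.get("readonly") and not ann.get("destructive"):
            safe.append(mod["id"])
    return safe

catalog = [
    {"id": "common.user.sync", "annotations": {"readonly": True}},
    {"id": "executor.user.reset", "annotations": {"destructive": True}},
]
```

Note the defensive default: a module that declares nothing is treated as unsafe, which is the conservative choice when the planner cannot see the implementation.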

How an LLM Uses These Flags (Planning Phase)

When a sophisticated agent (e.g., Claude 3.5 or GPT‑4o) receives a list of tools, it builds a Plan of Action.

  • If a module is marked destructive: true, the model’s internal safety alignment often triggers a caution state.
  • The agent may first check for a “dry‑run” flag, or it might ask the user for confirmation:

“I have found a way to fix this, but it requires a destructive database operation. Do you want me to proceed?”

Without these annotations the agent is blind—it executes the plan first and discovers the consequences later, which is usually too late.
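That caution state can be approximated as an explicit execution gate. This is a minimal sketch of one possible gating policy, not the apcore runtime; the `confirm` callback stands in for whatever human-in-the-loop channel the agent has:

```python
def gate_call(module, confirm):
    """Decide whether a planned module call may run.

    `confirm` is a callback that asks the human operator (HITL).
    This policy is an illustration, not apcore's actual behavior.
    """
    ann = module.get("annotations", {})
    if ann.get("readonly"):
        return True  # no side effects: always safe to execute
    if ann.get("destructive") or ann.get("requires_approval"):
        # Surface the consequence before acting, not after.
        return confirm(f"{module['id']} is a destructive operation. Proceed?")
    return True

reset = {"id": "executor.user.reset", "annotations": {"destructive": True}}
```

The key property is ordering: the question is asked during planning, before any side effect occurs, which is exactly what the raw function call cannot do on its own.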

Automated Annotations with apexe

The power of automated annotations is a highlight of apexe, our tool for wrapping existing CLIs. When you run:

```shell
apexe scan git
```

the tool does more than extract parameters; it uses pattern matching to classify commands:

  • `git status` and `git log` → `readonly: true`
  • `git push --force` and `git reset --hard` → `destructive: true`

By simply scanning help text, apexe creates a Safe Workspace where an AI agent can browse your repository without accidentally blowing up your production branch.
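The classification step can be sketched with ordinary pattern matching. The rules below are illustrative heuristics for git, not apexe's actual rule set:

```python
import re

# Illustrative heuristics for classifying git commands by behavior.
# Destructive patterns are checked first: a command that matches both
# lists must be treated as dangerous.
DESTRUCTIVE = [r"push\s+--force", r"reset\s+--hard", r"branch\s+-D"]
READONLY = [r"\bstatus\b", r"\blog\b", r"\bdiff\b", r"\bshow\b"]

def classify(command):
    """Return a behavioral annotation dict for a CLI command string."""
    if any(re.search(p, command) for p in DESTRUCTIVE):
        return {"destructive": True}
    if any(re.search(p, command) for p in READONLY):
        return {"readonly": True}
    return {}  # unknown: declare nothing rather than guess "safe"
```

Returning an empty dict for unrecognized commands matters: an unannotated command falls back to whatever conservative default the agent applies, instead of being silently whitelisted.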

Conclusion & Next Steps

Engineering for AI means engineering for Cognitive Safety. By using apcore Behavioral Annotations you turn raw functions into “Professional Skills,” giving the AI the wisdom it needs to plan responsibly, reducing token waste, and preventing agentic disasters.

Next, we’ll dive into the AI’s Short‑Term Memory: the Context Object and how it manages traces and state across complex module chains.

This is Article #13 of the apcore: Building the AI‑Perceivable World series. Safety is a protocol‑level primitive.
