OpenAI's guardrails don't control costs. Here's the gap.

Published: May 1, 2026 at 10:00 AM EDT
3 min read
Source: Dev.to

Overview of OpenAI Guardrails

OpenAI shipped guardrails in the Agents SDK last month.

  • Input guardrails – run logic before the agent processes a message (block, redirect, log).
  • Output guardrails – run logic after the agent produces a response (flag, filter, hold).
  • Tool‑call guardrails – intercept a tool invocation before it fires (approve or reject based on your rules).

These are behavior controls that answer the question “Did my agent do the right thing?” They are real, solve real problems, and work well for validation, content filtering, and tool‑approval logic.
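The input-guardrail idea can be sketched framework-agnostically: a predicate that runs before the agent sees a message and decides whether to block it. This is a minimal illustration of the pattern only; the names (`input_guardrail`, `BANNED`) are invented here and are not the Agents SDK's actual API.

```python
# Sketch of the input-guardrail pattern (illustrative, not the SDK's API).
BANNED = {"ssn", "password"}

def input_guardrail(message: str) -> tuple[bool, str]:
    """Return (allowed, reason) before the agent processes the message."""
    lowered = message.lower()
    for term in BANNED:
        if term in lowered:
            return False, f"blocked: contains '{term}'"
    return True, "ok"

allowed, reason = input_guardrail("What's my password reset link?")
print(allowed, reason)  # False blocked: contains 'password'
```

Output and tool-call guardrails follow the same shape, just anchored after the response or before the tool invocation instead.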

Limitations Regarding Cost

The guardrails have no concept of spend:

  • No budget_usd parameter.
  • No on_exceed hook.
  • No token accumulation across a task.
  • No cost ceiling per agent function.

This isn’t an oversight; it’s out of scope. OpenAI’s framework focuses on orchestration and quality control, while budget enforcement belongs to a different layer.

The Gap

Your pipeline can pass every guardrail check, produce clean output, and have all tool calls approved—yet still generate a massive bill (e.g., a $47,000 AWS invoice) if a retry loop runs unchecked. Guardrails passed, budget destroyed.
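A toy simulation makes the failure mode concrete. The per-call cost and the iteration cap below are assumed numbers for illustration; the point is that nothing in the loop ever looks at accumulated spend.

```python
# Toy simulation (assumed numbers): a retry loop that passes every
# behavioral check but accumulates spend with no ceiling.
COST_PER_CALL_USD = 0.12  # hypothetical per-request cost

def flaky_tool() -> bool:
    return False  # always "fails", so the loop never exits on its own

spend = 0.0
attempts = 0
while not flaky_tool():
    attempts += 1
    spend += COST_PER_CALL_USD  # nothing here inspects spend
    if attempts >= 10_000:      # only an arbitrary iteration cap stops it
        break

print(f"{attempts} retries, ${spend:,.2f} spent")  # 10000 retries, $1,200.00 spent
```

Every one of those 10,000 calls could pass input, output, and tool-call guardrails individually.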

Introducing agentguard47

agentguard47 sits below the framework layer and enforces spend limits per agent function.

# agentguard47 example
from openai import OpenAI      # assumes the official openai Python package
from agentguard47 import guard

client = OpenAI()

@guard(budget_usd=2.00, on_exceed="raise")
def run_analyzer(task):
    # calls made inside count toward this function's $2.00 ceiling
    result = client.responses.create(...)
    return result

When accumulated spend reaches the specified budget (e.g., $2.00), the decorator raises an exception that you can catch and handle. This prevents silent loops and surprises at billing time, giving each agent function its own cost ceiling.
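The article doesn't show agentguard47's internals, but as a rough mental model, a budget decorator can be sketched like this. All names here are illustrative, not the package's real code; a real implementation would read token usage from API responses rather than take a caller-supplied `cost_usd`.

```python
# Minimal sketch of how a per-function budget decorator *could* work.
# Illustrative only; not agentguard47's actual implementation.
import functools

class BudgetExceeded(RuntimeError):
    pass

def guard(budget_usd: float):
    def decorate(fn):
        spent = {"usd": 0.0}  # accumulates across calls to this function

        @functools.wraps(fn)
        def wrapper(*args, cost_usd: float = 0.0, **kwargs):
            if spent["usd"] + cost_usd > budget_usd:
                raise BudgetExceeded(f"{fn.__name__} over ${budget_usd:.2f}")
            spent["usd"] += cost_usd
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@guard(budget_usd=2.00)
def run_analyzer(task, **_):
    return f"analyzed {task}"

run_analyzer("doc-1", cost_usd=1.50)    # fine: $1.50 of $2.00 used
# run_analyzer("doc-2", cost_usd=1.00)  # would raise BudgetExceeded
```

The key property is that the counter lives with the decorated function, so each agent function gets its own independent ceiling.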

How to Use agentguard47

  1. Install

    pip install agentguard47
  2. Wrap any agent function (whether using OpenAI’s Agents SDK, LangChain, or a raw openai client) with the @guard decorator.

  3. Handle the on_exceed action (raise, log, custom callback, etc.) to decide what to do when the budget is breached.
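To make step 3 concrete, here is a hedged sketch of dispatching an `on_exceed` policy. The policy values (`"raise"`, `"log"`, a custom callback) come from the article; `handle_exceed` and its signature are invented for illustration and are not the package's real internals.

```python
# Sketch of dispatching an on_exceed policy (names assumed).
import logging

def handle_exceed(policy, fn_name: str, spent: float, budget: float):
    msg = f"{fn_name}: spent ${spent:.2f} of ${budget:.2f} budget"
    if policy == "raise":
        raise RuntimeError(msg)
    if policy == "log":
        logging.warning(msg)  # record the breach and continue
        return None
    if callable(policy):
        return policy(fn_name, spent, budget)  # custom callback decides
    raise ValueError(f"unknown on_exceed policy: {policy!r}")

# "log" just records; a callback can alert, halt a queue, etc.
handle_exceed("log", "run_analyzer", 2.10, 2.00)
result = handle_exceed(lambda *a: "halted", "run_analyzer", 2.10, 2.00)
print(result)  # halted
```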

Integration with Existing Tools

  • Works seamlessly with OpenAI’s Agents SDK.
  • Compatible with LangChain pipelines.
  • Functions that call the raw OpenAI client can also be guarded.

The decorator is agnostic to the function’s internals; it only tracks cost accumulation and enforces the specified ceiling.

Recommendation

  • Use OpenAI’s guardrails for behavior validation, content filtering, and tool‑approval logic.
  • Add agentguard47 for spend enforcement, hard stops on budget breaches, and cost‑tracking per agent.

These tools operate at different layers and complement each other. You need both questions answered:

  1. Did the agent behave correctly? – OpenAI guardrails.
  2. Did the agent stay within budget? – agentguard47.

Documentation & Examples

agentguard47 docs and examples
