OpenAI's guardrails don't control cost. Here's the difference.
Source: Dev.to
Overview of OpenAI Guardrails
OpenAI shipped guardrails in the Agents SDK last month.
- Input guardrails – run logic before the agent processes a message (block, redirect, log).
- Output guardrails – run logic after the agent produces a response (flag, filter, hold).
- Tool‑call guardrails – intercept a tool invocation before it fires (approve or reject based on your rules).
These are behavior controls that answer the question “Did my agent do the right thing?” They are real, solve real problems, and work well for validation, content filtering, and tool‑approval logic.
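As a framework-agnostic illustration of that pattern (a sketch only; the function names below are hypothetical, not the Agents SDK's actual API), an input guardrail runs before the model call and an output guardrail runs after it:

```python
# Illustrative sketch of the guardrail pattern; all names here are
# hypothetical stand-ins, not the Agents SDK's real API.

def input_guardrail(message: str) -> None:
    # Block messages before the agent ever sees them.
    if "ssn" in message.lower():
        raise ValueError("blocked: possible PII in input")

def output_guardrail(response: str) -> str:
    # Filter the agent's response after generation.
    for word in {"internal-only"}:
        response = response.replace(word, "[redacted]")
    return response

def run_agent(message: str) -> str:
    input_guardrail(message)           # behavior check before
    response = f"echo: {message}"      # stand-in for the model call
    return output_guardrail(response)  # behavior check after
```

Both checks answer "did the agent do the right thing?"; neither knows or cares what the call cost.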
Limitations Regarding Cost
The guardrails have no concept of spend:
- No `budget_usd` parameter.
- No `on_exceed` hook.
- No token accumulation across a task.
- No cost ceiling per agent function.
This isn’t an oversight; it’s out of scope. OpenAI’s framework focuses on orchestration and quality control, while budget enforcement belongs to a different layer.
The Gap
Your pipeline can pass every guardrail check, produce clean output, and have all tool calls approved—yet still generate a massive bill (e.g., a $47,000 AWS invoice) if a retry loop runs unchecked. Guardrails passed, budget destroyed.
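To make that failure mode concrete, here is a toy simulation (the per-call cost and retry count are invented figures for illustration): every behavior check passes on every attempt, while spend grows linearly with retries.

```python
# Toy simulation of an unchecked retry loop. The per-call cost below
# is an invented figure for illustration, not a real API price.
COST_PER_CALL_USD = 0.04

def passes_guardrails(output: str) -> bool:
    # Behavior check: the output is clean, so guardrails never object.
    return True

spend = 0.0
for attempt in range(1000):           # an unchecked retry loop
    output = "clean, well-formed response"
    assert passes_guardrails(output)  # every check passes...
    spend += COST_PER_CALL_USD        # ...while the bill keeps growing

print(f"total spend: ${spend:.2f}")   # $40.00, with nothing to stop it
```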
Introducing agentguard47
agentguard47 sits below the framework layer and enforces spend limits per agent function.
```python
# agentguard47 example
from openai import OpenAI
from agentguard47 import guard

client = OpenAI()

@guard(budget_usd=2.00, on_exceed="raise")
def run_analyzer(task):
    result = client.responses.create(...)
    return result
```
When accumulated spend reaches the specified budget (e.g., $2.00), the decorator raises an exception that you can catch and handle. This prevents silent loops and surprises at billing time, giving each agent function its own cost ceiling.
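For intuition only, here is a minimal sketch of how a budget-enforcing decorator like this could be built. It is not agentguard47's actual implementation: the exception name is hypothetical, and cost accounting is simplified to a fixed per-call estimate rather than real token usage.

```python
import functools

class BudgetExceeded(Exception):
    """Raised when accumulated spend would pass the ceiling (hypothetical name)."""

def guard(budget_usd: float, cost_per_call_usd: float = 0.01):
    # Simplified sketch: real cost tracking would read token usage from
    # the API response instead of assuming a fixed per-call cost.
    budget_cents = round(budget_usd * 100)
    cost_cents = round(cost_per_call_usd * 100)

    def decorator(fn):
        spent_cents = 0  # accumulate in integer cents to avoid float drift

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            nonlocal spent_cents
            if spent_cents + cost_cents > budget_cents:
                raise BudgetExceeded(f"budget ${budget_usd:.2f} reached")
            spent_cents += cost_cents
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@guard(budget_usd=0.03, cost_per_call_usd=0.01)
def run_analyzer(task):
    return f"analyzed: {task}"

for _ in range(3):
    run_analyzer("doc")  # first three calls fit the $0.03 budget
# a fourth call would raise BudgetExceeded
```

The point of the sketch is the layering: the budget check wraps the whole function, so it fires no matter which framework or client runs inside.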
How to Use agentguard47
- Install: `pip install agentguard47`.
- Wrap any agent function (whether using OpenAI's Agents SDK, LangChain, or a raw `openai` client) with the `@guard` decorator.
- Handle the `on_exceed` action (`raise`, `log`, a custom callback, etc.) to decide what to do when the budget is breached.
Integration with Existing Tools
- Works seamlessly with OpenAI’s Agents SDK.
- Compatible with LangChain pipelines.
- Functions that call the raw OpenAI client can also be guarded.
The decorator is agnostic to the function’s internals; it only tracks cost accumulation and enforces the specified ceiling.
Recommendation
- Use OpenAI’s guardrails for behavior validation, content filtering, and tool‑approval logic.
- Add agentguard47 for spend enforcement, hard stops on budget breaches, and cost‑tracking per agent.
These tools operate at different layers and complement each other. You need both questions answered:
- Did the agent behave correctly? – OpenAI guardrails.
- Did the agent stay within budget? – agentguard47.