Azure OpenAI's Content Filter: When Safety Theater Blocks Real Work
Source: Dev.to

The Problem
When defining tools for function calling, certain terms trigger Azure’s content filter even when the context is completely benign:
run script→ Blockedclick element→ Blockedfill form field→ Blocked
These are standard operations for any browser‑automation tool (Playwright, Puppeteer, Selenium). Azure’s filter treats them as threats.
The Workaround
The solution is embarrassingly simple: use neutral synonyms.
| Blocked Term | Accepted Alternative |
|---|---|
run script | process dynamic content |
click element | activate page item |
fill form field | update an input area |
execute code | evaluate expression |
inject | insert |
The identical intent with neutral language passes instantly.
Why This Matters
The filter screens tool names and descriptions as part of the prompt itself. It’s pattern‑matching on keywords, not analyzing actual risk. A tool called clickElement that automates form submissions is blocked, while the same tool called activatePageItem passes. The filter provides no additional safety—it just forces developers to use euphemisms.
Comparison with Google Gemini
Testing the same tool definitions with Google’s Gemini models showed no friction whatsoever with procedural phrasing. The tools worked exactly as expected without needing to sanitize the vocabulary. This isn’t about one provider being “less safe”; it’s about Azure implementing safety theater that inconveniences legitimate developers while providing minimal actual protection.
The Deeper Issue
Anyone with malicious intent can simply adopt the euphemisms. The filter doesn’t stop bad actors—it adds friction for legitimate use cases.
Real safety comes from:
- Understanding context and intent
- Rate limiting and monitoring
- User authentication and audit trails
- Clear terms of service with enforcement
Keyword blocking is the security equivalent of banning the word “knife” from cooking websites.
Practical Advice
If you’re building tools with Azure OpenAI function calling:
- Audit your tool names for trigger words before deployment.
- Use neutral, abstract terminology in descriptions.
- Test with actual API calls early—the playground may behave differently.
- Document the translations so your team understands the mapping.
Example of a sanitized tool definition
{
"name": "activatePageItem",
"description": "Activates an interactive item on the page at the specified coordinates",
"parameters": {
"type": "object",
"properties": {
"x": { "type": "number", "description": "Horizontal position" },
"y": { "type": "number", "description": "Vertical position" }
}
}
}
More natural (blocked) version
{
"name": "clickElement",
"description": "Clicks an element on the page at the specified coordinates",
"parameters": { ... }
}
Conclusion
Azure’s content filter for function calling needs refinement. Pattern matching on keywords without context analysis creates friction for developers while providing minimal security benefit. Until that changes, the workaround is simple: speak in euphemisms. Your browser‑automation tool doesn’t “click buttons”—it “activates interactive page items.”
Originally published on javieraguilar.ai.