Azure OpenAI's Content Filter: When Safety Theater Blocks Real Work

Published: January 8, 2026, 03:03 PM EST
2 min read
Source: Dev.to

The Problem

When defining tools for function calling, certain terms trigger Azure’s content filter even when the context is completely benign:

  • run script → Blocked
  • click element → Blocked
  • fill form field → Blocked

These are standard operations for any browser‑automation tool (Playwright, Puppeteer, Selenium). Azure’s filter treats them as threats.

The Workaround

The solution is embarrassingly simple: use neutral synonyms.

Blocked Term → Accepted Alternative

  • run script → process dynamic content
  • click element → activate page item
  • fill form field → update an input area
  • execute code → evaluate expression
  • inject → insert

The identical intent with neutral language passes instantly.
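The substitutions above can be sketched as a small text pass over tool names and descriptions before they are sent to the API. The mapping is copied from the table; the `sanitize` helper itself is my own illustration, not part of any SDK:

```python
import re

# Mapping of filter-triggering phrases to neutral synonyms (from the table above).
SUBSTITUTIONS = {
    "run script": "process dynamic content",
    "click element": "activate page item",
    "fill form field": "update an input area",
    "execute code": "evaluate expression",
    "inject": "insert",
}

def sanitize(text: str) -> str:
    """Replace each known trigger phrase (case-insensitive) with neutral wording."""
    for blocked, neutral in SUBSTITUTIONS.items():
        text = re.sub(re.escape(blocked), neutral, text, flags=re.IGNORECASE)
    return text

print(sanitize("run script, then fill form field"))
# → process dynamic content, then update an input area
```

Running every tool name and description through one function keeps the euphemism mapping in a single place instead of scattered across definitions.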

Why This Matters

The filter screens tool names and descriptions as part of the prompt itself. It’s pattern‑matching on keywords, not analyzing actual risk. A tool called clickElement that automates form submissions is blocked, while the same tool called activatePageItem passes. The filter provides no additional safety—it just forces developers to use euphemisms.

Comparison with Google Gemini

Testing the same tool definitions with Google’s Gemini models showed no friction whatsoever with procedural phrasing. The tools worked exactly as expected without needing to sanitize the vocabulary. This isn’t about one provider being “less safe”; it’s about Azure implementing safety theater that inconveniences legitimate developers while providing minimal actual protection.

The Deeper Issue

Anyone with malicious intent can simply adopt the euphemisms. The filter doesn’t stop bad actors—it adds friction for legitimate use cases.

Real safety comes from:

  • Understanding context and intent
  • Rate limiting and monitoring
  • User authentication and audit trails
  • Clear terms of service with enforcement
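As a rough illustration of the rate-limiting and audit-trail bullets, here is a minimal sliding-window limiter that also records every decision. All names are illustrative and unrelated to any Azure SDK, and the window is global rather than per-user:

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most max_calls within a sliding window, logging every attempt."""

    def __init__(self, max_calls: int, window_s: float):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = deque()      # timestamps of allowed calls
        self.audit_log = []       # (user, action, allowed) audit trail

    def allow(self, user: str, action: str) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()
        allowed = len(self.calls) < self.max_calls
        if allowed:
            self.calls.append(now)
        self.audit_log.append((user, action, allowed))
        return allowed

rl = RateLimiter(max_calls=2, window_s=60)
print([rl.allow("alice", "click") for _ in range(3)])
# → [True, True, False]
```

Controls like this operate on observed behavior rather than vocabulary, which is the point of the list above.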

Keyword blocking is the security equivalent of banning the word “knife” from cooking websites.

Practical Advice

If you’re building tools with Azure OpenAI function calling:

  • Audit your tool names for trigger words before deployment.
  • Use neutral, abstract terminology in descriptions.
  • Test with actual API calls early—the playground may behave differently.
  • Document the translations so your team understands the mapping.
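The first bullet, auditing tool names for trigger words, can be sketched as a small pre-deployment check. The trigger list here is illustrative; build yours from terms you have actually seen blocked:

```python
# Illustrative list of phrases known (or suspected) to trip the filter.
TRIGGER_TERMS = ["run script", "click", "fill form", "execute", "inject"]

def audit_tools(tools):
    """Return (tool_name, field, term) for every trigger-term hit."""
    findings = []
    for tool in tools:
        for field in ("name", "description"):
            value = tool.get(field, "").lower()
            for term in TRIGGER_TERMS:
                if term in value:
                    findings.append((tool.get("name"), field, term))
    return findings

tools = [
    {"name": "clickElement", "description": "Clicks an element on the page"},
    {"name": "activatePageItem", "description": "Activates an interactive item"},
]
print(audit_tools(tools))
# → [('clickElement', 'name', 'click'), ('clickElement', 'description', 'click')]
```

Wiring a check like this into CI catches trigger words before a deployment fails in production.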

Example of a sanitized tool definition

{
  "name": "activatePageItem",
  "description": "Activates an interactive item on the page at the specified coordinates",
  "parameters": {
    "type": "object",
    "properties": {
      "x": { "type": "number", "description": "Horizontal position" },
      "y": { "type": "number", "description": "Vertical position" }
    }
  }
}

More natural (blocked) version

{
  "name": "clickElement",
  "description": "Clicks an element on the page at the specified coordinates",
  "parameters": { ... }
}
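For completeness, here is a sketch of wiring the sanitized definition into a request payload. The `"type": "function"` wrapping follows the OpenAI function-calling schema; the endpoint, key, and deployment name below are placeholders, not real values:

```python
# Build the sanitized tool definition from the article above.
sanitized_tool = {
    "name": "activatePageItem",
    "description": "Activates an interactive item on the page at the specified coordinates",
    "parameters": {
        "type": "object",
        "properties": {
            "x": {"type": "number", "description": "Horizontal position"},
            "y": {"type": "number", "description": "Vertical position"},
        },
    },
}

# Assemble the chat.completions payload with the tool attached.
payload = {
    "messages": [{"role": "user", "content": "Open the login page and press Submit."}],
    "tools": [{"type": "function", "function": sanitized_tool}],
}

# With the openai SDK, this payload would be sent roughly as:
# from openai import AzureOpenAI
# client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                      api_key="<key>", api_version="2024-06-01")
# client.chat.completions.create(model="<deployment>", **payload)

print(payload["tools"][0]["function"]["name"])
# → activatePageItem
```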

Conclusion

Azure’s content filter for function calling needs refinement. Pattern matching on keywords without context analysis creates friction for developers while providing minimal security benefit. Until that changes, the workaround is simple: speak in euphemisms. Your browser‑automation tool doesn’t “click buttons”—it “activates interactive page items.”

Originally published on javieraguilar.ai.
