GPT-5.5 Bio Bug Bounty

Published: April 22, 2026 at 08:00 PM EDT

Source: OpenAI Blog

Model in scope

  • GPT‑5.5 in Codex Desktop only.

Challenge

Identify one universal jailbreaking prompt that elicits successful answers to all five bio-safety questions from a clean chat without triggering moderation.

Rewards

  • $25,000 to the first true universal jailbreak that clears all five questions.
  • Smaller awards may be granted for partial wins at our discretion.

Timeline

  • Applications open: April 23, 2026 (rolling acceptances)
  • Application deadline: June 22, 2026
  • Testing period: April 28, 2026 – July 27, 2026

Access

We will extend invitations to a vetted list of trusted bio red-teamers and review new applications on a rolling basis. Selected applicants will be onboarded to the bio bug bounty platform.

Disclosure

All prompts, completions, findings, and communications are covered by an NDA.

Application

Submit a short application (name, affiliation, experience) by June 22, 2026:
Apply here

Accepted applicants and collaborators must have existing ChatGPT accounts and will sign an NDA. Apply now and help us make frontier AI safer.
