OpenAI opened applications on April 23 for a specialized bug bounty targeting biological safety vulnerabilities in GPT-5.5, citing concern that the model's advanced reasoning capabilities lower the barrier for bioweapon-related queries. Vetted participants — AI red teamers, cybersecurity professionals, and biosecurity experts — are challenged to craft a single universal prompt that bypasses all five of the model's biosafety guardrails without triggering moderation; the first to succeed earns $25,000. Testing runs April 28 through July 27, all findings are under NDA, and partial successes that yield useful threat intelligence may receive discretionary awards. Applications close June 22 via the program portal.