HackerOne Sets the Standard for AI-Era Testing with Good Faith AI Research Safe Harbor
New framework standardizes protections for good-faith AI testing, enabling faster, safer research
SAN FRANCISCO, January 20, 2026 — HackerOne, a global leader in Continuous Threat Exposure Management (CTEM), today announced the Good Faith AI Research Safe Harbor, a new industry framework that establishes clear authorization and legal protections for researchers testing AI systems in good faith. As AI systems scale rapidly across critical products and services, legal ambiguity around testing can slow responsible research and increase risk. The new safe harbor removes that friction by giving organizations and AI researchers a clear, shared standard to find and fix AI risks faster and with greater impact.
This announcement builds on HackerOne’s Gold Standard Safe Harbor, introduced in 2022 and widely adopted to protect good-faith security research across traditional software. Together, the two frameworks define how organizations should authorize, support, and protect research across both conventional and AI-powered systems.
AI testing often involves techniques and outcomes that don’t fit neatly into traditional vulnerability disclosure frameworks. The Good Faith AI Research Safe Harbor resolves this by defining Good Faith AI Research and explicitly authorizing responsible AI testing.
“AI testing breaks down when expectations are unclear,” said Ilona Cohen, Chief Legal and Policy Officer at HackerOne. “Organizations want their AI systems tested, but researchers need confidence that doing the right thing won’t put them at risk. The Good Faith AI Research Safe Harbor provides clear, standardized authorization for AI research, removing uncertainty on both sides.”
Organizations that adopt the Good Faith AI Research Safe Harbor commit to recognizing good-faith AI research as authorized activity. This includes refraining from legal action, providing limited exemptions from restrictive terms of service, and supporting researchers if third parties pursue claims related to authorized research. The safe harbor applies only to AI systems owned or controlled by the adopting organization and is designed to support responsible disclosure and collaboration.
“AI security is ultimately about trust,” said Kara Sprague, CEO of HackerOne. “If AI systems aren’t tested under real-world conditions, trust erodes quickly. By extending safe harbor protections to AI research, HackerOne is defining how responsible testing should work in the AI era. This is how organizations find problems earlier, work productively with researchers, and deploy AI with confidence.”
The Good Faith AI Research Safe Harbor is available to HackerOne customers as a standalone framework that can be adopted alongside the Gold Standard Safe Harbor. Programs that adopt it can clearly signal to researchers that AI testing is welcome, authorized, and protected, driving higher-quality testing and stronger outcomes.
With this announcement, HackerOne reinforces its leadership in shaping how security, trust, and authorization work in the AI era, setting clear expectations for organizations, researchers, and the AI systems they build and test.
About HackerOne
HackerOne is a global leader in Continuous Threat Exposure Management (CTEM). The HackerOne Platform unites agentic AI solutions with the ingenuity of the world’s largest community of security researchers to continuously discover, validate, prioritize, and remediate exposures across code, cloud, and AI systems. Through solutions like bug bounty, vulnerability disclosure, agentic pentesting, AI red teaming, and code security, HackerOne delivers measurable, continuous reduction of cyber risk for enterprises. Industry leaders, including Anthropic, Crypto.com, General Motors, Goldman Sachs, Lufthansa, Uber, UK Ministry of Defence, and the U.S. Department of Defense, trust HackerOne to safeguard their digital ecosystems.