Inside the AI Red Teaming CTF: What 200+ Players Taught Us About Breaking and Defending LLMs | Offensive Security, AI, AI Red Teaming | October 23rd, 2025
200+ players, 10 days, 11 challenges. See what the HackerOne × Hack The Box AI red teaming CTF uncovered about breaking and defending LLMs.
AI Security 101: What is AI Red Teaming? | AI, AI Red Teaming | September 17th, 2025
Learn what AI red teaming is, why it matters, and how it helps organizations secure AI systems against adversarial attacks and misuse.
Government Cybersecurity Leaders Embrace Crowdsourcing, But Must Commit to AI Defense | Crowdsourced Security, AI, AI Red Teaming | August 29th, 2025
Government CISOs succeed with crowdsourced security. To stay resilient, they must bring the same focus to AI security.
What Security Leaders Need to Know About the UK’s Updated Cyber Framework | Public Policy, AI Response, AI Red Teaming | August 26th, 2025
Learn how the UK’s CAF v4.0 updates impact cybersecurity risk assessment and what security leaders need to know. Get actionable insights.
DEF CON 33: Field Notes on AI Security, AI Red Teaming, and the Road Ahead | AI, AI Red Teaming | August 21st, 2025
DEF CON 33 revealed where AI security is headed: continuous testing, hybrid models, and agentic systems. See the key takeaways.
LLMs adversarial testing with HackerOne: Join the CTF! | HackerOne News, AI Red Teaming | August 19th, 2025
Push the limits of AI safety in [ai_gon3_rogu3], a 10-day CTF by HackerOne & HTB. Register now to test and break LLMs in real adversarial scenarios.
How Anthropic’s Jailbreak Challenge Put AI Safety Defenses to the Test | AI, Offensive Security, AI Red Teaming | March 3rd, 2025
Last month, Anthropic partnered with HackerOne to complete an AI red teaming challenge on a...
Unlocking Trust in AI: The Ethical Hacker's Approach to AI Red Teaming | Public Policy, AI, AI Red Teaming | December 19th, 2023
HackerOne offers robust AI Red Teaming services that help organizations bolster the security, fairness, and reliability of their AI deployments.