What We’ve Learned from 5 Months of Hackbot Activity

In February, we welcomed the first AI hackbots to HackerOne, a new class of security researchers powered by AI, operating under a special set of rules tailored to this emerging frontier. A lot can happen in a few months when you’re moving at the speed of AI. It's time for a recap of what we've learned and where we're headed.
The Past 5 Months: Breaking New Ground
A growing group of hackbots and agentic vulnerability scanners is emerging, and they’re already capturing real-world bounties. Here are some of the most notable developments, showing how AI is evolving from concept to practical application in vulnerability discovery and offensive security.
- Recently, XBOW climbed to the top of the US leaderboard, measured by reputation, demonstrating that AI can be effective at finding certain types of vulnerabilities. What stands out about XBOW's approach is its invention of validators, which offers a potential solution to one of AI’s key challenges: hallucinations and false positives. Strong validation capabilities will become a defining characteristic of all successful hackbots.
- Meanwhile, Autonomous Cyber is developing FUZZ-E, an AI that hacks alongside you. Pursuing every possible exploitable scenario by hand can be tiring; sending AI ahead as a scout so you can focus on the juiciest opportunities is a great example of the future of human-AI collaboration.
- PortSwigger introduced MCP support to Burp Suite, opening the door to AI agents. This integration enables Burp Suite to be controlled via AI clients, such as Claude, a step towards seamless AI-human collaboration within security testing workflows.
- Alias Robotics released Cybersecurity AI (CAI), an extensible, open-source framework for applying AI to bug hunting, democratizing access to the advanced AI needed to build your own hackbots.
The Challenge with AI Hallucinations
As AI becomes part of many hackers’ toolkits, we’re seeing more false or hallucinated vulnerability reports that read convincingly because an LLM wrote them. These reports can be difficult to distinguish from genuine findings, creating noise that undermines the effectiveness of security programs.
We recognize that not every hackbot builder will have access to advanced validator technology, which is why we see validation as a core capability that the HackerOne platform must offer in a world where AI is abundant. With the launch of Hai Insight Agent, we’ve taken our first step towards a future of HackerOne-powered validators that will help maintain our high bar for signal.
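To make the validation idea concrete, here is a minimal, hypothetical sketch of what a validator does: rather than trusting a report's narrative, it checks whether the proof of concept actually demonstrates the claimed behavior. This is an illustrative assumption, not HackerOne's or XBOW's implementation; the `Finding` class and the reflected-payload check are invented for the example.

```python
# Hypothetical sketch of a validator: accept a finding only when its
# proof-of-concept evidence demonstrates the claimed behavior, instead
# of trusting the report text an LLM generated.

from dataclasses import dataclass


@dataclass
class Finding:
    title: str
    payload: str            # attacker-controlled input from the PoC
    observed_response: str  # response captured when replaying the PoC


def validate_reflected_payload(finding: Finding) -> bool:
    """Accept the finding only if the payload appears unencoded in the response."""
    return finding.payload in finding.observed_response


real = Finding(
    "Reflected XSS",
    "<script>alert(1)</script>",
    "<html><script>alert(1)</script></html>",
)
hallucinated = Finding(
    "Reflected XSS",
    "<script>alert(1)</script>",
    "<html>&lt;script&gt;alert(1)&lt;/script&gt;</html>",
)

print(validate_reflected_payload(real))          # True: payload reflected verbatim
print(validate_reflected_payload(hallucinated))  # False: output was HTML-encoded
```

A production validator would replay the PoC against a live target and check many more signals, but the principle is the same: a finding earns trust through reproducible, observable evidence, not through a convincing write-up.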

What This Means for Offensive Security Programs
If you run a bug bounty program, this is your opportunity to start testing the security capabilities of AI. Let hackbots test your assets by pointing them at your program.
Here are three ways to stay ahead:
- Review your policy: Does it allow hackbots to participate? If you restrict “automated testing,” consider rephrasing those rules. Shift the focus from how vulnerabilities are discovered to the quality of the findings. Treat AIs like humans and hold them accountable for delivering valuable, actionable reports, no matter how they're generated.
- Maintain your bounty table: As hackbots get better at finding low- and medium-severity issues, make sure your bounties reflect what truly matters to your business, so you keep rewarding participants for targeting what counts most.
- Invest in validation capabilities: As AI-generated reports become more common, you'll need strong validation to tell real vulnerabilities from AI hallucinations. Consider adopting advanced tools like Hai Insight Agent to stay ahead and keep your program quality high.
What This Means for Security Researchers
If you're a security researcher, the message is clear: incorporate AI into your workflow. Our vision is that every security researcher will have powerful AI within their hacking workbench. The tools are becoming more accessible and powerful:
- Check out the Burp Suite MCP integration for AI-guided web application testing
- Explore Shift for Caido to make AI work for you
- Experiment with CAI to build your own AI hacking agents
The security researchers who embrace these tools early will have a significant advantage as the field continues to evolve and new AI models grow increasingly capable.
What’s Coming Next
We’ve heard your feedback: the HackerOne leaderboard needs to evolve. With AI-powered collectives entering the arena, it is no longer just a competition of individual minds. We’re developing new ways to split the leaderboards between individuals and companies (and potentially collaborations in the future), similar to how we currently distinguish between bug bounty programs and VDPs.
The goal is to maintain transparency and fairness while also acknowledging that crowdsourced security is evolving with the rapid deployment of AI.
The Future of Security Research
We're entering a new era of AI-human collaborative vulnerability discovery. The past five months have shown us that AI isn't replacing human ingenuity: it's augmenting the capabilities of security researchers and opening new possibilities for finding and fixing vulnerabilities at scale. Hackbots are here, they're getting better, and they're changing how we think about cybersecurity.