Responsible AI at HackerOne

By Jobert Abma, Co-founder and Principal Software Engineer, and Alex Rice, Co-founder and CTO

Generative Artificial Intelligence (GenAI) is ushering in a new era of how humans leverage technology. At HackerOne, we are combining human intelligence with artificial intelligence at scale to make people more efficient and to unlock entirely new capabilities. This blog covers our approach and the principles we follow to ensure model safety.

HackerOne's AI can already be used to:

1. Help automate vulnerability detection, for example with Nuclei (see the sketch after this list)

2. Provide a summary of a hacker's history across many vulnerabilities

3. Provide remediation advice, including suggested code fixes

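To make the first capability concrete, below is a minimal sketch of what LLM-assisted detection could look like: a model drafts a Nuclei template from a vulnerability report, a human reviews it, and the open-source nuclei scanner re-tests the target. The draft_template() stub, the example template, and the target URL are illustrative assumptions, not HackerOne's actual pipeline.

```python
# Sketch: an LLM drafts a Nuclei template from a report, a human reviews
# it, and the open-source `nuclei` scanner re-tests the target.
# draft_template() is a stub; the template and URL are illustrative only.
import shutil
import subprocess
import tempfile

def draft_template(report_text: str) -> str:
    # Stands in for a call to a self-hosted LLM that turns a vulnerability
    # report into a Nuclei template. Returns a canned example here.
    return """\
id: reflected-xss-search
info:
  name: Reflected XSS in search form
  severity: medium
http:
  - method: GET
    path:
      - "{{BaseURL}}/search?q=<script>alert(1)</script>"
    matchers:
      - type: word
        part: body
        words:
          - "<script>alert(1)</script>"
"""

report = "The q parameter of /search reflects user input without encoding."
template = draft_template(report)
print(template)  # human review step: inspect the template before running it

with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write(template)

# Re-test the finding only if the nuclei binary is available on PATH.
if shutil.which("nuclei"):
    subprocess.run(["nuclei", "-u", "https://target.example", "-t", f.name])
```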


The Power of Large Language Models (LLMs)

Language is at the heart of hacking. Hackers communicate security vulnerabilities as text, and collaboration between customers, hackers, and HackerOne security analysts is, for the most part, text as well. Before LLMs, HackerOne used two parallel strategies to understand vulnerability data: feature extraction (machine learning) and creating structure where there wasn't any (normalization). Both helped us build rich reporting, analytics, and intelligence.

Now Large Language Models (LLMs) give us a powerful third strategy: leveraging fine-tuning, prompt engineering, and techniques such as Retrieval-Augmented Generation (RAG) to simplify many typical machine learning tasks. Text generation, text summarization, feature and text extraction, and even text classification have become table stakes. LLMs enable us and everyone on HackerOne to significantly increase the efficiency of existing processes, and in the future they will scale the detection of security vulnerabilities, support better prioritization, and accelerate remediation.
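
As an illustration of the RAG pattern named above, here is a minimal sketch: the reports most relevant to a question are retrieved and placed in the prompt at inference time. The bag-of-words embedding and the generate() stub are toy placeholders for a real embedding model and a self-hosted LLM; none of these names are HackerOne APIs.

```python
# Minimal RAG sketch: retrieve the reports most relevant to a question,
# then pass them to an LLM as context. embed() and generate() are toy
# placeholders, not HackerOne APIs.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: a bag-of-words vector. A real system would use a
    # learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the question; keep the top k.
    q = embed(question)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Stands in for an inference call to a private, self-hosted model.
    return f"[model response to a prompt of {len(prompt)} characters]"

def answer(question: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(question, documents))
    return generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

reports = [
    "Report 1021: reflected XSS in the search form via the q parameter.",
    "Report 1088: IDOR exposing invoice PDFs to other tenants.",
    "Report 1130: missing rate limiting on the login endpoint.",
]
print(answer("Which report describes cross-site scripting?", reports))
```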

HackerOne’s Approach and Principles for Responsible AI

We've been around groundbreaking technology long enough to know that there are always unintended consequences and that everything can be hacked. We have carefully reviewed these risks in consultation with numerous customers, hackers, and other experts. Today, we're ready to share the principles that guide our work for further discussion.

Foundation in Large Language Models (LLMs)

At the core of our AI technology lies a foundation of state-of-the-art LLMs. These powerful models serve as the basis for how our AI interacts with the world. What sets us apart is the proprietary insight we build on top of these models, trained on real-world vulnerability information and tailored to the specific use cases people on HackerOne engage in. By combining the strengths of foundation LLMs with our specialized knowledge and vulnerability data, we create a potent tool for discovering, triaging, validating, and remediating vulnerabilities at scale.

Data Security and Confidentiality

Security and confidentiality are embedded in our approach. We understand that customer and hacker vulnerability information is highly sensitive and must remain under their control. We do not leverage any multi-tenant or public LLMs. At no point do AI prompts or private vulnerability information leave HackerOne infrastructure or get transmitted to third parties.

Tailored Interactions

One size does not fit all in the world of security. We address the risk of unintended data leakage by ensuring that our AI models are tailored specifically to each customer. We do not use your private data to train our models. Rather, our approach lets you make your private data available to the model at inference time with techniques such as Retrieval-Augmented Generation (RAG). This ensures your data remains secure, confidential, and private to you and your interactions only.
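
To make that isolation concrete, here is a minimal sketch, assuming a simple keyword matcher in place of embedding search and hypothetical names throughout: retrieval is hard-filtered to a single customer's data before anything reaches the prompt.

```python
# Sketch of per-tenant retrieval: every document is tagged with its
# owner, and the tenant filter is applied before ranking, so another
# customer's data can never enter the context window. Hypothetical names.
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    customer_id: str
    text: str

STORE = [
    Document("acme", "Report 2201: SQL injection in the /export endpoint."),
    Document("acme", "Report 2210: open redirect on /login."),
    Document("globex", "Report 3105: SSRF in the image proxy."),
]

def retrieve_for(customer_id: str, query: str) -> list[Document]:
    # Hard tenant filter first; keyword match stands in for embedding search.
    candidates = [d for d in STORE if d.customer_id == customer_id]
    words = query.lower().split()
    return [d for d in candidates if any(w in d.text.lower() for w in words)]

def build_prompt(customer_id: str, question: str) -> str:
    context = "\n".join(d.text for d in retrieve_for(customer_id, question))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("acme", "injection"))  # only acme's reports can appear
```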

Human Agency

Finally, we have instilled a governing principle requiring that AI be deployed with strong human-in-the-loop oversight. We believe in human-AI collaboration, where technology serves as a copilot, enhancing the capabilities of security analysts and hackers. Technology is a tool, not a replacement for invaluable human expertise.
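
One way to picture that oversight is a simple approval gate, sketched below with hypothetical names: an AI-generated suggestion has no effect until a human analyst explicitly signs off.

```python
# Sketch of a human-in-the-loop gate: an AI-generated suggestion is
# inert until an analyst approves it. Names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    report_id: int
    action: str                      # e.g. "close as duplicate of #1021"
    approved: bool = False
    approver: Optional[str] = None

def approve(s: Suggestion, analyst: str) -> None:
    # The human review step: nothing is applied without this call.
    s.approved = True
    s.approver = analyst

def apply(s: Suggestion) -> None:
    if not s.approved:
        raise PermissionError("AI suggestions require analyst approval")
    print(f"Report {s.report_id}: {s.action} (approved by {s.approver})")

s = Suggestion(report_id=1088, action="mark as triaged, severity high")
approve(s, analyst="alex")
apply(s)
```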

And, as with all technology we develop, AI is in scope for our bug bounty program.

What’s Next

Far too often throughout history, emerging technologies have been developed with trust, safety, and security as afterthoughts. We are changing the status quo. We are committed to enhancing security through safe, secure, and confidential AI, tightly coupled with strong human oversight. Our goal is to give people the tools they need to achieve security outcomes beyond what has been possible to date, without compromise.

We have already started rolling out our models to customers and security analysts. Over the next few months, we will expand this to everyone, including hackers. We're beyond excited to share more details on the specific use cases we're enhancing with AI.

Welcome to the future of hacking!