The Hacker Perspective on Generative AI and Cybersecurity

September 7, 2023 Michiel Prins

Future Risk Predictions

In a recent presentation at Black Hat 2023, HackerOne Founder, Michiel Prins, and hacker, Joseph Thacker aka @rez0, discussed some of the most impactful risk predictions related to Generative AI and LLMs, including:

  • Increased risk of preventable breaches 
  • Loss of revenue and brand reputation
  • Increased cost of regulatory compliance
  • Diminished competitiveness
  • Reduced ROI on development investments

Hacker Herman Satkauskas also points out that, while AI has lowered the barrier to entry for ethical hackers, “malicious attackers will also realize they have the tools at their disposal to conduct cybercrime.”

The Top Generative AI and LLM Risks According to Hackers

According to hacker Gavin Klondike, “We’ve almost forgotten the last 30 years of cybersecurity lessons in developing some of this software.” The haste of GAI adoption has clouded many organizations’ judgment when it comes to the security of artificial intelligence. Security researcher Katie Paxton-Fear aka @InsiderPhD, believes, “this is a great opportunity to take a step back and bake some security in as this is developing and not bolting on security 10 years later.”

Quote from AI Hacker Joseph Thacker


Prompt Injections

The OWASP Top 10 for LLM defines prompt injection as a vulnerability during which an attacker manipulates the operation of a trusted LLM through crafted inputs, either directly or indirectly. Paxton-Fear warns about prompt injection, saying:

“As we see the technology mature and grow in complexity, there will be more ways to break it. We’re already seeing vulnerabilities specific to AI systems, such as prompt injection or getting the AI model to recall training data or poison the data. We need AI and human intelligence to overcome these security challenges.”

Thacker uses this example to help understand the power of prompt injection:

“If an attacker uses prompt injection to take control of the context for the LLM function call, they can exfiltrate data by calling the web browser feature and moving the data that are exfiltrated to the attacker’s side. Or, an attacker could email a prompt injection payload to an LLM tasked with reading and replying to emails.”

Ethical hacker, Roni Carta aka @arsene_lupin, points out that if developers are using ChatGPT to help install prompt packages on their computers, they can run into trouble when asking it to find libraries. Carta says, “ChatGPT hallucinates library names, which threat actors can then take advantage of by reverse-engineering the fake libraries.”

According to Thacker, “The jury is out on whether or not it’s solvable, but personally, I think it is.” He says the mitigation depends on the implementation and deployment of the prompt injection and, “of course, by testing.”

Agent Access Control

“LLMs are as good as their data,” says Thacker. “The most useful data is often private data.”

According to Thacker, this creates an extremely difficult problem in the form of agent access control. Access control issues are very common vulnerabilities found through the HackerOne platform every day. Where access control goes particularly wrong regarding AI agents is the mixing of data. Thacker says AI agents have a tendency to mix second-order data access with privileged actions, exposing the most sensitive information to potentially be exploited by bad actors.

The Evolution of the Hacker in the Age of Generative AI

Naturally, as new vulnerabilities emerge from the rapid adoption of Generative AI and LLMs, the role of the hacker is also evolving. During a panel featuring security experts from Zoom and Salesforce, hacker Tom Anthony predicted the change in how hackers approach processes with AI:

“At a recent Live Hacking Event with Zoom, there were easter eggs for hackers to find — and the hacker who solved them used LLMs to crack it. Hackers are able to use AI to speed up their processes by, for example, rapidly extending the word lists when trying to brute force systems.” 

He also senses a distinct difference for hackers using automation, claiming AI will significantly uplevel the reading of source code. Anthony says, “Anywhere that companies are exposing source code, there will be systems reading, analyzing, and reporting in an automated fashion.”

Hacker Jonathan Bouman uses ChatGPT to help hack technologies he’s less confident with. 

“I can hack web applications but not break new coding languages, which was the challenge at one Live Hacking Event. I copied and pasted all the documentation provided (removing all references to the company), gave it all the structures, and asked it ‘Where would you start?’ It took a few prompts to ensure it wasn’t hallucinating, and it did provide a few low-level bugs. Because I was in a room with 50 ethical hackers, I was able to share my findings with a wider team, and we escalated two of those bugs into critical vulnerabilities. I couldn't have done it without ChatGPT, but I couldn’t have made the impact I did without the hacking community.”

There are even new tools for the education of hacking LLMs — and therefore for identifying the vulnerabilities created by them. Anthony uses “an online game for prompt injection where you work through levels, tricking the GPT model to give you secrets. It’s all developing so quickly.”

How AI Shows the Value of Bug Bounty

It’s no secret that security leaders are faced with the challenging task of articulating the value of their security programs to stakeholders and board members. And one of the most tricky pieces of showcasing that value is comparing how much a bug bounty costs against how much that bug would cost in the hands of a cybercriminal. 

Our hacker community is using AI to prove that value. According to Satkauskas,

“I tried an experiment where I load up the security finding to ChatGPT and ask it ‘How much would this vulnerability cost a company if it was in the wrong hands?’ ChatGPT can provide a ballpark estimate, meaning it’s far easier to make a case for the impact of that finding in your report.”

According to the 7th Annual Hacker Powered Security Report, the average bug bounty across industries is $1,000 and $3,700 for high and critical vulnerabilities. When you consider the potential financial impact of GenAI-facilitated loss data, you can start to estimate the real value of ethical hackers experiements in GenAI to secure your organization.

Chart showing the average cost of bug bounties


Use the Power of Hackers for Secure Generative AI

Even the most sophisticated security programs are unable to catch every vulnerability. HackerOne is committed to helping organizations secure their GAI and LLMs and to staying at the forefront of security trends and challenges. With HackerOne, organizations can:

Contact us today to learn more about how we can help take a secure approach to Generative AI.

Previous Article
Ethical Hacking: Unveiling the Power of Hacking for Good in Cybersecurity
Ethical Hacking: Unveiling the Power of Hacking for Good in Cybersecurity

In an era where data breaches and cyberattacks dominate headlines, a new and unconventional approach to cyb...

Next Article
You're Doing Pentesting Wrong
You're Doing Pentesting Wrong

Pentesting has been around for decades, but it hasn’t undergone the revolution that other security practice...