How to Use AI Prompting for Security Vulnerabilities

February 6, 2024 Zahra Putri Fitrianti

What Is an AI Prompt?

A prompt is an instruction given to an LLM to retrieve desired information or to have it perform a desired task. There is a great deal we can do with LLMs and a great deal of information we can get by simply asking a question. An LLM isn’t a perfect source of truth (for instance, it can be really bad at math), but it can be an immense well of information if we know how to tap into it effectively. That is both the challenge and the opportunity of AI prompting.

3 Ways to Write Effective AI Prompts

1. Be clear and specific

The clearer and more specific your prompt, the better the model will understand which task to execute. Never assume that the LLM will immediately know what you mean; it’s better to be over-prescriptive than not prescriptive enough.

An example of a minimally effective prompt might be: “This is a very long article, and I want to know only the important things. Can you point them out but make sure it’s not too long?” Your prompt doesn’t necessarily need to be long to be effective. To make the same prompt clearer, you might change it to: “Summarize the top three key findings of the following article in 150 words or less.”
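To make this concrete, here is a minimal sketch of how both versions of that prompt could be sent to a chat-completions API. The OpenAI Python client, model name, and article placeholder are illustrative assumptions, not specifics from this article.

```python
# Sketch: sending the vague vs. the specific summarization prompt to a chat-completions API.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment; the model
# name is an illustrative choice.
from openai import OpenAI

client = OpenAI()

article_text = "..."  # the full article you want summarized

vague_prompt = (
    "This is a very long article, and I want to know only the important things. "
    "Can you point them out but make sure it's not too long?\n\n" + article_text
)

specific_prompt = (
    "Summarize the top three key findings of the following article in 150 words or less.\n\n"
    + article_text
)

for label, prompt in [("vague", vague_prompt), ("specific", specific_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```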

2. Provide context

LLMs like GPT, Claude, and Titan, among others, are trained on very large datasets drawn mostly from public information. This means they lack specific knowledge or context about private or internal domains, such as the fact that “HackerOne Assessments” refers specifically to Pentest-as-a-Service (PTaaS) offered by HackerOne. By spelling out important context like this, you help the LLM produce a better output faster, with less back-and-forth and fewer corrections.
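One common way to supply that kind of context is a system message. The sketch below assumes an OpenAI-compatible chat client and an illustrative model name; the wording of the context and question is invented for illustration.

```python
# Sketch: supplying internal context ("HackerOne Assessments" = PTaaS) via a system message
# so the model doesn't have to guess. Client setup and model name are assumptions.
from openai import OpenAI

client = OpenAI()

system_context = (
    "You are helping a HackerOne analyst. In this organization, 'HackerOne Assessments' "
    "refers specifically to the Pentest-as-a-Service (PTaaS) offering."
)

user_question = "Draft a one-paragraph description of HackerOne Assessments for a customer briefing."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_context},
        {"role": "user", "content": user_question},
    ],
)
print(response.choices[0].message.content)
```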

3. Use examples

Many LLMs are trained to use provided examples and factor that data into their outputs. By providing examples, you give the model more context about your domain, so it can understand your intention better. Examples also reduce ambiguity and steer the system toward more accurate and relevant responses. Think of it like adjusting the settings on a camera to capture the perfect shot: tuning AI with examples helps it focus on your specific needs.
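As a hedged sketch, embedding a worked example in the prompt might look like the snippet below. The client, model name, and the example report/title pair are all invented for illustration.

```python
# Sketch: including a worked example in the prompt so the model mirrors the desired format.
# The example report/title pair is invented; client setup and model name are assumptions.
from openai import OpenAI

client = OpenAI()

prompt = """Write a concise title for the vulnerability report, following the example.

Example report: "A stored XSS was found in the comment field of blog.example.com."
Example title: "Stored XSS in comment field on blog.example.com"

Report: "An attacker can reset any user's password via /forgot-password on app.example.com."
Title:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```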

3 Types of AI Prompts

1. Zero-Shot Prompt

The Zero-Shot prompt tends to be direct and provides the LLM with little to no context. An example of this kind of prompt might be: “Generate an appropriate title that describes the following security vulnerability.” It includes information about a security vulnerability, but it doesn’t define what counts as an “appropriate” title or what the title will be used for. This isn’t necessarily a bad place to start, but a more comprehensive output usually requires more context about the purpose of the prompt.
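Here is what that zero-shot prompt might look like in code; the client setup, model name, and report placeholder are assumptions for illustration.

```python
# Sketch: a zero-shot prompt -- a direct instruction with no examples or extra context.
# Client setup and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

report_text = "..."  # the raw vulnerability report

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Generate an appropriate title that describes the following security vulnerability:\n\n"
                   + report_text,
    }],
)
print(response.choices[0].message.content)
```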

2. One-Shot Prompt

A One-Shot Prompt gives the AI greater context about the needs and purpose of the prompt. For security vulnerabilities, I would ask the LLM for remediation suggestions and provide context about what the report covers. For example: “The report below describes a security vulnerability where a cross-site scripting (XSS) vulnerability was found on the asset xyz.com. Please provide the remediation guidance for this report.”
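A minimal sketch of that one-shot prompt follows, assuming the same OpenAI-compatible client and using a placeholder for the report text.

```python
# Sketch: the one-shot remediation prompt from above, with the report text appended.
# Client setup and model name are assumptions; report_text is a placeholder.
from openai import OpenAI

client = OpenAI()

report_text = "..."  # the full XSS report

prompt = (
    "The report below describes a security vulnerability where a cross-site scripting (XSS) "
    "vulnerability was found on the asset xyz.com. Please provide the remediation guidance "
    "for this report.\n\n" + report_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```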

3. Few-Shot Prompt

Very similar to the One-Shot Prompt, the Few-Shot Prompt provides even more contextual examples and is even more prescriptive about the required outputs. This might look like the prompt below (a code sketch follows the list): “The report below describes an XSS security vulnerability found by a hacker. Extract the following details from the report:

  • Common Weakness Enumeration (CWE) ID of the security vulnerability (example: CWE-79)
  • Common Vulnerabilities and Exposures (CVE) ID of the security vulnerability (example: CVE-2021-44228)
  • Vulnerable host (example: xyz.com)
  • Vulnerable endpoint (example: /endpoint)
  • The technologies used by the affected software (example: graphql, react, ruby)”
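Here is a sketch of that few-shot extraction prompt in code. The OpenAI client, model name, and report placeholder are assumptions for illustration; the listed fields mirror the prompt above.

```python
# Sketch: the few-shot extraction prompt, listing each field with an example value.
# Client setup and model name are assumptions; report_text is a placeholder.
from openai import OpenAI

client = OpenAI()

report_text = "..."  # the hacker's full XSS report

prompt = (
    "The report below describes an XSS security vulnerability found by a hacker. "
    "Extract the following details from the report:\n"
    "- Common Weakness Enumeration (CWE) ID of the security vulnerability (example: CWE-79)\n"
    "- Common Vulnerabilities and Exposures (CVE) ID of the security vulnerability (example: CVE-2021-44228)\n"
    "- Vulnerable host (example: xyz.com)\n"
    "- Vulnerable endpoint (example: /endpoint)\n"
    "- The technologies used by the affected software (example: graphql, react, ruby)\n\n"
    + report_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```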

How to Get Started With Prompting GenAI and LLMs

Crafting effective prompts requires testing and is typically an iterative process. Start by experimenting with a variety of prompts to gauge the AI's responses. A great way to begin is by prompting the AI about a topic you’re well-versed in; that way, you can tell whether the output is accurate. An effective prompt generally yields accurate, relevant, and coherent responses that are in line with your topic of interest. If the response feels off-topic or inaccurate, that’s a good indicator that your prompt needs adjusting. Rephrase it, make it more specific, be clearer, or provide additional context until you achieve the desired results. Keep refining your prompts until they meet your standards, and don’t forget to save your best prompts for future use!
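If you want to experiment programmatically, a simple loop over prompt variants makes it easy to compare outputs side by side and keep the one that works best. The client, model name, topic, and file name below are all assumptions for illustration.

```python
# Sketch: iterating on prompt variants for a topic you already know well, so you can
# judge the outputs yourself. Client setup, model name, and file name are assumptions.
from openai import OpenAI

client = OpenAI()

variants = [
    "Explain what cross-site scripting (XSS) is.",
    "Explain what cross-site scripting (XSS) is in three sentences, for a web developer audience.",
    "Explain what cross-site scripting (XSS) is in three sentences, for a web developer audience, "
    "and include one concrete example payload.",
]

for i, prompt in enumerate(variants, start=1):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- variant {i} ---\n{response.choices[0].message.content}\n")

# Once a variant consistently gives accurate, on-topic answers, save it for reuse.
with open("best_prompts.txt", "a") as f:
    f.write(variants[-1] + "\n")
```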

The team at HackerOne is experimenting with AI in different ways every day, so follow along for more insights into the impacts of AI on cybersecurity.
