Internal vs. Expert Triage: Why the Vulnerability Management Method You Pick Matters

Luke Stephens
Founder & CEO @ Haksec
Jewel Timpe
Triage Leader

Most CISOs already understand that “triage” is not glamorous work. But many still underestimate what triage truly involves. The Australian Signals Directorate has published an excellent long-form explanation of triage; the short version is that triage is the operational engine that turns raw inflow into a controlled queue of real risk, with clear ownership, clear timelines, and clear next actions.

This matters because modern VDPs and bug bounty programs attract volume, and volume attracts noise. On the vulnerability intake side, HackerOne sees that 60 to 80% of vulnerability submissions are invalid, a useful reality check for any team assuming a steady stream of clean, ready-to-fix reports.

At the same time, the broader security operations world is wrestling with the same underlying physics: too much signal to sort through, and too much noise to ignore. In the 2025 SANS Detection and Response Survey, a whopping 73% of respondents identified false positives as a main challenge to threat detection, up from 64% in 2024.

The practical takeaway for CISOs is blunt: triage is not an inbox. It is a production process. If you build it like an inbox, you will not get the outcomes that you're hoping for.

What Internal Triage Looks Like On Paper

In a perfect world, internal triage is surprisingly straightforward:

  1. You acknowledge receipt and set expectations. That seems basic, but established coordinated disclosure guidance calls out acknowledgement timelines explicitly, because fast acknowledgement drives trust and reduces back-and-forth. CERT/CC notes that 24 to 48 hours is a common acknowledgement timeframe for vendors and coordinators.
  2. You validate what was submitted. In practical terms, that means checking scope, reproducing the issue, de-duplicating it against prior reports, and sorting “real security risk” from “informative” or “not applicable” reports. This “identify, verify, resolve” process is baked into standard vulnerability disclosure program definitions.
  3. You categorize and prioritize. This is where many programs quietly break, because “severity” is not the same as “urgency in our environment.” CVSS is designed as an open framework for communicating vulnerability characteristics and severity and, importantly, it includes metric groups that let you incorporate threat and environmental considerations rather than treating every base score as universal truth. These environmental metrics are often underutilized, even though they can play an important role in setting expectations between hackers and the program.

Finally, you hand off to remediation with enough context that engineering can act, and you keep the reporter informed as the case moves through validation, triage, remediation, and closure. The OWASP disclosure guidance is explicit about the behavioral basics:

  • Respond in a reasonable timeline
  • Communicate openly
  • Provide a clear method of reporting

This is the linear version leaders imagine: a clean workflow, clean decisions, and a stable queue.
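
To make that linear model concrete, here is a minimal sketch of the workflow as a forward-only state machine in Python. The stage names, report fields, and transitions are illustrative assumptions for this article, not any platform's schema or API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Stage(Enum):
    """Stages of the idealized, linear triage workflow."""
    RECEIVED = "received"        # acknowledged, expectations set
    VALIDATING = "validating"    # scope check, reproduction, de-duplication
    PRIORITIZED = "prioritized"  # categorized, severity and urgency assigned
    HANDED_OFF = "handed_off"    # remediation owns it, reporter kept informed
    CLOSED = "closed"


# Only forward movement is allowed in the "on paper" model.
ALLOWED_TRANSITIONS = {
    Stage.RECEIVED: {Stage.VALIDATING, Stage.CLOSED},
    Stage.VALIDATING: {Stage.PRIORITIZED, Stage.CLOSED},
    Stage.PRIORITIZED: {Stage.HANDED_OFF},
    Stage.HANDED_OFF: {Stage.CLOSED},
    Stage.CLOSED: set(),
}


@dataclass
class Report:
    report_id: str
    title: str
    stage: Stage = Stage.RECEIVED
    history: list[tuple[Stage, datetime]] = field(default_factory=list)

    def advance(self, new_stage: Stage) -> None:
        """Move the report forward, recording when each stage was reached."""
        if new_stage not in ALLOWED_TRANSITIONS[self.stage]:
            raise ValueError(f"cannot move from {self.stage.value} to {new_stage.value}")
        self.stage = new_stage
        self.history.append((new_stage, datetime.now(timezone.utc)))


report = Report("RPT-001", "Stored XSS in profile page")
report.advance(Stage.VALIDATING)
report.advance(Stage.PRIORITIZED)
report.advance(Stage.HANDED_OFF)
print(report.stage, [stage.value for stage, _ in report.history])
```

The useful side effect of tracking triage this way is that every report carries its own timeline, which is exactly the raw material for the responsiveness and flow metrics discussed later in this piece.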

What Internal Triage Looks Like in Reality

In reality, internal triage rarely fails because a team does not know the steps. It fails because the steps are competing against everything else.

  1. The first failure mode is competing priorities. Internal teams often try to “fit triage in” alongside roadmap delivery, incident response, governance requests, and stakeholder reporting. NIST calls out the value of formalizing actions to accept, assess, and manage vulnerability disclosure reports, precisely because informal handling increases the odds that reports sit and stall. It also increases the frustration felt by the reporter, which may result in full disclosure.
  2. The second failure mode is inconsistent classification. Two analysts (or two engineers) can look at the same report and see different impact, different exploitability, and different urgency. That inconsistency is not just an operational headache. It undermines trust with researchers and corrodes internal confidence in the process. It is one reason decision frameworks beyond raw scoring exist. For example, the US Cybersecurity and Infrastructure Security Agency describes SSVC as a methodology that helps decide vulnerability response actions consistent with stakeholder priorities, rather than assuming a single universal “score equals decision.” A simplified sketch of this kind of decision logic appears at the end of this section.
  3. The third failure mode is volume surges. You can forecast average weekly volume. You cannot forecast exactly when a high profile CVE wave, an attacker trend, or an influx of low quality submissions will hit your program. Evidence-based prioritization signals exist to help cut through this. CISA’s Known Exploited Vulnerabilities (KEV) Catalog is positioned as an input to vulnerability management prioritization frameworks, because exploitation in the wild changes the urgency calculus immediately.
  4. The fourth failure mode is communication overhead. Many teams plan for technical validation and forget that triage is also relationship management: clarifying reports, requesting more evidence, explaining decisions, and keeping expectations sane. CERT/CC and OWASP both emphasize timelines and communication as core components of coordinated disclosure, not optional polish.
  5. The fifth failure mode is hidden operational cost. Hiring and retaining people who can do this work is hard. The global cybersecurity workforce is still under strain, but the conversation is shifting. In 2025, ISC2 moved away from publishing a single “workforce gap” number and instead emphasized that the bigger constraint is access to critical skills. While teams still feel understaffed, the challenge is less about headcount and more about whether the people you do have can cover the skills your environment actually requires.

Even when you have the people, you still have the overhead of noise management across tools and queues. Splunk’s State of Security research highlights how often teams feel stuck in tool work instead of defense: 46% say they spend more time maintaining tools than defending the organization, 59% say they have too many alerts, and 55% deal with too many false positives.

None of this is theoretical. Research on operational security teams repeatedly lands on the same theme: high false positive rates mean manual validation dominates the day, and that manual validation becomes a bottleneck.
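
To show why structured decision logic helps with the consistency problem described in the second failure mode, here is a deliberately simplified, SSVC-flavored sketch in Python. The decision points and cut-offs are abbreviated for illustration and are not the actual SSVC decision trees, which CERT/CC and CISA define in full:

```python
from enum import Enum


class Decision(Enum):
    """Abbreviated versions of SSVC-style outcome categories."""
    TRACK = "track"    # fix within normal release cycles
    ATTEND = "attend"  # needs management attention, fix sooner
    ACT = "act"        # requires immediate, coordinated action


def decide(exploitation_active: bool,
           exposed_to_internet: bool,
           mission_impact_high: bool) -> Decision:
    """Toy decision tree: the same inputs always produce the same answer.

    The point is consistency, not these specific cut-offs, which a real
    program would define with its stakeholders (see CISA's SSVC guidance).
    """
    if exploitation_active and (exposed_to_internet or mission_impact_high):
        return Decision.ACT
    if exploitation_active or (exposed_to_internet and mission_impact_high):
        return Decision.ATTEND
    return Decision.TRACK


# Two analysts feeding in the same facts get the same decision every time.
print(decide(exploitation_active=True,
             exposed_to_internet=True,
             mission_impact_high=False))  # Decision.ACT
```

The value is not in these particular branches but in the fact that the decision is written down, so two analysts looking at the same report land in the same place.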

See how Deriv experienced triage bottlenecks and tightened the loop to minutes.

What Changes with Expert Triage

Partnering with a triage provider does not magically remove your responsibility for vulnerabilities. Security ownership still sits with the organization. What changes is who carries the operational weight of intake, validation, de-duplication, and consistent communication, day after day.

A mature triage provider is built around a few repeatable characteristics:

Dedicated triagers whose primary job is triage

It is not possible to run a successful triage program when your triagers also have ten other responsibilities. HackerOne managed more than 250,000 reports in 2025, which illustrates what “triage at scale” looks like in practice.

Standardized workflows 

Standardized workflows for scope, duplication, reproduction, severity ranking, and report summarization are paramount. HackerOne’s Hai Triage page describes this explicitly: reports are checked for scope, duplication, and context, with human-in-the-loop oversight, and the resulting summaries highlight vulnerability, severity, impact, and remediation steps before escalation.

Noise reduction

60 to 80% of submissions are invalid, but every submission takes time to triage. Programs often receive a mix of high-quality and not-applicable submissions. Streamlined triage helps teams identify and filter low-value reports and duplicates so they can focus faster on the reports that matter most.

Consistency under load

The point of outsourcing is not just “someone else reads the inbox”. The point is consistent throughput, consistent decisions, and consistent communication during both steady state and surge conditions.

For CISOs, the real benefit is not abstract. The benefit is that your internal team spends more of its time on remediation, engineering partnership, and risk decisions, instead of spending its best hours on low-impact administrative tasks.

As programs scale, the value of structured triage becomes even more visible. The combination of human expertise and intelligent tooling can help teams move faster, reduce duplicate effort, and maintain clarity across large volumes of submissions, which is where solutions like Hai’s Insight Agent begin to show their impact in real environments:

“What’s powerful about Hai’s Insight Agent is that it feels like having someone on the team who knows every report that’s ever come through our program. It can surface similarities and differences between submissions, making it easy to spot duplicates or inaccuracies.”

—Clara Andress, Bug Bounty Operations Manager at Zoom

What to Measure and Govern as a CISO

Whether triage is internal or external, you need hard metrics and clear guardrails. Otherwise “triage quality” becomes a vibe.

Start with responsiveness. CERT/CC highlights acknowledgement timelines as a foundational expectation in coordinated disclosure. That maps cleanly to operational KPIs like time to first response.

Track triage speed and flow. HackerOne’s documentation describes the concept of “time from submission to triage” as a distinct metric in report lifecycles, separate from time to close or time to bounty. Even if you do not use HackerOne, the metric design is still sound: measure how long it takes to go from submitted to validated decision, because that is where queues either move or rot.
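
As a quick illustration of how these KPIs differ, the sketch below derives time to first response, time to triage, and time to close from one report's timestamps. The values and field names are hypothetical, not any platform's schema:

```python
from datetime import datetime

# Hypothetical timestamps for a single report; the field names are
# illustrative, not a platform schema.
report = {
    "submitted_at": datetime(2025, 6, 2, 9, 0),
    "first_response_at": datetime(2025, 6, 2, 14, 30),
    "triaged_at": datetime(2025, 6, 4, 11, 0),   # validated decision made
    "closed_at": datetime(2025, 6, 20, 16, 0),
}

time_to_first_response = report["first_response_at"] - report["submitted_at"]
time_to_triage = report["triaged_at"] - report["submitted_at"]
time_to_close = report["closed_at"] - report["submitted_at"]

print("Time to first response:", time_to_first_response)  # 5:30:00
print("Time to triage:", time_to_triage)                  # 2 days, 2:00:00
print("Time to close:", time_to_close)                    # 18 days, 7:00:00
```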

Treat prioritization inputs as a system, not a single number. CVSS is useful for communicating technical severity and includes levers for threat and environmental context. EPSS is designed to estimate the probability of exploitation activity being observed over the next 30 days. And KEV is positioned as a practical prioritization input because it focuses on vulnerabilities known to be exploited. Used together, these signals are far more defensible than “patch by base score alone.”
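
As a rough sketch of treating these signals as a system, the example below checks KEV membership, pulls an EPSS probability from FIRST's public API, and folds in a CVSS base score. The thresholds are placeholders that a real program would set with stakeholders, and the feed locations should be verified against CISA and FIRST documentation:

```python
import json
import urllib.request

# Public data sources (URLs correct at the time of writing; verify them
# against FIRST and CISA documentation before depending on them).
EPSS_API = "https://api.first.org/data/v1/epss?cve={cve}"
KEV_FEED = ("https://www.cisa.gov/sites/default/files/feeds/"
            "known_exploited_vulnerabilities.json")


def fetch_json(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=30) as response:
        return json.load(response)


def priority(cve: str, cvss_base: float) -> str:
    """Combine KEV membership, EPSS probability, and CVSS severity.

    The thresholds below are placeholders for illustration; a real program
    would set them with stakeholders and revisit them regularly.
    """
    kev_ids = {item["cveID"] for item in fetch_json(KEV_FEED)["vulnerabilities"]}
    epss_records = fetch_json(EPSS_API.format(cve=cve))["data"]
    epss = float(epss_records[0]["epss"]) if epss_records else 0.0

    if cve in kev_ids:                    # known exploitation changes urgency immediately
        return "act now"
    if epss >= 0.1 or cvss_base >= 9.0:   # likely exploitation or critical severity
        return "schedule this cycle"
    return "track"


print(priority("CVE-2021-44228", cvss_base=10.0))  # Log4Shell is in KEV: "act now"
```

Used this way, a KEV hit overrides everything else, which mirrors the point above: exploitation in the wild changes the urgency calculus immediately.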

Finally, govern the handoff. A validated report that sits in limbo between security and engineering is still a live risk. UK NCSC guidance on vulnerability management makes triage and prioritization a core part of the process, but it also places responsibility on the organization to own the risks of not updating. In other words: after triage comes executive risk ownership, not more queueing.

AI-Powered, Human-Guided Triage

AI is now part of the triage conversation for a good reason: it can remove a lot of the mechanical load. Classification, duplicate detection, extracting reproduction steps, drafting summaries, and standardizing decision explanations are all areas where automation can help, if it is grounded in mature workflows and reviewed by humans.
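
As a toy illustration of one such mechanical check, the sketch below flags likely duplicates with simple token overlap so a human triager can confirm or reject the match. It is not a description of how any particular product works; the report text, similarity measure, and threshold are all illustrative:

```python
import re


def tokens(text: str) -> set[str]:
    """Lowercased word tokens; a real system would use richer text features."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def similarity(a: str, b: str) -> float:
    """Jaccard similarity between two report descriptions (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0


existing_reports = {
    "RPT-101": "Stored XSS in the profile bio field via unsanitized HTML",
    "RPT-102": "IDOR on /api/invoices allows reading other tenants' invoices",
}
incoming = "Stored cross-site scripting in profile bio field, HTML not sanitized"

# Flag likely duplicates for a human triager to confirm; never auto-close.
for report_id, description in existing_reports.items():
    score = similarity(incoming, description)
    if score >= 0.3:  # illustrative threshold
        print(f"possible duplicate of {report_id} (similarity {score:.2f})")
```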

Platforms like HackerOne’s Hai Triage combine automated intake checks with analyst-led validation, moving reports through structured stages such as first response, needs more information, and validate or close. During this process, triage analysts enrich reports with reproduction steps, impact summaries, and severity suggestions before escalation, allowing internal teams to focus on remediation rather than intake. If you're interested, you can take a look at the Hai Triage docs.

The key qualifier is oversight. HackerOne’s Hai Triage is positioned as AI agents with expert human-in-the-loop oversight. That framing is worth copying even if you build your own system, because it reflects the reality of modern triage: AI accelerates investigation and consistency, but humans still verify findings, adjudicate edge cases, and manage communication with researchers and internal teams.

The CISO perspective should be conservative here. Do not adopt AI to replace triagers. Adopt AI to increase triage throughput, reduce inconsistency, and preserve human attention for the judgement calls that actually change risk.

Good Triage Turns Reports Into Reduced Risk

The closing point is simple: internal triage is difficult to get right. 

If you want to do it in-house, be prepared to do it properly, with a dedicated team, strict processes, efficient systems, and documented guidelines. A good triage model produces real outcomes: validated signal, predictable prioritization, and, ultimately, reduced risk.

Want validated signal without the operational burden? Meet Hai Triage.

About the Authors

Luke Stephens
Founder & CEO @ Haksec

Luke Stephens (known online as Hakluke) is an Australian cybersecurity professional, entrepreneur, and content strategist. He’s the founder of Haksec, a leading offensive security consultancy, and HackerContent, a marketing agency dedicated to cybersecurity brands.

Jewel Timpe
Triage Leader
HackerOne

Jewel Timpe is a Triage Leader at HackerOne, leading elite event triage and bespoke delivery where security research meets operational reality. She holds an MSc in Communications.