Research Report

The Expanding AI Attack Surface

Closing Risk Gaps Through Continuous Security

In this report

More than a third of security leaders fall into the AI Security Coverage Gap

HackerOne surveyed 303 security leaders overseeing security testing for their organization’s AI/ML systems. The findings show a widening coverage gap: AI innovation is accelerating, but formal testing coverage is uneven. As AI systems multiply, risk compounds, especially when visibility and testing cadence don’t keep pace with what’s actually deployed.

  • 94% expanded their AI/ML footprint this year
  • 66% formally test most systems
  • $2M+ in reported remediation costs

AI is Scaling Faster Than Validation

When AI footprints expand, the attack surface expands with them. Our research links higher testing coverage to fewer organizations reporting attacks, but also shows coverage isn’t evenly distributed. A significant share of organizations are still testing large portions of their AI environment inconsistently, creating unmanaged surface area and avoidable exposure.
 

[Image: AI Security Gap]

In the report

  • A clear definition of the AI Security Coverage Gap: organizations that formally test 60% or less of their AI footprint.
  • Evidence that closing the gap improves outcomes: organizations testing most systems are 16% less likely to report an AI-related attack or vulnerability.
  • A practical way to think about AI security testing maturity: coverage versus breadth, and why “just more testing” isn’t the answer.
  • A breakdown of common AI security testing methods and when to use them to improve visibility and reduce blind spots.
  • Guidance on where testing is often uneven, and why risk grows fastest where models meet real systems.