Making the Business Case for AI Security: How to Show Value with Return on Mitigation

If you’ve ever tried to explain AI-specific threats, such as prompt injection, model leakage, or data poisoning, to nontechnical stakeholders, you’ve probably seen their eyes glaze over. This makes it a challenge to convey why securing your AI stack matters now, and why it deserves budget next to more familiar priorities like cloud or endpoint protection.

Security leaders need a different approach. That’s where return on mitigation (RoM) comes in.

RoM is about flipping the conversation from “what could go wrong” to “what’s the financial value of fixing it before it does?”

How Is Return on Mitigation Different from ROI?

RoM is a way to quantify the value of proactive security work. Instead of trying to calculate a fuzzy ROI for an investment that doesn’t directly generate revenue, RoM asks: “What losses are we avoiding by mitigating this risk, and how much did it cost us to do that?”

And when applied to AI security, it’s one of the clearest ways to explain to leadership why the work you’re doing matters.

We asked more than 550 security leaders how RoM can align security initiatives with critical business goals in our latest whitepaper: When ROI Falls Short: A Guide to Measuring the Value of Cybersecurity Investments with Return on Mitigation.

RoM and AI Security: Why It Matters Now

AI deployments, whether predictive or generative AI, don’t behave like traditional software. They introduce new risks, operate on huge volumes of sensitive data, and can be manipulated in ways that other systems can’t.

Yet most organizations still apply traditional security metrics to these systems. RoM gives you a way to talk about AI security (and your entire security program) in business terms, especially when you need to:

  • Secure budget for additional testing or adversarial assessments
  • Prioritize remediation in complex AI pipelines
  • Justify security controls in AI governance reviews

A Hypothetical Example

Let’s say your company builds a predictive AI model that evaluates loan applications. The model uses sensitive customer data (PII) to determine creditworthiness in real time.

A researcher discovers a vulnerability in the API layer: with a few inputs, an attacker could manipulate decision outcomes, approving fraudulent applications or rejecting legitimate ones. This could lead to regulatory penalties, financial fraud, and reputational damage.

If exploited, you estimate the total breach impact at $3.5 million, broken down as:

  • $2M in potential fraud losses
  • $1M in regulatory penalties under U.S. fair lending laws
  • $500K in reputational damage and response costs (for an idea of how you can estimate such numbers, check out this mini RoM calculator.)

Because this issue affects financial decision integrity and regulatory compliance, it qualifies as a critical vulnerability under the RoM model. Based on severity and exposure, your team assigns an Exploitation Likelihood Score (ELS) of 0.15 (i.e., a 15% chance of exploitation within the year, given the vulnerability’s exposure and impact), at the upper end of the “critical” range defined in the RoM whitepaper.

Return on Mitigation Calculation

Total mitigation costs, including a $15,000 bounty reward paid directly to the researcher, internal patching, security validation, and enhanced monitoring, come to $65,000.

  • Expected loss avoided = $3,500,000 total impact × 0.15 ELS = $525,000
  • RoM = ($525,000 – $65,000) / $65,000 × 100 ≈ 708%

That means every $1 spent on fixing this critical AI vulnerability prevented over $7 in potential loss, a 7x mitigation return on the security investment. This is exactly the kind of outcome that helps security teams win executive buy-in: quantifiable, risk-adjusted, and grounded in real-world threats.
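The arithmetic above is simple enough to sketch in a few lines of Python. This is just a restatement of the article’s formula using its hypothetical figures ($3.5M impact, 0.15 ELS, $65K mitigation cost); the function name and structure are illustrative, not part of any RoM tooling.

```python
# Sketch of the RoM calculation from the example above.
# All dollar figures and the 0.15 likelihood score are the
# hypothetical values used in this article, not real data.

def return_on_mitigation(total_impact, likelihood, mitigation_cost):
    """RoM as a percentage: (avoided loss - cost) / cost * 100."""
    avoided_loss = total_impact * likelihood  # risk-adjusted expected loss
    return (avoided_loss - mitigation_cost) / mitigation_cost * 100

# Breach impact: $2M fraud + $1M penalties + $500K reputation/response
impact = 2_000_000 + 1_000_000 + 500_000  # $3.5M total
rom = return_on_mitigation(impact, likelihood=0.15, mitigation_cost=65_000)
print(f"RoM: {rom:.0f}%")  # → RoM: 708%
```

Because the Exploitation Likelihood Score scales the raw impact before costs are subtracted, the same mitigation spend yields a very different RoM for a low-likelihood issue, which is what keeps the metric risk-adjusted rather than worst-case.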

Tips for Communicating RoM to the Business

  • Speak in dollars, not vulnerabilities. Frame risk in terms of losses avoided.
  • Tie threats to operations. Make it clear how an AI exploit could disrupt workflows, violate policies, or expose the business.
  • Anchor in likelihood and impact. Not every AI issue is high-risk, but when one is, help stakeholders understand why it’s worth addressing now.
  • Avoid the “what if” trap. Use RoM to keep the conversation rooted in quantifiable scenarios, not just fear of the unknown.

Focus on the Bottom Line with RoM

Boards and business leaders may not care about prompt injection, but they care deeply about financial losses, regulatory scrutiny, and brand reputation.

RoM gives security leaders a way to connect the dots between AI security work and business outcomes, turning technical diligence into strategic foresight. 

Learn more about how HackerOne’s AI Copilot Hai assists with calculating RoM.