AI Security vs. AI Safety: Two Pillars CISOs Must Operationalize
Trust is eroding faster than AI is advancing.
A year ago, less than half of security leaders worried about AI risks. Today, nearly 8 in 10 do*. That’s a 63% increase in AI anxiety, and a direct threat to the trust that underpins every digital relationship.
It’s not hard to see why. A rapidly increasing share of enterprise applications now incorporate some kind of AI-powered capability. AI has evolved from isolated pilots to connected ecosystems, models wired to tools, data, and users, expanding both misuse paths and attack surface.
On one side, AI safety risks like harmful outputs, bias, and misuse make headlines; on the other, AI security risks such as exploitation, data poisoning, and model theft are quieter but just as dangerous. CISOs now face a dual challenge: protecting against what AI systems might do and defending against what others might do to them.
A modern security strategy must do both. It should integrate two inseparable pillars, system security and behavioral safety, and validate each through outside-in, continuous, adversarial testing before real users are exposed.
AI Security: Protecting Systems From Exploitation
AI security is about protecting AI systems against exploitation. It’s focused on preventing bad actors from compromising system confidentiality, integrity, or availability. This includes protecting the data that is used in training and inference. Think about dependency attacks on open-source AI libraries, model theft, or data poisoning.
These are not hypothetical scenarios: they are real attack vectors we see tested every day.
AI Safety: Protecting Users From Harmful Output
AI safety is about ensuring that AI systems behave responsibly. This includes preventing harmful or unethical outputs, whether that’s generating offensive language, revealing sensitive information, or providing dangerous instructions.
HackerOne co-founder Michiel Prins described this critical distinction well in a recent webinar.
Why Both Matter for CISOs
Today’s CISOs can’t afford to treat AI as “someone else’s problem.” In our 2025 CISO Report**, we found more than 80% of security leaders are now responsible for AI security and safety.

The need to manage AI security is clear. But AI safety risks grow just as fast, especially when models can be manipulated into producing harmful, biased, or confidential outputs. Strong defenses depend on addressing both.
- If AI security fails, attackers can weaponize your systems against you and your customers.
- If AI safety fails, your organization risks reputational damage, regulatory scrutiny, and public mistrust.
Neither is optional. CISOs must champion both dimensions to build resilient AI systems that can withstand the dual pressures of innovation and adversarial attack.
The Role of Outside-In Testing
Internal controls and automation aren’t enough to simulate creative misuse at runtime. Industry leaders are using outside-in, continuous, adversarial testing to validate controls before real users are exposed.
The data bears this out: Only a third of security leaders feel they have the in-house resources to manage AI-related risks, while 65% say external testing is important or critical for protecting AI systems.
Crowdsourced security adds the human ingenuity that finds blind spots and systemic weaknesses that tools miss, then delivers artifacts your security, legal, and compliance teams can use.
A Call to Action for Security Leaders
The question isn’t whether safety or security are your responsibility—they already are. The real question is how quickly and effectively you can make AI safety and security a seamless part of your offensive security strategy.
Waiting is costly. Acting now builds the trust your board, regulators, and customers expect. Contain the risk before it defines you.
See how enterprises address both AI Security and Safety in the Hacker-Powered Security Report
*Survey methodology: HackerOne and UserEvidence surveyed 99 HackerOne customer representatives between June and August 2025. Respondents represented organizations across industries and maturity levels, including 6% from Fortune 500 companies, 43% from large enterprises, and 31% in executive or senior management roles. In parallel, HackerOne conducted a researcher survey of 1,825 active HackerOne researchers, fielded between July and August 2025. Findings were supplemented with HackerOne platform data from July 1, 2024 to June 30, 2025, covering all active customer programs. Payload analysis: HackerOne also analyzed over 45,000 payload signatures from 23,579 redacted vulnerability reports submitted during the same period.
**Survey methodology: Oxford Economics surveyed 400 CISOs from April to May of 2025. Respondents represented four countries (US, UK, Australia and Singapore) and 13 industries (Telecommunications, Real Estate/Construction, Utilities, Government/Public Sector, Consumer Goods, Education, Retail, Banking/Financial Services/Insurance, Retail/Ecommerce, Manufacturing, Healthcare, Transport/Logistics, and Not-for-profit/Non-profit). 70.5% of respondents worked at publicly-held organizations, while the other 29.5% worked for private organizations. Roughly 2 out of 5 respondents work at smaller organizations (between 1,000 and 2,500 employees); respondents from organizations with at least 10,000 FTEs make up 27% of the sample. Finally, revenue breakdowns are evenly split across 5 revenue buckets: Less than $500m; $501m to $999m; $1b to $4.9b; $5b to $9.9b; and $10b and more.