Europe’s New AI Security Standard: What It Means for Vulnerability Disclosure and AI Red Teaming

Michael Woolslayer
Policy Counsel

Governments are moving quickly from AI risk conversations to real requirements. The challenge is that AI changes constantly after launch: models are updated, guardrails evolve, integrations shift, and new data flows appear.

That’s why the newest AI security standards are emphasizing something many organizations still struggle to operationalize: continuous vulnerability disclosure and continuous validation, not just a one-time pre-release check.

Last week, we participated in a workshop with stakeholders from industry and the UK’s Department for Science, Innovation & Technology (DSIT) to discuss practical steps toward addressing that challenge, centered on the new AI security standard released late last year.

In late 2025, the European Telecommunications Standards Institute (ETSI) published ETSI EN 304 223, Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems.

While the document covers many aspects of AI security, it strongly signals the importance of vulnerability management and security testing, areas where independent researchers and red teams play a critical role.

An Overview of ETSI EN 304 223

Aligning closely with the United Kingdom’s voluntary Code of Practice for the Cyber Security of AI finalized in January 2025, the new standard applies to organizations that develop, deploy, or operate AI systems, from model creators to system operators that embed AI into their products or services. It is designed for AI systems that are actually deployed, not purely academic research models.

The standard organizes security requirements across the full AI lifecycle, including design, development, deployment, maintenance, and end of life. It uses a mix of mandatory “shall” requirements and recommended “should” practices, making it suitable both for compliance assessments and practical implementation.

As a European Standard, ETSI EN 304 223 will be adopted across EU member states, with national implementations expected by late 2026. While the standard is not itself legally binding, it is likely to influence regulatory expectations, audits, and procurement requirements, especially alongside the EU AI Act.

Vulnerability Disclosure Becomes a Baseline Expectation

One of the clearest signals in ETSI EN 304 223 is that vulnerability management is a baseline requirement for secure AI systems.

The standard explicitly requires organizations that develop or operate AI systems to publish a clear and accessible vulnerability disclosure policy.

In simple terms, companies must explain how security researchers and others can report vulnerabilities in AI systems, and how those reports will be handled.

This matters because many AI systems today lack basic reporting pathways, even as they are deployed in high-impact settings. ETSI treats AI vulnerabilities as real cybersecurity risks, not hypothetical concerns, and expects organizations to engage constructively with external reports.

The standard also addresses what happens after a vulnerability is disclosed. Developers are expected to provide security updates and patches where possible, notify system operators, and have contingency plans for situations where fixes are difficult, such as models that cannot easily be updated. Logging and monitoring are required to support investigation and remediation.

The overall message is straightforward: responsible AI deployment includes coordinated vulnerability disclosure and meaningful follow-through.

Security Testing is Required, Not Assumed

ETSI EN 304 223 also takes a firm stance on testing. AI systems must undergo security testing before release, and system operators are expected to test systems again before deployment.

This testing is not limited to whether a system works as intended. The standard calls for evaluation of how AI systems behave under adversarial or unexpected conditions, including misuse, manipulation, or attempts to extract sensitive information from model outputs.

Importantly, ETSI encourages the use of independent security testers with expertise relevant to AI systems. It clearly recognizes the value of external, adversarial testing like AI red teaming in uncovering real-world risks.

AI Red Teaming is an Ongoing Responsibility

Another key theme in the standard is that testing does not end at launch. AI systems change over time as models are updated, data shifts, and configurations evolve. Each change can introduce new security risks.

ETSI responds by requiring organizations to re-run evaluations on models they use and to treat major updates as effectively new releases from a security standpoint. The standard also highlights AI-specific risks that continuous testing should address, such as:

  • Outputs that reveal sensitive model or training data
  • Responses that give users unintended control or influence
  • Behavioral changes caused by data drift or poisoning

These are precisely the kinds of issues AI red teaming is designed to uncover.

What Security Leaders Can Do to Prepare for the New Standard

If you’re building or deploying AI systems in Europe, ETSI EN 304 223 is a strong preview of what external stakeholders will soon expect. Here are four steps that can reduce risk and improve readiness.

  1. Publish or refresh your vulnerability disclosure policy

Make it easy for researchers and users to report issues. Define scope, safe harbor language, and response expectations.
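One lightweight way to make reporting easy is a machine-readable security.txt file (RFC 9116) served at /.well-known/security.txt, pointing researchers to your intake channel and your full policy. A minimal sketch, with placeholder URLs and dates:

```text
# Served at https://example.com/.well-known/security.txt (RFC 9116)
# Where to send reports (a web form or a mailto: address)
Contact: https://example.com/report-vulnerability
# Date after which this file should be considered stale
Expires: 2026-12-31T23:59:59Z
# Full disclosure policy: scope, safe harbor, response expectations
Policy: https://example.com/security/disclosure-policy
Acknowledgments: https://example.com/security/acknowledgments
Preferred-Languages: en
```

The file does not replace the policy itself; it simply gives researchers a predictable place to find it.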

  2. Prove you can validate real risk, not just generate findings

AI testing can create noise. Align your testing approach to outcomes like exploitability, impact, and fixability.

  3. Define when you re-test

Set clear triggers: new model versions, new tools and connectors, major guardrail or policy changes, and new agentic workflows.
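One way some teams operationalize these triggers is to encode them in release tooling, so any change that matches a trigger blocks promotion until a re-test is recorded. A minimal sketch in Python, assuming a hypothetical change-event structure; the trigger names are illustrative and mirror the list above rather than anything defined in the standard:

```python
from dataclasses import dataclass, field

# Change types that should force a security re-test, mirroring the triggers above.
# These names are illustrative; map them to whatever events your release tooling emits.
RETEST_TRIGGERS = {
    "model_version_update",        # new or fine-tuned model version
    "tool_or_connector_added",     # new tools, plugins, or data connectors
    "guardrail_or_policy_change",  # prompt, filter, or policy updates
    "agentic_workflow_added",      # new autonomous or multi-step workflows
}

@dataclass
class ReleaseCandidate:
    name: str
    changes: set[str] = field(default_factory=set)

def requires_retest(candidate: ReleaseCandidate) -> set[str]:
    """Return the triggers matched by this release; an empty set means no re-test is required."""
    return candidate.changes & RETEST_TRIGGERS

# Example: a release that swaps the model and adds a new connector
rc = ReleaseCandidate("chat-assistant-2025.11", {"model_version_update", "tool_or_connector_added"})
matched = requires_retest(rc)
if matched:
    print(f"Security re-test required before deploying {rc.name}: {sorted(matched)}")
```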

  4. Decide where you need independent testing

Different needs call for different methods: an always-on vulnerability disclosure program, a targeted pentest of an LLM application, and a dedicated AI red team engagement each surface different classes of issues.

A Practical Path to Continuous Validation

ETSI EN 304 223 helps translate high-level AI governance goals into concrete security practices. For policymakers, it reinforces an important principle: secure AI depends in part on openness to external scrutiny. Vulnerability disclosure programs, independent testing, and ongoing evaluation are signs of maturity, not weakness.

For organizations building or deploying AI, the takeaway is clear. Vulnerability management and AI red teaming are becoming baseline expectations for trustworthy AI and essential tools for protecting users, markets, and society as a whole.

If you’re preparing for European requirements or just trying to keep pace with AI risk, HackerOne can help you operationalize this approach through Vulnerability Disclosure Programs, LLM Application Pentesting, and AI Red Teaming focused on real-world failure modes before and after launch, plus Continuous Vulnerability Discovery to catch exploitable issues between formal assessments.

About the Author

Michael Woolslayer
Policy Counsel

Michael is Policy Counsel at HackerOne, where he supports public policy efforts to address cybersecurity and AI security challenges and enable good faith security and safety research.