What You Should Know About The Next Wave of EU AI Act Requirements
The European Union’s Artificial Intelligence Act (EU AI Act) continues phased enforcement with the largest batch of requirements yet entering into force on August 2, 2025.
This landmark regulation establishes a risk-based framework for AI development and deployment in the EU and introduces requirements for AI system transparency, accountability, and security.
Organizations in scope of the Act should evaluate how current security practices, such as red teaming, coordinated vulnerability disclosure, and bug bounty programs, can support both compliance and trust in AI systems.
What Is the EU AI Act?
The AI Act is the EU’s flagship regulation for artificial intelligence. It is the first framework of its kind and classifies AI systems based on their potential impact and risk. The regulation introduces four levels of risk: unacceptable, high-risk, limited-risk, and minimal-risk.
Requirements include conformity assessments, detailed documentation, post-market monitoring, and security testing throughout the system’s lifecycle. The Act’s goal is to ensure these technologies perform as intended and do not cause harm.
The regulation also introduces penalties for non-compliance, with fines reaching up to €35 million or 7% of global annual turnover for the most serious infringements. Other violations may carry fines up to €15 million or 3% of turnover, depending on the nature and severity of the violation.
What Requirements Begin on August 2, 2025?
The EU AI Act’s next major enforcement phase starts on August 2, 2025, when several key chapters, sections, and articles take effect:
- High-Risk AI System Requirements (Articles 9–15): Covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and cybersecurity/robustness.
- Post-Market Monitoring and Incident Reporting (Articles 72 and 73): Obligations for continuous risk monitoring and reporting serious incidents related to high-risk AI systems.
- GPAI Model Obligations (Non-Systemic Risk) (Chapter V): Transparency and documentation rules for providers of general-purpose AI models not classified as systemic risk, including training summaries, intended use guidance, and copyright disclosures.
- Notified Bodies and Authorities (Chapter 3, Section 4): Regulations governing conformity assessments and the supervision of high-risk AI systems.
- Governance Framework (Chapter 7): Establishment of national supervisory authorities and the EU AI Office.
- Penalties (Chapter 12): Enforcement rules and fines, excluding Article 101 which applies to systemic-risk GPAI providers and will take effect later.
- Confidentiality Obligations (Article 78): Requirements for handling sensitive information during post-market monitoring.
Other parts of the Act will come into force later, with full implementation scheduled for August 2, 2026.
Who the AI Act Applies To
The Act applies to a broad set of actors, including AI system providers, deployers, importers, and distributors. Among these, providers of high-risk AI systems and general-purpose AI (GPAI) model providers face the most requirements.
The Act has extraterritorial reach, meaning companies outside the EU must comply if their AI systems are used in the EU or affect individuals based in the EU. This includes organizations developing GPAI models, integrating third-party AI services, or customizing foundation models for specific EU use cases. Even activities like repackaging or fine-tuning existing models can trigger obligations depending on the risk classification of the final system.
Security Requirements Under the EU AI Act
Several key security provisions focus on resilience, testing, and ongoing risk management:
- Annex XI applies to providers of general-purpose AI models, especially those with systemic risk. They must document evaluation strategies and any adversarial testing like red teaming. AI red teaming can help GPAI providers identify risks before they affect users. These obligations are currently expected to come into effect sometime in 2026 following formal designation from the European Commission.
- Post-market monitoring (Article 72) requires providers of high-risk AI systems to continuously monitor their AI systems for emerging risks and take corrective actions as needed to maintain compliance throughout the system’s lifecycle. Best practices like bug bounty programs and vulnerability disclosure programs can help organizations detect and remediate security issues promptly, enhancing system resilience. This requirement goes into effect August 2, 2025.
- Incident reporting obligations under Article 73 require providers of high-risk AI systems to report serious incidents or malfunctions (e.g., those that impact health, safety, or fundamental rights) to national authorities. While this obligation focuses on regulatory notification, internal escalation processes and structured input channels can help organizations identify and address issues early. This requirement also takes effect on August 2, 2025.
Preparing For What Comes Next
As enforcement of the EU AI Act begins, covered organizations should ensure their development, deployment, and monitoring practices align with its requirements. Security remains a key focus, with best practices like red teaming, bug bounty, and coordinated vulnerability disclosure aligning with the Act’s security obligations.
Whether developing new models, integrating AI into regulated workflows, or adapting foundation models for high-risk use cases, proactive security is critical. Contact us to learn how we can help your organization align with the Act and build secure, resilient AI systems.