Aligning Global Standards: Reflections from the UK AI Cyber Security Code of Practice Workshop

On April 23, we joined stakeholders for a workshop with representatives from the UK’s Department for Science, Innovation & Technology (DSIT) and the National Cyber Security Centre (NCSC). The discussion centered on the UK AI Cyber Security Code of Practice—a set of evolving guidelines aimed at enhancing the security posture of AI systems deployed in both the private and public sectors.
This event, hosted by the Center for Cybersecurity Policy & Law in Washington, D.C., brought together voices from government, industry, and civil society to unpack recent policy shifts and share cross-border perspectives on AI cybersecurity.
The UK’s Role in Shaping Secure AI Policy
The UK’s initial Call for Views on the Cyber Security of AI, released in July 2024, set the stage for developing a Code of Practice informed by real-world insights from security professionals, developers, and ethical hackers. As we noted in our submission to DSIT, HackerOne welcomed the UK’s open, collaborative approach and emphasized four core principles we believe are key to the secure development and deployment of AI:
- Staff training and awareness
- Secure infrastructure with robust vulnerability disclosure processes
- Rigorous testing and evaluation, including AI red-teaming
- Responsible disclosure and update mechanisms
DSIT subsequently released the final versions of the voluntary Code of Practice for the Cyber Security of AI and Implementation Guide for the AI Cyber Security Code of Practice in January 2025, reflecting the feedback received.
Highlights from the Workshop
At the April workshop, DSIT and NCSC presented an updated roadmap for the Code of Practice, including plans to standardize it through international bodies like the European Telecommunications Standards Institute (ETSI). Attendees were invited to provide feedback on the practical implementation of key provisions, particularly those impacting cybersecurity testing, red-teaming, and vulnerability disclosure.
Here are a few key takeaways from the discussion:
- Lifecycle and Decommissioning Focus: The Code now covers the full AI system lifecycle, including a new principle on secure decommissioning.
- Updated Code + New Guide: Based on stakeholder feedback, DSIT released an updated Code of Practice and an accompanying Implementation Guide to support practical adoption.
- Global Alignment in Progress: DSIT is working with ETSI and others to ensure the UK Code is harmonized with other emerging international standards.
- Transparency in the Process: HackerOne noted that additional clarity would be helpful on how DSIT and NCSC use industry input to shape the Code, and on how they engage in other standards-setting processes. DSIT is actively addressing this.
Connecting Policy to Practice: HackerOne’s Role
At HackerOne, we’re committed to improving AI security and trustworthiness through human-powered testing and collaboration. Whether it’s helping governments design bug bounty programs or contributing to global standards, we believe a proactive, multistakeholder approach to AI security is essential.
As policymakers continue refining the UK Code of Practice, we’ll remain engaged—providing technical insights, promoting ethical hacker collaboration, and supporting thoughtful regulation that advances both innovation and security.
What’s Next
We applaud DSIT and NCSC for prioritizing collaboration and transparency as they continue their work on AI security. We look forward to continuing our engagement as the Code evolves, and we encourage others in the security community to join the conversation.
For updates on HackerOne’s public policy efforts around AI and cybersecurity, follow us here or reach out to our team directly.