Shopify's Playbook: How to Scale Secure AI Adoption
Designed for security leaders operationalizing AI across enterprise workflows. Inspired by Jill Moné-Corallo and the Shopify team.
Why This Playbook Exists
AI is now embedded across modern engineering and security workflows, dramatically increasing the pace of change across the attack surface.
At Shopify, this created a fundamental tension. As Jill Moné-Corallo puts it: “How are we securing things as fast as AI is allowing us to speed up?” A lean security team of four analysts was handling hundreds of submissions per week, operating in a constant state of drowning and never reaching inbox zero. To solve this, Shopify built a scalable model for integrating AI responsibly across their security operations. Using Hai alongside an internal AI agent, they effectively added a teammate that retains institutional knowledge across every report. The result: 62% faster triage, a 50% reduction in ramp time, and a consistent ~93% response efficiency.
This playbook distills their approach into practical steps you can adapt for deploying AI at enterprise scale.
What You'll Learn
- How AI changes your attack surface and risk model
- How to embed security controls directly into AI workflows
- How to move from one-time AI reviews to continuous AI risk evaluation
What You'll Need
- Ownership of AI security across engineering and security teams
- AI tooling that supports auditability, access controls, and human approval
- Bug Bounty
- Hai
- AI Red Teaming
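The "auditability, access controls, and human approval" requirement above can be sketched as a minimal review gate: an AI agent proposes a triage verdict, a named human reviewer signs off, and every decision is written to an audit log. This is an illustrative sketch only; the class and field names are hypothetical and do not reflect Shopify's or Hai's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Suggestion:
    """A triage verdict proposed by an AI agent (hypothetical schema)."""
    report_id: str
    verdict: str       # e.g. "duplicate", "informative", "triaged"
    rationale: str

@dataclass
class ApprovalGate:
    """Human-in-the-loop gate: no AI verdict is applied without sign-off."""
    audit_log: list = field(default_factory=list)

    def review(self, suggestion: Suggestion, reviewer: str, approved: bool) -> bool:
        # Record every decision, approved or rejected, for later audit.
        self.audit_log.append({
            "report_id": suggestion.report_id,
            "verdict": suggestion.verdict,
            "reviewer": reviewer,
            "approved": approved,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return approved

gate = ApprovalGate()
s = Suggestion("r-1042", "duplicate", "Matches r-0988 (same endpoint, same payload)")
if gate.review(s, reviewer="analyst@example.com", approved=True):
    print(f"Applying verdict '{s.verdict}' to {s.report_id}")
```

The key design point is that rejections are logged too: an auditable system records what the AI proposed and what the human decided, not just the actions that were taken.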
Step by Step
Step 1: Understand how AI changes your security landscape
Step 2: Embed security into AI workflow design
Step 3: Use AI to scale and automate security operations
Step 4: Expand testing with the security researcher community
Ways to Source AI Hackers
- Tap ML research labs and academia (grad students and postdocs).
- Recruit from ML platforms, OSS model communities, and GitHub.
- Host AI red-team hackathons, CTFs, or competitions.
- Quick tip: Provide sandboxes, sample datasets, and API credits to lower friction.
- Run specialized private bounty programs with AI scopes.
- Quick tip: Add special “AI expert” tiers and invite past top performers from ML competitions.
- Partner with HackerOne AI Red Teaming or industry AI/red-team consultancies.
- Work with model vendors and platform partners.
- Quick tip: Negotiate joint responsible-disclosure processes and co-branded exercises.
- Source multilingual and multimodal testers.
- Quick tip: Run language-tagged mini-programs with tailored rewards.
- Upskill existing security researchers into AI attack roles.
- Quick tip: Run a paid apprenticeship that culminates in entry to your private AI program.