From Response Gaps to Signal: How KOHO Scaled Bug Bounty
A bug bounty program isn’t something you can simply launch and leave on autopilot. It requires iteration to stay effective against evolving threats. At KOHO Financial, where the mission is to make financial services transparent and accessible to all Canadians, security is foundational to earning customer trust. Scott Brown, who led KOHO’s bug bounty transformation, recognized that building a strong program meant investing in researcher relationships. Guided by the clear principle of “people first,” he focused on scaling the program while creating a more responsive, rewarding experience for the security community.
No Smoke Alarm
Scott arrived at KOHO in 2023, determined to elevate the bug bounty program’s operations and earn the respect of the researcher community. Researchers would submit reports and sometimes wait for updates, frequently following up to no avail. To Scott, that was a missed opportunity to build trust with the people helping keep KOHO safe.
- Processes were largely manual, with no standardized templates or workflows.
- Responses were written from scratch, creating inconsistency and slowing the team down.
- Researcher participation was steady, but not high enough to confidently reflect full coverage of potential risk.
To scale the bug bounty program without adding overhead, KOHO combined platform automations with HackerOne’s agentic AI capability, Hai, to streamline triage, improve consistency, and strengthen communication with researchers.
Scott first focused on eliminating response gaps. By introducing automated acknowledgments, every report received an immediate response from KOHO, removing uncertainty for researchers and setting clear expectations from the start. From there, Scott built automations to improve program hygiene: repetitive tasks like handling stale reports, updating metrics, and filtering low-quality submissions were automated, allowing the team to focus on high-impact work.
Hai complemented these structured workflows by improving how reports were understood and communicated. AI-powered summarization standardized incoming reports, making triage faster and more accessible across the team. Hai also helped generate consistent responses, ensuring every interaction with researchers remained professional and constructive.
With validation workflows in place, KOHO invested further in the researcher community, adjusting bounty payouts based on industry benchmarks from the HackerOne Platform (Bounty Insights). Competitive payouts became a lever for visible program growth, attracting stronger researchers and increasing engagement.
Scott combined platform automations, Hai, and data-driven benchmarking to build a scalable program that improved efficiency, increased signal quality, and attracted higher-value researcher engagement.
From Reactive to Continuously Validated
Scott Brown’s security leadership shifted KOHO from a reactive, "set-and-forget" program to a continuously validated system where risks are identified and neutralized.
As a result of these efficiency gains, KOHO transformed researcher engagement and trust by eliminating uncertainty and long wait times. Researchers no longer feel like their reports disappear into a void: they know what to expect and when.
- Time to first response dropped from 8–9 hours to immediate acknowledgment
- Clear next steps and consistent updates replaced long periods of silence
Automation and structured workflows dramatically reduced how long it takes to validate and reward findings.
- Median triage time dropped from over 100 hours to ~22 hours
- Average time to bounty now ~9 hours
This speed keeps researchers engaged and ensures vulnerabilities are validated and addressed quickly before they can become real risk.
KOHO successfully transitioned to a public program, expanding coverage without overwhelming the team. Although going public increased report volume, the program's efficiency kept noise down and signal strong.
- Report volume stabilized at ~20–40 submissions per month
- Maintained strong signal rate at ~4–8% in a public environment
- Automated filtering removes low-quality and stale reports
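As a rough illustration of how the numbers above relate (the figures here are the ranges quoted in this article, not actual KOHO data):

```python
def signal_rate(valid_reports: int, total_reports: int) -> float:
    """Share of submissions that turn into valid, actionable findings."""
    if total_reports == 0:
        return 0.0
    return valid_reports / total_reports

# At ~30 submissions/month, a 4-8% signal rate means roughly 1-2 valid
# findings a month; the rest is noise that automated filtering absorbs.
monthly_valid = round(30 * 0.06)  # midpoint of the 4-8% range
```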
This enabled clearer decision-making across the team and across experience levels. As Scott shares, this becomes even more important with new frontier AI models (like Mythos): researchers will be able to use AI to increase both the quantity and quality of reports, requiring program owners to manage workload and response times accordingly.
Scott emphasized the impact of prioritizing cross-functional relationships to improve security outcomes. He built a system and culture where security and engineering work together through shared ownership of outcomes, not just handoffs: instead of passing issues downstream, the security team validates, reproduces, and provides context upfront. Fixing faster relies on strong partnership.
Validation Cycles Shrink From Weeks to Hours, Reducing Friction and Incident Risk
Agentic PTaaS transformed a point-in-time assessment into a validated, meaningful risk reduction. It identified injection patterns, understood the exploitation context, systematically validated whether the pattern existed elsewhere, and enabled rapid revalidation after fixes.
- Release approval was supported by evidence-backed assurance rather than assumption.
- Coverage expanded significantly without expanding scope or budget.
- The team avoided overconfidence and strengthened its internal risk narrative.
- Validation cycles shortened: remediation could be confirmed within the same operational window, reducing friction and improving release confidence.