
KOHO's Playbook to Building a High-Signal Bug Bounty Program

Designed for security program managers who have a functional but inconsistent bug bounty program and are looking to scale it without adding headcount. Inspired by Scott Brown and KOHO Financial.

What You'll Learn

Build a bug bounty program that moves faster, reduces manual overhead, improves researcher experience, and helps a lean security team scale without lowering standards.

Step by Step

These are the concrete steps and program design changes that earned the trust and respect of the researcher community.

Scott’s first priority was not volume. It was trust. He looked at how long researchers waited for a first response, how often they were left without updates, and how inconsistent communication felt from one report to the next.

What to replicate: Audit your current researcher journey. Look for delays, unclear next steps, and moments where people are left guessing what happens next.

Before KOHO had automation, Scott introduced common response templates that anyone on his team could use. That gave the team a consistent tone, made next steps clear, and saved time.

What to replicate: Create standard responses for intake, triage completion, requests for more information, bounty decisions, and invalid findings. Make every message explain the next step and expected timing.
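To make that concrete, a shared template set can be as simple as the sketch below. The wording, template names, and placeholder fields are hypothetical, not KOHO's actual copy; the point is that every message names the next step and when to expect it.

```python
# Illustrative response templates; wording and placeholder fields are
# hypothetical, not KOHO's actual copy. Every message names the next step
# and when to expect it.
RESPONSE_TEMPLATES = {
    "intake": (
        "Thanks for your report. It is now in our triage queue. "
        "Next step: {next_step}. Expect an update within {eta}."
    ),
    "triage_complete": (
        "We have validated your finding and rated it {severity}. "
        "Next step: bounty decision. Expect an update within {eta}."
    ),
    "needs_more_info": (
        "We could not reproduce this yet. Please provide {missing_detail}. "
        "If we do not hear back within {eta}, the report will be closed."
    ),
    "bounty_decision": (
        "Your report has been awarded {amount}. Payment is processed within {eta}."
    ),
    "invalid_finding": (
        "This report falls outside our policy because {reason}. "
        "Our program policy describes what is in scope."
    ),
}

def render(template_key: str, **fields) -> str:
    """Fill in a template, e.g. render('intake', next_step='triage', eta='2 business days')."""
    return RESPONSE_TEMPLATES[template_key].format(**fields)
```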

Scott used HackerOne automations to ensure every incoming report got an instant response. According to Scott, that change alone moved KOHO’s time to first touch from eight or nine hours down to zero.

How to apply it:

  1. Auto-close reports with no response after a defined SLA.
  2. Filter or deprioritize low-reputation submissions.
  3. Normalize severity and metadata automatically.
  4. Remove duplicate or invalid reports early.
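KOHO did this with HackerOne's built-in automations rather than custom code. As a rough sketch of the same logic, the snippet below assumes reports have already been pulled into plain Python dictionaries; the field names, thresholds, and action labels are illustrative, not HackerOne's schema. (Auto-closing stale reports is sketched separately under the housekeeping step below.)

```python
# Illustrative routing over already-fetched report dictionaries. Field names
# ("reporter_reputation", "cvss_score", ...) and thresholds are assumptions
# made for this sketch, not HackerOne's actual schema or API.

def normalize_severity(cvss_score: float) -> str:
    """Map a CVSS score onto a consistent severity bucket."""
    if cvss_score >= 9.0:
        return "critical"
    if cvss_score >= 7.0:
        return "high"
    if cvss_score >= 4.0:
        return "medium"
    return "low"

def route_new_report(report: dict) -> str:
    """Decide the first automated action for an incoming report."""
    if report.get("duplicate_of"):
        return "close_as_duplicate"             # remove duplicates early
    if report["reporter_reputation"] < 0:
        return "deprioritize"                   # low-reputation submissions wait
    report["severity"] = normalize_severity(report["cvss_score"])
    return "send_first_touch"                   # instant response, then triage queue
```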

Scott said people are more comfortable when expectations are clear, even if the answer is not immediate. He used triage and bounty messaging to tell researchers when they should expect a decision or update.

What to replicate: Do not leave timing open-ended. Include a clear next checkpoint in every message.

KOHO added AI-assisted summarization to convert reports into a more consistent format. Scott said this helped reduce tedium, made reports easier to parse, and lowered the learning curve for junior team members.

What to replicate:

  • Implement summarization or structured intake formats.

  • Normalize reports into:

    • Issue summary

    • Impact

    • Reproduction steps

  • Use this to train and onboard junior analysts faster.
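One way to represent that normalized structure is a small dataclass like the sketch below; the field names and rendering are illustrative, since KOHO's exact format is not published.

```python
from dataclasses import dataclass

@dataclass
class NormalizedReport:
    """The consistent shape every incoming report gets summarized into."""
    issue_summary: str              # one-paragraph, plain-language description
    impact: str                     # what an attacker gains, and on which asset
    reproduction_steps: list[str]   # ordered steps a junior analyst can follow

def to_review_doc(report: NormalizedReport) -> str:
    """Render the normalized report as a short, consistent review document."""
    steps = "\n".join(f"  {i}. {step}" for i, step in enumerate(report.reproduction_steps, 1))
    return (
        f"Issue summary:\n  {report.issue_summary}\n\n"
        f"Impact:\n  {report.impact}\n\n"
        f"Reproduction steps:\n{steps}"
    )
```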

Scott added automations to close stale “needs more information” reports and to zero out severity on not-applicable reports so KOHO’s internal metrics stayed clean.

What to replicate: Identify every repetitive housekeeping task in your workflow and automate it. Focus especially on backlog cleanup and metric hygiene.
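A minimal sketch of that kind of housekeeping pass, again assuming reports are available as dictionaries, with illustrative field names and an assumed 14-day staleness threshold rather than KOHO's actual SLA:

```python
from datetime import datetime, timedelta

NMI_STALE_AFTER = timedelta(days=14)  # assumed threshold, not KOHO's actual SLA

def housekeeping_actions(reports: list[dict], now: datetime) -> list[tuple[str, dict]]:
    """Collect (action, report) pairs for backlog cleanup and metric hygiene."""
    actions = []
    for report in reports:
        # Close "needs more information" reports the researcher never answered.
        if (report["state"] == "needs_more_information"
                and now - report["last_researcher_reply"] > NMI_STALE_AFTER):
            actions.append(("close_stale", report))
        # Zero out severity on not-applicable reports so they do not skew metrics.
        elif report["state"] == "not_applicable" and report.get("severity_score", 0) > 0:
            actions.append(("zero_severity", report))
    return actions
```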

Once KOHO was public, Scott said noise increased. He responded by using signal score-aware automation to treat new researchers respectfully, keep the door open for improving contributors, and automatically close submissions from the lowest-signal accounts that violated policy.

What to replicate: Define quality gates and enforce them consistently. Explain the boundary clearly, but leave a path for legitimate researchers to improve and return.
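As a sketch of what signal-aware routing can look like, with invented thresholds and field names rather than KOHO's actual rules:

```python
def quality_gate(report: dict) -> str:
    """Route a submission by researcher signal; thresholds are illustrative."""
    signal = report["reporter_signal"]            # signal-style quality score
    violates_policy = report["violates_policy"]   # set by earlier policy checks

    if violates_policy and signal < -5:
        # Lowest-signal accounts that also break policy: close automatically,
        # with a message that states the boundary clearly.
        return "auto_close_with_explanation"
    if signal < 0:
        # New or struggling researchers: respond respectfully and point to
        # what would make the next report actionable.
        return "human_review_with_guidance"
    return "standard_triage"
```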

Scott used AI assistance to help KOHO write diplomatic, educational explanations when a finding was invalid or needed more detail. He said that helped create a single voice and improved interactions with researchers who genuinely wanted to learn.

What to replicate: When declining a report, explain why. When asking for more information, guide the submitter toward what would make the report actionable.

Scott said he does not want security to “chuck” reports over the fence. KOHO’s approach is to validate findings as much as possible before involving engineering, and in some cases even prepare a pull request first.

How to apply it:

  1. Require internal validation before escalation.
  2. Provide confirmed impact, reproduction steps, and a suggested fix where possible.
  3. Position security as an enabler, not a blocker.

When KOHO’s engagement dropped, Scott used HackerOne’s industry benchmark data to reassess payouts. He concluded KOHO was paying below the 50th percentile for its market, increased rewards, and said that change led to more engagement and better reports.

What to replicate: Review payout competitiveness regularly. If participation quality or volume is slipping, compare your reward structure against relevant benchmarks before making changes.

  1. Benchmark your bounty payouts.
  2. Adjust toward at least median (P50) market rates. Scott’s advice: do not go straight to P99; start at P50, watch for the bump in reports, and when that bump starts to taper off, align with P75, then repeat as necessary. You want to give your program room to increase reward payouts over time.
  3. Pay quickly to reinforce positive behavior.
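As a worked sketch of that laddered approach, with invented dollar figures standing in for real benchmark data:

```python
# Hypothetical benchmark percentiles for one severity tier in your market;
# real numbers come from HackerOne's benchmark data, not from this sketch.
BENCHMARK = {"P50": 5000, "P75": 8000, "P99": 20000}

def next_bounty_target(current_payout: int, report_bump_tapering: bool) -> int:
    """Step up to P50 first; move toward P75 only once the P50 bump tapers off."""
    if current_payout < BENCHMARK["P50"]:
        return BENCHMARK["P50"]
    if report_bump_tapering and current_payout < BENCHMARK["P75"]:
        return BENCHMARK["P75"]
    # Hold steady; jumping straight to P99 leaves no room for later increases.
    return current_payout
```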

Scott’s advice was to start private, learn how to triage and communicate well, then open the program to the public once you have the right guardrails in place. This is where automations can play a big role in scaling efficiently.

What to replicate: Do not use a public launch to figure out your process. Build the process first, then scale it. When you switch to a public program, expect a 10-15x increase in reports for about a month before volume stabilizes. Scott urges customers to work with their HackerOne CSM to prepare for the switch.