A researcher emailed saying they found a security issue in your product. Here's how to evaluate it, what to say, what not to do, and how the companies that handled this badly ended up in criminal court.
Some version of this email has landed in the inbox of almost every SaaS founder with a product that's gotten any traction:
"Hi, I'm a security researcher. I've discovered what appears to be a significant vulnerability in your platform that could expose user data. I wanted to reach out responsibly before going public. Please let me know how you'd like to proceed."
Most founders don't know what to do with this. Is it legitimate? Is it a scam? Should you pay them? Can you tell them to go away? What if you can't fix it quickly?
The good news: most of these emails are legitimate, and handling them correctly is straightforward. The bad news: handling them wrong can turn a manageable security issue into a public relations crisis — or, in the worst cases, a criminal investigation.
Responsible disclosure — the practice of privately notifying a company about a vulnerability before going public — is a well-established norm in the security community. Most security researchers follow it because they want vulnerabilities to be fixed, not exploited. The community has developed shared standards around timing, communication, and expectations.
A legitimate disclosure email typically includes:

- A technical description of the issue — the affected endpoint, parameter, or feature
- Steps to reproduce, or a proof of concept
- An assessment of impact: what data or functionality is exposed
- Sometimes a suggested fix or a proposed disclosure timeline

What it doesn't usually include:

- A demand for payment before details are shared
- A hard deadline measured in hours
- Threats to sell, publish, or exploit your data
A scam or extortion attempt looks different: urgent deadline, specific dollar amount demanded, threat of exposure, sometimes accompanied by a sample of data as "proof." These should be forwarded to law enforcement and handled with your legal counsel. Do not negotiate, and do not pay.
The other category is researchers who are rough around the edges but legitimate — they may be blunt, impatient, or use aggressive language, but they've found a real issue and they're trying to tell you. The technical merit of the report matters more than the tone.
Before you know how urgently to respond, you need to understand how serious the vulnerability is. The standard framework is CVSS — the Common Vulnerability Scoring System — which produces a numerical score from 0 to 10 and a corresponding severity level:
| CVSS Score | Severity | What it typically means |
|---|---|---|
| 9.0–10.0 | Critical | Unauthenticated remote code execution, widespread data exposure affecting all users, complete system takeover possible |
| 7.0–8.9 | High | Significant unauthorized access, meaningful data exposure, significant privilege escalation |
| 4.0–6.9 | Medium | Limited data exposure, requires some conditions to exploit, authenticated access typically required |
| 0.1–3.9 | Low | Minimal impact, difficult to exploit, limited scope |
| 0 | None | Informational, no exploitable risk |
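If you want to wire this banding into an intake workflow, it can be expressed as a small helper. This is a minimal sketch: the triage SLA hours are illustrative internal targets, not part of the CVSS standard.

```python
def severity(score: float) -> str:
    """Map a CVSS base score (0-10) to its qualitative severity band."""
    if score == 0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Hypothetical internal SLA: hours until first human triage, by band.
TRIAGE_SLA_HOURS = {"Critical": 4, "High": 24, "Medium": 72, "Low": 168, "None": None}

print(severity(9.8), TRIAGE_SLA_HOURS[severity(9.8)])  # Critical 4
```

The point of encoding it is consistency: every report gets banded the same way, regardless of who is on call or how urgent the researcher's email sounds.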
Researchers often include a CVSS score in their report. If they don't, you can assess it roughly by asking:

- Can the issue be exploited without an account or any authentication?
- What data could an attacker read or modify, and how sensitive is it?
- How many users are affected — everyone, or only those in a specific configuration?
- Does exploitation require special access, unusual conditions, or user interaction?
A critical finding — remote code execution, full database access exposed — requires an emergency response. A low finding — information disclosure of non-sensitive data that requires a specific configuration — can be handled at normal pace.
Don't let a researcher's urgency override your severity assessment. Some researchers inflate severity because they believe it gets faster responses. Evaluate the technical claim independently.
Acknowledge the report within 24 hours. This is the most important thing you can do.
A 24-hour acknowledgment does several things: it confirms the report was received, it shows good faith, and it starts the clock on responsible disclosure norms from a position of positive engagement rather than radio silence.
Here's a template for an appropriate first response:
"Thanks for reaching out. We've received your report and take security issues seriously. We're reviewing what you've described and will follow up with more detail as our investigation proceeds. Can you confirm you haven't shared this information with anyone else while we investigate? We'll be back in touch within [X days] with an update."
What this response does:

- Confirms receipt without confirming the finding is valid
- Signals good faith and commits to a concrete follow-up date
- Asks the researcher to keep the details private while you investigate

What to avoid in the first response:

- Admitting fault or confirming the vulnerability before you've verified it
- Committing to a fix date you haven't scoped
- Discussing payment or rewards
- Legal language or threats
One important legal note: everything you write can potentially be discoverable if this ends up in litigation. Keep your tone professional and factual. "We are investigating" is appropriate. "We believe this is a serious problem that we'll fix by X" creates commitments you may not be able to keep.
The 90-day standard is the norm in responsible disclosure — pioneered by Google Project Zero, adopted broadly across the security research community. The general expectation: the researcher will give you 90 days to fix the vulnerability before they publish their findings publicly.
After 90 days, researchers typically go public regardless of whether you've patched it. The rationale: public disclosure creates pressure that pushes fixes out faster, and delayed disclosure means users remain exposed without knowing it.
What triggers early disclosure (before 90 days):

- Evidence the vulnerability is being actively exploited in the wild
- A vendor that doesn't respond at all
- A vendor that threatens the researcher instead of engaging

What typically earns an extension:

- Consistent, honest communication about progress
- A fix that is genuinely in flight but needs more time
- A patch scheduled to ship shortly after the deadline
Google Project Zero has published data showing that 95.5% of the vulnerabilities it reports are fixed before the disclosure deadline — which suggests that for vendors who engage constructively, the 90 days is workable. The companies that face forced disclosure are generally the ones that didn't respond, not the ones that asked for more time.
If you genuinely need more than 90 days — complex architectural issue, dependency on a third-party fix, regulatory constraints — communicate that clearly and honestly. Most researchers will grant a reasonable extension.
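If you track reports in-house rather than on a platform, the deadline arithmetic is trivial but worth automating so it never gets forgotten. A minimal sketch — the 90-day window is the community norm described above, not a legal requirement:

```python
from datetime import date, timedelta

DISCLOSURE_WINDOW_DAYS = 90  # the Project Zero-style norm

def disclosure_deadline(reported_on: date, extension_days: int = 0) -> date:
    """Date by which the researcher may reasonably publish if no fix has shipped."""
    return reported_on + timedelta(days=DISCLOSURE_WINDOW_DAYS + extension_days)

print(disclosure_deadline(date(2024, 1, 15)))                      # 2024-04-14
print(disclosure_deadline(date(2024, 1, 15), extension_days=30))   # 2024-05-14
```

Put the computed date in your issue tracker the day the report arrives; working backward from it is what keeps "we need more time" an early conversation rather than a last-minute plea.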
The legal instinct many founders have when receiving a disclosure email is to protect the company. "We didn't ask for this. They may have accessed our systems without authorization. Can we threaten them?"
This is the wrong instinct and can make things dramatically worse.
Security researchers have been threatened under the Computer Fraud and Abuse Act (CFAA) — the US law that makes unauthorized computer access illegal. Some of these threats have had a chilling effect on legitimate security research. The security research community has a long memory, and companies that threaten researchers tend to find their products attracting more scrutiny, not less.
More practically: threatening a researcher typically guarantees they go public — and now the story is "company threatened researcher rather than fixing vulnerability," not just "company had a vulnerability."
In one well-documented case, a company responded to a researcher's report with a cease-and-desist letter. The dispute went public, the original vulnerability became widely known, and the company's security reputation suffered significantly more from the threat than the vulnerability would have caused on its own.
The legal principle that matters: if a researcher accessed your systems only to the limited extent necessary to demonstrate a vulnerability, and reported it to you rather than exploiting it, prosecution under the CFAA is legally uncertain and practically counterproductive. In 2022, the US Department of Justice formally revised its charging policy to state that good-faith security research should not be prosecuted under the CFAA.
In 2016, Uber experienced a data breach. Hackers stole records for approximately 57 million users and drivers. Rather than report the breach as required, Uber's then-Chief Security Officer Joseph Sullivan arranged to pay the hackers $100,000 in Bitcoin through the bug bounty program — framed as a legitimate bounty — and had them sign NDAs representing that they hadn't stored the data.
Sullivan told a subordinate they "can't let this get out."
In October 2022, a jury convicted Sullivan of obstructing an ongoing FTC proceeding and of misprision of felony — concealing knowledge of a crime. In May 2023, he was sentenced to three years' probation and 200 hours of community service.
The conviction set a precedent for the cybersecurity industry: concealing a breach through payment to attackers, regardless of how it's framed, is a crime. The CSO of a company can be personally criminally liable for covering up a breach.
The lesson is not subtle: the cover-up is worse than the breach. Every company that has handled disclosure transparently — published a post-mortem, notified affected users, fixed the issue — has recovered. The companies that concealed breaches, paid off researchers to stay quiet, or threatened disclosure have faced outcomes ranging from massive regulatory fines to criminal prosecution.
Uber would have survived the 2016 breach with a straightforward disclosure and notification process. The cover-up cost Uber's CSO his freedom.
A Vulnerability Disclosure Policy (VDP) is a public page — usually at yourdomain.com/security or linked from your footer — that tells security researchers how to report vulnerabilities and what to expect from you.
A minimal VDP needs to address:
- **Scope:** What systems and domains are in scope for reports? What's out of scope? (Usually: third-party services, social engineering, physical attacks, denial of service.)
- **How to report:** An email address (security@yourdomain.com is standard) or a form. Some companies use HackerOne or Bugcrowd's free vulnerability disclosure platforms.
- **What researchers can expect:** Your commitment to acknowledge within X days, investigate in good faith, not pursue legal action against researchers acting in good faith, and keep them informed.
- **What you expect from researchers:** No exploitation beyond what's needed to demonstrate the issue. No accessing data that doesn't belong to you. No sharing the vulnerability publicly while it's being investigated.
That's it. A VDP doesn't need to be lengthy. It signals that you have a process, that you won't threaten researchers, and that there's a legitimate channel for disclosure. This alone meaningfully affects how security researchers interact with your product — they're more likely to report to you before publishing if they know you've committed to a fair process.
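A useful companion to the VDP page is a machine-readable pointer to it: RFC 9116 defines a small text file served at /.well-known/security.txt that scanners and researchers check automatically. A minimal example — the domain and expiry date are placeholders:

```text
Contact: mailto:security@yourdomain.com
Expires: 2026-12-31T23:59:59Z
Policy: https://yourdomain.com/security
Preferred-Languages: en
```

The `Contact` and `Expires` fields are the required ones under RFC 9116; the rest are optional. It takes minutes to publish and spares researchers from guessing which inbox to use.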
A Vulnerability Disclosure Policy establishes rules and expectations for reporting. No monetary rewards are committed or implied.
A Bug Bounty Program adds financial incentives — researchers are paid a defined amount based on severity if their finding is accepted as valid.
Bug bounties attract more researchers and higher-quality findings. They're also expensive to run well. You need:

- A budget for payouts at each severity tier
- Someone who can triage incoming reports quickly — including the duplicates and false positives, which will outnumber valid findings
- An engineering process that can ship fixes on a predictable timeline
- A clearly defined scope, so you aren't paying for noise on out-of-scope systems
HackerOne and Bugcrowd are the two dominant managed platforms. Both offer free tiers for VDPs and paid tiers for bounty programs. Running a bounty on these platforms provides structure and credibility, but it also creates obligations.
For most early-stage startups, the right order is: start with a VDP, run it for 6–12 months, build the internal process, then evaluate whether a bounty adds enough value to justify the cost and operational overhead.
A bug bounty program is not a substitute for a security review of your own product. Bounty programs surface the issues outside researchers happen to look for; a structured security review examines your whole attack surface, including the parts no one has probed yet.
A researcher reporting a vulnerability is much easier to handle when you already know what your real exposure is. Hunchbite's security audit gives you an independent review of your product's security before someone else finds the problems first.
If this guide resonated with your situation, let's talk. We offer a free 30-minute discovery call — no pitch, just honest advice on your specific project.