Hunchbite
Software development studio focused on craft, speed, and outcomes that matter. Production-grade software shipped in under two weeks.

+91 90358 61690 · hello@hunchbite.com

Hunchbite Technologies Private Limited

CIN: U62012KA2024PTC192589

Registered Office: HD-258, Site No. 26, Prestige Cube, WeWork, Laskar Hosur Road, Adugodi, Bangalore South, Karnataka, 560030, India

Incorporated: August 30, 2024

© 2026 Hunchbite Technologies Pvt. Ltd. All rights reserved. · Site updated February 2026

Choosing a Partner

What to Do When a Security Researcher Reports a Vulnerability

A researcher emailed saying they found a security issue in your product. Here's how to evaluate it, what to say, what not to do, and how one company that handled this badly ended up in criminal court.

By Hunchbite · March 30, 2026 · 11 min read
vulnerability disclosure · security · bug bounty

Some version of this email has landed in the inbox of almost every SaaS founder with a product that's gotten any traction:

"Hi, I'm a security researcher. I've discovered what appears to be a significant vulnerability in your platform that could expose user data. I wanted to reach out responsibly before going public. Please let me know how you'd like to proceed."

Most founders don't know what to do with this. Is it legitimate? Is it a scam? Should you pay them? Can you tell them to go away? What if you can't fix it quickly?

The good news: most of these emails are legitimate, and handling them correctly is straightforward. The bad news: handling them wrong can turn a manageable security issue into a public relations crisis — or, in the worst cases, a criminal investigation.

What a legitimate disclosure email actually looks like

Responsible disclosure — the practice of privately notifying a company about a vulnerability before going public — is a well-established norm in the security community. Most security researchers follow it because they want vulnerabilities to be fixed, not exploited. The community has developed shared standards around timing, communication, and expectations.

A legitimate disclosure email typically includes:

  • A clear description of what they found
  • Steps to reproduce the issue (sometimes)
  • What data or access the vulnerability could expose
  • A statement that they haven't shared or exploited the information
  • A request to know how you'd like to proceed

What it doesn't usually include:

  • An immediate demand for payment
  • A threat to publish unless paid by a specific date
  • Claims of having already exploited the vulnerability against real users
  • Requests to wire money to an overseas account

A scam or extortion attempt looks different: urgent deadline, specific dollar amount demanded, threat of exposure, sometimes accompanied by a sample of data as "proof." These should be forwarded to law enforcement and handled with your legal counsel. Do not negotiate, and do not pay.

The other category is researchers who are rough around the edges but legitimate — they may be blunt, impatient, or use aggressive language, but they've found a real issue and they're trying to tell you. The technical merit of the report matters more than the tone.

How to evaluate severity: the CVSS score

Before you know how urgently to respond, you need to understand how serious the vulnerability is. The standard framework is CVSS — the Common Vulnerability Scoring System — which produces a numerical score from 0 to 10 and a corresponding severity level:

CVSS Score | Severity | What it typically means
9.0–10.0 | Critical | Unauthenticated remote code execution, widespread data exposure affecting all users, complete system takeover possible
7.0–8.9 | High | Significant unauthorized access, meaningful data exposure, privilege escalation
4.0–6.9 | Medium | Limited data exposure, exploit requires specific conditions, authenticated access typically required
0.1–3.9 | Low | Minimal impact, difficult to exploit, limited scope
0 | None | Informational, no exploitable risk

Researchers often include a CVSS score in their report. If they don't, you can assess it roughly by asking:

  • Does this require authentication? An unauthenticated vulnerability is more severe.
  • What's the impact? Can an attacker read data, modify data, or execute code?
  • Who is affected? All users, or specific conditions needed?
  • How difficult is it to exploit? Does it require special knowledge or tools?

A critical finding — remote code execution, full database access exposed — requires an emergency response. A low finding — information disclosure of non-sensitive data that requires a specific configuration — can be handled at normal pace.

Don't let a researcher's urgency override your severity assessment. Some researchers inflate severity because they believe it gets faster responses. Evaluate the technical claim independently.
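If you want to translate a raw score into the bands above without second-guessing yourself, the mapping is mechanical. A minimal sketch in Python, using the standard CVSS v3.1 qualitative thresholds (the function name is ours, not part of any library):

```python
# A minimal sketch mapping a CVSS v3.1 base score to the severity
# bands in the table above. The thresholds are the standard CVSS v3.1
# qualitative ratings; the function name is illustrative only.

def cvss_severity(score: float) -> str:
    """Return the CVSS v3.1 qualitative severity for a base score."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

An unauthenticated injection scored 9.8 lands in Critical and warrants an emergency response; a 5.3 lands in Medium and can follow your normal patch cycle.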

The first response you should send

Acknowledge the report within 24 hours. This is the most important thing you can do.

A 24-hour acknowledgment does several things: it confirms the report was received, it shows good faith, and it starts the clock on responsible disclosure norms from a position of positive engagement rather than radio silence.

Here's a template for an appropriate first response:

"Thanks for reaching out. We've received your report and take security issues seriously. We're reviewing what you've described and will follow up with more detail as our investigation proceeds. Can you confirm you haven't shared this information with anyone else while we investigate? We'll be back in touch within [X days] with an update."

What this response does:

  • Acknowledges receipt without admitting the vulnerability is real
  • Commits to a follow-up timeline
  • Asks (politely) for continued confidentiality
  • Does not admit fault or liability

What to avoid in the first response:

  • Demanding they prove the vulnerability with further exploitation attempts
  • Threatening legal action
  • Dismissing the report
  • Making any commitments about payment
  • Sharing your investigation findings before you've completed them

One important legal note: everything you write can potentially be discoverable if this ends up in litigation. Keep your tone professional and factual. "We are investigating" is appropriate. "We believe this is a serious problem that we'll fix by X" creates commitments you may not be able to keep.

The 90-day disclosure timeline and what it actually means

The 90-day standard is the norm in responsible disclosure — popularized by Google Project Zero and adopted broadly across the security research community. The general expectation: the researcher gives you 90 days to fix the vulnerability before publishing their findings publicly.

After 90 days, researchers typically go public regardless of whether you've patched it. The rationale: public disclosure creates pressure that pushes fixes out faster, and delayed disclosure means users remain exposed without knowing it.

What triggers early disclosure (before 90 days):

  • The researcher has evidence the vulnerability is being actively exploited by others
  • You've refused to communicate or dismissed the report
  • You've made legal threats
  • You've been dismissive or hostile

What typically earns an extension:

  • Good faith communication
  • Evidence that a fix is in progress
  • A clear, honest timeline with a reasonable ask

Google Project Zero published that 95.5% of vulnerabilities they've reported get fixed before their disclosure deadline — which suggests that for vendors who engage constructively, the 90 days is workable. The companies that face forced disclosure are generally the ones that didn't respond, not the ones that asked for more time.

If you genuinely need more than 90 days — complex architectural issue, dependency on a third-party fix, regulatory constraints — communicate that clearly and honestly. Most researchers will grant a reasonable extension.
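The timeline arithmetic is trivial, but writing it down removes ambiguity when you negotiate an extension. A sketch under the norms described above (the names and defaults are ours, purely illustrative):

```python
from datetime import date, timedelta

# Disclosure-clock bookkeeping under the norms described above:
# acknowledge within 24 hours, ship a fix within the 90-day window.
# All names here are illustrative, not any standard's terminology.

ACK_WINDOW = timedelta(hours=24)
DISCLOSURE_WINDOW = timedelta(days=90)

def disclosure_deadline(reported_on: date, extension_days: int = 0) -> date:
    """Last day to ship a fix before the researcher may publish."""
    return reported_on + DISCLOSURE_WINDOW + timedelta(days=extension_days)

reported = date(2026, 3, 1)
deadline = disclosure_deadline(reported)       # 2026-05-30
extended = disclosure_deadline(reported, 14)   # with a 14-day extension granted
```

Put the deadline date in your issue tracker on day one; "90 days" negotiated from a concrete date is a much easier conversation than one negotiated from memory.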

The legal mistake: threatening researchers

The legal instinct many founders have when receiving a disclosure email is to protect the company. "We didn't ask for this. They may have accessed our systems without authorization. Can we threaten them?"

This is the wrong instinct and can make things dramatically worse.

Security researchers have been threatened under the Computer Fraud and Abuse Act (CFAA) — the US law that makes unauthorized computer access illegal. Some of these threats have had a chilling effect on legitimate security research. The security research community has a long memory, and companies that threaten researchers tend to find their products attracting more scrutiny, not less.

More practically: threatening a researcher typically guarantees they go public — and now the story is "company threatened researcher rather than fixing vulnerability," not just "company had a vulnerability."

In one well-documented case, a company responded to researchers who had reported through HackerOne with a cease-and-desist. The dispute went public, the original vulnerability became widely known, and the company's security reputation suffered significantly more from the threat than the vulnerability would have caused on its own.

The legal principle that matters: if a researcher accessed your systems only to the extent necessary to demonstrate a vulnerability, and reported it to you rather than exploiting it, prosecution under the CFAA is legally uncertain and practically counterproductive. In 2022, the US Department of Justice went further, announcing a charging policy that good-faith security research should not be prosecuted under the CFAA.

The Uber cover-up: why transparency beats silence

In 2016, Uber experienced a data breach. Hackers stole records for approximately 57 million users and drivers. Rather than report the breach as required, Uber's then-Chief Security Officer Joseph Sullivan arranged to pay the hackers $100,000 in Bitcoin through the bug bounty program — framed as a legitimate bounty — and had them sign NDAs representing that they hadn't stored the data.

Sullivan told a subordinate they "can't let this get out."

In October 2022, Sullivan was convicted by a jury of obstruction of justice (obstructing an ongoing FTC investigation) and misprision of felony — concealing a felony from the authorities. In May 2023, he was sentenced to three years' probation and 200 hours of community service.

The conviction set a precedent for the cybersecurity industry: concealing a breach through payment to attackers, regardless of how it's framed, is a crime. The CSO of a company can be personally criminally liable for covering up a breach.

The lesson is not subtle: the cover-up is worse than the breach. Every company that has handled disclosure transparently — published a post-mortem, notified affected users, fixed the issue — has recovered. The companies that concealed breaches, paid off researchers to stay quiet, or threatened disclosure have faced outcomes ranging from massive regulatory fines to criminal prosecution.

Uber would have survived the 2016 breach with a straightforward disclosure and notification process. The cover-up cost Uber's CSO his freedom.

How to set up a Vulnerability Disclosure Policy

A Vulnerability Disclosure Policy (VDP) is a public page — usually at yourdomain.com/security or linked from your footer — that tells security researchers how to report vulnerabilities and what to expect from you.

A minimal VDP needs to address:

Scope: What systems and domains are in scope for reports? What's out of scope? (Usually: third-party services, social engineering, physical attacks, denial of service.)

How to report: An email address (security@yourdomain.com is standard) or a form. Some companies use the free vulnerability disclosure tiers on HackerOne or Bugcrowd.

What researchers can expect: Your commitment to acknowledge within X days, investigate in good faith, not pursue legal action against researchers acting in good faith, and keep them informed.

What you expect from researchers: No exploitation beyond what's needed to demonstrate the issue. No accessing data that doesn't belong to you. No sharing the vulnerability publicly while it's being investigated.

That's it. A VDP doesn't need to be lengthy. It signals that you have a process, that you won't threaten researchers, and that there's a legitimate channel for disclosure. This alone meaningfully affects how security researchers interact with your product — they're more likely to report to you before publishing if they know you've committed to a fair process.
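A lightweight complement to the VDP page is a security.txt file (RFC 9116) — a machine-readable file at a well-known path that researchers and scanners check by convention. A minimal example with placeholder values (Contact and Expires are the two fields the RFC requires):

```txt
# Served at https://yourdomain.com/.well-known/security.txt
Contact: mailto:security@yourdomain.com
Expires: 2027-03-01T00:00:00.000Z
Policy: https://yourdomain.com/security
Preferred-Languages: en
```

The Expires field forces a periodic review of the file; the RFC recommends keeping it less than a year out.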

VDP vs. bug bounty: the difference

A Vulnerability Disclosure Policy establishes rules and expectations for reporting. No monetary rewards are committed or implied.

A Bug Bounty Program adds financial incentives — researchers are paid a defined amount based on severity if their finding is accepted as valid.

Bug bounties attract more researchers and higher-quality findings. They're also expensive to run well. You need:

  • A defined scope (which systems and domains qualify for rewards)
  • A clear reward structure
  • Fast response times (researchers on bounty platforms have expectations around response and triage speed)
  • Budget (a critical finding on a mature bounty program can pay $5,000–$50,000+)
  • Internal capacity to triage and validate reports

HackerOne and Bugcrowd are the two dominant managed platforms. Both offer free tiers for VDPs and paid tiers for bounty programs. Running a bounty on these platforms provides structure and credibility, but it also creates obligations.

For most early-stage startups, the right order is: start with a VDP, run it for 6–12 months, build the internal process, then evaluate whether a bounty adds enough value to justify the cost and operational overhead.

A bug bounty program is not a substitute for a security review of your own product. Bounty programs surface the issues outside researchers happen to get to first. A proper security review examines your product systematically, before anyone outside finds the problems.


Build the security posture that lets you handle disclosures confidently

A researcher reporting a vulnerability is much easier to handle when you already know what your real exposure is. Hunchbite's security audit gives you an independent review of your product's security before someone else finds the problems first.

→ Technical Audit

Call +91 90358 61690 · Book a free call · Contact form

FAQ
Do I have to pay a security researcher who reports a vulnerability?
No, unless you've set up a bug bounty program with a published reward structure. Outside of a formal bug bounty, there's no legal obligation to pay for a vulnerability report. That said, some companies choose to offer token payments or gift cards as goodwill gestures — which is fine. What you should never do is offer a payment as part of a non-disclosure agreement designed to keep the vulnerability secret. This is the line Uber crossed: paying $100,000 to researchers in exchange for them signing NDAs and deleting the data — which a jury found constituted obstruction of justice. A goodwill payment is fine; a hush payment is a crime.
What if I can't fix the vulnerability in time?
Communicate proactively. The standard responsible disclosure timeline is 90 days — researchers typically go public after 90 days whether or not you've patched it. If you can't hit that deadline, contact the researcher, explain the situation honestly, and ask for an extension. Most researchers will grant a reasonable extension for good-faith communication — Google Project Zero grants extensions when vendors make genuine progress. What triggers early disclosure is silence, dismissal, or legal threats. Keep the researcher informed, give them a realistic timeline, and ask for what you need. Most responsible researchers would rather see a good patch than a rushed one.
Should I set up a bug bounty program?
Probably not until you're ready to manage it well. A bug bounty program creates an expectation of rapid response, clear scope, and fair payouts. Running one badly — slow responses, scope disputes, low payouts — creates more reputational damage than not having one. For most early-stage startups, a Vulnerability Disclosure Policy (VDP) is the right starting point: a public page that tells researchers how to report issues, what to expect from you, and your commitment to investigate in good faith. No money required. You can layer in a bug bounty later when you have the process and budget to run it well.
Next step

Ready to move forward?

If this guide resonated with your situation, let's talk. We offer a free 30-minute discovery call — no pitch, just honest advice on your specific project.

Book a Free Call · Send a Message