AWS secures the data center. Everything you put in it is your problem. Here's what the Shared Responsibility Model actually means, where startups get it wrong, and what a real cloud security baseline looks like.
"We're on AWS" has become a shorthand for "our infrastructure is solid." Founders say it in investor pitches, enterprise sales conversations, and security questionnaires. It carries weight. AWS is where Netflix runs. It's where most of the internet runs.
It just doesn't mean what most founders think it means.
AWS can be — and regularly is — breached. Not because AWS failed, but because the company using AWS misconfigured something. The most significant cloud breach in recent memory wasn't AWS's fault at all. And the pattern it represents is the norm, not the exception.
AWS publishes something called the Shared Responsibility Model. It's their official answer to the question "who is responsible for security when you use AWS?" The model divides security into two domains:
AWS is responsible for "security of the cloud" — the infrastructure that runs AWS services:
- Physical data centers, hardware, and networking
- The host operating systems and virtualization layer
- The software behind managed services

You are responsible for "security in the cloud" — everything you put on that infrastructure:
- Your data, and whether it's encrypted
- Identity and access management (who and what can do what)
- Network and firewall configuration
- Operating system patches on instances you manage
- Your application code
The apartment building analogy works well here: AWS secures the building — the locks on the front door, the physical structure, the utilities. You're responsible for what happens inside your apartment. If you leave your apartment door unlocked, that's not the building management's problem.
Most founders understand this conceptually. The problem is they don't trace the implications. "We're on AWS" sounds like a security statement. What it actually means is "we're using secure infrastructure that we may or may not have configured correctly." The word "we" is doing all the work.
In 2019, Capital One experienced one of the largest financial data breaches in US history. More than 100 million credit card applications were exposed — names, addresses, credit scores, Social Security numbers, bank account numbers.
Capital One was running on AWS. AWS did not fail.
Here's what happened: Capital One ran a web application firewall (WAF) on an EC2 instance. That WAF was misconfigured — it allowed Server-Side Request Forgery (SSRF) requests to reach an internal AWS metadata service endpoint. The attacker, Paige Thompson, a former AWS engineer, exploited this misconfiguration to retrieve temporary credentials from the metadata service. With those credentials, she was able to access S3 buckets containing the customer data.
Every failure in that chain was a Capital One configuration mistake:
- The WAF was misconfigured to let SSRF requests reach the internal metadata endpoint
- The WAF's IAM role carried S3 permissions it didn't need
AWS actually updated their metadata service four months after the breach — moving from IMDSv1 to IMDSv2, which adds a layer of protection against SSRF attacks. But the root cause was configuration, not infrastructure.
Capital One was fined $80 million by banking regulators. Not AWS.
The industry pattern: according to various cloud security reports, over 60% of cloud security incidents trace back to customer misconfiguration rather than provider failures. The attack vector that compromised Capital One — misconfigured access to an internal metadata endpoint — was not sophisticated. It required knowing the technique, not building novel capabilities.
That brings us to the five most common AWS misconfigurations. None of them requires a sophisticated attacker; they're exploitable by anyone who knows to look.
1. Public S3 buckets
S3 is AWS's object storage service — basically cloud file storage. By default, buckets are private. But the AWS console makes it very easy to change this, and developers often make buckets public while testing or when trying to quickly share files, then forget to change it back.
A public S3 bucket means every file in it is accessible to anyone on the internet who knows or guesses the URL. Bucket names often follow predictable patterns (companyname-backups, companyname-uploads). There are automated tools that scan for public buckets.
AWS added an "S3 Block Public Access" feature in 2018 and made it the default for new buckets in 2022. But older buckets, buckets created through certain processes, and buckets where the setting was manually disabled remain exposed. A 2024 incident involved a company's internal customer data being publicly accessible in an S3 bucket for months before discovery.
What to check: In the S3 console, enable "Block all public access" at the account level. Audit any existing buckets for public access settings.
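That audit can be scripted. A minimal sketch, assuming the `PublicAccessBlockConfiguration` shape that boto3's `s3.get_public_access_block()` returns; the actual AWS call is omitted so this runs offline:

```python
# The four flag names are the real S3 Block Public Access settings.
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def is_fully_blocked(config: dict) -> bool:
    """True only if all four Block Public Access flags are enabled.
    A missing flag is treated as disabled, which is the unsafe case."""
    return all(config.get(flag, False) for flag in REQUIRED_FLAGS)
```

In practice you would loop over every bucket, fetch its configuration, and flag any bucket where this returns False.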
2. Overpermissioned IAM roles
IAM (Identity and Access Management) controls what actions different AWS users, services, and applications are allowed to take. The principle of least privilege says each entity should have the minimum permissions necessary for its function.
In practice, it's much easier to give something AdministratorAccess than to figure out exactly what permissions it needs. Teams do this during development and never clean it up. The result: a single compromised service can perform actions across your entire AWS account.
This was the mechanism in the Capital One breach. The WAF server had an IAM role with permissions it didn't need — and those excess permissions were what the attacker used to access the S3 buckets.
What to check: Review your IAM roles. Does any role have AdministratorAccess or * on all resources that doesn't genuinely need it? EC2 instances running your application should not have permissions to access all your S3 buckets.
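The most dangerous shape to look for is an Allow statement granting every action on every resource, which is what AdministratorAccess amounts to. A hedged sketch over a standard IAM policy document (the function name is illustrative, not an AWS API):

```python
def has_admin_wildcard(policy_document: dict) -> bool:
    """True if any Allow statement grants Action "*" on Resource "*"."""
    statements = policy_document.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may not be wrapped in a list
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        # Both fields may be a bare string or a list; normalise to lists.
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions and "*" in resources:
            return True
    return False
```

Run it against each role's attached and inline policies; any hit on a service role is worth a hard look.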
3. No MFA on the root account
Every AWS account has a "root" account — the master account created when you first signed up. It has unrestricted access to everything in the account. If someone compromises the root account email and password, they own your entire AWS environment.
Most AWS resources should be managed through IAM users and roles, not the root account. The root account should have MFA enabled, a strong unique password, and should almost never be used for routine tasks.
A surprising number of companies — including funded startups — have no MFA on their root account. A phished password is all it takes.
What to check: Log in as root. Is MFA enabled? It takes five minutes to set up.
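You can also verify this without logging in as root, via the IAM credential report, which is a CSV with a `<root_account>` row and an `mfa_active` column. A sketch that parses the report text; fetching it (`iam.get_credential_report`) is left out so this runs offline:

```python
import csv
import io

def root_mfa_enabled(credential_report_csv: str) -> bool:
    """Return whether the root account row of an IAM credential
    report shows MFA as active. Missing row counts as not enabled."""
    reader = csv.DictReader(io.StringIO(credential_report_csv))
    for row in reader:
        if row.get("user") == "<root_account>":
            return row.get("mfa_active", "").lower() == "true"
    return False
```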
4. Hardcoded credentials in source code
API keys, database passwords, AWS access keys — these sometimes end up in code. A developer adds a quick test, commits a file with the key hardcoded instead of read from an environment variable, and pushes to GitHub. Even if they later delete the key from the file, it's still in the git history.
GitHub is constantly scanned — by attackers and by automated tools — for accidentally committed credentials. AWS access keys are particularly high-value: finding one means instant access to the AWS account. There are bots that scan public GitHub repositories for AWS keys in real time and attempt to use them within minutes of a push.
AWS itself scans public repositories for leaked AWS access keys and will notify you and restrict a key it finds exposed. GitHub has secret scanning for public repositories. Neither is foolproof.
What to check: Scan your git history for any committed credentials. Search for AWS_ACCESS_KEY_ID, AKIA, password, secret in your repository history. Use environment variables and secrets managers, not hardcoded values.
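The reason `AKIA` is a useful search term: long-term AWS access key IDs follow a documented format, 20 characters starting with "AKIA". That makes a first-pass scan a one-liner you can pipe `git log -p` output through (this is a sketch, not a replacement for a proper secret scanner):

```python
import re

# 20-character AWS access key ID: "AKIA" plus 16 uppercase alphanumerics.
AWS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def find_aws_keys(text: str) -> list[str]:
    """Return any strings in the text that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)
```

A dedicated tool will also catch secret access keys, database URLs, and generic high-entropy strings, which this regex will not.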
5. Unrestricted security groups
Security groups are AWS's firewall controls for EC2 instances and other resources. They control which IP addresses and ports can connect to your resources.
A security group rule allowing inbound traffic from 0.0.0.0/0 (any IP) on port 22 (SSH) or port 3306 (MySQL) means your server or database is accessible to anyone on the internet. Attackers run automated scans of the entire internet looking for open SSH and database ports. Finding one with default or weak credentials means immediate access.
This is how most database breaches happen — not through sophisticated exploitation, but through a database that had no business being internet-accessible, sitting exposed because a security group was misconfigured.
What to check: Review your security group inbound rules. Any rule allowing 0.0.0.0/0 on administrative ports (22, 3389, 3306, 5432, 27017, etc.) should be restricted to your specific IP ranges or VPN.
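That review can be automated too. A minimal sketch, assuming the `IpPermissions` shape that boto3's `ec2.describe_security_groups()` returns (the fetch is omitted so this runs offline, and the port list is illustrative):

```python
# Common administrative/database ports: SSH, RDP, MySQL, Postgres, MongoDB.
ADMIN_PORTS = {22, 3389, 3306, 5432, 27017}

def open_admin_rules(ip_permissions: list) -> list:
    """Return inbound rules that expose an administrative port to 0.0.0.0/0."""
    findings = []
    for rule in ip_permissions:
        cidrs = {r.get("CidrIp") for r in rule.get("IpRanges", [])}
        if "0.0.0.0/0" not in cidrs:
            continue
        lo, hi = rule.get("FromPort"), rule.get("ToPort")
        if lo is None:
            # IpProtocol "-1": all traffic, all ports — always a finding.
            findings.append(rule)
        elif any(lo <= p <= hi for p in ADMIN_PORTS):
            findings.append(rule)
    return findings
```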
Even if you get all of the above right, you have a second category of security to address: your application code itself.
Infrastructure security (the domain of the Shared Responsibility Model) covers how your AWS resources are configured. Application security covers the code you write that runs on that infrastructure.
Common application-layer vulnerabilities have nothing to do with AWS:
- SQL injection
- Cross-site scripting (XSS)
- Broken authentication and session handling
- Missing access controls (changing an ID in a URL to see another user's data)
A perfectly configured AWS environment can be breached via a SQL injection vulnerability in your application. These are different security domains and both require attention.
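SQL injection is worth a concrete illustration, because the fix is one habit: never splice user input into a query string. A self-contained example using SQLite (the database is illustrative; the same rule applies to MySQL, Postgres, or any other driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: the payload is spliced into the SQL string, rewrites the
# WHERE clause, and returns every row in the table.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE email = '" + user_input + "'"
).fetchall()

# Safe: a placeholder keeps the payload as plain data, so nothing matches.
safe = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)
).fetchall()

print(len(vulnerable), len(safe))  # prints: 1 0
```

The vulnerable query leaks the whole table; the parameterized one treats the attack string as just an email address that doesn't exist.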
The OWASP Top 10 — a list of the most common web application vulnerabilities — hasn't changed dramatically in years. Most application-layer breaches involve vulnerabilities that have been understood and documented for a decade. They persist because they're easy to introduce and easy to miss without systematic review.
Here's a practical checklist for a startup on AWS. Not everything on this list is a single task — some require engineering work — but these are the 10 things that should be verified before you're handling customer data at scale:

1. "Block all public access" enabled on S3 at the account level
2. IAM roles reviewed for least privilege; no AdministratorAccess for services that don't need it
3. MFA enabled on the root account
4. Root account never used for routine tasks
5. No credentials in source code or git history
6. Secrets kept in environment variables or a secrets manager, with secret scanning on your repositories
7. Security group inbound rules restricted; no administrative ports open to 0.0.0.0/0
8. IMDSv2 enforced on EC2 instances
9. Audit logging enabled, so access to sensitive resources is recorded
10. Application code reviewed against the OWASP Top 10

This list doesn't make you comprehensively secure. There's no such thing. But it addresses the most common attack vectors and the configurations that turn a minor mistake into a major breach.
When you tell an enterprise customer "we run on AWS," what they're really asking is: "how have you configured AWS?" The infrastructure choice matters; the configuration matters more.
Enterprise customers — especially in regulated industries — often send security questionnaires that ask specifically about IAM policies, encryption configurations, network architecture, and audit logging. "We use AWS" does not answer those questions. Having the configurations above in place does.
The same applies to security certifications like SOC 2. A SOC 2 audit evaluates your actual security controls, not just your infrastructure provider. Getting a SOC 2 report requires having implemented and documented controls — the underlying infrastructure choice is one input among many.
The honest framing: AWS gives you enterprise-grade components to build with. Whether the result is enterprise-grade security depends on how you build.
Hunchbite's security audit reviews your AWS configuration, application security, and infrastructure setup — and gives you a prioritised list of what needs to change, with specifics, not just a traffic light report.
Call +91 90358 61690 · Book a free call · Contact form
If this guide resonated with your situation, let's talk. We offer a free 30-minute discovery call — no pitch, just honest advice on your specific project.