Hunchbite
Software development studio focused on craft, speed, and outcomes that matter. Production-grade software shipped in under two weeks.

+91 90358 61690 · hello@hunchbite.com

Hunchbite Technologies Private Limited

CIN: U62012KA2024PTC192589

Registered Office: HD-258, Site No. 26, Prestige Cube, WeWork, Laskar Hosur Road, Adugodi, Bangalore South, Karnataka, 560030, India

Incorporated: August 30, 2024

© 2026 Hunchbite Technologies Pvt. Ltd. All rights reserved. · Site updated February 2026

Choosing a Partner

Why 'We Use AWS' Doesn't Mean You're Secure

AWS secures the data center. Everything you put in it is your problem. Here's what the Shared Responsibility Model actually means, where startups get it wrong, and what a real cloud security baseline looks like.

By Hunchbite · March 30, 2026 · 11 min read
AWS · security · cloud security

"We're on AWS" has become a shorthand for "our infrastructure is solid." Founders say it in investor pitches, enterprise sales conversations, and security questionnaires. It carries weight. AWS is where Netflix runs. It's where most of the internet runs.

It just doesn't mean what most founders think it means.

AWS can be — and regularly is — breached. Not because AWS failed, but because the company using AWS misconfigured something. The most significant cloud breach in recent memory wasn't AWS's fault at all. And the pattern it represents is the norm, not the exception.

The Shared Responsibility Model, explained plainly

AWS publishes something called the Shared Responsibility Model. It's their official answer to the question "who is responsible for security when you use AWS?" The model divides security into two domains:

AWS is responsible for "security of the cloud":

  • Physical data center security
  • The global network infrastructure
  • The hardware your virtual machines run on
  • The hypervisor layer that separates different customers' workloads
  • The managed services AWS operates (like RDS database engines, Lambda runtimes)

You are responsible for "security in the cloud":

  • The operating system on your virtual machines (patching, configuration)
  • Your application code
  • Your data — encryption, access controls, backups
  • Your network configuration (security groups, VPC settings)
  • Identity and access management (who has permission to do what in your AWS account)
  • Everything you build on top of AWS services

The apartment building analogy works well here: AWS secures the building — the locks on the front door, the physical structure, the utilities. You're responsible for what happens inside your apartment. If you leave your apartment door unlocked, that's not the building management's problem.

Most founders understand this conceptually. The problem is they don't trace the implications. "We're on AWS" sounds like a security statement. What it actually means is "we're using secure infrastructure that we may or may not have configured correctly." The word "we" is doing all the work.

The Capital One breach: when AWS wasn't at fault

In 2019, Capital One experienced one of the largest financial data breaches in US history. More than 100 million credit card applications were exposed — names, addresses, credit scores, Social Security numbers, bank account numbers.

Capital One was running on AWS. AWS did not fail.

Here's what happened: Capital One ran a web application firewall (WAF) on an EC2 instance. That WAF was misconfigured — it allowed Server-Side Request Forgery (SSRF) requests to reach an internal AWS metadata service endpoint. The attacker, Paige Thompson, a former AWS engineer, exploited this misconfiguration to retrieve temporary credentials from the metadata service. With those credentials, she was able to access S3 buckets containing the customer data.

Every failure in that chain was a Capital One configuration mistake:

  • The WAF misconfiguration was theirs
  • The overly permissive IAM role was theirs
  • The S3 bucket access controls were theirs

AWS actually updated their metadata service four months after the breach — moving from IMDSv1 to IMDSv2, which adds a layer of protection against SSRF attacks. But the root cause was configuration, not infrastructure.

Capital One was fined $80 million by banking regulators. Not AWS.

The industry pattern: according to various cloud security reports, over 60% of cloud security incidents trace back to customer misconfiguration rather than provider failures. The attack vector that compromised Capital One — misconfigured access to an internal metadata endpoint — was not sophisticated. It required knowing the technique, not building novel capabilities.
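The IMDSv1-to-IMDSv2 change is worth seeing concretely. This is a hedged sketch, not an AWS client: the two functions just build descriptions of the HTTP requests each protocol version requires, using the real metadata endpoint address and the documented IMDSv2 header names. The point is the shape of the traffic — v1 answers a bare GET, which is exactly what most SSRF primitives can produce; v2 first demands a session token minted via a PUT with a TTL header, a request most SSRF bugs cannot forge.

```python
# Sketch: why IMDSv2 blunts the SSRF attack used against Capital One.
# These functions only build request descriptions; no network calls.

IMDS_BASE = "http://169.254.169.254"  # link-local metadata endpoint


def imdsv1_request(path):
    """IMDSv1: a single unauthenticated GET. Any SSRF that makes the
    server fetch an attacker-chosen URL can reach this."""
    return {"method": "GET", "url": f"{IMDS_BASE}{path}", "headers": {}}


def imdsv2_requests(path, ttl_seconds=21600):
    """IMDSv2: first a PUT with a TTL header to mint a session token,
    then a GET that must present that token."""
    token_req = {
        "method": "PUT",
        "url": f"{IMDS_BASE}/latest/api/token",
        "headers": {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    }
    data_req = {
        "method": "GET",
        "url": f"{IMDS_BASE}{path}",
        "headers": {"X-aws-ec2-metadata-token": "<token from first call>"},
    }
    return [token_req, data_req]


creds_path = "/latest/meta-data/iam/security-credentials/"
print(imdsv1_request(creds_path)["method"])    # GET
print(imdsv2_requests(creds_path)[0]["method"])  # PUT
```

A typical SSRF bug lets an attacker control a URL the server will GET; it rarely lets them issue a PUT with a custom header first and thread the response into a second request, which is why requiring the token closes most of this class.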

The 5 most common AWS security mistakes

None of these require a sophisticated attacker. They're exploitable by anyone who knows to look.

1. Public S3 buckets

S3 is AWS's object storage service — basically cloud file storage. By default, buckets are private. But the AWS console makes it very easy to change this, and developers often make buckets public while testing or when trying to quickly share files, then forget to change it back.

A public S3 bucket means every file in it is accessible to anyone on the internet who knows or guesses the URL. Bucket names often follow predictable patterns (companyname-backups, companyname-uploads). There are automated tools that scan for public buckets.

AWS added an "S3 Block Public Access" feature in 2018 and made it the default for new buckets in 2022. But older buckets, buckets created through certain processes, and buckets where the setting was manually disabled remain exposed. A 2024 incident involved a company's internal customer data being publicly accessible in an S3 bucket for months before discovery.

What to check: In the S3 console, enable "Block all public access" at the account level. Audit any existing buckets for public access settings.
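That audit can be scripted. The sketch below is illustrative, not a full tool: the input dict mirrors the `PublicAccessBlockConfiguration` shape that S3's GetPublicAccessBlock API (boto3's `get_public_access_block`) returns, and a bucket is only fully locked down when all four flags are true.

```python
# Sketch: evaluate the four S3 "Block Public Access" flags.
# A bucket is only fully locked down when all four are True.

REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)


def public_access_gaps(config):
    """Return the names of any Block Public Access flags that are
    missing or disabled in a PublicAccessBlockConfiguration dict."""
    return [f for f in REQUIRED_FLAGS if not config.get(f, False)]


# Example: a bucket where someone disabled policy blocking "temporarily"
cfg = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": False,
    "RestrictPublicBuckets": False,
}
print(public_access_gaps(cfg))  # ['BlockPublicPolicy', 'RestrictPublicBuckets']
```

In a real audit you would call `get_public_access_block` per bucket (it raises an error when no configuration exists at all, which is itself a finding) and run every result through a check like this.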

2. Overpermissioned IAM roles

IAM (Identity and Access Management) controls what actions different AWS users, services, and applications are allowed to take. The principle of least privilege says each entity should have the minimum permissions necessary for its function.

In practice, it's much easier to give something AdministratorAccess than to figure out exactly what permissions it needs. Teams do this during development and never clean it up. The result: a single compromised service can perform actions across your entire AWS account.

This was the mechanism in the Capital One breach. The WAF server had an IAM role with permissions it didn't need — and those excess permissions were what the attacker used to access the S3 buckets.

What to check: Review your IAM roles. Does any role have AdministratorAccess or * on all resources that doesn't genuinely need it? EC2 instances running your application should not have permissions to access all your S3 buckets.
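The bluntest form of that review is mechanical: look for `Allow` statements whose `Action` or `Resource` is the `*` wildcard. The sketch below works on the standard IAM policy JSON document shape; it only catches the obvious cases, which is exactly the AdministratorAccess pattern described above, not a substitute for a real policy review.

```python
# Sketch: flag obviously overbroad statements in an IAM policy document.


def overbroad_statements(policy):
    findings = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement object is legal
        statements = [statements]
    for i, stmt in enumerate(statements):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions:
            findings.append((i, "Action allows * (every API call)"))
        if "*" in resources:
            findings.append((i, "Resource allows * (every resource)"))
    return findings


policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}
for idx, msg in overbroad_statements(policy):
    print(f"statement {idx}: {msg}")
```

Wildcards inside an action string (`s3:*`) or scoped-but-wide resources deserve scrutiny too; this catch-all check is just the floor.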

3. No MFA on the root account

Every AWS account has a "root" account — the master account created when you first signed up. It has unrestricted access to everything in the account. If someone compromises the root account email and password, they own your entire AWS environment.

Most AWS resources should be managed through IAM users and roles, not the root account. The root account should have MFA enabled, a strong unique password, and should almost never be used for routine tasks.

A surprising number of companies — including funded startups — have no MFA on their root account. A phished password is all it takes.

What to check: Log in as root. Is MFA enabled? It takes five minutes to set up.
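You can also verify this without logging in as root: IAM's GetAccountSummary API returns a `SummaryMap` whose `AccountMFAEnabled` entry is `1` when the root account has MFA. The check is one comparison; the dict below is a stand-in for what boto3's `iam.get_account_summary()["SummaryMap"]` would return.

```python
# Sketch: check root-account MFA from the IAM account summary.


def root_mfa_enabled(summary_map):
    """AccountMFAEnabled is 1 when the root account has an MFA device."""
    return summary_map.get("AccountMFAEnabled", 0) == 1


summary = {"AccountMFAEnabled": 0, "Users": 12}  # stand-in API response
if not root_mfa_enabled(summary):
    print("root account has no MFA -- fix this today")
```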

4. Hardcoded credentials in source code

API keys, database passwords, AWS access keys — these sometimes end up in code. A developer adds a quick test, hardcodes the key instead of reading it from an environment variable, commits, and pushes to GitHub. Even if they delete the key from the file later, it's still in the git history.

GitHub is constantly scanned — by attackers and by automated tools — for accidentally committed credentials. AWS access keys are particularly high-value: finding one means instant access to the AWS account. There are bots that scan public GitHub repositories for AWS keys in real time and attempt to use them within minutes of a push.

AWS itself scans public repositories for its own access keys and applies a quarantine policy to accounts whose keys it finds exposed. GitHub has secret scanning for public repositories. Neither is foolproof.

What to check: Scan your git history for any committed credentials. Search for AWS_ACCESS_KEY_ID, AKIA, password, secret in your repository history. Use environment variables and secrets managers, not hardcoded values.
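The AWS-key part of that scan is a one-line regex: long-lived access key IDs start with `AKIA` followed by 16 uppercase letters or digits, which is what leaked-key scanners match on. A minimal sketch (the example key is AWS's own documentation placeholder, not a real credential):

```python
# Sketch: find AWS access key IDs in text (e.g. `git log -p` output).
import re

AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")


def find_aws_key_ids(text):
    """Return all access-key-ID-shaped strings found in text."""
    return AWS_KEY_RE.findall(text)


diff = """
+AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"
+AWS_SECRET_ACCESS_KEY = "..."
"""
print(find_aws_key_ids(diff))  # ['AKIAIOSFODNN7EXAMPLE']
```

Pipe your full history through it (`git log -p | python scan.py`) rather than scanning only the current tree — deleted keys live on in old commits. Purpose-built tools like gitleaks or trufflehog cover far more credential formats; this shows the principle.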

5. Unrestricted security groups

Security groups are AWS's firewall controls for EC2 instances and other resources. They control which IP addresses and ports can connect to your resources.

A security group rule allowing inbound traffic from 0.0.0.0/0 (any IP) on port 22 (SSH) or port 3306 (MySQL) means your server or database is accessible to anyone on the internet. Attackers run automated scans of the entire internet looking for open SSH and database ports. Finding one with default or weak credentials means immediate access.

This is how most database breaches happen — not through sophisticated exploitation, but through a database that had no business being internet-accessible, sitting exposed because a security group was misconfigured.

What to check: Review your security group inbound rules. Any rule allowing 0.0.0.0/0 on administrative ports (22, 3389, 3306, 5432, 27017, etc.) should be restricted to your specific IP ranges or VPN.
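That review is also scriptable. The sketch below is illustrative: the input dict mirrors one entry from EC2's DescribeSecurityGroups response (boto3's `describe_security_groups`, with `IpPermissions`, `FromPort`, `ToPort`, `IpRanges`), and the admin-port set is the list above plus Redis's 6379.

```python
# Sketch: flag security group rules exposing admin ports to the internet.

ADMIN_PORTS = {22, 3389, 3306, 5432, 27017, 6379}


def exposed_admin_rules(group):
    """Return (group_id, port) pairs open to 0.0.0.0/0 on admin ports."""
    findings = []
    for perm in group.get("IpPermissions", []):
        lo, hi = perm.get("FromPort"), perm.get("ToPort")
        if lo is None or hi is None:  # e.g. all-protocols rules; audit separately
            continue
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        )
        if not open_to_world:
            continue
        for port in sorted(ADMIN_PORTS):
            if lo <= port <= hi:
                findings.append((group.get("GroupId"), port))
    return findings


sg = {
    "GroupId": "sg-0abc123",
    "IpPermissions": [
        {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
}
print(exposed_admin_rules(sg))  # [('sg-0abc123', 22)]
```

Note that 443 open to the world is fine for a public web server; the problem is administrative and database ports, which is why the check is scoped to that set.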

Infrastructure security vs. application security

Even if you get all of the above right, you have a second category of security to address: your application code itself.

Infrastructure security (the domain of the Shared Responsibility Model) covers how your AWS resources are configured. Application security covers the code you write that runs on that infrastructure.

Common application-layer vulnerabilities have nothing to do with AWS:

  • SQL injection in your database queries
  • Broken authentication (weak session management, no rate limiting on login)
  • Insecure direct object references (user A can access user B's data by changing an ID in the URL)
  • Sensitive data exposure (logging credentials or PHI in application logs)
  • Missing authorization checks (authenticated users accessing resources they shouldn't)

A perfectly configured AWS environment can be breached via a SQL injection vulnerability in your application. These are different security domains and both require attention.

The OWASP Top 10 — a list of the most common web application vulnerabilities — hasn't changed dramatically in years. Most application-layer breaches involve vulnerabilities that have been understood and documented for a decade. They persist because they're easy to introduce and easy to miss without systematic review.
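The first item on that list fits in a few lines of code. This sketch uses an in-memory SQLite database purely for illustration; the same vulnerable-vs-safe split applies to any database driver.

```python
# Sketch: the same lookup written injectably and parameterized.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com'), (2, 'b@example.com')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: attacker-controlled text becomes part of the SQL itself
injectable = f"SELECT id FROM users WHERE email = '{user_input}'"
print(len(conn.execute(injectable).fetchall()))  # 2 -- every row leaks

# Safe: the driver sends the value out-of-band; it can never alter the query
safe = conn.execute("SELECT id FROM users WHERE email = ?", (user_input,))
print(len(safe.fetchall()))  # 0 -- no user has that literal email
```

A perfectly locked-down VPC does nothing about the first version: the request arrives over your own HTTPS endpoint, through security groups you intentionally left open.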

What a real cloud security baseline looks like for a SaaS startup

Here's a practical checklist for a startup on AWS. Not everything on this list is a single task — some require engineering work — but these are the 10 things that should be verified before you're handling customer data at scale:

  1. S3 Block Public Access enabled at the account level; audit existing buckets
  2. MFA on root account and root account not used for routine operations
  3. IAM users or roles for all human access — no shared credentials
  4. MFA enforced for IAM users with console access
  5. Least-privilege IAM policies — no AdministratorAccess for services that don't need it
  6. No hardcoded credentials in source code or environment — use AWS Secrets Manager or Parameter Store
  7. Security groups restrict administrative port access to specific IP ranges or VPN
  8. CloudTrail enabled — AWS's audit log of all API calls in your account; retained for at least 90 days
  9. Encryption at rest enabled for S3 buckets and RDS databases with customer-managed keys for sensitive data
  10. VPC with private subnets for databases — databases should not have public IP addresses

This list doesn't make you comprehensively secure. There's no such thing. But it addresses the most common attack vectors and the configurations that turn a minor mistake into a major breach.
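Item 8 on the checklist, as one more sketch: a healthy setup has at least one multi-region trail that is actually logging. The dicts below stand in for merged fields from CloudTrail's DescribeTrails (`IsMultiRegionTrail`) and GetTrailStatus (`IsLogging`) responses — a trail can exist on paper with logging stopped.

```python
# Sketch: verify a multi-region CloudTrail trail exists and is logging.


def cloudtrail_ok(trails):
    """trails: dicts merging a trail's description and status fields."""
    return any(
        t.get("IsMultiRegionTrail") and t.get("IsLogging") for t in trails
    )


trails = [
    {"Name": "org-trail", "IsMultiRegionTrail": True, "IsLogging": False},
]
print(cloudtrail_ok(trails))  # False -- the trail exists but logging is stopped
```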

What "enterprise-grade infrastructure" actually means

When you tell an enterprise customer "we run on AWS," what they're really asking is: "how have you configured AWS?" The infrastructure choice matters; the configuration matters more.

Enterprise customers — especially in regulated industries — often send security questionnaires that ask specifically about IAM policies, encryption configurations, network architecture, and audit logging. "We use AWS" does not answer those questions. Having the configurations above in place does.

The same applies to security certifications like SOC 2. A SOC 2 audit evaluates your actual security controls, not just your infrastructure provider. Getting a SOC 2 report requires having implemented and documented controls — the underlying infrastructure choice is one input among many.

The honest framing: AWS gives you enterprise-grade components to build with. Whether the result is enterprise-grade security depends on how you build.


Find out what's actually misconfigured before an attacker does

Hunchbite's security audit reviews your AWS configuration, application security, and infrastructure setup — and gives you a prioritised list of what needs to change, with specifics, not just a traffic light report.

→ Technical Audit

Call +91 90358 61690 · Book a free call · Contact form

FAQ
Is my data safe if I'm on AWS?
That depends entirely on how you've configured AWS, not on AWS itself. AWS guarantees the security of its physical data centers, global network infrastructure, and the hypervisor layer that runs your virtual machines. Everything above that layer — your operating systems, application code, database configuration, access controls, network security groups, encryption settings, and IAM permissions — is your responsibility. Most real-world breaches of AWS-hosted companies involve misconfiguration by the customer, not failures in AWS's infrastructure.
What is the AWS Shared Responsibility Model?
The AWS Shared Responsibility Model divides security obligations between AWS and its customers. AWS is responsible for 'security of the cloud': the physical infrastructure, hardware, networking, and the virtualization layer. You are responsible for 'security in the cloud': your operating systems, applications, data, access management, network configuration, and encryption. A useful shorthand: AWS secures the building; you secure what's inside your apartment. The model is clearly documented by AWS, but it's widely misunderstood in practice — especially by founders who interpret 'enterprise-grade infrastructure' as meaning their application is secure.
What are the most common security mistakes on AWS?
The five most common, all of which are exploitable without sophisticated attackers: (1) Public S3 buckets — storage buckets left open to the internet, exposing files to anyone who knows the URL. (2) Overpermissioned IAM roles — giving services or people more AWS permissions than they need, so one compromise cascades. (3) No MFA on the root account — the most privileged account in your AWS environment sitting behind only a password. (4) Hardcoded credentials in source code — API keys or database passwords committed to a repository, where they're findable in version history even after deletion. (5) Unrestricted security groups — firewall rules that allow inbound access from any IP (0.0.0.0/0) on ports that should be restricted.