At some point your engineering team started "working in sprints," and now you hear terms like sprint planning, standups, velocity, and retrospectives on a regular basis. You might be attending some of these ceremonies without being fully sure what they're for. You're definitely watching sprint goals get missed and wondering what to make of it.
This guide explains what sprints and agile actually are, what the ceremonies are supposed to accomplish, and how to tell whether your team's agile process is healthy or just going through the motions.
A sprint is a fixed-length block of time — usually two weeks — during which a team commits to completing a defined set of work. At the end of the sprint, the team should have something to show: working software that can be reviewed, tested, and potentially shipped.
The key word is "fixed." The sprint doesn't extend when work isn't done. Instead, the work that wasn't completed gets assessed: can it be finished in the next day or two, or does it carry over to the next sprint? This constraint is intentional — it forces the team to regularly reckon with whether they're planning well.
Sprints come from a methodology called Scrum, which is one flavor of Agile development. "Agile" is a broader term for development approaches that emphasize iterative delivery, flexibility, and continuous feedback — as opposed to "waterfall" development, where you plan everything upfront, build it all, and deliver at the end.
The core premise of sprints: you can't perfectly predict everything that needs to be built, so instead of trying to plan six months in detail, you plan two weeks at a time. After each sprint, you learn something — about what works, what users need, what's harder than expected — and use that to plan the next sprint better.
There are four core ceremonies in a typical sprint process. Each has a specific purpose.
Sprint planning happens at the start of each sprint. The team reviews the backlog (the prioritized list of work to be done), selects what can realistically be completed in the sprint, and defines what "done" looks like for each item. This is where scope is set.
As a founder, sprint planning is where you have the most leverage. Arriving with clear priorities and well-defined requirements means the team can commit to the right work. Arriving unprepared, or changing priorities mid-planning, means the team leaves with an unclear mandate.
Daily standups (or daily scrums) are short — ideally 15 minutes — daily team check-ins. The classic format: what did you do yesterday, what are you doing today, what's blocking you? The point is to surface blockers fast so they can be resolved, and to keep the team coordinated without lengthy meetings.
The standup is a team communication tool, not a status report for management. If engineers feel like they're performing for a founder or manager rather than communicating with each other, standups become theater — people say what they think you want to hear.
Sprint review (also called the sprint demo) happens at the end of the sprint. The team shows what was built. This is the most important ceremony for you as a founder: it's where you see the actual software, ask questions, provide feedback, and confirm whether what was built matches what you wanted. A sprint review without a real product demo is a red flag — the team should be showing working software, not slides about work in progress.
Sprint retrospective happens after the review. The team reflects on their own process: what went well, what didn't, what should change next sprint. This is a team improvement conversation, and it works best when you're not present. Engineers need to be able to raise concerns about the process — including things that might reflect poorly on how the team is managed — without feeling observed.
Velocity is the measure of how much work a team completes per sprint, typically measured in "story points" — an abstract unit that represents the estimated effort of a task.
If your team estimates that each task is worth some number of points, and they complete tasks worth 40 points in a sprint, their velocity is 40. Over time, this number should stabilize, and you can use it to forecast: if the remaining backlog is 200 points and the team's velocity is 40, you have roughly 5 sprints (10 weeks) of work ahead.
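The forecast above is simple division, but it is worth doing with an average of several sprints rather than the last (or best) one. A minimal sketch, using made-up numbers for illustration:

```python
# Illustrative velocity forecast. All numbers here are hypothetical.
recent_velocities = [38, 42, 35, 44, 41]  # points completed in the last 5 sprints

# Average over several sprints, not a single good one.
avg_velocity = sum(recent_velocities) / len(recent_velocities)

remaining_backlog = 200  # estimated points left in the backlog
sprints_remaining = remaining_backlog / avg_velocity
weeks_remaining = sprints_remaining * 2  # assuming two-week sprints

print(f"~{sprints_remaining:.0f} sprints (~{weeks_remaining:.0f} weeks) of work remaining")
```

Treat the result as a rough range, not a commitment: the backlog estimate and the velocity both carry uncertainty, and the forecast inherits both.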
This is a useful internal forecasting tool. But it's often misused, and there are important things velocity does not mean:
Velocity is not comparable between teams. A team with a velocity of 60 is not necessarily faster or more productive than a team with a velocity of 30. Story point estimates are defined within each team — what one team calls "5 points" another team might call "2 points." Comparing velocities across teams is meaningless.
Velocity should not be a performance target. If you say "I want the team's velocity to be higher," the easiest way to achieve that is to inflate story point estimates. The number goes up; nothing changes about how much work actually gets done. Velocity is a planning tool, not a performance metric.
Swings in velocity are normal. A sprint where a key engineer is out, where an unexpected technical problem requires significant investigation, or where a large chunk of work turns out to be more complex than estimated — all of these produce lower velocity. This isn't failure; it's how software projects work.
There's a common misconception that adopting agile/sprints will fix deadline problems. It usually doesn't — for a specific reason.
Sprints improve your visibility into whether you're on track. At the end of every sprint, you have a clear picture of what was built versus what was planned, and you can update your forecast accordingly. This is genuinely valuable: you find out you're behind after two weeks, not six months.
But sprints don't eliminate the underlying causes of missed deadlines: over-commitment, unclear requirements, scope creep, technical complexity that's harder than expected, and unexpected interruptions. Sprints just surface these problems faster.
If every sprint ends with significant unfinished work, the problem isn't with the sprint process — it's with either the volume of work being committed to, the clarity of requirements, or something in the team's working environment that's creating unplanned interruptions. Sprints make this visible; they don't fix it.
A sprint board (whether in Jira, Linear, Trello, or another tool) shows the current state of sprint work as tickets moving through columns: typically "to do," "in progress," "in review," and "done."
You don't need to understand the tickets' technical content to get useful information from the board. Look for:
How much is in "in progress" simultaneously? If there are 12 tickets all marked "in progress" for a team of 3 engineers, something is wrong. Either the team isn't keeping the board updated, or they're context-switching too much. A healthy sprint board shows each engineer working on 1–2 things at a time.
What's been sitting in "in review" for days? Code review is sometimes a bottleneck. A ticket that's been waiting for review for three days is blocked. Ask why.
What's still in "to do" in the last two days of the sprint? If there are significant unstarted tickets near the end of the sprint, the team is behind and those items are unlikely to ship.
What's in "done"? This is the number that matters most. Does it represent real, testable software, or are tickets being marked done before they've been fully tested?
The sprint board is a snapshot, not a complete picture. Use it as a starting point for questions, not as a substitute for actually reviewing the software in staging.
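The work-in-progress check described above can be automated against a ticket export. This sketch assumes a hypothetical list of tickets rather than any real Jira, Linear, or Trello API; the names and statuses are invented for illustration:

```python
from collections import Counter

# Hypothetical board snapshot; a real tool's export would look similar.
tickets = [
    {"id": "T-1", "status": "in progress", "assignee": "asha"},
    {"id": "T-2", "status": "in progress", "assignee": "asha"},
    {"id": "T-3", "status": "in progress", "assignee": "asha"},
    {"id": "T-4", "status": "in review",   "assignee": "ben"},
    {"id": "T-5", "status": "done",        "assignee": "ben"},
]

# Rule of thumb from above: a healthy board shows 1-2 "in progress" per engineer.
wip = Counter(t["assignee"] for t in tickets if t["status"] == "in progress")
for engineer, count in sorted(wip.items()):
    if count > 2:
        print(f"{engineer} has {count} tickets in progress - too much context-switching?")
```

The same pattern extends to the other checks: count tickets stuck in "in review" for days, or tickets still in "to do" late in the sprint.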
If your team regularly ends sprints with 40–50% of planned tickets unfinished, this pattern has a cause. The most common ones:
Systematic over-commitment. The team consistently plans more work than they can realistically complete in two weeks. The fix: use historical velocity to plan, not optimism. If the team completes 30 points per sprint on average, don't plan 50. Sprint planning should be a negotiation, not a wish list.
Unplanned work. Bugs from the previous sprint, urgent business requests, technical incidents — all of these consume sprint capacity that wasn't planned for. One approach is to explicitly reserve a "buffer" of capacity (say, 20%) for unplanned work. If nothing unplanned comes up, the buffer absorbs overflow from sprint work. If things come up, the sprint can still close cleanly.
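The buffer approach above is simple arithmetic: commit to less than the team's average velocity and hold the remainder in reserve. A minimal sketch, with the 20% figure as an assumed starting point to tune over time:

```python
# Capacity planning with an unplanned-work buffer. Numbers are illustrative.
avg_velocity = 40        # points per sprint, taken from the team's history
buffer_fraction = 0.20   # assumed reserve for bugs, incidents, urgent requests

plannable = avg_velocity * (1 - buffer_fraction)
reserve = avg_velocity - plannable

print(f"Commit to ~{plannable:.0f} points; keep ~{reserve:.0f} in reserve")
```

If the buffer sits unused sprint after sprint, shrink it; if unplanned work regularly overflows it, the right fix is usually upstream (bug quality, interrupt discipline), not a bigger buffer.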
Unclear requirements. An engineer starts a ticket and discovers the requirements aren't defined clearly enough to build without asking questions. If those questions take a day to get answered, the ticket is effectively blocked for a day. Better-defined tickets before sprint planning reduce this significantly.
External dependencies. The ticket requires something from a third party — a design asset, an API from another team, a decision from a stakeholder — that isn't ready. These are blockers, and they should be tracked explicitly.
The signal to look for: is the unfinished work random (different tickets each sprint), or is it systematic (the same types of tickets, the same causes)? Systematic unfinished work has a fixable root cause. Random variance is just the nature of complex projects.
You don't need to be in every meeting or read every ticket to assess sprint health. Here's what to look for:
The healthy version: sprint reviews include a real demo of working software. Unfinished work comes with a specific explanation and a plan for it. Problems are surfaced to you proactively, before you have to ask.

The unhealthy version: the sprint review is a brief summary with no demo. Unfinished work is explained with vague references to complexity. The same types of tickets are unfinished sprint after sprint. You only find out about problems when you ask directly.
Hunchbite gives agency clients full sprint visibility — sprint planning access, weekly staging demos, and written summaries of what shipped and what didn't, so you always know where things stand.
If this guide resonated with your situation, let's talk. We offer a free 30-minute discovery call — no pitch, just honest advice on your specific project.