Everyone Said We Were Agile. Nobody Could Name the Last Thing We Changed Our Minds About.
I remember the exact meeting where it clicked for me.
We were in sprint planning, coffee in hand, sticky notes on the wall, burn-down chart projected on the screen behind our Scrum Master. It looked exactly like what Agile is supposed to look like. Two-week sprints. Daily standups. Retrospectives. The whole liturgy.
And yet, something felt deeply wrong.
I looked at our backlog and realized: every single story in it had been written four months ago, in a room I wasn't invited to, by people who hadn't spoken to a single user. The "sprints" were just chopped-up pieces of a plan that had already been decided. The retrospectives were polite theater. The demo at the end of the sprint had a pre-approved audience and pre-approved feedback.
We weren't doing Agile. We were doing Waterfall in a costume.
And here's the uncomfortable part: nobody was lying. Everyone genuinely believed we were being agile. The vocabulary was right. The rituals were right. The intentions were right. But the thinking — the foundational assumptions about how products get built — was still firmly rooted in the old world.
That day changed how I evaluate any team's process, including my own.
What Iterative Waterfall Actually Looks Like
The disguise is convincing because it borrows all the right aesthetics. Sprints. Epics. Story points. Kanban boards. Velocity charts. But underneath the visual noise, the same waterfall assumptions are running the show:
The scope was fixed before development started. Someone, somewhere (usually in a strategy session or a series of executive stakeholder meetings) decided what would be built months in advance. The backlog isn't a living hypothesis board. It's a project plan in disguise.
Sprints are just mini-phases. In true waterfall, you have Design, then Build, then Test. In iterative waterfall, you have Sprint 1 (Design), Sprints 2-8 (Build), Sprints 9-10 (Test), Sprint 11 ("Hardening"). The phases wear sprint numbers, but they serve the same function: sequential gates built on the same untested assumptions.
Learning doesn't change the plan. This is the most revealing tell. In genuine Agile thinking, a sprint should be a learning unit. You build the smallest possible thing, ship it to real users, observe what happens, and let that change what you build next. In iterative waterfall, the sprint is a delivery unit. You build what was planned. If users tell you something different, you log a new ticket for a future sprint, which is already fully planned, and you keep building what was decided.
Velocity is used as a management tool, not a reflection tool. When leadership talks about velocity, are they asking "what did we learn?" or "are we on track?" If it's the latter, you're on a train, not a learning cycle.
The Red Flags — And What to Say When You Spot Them
Here are the signs I've learned to look for, and how to probe when you see them.
Red Flag #1: The roadmap is quarter-locked.
If your roadmap looks like a Gantt chart that's been broken into "Q1 / Q2 / Q3" buckets with features already assigned, that's waterfall. The dates are the commitment. The features are already non-negotiable.
What to say: "What would have to be true for us to drop a feature from Q2 based on user feedback from Q1?" If people look at you like you just asked to delete the database, you have your answer.
Red Flag #2: Demos are dress rehearsals, not experiments.
When the sprint demo feels like a performance for stakeholders, where the goal is to show progress, not to learn, it's no longer a feedback loop. It's a milestone check.
What to say: "Has anyone from outside our team seen this? What did they actually try to do with it?" If the answer is "we'll do user testing in Q4," you're in iterative waterfall.
Red Flag #3: Discovery happens once, at the beginning.
In many teams I've worked in or audited, user research and discovery happen during a "Sprint 0" or a "discovery phase" that precedes development. Once it's done, it's done. The backlog is built from that discovery and then nobody talks to users again for six months.
Real continuous discovery means users are a standing meeting, not a project phase.
What to say: "When is the last time a user interaction changed something that was already planned?" If the answer requires a moment of thinking, it's probably been a while.
Red Flag #4: Every story has a predetermined acceptance criterion.
Acceptance criteria are useful. But when every single story has a perfectly specified, testable AC written in BDD format before the sprint starts, and that AC was written by someone who's never talked to users, you're just writing requirements in a different template.
The goal of a story isn't to close a ticket. It's to test a hypothesis about what a user needs.
What to say: "What are we hoping to learn from building this?" If nobody can answer without checking the PRD, the story is a specification, not an experiment.
Red Flag #5: The team never kills a feature mid-flight.
This is the most damning signal. In healthy Agile teams, it's normal, and even celebrated, to stop building something because you've learned it won't work. "We discovered mid-sprint that users don't actually navigate that way, so we pivoted to a different approach."
In iterative waterfall, that never happens. Stopping something feels like failure. Changing direction mid-sprint requires stakeholder approval. The sprint becomes a commitment rather than a container for thinking.
What to say: "What's the last thing we stopped building? Why did we stop?" If nobody can name something in the last few months, the team is executing a plan, not exploring a problem.
Red Flag #6: "Agile" is a delivery framework, not a product philosophy.
This is perhaps the most subtle and most pervasive form. Organizations adopt Agile as a project management improvement (fewer missed deadlines, better team coordination, more predictable delivery) without ever changing how decisions are made about what to build.
The result is teams that are incredibly efficient at building the wrong things, faster.
What to say: "Why are we building this sprint's work? What user behavior are we trying to change?" If the answer is "because it's in the roadmap" without a deeper why, Agile has been reduced to a workflow, not a philosophy.
What It Actually Feels Like on the Inside
I want to be honest about something. When you're living inside iterative waterfall, it feels like Agile. The sprints feel fast. The demos feel energizing. The retrospectives feel healthy. The velocity charts feel like proof of momentum.
The feeling only breaks when you look at outcomes rather than outputs.
Ask yourself: in the last six months, has your team shipped something that meaningfully changed user behavior, reduced churn, or improved a metric that matters? Not "shipped features", shipped change. If the honest answer is "we shipped a lot but we're not really sure what moved the needle," that's the signature of iterative waterfall.
Outputs went out the door. Outcomes are still waiting.
How to Shift It (If You Have the Appetite)
I won't pretend this is easy. Iterative waterfall is sticky because it's what stakeholders understand, what executives report to boards, and what sales teams promise to customers. The quarterly roadmap is a social contract, not just a planning artifact.
But here's where I've seen small cracks of genuine Agile thinking open up:
Start protecting one sprint per quarter for pure experimentation. Call it a "spike sprint" or a "discovery sprint", whatever language your org accepts. Use it to test something nobody has yet committed to building. Watch what you learn.
Replace one epic's acceptance criteria with a learning question. Instead of "the user can filter by date range," try "we'll know this feature is valuable if 30% of active users interact with it within 7 days of launch." Now you have a hypothesis, not a specification.
Invite a real user to your next sprint review. Not a proxy user. Not an internal stakeholder role-playing a user. An actual person who has the problem you're solving. Let them interact with your demo without coaching. It will be uncomfortable. It will also be the most useful 30 minutes your team has had in months.
A Final Note
I don't write this as a purist. I've shipped products under iterative waterfall conditions and they weren't failures. You can build decent things inside a flawed process.
But there's a ceiling you hit. You spend a lot of energy building and not enough energy learning. You start measuring success by what you shipped instead of what changed for users. And slowly, the team stops asking "should we build this?" and starts asking only "when does this need to be done?"
That's the moment the costume becomes the identity.
The question worth asking yourself today isn't "are we Agile?" Everyone says yes. The question is: when did a user interaction last change what your team built?
If you're struggling to remember, you know what you're actually running.