When Requirements Are Unclear, Should You Work on a Product?
Whenever I think about a feature, a product, or even a single user story, one phrase sits in the back of my head: "Start with the end in mind." Over the years, working with all sorts of professionals, I've been part of teams that would move mountains to make sure every detail was known before a single estimate was written down. I've been thinking and writing about iterative waterfall a lot lately, but here I want to get back to something more fundamental: what is the right amount of detail before a piece of work is good enough to start?
When do you go into development and testing without all of the acceptance criteria being clear? When do you put the value of exploring and learning by doing ahead of strict acceptance criteria, with every angle sorted and every edge case mapped?
I'm 100% sure there is no universal formula, nor should there be. But where do you draw the line on endless discussion and say: this is good enough, let's work on it?
Endless Spiking
There is a particular kind of meeting I've been in more times than I can count. Everyone is smart, everyone is engaged, and everyone is discovering more questions as fast as they answer old ones. An hour in, nobody has decided anything. The spike becomes a research project. The research project becomes a dependency. The dependency becomes a reason to wait.
This is what I call the continuous research trap.
Spiking has a place. When your team genuinely does not know whether a technical approach is feasible, a spike can save you weeks of going down the wrong path. But spiking becomes a problem when it is used as a proxy for comfort. When the team says "we need to spike this" and what they really mean is "we are not comfortable starting yet." Those are two very different things.
Take a team building a notification system for a B2B platform — let's call the product Notifio. The product manager had a clear direction: users need to receive in-app and email alerts for key events. The engineers started a spike to understand API rate limits from third-party integrations. Fair enough. But that spike expanded into message queuing architecture, database indexing strategies, delivery failure retry logic, and eventually a six-week detour into evaluating three different event-streaming tools.
Meanwhile, the one thing they actually needed to validate — whether users would engage with notifications at all — went untested for two months.
Spiking should be time-boxed and question-specific. What is the one thing we do not know that, if we got it wrong, would invalidate everything else? Answer that. Then start.
Designs and Analysis Fully Done Prior to Developing?
There is something reassuring about a complete design. Every screen mocked up, every state covered, every edge case annotated. It feels responsible. It feels like good planning. But at some point, completeness becomes a different kind of risk.
It is absolutely worth having visuals before you develop. A rough wireframe does more for alignment in a ten-minute meeting than three pages of written requirements. A mind map of the most common user scenarios can surface gaps in logic before a single line of code is written. These things have real value. Nobody is arguing against them.
The question is where you stop. Do you need to account for every possible user behaviour before development begins? Almost certainly not. And trying to do so often means you are designing for users you have not yet talked to, solving problems they may not actually have.
Going back to Notifio — the designer spent three weeks building out a comprehensive notification preferences screen with dozens of toggle options. Grouped by category, by channel, by frequency. Gorgeous. Thorough. When they eventually tested it with five users, four of them said they would just turn everything off if they saw that screen. What they actually wanted was one simple question: "How often do you want to hear from us?"
Three weeks of design, unmade in a thirty-minute session.
The goal of early design is to align the team and reduce ambiguity on the core flows. Cover the happy path well. Cover the one or two most likely failure states. Create a shared language. Then build. You will learn more from a working prototype in the hands of a real user than from a design review that covers every scenario in theory.
Design enough to build confidently. Not enough to feel invincible.
Why Burden the Team With Knowing Everything Upfront?
Here is the real question worth asking: when a team resists starting work without complete requirements, what is actually driving that?
Sometimes it is genuine and legitimate. If you are building something where a wrong assumption costs six months of rework — a core data model, a payment integration, a compliance-sensitive feature — then yes, clarity upfront is worth the investment. Not everything is equally forgiving of late discoveries.
But often, the resistance is about something else entirely.
It can be about accountability. If the requirements are incomplete and something goes wrong, who is responsible? Undefined acceptance criteria make it easy to argue that something was built "correctly" even when it is wrong. Demanding complete requirements is sometimes a way of distributing risk rather than building the right thing.
It can be about estimation. Teams that are measured on velocity or delivery against a plan need precise scope to protect their commitments. Incomplete requirements feel like traps, because they often are — when stakeholders interpret "we started" as "we committed to everything."
It can be about trust. When a team has been burned before — when they built something and were told it was wrong despite meeting all stated criteria — they learn to ask for more documentation as a form of protection. More paper, more cover.
None of these are unreasonable. But none of them are actually about building better products, either. The real answer to most of these concerns is not more requirements. It is a clearer working agreement about what "done" means at each stage, and a culture where it is safe to learn as you go.
What If a Story or Epic Gets Rejected Due to Lack of Details?
This happens more than people admit. The PM brings something to refinement, the team digs in, and the conclusion is: not enough detail to proceed. Back it goes to the backlog. It will be refined again next sprint. Or the one after.
So what do you actually do when that happens?
First, understand what kind of "unclear" you are dealing with. There is a difference between "we don't know what we are building" and "we don't know every edge case of what we are building." The first is a real blocker. The second is almost always survivable. Ask the team to name the single most critical unknown. If they can name it, you can usually resolve it — or at least time-box the effort to do so.
Second, try a slimmer slice. If the full epic is rejected, propose working on just the core user journey. What is the smallest version of this thing that would be worth shipping? Sometimes the resistance melts when the scope shrinks. The team was not actually objecting to uncertainty — they were objecting to the feeling of being dropped into the deep end of it.
Third, make the uncertainty explicit and visible. Create a dedicated spike story, time-boxed to a sprint, to answer the top two or three open questions. This is different from continuous research. It has a clear end state: after this sprint, we will know X, and then we will be ready to start. This gives the team the structure they need without turning research into an indefinite holding pattern.
Fourth, have an honest conversation about what "ready" actually means on your team. If "ready" means every acceptance criterion is written, every edge case is covered, and every dependency is resolved, you may be creating a standard that is functionally impossible to meet for any genuinely novel work. That standard might feel safe. But it is also a very effective mechanism for never shipping anything interesting.
And if stakeholders are the ones pushing back, not the team? That is a different conversation, but the same principles apply. Ask what specific risk they are trying to mitigate. Usually there is a legitimate concern underneath the vague call for "more clarity." Surface it, name it, address it directly, and then keep moving.
Final Thoughts
Here is what I have come to believe after years of refinement sessions, planning discussions, and more backlog grooming than I care to count: the question is never really whether you have enough detail. The question is whether you have enough detail to take the next meaningful step.
Not the whole journey. Just the next step.
There is a version of starting early that is actually courage in disguise: the courage to commit, to build, to put something in front of a user and learn. And there is a version of preparation that is actually fear wearing the clothes of professionalism. The endless spike. The complete design before the first line of code. The acceptance criteria that cover every scenario, including ones users will never encounter.
The best teams I have seen do not wait until everything is clear. They find the irreducible minimum of clarity required to move forward responsibly, and then they move. They keep learning. They adjust.
Certainty is not the goal. Progress is.
Start with the end in mind, yes. But do not let the search for a perfect beginning stop you from actually beginning.