Your Team, But Faster: How to Run a Roundtable Brainstorm with AI Playing Every Role

There's a meeting I used to dread. Not because it was long, but because it was premature.

You'd walk into a refinement session with a half-baked story, and the next 45 minutes would be the team discovering, out loud, all the things nobody thought about. Valid discoveries. But expensive ones: in time, in focus, and in the goodwill you burn when engineers feel like they're doing the PO's homework for them.

What if you could have that meeting before the meeting, just you and an AI standing in for your whole team?

That's what this post is about.

The Idea: Give AI a Seat at Every Chair

Large language models are genuinely good at perspective-taking. They can hold a role, argue from it, and surface objections you didn't think to write down. Most people use AI as a writer or a summariser. But you can also use it as a room full of colleagues, each one filtered through a specific technical lens.

The trick is the prompt. You need to tell the AI exactly who's in the room, what each person cares about, and how they should behave. If you do that upfront, what follows is surprisingly close to an actual refinement conversation.

The Prompts: Copy, Modify, Use

Below is the full prompt. It's modular by design: just delete the roles your team doesn't have. Running with only FE + BE + Automation QA? Remove the UX and BA blocks. Solo full-stack and a designer? Keep just those two. The rest of the structure stays intact.

AI Roundtable Prompt — Copy & Modify

Paste into ChatGPT, Claude, or Gemini

You are facilitating a simulated refinement roundtable for a product development team.

For this session, you will embody multiple team roles simultaneously. Each role should respond in character — with the concerns, language, and priorities typical of that discipline. Roles do not just agree with each other; they push back, ask clarifying questions, flag risks, and occasionally disagree.

The roles active for this session are:

[ ROLE: UX DESIGNER ]
Focuses on user flows, accessibility, visual consistency, and interaction patterns. Asks: "What does the user actually see and feel here?" Flags when a story lacks design clarity or when edge cases in the UI haven't been considered.
→ Remove this block if you have no UX Designer

[ ROLE: BUSINESS ANALYST / BA ]
Focuses on requirements completeness, business rules, and acceptance criteria quality. Asks: "Is this story well-defined enough to build?" Flags ambiguity, missing edge cases, and stories that are trying to do too much.
→ Remove this block if you have no BA

[ ROLE: FRONTEND DEVELOPER / FE ]
Focuses on implementation feasibility, component reuse, browser compatibility, and state management. Asks: "How does this actually work in the UI layer?" Flags stories where design assumptions don't match what's buildable in the sprint.
→ Remove this block if you have no FE Developer

[ ROLE: BACKEND DEVELOPER / BE ]
Focuses on API design, data models, security, performance, and integration complexity. Asks: "What does the system need to do to support this?" Flags stories where the data or logic implications haven't been thought through.
→ Remove this block if you have no BE Developer

[ ROLE: AUTOMATION QA ]
Focuses on testability, regression risk, and acceptance criteria that can be automated. Asks: "How do we know this works, and how do we know we haven't broken anything else?" Flags vague criteria, untestable assertions, and high regression surface.
→ Remove this block if you have no Automation QA

[ ROLE: PRODUCT OWNER / PO ] — this is me
Represents my intent, but also challenges me if the story is undercooked. Focuses on user value, priority, and scope. Keeps the conversation moving.

The story we are refining today:

[ Paste your story title and brief description here ]

What I need from this session:

  • Surface any gaps, risks, or ambiguities in the story
  • Draft acceptance criteria in BDD format (Given / When / Then)
  • Identify any non-functional requirements
  • Note any open questions or "team notes" — things to consider but not block on
  • Summarise the outcome: ready, needs splitting, or needs more discovery?

Begin the roundtable. Each role should introduce their initial read of the story, then let the conversation develop naturally — including disagreements.

That's it. That's the engine. What you get back is a conversation, not a list — and that matters. Lists flatten nuance. A conversation surfaces it.

Let's See It in Action

Below is a simulated roundtable I ran for a real-ish story: adding a visual styling layer to an existing AES (Advanced Electronic Signature) function in a document signing platform. The team for this one is FE, BE, and Automation QA.

Story: Style the AES Signature Capture UI

As a signatory, I want the signature capture modal to have a consistent visual style, including branded colours, typography, and clear action labels, so that signing feels intentional and trustworthy, not like an afterthought.

FE Developer:
First read — this feels like it's mostly a CSS and component update. Swap colours, update button labels, maybe adjust the modal layout. But I want to know: are we working within the existing modal component, or are we replacing it? Because if the modal is shared across other signature types, any change we make here will ripple. Also — is there a design spec, or are we guessing at "branded colours"?

BE Developer:
From my side, I don't see much backend impact on the surface. But I'd want to confirm: does the styling change affect anything we're storing? Some implementations embed a rendered image of the signature UI as part of the audit trail or evidence bundle. If we change the visual output, we need to check whether that breaks any comparison logic downstream. That's not necessarily a blocker — but it's a "let's not find out in UAT" kind of question.

Automation QA:
I like that this is relatively contained. But "consistent visual style" is not a testable criterion as written. I need specifics. What are the exact colour values? What's the typography stack? What do the button labels say? Without that, I can't write assertions — I can only do visual checks manually, which defeats the point. Also — is there a regression suite for the existing AES flow? Because if we're touching the modal, I want to run the full signing journey before we ship.

FE Developer:
Agreed on the regression point. The AES modal currently has three states: capture, confirm, and error. I'd want acceptance criteria for all three, not just the happy path. The error state especially — right now it's generic. Is that in scope to style, or are we limiting this story to the capture and confirm states?

BE Developer:
One more flag from me. The AES function sits behind the signature service, which is covered by 21 CFR Part 11 audit requirements. I want to be clear: this story changes the UI, not the underlying signing logic — but we should document that explicitly. The last thing we need is a compliance reviewer asking why the signature UI changed without a corresponding record.

Automation QA:
That's a good call. Change documentation might be a non-functional requirement here — not just "the UI looks right" but "the change is traceable." That's worth calling out separately.

FE Developer:
Okay — so to summarise what I think we need before this is ready to build: a design spec or at minimum a defined colour palette and type scale, explicit scope on which modal states are in this story, and confirmation that the modal component is either isolated or that we're accepting the cross-component impact.
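QA's objection in that exchange is worth making concrete. The moment "consistent visual style" is pinned down to exact token values, it stops being a manual visual check and becomes an assertable one. Here's a minimal sketch in Python; the token names and values are entirely hypothetical placeholders, not any real design system.

```python
# Hypothetical expected design tokens for the AES modal.
# In practice these would come from the design spec the FE dev asked for.
EXPECTED_TOKENS = {
    "--color-action-primary": "#0a5cff",   # hypothetical brand action colour
    "--font-heading": "Inter, sans-serif",  # hypothetical heading stack
}

def check_tokens(computed: dict[str, str], expected: dict[str, str]) -> list[str]:
    """Return a list of mismatches between computed styles and expected tokens.

    An empty list means every expected token is present and matches
    (case-insensitively, ignoring surrounding whitespace).
    """
    mismatches = []
    for token, want in expected.items():
        got = computed.get(token)
        if got is None:
            mismatches.append(f"{token}: missing")
        elif got.strip().lower() != want.strip().lower():
            mismatches.append(f"{token}: expected {want}, got {got}")
    return mismatches

# In a real suite, `computed` would be read from the browser
# (e.g. the computed style of the modal root); here it's a stub.
computed = {"--color-action-primary": "#0A5CFF", "--font-heading": "Georgia, serif"}
print(check_tokens(computed, EXPECTED_TOKENS))
```

The point isn't this particular helper; it's that once the values exist, QA can assert them instead of eyeballing screenshots.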

Outcome

User Story (refined):

As a signatory, I want the AES signature capture modal to display consistent branding — defined colour tokens, typography, and labelled action buttons — across the capture and confirm states, so that the signing experience feels deliberate and aligned with the platform's visual identity.

(Error state styling to be addressed in a follow-up story.)

✦ Acceptance Criteria

Scenario 1: Signatory views the AES capture modal
Given I am on the document signing page
And I have been prompted to apply an AES signature
When the signature capture modal opens
Then the modal background uses the platform's primary background token
And the primary CTA button uses the brand action colour (--color-action-primary)
And the button label reads "Sign Document"
And the modal title reads "Sign Here" in the defined heading font

Scenario 2: Signatory confirms their signature
Given I have drawn or uploaded my signature in the capture state
When I proceed to the confirm state
Then the confirm modal displays the signature preview
And the confirm CTA reads "Confirm Signature"
And the cancel option reads "Start Over"
And all typography and colour tokens match the capture state

Scenario 3: Signatory views modal on mobile (375px)
Given I am signing on a mobile device
When the AES capture modal opens
Then the modal is full-width with 16px horizontal padding
And all interactive elements meet the 44px minimum touch target
And no content is clipped or horizontally scrollable

◈ Non-Functional Requirements
  • Styling must use design tokens (CSS custom properties), not hardcoded values
  • All text and interactive elements must meet WCAG 2.1 AA contrast ratios
  • No change to the underlying AES signing logic, API calls, or audit event structure
  • Any UI change affecting the rendered output in the evidence bundle must be assessed by BE and documented in the change log
  • Full AES signing journey regression suite must pass before merge

⚑ Team Notes — things to consider, not blockers
  • Confirm whether the modal component is shared across SES / WES / AES flows — coordinate with any parallel stories to avoid merge conflicts
  • Screenshot-based test baselines will likely need updating after the styling change — QA should plan for this, it's expected
  • Quick design call before sprint start: is error state styling worth including in scope now, or genuinely cleaner as a follow-up?
  • Compliance note: log as a UI-only change in the release notes — no signing logic was altered

Story verdict: Ready to build — pending design token confirmation and modal component isolation check.
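One of those NFRs, the WCAG 2.1 AA contrast requirement, is easy to pre-check before the design call. The luminance and ratio formulas below are straight from WCAG 2.1; the colour values are illustrative, not the platform's real palette.

```python
def relative_luminance(hex_colour: str) -> float:
    """Relative luminance of an sRGB colour, per the WCAG 2.1 definition."""
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearise(c: float) -> float:
        # WCAG's piecewise sRGB-to-linear conversion
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# AA requires 4.5:1 for normal text, 3:1 for large text.
print(round(contrast_ratio("#ffffff", "#000000"), 1))  # → 21.0
```

Ten lines like this can flag a failing button colour before it ever reaches the refinement session, which is exactly the kind of homework this whole exercise is about.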

Final Thoughts

Here's what I've started doing before almost every refinement session: I run the story through a prompt like the one above. Not to replace the real conversation, but to improve it.

Because when I walk in with acceptance criteria already sketched, with the BE concern about the evidence bundle already flagged, with QA's testability questions already answered, the room moves faster. The conversation elevates. Instead of discovering the story, we're pressure-testing it.

The 15 minutes I spend with AI before the meeting saves at least 30 minutes of the meeting. Often more. And it saves the quieter kind of cost too, the one where engineers stop trusting that stories are thought through before they land in the sprint.

Any story, high-level or nearly-ready, can be quickly simulated in a way that surfaces the structural questions before they become mid-sprint surprises. You don't need a full team. You don't need everyone in a room. You need a good prompt and a clear description of what you're trying to build.

The roundtable is always available. It doesn't have calendar conflicts. It doesn't need a 30-minute slot.

And honestly? It doesn't judge you for bringing a half-baked story to it.

That's what it's there for.
