GUIDE

How to Run a Prioritisation Workshop That Actually Works

Most prioritisation workshops produce a list that nobody follows. Here's a structure that produces decisions people actually commit to.

01

Why These Workshops Fail

The most common reason prioritisation workshops fail is that the question is too vague. "What should we focus on?" sounds like a reasonable prompt, but it's so broad that everyone in the room is answering a different question. The product manager is thinking about features. The engineering lead is thinking about tech debt. The designer is thinking about user research. Without a shared frame, the discussion becomes a polite contest between competing worldviews, and the outcome is whatever the facilitator manages to synthesise from the noise.

The second failure mode is domination by the loudest voice. Workshops are supposed to be collaborative, but in practice they often devolve into a dialogue between the two or three most assertive people in the room. Everyone else checks out, doodles in their notebook, and waits for it to end. Their input is never captured, which means the workshop output represents a fraction of the group's actual knowledge and preferences. The quiet person in the corner who has the most customer contact might have the best insight in the room — and nobody will ever know.

The third failure is confusing discussion with decision. The group talks for an hour, the facilitator writes up a summary, and everyone nods because they're tired of the conversation. But nodding is not committing. If people don't feel genuine ownership of the output — if they feel it was imposed by the facilitator's interpretation rather than produced by a fair process — they'll quietly ignore it. The list goes into a slide deck, the slide deck goes into a shared drive, and nothing changes. Three months later, someone suggests having a prioritisation workshop.

Nodding is not committing.

02

Before the Workshop

Frame the Question

"What should we prioritise for Q3?" is dramatically better than "What matters?" Be specific about the timeframe, the constraint, and the type of decision. "Which five initiatives should our team commit to for the next quarter, given that we have four engineers and one designer?" gives people something concrete to rank. The more specific the frame, the less time you'll spend in the workshop debating what the question actually means.

Choose Items Carefully

The sweet spot is 5-12 options. Fewer than five and the exercise feels pointless — you could just discuss it. More than twelve and people become fatigued, comparisons take too long, and the ranking loses resolution. Each item should be at roughly the same level of abstraction. Don't mix "Redesign the homepage" with "Fix the typo on the about page." If items are at different scales, people are ranking apples against orchards, and the results will be meaningless. Spend time before the workshop getting the list right — it's the single highest-leverage preparation you can do.
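The upper bound is also simple arithmetic: an exhaustive pairwise round over n items requires n(n-1)/2 comparisons, so fatigue grows quadratically with list size. A minimal sketch of the counts (illustrative only; a real tool may sample pairs rather than ask all of them):

```python
def comparisons(n: int) -> int:
    """Number of pairwise comparisons needed to cover every pair of n items."""
    return n * (n - 1) // 2

for n in (5, 8, 12, 20):
    print(f"{n} items -> {comparisons(n)} comparisons")
# 5 items  -> 10 comparisons   (a few minutes of clicking)
# 12 items -> 66 comparisons   (near the limit of a 10-minute silent round)
# 20 items -> 190 comparisons  (far too many; trim the list first)
```

This is why trimming the list before the workshop pays off: going from 20 items to 12 cuts the comparison load by roughly two thirds.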

Who Needs to Be in the Room

Include people who will do the work AND people who have context about why the work matters. The people closest to execution often have different priorities than the people closest to the customer or the strategy — and that tension is exactly what you want to surface. Aim for 4-10 people. Fewer than four and you don't get meaningful diversity of perspective. More than ten and you need to split into groups and merge results, which adds complexity. If you have more than ten stakeholders, consider having everyone rank individually and then bringing a core group of eight or so into the discussion session.

03

Workshop Structure

1
Opening — 5 minutes

State the question clearly. Read out the list of items. Clarify anything ambiguous — if someone doesn't understand what "Improve onboarding flow" means, define it now. Don't allow debate on the items yet. This phase is about shared understanding, not evaluation. If an item isn't clear enough to rank, either clarify it or remove it.

2
Silent Individual Ranking — 10 minutes

Everyone ranks independently using a pairwise comparison tool. Share the session link, and each person works through the comparisons on their own device. This is the critical step that prevents anchoring, conformity, and domination by loud voices. The quietest person in the room contributes exactly as much as the most senior. Nobody can see what anyone else is choosing. The comparison format — "Which matters more, A or B?" — is simple enough that no expertise or context is needed to participate.
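As a rough illustration of what can happen behind a session link like this, individual pairwise choices can be aggregated by simple win counting: each time someone picks A over B, A scores a point. This is only a sketch with made-up item names; a real tool may use a more sophisticated rating model (e.g. Bradley-Terry) rather than raw counts.

```python
from collections import Counter

# Each participant's silent round yields (winner, loser) pairs.
# These items and choices are hypothetical examples.
choices = [
    ("Improve onboarding", "Redesign homepage"),
    ("Improve onboarding", "Fix search"),
    ("Fix search", "Redesign homepage"),
    ("Improve onboarding", "Redesign homepage"),
    ("Redesign homepage", "Fix search"),
]

# Count wins across all participants, then sort by win count.
wins = Counter(winner for winner, _ in choices)
ranking = [item for item, _ in wins.most_common()]
print(ranking)
```

Because the aggregation treats every choice identically, the quietest participant's input carries exactly the same weight as the most senior person's, which is the point of the silent round.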

3
Review Results — 10 minutes

Show the group ranking on a shared screen. Focus on what surprises people, not on arguing about the order. The most productive questions at this stage are: "Did anything rank higher or lower than you expected?" and "What does this tell us about how the group sees things differently?" The ranking is data — treat it as input to a conversation, not as the final answer.

4
Discuss Close Results — 10 minutes

If items 3 and 4 are within a few points of each other, that's genuinely useful information — the group doesn't have a strong preference either way. Discuss what would tip the balance. Is there a dependency? A deadline? A resource constraint? Don't re-rank — use the data as the foundation for a focused conversation. The ranking has already done the hard work of narrowing the field. Your job now is to resolve the close calls with judgment, not to re-litigate the entire list.

5
Commit to Next Steps — 5 minutes

Decide what the top 3-5 items mean in practice. Who owns them? What happens by when? What does "done" look like? And equally important: explicitly name what you're not doing. Items 6 through 12 are deferred — not rejected, not forgotten, but consciously deprioritised. This makes the trade-offs visible and prevents the all-too-common pattern where everything stays on the list at equal priority and nothing actually gets focused attention.

Total time: approximately 40 minutes. That's short enough to fit into a regular meeting slot and long enough to produce a genuine decision with real commitment.

04

Handling Close Results and Senior Disagreement

Close results are not a problem — they're information. When items 3 and 4 are within a point or two of each other, the data is telling you that the group genuinely doesn't have a strong preference between them. That's useful to know. It means either choice is reasonable, and the deciding factor should be practical — dependencies, timing, resource availability — rather than political. Don't force a false distinction. "These two are roughly tied, so let's pick based on which unblocks other work" is a perfectly good outcome.

Senior disagreement is harder. When a VP looks at the ranking and says "I disagree — item 7 should be our top priority," you have a moment that will define whether the workshop was worth running. The key is to acknowledge their perspective without overriding the data. "The group ranked it seventh. You clearly see something others don't — can you share what's driving your view?" This opens a productive conversation rather than a power play. Maybe the VP has information the group doesn't have, and the group should hear it. Maybe the VP has a bias the group has correctly overridden.

Either way, the ranking has changed the dynamic. Without data, the VP would simply announce their priority and everyone would comply. With data, the VP can still override — but now they know they're overriding the collective judgment of their team, not "summarising the discussion." That transparency is valuable even when the senior person ultimately makes the call. It makes the decision legible and creates accountability for the override.

Without data, the VP would simply announce their priority and everyone would comply. With data, the VP can still override — but now they know they're overriding the collective judgment of their team.

05

Remote vs In-Person

The ranking itself works identically whether people are remote or in-person. Everyone uses their own device regardless of location, so the input quality doesn't change. The discussion phase is where remote workshops require more discipline. Use video so people can read the room. Share the results screen so everyone is looking at the same data. Timebox strictly — remote conversations drift more easily than in-person ones, and a facilitator who lets the discussion wander will lose the room.

The real advantage of remote-friendly tools is the async option. You don't need everyone in the same room, or even the same timezone, for the ranking step. Share the session link, give people 24 hours to complete their comparisons, and then bring the group together only for the 20-minute discussion and decision phase. This means you spend meeting time on discussion and decisions — the parts that genuinely require synchronous interaction — instead of burning it on voting, which doesn't.

READY TO TRY IT?

Start a free group ranking session — no account needed for participants.

START A GROUP RANKING →