
How to Prioritise With Your Team (Without the Endless Debate)

Most teams don't have a prioritisation problem. They have a process problem. Here's how to fix it.

01

Why Team Prioritisation Goes Wrong

Every team has been there. You walk into a meeting to decide what matters most, and forty-five minutes later you leave with a vague sense that something was agreed upon, but nobody can quite articulate what it was. The roadmap stays unchanged. The backlog keeps growing. And three weeks later, you have the same meeting again.

The most common failure mode is the HiPPO problem — the Highest Paid Person's Opinion. When a senior leader speaks first, the conversation anchors around their view. It's not that people are afraid to disagree (although sometimes they are). It's that once a direction is stated with authority, the cognitive cost of proposing an alternative goes up dramatically. People self-censor without realising it.

Then there's the loudest-voice problem. In any group, some people are naturally more assertive, more articulate, or simply more comfortable talking in meetings. Their ideas get more airtime, which makes those ideas seem more important, which makes them more likely to end up at the top of the list. This has nothing to do with the quality of the idea and everything to do with the confidence of the person presenting it.

The most insidious failure is false consensus. The room nods along, the facilitator writes down the top three priorities, and everyone leaves thinking they agreed. But they didn't — they agreed to stop talking about it. There's an enormous difference between genuine alignment and collective fatigue. You can tell which one you have by checking whether anything actually changes after the meeting. If the same items keep coming up month after month, you had fatigue, not consensus.


02

What Good Prioritisation Actually Looks Like

Good prioritisation has four properties that most teams never achieve simultaneously. First, it gives equal input regardless of seniority. The intern's ranking counts exactly as much as the VP's ranking. Not because all opinions are equally informed, but because the point of group prioritisation is to capture the group's genuine preferences. If you just want the VP's opinion, ask the VP and skip the meeting.

Second, it produces genuine commitment because people feel heard. When you know your input was counted — not just tolerated — you're far more likely to support the outcome even if your top choice didn't win. The process is the product. A fair process produces buy-in that no amount of top-down communication can replicate.

Third, it produces a ranked list, not vague agreement. "We should focus on customer retention and also growth and maybe some tech debt" is not a prioritisation outcome. A ranked list forces trade-offs. Item 1 is more important than item 2, which is more important than item 3. You can't rank everything as "high priority" — the method won't let you. That discomfort is a feature, not a bug.

Fourth, the process is fast enough to actually use. If prioritisation takes a full-day workshop, you'll do it once a quarter at best. If it takes fifteen minutes, you can reprioritise whenever circumstances change. Speed is not the enemy of quality here — it's the enabler. The best prioritisation method is the one your team will actually use regularly.

03

Four Methods Compared

Dot Voting

Dot voting is fast and familiar. Everyone gets a few dots, sticks them on options, and the option with the most dots wins. Most teams learn this in their first retrospective and never question it again. The problem is that dot voting is easily gamed. People watch where early dots land and follow along. Seniority bias is baked in — when the VP votes first, everyone sees it. And every dot is equal, so you can't distinguish between "I feel strongly about this" and "I guess this is fine." It's good enough for low-stakes shortlisting, but it produces unreliable results for anything that matters.

Impact/Effort Matrix

The two-by-two matrix is visual and intuitive. Plot each option by its expected impact and required effort, then pick the high-impact, low-effort quadrant. But in groups, "impact" means different things to different people — the sales team and the engineering team have genuinely different definitions. You end up debating the axes instead of the items, which defeats the purpose. It works best when one person fills it in as a thinking tool, not as a group exercise.
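Used as a solo thinking tool, the matrix reduces to a simple classification rule. Here is a minimal sketch; the quadrant names, the 1-10 scales, and the example items are illustrative assumptions, not part of any standard:

```python
# Hypothetical sketch of an impact/effort matrix as a solo thinking tool.
# Scores are on an assumed 1-10 scale; items and numbers are invented.

def quadrant(impact: int, effort: int, midpoint: int = 5) -> str:
    """Classify an item into one of the four matrix quadrants."""
    if impact > midpoint and effort <= midpoint:
        return "quick win"   # high impact, low effort: do first
    if impact > midpoint:
        return "big bet"     # high impact, high effort: plan carefully
    if effort <= midpoint:
        return "fill-in"     # low impact, low effort: do when idle
    return "money pit"       # low impact, high effort: avoid

items = {"New onboarding flow": (8, 3), "Full redesign": (9, 9),
         "Fix dropdown": (3, 2), "Migrate CI vendor": (2, 8)}

for name, (impact, effort) in items.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

Notice that the rule only works once someone has committed to a single definition of "impact", which is exactly the step that breaks down in groups.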

RICE Scoring

RICE (Reach, Impact, Confidence, Effort) is rigorous and data-driven. Each item gets a numerical score across four dimensions, producing a weighted total. The problem is that it's slow, it requires numerical estimates that feel arbitrary ("Is this a 2 or a 3 for impact?"), and teams inevitably game the numbers to get their preferred outcome to the top. RICE works well for product teams with strong quantitative data. It works poorly for strategy discussions, workshops, or any context where the inputs are subjective.
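The arithmetic behind RICE is a single formula: score = (Reach × Impact × Confidence) / Effort. A minimal sketch, with invented example items and numbers:

```python
# RICE score: (Reach * Impact * Confidence) / Effort.
# The two example items and all their numbers are made up for illustration.

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Reach: people per quarter; Impact: 0.25-3 scale;
    Confidence: 0-1; Effort: person-months."""
    return reach * impact * confidence / effort

items = {
    "In-app referrals": rice(reach=2000, impact=2, confidence=0.8, effort=4),
    "Dark mode":        rice(reach=5000, impact=0.5, confidence=0.9, effort=2),
}
for name, score in sorted(items.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f}")
```

The formula is mechanical; the gaming happens in the inputs. Nudge confidence from 0.8 to 0.9 and effort from 4 to 3, and a pet project quietly climbs the list.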

Pairwise Comparison

Pairwise comparison asks a simple question: "Which matters more, A or B?" You compare every possible pair, and the results produce a mathematically defensible ranking. It's cognitively easy — choosing between two things is a question anyone can answer, regardless of expertise. It's hard to game because you can't see what others are choosing and you can't see the aggregate results while voting. And it captures intensity naturally: an item that wins every comparison has strong consensus behind it. Pairwise comparison works well for 4-15 items and groups of any size.
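The mechanics can be sketched in a few lines: collect each voter's pairwise choices, count how many comparisons each item won, and sort. The items and ballots below are invented, and real tools may fit a statistical model such as Bradley-Terry rather than using raw win counts; this is only the simplest aggregation:

```python
# Turn pairwise votes into a ranked list by counting wins per item.
# Items and ballots are hypothetical examples.
from collections import Counter

items = ["Retention", "Growth", "Tech debt", "Hiring"]

# Each ballot answers every pair once, as (winner, loser) tuples.
ballots = [
    [("Retention", "Growth"), ("Retention", "Tech debt"), ("Retention", "Hiring"),
     ("Growth", "Tech debt"), ("Hiring", "Growth"), ("Tech debt", "Hiring")],
    [("Growth", "Retention"), ("Retention", "Tech debt"), ("Retention", "Hiring"),
     ("Growth", "Tech debt"), ("Growth", "Hiring"), ("Tech debt", "Hiring")],
]

wins = Counter()
for ballot in ballots:
    for winner, _loser in ballot:
        wins[winner] += 1

ranking = sorted(items, key=lambda item: wins[item], reverse=True)
print(ranking)  # Retention wins 5 of its 6 contested comparisons here
```

An item near the top of this list won most of its head-to-head contests across all voters, which is what "strong consensus" means in concrete terms.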

04

How to Run a Pairwise Ranking Session

Running a pairwise ranking session is straightforward. Start by framing the question clearly. "What should our team prioritise for Q3?" is better than "What matters?" The more specific the question, the more useful the ranking.

Next, list your items. Aim for 4-12 options. Fewer than four feels trivial — you could just discuss it. More than twelve and the number of pairwise comparisons grows large enough to cause fatigue. Each item should be at roughly the same level of abstraction. Don't mix "Redesign the entire product" with "Fix the broken dropdown menu."
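The fatigue point is easy to quantify: with n items, each person answers n(n-1)/2 comparisons, so the workload grows quadratically.

```python
# Number of pairwise comparisons for n items: n choose 2.
def comparisons(n: int) -> int:
    return n * (n - 1) // 2

for n in (4, 8, 12, 15, 20):
    print(f"{n} items -> {comparisons(n)} comparisons")
# 12 items -> 66 comparisons; 20 items -> 190, well into fatigue territory.
```

Six comparisons for four items is trivial; 66 for twelve is a tolerable ten minutes; 190 for twenty is where attention collapses, which is why a long list should be shortlisted first.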

Share the session link with your group. Everyone compares pairs individually on their own device. There's no need to be in the same room — this works asynchronously. Each person sees two items at a time and picks the one they consider more important. The whole thing takes 5-10 minutes for most people.

Once everyone has completed their comparisons, review the ranked results together. The aggregated ranking reflects the group's collective preference. Focus on what surprises people, not on arguing about the order. The data is already in — the discussion should be about what the ranking reveals, not about re-litigating it.

For most groups, the entire process takes 10-15 minutes. That's fast enough to use regularly, which means your priorities can actually keep up with reality.

05

When to Use Which Method

Different situations call for different approaches. Here's a simple guide:

Need a quick shortlist from 20+ options? Use dot voting. It's fast, it reduces a long list to a short one, and the stakes are low enough that its biases don't matter much.

Evaluating 3-5 options with clear data? Use a scoring framework like RICE. When you have real numbers — usage data, revenue projections, engineering estimates — a structured scoring model makes those numbers work for you.

Ranking 4-15 options with a group? Use pairwise comparison. It's fair, fast, hard to game, and produces a clear ranked output that the group can rally behind.

Making a complex multi-criteria decision? Use weighted scoring with explicitly defined criteria. When a decision affects budget, strategy, or organisational structure, the extra rigour is worth the extra time.

The mistake most teams make is using the same method for everything. Dot voting your annual strategy is as wrong as running a full RICE analysis for your team offsite agenda. Match the method to the stakes.

Ready to try it?

Start a free group ranking session — no account needed for participants.