How to Decide What to Work On Next
When everything feels urgent, you don't need more discussion. You need a structured way to make the trade-offs visible.
The Real Problem
Every team has more ideas than capacity. The product backlog has 200 items. The strategy document lists twelve priorities. The executive team has "three focus areas," each of which contains six sub-initiatives. And yet, when someone asks "What are we working on next?" the answer is usually determined by whichever request arrived most recently, whichever stakeholder shouted loudest, or whatever the boss happened to mention in a corridor conversation on Tuesday.
This isn't a knowledge problem. Most teams know roughly what their options are. It's a trade-off problem. Choosing to work on A means not working on B, and nobody wants to be the person who explicitly says "B doesn't matter." So teams avoid the choice. Everything stays on the list. Resources get spread across too many initiatives. Nothing gets the focused attention it needs to actually succeed. And six months later, leadership wonders why nothing shipped.
The uncomfortable truth is that when everything is priority 1, nothing is. Prioritisation isn't about identifying what's important — most things on your list are important, or they wouldn't be there. Prioritisation is about deciding what's most important, right now, given your actual constraints. It requires saying "not this, not yet" to things that genuinely matter. That's why it feels hard. It is hard. But avoiding the decision doesn't make it go away — it just makes it invisible, which is worse.
Four Questions Before Picking a Method
What Are We Choosing Between?
Be specific. "Features" is too broad. "Features for Q3 that require less than two weeks of engineering" is actionable. The more precisely you define the options, the more useful the ranking will be. If your list mixes strategic initiatives with bug fixes with technical debt items, people are ranking incommensurable things, and the output will be noise. Get the list right before you worry about the method.
Who Needs Input?
The people doing the work often have different priorities from the people requesting it. Engineers know what's technically risky. Customer support knows what's causing the most pain. Product managers know what aligns with strategy. Sales knows what's losing deals. A good prioritisation process includes all of these perspectives, but it's also clear about who has final say. Input is democratic. The final call might not be — and that's fine, as long as it's explicit.
What Does "Best" Mean Here?
Impact on revenue? User satisfaction? Technical debt reduction? Speed to market? If you can't agree on the criteria, no prioritisation method will help — you'll just argue about the ranking instead of arguing about the criteria. Sometimes the most valuable pre-work is spending fifteen minutes aligning on what "most important" means for this specific decision. Once the criteria are shared, the ranking practically does itself.
What Happens After?
A ranked list is useful. A ranked list with owners, deadlines, and explicit "not now" decisions is transformational. Before you start ranking, decide what you'll do with the output. Will the top three items become the sprint goal? Will items below the line be explicitly deferred? Will you revisit the ranking in four weeks? If the output goes into a document that nobody reads, the exercise is wasted regardless of how rigorous the method was.
Methods Matched to Stakes
Low Stakes
Examples: retro action items, team event planning, deciding which topics to cover in an all-hands, choosing which prototypes to user-test first.
Method: Dot vote. Five minutes, good enough. The biases inherent in dot voting don't matter when the consequences of picking the "wrong" option are trivial. Speed is the priority.
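A dot vote is just a tally: each person places a fixed number of dots on whichever options they like, and the counts decide. A minimal sketch, using hypothetical all-hands topics and three voters with three dots each:

```python
from collections import Counter

# Each voter places three dots; a dot is just the option's name.
# Topic names are made up for illustration.
dots = [
    "Roadmap Q&A", "Roadmap Q&A", "New hires intro",   # voter 1
    "Roadmap Q&A", "Incident review", "New hires intro", # voter 2
    "Incident review", "Roadmap Q&A", "New hires intro", # voter 3
]

tally = Counter(dots)
print(tally.most_common())
# → [('Roadmap Q&A', 4), ('New hires intro', 3), ('Incident review', 2)]
```

The whole method is one `Counter` — which is exactly why it's fast, and why it's only suited to decisions where a rough answer is good enough.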
Medium Stakes
Examples: quarterly priorities, feature ranking, workshop items, project sequencing, hiring criteria, customer segment targeting.
Method: Pairwise comparison. Takes 10-15 minutes, produces a fair and defensible ranking, works with any group size, and handles the politics of group decision-making gracefully. The private voting format means you get genuine preferences instead of performative consensus. This is the sweet spot for most team decisions.
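Under the hood, pairwise comparison reduces to a simple aggregation: collect each voter's private head-to-head choices, then rank options by total wins (a Copeland-style tally). A minimal sketch with made-up feature names and three voters:

```python
from collections import Counter

def rank_by_pairwise_votes(options, votes):
    """Aggregate private pairwise votes into a group ranking.

    votes: (winner, loser) tuples, one per comparison per voter.
    Each option's score is its total number of pairwise wins.
    """
    wins = Counter({opt: 0 for opt in options})
    for winner, _loser in votes:
        wins[winner] += 1
    # Sort by win count, highest first (ties keep the input order).
    return sorted(options, key=lambda o: wins[o], reverse=True)

# Three voters each compare three hypothetical Q3 features:
options = ["SSO", "Dark mode", "CSV export"]
votes = [
    ("SSO", "Dark mode"), ("SSO", "CSV export"), ("CSV export", "Dark mode"),  # voter 1
    ("SSO", "Dark mode"), ("SSO", "CSV export"), ("CSV export", "Dark mode"),  # voter 2
    ("SSO", "Dark mode"), ("CSV export", "SSO"), ("CSV export", "Dark mode"),  # voter 3
]

print(rank_by_pairwise_votes(options, votes))
# → ['SSO', 'CSV export', 'Dark mode']
```

Because each vote is a single A-vs-B choice made privately, nobody has to defend a full ranking in front of the group — the aggregate does that for them.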
High Stakes
Examples: annual strategy, budget allocation, build-vs-buy decisions, multi-criteria vendor selection, organisational restructuring.
Method: Weighted scoring with explicitly defined criteria. Takes longer — possibly a full workshop — but the rigour is justified when the decision affects the organisation for a year or more. Define your criteria (impact, feasibility, alignment, risk), weight them, score each option, and use the results as structured input to a leadership discussion. The process matters as much as the output, because it forces alignment on what "good" means before you start evaluating options.
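The arithmetic behind weighted scoring is a weighted sum: each option's score on each criterion, multiplied by that criterion's weight, summed up. A minimal sketch of a hypothetical build-vs-buy decision — the criteria, weights, option names, and 1–5 scores below are all invented for illustration:

```python
def weighted_score(scores, weights):
    """Weighted sum of an option's criterion scores; weights sum to 1."""
    return sum(weights[c] * scores[c] for c in weights)

# Weights agreed by the group BEFORE any option is scored.
weights = {"impact": 0.4, "feasibility": 0.3, "alignment": 0.2, "risk": 0.1}

# Scores are 1-5 per criterion; "risk" is scored so higher = less risky.
options = {
    "Build in-house": {"impact": 5, "feasibility": 2, "alignment": 4, "risk": 3},
    "Buy vendor A":   {"impact": 4, "feasibility": 4, "alignment": 3, "risk": 4},
    "Buy vendor B":   {"impact": 3, "feasibility": 5, "alignment": 2, "risk": 5},
}

ranked = sorted(options, key=lambda o: weighted_score(options[o], weights), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(options[name], weights):.2f}")
# → Buy vendor A: 3.80 / Build in-house: 3.70 / Buy vendor B: 3.60
```

Note that most of the value is in agreeing on `weights` before scoring starts — the sum itself is trivial, but it keeps the later discussion anchored to the criteria the group already endorsed.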
The Async Advantage
Most teams only prioritise in meetings. Someone pulls up a list, the group discusses for an hour, and by the end they've debated three items thoroughly and speed-ranked the rest because time ran out. The items at the bottom of the list — which might include the most important option — never get fair consideration because the meeting ended before the group reached them.
Ranking individually before the meeting produces better outcomes for several reasons. First, there's no anchoring — nobody has heard the VP's opinion before they form their own. Second, there's no time pressure — people can take five minutes to think through each comparison rather than being rushed by a ticking clock. Third, quieter people contribute equally — the format doesn't reward assertiveness or verbal fluency. And fourth, the meeting itself becomes dramatically more productive, because you arrive with data instead of spending the first forty-five minutes generating it.
The practical workflow is simple. Share the ranking session link with your group. Give people 24 hours to complete their comparisons — it takes 5-10 minutes, so this is not a big ask. Then bring the group together for a 20-minute meeting to review the results, discuss surprises, resolve close calls, and assign next steps. You've replaced a painful 60-minute meeting with a painless 10-minute async task plus a focused 20-minute discussion. The output is better, and it takes less total time.
What to Do With the Output
A ranked list is a starting point, not a commitment. Its value lies not in the specific order but in the conversations it enables. "Why did this rank so low?" is a much more productive question than "What should our priorities be?" because it starts from shared data and asks for explanation, rather than starting from nothing and asking for consensus. The ranking gives the group a common reference point — something to react to rather than create from scratch.
Use the ranking to make trade-offs visible. "The group ranked these seven items. We have capacity for three. That means items 4 through 7 are explicitly deferred — not forgotten, not deprioritised in the passive-aggressive way where they stay on the list but never get staffed, but consciously set aside with a plan to revisit them next quarter." This kind of explicit deferral is uncomfortable, but it's the difference between a team that's focused and a team that's busy.
Finally, create accountability. The top-ranked items need owners, and those owners need clear expectations. "You own item 1. What does progress look like in two weeks?" The worst thing you can do is run a rigorous prioritisation exercise and then let the results gather dust. The ranking earned the group's trust by being fair — honour that trust by actually following through on what it produced. If you consistently ignore the ranking, people will stop taking the process seriously, and you'll be back to deciding by corridor conversation.