How2 prioritise

Let's face it: as a person building a product, you're constantly being asked to choose.

Which feature do we build next?

Which bug do we fix?

Which project gets resources?

If your prioritisation process feels like a mix of gut feelings and who shouts the loudest, you're not alone. But that approach is a one-way ticket to building a product that's more a collection of random features than a cohesive solution.

The good news? You don't have to guess. Prioritisation is designed to bring clarity, data, and objectivity to these tough decisions. If you choose a framework or method that works for your organisation, it can provide a common language for your team and stakeholders, moving the conversation from "my idea is best" to "based on our criteria, this is the most valuable thing to do now."

So, with so many frameworks out there, which one is right for you?

Most Popular Prioritisation Frameworks (and When to Use Them)

This is a tricky one: in my experience, there's no single "best" framework. The right one depends on your team's size, your product's maturity, your organisation's maturity, and your strategic goals. The most important thing is to find one that works, and to share it consistently, far and wide, so people understand it.

1. RICE

Reach, Impact, Confidence, Effort. This is a very well-known framework in both product and project circles. Each factor is scored, and the final score is calculated as: (Reach × Impact × Confidence) / Effort. It's excellent for balancing potential value with the cost of execution.

  • Best for: Teams of any size, especially those who want a simple, quantitative score to compare different initiatives.

  • When to use it: This framework is excellent when you need a data-driven approach to compare a large number of diverse ideas. The quantitative score makes it easy to defend your decisions to stakeholders and team members, as it’s grounded in numbers rather than opinion. It’s particularly useful for product-led companies that have access to good usage data.

  • When to avoid it: RICE is less effective when you don't have good data. If your reach or impact numbers are just educated guesses, the final score can be misleading. It can also lead to a "tyranny of the numbers" where the highest-scoring idea is chosen even if there's a strong qualitative argument against it. It's not a replacement for a deep understanding of user needs.
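The RICE calculation can be sketched in a few lines of Python. The scales in the comments follow one common convention (they're not the only way to score each factor), and the example numbers are made up:

```python
def rice(reach, impact, confidence, effort):
    """RICE score = (Reach x Impact x Confidence) / Effort."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Illustrative scales: reach = users per quarter, impact = 0.25-3,
# confidence = 0-1 (i.e. a percentage), effort = person-months.
print(rice(reach=2000, impact=2, confidence=0.8, effort=4))  # 800.0
```

Because the output is a single comparable number, you can sort a whole backlog by it, which is exactly what makes RICE easy to defend in front of stakeholders.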

2. MoSCoW

Must have, Should have, Could have, Won't have. This is a framework for getting a team to align on priorities quickly. Instead of a number, it places features into one of four buckets.

  • Best for: Projects with strict deadlines, where clarity on what is absolutely essential is critical.

  • When to use it: MoSCoW is perfect for projects with a tight deadline or when a key milestone needs to be hit. It's a fantastic tool for getting a team to quickly align on what is absolutely essential. It helps set clear expectations with stakeholders by moving from a vague wish list to a concrete, agreed-upon scope.

  • When to avoid it: This framework can become a problem if the "Must-haves" list grows too long. Without a strong facilitator to keep things in check, every stakeholder might argue their item is a "Must-have," defeating the purpose of the framework. It's also less useful for continuous product development where you aren't focused on a single delivery date.

3. ICE

Impact, Confidence, Effort. This is a simplified version of RICE that's probably best suited for early-stage products or when you're moving fast. It's often used for brainstorming and quickly ranking a list of ideas.

  • Best for: Startups, small teams, or when you need to make rapid-fire decisions based on limited data.

  • When to use it: ICE is the ideal framework for a fast-moving environment like a startup or a new product team. It's simple and quick to use, allowing you to prioritise a backlog of ideas in minutes rather than hours. It's great for brainstorming and a first-pass prioritisation.

  • When to avoid it: Because it's so quick, ICE can be very subjective. The scores for impact and confidence are often based on intuition, which can be heavily biased. It's not the best choice for making critical, high-stakes decisions where more robust data and discussion are needed.

4. The Value vs. Effort Matrix

This is one of the simplest and most visual prioritisation frameworks. Initiatives are plotted on a 2x2 grid based on their business value and the effort required to complete them. This allows you to quickly identify "Quick Wins" (high value, low effort) and "Big Bets" (high value, high effort).

  • Best for: Getting a team aligned in a workshop setting, or for a quick, high-level prioritisation of many ideas.

  • When to use it: This is a fantastic framework for group discussions and workshops. The visual nature of the matrix makes it easy for everyone—from engineers to marketing leads—to understand the trade-offs. It's excellent for identifying "Quick Wins" that can deliver value fast and build momentum.

  • When to avoid it: The matrix is a qualitative framework, and it can be difficult to agree on where to place an item. What one person considers "high value," another may see as "low." Without clear definitions for "value" and "effort," it can lead to endless debates instead of quick decisions.
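A minimal sketch of the 2x2 bucketing in code. The threshold and the names of the two low-value quadrants ("Fill-in", "Time Sink") are common conventions rather than part of the framework itself; only "Quick Wins" and "Big Bets" come from the description above:

```python
def quadrant(value, effort, threshold=3):
    """Place an initiative (each axis scored 1-5) into one of four buckets."""
    high_value = value >= threshold
    high_effort = effort >= threshold
    if high_value and not high_effort:
        return "Quick Win"   # high value, low effort
    if high_value and high_effort:
        return "Big Bet"     # high value, high effort
    if not high_value and not high_effort:
        return "Fill-in"     # low value, low effort
    return "Time Sink"       # low value, high effort

print(quadrant(value=5, effort=2))  # Quick Win
```

In a workshop you'd normally place sticky notes on a whiteboard rather than score things this precisely, but agreeing on explicit definitions of "value" and "effort" up front is what stops the endless debates mentioned above.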

5. The Kano Model

This framework is all about user emotion. It categorises features into three types:

  1. Must-haves: Features users expect and take for granted. Their absence causes dissatisfaction, but their presence doesn't lead to delight.

  2. Performance: Features where more is better. The more you invest in them, the happier the user will be.

  3. Delighters: Unexpected features that users don't even know they want. They create immense satisfaction but their absence doesn't cause dissatisfaction.

  • Best for: Understanding user needs on a deeper, more emotional level and ensuring you're not just building features but creating a great user experience.

  • When to use it: Use the Kano Model when your goal is to understand your users on a deeper, more emotional level. It's particularly powerful when you're trying to identify "Delighters"—those unexpected features that will give you a competitive edge. It helps you avoid just building features and instead focus on creating a great user experience.

  • When to avoid it: The Kano Model is a qualitative tool that relies on user research and surveys to be effective. It is not a good framework for making quick, tactical decisions, and it's not well-suited for prioritising technical debt or infrastructure projects that don't directly impact the user interface.

A Sophisticated Approach for a Scaling Team

As your organisation grows, a simple framework might not capture the full picture. The needs of a large company are complex, and you might need to account for a wider range of stakeholders and factors, from business goals to technical debt.

This is where custom scoring models come in. When working in a larger organisation, we developed a more complex scoring model to ensure a more holistic and objective approach. It helped us move past personal biases and treat every project fairly.

Our model looked at six key factors and scored each out of 5:

  • A: OKR Fit (1-5): How well does this initiative align with our quarterly or yearly Objectives and Key Results?

  • B: Positive Impact on Business (1-5): What is the financial or strategic benefit to the company (e.g., revenue, market share)?

  • C: Potential Reach (1-5): How many customers or users will this impact?

  • D: Impact on User (1-5): This factor was unique because we scored it based on the delight factor. A score of 5 meant it was a "must-have" that solved a major pain point, while a 1 was a "delighter" or a nice-to-have.

  • E: Confidence in Estimation (1-5): How confident are we in our estimates for business impact and technical effort?

  • F: Tech Effort (1-5): How much work will this require from the engineering team?

The final score was calculated using a formula that allowed us to weigh certain factors more heavily:

((A×B)+(C×D))×(E+F)

This formula allowed us to prioritise projects/initiatives that were a combination of high business value, high user impact, and high confidence, while also accounting for the total effort required.
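As a sketch, the formula translates directly into code. The factor names mirror A-F above, but the example initiatives and their scores are purely illustrative, not taken from a real backlog:

```python
def score(okr_fit, business_impact, reach, user_impact, confidence, tech_effort):
    """Custom model: ((A x B) + (C x D)) x (E + F), each factor scored 1-5."""
    factors = (okr_fit, business_impact, reach, user_impact, confidence, tech_effort)
    if not all(1 <= f <= 5 for f in factors):
        raise ValueError("each factor must be scored from 1 to 5")
    return ((okr_fit * business_impact) + (reach * user_impact)) * (confidence + tech_effort)

# Hypothetical backlog items, scored in a planning session.
backlog = {
    "New onboarding flow": score(5, 4, 4, 5, 3, 2),
    "Internal admin tooling": score(2, 3, 1, 2, 4, 3),
}
for name, s in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {s}")
```

Keeping the model in a shared spreadsheet (or a small script like this) also makes it trivial to re-weight a factor later, which matters for the point below about keeping the framework dynamic.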

The Most Important Rule: Your Framework Should Be Dynamic

The key takeaway is that you don't have to be a slave to any single framework. Whether you're a lean startup or a complex enterprise, the best prioritisation method is one that is transparent, understood by the whole team, and customised to your unique challenges.

Most importantly, your scoring model should be dynamic and adaptable. If business priorities change, your scoring model should change too. For instance, if your company is in a growth phase, you might give more weight to "Potential Reach." If you're focused on retention, "Impact on User" might become a more critical factor, so you could, for example, multiply that score by 2.

The truth is, it's not about how you do prioritisation, but that you do it. The value is in the process itself. By making your prioritisation process transparent, you bring everyone into the conversation and empower them with a shared understanding of why decisions are being made.

Final hints:

  • You must make it simple for people to see what's being prioritised and why.

  • You must commit to regularly re-evaluating your priorities and your scoring model.

  • What was a top priority last quarter may not be today, and that's okay.

So, is your next priority your… priorities? We know making the wrong decision can cost time and money, so if you need any expert guidance, please check out our services or reach out. We'd be happy to jump in, help you ruthlessly prioritise, and find a way to make it sustainable for you and your team.

Previous

How2 fight back if Corporate Red Tape is Killing Your Product?

Next

How2 not drown in stakeholders