Product Prioritization: RICE, ICE & OKR Framework Guide
By Marcus Adolfsson · Published January 31, 2026
The Art and Science of Prioritization: Lessons from Managing Multiple Product Portfolios
I started my career as a developer, writing code and building websites. Over two decades later, I'm managing product portfolios across multiple organizations. The journey from individual contributor to product leadership has taught me many lessons, but none more critical than this: prioritization is the make-or-break skill for product leadership.
When you're a developer, prioritization is straightforward: your backlog is ordered, and you work through it. When you're managing a single product, it gets more complex. When you're overseeing multiple product portfolios simultaneously, each with their own P&Ls, stakeholders, markets, and competitive pressures, prioritization becomes existential.
If there's one skill that separates good product leaders from great ones, it's the ability to prioritize ruthlessly and intelligently.
Early in my career, I thought prioritization was about making the "right" choices. I've since learned it's about making the best choices with the information you have, then having the courage to revisit those choices when the landscape shifts. That distinction matters more than most people realize.
Why Prioritization Matters More Than Ever
When you're managing a single product, prioritization is challenging. When you're juggling multiple products, each with their own roadmaps, stakeholders, and market pressures, it becomes existential. Every feature you build, every bug you fix, every experiment you run comes at the opportunity cost of something else.
I've seen product organizations grind to a halt not because they lacked good ideas, but because they tried to pursue too many simultaneously. I've also seen teams ship mediocre products on time by focusing relentlessly on what truly matters.
The difference? A shared understanding of what success looks like and a disciplined approach to deciding what gets built next.

The Multi-Portfolio Challenge: Lessons from iGaming
Managing product strategy for multiple products across different markets creates unique prioritization challenges: each market has its own regulatory requirements, competitive landscape, and customer expectations, and products in different regions may serve entirely different customer preferences.
The challenge isn't just managing complexity. It's maintaining strategic clarity when you're context-switching between radically different priorities multiple times per day.
Here's what I learned:
You need product-specific OKRs, but portfolio-level alignment. Each product team needs their own objectives, but they must ladder up to company-wide goals. Product-specific metrics should support a shared objective across the portfolio.
Resource allocation is zero-sum. The engineering team working on Product A isn't available for Product B. The designer focused on the mobile app isn't designing the web experience. When everything is a priority, nothing is a priority. WSJF (which I'll cover later) became essential for making these trade-offs transparent.
Customer journeys matter more than feature parity. Early in my career, I made the mistake of trying to achieve feature parity across products. Customers in Market A have feature X, so customers in Markets B and C should too, right? Wrong. What matters is whether each product delivers a complete, compelling customer journey for its specific market. Prioritize based on journey completion, not feature checklists.
The Foundation: OKRs and Shared Goals
Before we dive into prioritization frameworks, let's talk about the foundation that makes them work: Objectives and Key Results.
OKRs aren't just a planning tool. They're a communication tool that creates alignment across product teams, engineering, design, sales, and leadership. When everyone understands not just what we're building but why we're building it and how we'll measure success, prioritization conversations become exponentially easier.
Here's what I've learned about OKRs across multiple portfolios:
Clear objectives reduce bike-shedding. When stakeholders argue about which feature to build next, grounding the conversation in objectives cuts through the noise. Does this feature move us closer to our revenue target? Does it improve our activation rate? If not, it goes to the backlog.
Key results create accountability. Vague objectives like "improve the user experience" invite endless debate. Specific key results like "reduce onboarding time" or "achieve target NPS" give your team clear targets to rally around.
Re-prioritization requires re-scoring. This is crucial and often overlooked. When your objectives change (and they will change), your prioritization scores become stale. I've seen teams waste months building features that scored high against last quarter's objectives but are irrelevant to this quarter's reality.
I make it a practice to review our OKRs monthly with each product team. If objectives have shifted significantly (say we've pivoted from growth to retention, or from one market to another), we re-score everything in the backlog. It's painful but necessary. Better to spend a day re-evaluating your roadmap than three months building the wrong thing.
The Toolbox: Prioritization Frameworks
Over the years, I've experimented with virtually every prioritization framework out there. Each has its strengths and contexts where it shines. Let me walk you through the ones worth knowing.
ICE: Impact, Confidence, Ease
Formula: Impact × Confidence × Ease
ICE is beautifully simple. For each initiative, you score three factors on a scale of 1-10:
Impact: How much will this move the needle?
Confidence: How certain are we about the impact?
Ease: How simple is this to implement?
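To make the mechanics concrete, here's a minimal sketch of an ICE calculation in Python. The initiative and its scores are hypothetical:

```python
# A minimal ICE scorer: each factor rated 1-10, multiplied together.
def ice_score(impact: int, confidence: int, ease: int) -> int:
    return impact * confidence * ease

# Hypothetical initiative: big impact, decent confidence, painful to build.
print(ice_score(impact=8, confidence=6, ease=3))  # 144
```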
When I use it: Early-stage products or when you need quick, gut-level prioritization. It's great for getting stakeholders aligned without drowning in spreadsheets.
The limitation: It weights all three factors equally, which doesn't always reflect reality. A high-impact, high-confidence feature that's moderately difficult might score lower than a low-impact quick win.
RICE: Reach, Impact, Confidence, Effort
Formula: (Reach × Impact × Confidence) / Effort
RICE is my go-to framework for most situations, and I'll explain why in a moment. It extends ICE by adding Reach: how many users or customers will this affect?
Each component:
Reach: Number of users/customers affected per time period, expressed either as a percentage of your user base or as an absolute count.
Impact: How much will this improve their experience? (0.25 = minimal, 0.5 = low, 1 = medium, 2 = high, 3 = massive)
Confidence: How certain are we? (Percentage: 100% = high, 80% = medium, 50% = low). We need data to back up high values on this.
Effort: Person-months required. I've used this in different formats, where effort reflects the number of teams or studios involved.
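As a sketch, here's how those scales might be encoded; the mappings mirror the values above, and the example feature is hypothetical:

```python
# RICE = (Reach x Impact x Confidence) / Effort, using the scales above.
IMPACT = {"minimal": 0.25, "low": 0.5, "medium": 1, "high": 2, "massive": 3}
CONFIDENCE = {"high": 1.0, "medium": 0.8, "low": 0.5}

def rice_score(reach: float, impact: str, confidence: str, effort_pm: float) -> float:
    """reach = users/customers affected per period; effort_pm = person-months."""
    return reach * IMPACT[impact] * CONFIDENCE[confidence] / effort_pm

# Hypothetical: 5,000 users per quarter, high impact, medium confidence, 4 person-months.
print(rice_score(5_000, "high", "medium", 4))  # 2000.0
```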
Why I prefer RICE: It forces you to think about scale. A feature that delights 100 users isn't the same as one that delights 100,000 users. RICE makes that explicit.
It also separates effort from ease. "Easy" is subjective and political. "This will take three person-months" is objective and harder to manipulate.
Real example: We once debated two features for our onboarding flow. Feature A would dramatically improve conversion for a high-value customer segment. Feature B would modestly improve conversion for all new customers.
Stakeholders passionately argued for Feature A - naturally, since these customers represented significant revenue. ICE scoring would have ranked them similarly. But RICE clearly showed Feature B had significantly higher total impact due to reach. We shipped B first, captured the value, then built A. Both got built. One just created more value sooner.
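The figures below are illustrative, not the real numbers from that debate, but they show how reach dominates the comparison:

```python
def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

# Feature A: high impact (2.0) for a small, high-value segment.
feature_a = rice(reach=2_000, impact=2.0, confidence=0.8, effort=3)   # ~1,067
# Feature B: modest impact (0.5) for every new customer.
feature_b = rice(reach=40_000, impact=0.5, confidence=0.8, effort=3)  # ~5,333
```

Even at a quarter of the per-user impact, B's twenty-fold reach puts its score roughly five times higher.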
MoSCoW: Must Have, Should Have, Could Have, Won't Have
MoSCoW isn't really a scoring system. It's a categorization framework that forces binary decisions.
When I use it: Release planning, especially when there's a hard deadline. MoSCoW prevents scope creep by making you commit to what's absolutely essential versus what's nice-to-have.
The catch: It requires discipline. Everyone wants their feature to be "Must Have." You need strong product leadership to enforce the categories honestly.
I typically use MoSCoW after RICE scoring. RICE helps me rank features objectively; MoSCoW helps me draw lines for specific releases.
WSJF: Weighted Shortest Job First
Formula: Cost of Delay / Job Duration
WSJF comes from SAFe (Scaled Agile Framework) and is particularly useful for managing portfolios where different products compete for shared engineering resources.
Cost of Delay factors in:
User/Business Value
Time Criticality (does this opportunity have a window?)
Risk Reduction/Opportunity Enablement (does this unlock future work?)
When I use it: Portfolio-level decisions where we're allocating teams across products. It's especially powerful when you have time-sensitive opportunities - say a partnership launch, competitive response, or regulatory deadline.
Engineering resources are often shared across products. Every sprint requires decisions about resource allocation between competing priorities.
Example: Product A had a feature that would generate significant annual revenue. Product B had a feature generating less revenue. On pure revenue, you'd pick A.
But Product A's feature required substantial development time. Product B's feature was much quicker to implement. WSJF, accounting for cost of delay, made Product B the clear winner. Ship B, start generating revenue quickly, then build A. The total delay cost of building B first was minimal; the opportunity cost of building A first was huge.
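Here's the same trade-off as a sketch, using hypothetical SAFe-style relative scores, where cost of delay is the sum of the three factors listed above:

```python
def wsjf(value: int, time_criticality: int, risk_opportunity: int, duration: int) -> float:
    cost_of_delay = value + time_criticality + risk_opportunity
    return cost_of_delay / duration

# Product A: bigger revenue, but much longer to build.
product_a = wsjf(value=9, time_criticality=4, risk_opportunity=3, duration=8)  # 2.0
# Product B: smaller revenue, quick to ship.
product_b = wsjf(value=6, time_criticality=5, risk_opportunity=2, duration=2)  # 6.5
```

Per unit of team time, delaying B costs more than three times as much as delaying A, so B ships first.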
Others Worth Knowing
Kano Model: Categorizes features as basic expectations, performance attributes, or delighters. Great for understanding customer psychology but harder to quantify.
Value vs. Complexity Matrix: Simple 2x2 grid. Quick wins in one quadrant, strategic bets in another. Good for stakeholder presentations but lacks nuance.
Buy-a-Feature: Stakeholders get "budget" to "buy" features. Reveals true priorities when people have to make trade-offs. Fun exercise but gimmicky for regular use.
My Recommendation: Start with RICE, Adapt as Needed
If you're new to structured prioritization, start with RICE. Here's why:
It's data-friendly. Unlike ICE's subjective "ease" or MoSCoW's binary categories, RICE encourages you to gather actual numbers. How many users does this reach? How many person-months will it take?
It scales. RICE works for individual features, epics, and entire initiatives. You can use it within a single product team or across a portfolio.
It exposes assumptions. When you force yourself to quantify reach, impact, and confidence, you quickly discover what you actually know versus what you're assuming. That's valuable regardless of which features you ultimately build.
It's flexible. You can adjust the impact scoring scale to your context. For enterprise products, maybe you weight retention impact higher than acquisition. For consumer products, maybe the reverse.
That said, I rarely use RICE in isolation. My typical workflow:
Quarterly: Re-validate OKRs with leadership and product teams
Monthly: RICE scoring session for all proposed initiatives
Sprint planning: MoSCoW categorization for the next release, informed by RICE scores
Ad hoc: WSJF analysis when we need to reallocate resources across products
The Data-Driven Difference
Here's the truth about prioritization frameworks: they're only as good as the data you feed them.
I've seen teams meticulously score features using RICE, then realize their "reach" numbers were pulled from thin air. Or their "impact" estimates were based on what stakeholders hoped would happen, not what data suggested would happen.
This is where product maturity shows. Junior PMs guess. Senior PMs measure.
Better data on impact: Look at past features. When you shipped that onboarding improvement, did activation actually increase as expected? Use that track record to calibrate future impact estimates.
In iGaming, we had abundant data: every interaction and customer journey was tracked. But we also had the curse of too much data. I learned to focus on leading indicators that actually correlated with our objectives, rather than vanity metrics.
Better data on reach: Instrument your product. Know exactly how many users interact with different flows, features, and entry points. Segment by user type, acquisition channel, subscription tier. Reach isn't "all users"; it's the specific cohort this feature serves.
One mistake I see repeatedly: teams count total users when they should count affected users. Segment your reach data to understand the actual impact.
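As a sketch with a made-up event log, the difference between total and affected users is a single filter:

```python
# Hypothetical usage records; in practice this comes from your analytics store.
users = [
    {"id": 1, "tier": "free", "uses_checkout_flow": True},
    {"id": 2, "tier": "paid", "uses_checkout_flow": False},
    {"id": 3, "tier": "paid", "uses_checkout_flow": True},
]

total_users = len(users)
# Reach for a checkout feature counts only the cohort that touches that flow.
affected_users = sum(1 for u in users if u["uses_checkout_flow"])
print(f"reach: {affected_users} of {total_users} users")  # reach: 2 of 3 users
```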
Better data on effort: Track actual delivery time versus estimates. Engineering teams notoriously underestimate complexity. Build a historical database of effort estimates versus actuals. Adjust your scoring accordingly.
We started tracking our estimation accuracy and improved significantly over time. The secret? We stopped estimating in abstract story points and started estimating in reference to similar past work.
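A minimal sketch of that calibration, assuming you keep a log of estimated versus actual person-months (the numbers here are invented):

```python
# (estimated person-months, actual person-months) for past initiatives.
history = [(2.0, 3.0), (1.0, 1.5), (4.0, 5.0), (3.0, 4.5)]

# Team-level overrun factor: how much longer work takes than estimated.
overrun = sum(actual for _, actual in history) / sum(est for est, _ in history)

def calibrated_effort(raw_estimate: float) -> float:
    """Scale a fresh estimate by the historical overrun factor."""
    return raw_estimate * overrun

print(f"overrun factor: {overrun:.2f}")  # overrun factor: 1.40
print(calibrated_effort(2.0))            # 2.8 person-months
```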
Better data on confidence: This is the most honest score. If you haven't talked to users about this problem, confidence should be low. If you've run experiments, interviewed customers, and validated the prototype, confidence can be high.
I've learned to challenge high confidence scores with a simple question: "What could we learn that would change our mind?" If the team can't answer that, the confidence is probably overinflated.
The more data you have, the more your prioritization becomes science rather than art.

Cross-Functional Leadership: Making Prioritization Stick
Frameworks and data are necessary but not sufficient. You also need organizational buy-in.
Prioritization decisions don't happen in a product vacuum; they require alignment across engineering, design, sales, operations, and executive leadership.
Engineering needs to understand why, not just what. I made it a practice to share not just the prioritized backlog but the scoring rationale. "Here's why we're building Feature X before Feature Y. Here's the RICE score. Here's the OKR it supports. Here are the assumptions we're making."
Transparency builds trust. When priorities inevitably shift, engineering teams are more understanding if they've been part of the reasoning process all along.
Sales and customer success need realistic expectations. I've been in countless meetings where sales promises a customer a feature "next quarter" without checking the roadmap. Then the product team gets blamed when we don't deliver.
My solution: quarterly roadmap reviews with sales leadership where we walk through the prioritized backlog, explain the scoring, and identify dependencies. Sales gets visibility. Product gets feedback on market needs. Everyone aligns on what's realistic.
Leadership needs to see portfolio-level trade-offs. Executive stakeholders don't want to micromanage feature prioritization. What they want is confidence that you have a rational process and that you're maximizing ROI across the portfolio.
Quarterly portfolio reviews should show:
OKRs for each product and their progress
Top initiatives per product, with RICE scores
Resource allocation across products
Trade-offs being made and why
This level of transparency turns prioritization from a black box into a strategic conversation.
From Tactical to Strategic: The Evolution of Prioritization
One pattern I've noticed: the nature of prioritization changes as you move from IC to leadership roles.
As a developer: Prioritization was simple. My manager gave me a prioritized list. I built the things at the top.
As a Product Manager: I was prioritizing features for a single product. The question was: "Which features will best serve our customers and move our metrics?"
As Head of Product: I was prioritizing across multiple workstreams - acquisition features, retention features, operational improvements, regulatory compliance. The question became: "How do we balance different types of work to maximize overall product health?"
As Director of Product: I was prioritizing across multiple products in different markets. The question evolved to: "How do we allocate shared resources across products to maximize portfolio value?"
At the executive level: I'm prioritizing at the strategic level - which product lines to invest in, which markets to enter or exit, which capabilities to build versus buy. The question is now: "How do we position the portfolio for sustainable competitive advantage?"
The frameworks are the same. RICE works at every level. But what you're scoring changes.
At the feature level, you're scoring individual user stories. At the product level, you're scoring epics and initiatives. At the portfolio level, you're scoring entire products or market entries.
The skill that transfers across all these levels? Customer journey thinking. Whether prioritizing small changes or major platform investments, the question remains: "How does this improve the end-to-end customer experience? Where are the friction points? What's preventing customers from getting value?"
When to Re-Prioritize (And How to Know)
One of the hardest lessons I've learned: yesterday's priorities aren't always today's priorities.
Markets shift. Competitors launch. Customers churn for unexpected reasons. A key partnership falls through or a new one materializes. Your biggest customer suddenly needs a feature you'd deprioritized.
Re-prioritization triggers I watch for:
OKR changes: If your objectives shift, your backlog scores are instantly stale
Market shocks: New competitor, regulatory change, economic shift
Learning from shipped features: If your impact estimates were systematically wrong, recalibrate
Resource changes: Team size changes, key person leaves, new skillsets join
Customer feedback patterns: When the same request comes from multiple high-value customers
How to re-prioritize without chaos:
I've made the mistake of constantly shifting priorities based on the latest fire drill. It destroys team morale and velocity.
My rule: OKRs and top-level priorities are stable quarterly. Mid-level priorities can shift monthly. Only true emergencies change priorities mid-sprint.
When re-prioritization is needed, I:
Acknowledge the change explicitly with the team. Don't pretend we always planned this.
Re-score affected items using the same framework we used initially.
Show the comparison: here's what we thought three months ago, here's what we know now, here's why our priorities have shifted.
Make trade-offs visible. If we're building Feature X now, Feature Y moves to next quarter. Everyone needs to see and accept that trade-off.
Closing Thoughts
From writing code as a developer to leading product portfolios across multiple organizations, I've learned that prioritization isn't just about frameworks and formulas. It's about clarity, honesty, and courage.
Clarity about what success looks like. Honesty about what you know versus what you're guessing. Courage to say no to good ideas in service of great ones.
The best prioritization framework is the one your team actually uses consistently. Whether that's RICE, ICE, WSJF, or something else, what matters is having shared language for making trade-offs and the discipline to revisit those decisions when the facts change.
Through various roles in product leadership, one constant remains: the teams that ship great products are the teams that prioritize ruthlessly.
At Luminbrane, we work with product teams who are struggling with exactly these challenges. Whether you're managing a single product trying to find product-market fit, or overseeing a portfolio of products competing for resources, the fundamentals remain the same.
Start with data. Build on frameworks. Stay aligned on goals. And remember: the roadmap is never finished. It's a living document that reflects our best current understanding of how to create value for customers and businesses.
What prioritization challenges are you facing in your product work? Whether you're a PM trying to get stakeholder buy-in, a product leader managing multiple teams, or an executive trying to allocate resources across a portfolio, I'd love to hear how you're approaching these decisions.