PM fundamentals in an AI world

Unit 06 of 12

Unit 6: Prioritization: deciding what matters most

Learning objectives

By the end of this unit, you should be able to apply at least two prioritization frameworks, explain why saying no is the PM's most important skill, and evaluate trade-offs between competing priorities using structured reasoning.

Reading material

The art of saying no

Saying no is uncomfortable. People took the time to make a request. They believe in it. Saying no feels dismissive. So most PMs avoid it by saying "not now" or "it's on the backlog" or "let me look into it." These are all ways of saying no without saying no, and they create two problems: the requester thinks their thing is coming (it isn't), and the backlog becomes a graveyard of false promises.

A better approach is what I call "transparent prioritization." When you say no, explain why in terms of the strategy. "We're focused on reducing onboarding friction this quarter because that's our biggest leverage point for retention. Your request addresses a different problem, and while it's valid, it's not the highest-impact use of our team's time right now."

This is honest. It gives the requester context they can evaluate. And it opens a conversation: if they have information that changes the trade-off (maybe the feature would affect onboarding in ways you hadn't considered), they can share it.

Prioritization with AI features

Prioritizing AI features adds a layer of complexity because the feasibility and impact of AI features are harder to estimate.

Feasibility uncertainty is higher. An AI feature might work 90% of the time in testing and 70% in production. The gap between prototype and production-ready is wider and less predictable than for traditional features.

Impact is harder to forecast. Users might not adopt an AI feature even if it works, because they don't trust it, don't understand it, or don't need it. User research and prototype testing are more important for AI features than for traditional ones.

Maintenance cost is ongoing. AI features require monitoring, tuning, and data pipeline maintenance after launch. Factor this into your effort estimates.

My recommendation: treat AI features as high-uncertainty bets and apply a discount to your confidence scores. This naturally pushes them toward smaller, more testable increments rather than large, all-or-nothing launches.
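The discount idea can be sketched in a few lines of Python. Note that the RICE formula here (Reach × Impact × Confidence ÷ Effort), the 0.7 discount factor, and the example estimates are all illustrative assumptions, not numbers prescribed by this course.

```python
# Sketch: discounting confidence for high-uncertainty AI bets.
# The 0.7 discount and the example estimates below are illustrative assumptions.

def rice_score(reach, impact, confidence, effort):
    """RICE score: (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort

AI_DISCOUNT = 0.7  # assumed flat discount for AI-feature uncertainty

def discounted_confidence(confidence, is_ai_feature):
    """Lower the confidence estimate when the item is an AI feature."""
    return confidence * AI_DISCOUNT if is_ai_feature else confidence

# The same raw estimates, scored with and without the AI discount:
baseline = rice_score(reach=8, impact=2, confidence=0.8, effort=4)
ai_bet = rice_score(reach=8, impact=2,
                    confidence=discounted_confidence(0.8, is_ai_feature=True),
                    effort=4)
print(baseline, ai_bet)  # the discounted AI bet scores lower
```

Because the discount lowers the score, large AI bets sink in the ranking until you break them into smaller experiments that restore your confidence.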

Beyond frameworks: building prioritization judgment

Frameworks are training wheels. They help you learn the thinking patterns, but experienced PMs internalize these patterns and don't need to run every decision through a scoring model.

What experienced PMs do instead: they maintain a strong mental model of their product strategy and use it as a filter. When a new request comes in, they can quickly evaluate whether it fits the strategic direction, how it compares to alternatives already in consideration, and whether the timing is right. This judgment comes from practice, from making prioritization decisions and seeing the results over time.

The best way to build this judgment: every time you prioritize something, write down why. After it ships, check whether your reasoning was correct. Did the high-impact feature actually have high impact? Did the easy fix actually turn out to be easy? Over time, you'll calibrate your estimates and develop better intuition.

Practical exercise

Exercise: Prioritize a backlog

Here's a fictional backlog of five items for a project management SaaS product. Use RICE to score each one, then use the impact-effort matrix to plot them. Compare the two results.

  1. AI-powered task assignment (suggests who should own a task based on team history)
  2. Calendar integration (sync project deadlines with Google Calendar / Outlook)
  3. Custom dashboard builder (let users create their own metrics views)
  4. Mobile app improvements (faster load times, offline access)
  5. Slack integration for task notifications

For each item, estimate Reach (how many users would be affected, on a 1-10 scale), Impact (how much it would improve their experience, on a 1-3 scale), Confidence (how sure you are about these estimates, 0-100%), and Effort (person-months to build; lower is better). The RICE score is Reach × Impact × Confidence ÷ Effort, so a higher score means more expected value per unit of effort.
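If you want to sanity-check your arithmetic, the scoring can be done in a short Python script. Every estimate below is a placeholder to replace with your own; only the RICE formula (Reach × Impact × Confidence ÷ Effort) is fixed.

```python
# Placeholder RICE estimates for the exercise backlog; substitute your own.
backlog = {
    "AI-powered task assignment": dict(reach=5, impact=3, confidence=0.5, effort=6),
    "Calendar integration":       dict(reach=8, impact=2, confidence=0.9, effort=3),
    "Custom dashboard builder":   dict(reach=4, impact=2, confidence=0.7, effort=5),
    "Mobile app improvements":    dict(reach=7, impact=2, confidence=0.8, effort=4),
    "Slack integration":          dict(reach=6, impact=1, confidence=0.9, effort=2),
}

def rice(est):
    """RICE score: Reach * Impact * Confidence / Effort."""
    return est["reach"] * est["impact"] * est["confidence"] / est["effort"]

# Rank the backlog from highest to lowest score.
for name in sorted(backlog, key=lambda k: rice(backlog[k]), reverse=True):
    print(f"{rice(backlog[name]):5.2f}  {name}")
```

With these placeholder numbers, calendar integration comes out on top; the point of the exercise, though, is whether your own estimates and your gut agree.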

After scoring, write a short recommendation: what would you build first and why? Does the framework output match your gut feeling? Where do they differ, and what does that tell you?