PM fundamentals in an AI world

Unit 11 of 12

Unit 11: Building AI-aware product sense

Learning objectives

By the end of this unit, you should be able to evaluate AI features with a product lens (not just a technical one), recognize common AI product pitfalls, and develop intuition for when AI adds genuine user value and when it merely adds complexity.

Reading material

Evaluating AI feature ideas

When someone proposes an AI feature, ask these questions before committing to build it.

What problem does this solve, and does it require AI? Sometimes the problem has a simpler solution. Don't use AI just because it's available. Use it because it's the best approach to the problem.

What does failure look like for the user? An AI feature that's wrong 10% of the time might be fine for low-stakes suggestions and terrible for high-stakes decisions. The failure mode determines the quality bar.

Can users tell when the AI is wrong? If users can evaluate the output (e.g., a text summary they can read and verify), the accuracy bar is lower because they'll catch mistakes. If they can't evaluate the output (e.g., a risk score based on data they haven't seen), the accuracy bar is much higher.

What's the user's fallback? If the AI feature doesn't work well, what does the user do instead? If the fallback is easy (manually type instead of accepting the suggestion), adoption pressure is lower. If the fallback is painful (redo the entire workflow), the feature had better work well.

Does this create value or just novelty? Some AI features are impressive on first use but don't integrate into real workflows. If users won't use it after the first week, it's novelty, not value.

The trust design toolkit

Trust is the single biggest factor in AI feature adoption. Here are design approaches that build it.

Show your work. When the AI makes a suggestion, show why. "Recommended because similar projects took 3 weeks" is better than "Estimated: 3 weeks." The explanation doesn't need to be technically detailed. It just needs to give the user a mental model for evaluating the suggestion.

Make it easy to correct. If the AI gets it wrong, the user should be able to fix it with minimal effort. Worse: the user has to fight the AI to change its suggestion. Better: a single click overrides the suggestion, and the AI learns from the correction.

Set expectations honestly. Don't present AI features as infallible. "This summary might miss some details" is better than an unmarked output that the user assumes is complete. Honest framing builds more trust than false confidence.

Degrade gracefully. When the AI can't produce a good result (low confidence, insufficient data), show that clearly rather than presenting a low-quality output. "Not enough data to make a recommendation" is better than a bad recommendation.
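These two patterns (show your work, degrade gracefully) can be sketched in a few lines of code. This is an illustrative sketch only; the function name, the confidence threshold, and the message wording are all hypothetical, not a prescribed implementation.

```python
# Illustrative sketch: present an AI estimate with its rationale,
# and degrade gracefully when confidence is too low to be useful.
# CONFIDENCE_FLOOR is a hypothetical threshold, tuned per feature.
CONFIDENCE_FLOOR = 0.7

def present_estimate(estimate_weeks, confidence, similar_projects):
    """Return user-facing text for an AI duration estimate."""
    if confidence < CONFIDENCE_FLOOR or not similar_projects:
        # Degrade gracefully: admit uncertainty rather than
        # presenting a low-quality guess as a recommendation.
        return "Not enough data to make a recommendation."
    # Show your work: pair the number with a plain-language
    # rationale the user can sanity-check.
    return (
        f"Estimated: {estimate_weeks} weeks — recommended because "
        f"{len(similar_projects)} similar projects took about this long."
    )

print(present_estimate(3, 0.9, ["project-a", "project-b"]))
print(present_estimate(3, 0.4, ["project-a"]))
```

The design choice worth noticing: the low-confidence branch returns an honest "no recommendation" message instead of the estimate, which is exactly the trade described above — a visible gap builds more trust than a bad answer.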

Practical exercise

Exercise: AI feature audit

Choose a product you use that has AI features (email, note-taking, project management, or anything with AI-powered suggestions, summaries, or automation).

Evaluate one AI feature using these criteria:

  1. What problem does it solve? Could it be solved without AI?
  2. What does failure look like? How often does the AI get it wrong?
  3. Can you tell when the AI is wrong? How?
  4. Does the feature show its reasoning?
  5. How easy is it to correct the AI?
  6. Does the feature feel like genuine value or novelty?
  7. Would you miss it if it were removed?

Write a brief product review (about 300 words) evaluating the feature through this lens. This is the kind of analysis that builds AI product sense over time.