AI product discovery and strategy

Unit 07 of 10

Unit 7: Working with ML engineers: a PM's guide

Learning objectives

- Understand how ML development differs from traditional software development.
- Communicate product requirements effectively to ML teams.
- Manage expectations around AI feature timelines and quality.

Reading material

The PM-ML collaboration model

Kickoff: Define the problem, not the solution. Start with the user problem and the success criteria. Let the ML team assess feasibility and propose approaches. Their initial assessment will tell you a lot about the timeline and risk.

Spike phase: Prove it's possible. Before committing to a full build, ask for a technical spike (1-2 weeks) to test whether the approach works with real data. This is the cheapest risk reduction available. Many AI features fail at this stage, which is the right time to fail.
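One concrete check a spike often starts with: measure what a trivial baseline scores on a real data sample, so the team knows what any model must beat before quality targets mean anything. This is an illustrative sketch, not a prescribed spike procedure; the function name and sample data are made up.

```python
from collections import Counter

def baseline_accuracy(labels):
    """Accuracy of always predicting the most common label in the sample.

    If a proposed model can't clearly beat this number on real data,
    the spike has surfaced a feasibility risk early and cheaply.
    """
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# Hypothetical labeled sample pulled from real data during the spike.
sample = ["spam", "ok", "ok", "ok", "spam", "ok"]
print(baseline_accuracy(sample))  # 4 of 6 examples are "ok"
```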

Build phase: Iterate toward quality. Development proceeds in iterations. Each iteration improves quality toward the agreed threshold. The PM's role here is to provide ongoing feedback from the product perspective and test intermediate versions with users.

Launch phase: Ship with monitoring. Launch with explicit monitoring for accuracy, trust metrics, and user behavior. Plan for a tuning period after launch where the team refines the model based on production data.
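The accuracy monitoring described above can be sketched in a few lines, assuming the product collects feedback events that pair a model prediction with an eventual ground-truth outcome. The class name, window size, and alert threshold here are illustrative assumptions, not part of the course material.

```python
from collections import deque

class AccuracyMonitor:
    """Rolling accuracy over the most recent labeled feedback events."""

    def __init__(self, window_size=100, alert_threshold=0.85):
        self.window = deque(maxlen=window_size)  # recent correctness flags
        self.alert_threshold = alert_threshold   # the agreed quality bar

    def record(self, prediction, actual):
        # Each feedback event becomes a True/False correctness flag.
        self.window.append(prediction == actual)

    def rolling_accuracy(self):
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

    def needs_attention(self):
        # Flag the feature for the tuning period when quality dips
        # below the threshold agreed with the ML team.
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.alert_threshold

# Usage: five hypothetical feedback events, three of them correct.
monitor = AccuracyMonitor(window_size=10, alert_threshold=0.8)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 1), (0, 1)]:
    monitor.record(pred, actual)
print(monitor.rolling_accuracy())  # 0.6
```

In practice the same pattern extends to trust metrics and user-behavior signals; the point is that monitoring is defined before launch, not improvised after.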

Ongoing: Maintain and improve. AI features are never "done." Data changes, user needs evolve, and model performance shifts. Budget ongoing capacity for maintenance and improvement.

Common PM-ML tensions

"When will it be ready?" ML engineers struggle with timeline estimates because output quality depends on data characteristics they can't predict. Instead of asking for a ship date, ask: "What are the major unknowns, and when will we resolve each one?"

"Can't you just make it more accurate?" Accuracy improvements often have diminishing returns. Going from 80% to 90% might take two weeks. Going from 90% to 95% might take two months. Going from 95% to 99% might require a fundamentally different approach. Understand where you are on this curve.

"The model works in testing but not in production." This is common because test data is cleaner and less representative than the messy, shifting data users generate in production. Plan for a performance gap between testing and production, and monitor it closely after launch.
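Quantifying that gap is straightforward once a slice of production traffic has been hand-labeled. This sketch assumes per-example correctness flags from both evaluations; the numbers are made up for illustration.

```python
def performance_gap(test_correct, prod_correct):
    """Return (test accuracy, production accuracy, gap) from two lists
    of per-example correctness flags (True/False)."""
    test_acc = sum(test_correct) / len(test_correct)
    prod_acc = sum(prod_correct) / len(prod_correct)
    return test_acc, prod_acc, test_acc - prod_acc

# Hypothetical results: 92% on the offline test set,
# 84% on a hand-labeled sample of production traffic.
test_flags = [True] * 92 + [False] * 8
prod_flags = [True] * 84 + [False] * 16
test_acc, prod_acc, gap = performance_gap(test_flags, prod_flags)
print(f"test={test_acc:.2f} prod={prod_acc:.2f} gap={gap:.2f}")
```

Tracking this number over time tells you whether the post-launch tuning period is closing the gap or whether production data has drifted further from the test set.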

Practical exercise

Exercise: Write an AI feature brief for ML engineers

Write a one-page feature brief for an AI feature idea. Structure it so an ML engineer can evaluate feasibility.

Include: the user problem, the desired user experience, the success criteria (with specific quality thresholds), the available data (what exists, what's missing), constraints (latency requirements, cost limits), and open questions for the ML team.
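One way to keep yourself honest about completeness is to treat the brief as structured data. This is a sketch only: the class and field names mirror the sections listed above but are not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AIFeatureBrief:
    user_problem: str        # the problem, not the solution
    desired_experience: str  # what the user sees and does
    success_criteria: dict   # metric name -> specific quality threshold
    available_data: dict     # {"exists": [...], "missing": [...]}
    constraints: dict        # e.g. latency requirements, cost limits
    open_questions: list     # questions for the ML team to answer

    def is_complete(self):
        # A brief an ML engineer can evaluate needs every section filled in.
        return all([self.user_problem, self.desired_experience,
                    self.success_criteria, self.available_data,
                    self.constraints, self.open_questions])

# Usage with hypothetical content:
brief = AIFeatureBrief(
    user_problem="Support agents re-read long ticket threads",
    desired_experience="One-paragraph summary at the top of each ticket",
    success_criteria={"summary_acceptance_rate": 0.8},
    available_data={"exists": ["ticket threads"], "missing": ["summary labels"]},
    constraints={"latency_seconds": 2},
    open_questions=["Is the missing label data a blocker for a spike?"],
)
print(brief.is_complete())  # True
```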

The goal is to practice translating product requirements into a format that supports productive technical collaboration.