Unit 4: Designing for trust: the UX of AI features
Learning objectives
- Understand why trust is the primary adoption barrier for AI features.
- Apply trust-building design patterns to AI product concepts.
- Design appropriate levels of transparency for different AI use cases.
Video script
Reading material
Trust patterns in practice
Confidence indicators. Show users how confident the AI is in its output. A simple "high/medium/low confidence" label helps users calibrate their review effort. High confidence? Quick scan. Low confidence? Careful review.
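A minimal sketch of this pattern, assuming the model exposes a numeric confidence score between 0.0 and 1.0; the thresholds here are illustrative placeholders, not recommended values:

```python
def confidence_label(score: float) -> str:
    """Bucket a raw model confidence score (0.0-1.0) into a user-facing label.

    The 0.9 / 0.6 thresholds are hypothetical; calibrate them against how
    often each bucket's outputs actually need correction in your product.
    """
    if score >= 0.9:
        return "high"    # quick scan is usually enough
    if score >= 0.6:
        return "medium"  # skim, with attention to key details
    return "low"         # careful review recommended
```

The point is the mapping, not the math: users don't need the raw score, they need a cue for how much review effort to invest.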
Before/after views. When AI modifies user content (auto-correction, reformatting, summarization), show what changed. Highlight the differences so users can verify the AI's work without reading the entire output.
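One way to sketch the underlying mechanics, using Python's standard `difflib` to compute word-level changes the UI could then highlight (the output format here is invented for illustration):

```python
import difflib

def summarize_changes(before: str, after: str) -> list[str]:
    """Return a word-level list of edits so the UI can highlight only
    what the AI changed, instead of asking the user to reread everything."""
    old, new = before.split(), after.split()
    changes = []
    matcher = difflib.SequenceMatcher(None, old, new)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            changes.append(f"'{' '.join(old[i1:i2])}' -> '{' '.join(new[j1:j2])}'")
        elif op == "delete":
            changes.append(f"removed '{' '.join(old[i1:i2])}'")
        elif op == "insert":
            changes.append(f"added '{' '.join(new[j1:j2])}'")
    return changes
```

For example, `summarize_changes("the quick brown fox", "the slow brown fox")` yields `["'quick' -> 'slow'"]`: one highlighted edit to verify rather than a full paragraph to reread.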
Progressive autonomy. Start with the AI as a suggestion engine (user decides whether to accept). As users build trust, offer more automated options (auto-apply with undo). Let users choose their comfort level. Some will want full automation quickly. Others will want to review every suggestion for months. Both behaviors are valid.
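The two ends of that spectrum can be sketched as a routing decision; the mode names and return strings below are hypothetical, for illustration only:

```python
from enum import Enum

class Autonomy(Enum):
    SUGGEST = "suggest"        # user approves each change
    AUTO_APPLY = "auto_apply"  # changes land immediately, but are undoable

def handle_suggestion(suggestion: str, mode: Autonomy, undo_stack: list) -> str:
    """Route an AI suggestion according to the user's chosen autonomy level.

    In SUGGEST mode the suggestion is surfaced for approval; in AUTO_APPLY
    mode it is applied at once but pushed onto an undo stack, so higher
    autonomy never means losing the ability to reverse a change.
    """
    if mode is Autonomy.SUGGEST:
        return f"proposed: {suggestion}"
    undo_stack.append(suggestion)  # keep auto-apply reversible
    return f"applied: {suggestion}"
```

The design choice worth noting: the undo stack exists only in the higher-autonomy mode, which is exactly what lets cautious users graduate to it without a leap of faith.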
Audit trails. For high-stakes AI decisions, keep a record of what the AI recommended and what the user chose. This serves two purposes: it lets users review past decisions, and it gives your team data on how often users override the AI (which indicates trust calibration).
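A minimal sketch of both purposes, assuming each decision is logged as a (recommended, chosen) pair; the record shape and function name are illustrative:

```python
from dataclasses import dataclass

@dataclass
class AuditEntry:
    """One record of an AI recommendation and the user's final decision."""
    recommended: str
    chosen: str

def override_rate(log: list[AuditEntry]) -> float:
    """Fraction of decisions where the user rejected the AI's recommendation.

    A rough trust-calibration signal: a rate near 0 may mean over-reliance,
    a rate near 1 may mean the AI isn't earning trust at all.
    """
    if not log:
        return 0.0
    overrides = sum(1 for entry in log if entry.chosen != entry.recommended)
    return overrides / len(log)
```

The same log serves users (reviewing past decisions) and the product team (tracking how the override rate moves over time).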
The trust-stakes matrix
The amount of transparency and user control you need depends on two factors: the stakes of the AI's output and how visible that output is to the user.
Low stakes, low visibility (smart defaults, predictive caching): embed the AI invisibly. Users don't need to know or control it.
Low stakes, high visibility (email subject suggestions, meeting time proposals): show the suggestion with easy override. Minimal explanation needed.
High stakes, low visibility (fraud detection, risk scoring behind the scenes): provide transparency on request. Users should be able to see the AI's reasoning if they ask.
High stakes, high visibility (medical recommendations, financial advice, content for external audiences): full transparency required. Show reasoning, confidence, sources, and make override the default interaction pattern.
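The four quadrants above can be expressed as a simple lookup; the policy strings and names below are hypothetical summaries of the matrix, not a prescribed API:

```python
# Hypothetical encoding of the trust-stakes matrix as a design-policy lookup.
TRANSPARENCY_POLICY = {
    ("low", "low"):   "invisible: no explanation or control surfaced",
    ("low", "high"):  "suggestion with easy override; minimal explanation",
    ("high", "low"):  "transparency on request: reasoning available if asked",
    ("high", "high"): "full transparency: reasoning, confidence, sources, override by default",
}

def transparency_for(stakes: str, visibility: str) -> str:
    """Look up the required transparency level for a (stakes, visibility) pair."""
    return TRANSPARENCY_POLICY[(stakes, visibility)]
```

Treating the matrix as an explicit table like this makes it easy to audit a product's AI features: every feature should map to exactly one quadrant, and its UI should match that quadrant's policy.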
Practical exercise
Exercise: Trust audit
Choose an AI feature you interact with regularly. Evaluate it against the four trust principles.
- Does it show its reasoning? What explanation does it provide?
- How easy is it to override? Count the clicks or steps.
- Does it set honest expectations? What claims does it make about accuracy or completeness?
- Does it degrade gracefully? What happens when it can't produce a good result?
Rate each principle 1-5. Then redesign the weakest area: sketch or describe how you'd improve the trust design. Write up your audit and redesign as a brief case study (one page).