A complete template for conducting user interviews that actually lead to better design decisions
Why Six Users Changes Everything
Nielsen Norman Group's research shows that testing with five users typically uncovers about 85% of usability problems. Add one more for insurance, and you've got six interviews that cost less than your monthly software subscriptions but deliver insights worth tens of thousands in avoided mistakes.
Erika Hall puts it perfectly in Just Enough Research: "The goal of interviewing users is to learn about their actual behavior and motivations, not their opinions or desires." Six conversations, done right, reveal patterns that surveys with hundreds of responses often miss.
But here's where most businesses stumble—they treat interviews like surveys with a human face. They ask leading questions, fish for compliments, or worse, try to sell during research. This guide ensures you avoid those traps.
Part 1: Selecting Your Six Ideal Users
The Screening Framework
Your six participants should represent distinct user segments, not random volunteers. The Interaction Design Foundation's research on participant selection shows that, for qualitative work, strategic recruiting consistently beats statistical representation.
Creating Your Screening Criteria:
Start by defining three dimensions that matter for your specific research question. Don't use demographics unless they directly impact behavior. Instead, focus on:
Behavioral Segmentation: How do they currently solve the problem your product addresses? Are they using competitors, cobbled-together solutions, or nothing at all? You want variety here. If all six users are already loyal customers, you're missing critical perspectives.
Context of Use: When, where, and why do they engage with solutions like yours? The person checking your site during their commute has different needs than someone browsing from their office computer. Mix contexts across your six participants.
Experience Level: Include both novices who struggle with basic tasks and experts who push your product's limits. Nielsen Norman Group's research shows that expert users often develop workarounds for problems that novices can't articulate but deeply feel.
The Recruitment Process
Option 1: Direct Recruitment (Most Control, Lowest Cost)
Days 1-2: Building Your Pipeline
Start with recent customers, but don't stop there. Your customer database provides willing participants, but they're already invested in your solution. Balance them with prospects who chose competitors or abandoned your product.
Write your outreach message with radical honesty: "We're trying to understand how people really [solve problem], and we think we might be getting it wrong. Could we learn from your experience?"
Offer fair compensation—$50 for 45 minutes is standard for consumer products, $100-150 for B2B. Gift cards work better than cash for most contexts.
Days 3-4: Screening and Scheduling
Create a five-question screener that takes under two minutes to complete. Here's a template based on Erika Hall's approach:
- "How do you currently handle [problem your product solves]?" (Open text - reveals if they have the problem)
- "How often do you [relevant behavior]?" (Multiple choice - ensures regular engagement)
- "What's most frustrating about [current solution/problem area]?" (Open text - reveals pain points)
- "Which tools do you use for [related task]?" (Multiple choice with "other" option - shows tech comfort)
- "What would need to change for [problem area] to be less frustrating?" (Open text - reveals expectations)
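The selection logic behind this screener can be sketched in code: disqualify respondents with no real engagement, then fill a small quota per behavioral segment so your six participants span the variety described above. This is a minimal sketch; the segment labels, the quota of two per segment, and the "never" disqualifier are illustrative assumptions, not part of the template itself.

```python
# Minimal sketch: pick six screener respondents across behavioral segments.
# Segment names, quotas, and the disqualification rule are illustrative
# assumptions, not prescribed by the screener template.

QUOTA_PER_SEGMENT = 2  # goal: six participants across three segments

def select_participants(candidates, segments=("competitor", "workaround", "nothing")):
    """Pick up to two qualified candidates per behavioral segment.

    Each candidate is a dict with:
      - "segment": how they currently solve the problem (question 1)
      - "frequency": answer to "How often do you [relevant behavior]?" (question 2)
    """
    selected = []
    counts = {s: 0 for s in segments}
    for c in candidates:
        if c["frequency"] == "never":  # disqualifier: no regular engagement
            continue
        seg = c["segment"]
        if seg in counts and counts[seg] < QUOTA_PER_SEGMENT:
            selected.append(c)
            counts[seg] += 1
        if len(selected) == QUOTA_PER_SEGMENT * len(segments):
            break  # all six slots filled
    return selected

candidates = [
    {"name": "A", "segment": "competitor", "frequency": "weekly"},
    {"name": "B", "segment": "competitor", "frequency": "never"},
    {"name": "C", "segment": "workaround", "frequency": "daily"},
    {"name": "D", "segment": "nothing",    "frequency": "monthly"},
    {"name": "E", "segment": "competitor", "frequency": "daily"},
    {"name": "F", "segment": "workaround", "frequency": "weekly"},
    {"name": "G", "segment": "nothing",    "frequency": "weekly"},
]
chosen = select_participants(candidates)
print([c["name"] for c in chosen])  # B is skipped: "never" disqualifies
```

In practice you would run this over your form export rather than hand-typed dicts, but the point stands either way: decide your disqualifiers and segment quotas before responses arrive, not after.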
Schedule interviews across different times of day to catch various contexts. Morning slots often get hurried professionals fitting you in before work; afternoon and evening slots reach people in calmer settings who may reflect more freely.
Option 2: Remote Panel Services (Fastest, Most Expensive)
Using UserInterviews.com or Similar Platforms:
These services handle recruitment but cost $100-200 per participant plus your incentive. The speed might be worth it—you can have six qualified participants scheduled within 48 hours.
Setting Up Your Project: Write a detailed screener that the service will use. Be specific about disqualifiers. If you need people who've purchased online in the last month, say exactly that. Vague criteria like "regular internet users" waste money on irrelevant participants.
Request diverse demographics even if they don't directly matter. Different backgrounds bring different mental models, and that variety strengthens your findings.
Allow the service to over-recruit by 20-30%. They'll typically schedule 7-8 participants to ensure you get your six, accounting for no-shows.
Option 3: Community Recruitment (Best for Niche Audiences)
If your users gather in specific online or offline spaces, go there. LinkedIn groups, Reddit communities, professional associations, or local meetups can provide highly engaged participants.
The approach differs here. Instead of cold outreach, contribute value first. Answer questions, share insights, then mention your research need. Communities respond to members, not strangers seeking free labor.
Part 2: Crafting Your Interview Guide
The Question Architecture
Erika Hall's interviewing framework emphasizes story over opinion. Your questions should prompt narratives, not evaluations.
The Opening Gambit (5 minutes):
"Thanks for taking time to help us understand how people really deal with [problem area]. There are no right or wrong answers—we're just trying to learn from your experience.
Before we dive in, could you briefly describe your typical day and where [problem area] fits into it?"
This opening does three things: it sets a conversational tone, confirms they have relevant experience, and provides context for interpreting their later responses.
The Experience Deep-Dive (25 minutes):
These questions form your interview's core. Adapt them to your specific research needs:
Question 1: The Last Instance
"Tell me about the last time you needed to [core task]. Walk me through what happened from the moment you realized you needed to do this."
Why this works: Specific instances reveal actual behavior. People can't accurately predict what they'll do, but they can accurately recall what they did.
Follow-up prompts:
- "What happened next?"
- "What were you thinking at that moment?"
- "How did that make you feel?"
- "What else did you try?"
Question 2: The Breakdown
"Can you remember a time when [current solution] really didn't work for you? What was going on?"
Why this works: Failure points reveal unmet needs more clearly than success stories.
Follow-up prompts:
- "How did you work around that?"
- "Who did you ask for help?"
- "What would have prevented that situation?"
Question 3: The Comparison
"How does [your solution/problem area] compare to [alternative they've mentioned]?"
Why this works: Comparisons force specificity. Users can better articulate preferences when contrasting options.
Follow-up prompts:
- "When would you use one versus the other?"
- "What does [alternative] do better?"
- "What's missing from both?"
Question 4: The Workaround
"Have you found any tricks or shortcuts for making [task] easier?"
Why this works: Workarounds reveal both problems and the user's mental model of how things should work.
Follow-up prompts:
- "How did you figure that out?"
- "Do others do it the same way?"
- "What would make that unnecessary?"
Question 5: The Handoff
"If you had to explain [how to do task] to someone who's never done it before, how would you describe it?"
Why this works: Teaching forces users to articulate their mental model and identify confusing elements.
The Context Exploration (10 minutes):
"I'd like to understand the bigger picture of how this fits into your work/life."
- "What usually triggers your need to [task]?"
- "Who else is involved when you [task]?"
- "What happens if you don't [task] or it goes wrong?"
- "How do you know when you've successfully completed [task]?"
The Wrap-Up (5 minutes):
"We're almost done. Just a couple final questions:"
- "What haven't I asked about that you think I should know?"
- "If you had a magic wand and could change one thing about [problem area], what would it be and why?"
The Anti-Pattern Guide: Questions to Never Ask
Based on Hall's research and Nielsen Norman Group's guidelines, avoid these interview killers:
Don't Ask for Predictions:
- Wrong: "Would you use a feature that..."
- Right: "Tell me about a time when you needed..."
Don't Lead the Witness:
- Wrong: "Don't you think it would be better if..."
- Right: "How do you feel about..."
Don't Ask for Solutions:
- Wrong: "What features would you want?"
- Right: "What are you trying to accomplish when..."
Don't Accept Generalities:
- Wrong: "Do you usually..."
- Right: "The last time you..."
This guide continues with comprehensive sections on:
- Part 3: Conducting the Interview (In-person, Remote, and Third-party setups)
- Part 4: Interview Techniques That Get Truth (Active listening, probing techniques, managing difficult moments)
- Part 5: Synthesis and Pattern Recognition (Analysis frameworks, deliverables)
- Part 6: Making Research Stick (Socialization strategies, implementation bridges)
- Complete Interviewing Checklist with extensive detail on each step
- Common Pitfalls and Recovery Strategies
This guide is part of the Experience Helpdesk membership resources.