User Research Planning Basics: Just Enough Research for Real Results

A practical guide for business owners and managers who want to understand their customers without breaking the bank or derailing their development schedule

The Real Cost of Skipping Research

According to research from the Nielsen Norman Group, fixing a problem after development costs 100 times more than fixing it before implementation begins. Yet Baymard Institute's analysis of e-commerce sites found that the average large-scale site has 39 unique usability issues that could be discovered through basic user research.
The disconnect is clear: businesses spend thousands on development while skipping the hundreds it would take to validate their assumptions first.

Why "Just Enough" Is Actually Plenty

Erika Hall, in her book Just Enough Research, argues that user research doesn't require a research department or a six-figure budget. It requires curiosity and a willingness to be wrong.
The "just enough" philosophy matters because, as Hall points out, over-researching can be just as harmful as under-researching. Research without clear goals becomes an expensive delay tactic. The goal is actionable insights, not perfect data.
Nielsen Norman Group's foundational research demonstrates that testing with just five users uncovers 85% of usability problems. After eight participants, you're mostly hearing repetition. This finding revolutionized how companies approach user research—you don't need statistical significance to make meaningful improvements.
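The five-user figure comes from a simple probabilistic model (Nielsen and Landauer): if each test user independently exposes a given problem with probability p (about 0.31 in their data), then n users find roughly 1 - (1 - p)^n of the problems. A quick sketch of that curve, with the parameter value taken from their published estimate:

```python
# Nielsen & Landauer's problem-discovery model: the share of usability
# problems found by n test users, where p is the probability that a
# single user exposes any given problem (~0.31 in their data).
def share_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 8, 15):
    print(f"{n:2d} users -> {share_found(n):.0%} of problems found")
```

Note how steeply the curve flattens: the jump from five to eight users adds far less than the jump from one to three, which is why additional sessions mostly produce repetition.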

The Four Questions That Matter

Before starting any research, these questions need clear answers:
1. What decision will this research inform?
Research needs a specific decision attached. "Understanding users better" isn't actionable. "Choosing between live chat and improved documentation" is.
2. Who actually uses this thing?
Specificity matters. Nielsen Norman Group's research on personas shows that generic demographic descriptors lead to poor design decisions. Real user segments have specific contexts, goals, and constraints.
3. What are you willing to be wrong about?
As Hall emphasizes in Just Enough Research, if you're not prepared to discover your assumptions are incorrect, you're not doing research—you're seeking validation.
4. What will you do differently based on what you learn?
Research that doesn't lead to action is an academic exercise. Every research effort should have predetermined actions based on potential findings.

Your Minimum Viable Research Plan

This two-week framework fits into standard Agile sprints and requires minimal budget:

Week 1: Gathering Intelligence

Days 1-2: The Assumption Inventory
Document everything your team believes about your users. Mark each assumption by risk level—which ones, if wrong, would fundamentally change your approach? These become research priorities.
The Interaction Design Foundation recommends this assumption mapping as a critical first step in any research process, as it surfaces hidden biases and conflicting mental models within teams.
Days 3-4: Digital Ethnography
Observe where your customers gather online. Read their forum posts, social media discussions, and reviews. Document the language they use and problems they discuss.
Baymard's research methodology includes extensive review mining, finding that customers often articulate problems in reviews that they wouldn't mention in direct interviews.
Day 5: Support Ticket Analysis
Your customer service data is an untapped research goldmine. Analyze recent support tickets for patterns. What questions appear repeatedly? What language do customers use?
Nielsen Norman Group's studies show that support ticket analysis often reveals fundamental usability issues that quantitative analytics miss.
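One lightweight way to run the pattern pass on a ticket export is a word-frequency count, flagging terms that appear across three or more tickets. This is an illustrative sketch only; the stopword list, threshold, and sample tickets are placeholder assumptions, not part of any prescribed tool:

```python
from collections import Counter
import re

def recurring_terms(tickets, min_count=3):
    """Count words across ticket text; return those appearing in
    at least min_count tickets (a rough proxy for recurring issues)."""
    stopwords = {"the", "a", "an", "to", "and", "i", "my", "is", "it",
                 "of", "in", "on", "for", "that", "this", "not", "can"}
    counts = Counter()
    for ticket in tickets:
        # each ticket counts a word at most once (hence the set)
        words = set(re.findall(r"[a-z']+", ticket.lower())) - stopwords
        counts.update(words)
    return {w: c for w, c in counts.items() if c >= min_count}

tickets = [
    "I can't find the invoice download button",
    "Where do I download my invoice?",
    "Invoice download link is broken",
]
print(recurring_terms(tickets))  # flags 'invoice' and 'download'
```

A count like this won't replace reading the tickets, but it gives you a ranked list of themes to read against, and it surfaces the customers' own vocabulary for the next step.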

Week 2: Talking to Humans

Days 6-7: Recruit Your Research Participants
Based on Nielsen's research, you need five to eight people for meaningful insights. Recruit from recent customers, your user community, or through targeted social media posts.
Days 8-9: The Conversations
Hall recommends starting interviews with: "Tell me about the last time you tried to [core task your product addresses]." Then listen. Follow up with probing questions like "Can you walk me through what happened?" or "Why was that important?"
Record sessions when possible (with permission) for later analysis.
Day 10: The Synthesis
Look for patterns across interviews. Create a simple summary:
  • Three surprising findings
  • Three problems you can address immediately
  • Three questions requiring further research

Making Research Fit Your Agile Rhythm

The Interaction Design Foundation's research on Agile UX integration shows that parallel tracks work best:
During Sprint Planning:
Review findings from previous sprint's research to inform upcoming work.
During Development:
Conduct research for the next sprint while current features are being built.
During Sprint Review:
Present research findings alongside feature demos.
Continuous Discovery:
Schedule one customer conversation weekly. This creates a steady stream of insights without overwhelming the team.

The Tools You Actually Need

Based on industry best practices:
For Conversations:
  • Video conferencing (Zoom, Google Meet)
  • Recording device or app
  • Note-taking tools
For Analysis:
  • Spreadsheets for organizing findings
  • Collaborative tools (Miro, FigJam) for synthesis
  • Shared repository for findings
For Recruiting:
  • Email lists
  • Social media
  • Scheduling tools (Calendly or similar)

Addressing Common Objections

"Our users won't participate."
Nielsen Norman Group's research shows that appropriate incentives and clear value propositions dramatically increase participation rates.
"We already know what users want."
Baymard's database of over 71,000 usability findings shows that even UX experts' assumptions are wrong approximately 30% of the time.
"We don't have a UX researcher."
Hall's framework specifically addresses this—research is a skill anyone can learn, not a job title.
"Our competitors don't do research."
According to Forrester Research, customer-obsessed companies grow revenue 2.5 times faster than their peers.

Warning Signs of Poor Research Practice

Based on Hall's framework and Nielsen Norman Group's guidelines:
  • Researching to confirm existing beliefs rather than test them
  • Asking users to predict future behavior or design solutions
  • Treating research as a one-time checkpoint rather than ongoing practice
  • Researching without specific decisions to inform
  • Suppressing findings that challenge current plans

Your First Research Project

Start tomorrow with this template:
"Hi [Name], we're working to improve [product/service] and would value your perspective. Could we schedule a 20-minute conversation this week? We'll provide [appropriate compensation] for your time. When works best for you?"
Then listen. Don't defend or explain. Document what you hear.

The Evidence for Action

Nielsen Norman Group's research on the ROI of usability shows that every dollar spent on UX research returns between $2 and $100, with $10 as a typical return. The key isn't perfection—it's starting.
Baymard's analysis of thousands of e-commerce sites found that sites implementing research-based improvements see conversion rate increases averaging 35.26%.
These aren't edge cases. They're consistent findings across industries and company sizes.

Quick Reference: Your Research Planning Checklist

Before You Start:

□ Clear decision to be informed
Write down the specific business decision this research will influence. Frame it as an either/or choice or a clear action. Examples: "Should we invest in mobile optimization or desktop improvements?" or "Which checkout flow reduces abandonment?" If you can't articulate what you'll do differently based on findings, stop and clarify. The decision should be documented and shared with stakeholders before research begins.
□ Specific user group identified
Define your target participants beyond demographics. Document their relationship to your product (new users, returning customers, lapsed users), their context of use (mobile during commute, desktop at work), and their level of expertise. Create a simple recruiting screener with 3-5 qualifying questions. Write down exactly who you're excluding and why.
□ Willingness to be wrong
Document your current assumptions about the decision at hand. Share these assumptions with your team and explicitly state that research may contradict them. Get verbal or written agreement from decision-makers that they'll consider findings that challenge the current plan. If leadership isn't prepared to change course based on research, postpone until they are.
□ Commitment to act on findings
Create a simple action matrix before starting: "If we learn X, we will do Y." Get stakeholder agreement on at least three potential actions based on likely findings. Schedule a meeting within 48 hours of research completion to review findings and commit to next steps. Without this commitment, you're doing research theater, not research.

Week 1 Activities:

□ Document all team assumptions
Schedule a 60-minute meeting with everyone who touches the user experience. Have each person silently write their assumptions about users on sticky notes (one assumption per note). Group similar assumptions together. Vote on which assumptions, if wrong, would most impact your current plans. Document everything in a shared spreadsheet with columns for: Assumption, Source, Impact if Wrong, Priority to Test.
□ Observe users in their natural habitat online
Identify 3-5 online spaces where your target users discuss problems your product addresses. This could be Reddit communities, Facebook groups, LinkedIn discussions, or specialized forums. Spend 2 hours reading without participating. Document: specific language they use, problems they mention repeatedly, solutions they've tried, and frustrations expressed. Create a simple spreadsheet tracking recurring themes and actual quotes.
□ Review support tickets and customer feedback
Export your last 50-100 support tickets or customer service interactions. If you don't have access, request a summary from your support team. Categorize issues by type and frequency. Look for problems that appear 3+ times. Document the exact language customers use to describe issues. Pay special attention to tickets that required multiple responses to resolve—these often indicate UX problems.
□ Identify top three research priorities
Based on your assumption inventory, online observations, and support ticket analysis, identify three specific questions that must be answered to make your decision. Frame these as learning goals, not validations. "Understand why users abandon at step 3" not "Confirm step 3 is confusing." These priorities will guide your interview questions and analysis focus.

Week 2 Activities:

□ Recruit 5-8 participants
Write a recruiting message that explains the purpose (improving the product), time commitment (20-30 minutes), and compensation ($25 gift card is standard for this length). Send to recent customers who match your target profile. Use your customer database, email list, or social media. Screen respondents with 3-5 questions to ensure they match your target user criteria. Schedule sessions with at least 24 hours notice. Always recruit 1-2 extra participants to account for no-shows.
□ Conduct conversational interviews
Prepare a discussion guide with 5-7 open-ended questions, starting broad and getting specific. Begin each session with "Tell me about the last time you [core task]." Use follow-up prompts like "What happened next?" and "Why was that important?" Record sessions (with permission) using your meeting platform's built-in recorder. Take notes on surprising moments and emotional responses. Avoid asking what users want—focus on understanding their current behavior and problems.
□ Look for patterns, not statistics
After all interviews, review your notes and recordings. Create a simple affinity map: write key observations on individual notes, then group similar findings. Look for problems mentioned by 3+ participants. Document surprising findings that challenge assumptions. Note emotional responses and frustration points. Don't calculate percentages—with 5-8 participants, patterns matter more than statistics.
□ Create one-page findings summary
Use a simple template with four sections: (1) Key Findings - three bullet points of what you learned, (2) Immediate Actions - three things you can fix now based on clear problems identified, (3) Further Questions - what you need to research more, and (4) Supporting Quotes - 2-3 direct user quotes that illustrate key findings. Share this with all stakeholders within 24 hours of completing analysis.

Ongoing Habits:

□ One customer conversation per week
Block 30 minutes weekly on your calendar for customer research. Rotate between different methods: one week might be a user interview, the next reviewing support tickets, the third observing online discussions. Create a simple tracking spreadsheet with date, method, key learning, and action taken. After 12 weeks, you'll have substantial user insight. Set a recurring calendar reminder to recruit participants 3 days before each session.
□ Research findings in sprint reviews
Add a standing agenda item to sprint reviews: "User insights from this sprint." Prepare 2-3 slides or bullet points sharing what you learned about users during the sprint. Connect findings directly to work completed or upcoming. Keep it brief—under 5 minutes. Focus on insights that influence upcoming sprint planning. Document these insights in a shared repository that the team can reference.
□ Parallel research and development
While the team builds features for the current sprint, designate someone to research questions for the next sprint. This person spends 2-3 hours during the sprint on research activities. They present preliminary findings during sprint planning to inform the next cycle's work. This prevents the "we don't have time for research" problem by making research concurrent, not sequential.
□ Document and share all findings
Create a simple research repository—even a shared Google Drive folder works. For each research activity, save: the research question, method used, participant details (anonymized), raw notes or recordings, and the one-page summary. Name files consistently: "YYYY-MM-DD_Method_Topic". Share the repository link with your entire team. Update it within 48 hours of each research session. Review the repository quarterly to identify larger patterns.
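The "YYYY-MM-DD_Method_Topic" convention above is easy to enforce with a small helper so filenames sort chronologically and stay searchable. The function name and slug rules here are illustrative assumptions, not a required tool:

```python
from datetime import date
import re

def research_filename(method, topic, day=None):
    """Build a 'YYYY-MM-DD_Method_Topic' filename so repository
    entries sort chronologically and stay searchable."""
    day = day or date.today()
    # collapse anything that isn't a letter or digit into hyphens
    slug = lambda s: re.sub(r"[^A-Za-z0-9]+", "-", s.strip()).strip("-")
    return f"{day.isoformat()}_{slug(method)}_{slug(topic)}"

print(research_filename("User Interview", "checkout abandonment",
                        day=date(2024, 3, 14)))
# -> 2024-03-14_User-Interview_checkout-abandonment
```

Even a trivial helper like this pays off when several people add files to the shared folder, because it removes the temptation to improvise a format.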
Remember: Perfect research that never happens is worth less than imperfect research that actually gets done. Start small, start now, and let your customers teach you what they really need.

This guide is part of the Experience Helpdesk membership resources. For more practical guides and coaching on delivering exceptional user experiences without enterprise resources, visit our member portal.