Survey Question Bank: The Complete UX Research Template

350+ battle-tested questions that actually generate actionable insights—not vanity metrics

The Million-Dollar Question Nobody Asks

I once watched a startup burn through $2 million building features their customers explicitly said they wanted in surveys. Six months after launch, usage was near zero. When we dug deeper with actual behavioral observation, we discovered the truth: customers had told them what sounded good, not what they'd actually use.
Here's what Erika Hall hammers home in Just Enough Research: Surveys are terrible at predicting future behavior. But they're excellent at understanding current behavior, measuring satisfaction with existing experiences, and quantifying known issues.
The difference between a survey that generates real insights and one that produces expensive fiction? The questions you ask and how you ask them.

When to Use Surveys vs. Interviews

Before you copy-paste these questions into SurveyMonkey, stop. Surveys answer "how many." Interviews answer "why." You need both, but in the right order.
Use surveys when:
  • You've already done qualitative research and need to validate findings at scale
  • You're measuring satisfaction with an existing experience
  • You need to prioritize a list of known problems
  • You're tracking changes over time
  • You have more than 100 potential respondents
Skip surveys when:
  • You're exploring new territory
  • You need to understand motivations
  • You're testing concepts or prototypes
  • You have fewer than 30 respondents
  • You don't have a plan for acting on the results
Nielsen Norman Group's research shows that mixing methods—starting with interviews, then validating with surveys—increases the accuracy of findings by 60%. Start qualitative, then go quantitative.

The Framework: Four Types of Questions That Matter

Type 1: Behavioral Questions (What People Actually Do)

These questions focus on observable actions, not intentions. They're your most reliable data source because past behavior predicts future behavior better than any stated preference.
Frequency and Usage Patterns
"In the past week, how many times did you [specific action]?"
Why this works: Specific timeframes force accuracy. "Past week" beats "typically" every time. People can remember seven days. They can't accurately report "typical" anything.
How to implement: Replace [specific action] with the exact behavior you're studying. "Visit our website" is too vague. "Check order status on our website" is specific. Include options: 0 times, 1-2 times, 3-5 times, 6-10 times, More than 10 times. Always include zero—it's often your most important data point.
Example variations:
  • "In the past month, how many times did you abandon a purchase due to shipping costs?"
  • "Yesterday, how many times did you use our search function?"
  • "In your last visit, how many pages did you view before finding what you needed?"
"When did you last [specific action]?"
Why this works: Recency indicates relevance. Someone who used your service yesterday has different insights than someone who last used it six months ago.
How to implement: Provide ranges that make sense for your context: Today, Yesterday, Within the past week, Within the past month, Within the past 3 months, Within the past year, More than a year ago, Never. That "Never" option is crucial—it identifies people who shouldn't be answering follow-up questions.
Example variations:
  • "When did you last make a purchase from our website?"
  • "When did you last contact customer support?"
  • "When did you last recommend our service to someone else?"
"Which of the following have you done in the past [timeframe]?" [Multiple choice]
Why this works: Reveals actual usage patterns across features. Shows what people really use versus what you think they use.
How to implement: List 5-10 specific actions. Keep descriptions short and clear. Randomize order to avoid position bias. Always include "None of the above" and "Other" with a text field. Track the "Other" responses—they're gold mines for discovering unexpected use cases.
Example variations:
  • "Which of these features have you used in the past month?" [List features]
  • "Which of these tasks have you completed on our website?" [List tasks]
  • "Which of these problems have you experienced?" [List known issues]
Method and Channel Preferences
"How did you most recently [complete task]?"
Why this works: Reveals actual channel usage, not stated preferences. What people did last time predicts what they'll do next time.
How to implement: List all possible methods/channels. Include "Other" option. Follow up with "Why did you choose that method?" if you have room. The combination tells you not just what they did, but what drove the decision.
Example variations:
  • "How did you most recently contact us for help?"
  • "How did you most recently make a purchase from us?"
  • "How did you most recently find information about our products?"
"What device did you primarily use for [specific action]?"
Why this works: Device usage drives design decisions. Mobile users have different needs than desktop users.
How to implement: Keep it simple: Smartphone, Tablet, Laptop/Desktop computer, Other. If you need more detail, ask about operating system in a follow-up question. Don't assume—we've seen B2B sites with 40% mobile traffic that were barely functional on phones.

Type 2: Attitude Questions (How People Feel)

These measure satisfaction, perception, and emotional response. Less reliable than behavioral questions but essential for understanding the "why" behind the "what."
Satisfaction Metrics
"How satisfied are you with [specific aspect]?"
Why this works: Simple, direct, comparable over time. But only valuable when asking about specific, recent experiences.
How to implement: Use a 5-point scale: Very dissatisfied, Somewhat dissatisfied, Neither satisfied nor dissatisfied, Somewhat satisfied, Very satisfied. Use an odd number of points so people can pick neutral. Follow immediately with "What's the primary reason for your rating?" (open text). The qualitative follow-up is where the real insights live.
Example variations:
  • "How satisfied are you with the checkout process?"
  • "How satisfied are you with our response time?"
  • "How satisfied are you with the search results?"
"How likely are you to recommend [product/service] to a friend or colleague?" (0-10 scale)
Why this works: The Net Promoter Score question. Tracks loyalty and predicts growth. Scores of 9-10 are promoters, 7-8 are passive, 0-6 are detractors.
How to implement: Always follow with "What's the primary reason for your score?" The number tells you what, the reason tells you why. Segment responses by score to identify patterns. Detractors often give the most actionable feedback.
Baymard Institute's research note: NPS works best when measuring overall experience, not specific features. For features, use satisfaction scales instead.
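The arithmetic is simple but worth keeping consistent across survey waves: NPS is the percentage of promoters minus the percentage of detractors, reported as a whole number from -100 to 100. A minimal sketch:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps([10, 9, 8, 7, 6, 3, 10, 9, 5, 8]))  # 4 promoters, 3 detractors -> 10
```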
"How easy was it to [complete specific task]?"
Why this works: The Customer Effort Score (CES). Research shows effort is a stronger predictor of loyalty than satisfaction.
How to implement: Use a 7-point scale from "Very difficult" to "Very easy." Follow with "What made it [easy/difficult]?" based on their score. CES beats satisfaction scores for transactional experiences.
Comparative Questions
"Compared to [alternative], how would you rate [our solution]?"
Why this works: Context matters. "Good" means nothing. "Better than the competition" means everything.
How to implement: Much worse, Somewhat worse, About the same, Somewhat better, Much better, Haven't used [alternative]. That last option is critical—don't force uninformed comparisons.
Example variations:
  • "Compared to other websites in our industry, how would you rate our site's ease of use?"
  • "Compared to six months ago, how would you rate our customer service?"
  • "Compared to your expectations, how was your experience?"

Type 3: Demographic Questions (Who People Are)

Place these at the END of your survey. Starting with demographics feels like an interrogation and increases abandonment by up to 30% according to Nielsen Norman Group research.
Core Demographics (Only If Relevant)
  • Age ranges (not specific birth years)
  • Geographic location (as specific as needed, no more)
  • Role/Industry (if B2B)
  • Experience level with category
  • Frequency of use
Skip gender unless it directly impacts your analysis. Skip income unless you're doing pricing research. Every demographic question should have a clear purpose in your analysis plan.
Behavioral Demographics
These often matter more than traditional demographics:
  • "How long have you been using products like ours?"
  • "Which best describes your role in purchase decisions?"
  • "How would you describe your technical comfort level?"
  • "Which best describes your primary goal when using our service?"

Type 4: Open-Ended Questions (The Goldmines)

Use sparingly—analysis is time-intensive—but never skip entirely. These often surface insights you didn't know to look for.
The Essential Open-Ended Questions
"What's the one thing we could do to improve your experience?"
Why this works: Forces prioritization. "One thing" prevents laundry lists.
How to implement: Place after rating questions. Keep the text box large—short boxes get short answers. Don't require responses—forced answers are usually garbage.
"What nearly stopped you from [completing action] today?"
Why this works: Surfaces friction points from successful users. These people succeeded despite problems—imagine how many failed.
How to implement: Ask immediately after task completion. Memory degrades fast.
"What's missing from this page?"
Why this works: Identifies gaps in information architecture. Users know what they need better than you do.
How to implement: Page-specific surveys work better than general ones. Use exit-intent or time-based triggers.
"If you could no longer use [our product/service], what would you use instead?"
Why this works: Reveals actual alternatives and competitive set. Often surprises—your competition might not be who you think.

Question Templates by Research Goal

Goal: Understand Current User Behavior

The Core Battery:
  1. "In the past three months, how often did you [use category/complete task]?"
      • Multiple times per day
      • Daily
      • Several times per week
      • Weekly
      • Several times per month
      • Monthly
      • Less than monthly
      • This was my first time
  2. "When you last [used category/completed task], what were you trying to accomplish?"
      [Open text]
  3. "Which of these best describes your primary role when [using category]?"
      [Role-specific options relevant to your context]
  4. "What's your biggest frustration with [category/current solutions]?"
      [Open text]
  5. "How do you currently solve [problem] when our solution isn't available?"
      [Open text]

Goal: Evaluate Existing Experience

The Experience Audit Battery:
  1. "Think about your most recent visit to our [website/app]. What were you trying to do?"
      [Multiple choice with "Other" option]
  2. "Were you able to complete what you came to do?"
      • Yes, easily
      • Yes, but with some difficulty
      • Partially
      • No
  3. [If not "Yes, easily"] "What made it difficult?"
      [Open text]
  4. "How would you rate the following aspects of your experience?" [Grid question]
      • Finding what I needed
      • Understanding the information
      • Completing my task
      • Speed/performance
      • Visual design
      [Scale: Poor to Excellent]
  5. "What specific improvement would make the biggest difference to your experience?"
      [Open text]

Goal: Prioritize Problems

The Problem Priority Battery:
  1. "Which of these issues have you experienced in the past month?" [Check all that apply]
      [List of known issues]
  2. "Of the issues you selected, which caused the most frustration?"
      [Radio buttons with same list]
  3. "How much did this issue impact your ability to [achieve goal]?"
      • Prevented me completely
      • Significant negative impact
      • Moderate negative impact
      • Minor inconvenience
      • No real impact
  4. "Have you found a workaround for this issue?"
      • Yes [Follow with: "Please describe your workaround"]
      • No
      • Sometimes
  5. "If we could fix three things, what should they be?"
      [Three separate text fields]

Goal: Test New Concepts

The Concept Validation Battery:
  1. "Based on this description, how interested would you be in [concept]?"
      • Very interested
      • Somewhat interested
      • Neutral
      • Somewhat uninterested
      • Very uninterested
  2. "What specifically appeals to you about this concept?"
      [Open text]
  3. "What concerns do you have about this concept?"
      [Open text]
  4. "How would this compare to your current solution?"
      • Much better
      • Somewhat better
      • About the same
      • Somewhat worse
      • Much worse
      • I don't have a current solution
  5. "What would need to be true for you to actually use this?"
      [Open text]

Goal: Measure Task Success

The Task Completion Battery:
  1. "What task did you come to complete today?"
      [Multiple choice based on your top tasks]
  2. "Were you able to complete this task?"
      • Yes, completely
      • Partially
      • No
  3. "How many minutes did it take to complete (or attempt) this task?"
      • Less than 1 minute
      • 1-2 minutes
      • 3-5 minutes
      • 6-10 minutes
      • More than 10 minutes
  4. "How would you rate the difficulty of completing this task?"
      [1-7 scale from Very Difficult to Very Easy]
  5. "What would have made this task easier?"
      [Open text]

Advanced Techniques That Actually Work

The Branching Logic Strategy

Don't show everyone every question. Use their answers to customize the path.
Example Flow:
Q1: "Have you made a purchase from us in the past 30 days?"
  • If Yes → "How satisfied were you with the checkout process?"
  • If No → "What stopped you from making a purchase?"
This approach increases completion rates by 20% because people only see relevant questions.
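Any decent survey tool handles this with built-in skip logic, but if you're prototyping the flow yourself, it reduces to a lookup from answers to next questions. A sketch with hypothetical question IDs:

```python
# Each question maps answers to the next question ID (None ends the survey).
flow = {
    "recent_purchase": {
        "question": "Have you made a purchase from us in the past 30 days?",
        "next": {"Yes": "checkout_csat", "No": "purchase_blocker"},
    },
    "checkout_csat": {
        "question": "How satisfied were you with the checkout process?",
        "next": {},  # terminal
    },
    "purchase_blocker": {
        "question": "What stopped you from making a purchase?",
        "next": {},  # terminal
    },
}

def next_question(current_id, answer):
    """Look up the follow-up question; None means the path ends here."""
    return flow[current_id]["next"].get(answer)

print(next_question("recent_purchase", "Yes"))  # -> checkout_csat
```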

The Progressive Disclosure Method

Start with one simple question. Based on engagement, reveal more.
Micro-survey approach:
  • Show 1 question initially
  • If they answer, show 2 more
  • If they answer those, show final 2
  • Never show more than 5 total
Interaction Design Foundation research shows this can double response rates compared to showing all questions upfront.
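If you're building the micro-survey widget yourself, the reveal logic is a staged batching rule: serve questions in batches of 1, 2, and 2, and stop serving as soon as the respondent stops answering. A minimal sketch (question texts are placeholders):

```python
questions = ["Q1: ...", "Q2: ...", "Q3: ...", "Q4: ...", "Q5: ..."]
batches = [1, 2, 2]  # reveal sizes; total never exceeds 5

def next_batch(answered: int) -> list:
    """Return the next batch to show, or [] once all batches are done."""
    shown = 0
    for size in batches:
        if answered < shown + size:
            return questions[shown:shown + size]
        shown += size
    return []

print(next_batch(0))  # the single opening question
print(next_batch(1))  # two more
print(next_batch(3))  # the final two
print(next_batch(5))  # [] -> done
```

The widget requests a new batch only after the current one is answered, so a respondent who disengages early never sees the remaining questions.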

The Behavioral Anchor Technique

Always anchor attitude questions in specific behaviors:
Wrong: "How do you feel about our customer service?"
Right: "Think about your last interaction with our customer service. How satisfied were you?"
The specific anchor improves response accuracy by 40% according to research from ConversionXL.

The Anti-Pattern List: Questions That Generate Garbage

Never ask these questions:
"Would you use [hypothetical feature]?"
Everyone says yes to free features. Means nothing. Instead, ask about current workarounds and pain points.
"How much would you pay for [product]?"
Pricing research requires sophisticated methodology. Simple survey questions produce fantasy prices. Use Van Westendorp's Price Sensitivity Meter if you must do pricing research; a simplified sketch of that calculation follows this list.
"On a scale of 1-10, how innovative is our company?"
Vague attributes generate meaningless data. What does "innovative" even mean? Be specific about behaviors and outcomes.
"Do you think [leading statement]?"
"Don't you think our new design is cleaner?" biases responses. Keep questions neutral.
"How often do you typically...?"
"Typically" is cognitive quicksand. People can't accurately report typical behavior. Ask about specific recent instances.
Matrix questions with more than 7 rows
Cognitive overload leads to pattern responding (straight-lining). Break into multiple questions.
Double-barreled questions
"How satisfied are you with our price and quality?" asks two things. Always separate.

Implementation Checklist

Before Writing Questions:
Define your research questions
What specific decisions will this survey inform? Write them down. If you can't name the decision, stop.
Determine your analysis plan
How will you analyze each question? What comparisons will you make? Plan this BEFORE collecting data.
Calculate sample size needs
For basic insights: 30-50 responses. For statistical significance: Use a sample size calculator. For segmentation: 30+ per segment minimum.
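"Use a sample size calculator" hides one formula worth knowing. For estimating a proportion, the standard calculation is n = z²·p(1−p)/e², with a finite-population correction when your audience is small. A sketch assuming 95% confidence (z ≈ 1.96), a 5% margin of error, and the worst-case p = 0.5:

```python
import math

def sample_size(margin_of_error=0.05, confidence_z=1.96, p=0.5, population=None):
    """Responses required to estimate a proportion within +/- margin_of_error."""
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population:  # finite-population correction for small audiences
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

print(sample_size())                # 385 for a large population
print(sample_size(population=500))  # 218 when you only have 500 users
```

Divide the result by your expected response rate to size the send: 385 completes at a 20% email response rate means roughly 1,925 invitations.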
Choose your distribution method
Email: 20-30% response rate typical. Website intercept: 4-6% typical. In-app: 10-15% typical. Customer database: 10-20% typical.
While Writing Questions:
Start with behavior, end with demographics
Engagement drops throughout surveys. Get the important behavioral data first.
Keep surveys under 10 questions
Nielsen Norman Group research: Completion rates drop 10% for every question past 10.
Test on 5 people first
They'll find 85% of problems with question wording and flow.
Include progress indicators
"Question 3 of 8" reduces abandonment by 25%.
Mobile-optimize everything
40-60% of survey responses now come from mobile devices. Test on actual phones.
After Data Collection:
Clean your data first
Remove responses that (see the sketch after this list):
  • Completed in less than 30% of median time (speeders)
  • Show obvious patterns (straight-lining)
  • Include profanity or nonsense in open ends
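Here is that sketch: a minimal pandas version of the first two checks, assuming a hypothetical responses DataFrame with a duration_seconds column and rating columns q1-q4. (Screening open ends for profanity and nonsense is usually a manual pass.)

```python
import pandas as pd

df = pd.DataFrame({
    "duration_seconds": [180, 45, 200, 30, 150],
    "q1": [4, 3, 5, 3, 2], "q2": [4, 3, 2, 3, 4],
    "q3": [5, 3, 4, 3, 1], "q4": [4, 3, 5, 3, 2],
})

# Speeders: completed in under 30% of the median time.
speeder = df["duration_seconds"] < 0.3 * df["duration_seconds"].median()

# Straight-liners: identical answers across every rating question.
rating_cols = ["q1", "q2", "q3", "q4"]
straight = df[rating_cols].nunique(axis=1) == 1

clean = df[~(speeder | straight)]
print(f"Kept {len(clean)} of {len(df)} responses")
```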
Segment before summarizing
New vs. returning users often have opposite perspectives. Averaging them together hides insights.
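In analysis terms, that just means grouping before you average. A tiny illustration with hypothetical data, where a blended mean of 3.5 hides two very different experiences:

```python
import pandas as pd

responses = pd.DataFrame({
    "segment": ["new", "new", "returning", "returning"],
    "satisfaction": [2, 3, 5, 4],
})
print(responses["satisfaction"].mean())                     # 3.5 hides the split
print(responses.groupby("segment")["satisfaction"].mean())  # new 2.5 vs returning 4.5
```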
Code open-ended responses
Group similar themes. Count frequencies. The patterns matter more than individual responses.
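For a first pass before careful manual coding, even a crude keyword-to-theme dictionary surfaces the frequency patterns. A sketch with hypothetical themes and responses:

```python
from collections import Counter

# Hypothetical theme dictionary: keyword -> theme label.
themes = {
    "slow": "performance", "loading": "performance",
    "price": "pricing", "expensive": "pricing",
    "confusing": "navigation", "find": "navigation",
}

answers = [
    "Pages are slow to load",
    "Too expensive for what it does",
    "Couldn't find the export button",
    "Slow and confusing",
]

counts = Counter(theme for a in answers
                 for kw, theme in themes.items() if kw in a.lower())
print(counts.most_common())  # frequency per theme
```

Keyword matching misclassifies plenty; treat the output as a starting point for manual review, not a final tally.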
Create action items, not reports
Every insight should generate a specific action with an owner and deadline.

The Tools That Don't Suck

For Quick and Dirty:
  • Google Forms (Free, basic, works)
  • Typeform (Pretty, engaging, higher completion rates)
  • Microsoft Forms (If you're already in that ecosystem)
For Serious Research:
  • Qualtrics (Enterprise-grade, expensive, powerful)
  • SurveyMonkey (Middle ground, good analysis tools)
  • Alchemer, formerly SurveyGizmo (Advanced logic, reasonable price)
For On-Site Surveys:
  • Hotjar (Includes heatmaps and recordings)
  • Qualaroo (Sophisticated targeting)
  • UseProof (Social proof + surveys)
For User Recruitment:
  • User Research Recruiter groups on LinkedIn

The 90-Day Survey Calendar

Month 1: Baseline
Run your Experience Audit Battery. Establish benchmarks for satisfaction, task success, and problem frequency.
Month 2: Deep Dive
Focus on your biggest problem area from Month 1. Run targeted surveys on that specific issue. Include both problem magnitude and solution preference questions.
Month 3: Validation
Test solutions with Concept Validation Battery. Measure improvements against Month 1 baselines.
Then repeat.
Consistent measurement beats perfect measurement. The same decent survey run quarterly teaches you more than one perfect survey run once.

Your First Survey Action Plan

Stop planning. Start doing. Here's your next seven days:
Day 1: Pick one research question from your backlog. What decision needs data?
Day 2: Choose 5-7 questions from this bank that address your research question.
Day 3: Test your survey on three colleagues. Fix the confusion points.
Day 4: Set up your survey tool and test on mobile.
Day 5: Send to your first 50 users.
Days 6-7: Analyze results and create three specific action items.
That's it. You'll learn more from one completed survey than from six months of planning the perfect research study.

The Hard Truth About Surveys

Here's what nobody tells you: Most survey data gets ignored. It sits in spreadsheets and slide decks, gathering digital dust while teams continue making decisions based on opinions and office politics.
The difference between surveys that drive change and surveys that drive cynicism? It's not the questions you ask. It's what you do with the answers.
Every question in this bank has been tested, refined, and proven to generate actionable insights. But they're worthless if you're not prepared to act on what you learn. Even if what you learn proves you wrong. Especially then.
So before you send that survey, ask yourself the question that matters most: Are you ready to change based on what you discover?
If yes, these questions will transform how you understand your users.
If no, save everyone the time.
