Article type: method piece / interview craft guide
The one thing this piece is trying to say
After many interviews, teams are left with a familiar feeling:
the conversation was pleasant, the notes are extensive, and the recording sounds lively.
Yet when it is time to make a decision, the usable evidence is thinner than everyone expected.
Often the problem is not that the participant was inarticulate. The problem is that the interview guide itself was weak.
- the questions sounded more like a survey
- the session narrowed too early
- there were too many hypotheticals
- “clarifying” questions quietly smuggled in interpretations
- multiple questions were bundled into one
- prompts were vague enough to mean almost anything afterwards
So an interview guide is not just an ordered list of prompts.
Its real job is this:
help you surface evidence that can be analysed, compared, and used for judgement without steering participants into your own assumptions.
Start by dropping a common misunderstanding: an interview guide is not a question bank; it is evidence design
If you treat the guide as a neat checklist of things to ask, it quickly becomes something like this:
- 2 warm-up questions
- 3 pain-point questions
- 4 feature questions
- 2 wrap-up questions
It looks complete on paper, but it tends to fall apart in the room.
Research is not about “getting through the questions”. It is about helping participants surface relevant experiences, context, decision processes, workarounds, emotions, and constraints.
So when I write an interview guide, I do not start by brainstorming questions. I start by working through three layers:
- what exactly are we trying to learn
- what kind of answer would count as evidence
- what sequence is most likely to surface that evidence cleanly
That is what keeps the guide from becoming a pretty but empty prompt list.
Begin with the research objective, not with “what should we ask?”
Weak interview guides often begin with vague goals such as:
- let’s see what people think
- let’s understand user needs
- let’s ask whether they like the feature
Objectives like that are too soft, and the guide quickly drifts with them.
More useful objectives tend to sound like this instead:
- understand where new users begin to lose confidence during their first attempt to complete a task
- learn the context, triggers, and alternatives around the last abandoned booking
- clarify how people compare listings and which pieces of information affect the final decision
- understand how operations staff currently handle exception cases with workarounds
Notice what changed. These objectives are about behaviour and judgement in a particular slice of reality, not about gathering general opinions.
The main design principle: go broad before narrow, story before judgement
This is one of the most valuable principles to preserve.
Do not open an interview by charging straight into narrow questions.
A steadier sequence is usually:
- get the participant back into context
- ask for a recent, concrete, specific episode
- let them tell the story of what happened
- gradually narrow into decision points, emotions, friction, and workarounds
- only then move into more targeted clarifying questions
Why does that matter?
Because if you narrow too early, participants start responding to your framing rather than revealing their own.
Example
Instead of beginning with:
- do you find comparison difficult?
- would you want the system to recommend the best listing for you?
a better opening is usually closer to:
- talk me through the last time you booked accommodation
- when you compared listings last time, what did you do step by step?
- what information did you look at before deciding?
- was there a point where you began to hesitate or considered giving up?
That kind of prompt produces process, language, and context instead of prompted opinions.
Ask fewer hypotheticals and more about recent behaviour
This is one of the most common traps in user interviews.
PMs naturally ask things like:
- would you use this if we built it?
- would one-click comparison be useful?
- if we added more filters, would you be more likely to buy?
These questions feel natural, but the evidence they produce is often weak.
That is not because participants are dishonest. It is because people are poor at predicting their own behaviour in imagined future situations.
So instead of leaning on hypotheticals, I would much rather ask:
- how did you compare options last time?
- how did you eventually make that decision?
- did you create any workaround yourself?
- what did you do instead after giving up?
Hypotheticals are not forbidden, but they should not be the main course.
The main course should still be recent behaviour and specific incidents.
Watch out for these common question types that quietly contaminate answers
1. Leading questions
For example:
- did that part of the flow feel confusing?
- that reminder feature would be helpful, wouldn’t it?
- would you prefer more transparent pricing?
These are not neutral prompts. They imply a direction.
2. Clarifying questions that smuggle in interpretation
Sometimes you think you are simply checking understanding when in fact you are handing the participant a conclusion.
For example:
- so you abandoned because you did not trust the platform?
- so what you really wanted was a simpler interface?
- so the main problem was lack of pricing transparency?
These are dangerous because participants often accommodate the framing rather than challenge it.
3. Compound questions
Such as:
- did you abandon because the price was too high, the information was messy, or the flow was too complicated?
- how do you normally search, compare, plan, and track your trip?
Once one question contains three or four others, the answer usually becomes flattened and partial.
4. Ambiguous questions
For example:
- how do you usually use it?
- what do you think of the experience?
- what are your thoughts on this feature?
These are so broad that the participant has to guess which slice of experience you are after.
Open questions should carry the weight; closed questions should add precision
My rule here is simple.
If you need the participant to organise a story, reveal context, and show what mattered to them, start with open-ended prompts.
If you merely need to pin down timing, frequency, role, or whether something happened, closed questions can be useful follow-up tools.
Open prompts tend to sound like this
- tell me about the last time…
- walk me through what you did
- what did you look at first, and what happened next?
- where did things start to feel difficult?
- how did you decide whether to continue or stop?
Closed questions are better for detail
- was that last week?
- was that the first time it happened?
- did you decide alone or with someone else?
- were you on a phone or a computer?
Closed questions are not bad.
But when they dominate the opening of the session, the interview starts to feel like an oral questionnaire.
How I usually structure an interview guide
Every study is different, of course, but I often use this rhythm.
1. Warm-up
Not just polite small talk. The point is to ease the participant back into a relevant context.
2. Context
Understand role, environment, task, and frequency before diving straight into pain.
3. Recent episode or critical incident
This is usually the most important section.
Ask for the last time, the most memorable time, or the most difficult recent time.
4. Decision points, frictions, and workarounds
This is where you start digging into turning points, hesitations, alternatives, and coping behaviour.
5. Clarifying detail
Only now do more targeted follow-up questions become useful.
6. Light wrap-up
Check whether anything important was missed and allow the participant to add context.
Good probing is not just asking “why” again and again
Many novice guides are filled with “why” questions.
In practice, repeated “why” often pushes participants into post-rationalisation or makes the conversation feel like an interrogation.
I tend to rely more on probes like these:
- can you tell me a bit more about that moment?
- what did you do first?
- what happened after that?
- how did you decide whether to continue?
- what other options were you considering?
- was that typical for you, or unusual?
These prompts help people unpack the experience rather than force them into instant explanations.
Do not smuggle solutioning into a discovery interview
This is another trap PMs fall into very easily.
A session that is meant to understand the problem gradually starts to ask things like:
- what if we built it this way?
- would you prefer this version?
- which of these features would you choose?
That is not always forbidden.
But once you start doing it too early, the participant stops recalling lived experience and starts reviewing your ideas.
Those are completely different forms of evidence.
Problem solving and design thinking deserve a series of their own, so it is worth stopping deliberately here.
This piece should remain about problem understanding, not solution evaluation.
An interview guide does not need to be read out verbatim
This matters a great deal.
The value of the guide is not that you follow it like a script. It is that you know:
- what each section is trying to validate
- which questions are essential for evidence
- which prompts are optional probes
- what can be skipped if it emerges naturally
- how to follow a rich thread without losing the study
That is why a good guide is better thought of as an evidence map with rhythm, rather than as a script.
When interviews should not be the main method
Even though this piece is about interview guides, there is an important boundary to state.
If your real question is:
- whether a prototype is usable
- where a task flow breaks down
- what situated behaviour looks like in the real environment
- how a behaviour evolves over time
then interviews may not be the primary method at all.
You may need usability testing, field studies, or diary studies instead.
So before writing the guide, it is still worth returning to the previous article: did you choose the right method in the first place?
Stop here for now
This piece deliberately stays with one question:
how do you ask for less distorted evidence?
The next natural step is the one after the guide is written:
when the session starts, how should a PM facilitate, observe, and take notes without distorting the room?