Article type

Practical method piece / Research ops primer

The one thing this piece is trying to say

A lot of PMs assume the hardest part of user research is the interview itself.

Quite often, the study has already gone wrong before that.

  • nobody was clear where to find participants
  • one invitation email went out and everyone waited for luck to do the rest
  • the screener effectively told people which answers would get them in
  • scheduling became chaotic and no-shows wrecked the round
  • incentives were paid late, making the next study harder to recruit for
  • consent and data handling were treated as an afterthought

So outreach, screeners, incentives, and consent are not administrative debris sitting next to research.
They are the minimum viable research-ops layer that turns recruitment from a one-off scramble into something repeatable, predictable, and sustainable.

Start with the core judgement: recruitment is not scheduling, it is part of study design

I no longer think of recruitment as “getting a few sessions into the calendar”. I think of it as three linked questions:

  1. What kind of evidence does this study actually need?
  2. Which people are most likely to provide that evidence?
  3. Which process will reliably let the right people in and keep the wrong people out?

If those questions are still fuzzy, a polished outreach email will only scale the fuzziness.

Recruitment needs to answer five practical questions:

  • where will participants come from
  • how will the screener qualify them
  • what communication rhythm will keep the process moving
  • how will incentives be handled fairly
  • what consent and privacy boundaries need to be clear from the start

Those five decisions together are what make a recruitment system.

Step 1: write the recruitment brief before you write the invitation

Teams in a hurry often do the reverse.

They draft the invitation first, then work out the details later.

The more dependable order is to write the recruitment brief first.

That brief is not only for agencies. Even if you are recruiting participants yourself, it forces you to answer the questions that matter:

  • what this round is trying to learn
  • which participant groups are needed
  • the inclusion and exclusion criteria
  • what variation should be preserved
  • whether the sessions are remote or in person
  • what access needs must be supported
  • what the incentive is
  • how the study will be scheduled
  • what personal data is genuinely necessary

If that is all still vague, the invitation tends to become something woolly like “we’d love to chat about your experience”, and the participant pool becomes equally woolly.
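
To make that concrete, here is a minimal sketch of the brief as a structured record in Python. The field names are illustrative, not a standard; the point is that every field forces a decision before any invitation goes out.

    from dataclasses import dataclass

    @dataclass
    class RecruitmentBrief:
        learning_goal: str               # what this round is trying to learn
        participant_groups: list[str]    # which participant groups are needed
        include: list[str]               # inclusion criteria
        exclude: list[str]               # exclusion criteria
        preserve_variation: list[str]    # e.g. a mix of new and long-term users
        session_mode: str                # "remote" or "in person"
        access_needs: list[str]          # access needs to be supported
        incentive: str                   # what participants receive
        scheduling_plan: str             # how sessions will be booked
        personal_data_needed: list[str]  # only what is genuinely necessary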

A practical rule

I like a brief that can answer this sentence clearly:

Which participants will move this study closer to an answer, and which participants will distort it?

If you cannot answer that, the screener will not save you.

Step 2: choose recruitment sources deliberately, rather than assuming there is only one route

There is no universal channel for finding research participants. For PMs, there are usually five practical options.

1. Existing user lists or product data

This is often the fastest route.

It works well when you need:

  • people who recently completed a behaviour
  • users who dropped off, cancelled, churned, or requested a refund
  • active users of a particular feature
  • people who completed or failed onboarding recently

The benefit is precision.
The risk is that you only hear from people who are already visible to you.
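
As a sketch of what this route looks like in practice, assuming product events land in a pandas DataFrame with hypothetical user_id, event, and event_at columns:

    import pandas as pd

    def recent_candidates(events: pd.DataFrame, event_name: str, days: int = 30) -> pd.Series:
        """Return deduplicated user_ids who performed event_name in the last `days` days."""
        cutoff = pd.Timestamp.now() - pd.Timedelta(days=days)
        recent = events[(events["event"] == event_name) & (events["event_at"] >= cutoff)]
        return recent["user_id"].drop_duplicates()

    # Tiny illustrative event log
    now = pd.Timestamp.now()
    events = pd.DataFrame({
        "user_id": ["u1", "u2", "u3"],
        "event": ["subscription_cancelled", "onboarding_completed", "subscription_cancelled"],
        "event_at": [now - pd.Timedelta(days=3), now - pd.Timedelta(days=10), now - pd.Timedelta(days=45)],
    })
    print(recent_candidates(events, "subscription_cancelled").tolist())  # ['u1']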

2. CRM lists, newsletters, or research opt-in pools

If you already have a research opt-in pool, that is a genuine operational asset.

But these participants are often more feedback-friendly and more comfortable with research than the average user. Sometimes that is fine. Sometimes it softens the evidence.

3. Third-party communities, professional bodies, or partners

If your study concerns a specific role or context, such as:

  • freelancers
  • teachers
  • accommodation hosts
  • carers
  • specialist-tool users

then communities and professional groups can be more efficient than a broad email blast.

But being in the community does not automatically mean someone fits the research question.

4. Recruitment agencies or panels

These become useful when you need:

  • general-public participants
  • specific geographies
  • particular professions
  • assistive-technology users
  • harder-to-reach audiences

Agencies are not magic, though.
A vague brief and a bad screener will simply help them recruit the wrong people more efficiently.

5. Situational or in-the-moment recruitment

Some questions depend heavily on fresh memory and real context. For example:

  • just after a booking flow
  • right after a checkout
  • after a clinic visit
  • after completing an application
  • while leaving a physical service environment

In those cases, event-triggered or contextual recruitment can be more truthful than a broad batch email.
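
A minimal sketch of the triggering logic, assuming a hypothetical checkout_completed event and an in-memory record of who has already been asked; the sample rate and event name are illustrative:

    import random

    recently_invited: set[str] = set()  # in practice, persist this somewhere durable

    def maybe_invite(user_id: str, event: str, sample_rate: float = 0.1) -> bool:
        """Decide whether to show an in-product research invite after a qualifying event."""
        if event != "checkout_completed":
            return False
        if user_id in recently_invited:    # do not re-ask people who were just asked
            return False
        if random.random() > sample_rate:  # only intercept a small share of sessions
            return False
        recently_invited.add(user_id)
        return True                        # show the invite while the memory is fresh

    print(maybe_invite("u1", "checkout_completed"))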

Step 3: the screener is not a form, it is the first methodological gate

PMs usually make one of two mistakes with screeners.

Either they write a sign-up form.
Or they write a mini personality test.

Neither is the point.

A screener exists to do one thing well:
use the smallest number of questions possible to determine whether someone is qualified to answer the research question.

Principles of a good screener

1. Ask about past behaviour before future intention

Instead of asking:

  • would you use a product like this?
  • do you care about accommodation information?

it is usually better to ask:

  • when did you last complete an accommodation booking yourself?
  • what made you abandon the last booking you did not finish?
  • what do you currently use to compare listings or organise a trip?

Past behaviour is far more dependable than future promises.

2. Do not reveal the “right” answer

If the screener effectively signals:

  • we are looking for people with this pain
  • we prefer this type of answer
  • tell us you are frustrated and you might get selected

then you risk recruiting people who are good at being recruited rather than people who are good for the study.

3. Put eligibility first and opinions later

Confirm fit first.
Collect richer descriptive detail afterwards.

Otherwise you end up with lots of attractive information about people who should never have been in scope.
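
Here is a toy sketch of what eligibility-first gating looks like as logic. The rules and answer keys are illustrative; a real screener encodes its own criteria.

    from typing import Callable

    EligibilityRule = Callable[[dict], bool]

    # Hard gates, checked before any descriptive questions are even considered.
    RULES: list[EligibilityRule] = [
        lambda a: a.get("last_booking_days_ago", 999) <= 90,     # recent real behaviour
        lambda a: not a.get("works_in_market_research", False),  # standard exclusion
    ]

    def screen(answers: dict) -> str:
        for rule in RULES:
            if not rule(answers):
                return "disqualified"  # stop early: no point collecting richer detail
        return "qualified"             # only now move on to descriptive questions

    print(screen({"last_booking_days_ago": 20, "works_in_market_research": False}))  # qualified
    print(screen({"last_booking_days_ago": 200}))                                    # disqualified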

4. Keep it short, but make it discriminating

I would rather have a shorter screener with sharper cuts than a long one that behaves like a survey.

Long screeners reduce completion, increase fatigue, and attract people who are mainly chasing the incentive.

A practical reminder

The screener is not there to identify the most articulate participant.

It is there to identify the participant best placed to answer the study’s question.

Step 4: outreach should not read like marketing copy, but it should not sound like cold admin either

Good research outreach helps the recipient understand four things quickly:

  • who you are
  • what the study is broadly about
  • why they were invited
  • what participation involves and what they will receive

If the email sounds like a campaign, people may assume you are selling something.
If it sounds like sterile internal admin, response rates tend to collapse.

The core elements I keep

  • one clear sentence on the study purpose
  • why this person is receiving the invitation
  • session format and duration
  • the incentive
  • a brief note on confidentiality and data handling
  • a simple call to action, usually the screener or a reply of interest
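
Those elements can live in a reusable template. A minimal sketch using string.Template; the placeholder names and the wording itself are illustrative, a starting point rather than a script:

    from string import Template

    OUTREACH = Template("""\
    Hi $name,

    I'm $sender, a product manager at $company. We're running a study on
    $topic, and you're receiving this invitation because $reason.

    It's a $duration $format session, and we offer $incentive as a thank-you
    for your time. Anything you share is confidential and used only for this
    research.

    If you're interested, this short questionnaire checks fit: $screener_link
    """)

    print(OUTREACH.substitute(
        name="Alex", sender="Sam", company="Acme",
        topic="how people plan and book trips",
        reason="you completed a booking with us recently",
        duration="45-minute", format="remote", incentive="a £40 voucher",
        screener_link="https://example.com/screener",
    ))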

Tone rules for outreach

  • be transparent, not theatrical
  • do not pretend to know the person
  • do not downplay the effort by calling it “just a quick chat”
  • do not imply that some answers are more welcome than others
  • do not treat participants like free consultants

Step 5: scheduling, reminders, and backups often determine whether the round survives contact with reality

This is the least glamorous part, which is precisely why it causes so much damage when neglected.

If you are recruiting yourself, I would suggest doing at least three things.

1. Keep backup participants

Not everyone who agrees to take part will turn up.

If the timeline is tight and you have no backups, you are basically hoping for good weather.

2. Use a consistent reminder cadence

For example:

  • one confirmation when booked
  • one reminder the day before
  • one short reminder one to two hours before the session

That is not pestering. It is basic no-show prevention.
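
That cadence is easy to compute mechanically. A minimal sketch with the standard library; how the reminders are actually delivered is up to your tooling:

    from datetime import datetime, timedelta

    def reminder_times(session_start: datetime, booked_at: datetime) -> dict:
        return {
            "confirmation": booked_at,                          # when booked
            "day_before": session_start - timedelta(days=1),    # one day out
            "final_nudge": session_start - timedelta(hours=2),  # one to two hours out
        }

    session = datetime(2026, 6, 12, 14, 0)
    for label, when in reminder_times(session, datetime.now()).items():
        print(label, when)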

3. Make the session logistics explicit

Especially:

  • time zone
  • meeting link or location
  • anything they need to prepare
  • whether a camera is required
  • whether the session will be recorded
  • what to do if they need to cancel

When these details are vague, participant experience deteriorates very quickly, and poor participant experience makes future recruitment harder.

Step 6: incentives are not for buying answers, they are for respecting time and effort

I dislike the idea that an incentive is simply “a small thank-you gift”.

That framing often leads to bad decisions.

An incentive does not buy an opinion. It acknowledges that participants are spending time, attention, context-switching energy, and sometimes travel, equipment, care, or emotional effort.

When setting an incentive, I look at:

  • session length
  • opportunity cost
  • topic sensitivity
  • recruitment difficulty
  • whether travel or in-person attendance is required
  • whether the audience is harder to reach
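
One way to keep this fair is to write the policy down as explicit rules rather than haggling per study. A toy sketch; every number here is illustrative, not a benchmark:

    def incentive_gbp(minutes: int, sensitive: bool = False,
                      hard_to_reach: bool = False, in_person: bool = False) -> int:
        amount = 40 * (minutes / 60)  # base hourly rate for session time
        if sensitive:
            amount *= 1.5             # sensitive topics ask more of people
        if hard_to_reach:
            amount *= 1.5             # scarce audiences carry higher opportunity cost
        if in_person:
            amount += 15              # travel time and costs
        return round(amount)

    print(incentive_gbp(60))                                  # 40
    print(incentive_gbp(90, sensitive=True, in_person=True))  # 105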

Two principles matter most

1. Do not set it too low

An incentive that is too low makes it harder to recruit appropriate participants and increases the chance of drawing from a distorted slice of the market.

2. Do not pay late

Slow incentive fulfilment does real reputational damage.

In the short term it looks like a minor process issue. In the long term it erodes trust and makes participant recruitment progressively harder, especially if you are trying to build your own pool.

Step 7: consent and data handling are part of the research, not an afterthought

At this point many PMs have the same instinct:

“Surely we do not need to be that formal. This is only product research.”

But user research still involves personal data, recordings, transcripts, internal sharing, cloud tools, third-party transcription, and sometimes sensitive material.

So consent is not just a tick box. Participants need to understand:

  • what the study is for
  • what data will be collected
  • whether audio or video will be recorded
  • how the data will be stored and who can access it
  • whether they can skip questions
  • whether they can withdraw
  • what anonymisation or deletion looks like later
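
A minimal sketch of recording those points per participant, so consent is explicit rather than assumed. The field names are illustrative:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class ConsentRecord:
        participant_id: str
        purpose_explained: bool           # what the study is for
        data_collected: list[str]         # e.g. ["audio", "notes"]
        recording_consented: bool         # audio/video agreed explicitly
        storage_access_explained: bool    # where data lives, who can see it
        may_skip_questions: bool = True
        may_withdraw: bool = True
        deletion_policy: str = ""         # what anonymisation or deletion looks like
        consented_at: Optional[datetime] = None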

My minimum-viable rules

  • do not collect personal data you do not need
  • do not be vague about recordings
  • do not treat consent as something you mention in passing
  • do not scatter participant data across random spreadsheets and chats
  • do not quietly repurpose research contact lists as marketing lists

This may not look like interviewing craft, but it directly affects participant trust and whether research can become a sustainable practice inside the team.

The thing PMs should build is not a single invitation, but a recruitment system

You do not need a fully fledged research-ops platform on day one.

But it is worth building a small set of reusable assets:

  • a recruitment-brief template
  • a screener template
  • outreach email templates
  • reminder templates
  • a consent template
  • an incentive policy
  • a participant tracker
  • a cancellation and no-show rule

Once those exist, every future round starts with more signal and less chaos.
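
The participant tracker in particular benefits from being structured data rather than scattered spreadsheets. A minimal sketch; the statuses and fields are illustrative:

    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        INVITED = "invited"
        SCREENED = "screened"
        SCHEDULED = "scheduled"
        COMPLETED = "completed"
        NO_SHOW = "no_show"
        CANCELLED = "cancelled"

    @dataclass
    class Participant:
        pid: str
        status: Status = Status.INVITED
        is_backup: bool = False
        incentive_paid: bool = False
        notes: list[str] = field(default_factory=list)

    roster = [Participant("p1", Status.SCHEDULED), Participant("p2", is_backup=True)]
    backups = [p for p in roster if p.is_backup]  # who to call when someone no-shows
    print(len(backups))  # 1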

That is why I think this is such a worthwhile PM skill.
Do not treat recruitment as a recurring emergency. Treat it as standing infrastructure for discovery work.

When not to force self-recruitment

This piece is not arguing that all studies should be recruited in-house.

In some situations I would actively recommend against it:

  • highly sensitive or high-risk participant groups
  • very specialised or hard-to-reach audiences
  • large samples across regions under tight timelines
  • studies requiring explicit accessibility support
  • work with heavy legal or privacy constraints

In those situations, agencies, specialist partners, or a proper research team are usually the more sensible route.

Stop here for now

This piece deliberately stops at the recruitment system.

The next piece moves one step earlier in the study itself, into another common mistake:

many people think they are writing an interview guide when they are really just listing questions.

A proper interview guide is not about ordering prompts neatly. It is about helping you uncover usable evidence without steering the participant into your own assumptions.