Article type

Method article / concept calibration

The confusion this piece is trying to fix

A lot of PMs hear “qualitative” and “quantitative” and immediately treat them as camps.

Some will say they are data-driven, so they start with quant. Others will say the only real way to understand users is through qual. Both positions are familiar. Both can distort the actual job.

Because the real question for a PM is not:

  • which side am I on?
  • are we doing qual or quant this time?

It is closer to this:

  • what am I actually trying to learn?
  • do I lack scale, frequency, confidence, and comparison, or do I lack context, meaning, and motivation?
  • do I need to know how many, or why?

This article is about getting those boundaries straight before the rest of the series starts building on them.

The distinction I find most useful

The most practical distinction is not that qualitative work is “open-ended” and quantitative work “has numbers”.

A cleaner distinction is this:

  • qualitative methods gather evidence by directly observing or hearing behaviours, experiences, or attitudes
  • quantitative methods gather evidence indirectly, through a measurement instrument or system

That sounds subtle, but it changes how you choose methods.

For example:

  • an interview gives you direct access to a person’s description of an experience
  • a field study lets you see work or behaviour in its real setting
  • a diary study asks participants to capture experiences over time
  • a survey usually gathers responses through a structured instrument
  • product analytics usually captures behavioural signals through an event system

Once you frame things this way, a lot of muddy statements become easier to spot:

  • an open-ended survey is not automatically equivalent to an interview
  • hearing the same thing from five people does not somehow make it quantitative
  • a metric is not automatically harder evidence simply because it is numerical
  • doing a few interviews does not by itself mean you properly understand a market

What qualitative work is especially good at

If I strip it right back, qualitative work is strongest at:

  • recovering meaning
  • restoring context
  • unpacking motivation
  • hearing how users describe a problem
  • noticing workarounds, hesitation, misunderstanding, and perceived risk

It is particularly useful when you are still exploring the problem, disentangling causes, or building hypotheses.

For example, when you want to know:

  • why new users never reach an aha moment
  • why one part of onboarding causes people to stall
  • why users say the product is valuable but still do not stay
  • what “too much hassle” actually means in a specific workflow
  • why the value proposition looks clear internally but fails to land externally

These questions share a trait: you do not just want a proportion. You want the structure of the reason.

Qualitative work is more likely to give you leads than verdicts

This matters because people often expect too much certainty from qualitative research.

Five interviews are not supposed to deliver a finished roadmap. More often, good qualitative work helps you:

  • find causes you were not seeing
  • hear the user’s own language for the problem
  • discover where your event definitions are oversimplifying reality
  • generate stronger segmentation ideas
  • tighten the wording of your hypotheses

It fills in the map. It does not stamp the final answer on the page.

What quantitative work is especially good at

Quantitative work is strong in almost the opposite way.

It helps you with:

  • patterns
  • scale
  • trends
  • whether a difference is stable
  • whether something is worth prioritising

For example, when you want to know:

  • whether a complaint is marginal or widespread
  • which funnel step has the worst drop-off
  • whether one segment truly underperforms
  • whether activation or retention changed after a release
  • which issue should move up the roadmap

Those questions are asking for breadth, frequency, severity, and comparison.
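For the funnel question in particular, the arithmetic is trivial once the step counts exist; the judgement sits in how the steps are defined. A minimal sketch in Python, with entirely made-up step names and counts:

  # Hypothetical funnel: how many users reached each step, in order.
  funnel = [
      ("signed_up", 1000),
      ("created_project", 620),
      ("invited_teammate", 240),
      ("ran_first_report", 210),
  ]

  # Step-to-step conversion makes the worst drop obvious.
  for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
      print(f"{prev_step} -> {step}: {n / prev_n:.0%}")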

Quantitative work usually gives you confidence bounds, not reasons

Quantitative evidence is excellent at telling you things like:

  • this is not just two noisy customers
  • this drop is real, not imagined
  • this segment is genuinely weaker
  • this change did not merely feel better, it moved something measurable

What it generally will not do on its own is explain why the pattern exists.

So when a PM says, “let’s look at the numbers first”, that is not wrong. The danger is treating a pattern as if it were already an explanation.

Mixed methods is not a compromise. It is deliberate design.

Teams often say they use both qualitative and quantitative methods. In practice that sometimes means:

  • a few interviews
  • a quick survey afterwards
  • one deck containing both

That can still be useful, but it is not necessarily good mixed-methods work.

A stronger version asks a stricter question:

Are both forms of evidence being used to answer the same core question, and have they been designed to connect to one another?

That is the point that matters.

I find it helpful to think in terms of three common patterns.

1. Quant first, then qual

This makes sense when:

  • you have already spotted a measurable pattern
  • you know where the anomaly is
  • you do not know what is causing it

For example:

you discover that one acquisition source has significantly weaker D7 retention. You can first confirm the pattern quantitatively, then interview users from that source to work out whether the cause is a weak promise, a mismatched context, or delayed value.
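The confirmation step here can be very small. A minimal sketch in pandas, assuming a hypothetical per-user export with an acquisition_source column and a 0/1 retained_d7 flag (both names are illustrative, not taken from any particular analytics tool):

  import pandas as pd

  # Hypothetical export: one row per user in the cohort.
  users = pd.DataFrame({
      "acquisition_source": ["ads", "ads", "ads", "organic", "organic", "referral"],
      "retained_d7": [0, 0, 1, 1, 1, 1],
  })

  # D7 retention per source, with cohort size so tiny sources stand out.
  by_source = (
      users.groupby("acquisition_source")["retained_d7"]
           .agg(users="count", d7_retention="mean")
           .sort_values("d7_retention")
  )
  print(by_source)

Only once a table like that confirms the gap is real do the interviews start, and they can be recruited specifically from the weak source.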

2. Qual first, then quant

This is useful when:

  • you are still exploring the problem space
  • you do not yet know how to define the event, segment, or survey option
  • you need better language and sharper hypotheses

For example:

in a B2B workflow, users keep calling something “a hassle”, but you do not yet know whether they mean permissions, approvals, or moving data between tools. Starting qualitatively can help you produce better survey options, better tracking, or more meaningful segments later.
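One lightweight way to make that hand-off concrete is to code the interview mentions into candidate categories before writing the survey or the tracking plan. A minimal sketch, where the category labels are purely hypothetical stand-ins for whatever your own transcripts suggest:

  from collections import Counter

  # Hypothetical coding of "hassle" mentions pulled from interview notes:
  # one entry per mention, tagged with the category it seemed to describe.
  mentions = [
      "permissions", "approvals", "data_transfer", "approvals",
      "permissions", "approvals", "data_transfer", "permissions",
  ]

  for category, n in Counter(mentions).most_common():
      print(f"{category}: {n} mention(s)")

  # These categories then become closed survey options, event names,
  # or segment definitions in the quantitative follow-up.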

3. Run both in parallel, then connect them

This is strongest when:

  • the problem genuinely lives on two levels
  • you need both pattern and context
  • you have the time and resource to do it properly

For example:

before a major redesign, you run quantitative benchmarking on task success and completion time, while also running qualitative sessions to see where people hesitate, misread, or detour. You end up with more than a scorecard. You understand what sits behind the score.
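On the quantitative side of that parallel design, the benchmark itself can stay simple. A minimal sketch in pandas, assuming a hypothetical table with one row per attempted task (the field names are illustrative):

  import pandas as pd

  # Hypothetical usability-benchmark data: one row per task attempt.
  attempts = pd.DataFrame({
      "task": ["create_report", "create_report", "share_report", "share_report"],
      "completed": [True, False, True, True],
      "seconds": [95, 210, 40, 55],
  })

  benchmark = attempts.groupby("task").agg(
      attempts=("completed", "count"),
      success_rate=("completed", "mean"),
      median_seconds=("seconds", "median"),
  )
  print(benchmark)

The qualitative sessions running alongside it supply the explanation for whichever row looks weak.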

Three method mismatches PMs run into all the time

This is where the distinction becomes practical.

Mismatch 1: trying to explain why with only a dashboard

This is the classic one.

A team sees activation drop and responds by adding more events, slicing the funnel harder, and opening more cohort views. None of that is wrong. But if the real question is:

  • what users are thinking
  • what they think is not worth it
  • what they have misunderstood
  • what context makes the flow impossible to complete

then the dashboard can become more detailed while the team remains stuck in roughly the same place.

Mismatch 2: trying to judge scale with interviews alone

Interviews are excellent at finding and unpacking problems. They are poor on their own as a prioritisation engine.

If five users mention the same pain point, the next move is not automatically to go all in. The next questions are usually:

  • how common is this?
  • which segment suffers most?
  • does it materially affect conversion, retention, or adoption?
  • how does it compare with the other problems we could solve first?

That is where quantitative work comes back in.

Mismatch 3: treating mixed methods as a data collage

Some teams do everything at once:

  • interviews
  • surveys
  • analytics
  • support tickets
  • session replays

On paper, that looks comprehensive. In practice, it can become a very thick pile of disconnected evidence.

More inputs do not automatically create better judgement. The integration matters.

A simple way for PMs to choose

If you are trying to decide where to start, do not begin with the method label. Begin with the shape of the question.

Lean qualitative when the question sounds like this

  • why are new users stopping here?
  • how do they understand this flow?
  • what are they worried about?
  • which workaround are they using to get the job done?
  • what kind of “hassle” are they actually describing?

Lean quantitative when the question sounds like this

  • how often is this happening?
  • which segment is hit hardest?
  • where is the biggest drop?
  • did this really improve?
  • is this important enough to move up the queue?

Use mixed methods when the question sounds like this

  • why does this pattern exist?
  • how can I sharpen the explanation into a testable hypothesis?
  • is this complaint a broad problem or just a loud niche?
  • do I need to know whether it exists, how large it is, and why it happens?

When not to force mixed methods

This is worth saying because mixed methods can start to sound like the premium option for every project.

There are situations where I would not force it.

1. The research question is genuinely narrow

If you only need to check whether a flow has obvious usability issues, a small qualitative usability study may be enough. There is no need to bolt on a survey that will not alter the decision.

2. The team has not defined the core question yet

If nobody can articulate what the team is trying to learn, combining methods simply doubles the confusion.

3. You cannot integrate the two evidence streams

The main risk in mixed methods is not just cost. It is finishing with two piles of evidence that never meet.

At that point, you have not created a stronger study. You have created a larger document.

The judgement I hope PMs keep from this piece

I do not particularly want readers to leave with more jargon. I would rather they keep hold of three things.

First, the difference between qualitative and quantitative work is not merely whether there are numbers involved. It is whether you are directly encountering behaviour and experience, or inferring them through an instrument.

Second, qualitative work is usually better for unpacking causes and context, while quantitative work is better for sizing, comparing, and prioritising.

Third, mixed methods becomes valuable when the methods are intentionally designed to answer the same question together, not when they merely appear in the same slide deck.

If those distinctions are clear, the later choices in this series become much easier. You will waste less time deciding whether to interview, observe, benchmark, survey, or go deeper into analytics.

What comes next

The next article keeps moving forward, but not yet into discussion-guide design.

First, I want to clear up another common confusion: not every research session is an interview.

A lot of PMs only have the interview hammer, and eventually every uncertainty starts looking like an interview problem. But user interviews, usability tests, field studies, and diary studies are not doing the same job.

That boundary needs to be clean before we get into how to recruit, how to ask, and how to analyse.