Prompt. Probe. Repeat. Modern Qual Has a New Dance Partner!

Jun 25, 2025, Ushma Kapadia

You know that moment mid-analysis when your brain hits a wall, and you wish you had a sharp junior researcher to bounce ideas off?

Turns out, that junior could be a GenAI… if you know how to talk to it.

Prompting isn’t some futuristic skillset. It’s just asking good, clear, purposeful questions. Which, let’s be honest, is second nature to anyone who’s run a tricky IDI or turned open-ended chaos into a client-ready deck.


Prompting is a Qual skill in disguise

For trained qualitative researchers, talking to AI is less about tech and more about applying questioning techniques they have already mastered. The guidelines below make the two-way conversation richer and more productive:


  • Role Prompting: Assign AI a ‘hat’ to wear

AI works better when you ask it to step into a specific persona. You can literally tell it who to pretend to be while carrying out the task you give it.

For instance, when working on a Product Testing study, try:
“You’re a UX researcher summarizing 12 in-home use tests. What were participants’ first-use reactions? Group them by user type.”

Or in a Brand Equity context, try:
“Act like a brand strategist for ___ [mention brand]. What associations – positive, neutral, or negative – are being made with ___ [mention brand]? Present in a table, with verbatims.”

Just like you frame your questions differently for a Marketing lead versus a Product Manager, you can steer GenAI outputs using role-based cues.
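
(If you, or a data-savvy teammate, talk to GenAI through code rather than a chat window, here is a minimal sketch of where that ‘hat’ goes, assuming the OpenAI Python SDK; the model name and the transcript variable are placeholders, not recommendations.)

    # Minimal sketch: role prompting via a system message (OpenAI Python SDK).
    # "gpt-4o" and transcript_notes are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    transcript_notes = "..."  # paste or load your in-home use test notes here

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # The system message is where the persona - the 'hat' - goes.
            {"role": "system",
             "content": "You are a UX researcher summarizing 12 in-home use tests."},
            {"role": "user",
             "content": "What were participants' first-use reactions? "
                        "Group them by user type.\n\n" + transcript_notes},
        ],
    )
    print(response.choices[0].message.content)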

 

  • Specify the output format: Tell it how you want the output presented, whatever the task

Don’t just ask for “insights.” Guide AI by telling it how to present its output. You can ask GenAI to return results as bullet points, tables, tiers, themes… whatever fits your needs.

For example:
“List 5 recurring pain points mentioned in these interviews. Use a bulleted list and include one quote per point.”

Or in a Customer Satisfaction project:
“Give me a two-column table: Column A for expressed frustrations, Column B for suggested fixes (explicit or implied). Rank them by frequency (most to least mentioned)”

Such structured prompting makes outputs more digestible and easier to develop follow-up prompts around.

Literal prompt:

“What are consumers saying in this interview?”

Better prompt:

“Give me a two-column table: Column A for customer frustrations, Column B for suggested workarounds they mentioned.”

 

  • Break down tasks: Don’t ask for the Moon, all in one shot

If you have 20 transcripts and ask “What are the key insights here?”, you’ll likely get a generic soup of bullet points. Instead, chunk your asks: trying to get too much in one go often makes AI overgeneralize or turn vague. Start with:

  • “Summarize emotional responses in the first 3 transcripts.”
  • Then: “Now show me language patterns that differ between enthusiastic and skeptical respondents.”
  • Finally: “Generate three storylines we could use to frame the report.”

Literal prompt:

“Tell me useful insights from this focus group”

Better prompt:

“What are the emotional drivers mentioned in the first 10 minutes?”

“Which quotes suggest brand disconnection?”

“List 3 ideas we could explore further in follow-up interviews.”

This kind of “chunking” mirrors how the human brain works through qualitative data anyway.
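
For anyone scripting this rather than chatting, the same chunked approach can be written as a short loop that keeps the conversation history, so each ask builds on the last answer. A minimal sketch, again assuming the OpenAI Python SDK; the model name, prompts, and transcript variable are placeholders:

    # Minimal sketch: chunked prompting as a sequence of asks over one conversation.
    from openai import OpenAI

    client = OpenAI()
    focus_group_transcript = "..."  # load your transcript text here

    steps = [
        "What are the emotional drivers mentioned in the first 10 minutes?",
        "Which quotes suggest brand disconnection?",
        "List 3 ideas we could explore further in follow-up interviews.",
    ]

    # Keep a running message history so each ask builds on the previous answer.
    messages = [{"role": "user",
                 "content": "Here is a focus group transcript:\n\n" + focus_group_transcript}]

    for step in steps:
        messages.append({"role": "user", "content": step})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print("\n---", step, "\n", answer)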

 

  • Context improves everything: Tell it what you're doing and why

Framing prompts like a qual researcher fetches better results: share the client’s industry, their marketing objectives, what the research set out to do, and who the target audience is.

For instance:
“You’re analyzing IDIs for a Customer Satisfaction study. As a client, I want to know why NPS is dipping among repeat users. What themes can you identify that point to churn risks?”

 

Literal prompt:

“Analyze this set of 3 interviews for NPS.”

Better prompt:

“We’re conducting a Customer Satisfaction study for a retail brand. This transcript is from a repeat buyer who recently gave a low NPS. What signals suggest why their loyalty is slipping?”

You don’t need to write an essay; just enough context to give the AI boundaries to respond within. You’d do the same when briefing a colleague, right? The more you explain what you’re trying to do, the more relevant the response.

 

  • Lastly… give examples: Guide AI a little

AI loves examples. You don’t need to give it a full template; just enough to help it see your logic. Say you’re prepping toplines for different internal teams. Try this:
“Here’s how I’d summarize for a Communications team at ___ [mention brand]: ‘Consumers love the idea, but feel the tone is off-brand.’ Now do the same for this next theme.”

Just one sample helps GenAI match your style and logic. Think of it like asking your junior for a good first draft that you can build on.
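
In code, that “one good example” is usually passed as a worked exchange before the real ask, so the model can copy the pattern. A minimal sketch under the same assumptions (OpenAI Python SDK; model name, brand and theme are placeholders):

    # Minimal sketch: one worked example ("one-shot") before the real ask.
    from openai import OpenAI

    client = OpenAI()
    next_theme = "Participants like the packaging but find the instructions confusing."  # placeholder

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You write one-line topline summaries for internal teams."},
            # The worked example, phrased as a previous exchange, shows the
            # style and logic you want the model to match.
            {"role": "user",
             "content": "Summarize this theme for the Communications team: "
                        "consumers respond well to the concept but question the tone."},
            {"role": "assistant",
             "content": "Consumers love the idea, but feel the tone is off-brand."},
            {"role": "user",
             "content": "Now do the same for this next theme: " + next_theme},
        ],
    )
    print(response.choices[0].message.content)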


Try This Today

Still unsure where to start? Try out these readymade prompts.

Working with AI won’t replace you. But it will help you move faster, think sharper, and see wider. And if you already know how to ask smart questions, you’re already halfway there!
