The State of AI in Qualitative Research—How Researchers Can Benefit

Jul 18, 2024, Renée Hopkins

(This article is reprinted by permission of QRCA. QRCA retains copyright of this material. Questions may be directed to [email protected]. The article can also be found at QRCA VIEWS website: www.QRCAVIEWS.org.)

Artificial intelligence (AI)-driven applications for market research are being developed at what seems like the speed of light, coming out so quickly that it’s difficult for working researchers to keep up. If you haven’t tried any AI applications because you want to wait and see which app will be the “winner”—don’t wait. There will not be one “winner”—different apps do different things, all of them fascinating and useful.

In the last few months, I’ve seen many demos of AI applications and had many conversations with the people behind those apps, both via Zoom and at IIEX in April. Here’s what I’ve learned about the current state of AI in qualitative market research.  

Gains in speed, productivity, and efficiency are not the best reasons to adopt AI—its biggest value is in synergy.

The first AI applications most of us have seen allow for easier ways to organize and analyze the data. These applications save time and, therefore, save money. Yet I heard more than once that saving time and money are value-adds but don’t take full advantage of the new technology’s capabilities. 

To me, it seems the benefit is the ability to take a “holistic” approach to market research. Jack Wilson from 2CV expresses it better: “I would argue that AI allows us to deliver ‘synergetic’ research—it brings elements of qualitative and quantitative research together to deliver new benefits.”1

AI-driven applications make it easier to create innovative mixed-method approaches because they allow you to do qualitative research at scale. You can add qualitative to a quantitative research project without setting up the project in separate stages, including time-consuming hand-offs from qual to quant and, in some cases, back again. 

AI also makes it possible to do large-scale qualitative research projects that include quant approaches and question types. Clients increasingly ask for closed-ended questions to be included in qualitative studies, and AI allows for that without the need to dig through notes, transcripts, and/or videos to count how many participants liked concept X better than concept Y, or to average by hand the scores for every question where you asked participants to put a stake in the ground by rating their opinion on a 1-to-10 scale.
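
To make the arithmetic concrete, here is a minimal sketch in Python of the kind of counting and averaging an AI-assisted tool can handle automatically; the response data and field names are hypothetical, not the output of any particular application.

    from collections import Counter
    from statistics import mean

    # Hypothetical example: each record holds one participant's closed-ended
    # answers captured during an otherwise qualitative interview.
    responses = [
        {"preferred_concept": "X", "ease_of_use_rating": 8},
        {"preferred_concept": "Y", "ease_of_use_rating": 6},
        {"preferred_concept": "X", "ease_of_use_rating": 9},
        {"preferred_concept": "X", "ease_of_use_rating": 7},
    ]

    # Count how many participants preferred each concept.
    preference_counts = Counter(r["preferred_concept"] for r in responses)

    # Average the 1-to-10 ratings instead of tallying them by hand.
    average_rating = mean(r["ease_of_use_rating"] for r in responses)

    print(preference_counts)         # Counter({'X': 3, 'Y': 1})
    print(round(average_rating, 1))  # 7.5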

How might you design a research project if you knew you could easily quantify some findings? If you could do a qual follow-up on quant questions? AI can help find patterns and insights in structured data, unstructured data, and what I would call really unstructured data—online reviews, social media posts, etc. What if you could include this data in your analysis? AI is very good at finding patterns in lots of data.

Adding some AI-moderated qualitative questions to a quant survey can also allow for more in-depth screening of participants who could then be recruited into live, in-depth interviews with a human moderator. This can help weed out fake participants. It’s also a better way to select participants for follow-ups than just including an open-ended question or two on the screener.

The value of using AI applications is in the interaction between you and the AI—not in whether the AI has done a perfect job of coding and summarizing the data. 

AI “hallucinates” at times—but so do humans, at times. Human brains are not always logical and are subject to biases. Sometimes we think we see patterns and insights that are not in the data. Sometimes we miss patterns and insights that are in the data. 

Jack Bowen of Co.Loop says the output from Co.Loop’s AI is “hypothesis-led—outputs are suggestions and should be quickly validated.” Others I’ve talked to say that it’s definitely an interactive process. You shouldn’t simply trust the AI to give you the “correct” answers from the data. 

The AI will find insights you miss, and you will find insights it misses. The interactive process between humans and AI will still go much faster than analysis done by humans alone.

AI is being trained not just on data, but on qualitative interviewing techniques. And one bot can coach another bot in real time. 

Some AI applications for qualitative research have been developed by teams that include anthropologists and linguists who can shape the AI’s ability to understand and use language and its ability to operate within a variety of contexts. I met someone from a company that not only used an AI trained in anthropology but also a second AI trained more specifically in anthropological methods that could “coach” the first AI as it conducted the interview by suggesting questions and follow-up probes. After I found out about that, I had to sit down for a minute and mull over the fact that some AI bots have had better training than many of us probably have had.

AI moderation allows for private, conversational interactions—and many people prefer to be interviewed in conversational interfaces such as text and WhatsApp. 

There’s a cultural difference here. Some people believe that a conversation bot can’t possibly conduct an interview that would prompt a participant to respond fully, as they would during a face-to-face conversation. 

Yet, people do talk about sensitive subjects in texts. In fact, the more sensitive the subject, the more some people would much rather text about it than have a face-to-face or phone conversation. My own Millennial daughter prefers to text me about things she “doesn’t want to talk about on the phone.” Work groups communicate via Slack, Teams, and other conversational messaging applications. Whole families have ongoing text chats or WhatsApp threads. Many people are quite at home with this. 

“Consumers are intimidated by sharing truthful feelings in interviews and focus groups,” says Ben Jenkins, cofounder and CEO of Sympler. “The presence of another human with their biases and judgments can be quite off-putting and has hampered qualitative research’s aptitude for depth.”2 

Sympler’s AI moderators interview participants in the “private, intimate spaces of messaging (Instagram DMs, Snapchat, and Facebook Messenger). Where people are at their most relaxed and emotionally honest, the platform is able to probe deeply and at scale.” Another company I talked to uses AI to interview participants on Slack. 

Some people believe that focus groups tend to be taken over by the louder, more extroverted participants, while the quieter ones aren’t heard. Some believe that having to look a stranger in the eye while you talk about personal matters is a deterrent to full expression. Clearly, opinions on this issue are mixed. 

The more AI is “grounded” in the project, the better the results. 

By “grounded,” I mean well-briefed or well-set-up. Those of us who already work with AI applications are becoming used to the need to set up the project carefully on the front end—including a brief with the research background and objectives and even the discussion guide. Most applications also allow you to identify the speakers and indicate which is the moderator. 
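
To make “grounded” a bit more concrete, here is a rough sketch in Python of the kind of setup material a brief might contain; the field names and example content are hypothetical, not drawn from any specific application.

    # Hypothetical project setup showing what "grounding" material can include.
    # Field names and example content are invented for illustration only.
    project_brief = {
        "background": "Client is relaunching a mid-priced meal-kit service.",
        "objectives": [
            "Understand why lapsed subscribers cancelled",
            "Gauge reactions to the new weekly-menu concept",
        ],
        "discussion_guide": [
            "Tell me about the last time you cooked from a kit.",
            "What would make you consider subscribing again?",
        ],
        # Most applications also let you label the speakers in each transcript.
        "speakers": {"MOD1": "moderator", "P1": "participant", "P2": "participant"},
    }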

When working with the findings, you can create prompts that “query” the data, allowing you to view it from a variety of perspectives. I’ve learned that creating effective prompts can be difficult. One app, ResearchGOAT, helps with this by providing “lenses” for the AI to use as you work with the findings. One choice is “economist.” Applying that lens shows you the insights that emerge when you look at the findings through the lens of behavioral economics. This gives you different ways to explore the data without having to write the query prompts yourself.
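
As an illustration of the “lens” idea, here is a minimal sketch in Python; the lens wording, the function name, and the example findings are assumptions for illustration, not ResearchGOAT’s actual implementation.

    # Hypothetical sketch of applying a ready-made "lens" when querying findings.
    LENSES = {
        "economist": (
            "Review the findings as a behavioral economist. Point out the biases, "
            "heuristics, and trade-offs participants appear to be making."
        ),
        "ux_researcher": (
            "Review the findings as a UX researcher. Focus on friction points "
            "and unmet needs in the experiences participants describe."
        ),
    }

    def build_lens_query(findings_summary: str, lens: str) -> str:
        """Combine a findings summary with a pre-written lens so the researcher
        does not have to craft the query prompt from scratch."""
        return f"{LENSES[lens]}\n\nFindings:\n{findings_summary}"

    prompt = build_lens_query(
        "Participants liked concept X but balked at the monthly price.",
        lens="economist",
    )
    # 'prompt' would then be sent to whichever model the chosen platform uses.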

Another approach is for the AI to be trained on a specific kind of research—concept testing, user experience, etc. I’ve seen a number of AI applications with this kind of structure “baked in.” Either way, the stronger the guardrails the AI must work within, the less likely you’ll get results that are wildly out of context. 

Focusing on the jobs that might be lost to AI is the wrong approach. 

It’s more helpful to think of AI as a technology that can take on some of your tasks but not your entire job. AI will change the way you do your job, but right now, it’s unlikely to take your job away. Consider what kinds of tasks within your job could be replaced or done better by AI. If you learn to use AI to handle the tasks it is best suited for while you spend your time on tasks better suited to humans—for example, tasks that involve strategic thinking, higher-level creativity, and/or interpersonal skills—that becomes a huge professional strength.

It’s worth noting that AI-driven features have been in applications for years, although they were usually referred to as “automation.” Examples: machine transcriptions, spelling and grammar checks, and even that annoying little animated paper clip that used to be in Microsoft Office applications. We are used to such assistance. 

The difference is that with generative functionality, AI has become powerful enough and ubiquitous enough that it will eventually change the way everything works, just as PCs and the internet did.

Many AI research solutions have a la carte plans that keep researchers from being locked into any one platform and give them a variety of tools to try. 

At this time of rapid evolution, it’s good that qualitative researchers don’t necessarily have to tie themselves to one software solution. AI applications do not do exactly the same things, so it’s good to be able to have reasonably priced access to more than one. However, that does increase the overall learning curve. 

In addition to a la carte plans, many AI applications are very inexpensive right now because the entire software category is new. The learning curve might be steep—but at least it’s not prohibitively expensive in most cases. 

AI frees qualitative researchers to focus on their strengths—or to focus on the “fun stuff.”

I’ve been asking everyone what they see as the role of the qualitative researcher in the world of AI. Jill Meneilley of Digsite/QuestionPro said, “Focus on the fun stuff.” She was not the only person to say something like this, but she phrased it best. Qualitative researchers using AI tools should be able to focus their attention on the research itself—the design, the analysis, and, of course, the client—without having to spend hours digging through transcripts.

Make sure to follow privacy and security guidelines while using AI. 

The main security questions to consider with AI applications are where your project data is stored and how it is being used. 

Obviously, our clients want data from their research projects to be kept private. Qualitative researchers are obligated by law to keep participants’ personal data secure and private.4 Ask potential AI vendors about their policies for data privacy and security. They are prepared to answer these questions.

To help you work through these and other issues around choosing a vendor for AI-based services, I recommend ESOMAR’s newly released “20 Questions to Help Buyers of AI-Based Services for Market Research and Insights” (available for free from ESOMAR).3

These basic guidelines should keep you in compliance with most current privacy legislation:

  • Project data should not be made available to train publicly available AI applications. Don’t just upload transcripts into ChatGPT.  
  • Project data must be stored on a secure server in the country where you (and/or your client) are located. 
  • Project data should only be accessible to you, your company, and your client. 
  • When the project is over, participants’ personal data must be deleted completely from the AI vendor’s server. 

References 

  1. Wilson, Jack. Computer Does Qual: Avoiding AI-Overclaim and False Equivalency. Greenbook, The Prompt blog, Dec. 13, 2023 (www.greenbook.org/insights/the-prompt-ai/computer-does-qual-avoiding-ai-overclaim-and-false-equivalency)
  2. Jenkins, Ben. Chatbots Are the Future of Empathy: How candor and vulnerability are elicited from the ‘stranger’. Sympler blog, Nov. 15, 2023. (https://news.sympler.co/sympler-insights/chatbots-are-the-future-of-empathy-how-candor-and-vulnerability-are-elicited-from-the-stranger)
  3. Download at https://esomar.org/20-questions-to-help-buyers-of-ai-based-services
  4. If you haven’t looked into privacy regulations recently, you may be surprised at how much qualitative project-related data is subject to privacy legislation. I highly recommend the on-demand classes from QRCA’s Data Privacy in Qualitative Research Certificate 2023. (https://qualology.qrca.org/item/data-privacy-qualitative-research-certificate-2023-571614)
