I’ll start out by saying: I’m neither an AI skeptic nor an AI proponent.
I AM someone who works in a business that is decidedly human— qualitative research— a business entirely made up of people discussing things with one another.
And I’ve come to understand that, given the conversational nature of my work, this both inoculates me against AI’s immediate encroachment and makes my career vulnerable in the future, as AI language models advance by leaps and bounds.
Now, like many others, I find myself wrestling with this too-large, existential-type question: can a human conversation be reduced entirely to the language that makes up its contents? But of course, that’s a question that takes me too far out of my league to answer: I don’t have a degree in tech ethics, I’m not an engineer, I’m not familiar enough with how these systems work to speculate, and I’m certainly not going to solve it faster than the people who have devoted their lives and work to these technologies.
But there is a smaller question that I can bite off, and that’s whether and how to use AI in my practice: whether it has a place at all, and if so, what that place is.
This is where you might be thinking, “Oh no, another LinkedIn thought piece about how to use AI in your job”— the good news is, that’s absolutely not what this will be.
Luckily, lots of folks have jumped into the fray, using AI tools in qualitative and quantitative research. They now represent the loads of vendors in my inbox, letting me know that they can perform market research (insert comparative of choice: ‘better,’ ‘more efficiently,’ ‘more effectively’) thanks to the use of AI.
I couldn’t consider myself a good researcher if I didn’t take some of them up on their sales pitches to learn what I could. I can’t speak to every tool or every pitch, but essentially, many of these services rely on AI to do text-based analysis of video or verbal transcripts, which are themselves generated from live video or audio by AI.
Most recently, I attended a webinar laying out the case that AI has advanced and can now parse finer details than before. Researchers had been using AI to broadly ‘code,’ or bucket, quotes and snippets into high-level themes.
Now, this webinar host said, AI could provide a more nuanced and detailed analysis of the text before it, and split hairs between more distinct types of responses. This promise relied on the machines’ ability to “understand the words,” the vendor said.
Meanwhile, as they talked, their company had an AI-generated live transcription working to turn their words into text. Speaking familiarly, they continually used the common shorthand “qual” to refer to the field of qualitative research.
And throughout their entire presentation, the AI-generated captions got the word “qual” wrong: they rendered the spoken word as “wall,” “goal,” “ball,” “quality,” “call,” and “ballet,” among other misses.
Oops.
Look, I don’t think anyone would preach that AI transcription is perfect, but when the entire value of the service relies on text-based analysis from transcription, and ‘understanding’ that text, this becomes a more glaring issue.
From what I could tell, the AI ‘misheard’ these words because of the context around them. When the speaker was talking about ‘scale’ (as in, numbers of research participants), the transcription defaulted to “wall” rather than ‘qual,’ presumably because ‘scale’ and ‘wall’ appear together more often. The same thing repeated with the other words: faced with an abbreviated shorthand it did not understand, the system was clearly substituting whatever associations it deemed most likely.
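To make that mechanism concrete, here’s a toy sketch in Python with entirely made-up numbers. It illustrates the general principle (not any vendor’s actual system): a speech-to-text decoder weighs how close each candidate word sounds against how probable a language model thinks that word is in context, and jargon like ‘qual’ barely registers in the language model, so a common-but-wrong word wins.

```python
# Toy illustration of ASR decoding (not any vendor's real system):
# acoustically similar candidates are rescored with a language model,
# and the most probable one is kept. All numbers below are invented.

# Hypothetical candidates for the spoken word "qual", with made-up
# acoustic-match scores (how close each sounds to what was said).
candidates = {"qual": 0.90, "wall": 0.70, "call": 0.65, "goal": 0.60}

# Made-up language-model probabilities for each word in the sentence
# context. "qual" is niche shorthand, so its probability is tiny.
context_prob = {"qual": 0.0001, "wall": 0.02, "call": 0.03, "goal": 0.01}

def best_transcription(candidates, context_prob):
    """Pick the word maximizing acoustic score x contextual probability."""
    return max(candidates, key=lambda w: candidates[w] * context_prob.get(w, 0.0))

print(best_transcription(candidates, context_prob))  # prints "call", not "qual"
```

In other words, the system isn’t ‘mishearing’ so much as betting on the statistically likelier word, which is exactly why specialist shorthand loses.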
Putting this snafu to the side, there are some other issues with using AI for qualitative research that I was reminded of on a study I recently completed.
Namely: I work in a business where what people say is not always what they mean.
I’ve written a few pieces in the past about the phenomena that cause this: one about cognitive biases (***LINK***), one about puffery and posturing in groups (***LINK***), and one about how listening for the undercurrent of a spoken phrase is as important as hearing the words (***LINK***). But there are more reasons why even the best language models might struggle to interpret meaning from the text of a qualitative research study.
1. Sometimes the textual content of a phrase does not convey what people mean (at all)
A recent client of mine was curious to get to the bottom of why people were discontinuing use of their service. It’s a common question I get from clients, and a pressing one for sure, as the impact of customer attrition on the bottom line is keenly felt.
When leaving the service, former customers filled out a short survey, and most of the departing customers mentioned price, cost, or some variation of that. The client used ChatGPT to help group or ‘code’ these responses, and the tool indeed listed cost concerns as one of the primary reasons given for churn.
Still curious to find out more, they hired me to do qualitative research on the heels of this survey, to dig further into these and other themes and make the findings more detailed and thereby more actionable.
Turns out, consumers only mentioned price as a factor when they had already begun to lapse in their usage of the service. Meaning, cost only became an issue once they realized they were not “getting their money’s worth” from the monthly-fee service due to their lessening use of it.
Which means something else started the clock on that lessening that eventually led to churn.
This is just one example of the many ways that qualitative research is about so much more than the words that people say— it’s about digging for the root of what’s said.
2. If a researcher doesn’t hear it from the person themselves, they can miss all the meaning
The ability to gauge and understand emotions is what makes up the entirety of the ‘art’ of qualitative research. Sure, there’s some ‘science’ to it, but if I fed a set of scripted questions to either an inexperienced human interviewer OR an AI program, I wouldn’t get nearly the same result as if an experienced qualitative researcher were listening to every word themselves.
Inflection goes far beyond the words themselves in delivering real meaning. Watching a person’s DELIVERY in real time is how their words get flavored and seasoned: with the zest of enthusiasm, the melancholy of a twinge of sadness, or the flat-line apathy that reveals where no impact is felt. These inflections create big learnings in and of themselves.
Gestures, facial expressions: all of these emotional indicators are huge tips that show a researcher the right places to dig, and all are things AI still does not excel at discerning.
This happens to underlie why I’ve always kept my qualitative research business a sole proprietorship, despite my accountant’s (near-constant) suggestion that I hire lower-level employees to do the bulk of the moderation of the focus groups and interviews in my practice, and focus my attention on more “high-level” (his words) analyses of those interviews.
I have politely demurred at this suggestion, every time.
Doing each and every interview IS doing the high-level analysis, because when a participant said something, how they said it, the look on their face at the time, their tone, their mannerisms: all of that was as important as WHAT they said.
And I want to be the kind of qualitative researcher that’s entirely present for every single one of those cues and clues to how people are really feeling, so I can go beyond what they’re saying.
_______
Can AI be used in research? Absolutely, and it is.
AI works well as a preliminary way to ‘code,’ group, or bucket open-ended responses from an online quantitative survey, for example, serving as a ‘first pass’ that saves a ton of time. I’ve used it for exactly that, and while it doesn’t always understand the nuance in the open-ends (they should always be reviewed in detail by a human being), it can be a quick way to get to some high-level, overarching themes.
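For anyone curious what that ‘first pass’ can look like in practice, here is a minimal sketch using the OpenAI Python SDK. The model name, theme list, and prompt are illustrative placeholders, not my actual workflow or any client’s setup, and the output still needs that human review:

```python
# A minimal sketch (not the author's actual workflow) of using an LLM
# as a "first pass" to bucket open-ended survey responses into themes.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative theme list; a real codebook would come from the study.
THEMES = ["price/cost", "ease of use", "customer support", "features", "other"]

def code_response(open_end: str) -> str:
    """Ask the model to assign one high-level theme to a single open-end."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Classify the survey response into exactly one of "
                        f"these themes: {', '.join(THEMES)}. "
                        "Reply with the theme only."},
            {"role": "user", "content": open_end},
        ],
    )
    return completion.choices[0].message.content.strip()

# Every machine-coded bucket should still be reviewed by a human:
# the model sees only the words, not what the respondent meant by them.
print(code_response("Honestly it just wasn't worth the monthly fee anymore."))
```

Notice that this sketch would happily file my churn example above under “price/cost,” which is exactly the point: the words say cost, but the meaning started somewhere else.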
And I’m sure it has other applications, especially for quantitative.
But in qualitative? That’s a ‘no’ for me right now.
There are lots of promises out there for using AI in qualitative, to be sure, and lots of vendors are certainly looking to sell the technologies and platforms that utilize it.
Yet, for me, the gold nuggets of qualitative research come from being completely, totally present as a single human listener and interviewer in each of my in-depths or focus groups. Then it’s about applying my experienced, strategic mind to pull insights out from behind and around the words, being unafraid to get down into difficult emotions and topics with people, and evaluating the whole messiness of it (not just the words on the transcript pages) to find the meaning within. Lastly, it’s about organizing all of that and communicating it back to the client in a clear, powerful story they can easily follow, understand, and hopefully relate to emotionally as the human beings they also are.
Until AI can do that, I won’t be worried, as that’s where the brilliance of qualitative really shines.
Whew.
Now, given AI is a hot topic (and sometimes a fraught one), what’s your take on the above? I’d love to hear how, or if, you’re using AI in research, and what your results have been. Comments are open.
