(This article was previously published on Greenbook, and has been reprinted with the permission of the author Jack Wilson. You can find the original here.)
I've been thinking about an idea Yuval Noah Harari has been discussing a lot this year: that AI systems are transitioning from a phase of capturing people's attention to a new phase focused on cultivating intimacy.
Yuval has argued that during the attention phase, AI systems (particularly on social media) were designed to keep users engaged for as long as possible, using techniques that encouraged addictive behaviours. Now, AI has entered the intimacy phase, where it is increasingly able to build deep connections with users through dialogue.
He posits that through mastery of language, AI is increasingly able to form intimate relationships with people and use the power of intimacy to change our opinions and worldviews. In a political battle for minds and hearts, intimacy is the most efficient weapon for shifting opinions, and AI has gained the ability to mass-produce intimacy.
This perspective has fascinating implications for market research. Many of the conversations I've had recently with colleagues at 2CV about their experience of using AI-assisted analysis have been very positive, but with an edge of concern over the risks of over-reliance.
Although AI often gets things right, it also gets things wrong: it oversimplifies, it misinterprets, it misses tone of voice, it misplaces emphasis. The trouble is that its conclusions can often look right and sound right without actually being right.
Spotting these inaccuracies, misinterpretations and overlooked nuances requires in-depth familiarity with the source material and the experience and confidence that often comes with years of working in research. A key concern I hear from experienced researchers is their fear of junior colleagues becoming over-reliant on the summaries and reportage generated by the AI tools we use.
The concept of intimacy as the most effective weapon for influence is a fascinating lens to put on this issue. Are we more likely to believe a research summary when it is articulated to us through dialogue? Are we more likely to drop our analytical guard when the majority of responses from our LLM analysis tool of choice are accurate? What are the risks of AI-assisted analysis tools shaping the outcomes of our research through biases inherent in their training data?
Although I am a big believer in the power of AI-assisted analysis, we need to remain realistic about its flaws. First, we need to remember that the AI is almost always working with an imperfect dataset. Take qualitative research as an example: poor audio quality creates transcription errors, visual stimulus can produce abstract references, and body language and tone of voice are absent from the data (or poorly captured in it). Second, AI is an imperfect analyst: it lacks true understanding of human behaviour and physical space, and it doesn't even understand the words it parrots back to us.
All of this underlines the importance of effective mechanisms for efficiently and accurately interrogating the raw data fed into these machines alongside any generated summary. Without referencing the raw data, we can lose sight of reality. Josh Seltzer and Kathy Cheng at Nexxt Intelligence recently spoke about the concepts of data and insight provenance: we should always be asking not only ‘how was this data captured?’, but also ‘how were these conclusions reached?’.
This is such an important point when it comes to AI-assisted analysis: we need to make sure we aren't relying on AI to draw our conclusions for us, and that we aren't taking it as read that every summary it generates is accurate. Intimacy is a two-way street: just as AI grows more and more familiar with us and learns our flaws and foibles, we will do the same with it.
In the same way we build intimacy with our colleagues and learn to understand how they work, we will build a similar understanding of AI's strengths and weaknesses.
Although I understand Yuval's concerns about AI building intimacy at a macro, societal level, I would hope that through intimacy we develop a more mature relationship with AI, one that is realistic about both its benefits and its flaws.