To end this year I’d like to publish an article from a good friend of mine, Siamack Salari. He describes a situation with a client that sometimes happens to me too: the client wants to be part of the research phase but can’t actually interpret what they see. Read and enjoy:
This particular client was/is based in Geneva. That’s all I’m going to say about them. OK one more clue: they’re not a tobacco company. “Siamack”, they briefed me, “We want to use your app to get inside people’s shoes and generate new insights around issue X”.
Here is a client, I remember thinking, who finally gets the idea that investing time in sitting back, watching consumers and letting things unfold in real time will yield powerful learnings impossible to obtain any other way. Conversations, decisions, purchases, discoveries, nearly doing things, and more were soon to be captured and viewed as they occurred, in real time, providing a mass of rich understanding.
So we recruited 20 carefully screened households in five markets and set them a few very loose tasks which didn’t focus on any activity in particular but did ensure plenty of everyday-life content. We certainly didn’t want to filter responses by asking pointed questions.
What I really loved about this study was that the client didn’t even want our help. They, alone, were going to look in on the entries, moderate them and extract insights with no handholding whatsoever. Great! I thought to myself.
Two weeks elapsed with me occasionally checking in on the entries to ensure participants weren’t slowing down their rate of capture and sending. Then I received a call. And I’m summarising a 20-minute conversation:
“Siamack, we as a team didn’t really feel we learned anything new…”
I remember being so shocked at this sentiment that I held the phone away from my ear and looked at it like it was the client’s face.
“But wait, were you expecting participants to do/say insightful things you had never seen/heard before?” The answer was a sort of reluctant yes.
I had to explain again (all this was already in the proposal) that all they were going to capture was the mundane, everydayness of life. And that their job was to reframe these ordinary events in fresh new ways. I even walked them through a framework for analysis, a set of thinking tools and a bunch of enquiries that would lead to springboards. I didn’t go as far as trying to explain grounded theory. But they seemed to get the exercises I needed them to do in order to disentangle at least three powerful, disruptive learnings.
Except they didn’t.
“Insights aren’t going to jump out of these films. They won’t even come out of the replies to your questions…” I tried to explain. But it was useless. One problem was that they had treated the entire exercise a bit like a visit to the zoo: in other words, hoping to be entertained by something new. The other, perhaps more critical, problem was that they didn’t have time to go through the content as they needed to: to generate probes, work through the replies, add meanings, etc.
We did offer to help in the proposal, but they wanted to go it alone. And I, naively, overestimated what they could achieve.
Any researcher reading this will know how terrifying it is to hear a client say, “we didn’t learn anything new”. But what happened with this study was the equivalent of someone sitting in the driver’s seat of a car and then expecting it to drive them to a destination. The car was a Ferrari too, but no one knew how to drive it or even felt they needed to drive it.
Lesson? Always make sure the client wanting to generate insights is capable of generating insights. Because insights don’t come from the qualitative data itself. They come from seeing the ordinary in extraordinary new ways. And there are semi-formalised methods for getting to them.