How to impress in a research debrief: Two questions you should never ask and three you should

The scene is set.  A mosaic of twenty faces stare into the Zoom abyss, hoping to heaven that the next hour will be more interesting than the team meeting they’ve just endured.  Suddenly, someone senior makes their tardy entrance (implicitly underlining their importance) and, like a domino cascade in reverse, the twenty faces sit more upright: there’s an opportunity to impress someone important!

But looking good isn’t enough.  How can you sound good by asking a question that will impress your boss and add genuine value to the meeting?  And what might you say that could devalue your corporate currency?

As a consumer psychologist with thirty years of research debrief experience, I’m here to help!

Two Bad Questions

1. What was the sample size?
Statistical theory is all well and good, but are you asking about things people can reliably answer?  If you’ve understood anything about behavioural economics / behavioural science / real life, it should be that people don’t have access to the mental processes that drive a large proportion of their behaviour.  The number of people you ask doesn’t matter until you can be confident that their responses aren’t driven by the illusion of conscious will or inaccurate post-rationalisation.
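To see why sample size is the wrong first question, here is a minimal sketch (the numbers and names are entirely hypothetical) of the underlying statistics: a larger sample shrinks the random error around whatever people say, but does nothing to close a systematic gap between what they say and what they do.

```python
import random

random.seed(42)

TRUE_BUY_RATE = 0.30     # hypothetical share who would actually buy
REPORTING_BIAS = 0.25    # hypothetical inflation from post-rationalised answers

def stated_intent_share(n):
    """Simulate n survey answers, each inflated by the same systematic bias."""
    yes = sum(random.random() < TRUE_BUY_RATE + REPORTING_BIAS for _ in range(n))
    return yes / n

for n in (50, 500, 50_000):
    print(f"n = {n:>6}: stated intent = {stated_intent_share(n):.3f} "
          f"(actual behaviour = {TRUE_BUY_RATE})")
```

Run it and the stated figure settles ever more confidently on 0.55, however large n gets.  Precision about a biased answer is still a biased answer.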

2. What do respondents want / want us to do?
If you haven’t (very carefully) tested a scenario, what respondents think they want is unlikely to be much of a clue to anything.  People are hopeless at factoring context, loss aversion, time preferences and habits into predictions of their own future behaviour.  What they claim to want when they’ve been made to focus on a topic in a particular way for the purposes of research is not dependable.

Three Much Better Questions

1. How psychologically valid is this research?
Use the AFECT criteria to gauge how much confidence you should have in what you’re hearing (and then overlay statistical criteria if appropriate):

  • Is it an Analysis of behaviour? 
  • Were respondents in the Frame of mind that matches the one they’d be in when interacting with the product or service? 
  • Was the Environment representative of where this behaviour occurs at present?
  • Was the focus of the research Covert? 
  • Was the Timeframe in which responses were captured realistic? 

The more of these you can say ‘Yes’ to, the more confident you can be that what you’re hearing is true.
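If it helps to make the checklist concrete, here is a minimal sketch of an AFECT scorecard; the field names and the simple count of ‘Yes’ answers are illustrative choices for this article, not a formal scoring scheme.

```python
from dataclasses import dataclass, fields

@dataclass
class AfectCheck:
    """One yes/no flag per AFECT criterion (field names are illustrative)."""
    analysis_of_behaviour: bool  # A: based on observed behaviour, not claims?
    frame_of_mind: bool          # F: mindset matched the real interaction?
    environment: bool            # E: setting matched where the behaviour occurs?
    covert: bool                 # C: was the true focus of the research concealed?
    timeframe: bool              # T: were responses captured over a realistic timeframe?

    def yes_count(self):
        return sum(getattr(self, f.name) for f in fields(self))

study = AfectCheck(analysis_of_behaviour=False, frame_of_mind=True,
                   environment=False, covert=True, timeframe=True)
print(f"AFECT: {study.yes_count()}/5 criteria met")
```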

2. What behavioural data could we reference to back this up?
Consumer behavioural data doesn’t tell us why something happens (the why has to be inferred), but, unlike attitudinal or subjective measures, it is usually true, subject to how it was captured.  It’s worth exploring what behavioural data exists that could be reinterrogated in light of the apparent insights emerging from the research, to help validate what you’ve heard.
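As a sketch of what that reinterrogation might look like in practice (the tables, column names and 20% threshold below are all hypothetical), one could line up what respondents claimed against what the logs show:

```python
import pandas as pd

# Hypothetical data: survey claims alongside purchase logs
claims = pd.DataFrame({"customer_id": [1, 2, 3, 4],
                       "says_buys_organic": [True, True, True, False]})
logs = pd.DataFrame({"customer_id": [1, 2, 3, 4],
                     "organic_share_90d": [0.05, 0.40, 0.02, 0.00]})

merged = claims.merge(logs, on="customer_id")
# Treat an organic share above 20% as behavioural confirmation of the claim
merged["behaviour_agrees"] = merged["organic_share_90d"] > 0.20
match_rate = (merged["says_buys_organic"] == merged["behaviour_agrees"]).mean()
print(f"Claims confirmed by behaviour: {match_rate:.0%}")
```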

3. How might answers have been primed by the sequence of questions?
It’s extremely easy to inadvertently prime people in research.  Often, we’re exploring things about which people hold no strong beliefs.  In these circumstances, there is overwhelming evidence that where people begin their mental journey shapes what they end up saying.  This is not the same issue as leading questions: a question might be perfectly balanced, yet the answer it elicits can still bias the research, because it serves as a prime for a subsequent response.
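A standard defence is to rotate the question order across respondents and check whether the answers move.  The toy simulation below (with a made-up priming effect and made-up question names) shows the fingerprint an order effect would leave:

```python
import random

random.seed(1)

def interview(order):
    """Hypothetical respondent: being asked about price first dents brand trust."""
    answers = {}
    for question in order:
        if question == "price_fairness":
            answers[question] = random.gauss(4.0, 1.0)
        else:  # "brand_trust", primed downwards if price came up first
            primed = "price_fairness" in answers
            answers[question] = random.gauss(5.5 - (1.0 if primed else 0.0), 1.0)
    return answers

for order in (["price_fairness", "brand_trust"], ["brand_trust", "price_fairness"]):
    scores = [interview(order)["brand_trust"] for _ in range(2000)]
    print(f"{' -> '.join(order)}: mean brand_trust = {sum(scores) / len(scores):.2f}")
```

If the two means differ, the sequence is doing some of the work, not the attitude.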

At Shift, we developed the AFECT criteria to help people gauge the likely accuracy of any market research.  We use this expertise when we design research to ensure it’s as psychologically valid as possible.