Are half your Shopper Research recommendations misleading? 

…the search for Significance

As researchers, we report findings to clients. We have a responsibility to offer considered guidance based on that research (and only that research) rather than guesswork; otherwise, why bother with research at all?

We seek to understand what the real-life shopper wants, so that the commercial outcome we propose will actually succeed in the "real world". That's where the term significance comes from. Sometimes bandied around in research, it means that our results "signify" reality; i.e. they represent something real.

So when we are presenting research numbers we must pay attention to statistical significance. If we want to say that product/display/promotion A is better than B, then we need to know that the difference is not simply an artefact of the normal, expected variation between the different groups of people being asked the question.
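To make that concrete, here is a minimal pure-Python sketch of a two-proportion z-test, using made-up numbers (the 60% vs 50% preference figures are illustrative, not from any real study): it asks whether the A-vs-B gap is bigger than normal sampling variation would produce.

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic and two-sided p-value for the difference pA - pB."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)      # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # Standard normal CDF via erf, so no external libraries are needed
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 90 of 150 shoppers prefer A, 75 of 150 prefer B
z, p = two_proportion_z(90, 150, 75, 150)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these illustrative numbers the p-value lands between 0.05 and 0.10: a ten-point gap on 150 interviews per cell is significant at 90% confidence but not at 95%, which is exactly the kind of nuance a "good looking chart" hides.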

This issue has become even more important now that we are all using online dashboards where users can slice and dice the data to their own requirements. We have data processing systems that can spit out reassuringly good-looking charts at the touch of a button. We are looking for "stories" and "arguments"; our time is pressurised and fast recommendations need to be made, all too often to pre-determined agendas.

But this is no excuse to ignore the issue of sample size and use numbers unwisely. The end result risks business decisions that are weak, or plain wrong. And that undermines research credibility, your credibility, and your company's credibility with your customers.

Don’t get caught out by sample sizes!

There is a free white paper on statistical significance on our website: https://dev.shopperintelligence.com/free-white-papers. But for now, here are my basic user guidelines for getting it right:

  1. Demand that any quantitative chart includes the sample size for every data point. It's not just the total sample that matters, but the sample behind the data on which you are basing your conclusions: for example, the number of shoppers in Asda who are category buyers and who answered question 7.
  2. Ask your researcher for the statistical significance of any reported difference upon which you are making a decision. What level of confidence does it have? (This is a calculated number that depends on the scale of the response and the size of the sample.) They can tell you whether it carries 90% confidence (wrong 1 time in 10 if we repeated the research) or 99% confidence (wrong only 1 time in 100).
  3. Invest in as much sample as you can afford. All too often we spend a fortune in time (money) on analysis and insight development but penny-pinch on what can be a relatively cheap option: some additional interviews.
  4. Train your people in statistics fundamentals before they use data, particularly if they are reporting data to customers.
  5. Demand that your data suppliers restrict the ability of data systems to show unreliable reports based on small samples, and/or ask them to add automated colour coding of significant results. Ask them what their statistical guidelines are.
  6. And finally, never imply any meaning at all when samples drop below 30 (in fact, much below 60 you are unlikely to find much significance; we aim for 100+, ideally 150).
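Those sample-size thresholds in guideline 6 can be sanity-checked with a quick sketch. The snippet below (illustrative only; it uses the standard worst-case assumption of a 50% response and a 95% confidence level) shows how the margin of error on a reported percentage shrinks as the sample grows.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p on a sample of n (z = 1.96)."""
    return z * sqrt(p * (1 - p) / n)

for n in (30, 60, 100, 150):
    print(f"n = {n:>3}: +/- {margin_of_error(n) * 100:.1f} points")
```

At n = 30 the margin is nearly +/- 18 percentage points, so almost no realistic difference between two cells can be called significant; by n = 150 it has tightened to about +/- 8 points, which is why larger bases are worth paying for.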

It's not that I want to make your task any harder; it's just a question of credibility and risk reduction.