To Be or Not To Be: Are Your Test Results Significant?

By eleventy marketing group

This is a great question to ask, particularly if you’re a marketing representative or an account manager trying to determine whether your prospects will respond to your organization’s nonprofit fundraising appeal with or without a premium. If they prefer it with the premium, it’s going to cost additional money (bad! very bad!). If they respond positively without a premium, it will cost less money (awesome!). You already know that your competitors are contacting your prospects with similar appeals or similar products. With you and your CFO both being cost-sensitive, offering more expensive premiums just to improve results seems plain absurd. Your preference, of course, is that your prospects respond favorably without needing to be sent their 30th set of address labels this year. All of this feeds into your pricing scheme and impacts your ROI.

So, you do the most logical thing—conduct a survey.

You ask 50 randomly selected individuals from your prospect list whether they are more likely to respond to the appeal with a premium or without the costly premium. When the results come in, you are jumping for joy: only 20 of the respondents (40%) would prefer to get yet another refrigerator magnet (because they don’t have enough already), while 30 of them (60%) would rather save space on their refrigerator for their child’s artwork.

You and your CFO are as happy as can be, until the team at eleventy analyzes your data and tells you not to pop the bubbly just yet, because there’s a better than 15% probability that your results could have happened by chance. eleventy therefore recommends that you conduct another survey and suggests you randomly select another 100 prospects. With the larger sample size, the probability that the results happened by chance is reduced to less than 3%, and the results are a better representation of the audience as a whole.

***WARNING*** TECHNICAL ZONE***WARNING*** TECHNICAL ZONE*** WARNING***

This section of the article contains technical references that statisticians such as ourselves find exciting to discuss. Those readers with a strong aversion to t-scores, z-scores, probability, sampling and statistics in general should immediately avert their eyes and move to the next section.

eleventy ran a one-sample t-test on your initial results and found that the difference is not statistically significant. For general purposes, a t-score is similar to a z-score, but it is particularly helpful for small samples, and t gets very close to z as the sample size increases. The report from eleventy looked something like this: with a rejection threshold, or “critical alpha,” of 0.05, t(49) = 1.443, p = 0.155, which may be interpreted as a 15.5% chance of seeing a difference this large even if your prospects had no real preference either way. eleventy goes on to inform you that you will typically want a p-value below 0.05 for your survey results to count as statistically significant.
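If you want to check eleventy’s arithmetic yourself, the reported t-score can be reproduced in a few lines of Python. This is an illustrative sketch, not eleventy’s actual code; it assumes the standard error was computed from the observed proportion, sqrt(p̂(1 − p̂)/n), which is what matches the t-score above. The p-value then comes from a t distribution with n − 1 = 49 degrees of freedom.

```python
import math

# First survey: 20 of 50 prospects prefer the premium.
# Null hypothesis: no preference, i.e., a 50/50 split.
n = 50
p_hat = 20 / n          # observed proportion: 0.40
p_null = 0.5            # hypothesized "no preference" proportion

# Standard error from the observed proportion (assumed; matches the article's numbers).
se = math.sqrt(p_hat * (1 - p_hat) / n)

# t-score: how many standard errors the observed proportion sits from the null.
t = abs(p_hat - p_null) / se

print(round(t, 3))  # 1.443 -- a t table with 49 df gives the p = 0.155 reported above
```

Since p = 0.155 is well above the critical alpha of 0.05, the 40%/60% split in a sample of only 50 is not convincing evidence on its own.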

In the survey of 100 randomly selected prospects, you find that only 39% of the respondents prefer the appeal with a premium, while 61% prefer it without. Once again, eleventy runs the numbers, this time with the larger sample, because t-tests are sensitive to sample size, and reports the findings: with a critical alpha of 0.05, t(99) = 2.255, p = 0.0263. Because 0.0263 is well below the rejection threshold of 0.05, the results are statistically significant: there is less than a 3% probability that a difference this large would appear by chance alone.
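The same arithmetic, wrapped in a small helper, shows why the larger sample matters: the standard error shrinks as n grows, so the same-sized gap from 50/50 produces a bigger t-score. Again, this is a sketch under the same assumption about the standard error (observed-proportion form), not eleventy’s actual code.

```python
import math

def t_score(successes, n, p_null=0.5):
    """t-score for an observed proportion versus a hypothesized one,
    using the observed-proportion standard error sqrt(p_hat*(1-p_hat)/n)."""
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return abs(p_hat - p_null) / se

# Survey 1: 20 of 50 prefer the premium -> not significant at alpha = 0.05.
print(round(t_score(20, 50), 3))   # 1.443 (p = 0.155 with 49 df, per the article)

# Survey 2: 39 of 100 prefer the premium -> significant at alpha = 0.05.
print(round(t_score(39, 100), 3))  # 2.255 (p = 0.0263 with 99 df, per the article)
```

Note that the observed proportions barely moved (40% vs. 39%); it is the doubled sample size, not a change in preference, that pushed the result past the significance threshold.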

***ALL CLEAR***END TECHNICAL ZONE***ALL CLEAR***END TECHNICAL ZONE***

So this is GREAT news—you now know that the results of your survey are statistically significant and show that the majority of your prospects prefer the offer without a premium. So now it’s time to open the bubbly…but wait…is it?

Nope, not quite. In the next installment, we’ll tell you why statistically significant findings, while comforting, aren’t the final word. What you’re really after is practical significance, and we’ll discuss that, along with statistical power, next time.

CREDIT: James Moran / Ken Will