When analyzing your data, it’s common to say that there’s a “significant difference” between results. But in statistics, significant doesn’t mean “big” or “important”; it just means that the difference is unlikely to be caused by random variation.
In this article, we’ll explain what statistical significance means, and when it’s useful to test it.
For any statistical data, there is a margin of error. In other words, your data will never be exact (unless you can survey the entire population); it is an approximation. The margin of error is the range within which the results should fall if you repeated the same study several times. The larger the sample size (the number of participants), the smaller the margin, and the more precise your results.
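To make the sample-size effect concrete, here is a minimal Python sketch (not part of the original article; the function name is ours) of the standard margin-of-error formula for a proportion at the 95% confidence level:

```python
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion.

    p: observed proportion (e.g., 0.5 for 50%)
    n: sample size (number of respondents)
    z: critical value; 1.96 corresponds to 95% confidence
    """
    return z * sqrt(p * (1 - p) / n)

# The larger the sample, the smaller the margin:
print(margin_of_error(0.5, 100))   # ~0.098 (about ±9.8 percentage points)
print(margin_of_error(0.5, 1000))  # ~0.031 (about ±3.1 percentage points)
```

Note how a tenfold increase in sample size only shrinks the margin by a factor of roughly three (the square root of ten), which is why precision gets expensive fast.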
The purpose of a significance test is to see if the change between samples (e.g., two segments or two different time periods) falls within the margin of error. If it does, the change you see is part of the normal variation. If it does not, there is a real difference between the samples. This difference could be caused either by changes to the population (e.g., the website is attracting different visitor groups) or by changes to the study (e.g., a reduced trigger time means you are capturing a wider segment of the visitors). Note that the significance test does not tell you how big the difference is.
If your web survey is meant to help you understand your visitors’ experience (what they think, feel, and struggle with), the most important thing is what people say, not whether every number is “statistically significant.” Focusing too much on significance can make you miss the bigger story in your results.
For example, if 10 high-value leads say they couldn’t figure out how to contact sales, your first thought probably shouldn’t be whether that result is statistically significant. 
That said, significance tests are handy for checking whether a change is meaningful or just part of normal variation. Say your latest weekly report shows a 4% drop in private customers from Campaign X. Stakeholders start to panic: wasn’t Campaign X supposed to increase private customers?! Before spending hours looking for answers, run a quick significance test. It might just be noise, as it is in this case.
[Figure: survey results for the question “In what role are you visiting our website?”]
Significance tests are especially useful when you want to compare two groups or segments and plan to take action based on the results. If you’re reorganizing the website, adjusting messaging, or reallocating resources based on your data, you want confidence that the observed change represents a real shift in your audience, not just random noise.
Some common use cases where a significance test is useful:
You’ve run two versions of a page and collected user data for each. A significance test helps you confirm whether one version truly performs better, and lets you see if the conversion rate on version B is significantly higher than on version A.
Sometimes you’ll notice that the proportion of responses to a certain survey question shifts over time, like if the percentage of respondents selecting “Potential customer” under “What is your role?” increases from one month to the next.
You’ve rolled out an update and want to know whether it actually made a difference. When comparing the data from before and after, a significance test can show if the change is statistically meaningful rather than just random fluctuation.
You can use our Z-score calculator for two proportions below to check whether the difference between two groups is statistically significant.
Simply enter:
Group 1 - number of responses (or proportion)
Enter how many respondents in Group 1 chose the answer or option you’re analyzing. If 120 out of 800 people said “Yes,” enter 120. You can also enter a proportion, like 0.15 for 15%.
Group 1 - sample size
Enter the total number of respondents in Group 1 who answered the question.
Group 2 - number of responses (or proportion)
Enter how many respondents in Group 2 gave the same answer or option. If 90 out of 850 said “Yes,” enter 90 (or 0.106 for 10.6%).
Group 2 - sample size
Enter the total number of respondents in Group 2 who answered the question.
The calculator will then show whether the difference between the two groups is likely due to chance or represents a real change.
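If you’d rather run the numbers yourself, the same two-proportion z-test can be sketched in a few lines of Python. This is a minimal illustration using the example figures from the input fields above; the function name is ours and not part of the calculator:

```python
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-proportion z-test: is the difference between two
    sample proportions likely to be more than random variation?

    x1, x2: number of responses in each group
    n1, n2: sample size of each group
    Returns the z-score and the two-tailed p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    # Pooled proportion under the null hypothesis (no real difference)
    p = (x1 + x2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 120 of 800 in Group 1 vs. 90 of 850 in Group 2 said "Yes"
z, p = two_proportion_z_test(120, 800, 90, 850)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these numbers the p-value comes out below 0.05, so at the conventional 5% level the difference between 15% and 10.6% would be considered statistically significant rather than chance.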