The Column Significance Test Nobody Wants to Do by Hand
If you've worked with cross-tab reports, you've seen the letter annotations. Column A is significantly different from Column B at the 95% level. Those letters come from column-proportion z-tests, and calculating them manually across a 50-question survey with 6 banner columns is the kind of work that makes people quit research.
SPSS does it. So does Q Research Software. But both require licenses, data setup, and syntax knowledge. For a quick check on whether two numbers are significantly different, you shouldn't need to fire up a $1,200 software package.
How the tester works
Enter your group labels, sample sizes, and observed percentages. The tool runs pairwise z-tests across all groups and returns p-values, significance flags (using the standard letter notation), Cohen's h effect size, and statistical power for each comparison.
The effect size matters because significance alone doesn't tell you if the difference is meaningful. With a large enough sample, a 1-point difference can be statistically significant. Cohen's h tells you whether the difference is small (h ≈ 0.2), medium (h ≈ 0.5), or large (h ≈ 0.8) by Cohen's conventional thresholds. Most market research differences fall in the small-to-medium range, which is useful context for clients who see a significant difference and assume it's automatically important.
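Cohen's h is just the difference between arcsine-transformed proportions: h = 2·asin(√p₁) − 2·asin(√p₂). A sketch, with an illustrative example showing how modest even a large-looking percentage gap is on this scale:

```javascript
// Cohen's h effect size for two proportions (entered as 0-100).
function cohensH(pct1, pct2) {
  const phi = p => 2 * Math.asin(Math.sqrt(p / 100)); // arcsine transform
  return Math.abs(phi(pct1) - phi(pct2));
}

// Cohen's conventional thresholds: 0.2 small, 0.5 medium, 0.8 large.
function labelEffect(h) {
  if (h < 0.2) return "negligible";
  if (h < 0.5) return "small";
  if (h < 0.8) return "medium";
  return "large";
}

// Even a 15-point gap (50% vs 35%) lands in the "small" band:
console.log(cohensH(50, 35).toFixed(3), labelEffect(cohensH(50, 35)));
```

Unlike the z-score, h is independent of sample size, which is exactly why it's the right companion to the significance flag.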
The power analysis tells you the probability that you'd detect the difference if it were real. Useful for the reverse question: "We didn't find a significant difference. Is that because there isn't one, or because our sample was too small to detect it?"
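One common way to approximate that power is from Cohen's h and a harmonic-mean sample size under the normal approximation; the post doesn't show the tool's internals, so treat this as an assumed sketch (fixed at two-tailed α = 0.05), not its actual implementation:

```javascript
// Standard normal CDF via the Abramowitz–Stegun approximation.
function normCdf(z) {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp(-z * z / 2);
  const p = d * t * (0.3193815 + t * (-0.3565638 +
            t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

// Approximate power of the two-proportion z-test at two-tailed
// alpha = 0.05, using Cohen's h and harmonic-mean n scaling.
function approxPower(pct1, n1, pct2, n2) {
  const phi = p => 2 * Math.asin(Math.sqrt(p / 100));
  const h = Math.abs(phi(pct1) - phi(pct2));
  const zAlpha = 1.959964;                    // critical z for alpha = 0.05
  const nEff = (n1 * n2) / (n1 + n2);         // harmonic-mean scaling
  return normCdf(h * Math.sqrt(nEff) - zAlpha);
}

// The 45% vs 38% example (n=300 vs n=280) has only ~40% power,
// so a non-significant result there is weak evidence of "no difference".
console.log(approxPower(45, 300, 38, 280));
```

That ~40% figure is the answer to the reverse question above: with these samples, a real 7-point difference would go undetected more often than not.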
Tech stack
React 18.2 + Babel CDN. Standard column-proportion z-test formula with continuity correction. Cohen's h for effect size. Power calculation using non-central distribution approximation. No external stats libraries. The math is implemented directly in JavaScript.
Try the Cross-Tab Significance Tester →