What Is A/B Testing?
A/B testing (also called split testing) is a controlled experiment where two versions of a web page, email, or element (A and B) are shown to different segments of users simultaneously to determine which version drives better results. Version A is typically the current design (control) and Version B is the variation with one change. Statistical analysis of the results determines which version is more effective.
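The core mechanic is random but stable assignment: each user lands in the same variant on every visit. A minimal sketch of deterministic bucketing (the function name and hashing scheme are illustrative, not any specific tool's API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'.

    Hashing user_id together with the experiment name keeps the
    assignment stable across visits and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "A" if bucket < split else "B"

# The same user always sees the same variant of a given experiment:
assert assign_variant("user-42", "headline-test") == assign_variant("user-42", "headline-test")
```

Hashing (rather than storing a random choice) means no database lookup is needed at page-render time, which is why many testing tools work this way.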
What to A/B test
High-impact elements to test: headline copy (biggest single factor for engagement), CTA button text and colour, hero image vs video, form length (fewer fields = higher completion but lower lead quality), pricing page layout, social proof placement, page length, and checkout flow steps. Test one variable at a time to isolate the cause of any change in conversion rate.
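The one-variable-at-a-time rule can even be checked mechanically. A small sketch, assuming a hypothetical page config represented as a dictionary:

```python
def changed_fields(control: dict, variant: dict) -> list:
    """Return the keys whose values differ between two page configs.

    A clean A/B test changes exactly one of these; more than one and
    you cannot tell which change moved the conversion rate.
    """
    keys = control.keys() | variant.keys()
    return sorted(k for k in keys if control.get(k) != variant.get(k))

control = {"headline": "Build pages fast", "cta": "Start free", "hero": "image"}
variant = dict(control, headline="Ship pages in minutes")

assert changed_fields(control, variant) == ["headline"]
```

If the list comes back with more than one key, the experiment is really a redesign comparison, not an A/B test of a single element.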
Statistical significance
A/B test results need statistical significance before you act on them, typically at a 95% confidence level. Reaching 95% confidence means that if the two versions actually converted at the same rate, a difference as large as the one observed would appear by chance less than 5% of the time; it is not a 95% probability that B is better. Most A/B testing tools calculate this automatically. A common mistake is ending a test early at the first positive result: you need enough traffic and time (usually 2–4 weeks) to reach significance. Small sites with low traffic should prioritise qualitative research over A/B testing.
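Under the hood, the significance check for conversion rates is usually a two-proportion z-test. A minimal sketch using only the standard library (real tools also handle sequential peeking and multiple comparisons, which this does not):

```python
from math import sqrt
from statistics import NormalDist

def ab_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test.

    Returns the probability of seeing a conversion-rate gap at least
    this large if A and B truly convert at the same rate. p < 0.05
    corresponds to the usual 95% confidence threshold.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: 200/1000 conversions on A vs 250/1000 on B
p = ab_p_value(200, 1000, 250, 1000)
```

This also shows why low-traffic sites struggle: shrink both samples to 100 visitors each and the same 20% vs 25% gap is nowhere near significant.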
A/B testing tools
Google Optimize was discontinued in 2023. Current options: VWO (vwo.com), a full-featured CRO platform; Optimizely, enterprise-grade; AB Tasty, mid-market; PostHog, open source, with feature flags and A/B testing built in; Vercel Edge Config plus Middleware, for technical teams running server-side experiments; Netlify Split Testing, simple A/B tests for static sites.
A/B Testing & Canvas Builder
Canvas Builder is an ideal tool for A/B testing — generate multiple page variants from different prompts and deploy them as test variants. The speed of generation (3 minutes vs days) makes rapid iteration practical.
Try Canvas Builder →