A/B Testing


A/B testing, at its simplest, means randomly showing a respondent one version of a design or page — Version A or Version B — and tracking changes in behavior based on which version they saw. Version A is normally your existing design (“control” in statistics lingo), and Version B is the “test,” with one copy or design element changed.[1]

In a “50/50 A/B split test,” each version is shown to a randomly selected half of respondents. A classic example is comparing the conversions that result from serving version (A) against version (B), where the two versions display different headlines.
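As a minimal sketch of how a 50/50 split might be implemented, the snippet below assigns each visitor to version A or B. It hashes a (hypothetical) user ID rather than flipping a coin on every visit, a common practice that keeps a returning visitor in the same bucket; the `experiment` label and user IDs are illustrative assumptions, not part of the source.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "headline-test") -> str:
    """Deterministically assign a visitor to version A or B (50/50 split).

    Hashing the user ID together with an experiment label keeps the
    assignment stable across repeat visits while still splitting
    traffic roughly in half.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Over many visitors, roughly half land in each bucket.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"user-{i}")] += 1
print(counts)
```

Because the assignment is a pure function of the user ID, the same visitor always sees the same version for the lifetime of the experiment.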

A/B tests are commonly applied to many forms of copy testing (including digital tests of clicked-on ad copy and landing page copy) to determine which version drives the more desired result.[2]
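Deciding which version “drives the more desired result” typically comes down to comparing the two conversion rates and checking whether the difference is larger than chance would explain. The sketch below uses a standard two-proportion z-test for that comparison; the visitor and conversion counts are hypothetical numbers chosen for illustration, not from the source.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare two conversion rates with a two-proportion z-test.

    Returns the z statistic and a two-sided p-value: a small p-value
    suggests the difference between A and B is unlikely to be noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 120/2000 conversions on A vs. 160/2000 on B.
z, p = two_proportion_z(120, 2000, 160, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative counts, B converts at 8% versus A's 6%, and the p-value falls below the conventional 0.05 threshold, so a tester would typically declare B the winner.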



  1. ^ SEMPO, SEM Glossary.
  2. ^ American Marketing Association, AMA Dictionary.
