What is A/B Testing?
An A/B test compares the performance of two variations of an item against each other. In product management, A/B tests are often used to identify the best-performing option. For example, two variations of a new user interface could be tested, and, in this case, the variation that receives the most user engagement would win the A/B test.
An A/B test is used to determine which version or variant of something will perform more effectively in the market. This strategy is commonly used by marketing and advertising professionals, who show multiple versions of an ad, marketing email, or web page to randomly selected users, and then analyze the results. Product managers can also use A/B testing to develop products that will resonate with users.
There are many benefits to using A/B tests, including:
- Marketers (or product managers) can focus on very specific elements to test
- The results are immediate and easy to analyze
- Unlike surveys, where users’ answers are theoretical, A/B tests measure real engagement with the asset
Why is A/B testing valuable?
With A/B testing (also called split testing or A/B split testing), teams can create true apples-to-apples comparisons of two versions of an asset that differ in only a single element, to ensure their results reflect how actual users respond specifically to that element.
For example, by sending out two entirely different sales emails, a marketing team can learn which of the messages performs better. But that team won’t necessarily know which specific element of the winning email resonated with readers. With an A/B test, that team can send out two nearly identical versions of the email with just a single element changed — the subject line, the call to action, etc. — and learn which of those elements users find more compelling.
If a team continuously employs A/B tests to measure the effectiveness of each element, over time that team will be able to build an asset (advertisement, product, website) that resonates with the company’s user persona.
Why should product managers use A/B testing?
Although it has historically been primarily a tool of marketing and advertising, A/B testing can also help product managers build better products.
With an A/B test, a product manager can experiment by releasing multiple versions of a new feature, layout, or another product element to a randomly selected segment of their user base — and learn which of those versions users respond to most favorably.
How do you run an A/B test?
There are many ways for a product manager to conduct an A/B test. One useful example is the approach offered by Product School, in which A/B testing follows a five-stage process:
Stage 1: Determine the data you’ll be able to capture.
First, determine what types of information you’ll be able to collect and analyze, before building your experiment and running the tests. If you skip this step, you might waste time and resources developing an experiment where you can’t accurately measure the results.
Stage 2: Develop your hypothesis.
Based on the data you know your team will have available, you’ll now want to identify the opportunities for your experiment and formulate a theory about how users will react to a specific element of your product.
For example, you might assume that users will want the steps required to complete a task using your new feature to be ordered in a particular sequence. That’s your hypothesis.
Stage 3: Build your experiment.
Now you’ll want to develop the details of your test. Here your team creates a variant of your planned feature: for example, the same functionality but with the steps sequenced differently.
During this stage, you’ll also need to define the segments of your user base that will receive the variants of your new feature, as well as the metrics you’re going to measure. Will you gauge user preference for one variant based on surveys after your users have had a chance to engage with the product? Or will you base it instead on usage data and, if so, which behaviors will count as a signal of preference?
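The segmentation step above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the user IDs, the experiment name used as a salt, and the 50/50 split are all hypothetical. Hashing each user ID (rather than picking at random on every request) keeps assignment deterministic, so the same user always sees the same variant.

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B"), salt="new-feature-test") -> str:
    """Deterministically assign a user to one of the test variants.

    The salt is a hypothetical experiment name; using a different salt
    per experiment re-shuffles users so the same people aren't always
    grouped together across tests.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Split an illustrative user base of 1,000 users into two segments.
users = [f"user-{i}" for i in range(1000)]
segments = {"A": [], "B": []}
for u in users:
    segments[assign_variant(u)].append(u)
```

Because the assignment is a pure function of the user ID, it can be recomputed anywhere (client, server, analytics pipeline) without storing a lookup table.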
Stage 4: Run your test.
Now it is time to send the different versions of your new feature out to your various user segments and wait to see how the groups respond to each version.
Your team will need to decide for itself how long to run the A/B test, how much data to collect, and so on, because this will vary for each company. The guiding principle is to gather and analyze enough data to be confident you’re working with a statistically significant, representative sample of your user base.
Stage 5: Measure your results.
Finally, you will review the data you’ve collected from your A/B test and determine which of the two features (or layouts, or color schemes, or whatever you are testing) earned the more positive response or the greater degree of engagement from your users.
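A simple way to make that determination, rather than eyeballing the raw counts, is a two-proportion z-test on the engagement rates. The counts below (400 of 4,000 users engaging with variant A versus 460 of 4,000 with variant B) are hypothetical; the test reports how likely a gap that large would be if the two variants actually performed the same.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(engaged_a: int, n_a: int,
                          engaged_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided test of whether variants A and B have equal engagement rates.

    Returns (z, p_value); a small p_value (commonly < 0.05) suggests the
    observed difference is unlikely to be random noise.
    """
    p_a, p_b = engaged_a / n_a, engaged_b / n_b
    p_pool = (engaged_a + engaged_b) / (n_a + n_b)     # rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: 400/4000 engaged with A, 460/4000 with B.
z, p = two_proportion_z_test(400, 4000, 460, 4000)
```

If the p-value clears your significance threshold, the higher-engagement variant wins; if not, the honest conclusion is that this test didn’t detect a difference, and you may need more data.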