Create an email A/B test
1. Choose an email campaign for A/B testing that is large, has a high potential for conversions, and is long-running.
Any email campaign can benefit from A/B testing, but the campaign should be significant enough to warrant the time and money that go into the process. For instance, an order-confirmation email wouldn't benefit from A/B testing, but the email that convinces the customer to buy the product or service would. A good candidate for A/B testing has:
- A large enough mailing list, preferably 1,000 subscribers or more.
- A high potential for conversions or ROI if the campaign succeeds.
- A long enough run time to test and refine through the iterative process of A/B testing.
2. Assemble a test group large enough to produce reliable data. One hundred recipients is the bare minimum; several thousand is best for accurate results.
For an A/B test to be successful, the number of emails sent must be large enough to produce reliable data. Split the list into two equal groups: one group receives version A, the other version B. Multivariate testing requires larger test groups because it produces more variants of the email. For example, changing both the CTA and the subject line yields four versions of the email, each needing its own test group. In general, testing one variable at a time is considered best practice for the most reliable results.
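The split into two equal groups can be sketched in a few lines of Python. The addresses below are placeholders; shuffling before splitting avoids bias from how the list happens to be ordered:

```python
import random

def split_test_groups(subscribers, seed=42):
    """Randomly split a mailing list into two equal-sized test groups.

    Shuffling first avoids bias from the list's existing order
    (e.g. by signup date or region).
    """
    rng = random.Random(seed)
    shuffled = subscribers[:]        # copy so the original list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]  # group A, group B

# Placeholder mailing list of 1,000 subscribers
mailing_list = [f"subscriber{i}@example.com" for i in range(1000)]
group_a, group_b = split_test_groups(mailing_list)
```

A fixed seed makes the assignment reproducible, which helps when the same list must be re-split for follow-up tests.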
3. Choose the variable you will test. The two most commonly tested variables in email campaigns are the subject line and the CTA.
To find a variable, start with a hypothesis. For example: because the product being sold is personal in nature, a more personal leading picture should appeal to the emotional side of the purchase decision and drive more purchase-intent traffic to the website. Here are other variables to test:
- Images: testing images helps determine what kinds of pictures drive conversion, for instance a picture of the product vs. a picture of what the product empowers the consumer to do.
- Content: changes can range from small, like switching the hook, to large, like sending two entirely separate emails.
- Timing: the timing of the A/B test is a critical factor. Test windows are typically a few hours at most, and testing at different times of day will produce different results.
- Sender: senders can be professional, personal, corporate, inconspicuous, and more. Choosing the right sender does a lot to establish the campaign's voice and affect clicks.
4. Produce your variant email. For whatever variable you choose to test, produce two emails that are otherwise the same.
Using the previous example of the photo hypothesis, choose another photo, one that evokes the emotional impact of the product instead of its use. Using the original email as the control lets you gather data on your hypothesis. Isolating variables helps you understand what caused the change in results, a critical step toward replicating them.
5. Determine how you will decide the winner, in line with the goals of the campaign.
If the campaign aims to improve email open rates, then tracking how many opens each variation of the email earned would determine the winner. Consider the health of your funnel: for example, don't chase short-term gains in open rates with misleading subject lines at the cost of long-term trust. In general, measure as far down the funnel as possible to get the most accurate data.
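One common way to decide the winner once the data is in is a two-proportion z-test on the chosen metric. This is a minimal sketch with made-up illustrative numbers, not data from any real campaign:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B.

    Returns (z, p_value). A small p-value (e.g. < 0.05) suggests the
    observed difference is unlikely to be random noise.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value
    return z, p_value

# Hypothetical results: 120/2000 conversions for A vs. 160/2000 for B
z, p = two_proportion_z_test(120, 2000, 160, 2000)
```

With these numbers the p-value comes in under 0.05, so version B's higher rate would be treated as a real win rather than chance; with smaller lists the same gap often fails the test, which is why step 2 asks for a large enough group.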
6. Collect the data on your winning condition. Repeat the test with a different variable, like time of day sent.
If the subject line is being tested, look at the relevant data: how well each variation enticed customers to click, how much traffic it drove to the page, and how much revenue that difference in performance generated. Then determine which email won and whether your hypothesis was correct. A/B testing works best when repeated, and changing the variable tested is particularly helpful. For instance, a test run between 8 A.M. and 10 A.M. will produce different data than one run between 3 P.M. and 5 P.M. Trying different timings can reinforce the results if the same email continues to outperform the other, and tracking the volume of conversions by the time the test was conducted can reveal when to send out the emails. A new variable may improve the email further. If both emails are underperforming, keep testing the same variable. The factors that drive conversion are highly personalized and may depend on many unknown or unanticipated factors.
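Tracking conversion volume by send window, as described above, can be as simple as a tally. The event tuples and window labels here are hypothetical stand-ins for whatever your email platform exports:

```python
from collections import Counter

def conversions_by_send_window(events):
    """Tally conversions by the window the email was sent in.

    `events` is a list of (send_window, converted) pairs,
    e.g. ("8-10am", True) -- a hypothetical export format.
    """
    counts = Counter()
    for window, converted in events:
        if converted:
            counts[window] += 1
    return counts

# Hypothetical data from two send windows
events = [
    ("8-10am", True), ("8-10am", False),
    ("3-5pm", True), ("3-5pm", True),
]
tally = conversions_by_send_window(events)
```

Comparing the tallies across repeated tests shows which send windows consistently convert best.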