Minimize A/B test validity threats

1. Before starting, gather your team to brainstorm a list of technical and environmental factors that could corrupt your test.

This exercise educates team members and involves them in monitoring for unexpected test pollutants.

2. Integrate your A/B testing tool with Google Analytics and verify that the revenue numbers match up.

If the figures differ by more than 2x, do not proceed with testing until your integration and setup are correct.
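A quick sanity check can flag such a mismatch before launch. The sketch below assumes you can export a revenue total from both your testing tool and Google Analytics; the figures and the 2x threshold are illustrative:

```python
def revenue_discrepancy_ratio(tool_revenue, ga_revenue):
    """Return how many times larger the bigger figure is than the smaller."""
    if min(tool_revenue, ga_revenue) <= 0:
        raise ValueError("Revenue totals must be positive to compare.")
    return max(tool_revenue, ga_revenue) / min(tool_revenue, ga_revenue)

def integration_ok(tool_revenue, ga_revenue, max_ratio=2.0):
    """Flag the setup as broken when the figures differ by more than max_ratio."""
    return revenue_discrepancy_ratio(tool_revenue, ga_revenue) <= max_ratio

# Hypothetical daily totals from the A/B tool and Google Analytics:
print(integration_ok(10_500, 9_800))   # close figures -> True
print(integration_ok(30_000, 9_800))   # ~3x apart -> False, fix setup first
```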

3. Reduce your site flicker to 0.0001 seconds to ensure your visitor does not see the control before the treatment loads.

Optimizing your site for speed helps to ensure your test is valid.

4. Conduct quality assurance reviews on every device type, operating system, and browser, checking for improperly displayed or broken treatments.

For example, a treatment may work well on an iPhone but render incorrectly on Android.

5. Run your test for as long as necessary to reach your pre-determined sample size.

For example, stopping a test as soon as it shows 90% significance, before reaching the planned sample size, invalidates the results.
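A pre-determined sample size can be estimated with the standard two-proportion power formula. This sketch uses only Python's standard library; the baseline conversion rate and the 20% relative lift target are hypothetical, and 95% significance with 80% power are assumed defaults:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift `mde`
    over conversion rate `baseline` (two-sided z-test)."""
    p1 = baseline
    p2 = baseline * (1 + mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Hypothetical: 3% baseline conversion rate, 20% relative lift target.
print(sample_size_per_variant(0.03, 0.20))
```

Decide this number before launch, and let the test run until each variant reaches it.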

6. Run tests in full-week increments so you capture data from every day of the week and every time of day.

Be wary of running tests during the holidays, as the results will only be relevant to that season.

7. Use a representative sample population by including traffic from all sources, days of the week, and new and returning traffic.

For example, PPC traffic does not behave the same way as the rest of your traffic, so on its own it is not representative.
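One way to sanity-check representativeness is to compare the traffic-source mix inside the test against the site-wide mix. The shares and the 5-point tolerance below are hypothetical:

```python
def skewed_sources(site_mix, test_mix, tolerance=0.05):
    """Return sources whose share of the test sample differs from the
    site-wide share by more than `tolerance` (absolute)."""
    return {
        source: (site_mix[source], test_mix.get(source, 0.0))
        for source in site_mix
        if abs(site_mix[source] - test_mix.get(source, 0.0)) > tolerance
    }

# Hypothetical shares of sessions by traffic source:
site = {"organic": 0.50, "ppc": 0.20, "email": 0.15, "direct": 0.15}
test = {"organic": 0.30, "ppc": 0.45, "email": 0.10, "direct": 0.15}
print(skewed_sources(site, test))  # PPC-heavy, organic-light: not representative
```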

8. Analyze your annual traffic and conversion data to account for anomalies.

For example, if you have a spike in sales during the spring, then tests run during this period cannot be generalized to other times of the year.
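A simple way to spot such anomalies in a year of data is to flag months that sit far from the annual mean. This sketch uses the standard library only; the monthly counts and the two-standard-deviation threshold are hypothetical:

```python
from statistics import mean, stdev

def anomalous_months(monthly_conversions, threshold=2.0):
    """Return indices of months whose conversions are more than
    `threshold` standard deviations from the annual mean."""
    mu = mean(monthly_conversions)
    sigma = stdev(monthly_conversions)
    return [i for i, c in enumerate(monthly_conversions)
            if abs(c - mu) > threshold * sigma]

# Hypothetical monthly conversion counts with a spring spike in April (index 3):
year = [400, 410, 430, 900, 420, 415, 405, 410, 400, 395, 420, 430]
print(anomalous_months(year))  # flags the April spike
```

Months flagged this way are poor windows for tests whose results you want to generalize.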

9. Talk to your team prior to running tests, and take inventory of any marketing campaigns scheduled during the test window.

For example, a PPC campaign running at the same time may skew your traffic and invalidate your A/B test.