In 2017, it seems everyone is talking about A/B testing. Every new feature of your website needs to be validated before it goes live. For those who are not yet part of the A/B movement, here is a short summary.
A/B testing is a website optimisation method in which one of two or more versions of the same product (an app or website view) is shown to each user at random. The users don’t know that they are part of an experiment or that their interactions with the website are being tracked. The test is successful if one variation emerges as the winner after a certain time period, meaning it performs better and reaches a higher conversion rate. We talk about a statistically significant winner if the confidence level of your test is above the magical 95%. This number is a good indicator of how reliably the winning variation can be applied to the product to reach the same results over a longer period.
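The significance check behind that 95% figure can be sketched as a simple two-proportion z-test. This is a minimal illustration with made-up example numbers, not the exact method any particular testing tool uses:

```python
# Minimal sketch of an A/B significance check: a two-proportion z-test
# comparing the conversion rates of variants A and B.
# All numbers below are invented example data.
from math import sqrt, erf

def ab_test_confidence(conv_a, users_a, conv_b, users_b):
    """Return (relative uplift, two-sided confidence) for variant B vs. A."""
    rate_a = conv_a / users_a
    rate_b = conv_b / users_b
    # Pooled conversion rate under the null hypothesis "no difference".
    pooled = (conv_a + conv_b) / (users_a + users_b)
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (rate_b - rate_a) / se
    # Two-sided confidence level from the standard normal distribution.
    confidence = erf(abs(z) / sqrt(2))
    uplift = (rate_b - rate_a) / rate_a
    return uplift, confidence

uplift, confidence = ab_test_confidence(conv_a=40, users_a=1000,
                                        conv_b=55, users_b=1000)
print(f"uplift: {uplift:+.1%}, confidence: {confidence:.1%}")
```

Note that in this example B shows a healthy +37.5% uplift, yet the confidence level still lands below 95% — the test would have to keep running.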
The simple rule of thumb for getting significant data from the analysis is: the more users are part of your test, the more meaningful and reliable your findings. This is pretty straightforward, but what if your website is not one of the lucky fellows like Google or Facebook with millions of visitors every day? No worries, you can still get great results by running A/B tests on your “low-traffic” website (low-traffic meaning fewer than 100 users a day) if you keep the following in mind:
- Expand your testing period if necessary
Don’t stop your tests after just a few days. Especially with few users, your test needs a certain time to find its balance. If you have convincing results after a short time period, that’s mostly down to the premature testing dilemma: early conversion peaks usually level off into the real, reliable trend. To get better insight into how your product is really performing, let your test run for another few days, or even weeks if necessary.
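To get a feeling for how long “a few days or even weeks” really is on a low-traffic site, here is a rough back-of-the-envelope calculation using the common sample-size rule of thumb n ≈ 16·p(1−p)/d² users per variant (for roughly 95% significance and 80% power); all input numbers are assumptions, not data from the article:

```python
# Rough estimate of how many days a two-variant A/B test must run,
# based on the rule-of-thumb sample size n = 16 * p * (1 - p) / d^2
# per variant. Inputs are assumed example values.
def required_days(baseline_rate, relative_uplift, daily_users, variants=2):
    d = baseline_rate * relative_uplift          # absolute difference to detect
    n_per_variant = 16 * baseline_rate * (1 - baseline_rate) / d ** 2
    total_users = n_per_variant * variants
    return total_users / daily_users

# Example: 100 users/day, 4% baseline conversion, detecting a 25% uplift.
days = required_days(baseline_rate=0.04, relative_uplift=0.25, daily_users=100)
print(f"run the test for roughly {days:.0f} days")
```

With these assumed numbers the answer comes out at around four months, which is exactly why patience (or bolder changes, see below) matters so much on low-traffic sites.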
- Set your uplifts in relation to the total number of all tested users
One of the reasons for the premature testing dilemma is that a single conversion among 100 users produces a much larger uplift than a single conversion among 1,000 users. Consequently, the results at the very beginning of the test look far more meaningful than they really are. Therefore, you should never take or communicate these early results as reliable trends.
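This effect is easy to see in a few lines of code; the 4% baseline conversion rate here is just an assumed example:

```python
# Why early uplifts mislead: the same single extra conversion moves the
# measured rate far more with 100 users than with 1,000 users.
def uplift_from_one_conversion(conversions, users):
    before = conversions / users
    after = (conversions + 1) / users
    return (after - before) / before

# Assumed baseline: a 4% conversion rate in both cases.
print(f"{uplift_from_one_conversion(4, 100):+.1%}")    # of 100 users  -> +25.0%
print(f"{uplift_from_one_conversion(40, 1000):+.1%}")  # of 1,000 users -> +2.5%
```

The identical event — one extra conversion — looks like a dramatic win in the small sample and like noise in the larger one.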
- Test significantly different versions
If you test a low-traffic website with variations that are too similar, you have no chance of getting reliable results. Small design changes like expanding the line spacing or adding a box shadow might not trigger any different behaviour in your users at all. Therefore, it is essential for the reliability of your results that you make a bold, clearly visible change to the one element of your product you think will make the greatest difference.
- Mix quantitative and qualitative testing
Suppose your A/B test isn’t delivering the results you need. You could invite real users and let them take the test in your presence. On the one hand, you have the chance to ask follow-up questions directly and gain deeper insights from the users. On the other hand, they may behave differently because they know they are part of a test. A combination of A/B testing and face-to-face user interviews can give you more trustworthy insights into your website’s performance, or into how to refine your A/B test to get better results.
- Accept your low-traffic results and communicate them as such
Use your experience from previous tests, or your website traffic data in general, to estimate the expected number of users in the test. If you set yourself reachable goals matching your expectations, you won’t be disappointed by your results afterwards. With these outcomes, and the possible distortions of the premature testing dilemma, in mind, you can communicate these less significant findings as tentative trends rather than proven results.
- Automated testing
In the near future it may become possible to add virtually generated users to your A/B test quite easily with robot tools like Routine Bot or Buildbot. These so-called bots would act and behave like real people, which means a solution to the problem of too few users in your tests may already be on its way. Stay tuned.
If you keep these tips and tricks in mind, you should be able to get better and more reliable results, even if you have only a few users in your test. By basing new features on better and more reliable A/B tests, you can improve the user experience of your product, which might also result in a growing user base.