How to analyze A/B test results without a winning variant

Last week’s post talked about analyzing the results when a winning variant is found.
This week's topic will, in turn, dig into how to analyze the results when the test does not find a winner. Spoiler alert: you can still gain plenty of valuable insights, so don't worry if that happens.

Winning variant: not found

There are multiple reasons that can lead to this outcome. For example, the difference between the variants may not have been large enough to produce a measurable change in conversion rate. Maybe you changed an element that does play a role in conversion, like the screenshots or the icon, but changed it for the worse. Or maybe whatever was changed simply doesn't matter that much on the product page.
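As a quick sanity check on the first of those reasons, you can ask whether the observed difference in conversion rate could just be noise. The sketch below runs a standard two-proportion z-test on purely illustrative numbers; the helper function and the counts are assumptions for the example, not Geeklab's actual methodology or data.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    conv_*: number of conversions, n_*: number of visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical numbers: control converts at 3.1%, variant at 3.3%.
p_a, p_b, z, p = two_proportion_z_test(310, 10_000, 330, 10_000)
print(f"control {p_a:.3%} vs variant {p_b:.3%}, z = {z:.2f}, p = {p:.3f}")
# The p-value comes out well above 0.05, so the test cannot call a winner:
# the change may simply be too small to detect at this sample size.
```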

The most common reason, however, is that the hypothesis was not strong. Formulating a strong hypothesis is crucial for the success of an A/B test, and that requires proper research beforehand; otherwise you risk weak hypotheses, or changes so small and subtle that they cannot have a noticeable impact.

Valuable insights: found

Even without a clear winner, the test is not worthless. The user behavior data can still be analyzed, the test performance leaves a trail of clues, and you can always learn something for the next test. From the behavior data you can tell how many users actually spent time on the product page. Or rather, how many seconds you have to catch their interest.
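As a hedged illustration of that first step, here is how you could turn raw session data into a picture of how long visitors actually stay on the page. The record format, field names, and 5-second threshold are made-up assumptions for the example, not a specific analytics export.

```python
from statistics import median

# Hypothetical session records: seconds each visitor spent on the product page
# and whether they installed. In practice these come from your analytics export.
sessions = [
    {"time_on_page_s": 4,  "installed": False},
    {"time_on_page_s": 11, "installed": True},
    {"time_on_page_s": 2,  "installed": False},
    {"time_on_page_s": 27, "installed": True},
    {"time_on_page_s": 6,  "installed": False},
]

times = [s["time_on_page_s"] for s in sessions]
engaged = [s for s in sessions if s["time_on_page_s"] >= 5]   # arbitrary 5 s cutoff

print(f"median time on page: {median(times)} s")
print(f"share of visitors staying at least 5 s: {len(engaged) / len(sessions):.0%}")
```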

Let’s say the test was comparing different versions of the description. A heat map shows how people scrolled the page and whether that part of it gained any traction. Click counts on page elements show how many users tapped to expand the description. If neither shows much activity, and most users didn’t spend long browsing the product page, the description isn’t performing: most likely your users don’t pay attention to it, and it doesn’t affect the conversion rate much. The good news is that you don’t need to pay much attention to it either. As long as you make sure it isn’t dragging your conversion rate down, you can focus on other areas.
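To make that concrete, here is a small sketch of the kind of check you could run on exported behavior data. The field names (scroll_depth_pct, clicked_read_more) and the 50% layout guess are hypothetical stand-ins for whatever your heat-map or analytics tool actually exposes.

```python
# Hypothetical per-visitor behavior records for the variant with the new description.
visitors = [
    {"scroll_depth_pct": 35, "clicked_read_more": False, "converted": False},
    {"scroll_depth_pct": 80, "clicked_read_more": True,  "converted": True},
    {"scroll_depth_pct": 20, "clicked_read_more": False, "converted": False},
    {"scroll_depth_pct": 60, "clicked_read_more": False, "converted": True},
]

# Assume the description starts roughly 50% down the page (illustrative layout guess).
DESCRIPTION_START_PCT = 50

reached = [v for v in visitors if v["scroll_depth_pct"] >= DESCRIPTION_START_PCT]
expanded = [v for v in visitors if v["clicked_read_more"]]

print(f"reached the description: {len(reached) / len(visitors):.0%}")
print(f"clicked to expand it:    {len(expanded) / len(visitors):.0%}")
# If both shares stay low, the description is unlikely to be what drives
# (or drags down) conversion, so the next test can target other elements.
```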

Typically the most impactful tests are those that test screenshots. They are the most eye-catching element, and we are naturally drawn to anything visually interesting. It is possible that, even though screenshots are the most impactful element, the change actually took something away from the existing version.

Conclusion

When a winning variant is not found, it is important to examine the possible reasons behind the outcome. A weak hypothesis, overly subtle changes, or irrelevant elements could be the culprit. However, despite the lack of a clear winner, don’t give up. By focusing on the areas that are more impactful, it is possible to improve performance and increase conversion rates. A/B testing is an iterative process, and learning from previous tests is crucial for future success. Therefore, it is important not to dismiss inconclusive results but instead to analyze them carefully and use the insights gained to improve future testing cycles.

PS. Our crew members are awesome at testing – if you hit a wall, just reach out!
