We’re glad you found your way here so you won’t repeat these A/B testing mistakes in your ASO efforts! We’ve listed a few common mistakes that can happen to the best of us (well, not us). Since you’re here to learn, you’re in good hands. Let’s take a look!
“Let’s test these two things at once to save time!”
Stop. No. You don’t want to do that. Testing multiple features at once might give you a higher conversion rate or IPM (installs per mille, i.e., installs per 1,000 impressions), but there is no telling which feature had the most impact. Let’s say you test the icon and the screenshots at once: the results won’t show whether it was the screenshots or the icon that actually improved things. It also makes it harder to run follow-up tests to squeeze out even better results.
Do this instead: always focus on just one feature per test. Screenshots, icon, theme, motivations – you name it, but test one at a time.
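To make that concrete, here’s a minimal Python sketch of how the two metrics above are computed per variant. The variant names and counts are made up for illustration; your ASO testing tool reports these numbers for you.

```python
# Minimal sketch: comparing two variants that differ in ONE feature only.
# Variant names and counts below are hypothetical.
variants = {
    "icon_A": {"impressions": 48_000, "installs": 1_250},
    "icon_B": {"impressions": 47_500, "installs": 1_410},
}

for name, v in variants.items():
    cvr = v["installs"] / v["impressions"]           # conversion rate
    ipm = 1_000 * v["installs"] / v["impressions"]   # installs per mille
    print(f"{name}: CVR = {cvr:.2%}, IPM = {ipm:.1f}")
```

Because the variants differ only in the icon, any gap in IPM here can be attributed to the icon alone – exactly what you lose when you change several things at once.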
“We’re in a hurry, just run the test for a couple days.”
Nooooooooo. You need to run the test for at least a week. That way you capture the differences in user behavior between weekdays and weekends: people use their phones differently across the week, and even across months. So don’t do this in a hurry.
Do this instead: let the test run for a sufficient amount of time (but not for months on end). Even if you reach statistical significance sooner, let the test run for at least a week.
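For a rough idea of what “reaching statistical significance” means here, below is a small Python sketch of a two-proportion z-test on install conversion rates. The counts are invented, and most ASO testing tools run an equivalent check for you under the hood; this just shows the mechanics.

```python
import math

def two_proportion_ztest(installs_a, n_a, installs_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = installs_a / n_a, installs_b / n_b
    p_pool = (installs_a + installs_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical totals after a full week for two variants:
z, p = two_proportion_ztest(installs_a=1_250, n_a=48_000,
                            installs_b=1_410, n_b=47_500)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a genuine difference
```

Even if p dips below 0.05 after two days, keep the test running for the full week so weekend behavior is included in the data.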
“After this test we’ll hit the jackpot and increase our downloads massively.”
Well, technically it’s not impossible. But most likely that’s not how it works. Usually, testing different iterations one step at a time is the way to go. A/B testing is not a shortcut but an iterative process for finding out what works best.
Do this instead: focus on building a clear hypothesis and defining what you want to achieve. Test one step at a time and analyze the results. Not every test will be a great success; that’s why we keep testing.
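One way to ground that hypothesis-building is a quick sample-size estimate before you start, so you know what a realistically detectable lift looks like. Here’s a rough Python sketch using the standard two-proportion approximation; the baseline rate and target lift are hypothetical.

```python
import math

def impressions_per_variant(base_rate, relative_lift):
    """Rough impressions needed per variant to detect a relative lift in
    conversion rate (two-proportion approximation; z-values 1.96 and 0.84
    are fixed for 5% significance, two-sided, and 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    num = (1.96 * math.sqrt(2 * p_bar * (1 - p_bar))
           + 0.84 * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

# Hypothetical: 2.6% baseline conversion, aiming to detect a 10% relative lift
print(impressions_per_variant(0.026, 0.10))  # on the order of 60,000 impressions
```

Notice how even a modest 10% lift needs tens of thousands of impressions per variant to verify – which is exactly why one jackpot test rarely happens, and why small iterative wins compound instead.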
“I don’t like the results, let’s go with my favorite variation instead.”
Sure, but why test at all if you don’t trust data over personal preference? That’s the great thing about A/B testing: it helps you make decisions based on data instead of assumptions. It’s easy to get emotionally attached to a favorite variation, but going with it is not a good idea if the data tells you otherwise.
Do this instead: trust the data.