
Tips for A/B Testing on Google Play Experiments

For Android app developers, Google Play Experiments can provide valuable insights and help increase installs. A well-designed A/B test can make the difference between a user installing your app and installing a competitor’s. However, tests are often run improperly, and those mistakes can work against an app and hurt its performance.

Here is a guide for using Google Play Experiments for A/B testing.

Setting Up a Google Play Experiment

You can access the Experiment console from within the Google Play Developer Console’s app dashboard. Go to Store Presence on the left-hand side of the screen and select Store Listing Experiments. From there, you can select “New Experiment” and set up your test.

There are two types of experiments you can run: a Default Graphics Experiment and a Localized Experiment. A Default Graphics Experiment only runs in regions that use the language you selected as your default, while a Localized Experiment runs in any region where your app is available.

The former allows you to test creative elements like icons and screenshots, while the latter also lets you test your short and long descriptions.

When choosing your test variants, keep in mind that the more variants you test, the longer it can take to get actionable results. Each additional variant splits your traffic further, so the test needs more time and more visitors to establish a confidence interval around the possible conversion impact.
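
To make the traffic math concrete, here is a minimal sketch of a standard two-proportion sample-size estimate. The figures (a 3% install rate, a hoped-for 10% relative lift, 2,000 listing visitors per day) are hypothetical, and this is not Google's published methodology; it simply illustrates why splitting traffic across more variants stretches the time needed to reach a confidence interval.

```python
# Rough sample-size sketch (hypothetical numbers, not Google's methodology):
# how many store listing visitors each arm needs before a difference in
# conversion rate becomes detectable, and how variant count stretches duration.
from math import sqrt, ceil
from statistics import NormalDist

def visitors_per_variant(base_rate: float, lift: float,
                         alpha: float = 0.10, power: float = 0.80) -> int:
    """Two-proportion sample-size approximation for one arm vs. control."""
    p1, p2 = base_rate, base_rate * (1 + lift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # matches a 90% interval
    z_power = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

n = visitors_per_variant(base_rate=0.03, lift=0.10)   # ~42,000 visitors per arm
daily_visitors = 2_000                                # hypothetical listing traffic
for variants in (1, 2, 3):                            # variants tested alongside control
    share = daily_visitors // (variants + 1)          # traffic is split evenly
    print(f"{variants} variant(s): ~{n:,} visitors per arm, "
          f"roughly {ceil(n / share)} days at {daily_visitors:,}/day")
```

Under these made-up assumptions, one variant against the control reaches a detectable result in about six weeks, while three variants on the same traffic take roughly twice as long.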

Understanding the Experiment Results

As you run tests, you can measure the results based on First Time Installers or Retained Installers (One Day). First Time Installers is the total number of first-time installs attributed to each variant, while Retained Installers counts the users who still had the app installed one day after installing.

The console also reports Current installs (users who currently have the app installed) and Scaled installs (how many installs you would hypothetically have gained had the variant received 100% of the traffic during the test period).
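
As a back-of-the-envelope illustration of the Scaled figure (the numbers are made up and this is not the console's exact calculation), you can project each arm's conversion rate onto the full test-period traffic and compare:

```python
# Back-of-the-envelope illustration of "Scaled" installs (hypothetical numbers,
# not the console's exact formula): project each arm's conversion rate onto
# 100% of the test period's traffic and compare against the current listing.
def scaled_installs(installs: int, visitors: int, total_visitors: int) -> float:
    """Project an arm's installs as if it had received all of the traffic."""
    return (installs / visitors) * total_visitors

current = scaled_installs(installs=300, visitors=10_000, total_visitors=20_000)
variant = scaled_installs(installs=330, visitors=10_000, total_visitors=20_000)
print(f"Current listing scaled: {current:.0f} installs, "
      f"variant scaled: {variant:.0f}, hypothetical gain: {variant - current:+.0f}")
```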

Google Play Experiments and A/B Testing

The 90% Confidence Interval is generated once the test has run long enough to yield actionable insights. It appears as a red/green bar indicating how conversions would theoretically shift if the variant were deployed live. A green bar indicates a positive shift, a red bar indicates a negative shift, and a bar showing both colors means the result could swing in either direction.
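
The sketch below shows how an interval can end up spanning both negative and positive values. It uses a standard two-proportion interval on the difference in conversion rates with hypothetical install counts; Google has not published the exact method behind the bar, so treat this only as an intuition aid.

```python
# Hedged sketch of a 90% interval on the conversion-rate difference
# (standard two-proportion Wald interval, hypothetical numbers; not
# necessarily the method Google Play uses to draw the red/green bar).
from math import sqrt
from statistics import NormalDist

def diff_interval(c_installs, c_visitors, v_installs, v_visitors, level=0.90):
    p_c, p_v = c_installs / c_visitors, v_installs / v_visitors
    se = sqrt(p_c * (1 - p_c) / c_visitors + p_v * (1 - p_v) / v_visitors)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    diff = p_v - p_c
    return diff - z * se, diff + z * se

low, high = diff_interval(300, 10_000, 330, 10_000)
print(f"90% interval on the conversion lift: {low:+.4%} to {high:+.4%}")
# An interval such as -0.11% to +0.71% would render partly red and partly
# green, meaning the live result could swing in either direction.
```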

Best Practices to Consider for A/B Testing in Google Play

When you’re running your A/B test, wait until the confidence interval is established before drawing any conclusions. Installs per variant can shift throughout the testing process, so if the test doesn’t run long enough to establish a level of confidence, the variants might perform differently when applied live.

If there is not enough traffic to establish a confidence interval, you can compare conversion trends week by week to see if there are any consistencies that emerge.
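
One way to run that week-by-week comparison is sketched below, with made-up weekly figures: tally whether the variant out-converts the control in most weeks rather than waiting on a formal interval.

```python
# Minimal week-over-week sanity check (hypothetical weekly figures) for when
# traffic is too thin to establish a confidence interval: does the variant
# out-convert the control consistently across weeks?
weekly = [
    # (week, control_installs, control_visitors, variant_installs, variant_visitors)
    ("W1", 42, 1_500, 48, 1_480),
    ("W2", 39, 1_420, 45, 1_460),
    ("W3", 44, 1_530, 43, 1_510),
]

variant_wins = 0
for week, c_inst, c_vis, v_inst, v_vis in weekly:
    control_rate, variant_rate = c_inst / c_vis, v_inst / v_vis
    variant_wins += variant_rate > control_rate
    print(f"{week}: control {control_rate:.2%} vs variant {variant_rate:.2%}")
print(f"Variant converted better in {variant_wins} of {len(weekly)} weeks")
```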

You’ll also want to track impact post-deployment. Even if the Confidence Interval indicates that a test variant would have performed better, its actual performance could still differ, especially if the interval spanned both red and green.

After deploying the test variant, keep an eye on how your impressions are affected; the true impact may differ from what the test predicted.

Once you’ve determined which variants perform best, you’ll want to iterate and update. Part of the goal of A/B testing is to find new ways to improve, so after learning what works, you can create new variants with those results in mind.

Google Play Experiments and A/B Testing Results

For example, when working with AVIS, Gummicube went through multiple rounds of A/B testing. This helped determine what creative elements and messaging best converted users. That approach yielded a 28% increase in conversions from the feature graphic tests alone.

Iteration is important to your app’s growth. It helps you continually turn up the dial on your conversions as your efforts grow.

Conclusion

A/B testing can be a great way to improve your app and your overall App Store Optimization. When setting up your test, ensure that you limit the number of variants you test at once to expedite the test results.

During the test, monitor how your installs are affected and what the Confidence Interval displays. The more users who see your app, the better your chances of establishing a consistent trend that validates the results.

Lastly, you’ll want to constantly iterate. Each iteration can help you learn what converts users best, so you can better understand how to optimize your app and scale. By taking a methodical approach to A/B testing, a developer can work towards growing their app further.

David Bell

Dave Bell is an entrepreneur and recognized pioneer in the fields of mobile entertainment and digital content distribution. Dave is the Co-Founder & CEO of Gummicube - the leading global provider of data, technology, and services for App Store Optimization.
