This probably isn’t the first time you’ve read about A/B testing. You may already be doing A/B testing for your email subject lines or social media posts.

A/B testing, sometimes called split testing, involves comparing two versions of the same web page, email, or other digital asset to determine which version performs better.

This process allows you to answer critical business questions, helps you generate more revenue from the traffic you already have, and forms the basis for a data-driven marketing strategy.

Why do we use it?

  1. Isolating the effect of a single change from everything else that has shipped

    A single release may contain multiple features, especially in release-train style rollouts, so attributing a positive or negative movement in metrics to any one of them is difficult or impossible.
  2. Excluding factors like seasonality or changes in the user mix.

    Metrics such as conversions or revenue are highly susceptible to regular seasonal fluctuations, external (global) events, or a different mix of users in the app (e.g., as a result of a marketing campaign).
  3. Finding out if users would actually use something

    Just asking users what they think of a feature and whether they would buy or use something won't give reliable answers. A/B testing gives you representative, quantitative data instead.
  4. Detecting the unexpected impact of changes.

    Changes can sometimes move metrics we didn't anticipate. For example, highlighting a particular purchase option can lead more users to pick it while actually reducing total revenue.

When do we use it?

  • If you have a lot of traffic and moving parts, it’s a challenge to improve and maintain the user experience and business performance of the app.
  • Validating whether a feature improves or hurts performance is difficult because of seasonality, external events, shifts in consumer demand and behavior, and so on.
  • Various forms of user research give you insight into user motivation and mindset but are not the best indicator of whether someone would use or buy something.

How do we use it?

We will explain how to run A/B tests in a subscription-based mobile app.

Hypothesis: By highlighting that our 12-month subscription is the "Best Value" for users, we can drive more of them to purchase it, thus increasing our average revenue per user and overall revenue.

Variations: 

  • Baseline: Subscription plans are shown without any highlights
  • Treatment: The 12-month plan is shown with a "Best Value" label

Targeted users: All users, with the analysis split between the US and the rest of the world (RoW)
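
Before any metrics are collected, each user has to be assigned to a variant deterministically, so they see the same paywall on every app launch. Here is a minimal sketch of hash-based bucketing in Python; the experiment name, the 50/50 split, and the `user_id`/`country_code` inputs are illustrative assumptions, not part of any particular SDK:

```python
import hashlib

EXPERIMENT = "best-value-label"  # illustrative experiment name

def assign_variant(user_id: str, experiment: str = EXPERIMENT) -> str:
    """Deterministically bucket a user into 'baseline' or 'treatment'.

    Hashing user_id together with the experiment name keeps the split
    stable across sessions and independent of other experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100                      # 0..99
    return "treatment" if bucket < 50 else "baseline"   # 50/50 split

def analysis_segment(country_code: str) -> str:
    """Tag each user with the segment used when analyzing results."""
    return "US" if country_code == "US" else "RoW"

# Example: decide what the paywall should render for one user
variant = assign_variant("user-12345")
segment = analysis_segment("DE")
print(variant, segment)  # e.g. "baseline RoW"
```

In practice an A/B testing or feature-flag tool will do this assignment for you; the point of the sketch is that assignment must be stable per user and independent of the metrics you later measure.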

Metrics:

  • Conversion rate to purchase
  • Average LTV
  • Revenue
  • Distribution of purchases between subscription options
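
Once the test has run long enough, the conversion rates of the two variants can be compared per segment for statistical significance. Below is a sketch of a standard two-proportion z-test in Python; the exposure and purchase counts are placeholders for illustration, not results:

```python
from math import sqrt
from scipy.stats import norm

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test for conversion rates.

    conv_*: users who purchased; n_*: users exposed to that variant.
    Returns the absolute rate difference and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_b - p_a, p_value

# Illustrative numbers only, analyzed per segment as in the test plan
diff, p = conversion_z_test(conv_a=480, n_a=10_000,   # baseline, US
                            conv_b=535, n_b=10_000)   # treatment, US
print(f"uplift: {diff:.2%}, p-value: {p:.3f}")
```

Revenue and LTV are continuous metrics rather than proportions, so they would typically be compared with a t-test or a bootstrap instead of this z-test.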

Results:

Conclusion:

If your prediction turned out to be the winner, well done! And if you didn't predict the more successful option, that's not a problem either. A/B testing is always a learning opportunity, even if it means sticking with the existing solution or setup. And if the experiment yields no result, i.e., no significant difference between the variants, go back to the drawing board: you can always come up with new hypotheses or identify new success metrics and run a new test.