A/B Testing for In-App Surveys

Introduction

A/B testing is an experiment in which two or more variations of a survey are shown to users at random, with the goal of finding the best performing variation. When it comes to surveys, the best performing variation is usually the one with the highest response rate, i.e. the number of responses divided by the number of times the survey was shown.

Refiner does not have a full-fledged A/B Testing feature suite. However, it’s still possible to build a simple A/B Test scenario for your Web and Mobile App Surveys with just a few clicks. On this page we’ll go through every step needed to set this up.

Please note that the A/B Test setup described on this page lets you test variations of your survey questions and the design of your survey. It doesn’t allow testing different Target Audience or Trigger Event settings against each other.

Create a variation

As a first step, you want to duplicate the original survey. You can find the duplication feature under the three-dots action menu.

You can name the first survey “Variation A” and the second “Variation B”.

The new survey will be in draft mode. It’s important to keep it in draft mode for now.

Make changes to the variation

Now it’s time to make changes to the new survey. We recommend making only small changes to your survey, letting a test run for some time, and then improving your surveys based on the results. If you change too many elements at once, it won’t be possible to understand which changes drive the improvement in response rates.

You can make changes to the wording of your survey, add or remove questions, or change the survey design in different ways.

To make the A/B Test work, it’s important to keep the Target Audience and Trigger Event sections of the new survey untouched (see below).

Set target audiences

When two or more surveys in your environment qualify to be shown to a user, one of them is randomly picked. This mechanism forms the basis of the A/B Testing setup described on this page.
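
As a rough illustration, the selection could look like the following TypeScript sketch. The type and function names are hypothetical and not Refiner’s actual implementation:

    interface Survey {
      id: string;
      name: string;
    }

    // When several surveys qualify for the same user at the same time,
    // one of them is picked uniformly at random.
    function pickSurvey(qualifying: Survey[]): Survey | null {
      if (qualifying.length === 0) return null;
      const index = Math.floor(Math.random() * qualifying.length);
      return qualifying[index];
    }

With two qualifying variations, each one has a 50% chance of being shown first.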

As both variations of your survey have the same targeting options, one of them would be shown to the user first and the other variation would be shown at a later stage. To prevent a user from seeing both variations, we’ll need to exclude users that saw Variation A from the Target Audience of Variation B, and vice versa.

To do so, we’ll create two user segments:

  • Users that saw Variation A
  • Users that saw Variation B

These segments are simple to build by adding a “Saw Survey Variation X” filter.

Both segments will start filling up once both surveys are published and users start to see them.

Once you’ve created the two segments, it’s time to use them in the Target Audience section under “Exclude Segment” of your survey. The idea here is that a user who saw “Variation A” should NOT see “Variation B”.

The Target Audience for survey “Variation A” should thus have “Saw Variation B” in the excluded segments, and “Variation B” should exclude users in the “Saw Variation A” segment.
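
Conceptually, the resulting targeting behaves like the TypeScript sketch below. The segment names mirror the setup above, while the data shapes and the function are hypothetical:

    interface SurveyVariation {
      name: string;
      excludedSegments: string[];
    }

    // Segments the user currently belongs to, e.g. "Saw Variation A".
    type UserSegments = Set<string>;

    // A user qualifies for a variation only if they belong to none of
    // its excluded segments.
    function qualifies(segments: UserSegments, survey: SurveyVariation): boolean {
      return survey.excludedSegments.every((name) => !segments.has(name));
    }

    const variationA: SurveyVariation = { name: "Variation A", excludedSegments: ["Saw Variation B"] };
    const variationB: SurveyVariation = { name: "Variation B", excludedSegments: ["Saw Variation A"] };

    // A user who already saw Variation A no longer qualifies for Variation B.
    const segments: UserSegments = new Set(["Saw Variation A"]);
    console.log(qualifies(segments, variationA)); // true (re-display is governed by the survey's own settings)
    console.log(qualifies(segments, variationB)); // false (excluded)

Since each variation excludes the segment tracking views of the other one, a user who has seen one variation can never qualify for the other.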

Let the test run

Once you have the two survey variations ready to go, it’s time to publish them. Over time you’ll see that both variations collect survey responses and that the two user segments you’ve created fill up.

If everything is set up correctly, any user will see only one of the two variations. You can verify this by opening the user details panel.

Once both variants have been shown a couple of hundred times (we recommend at least 1,000 survey views), you can start comparing the results by creating an ad-hoc dashboard for both of your surveys. If one of your variations has a significantly higher response rate, you can declare it the winner of your test.
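
If you want to go one step further and check whether the difference is statistically significant, a standard two-proportion z-test does the job. Here is a minimal TypeScript sketch; the function and the example numbers are purely illustrative:

    // Two-proportion z-test: is the difference between two response
    // rates larger than what random noise would explain?
    function twoProportionZTest(
      responsesA: number, viewsA: number,
      responsesB: number, viewsB: number,
    ): number {
      const rateA = responsesA / viewsA;
      const rateB = responsesB / viewsB;
      // Pooled response rate under the null hypothesis of no difference.
      const pooled = (responsesA + responsesB) / (viewsA + viewsB);
      const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / viewsA + 1 / viewsB));
      return (rateA - rateB) / standardError;
    }

    // Illustrative numbers: 1,200 views each, 15% vs. 19% response rate.
    const z = twoProportionZTest(180, 1200, 228, 1200);
    console.log(Math.abs(z).toFixed(2)); // 2.61; above 1.96 means significant at the 95% level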
