Broadly speaking, design comparison allows you to test multiple versions of your product. The goal is to identify which one performs best in terms of usability and desirability.
With this kind of testing, the aim is to learn how different variables affect overall performance. To get the most out of it, establish the metric(s) you’ll be measuring before you start testing. For example, you might test two variations of a call-to-action (CTA) button to find out which one gets a higher click rate.
You can test different variants using Maze — not just prototypes, but any block type. This article explores three approaches you can take: using the Variant Comparison block, swapping versions manually within a single maze, or using two separate mazes.
In this article:
- Are variant comparison and A/B testing the same?
- Different approaches
- Option 1: Using the Variant Comparison block (Recommended)
- Option 2: Changing the order/version manually on the same maze
- Option 3: Using the Duplicate feature to compare prototypes
- Additional resources
Are variant comparison and A/B testing the same?
A/B testing and variant comparison both involve comparing different variations, so researchers sometimes use the terms interchangeably. Strictly speaking, however, A/B testing and variant comparison are distinct in their focus and methodology.
A/B testing, also known as split testing, compares the performance of two variants in a live production environment. This method splits organic users into a control group (version A) and a treatment group (version B), assigned randomly to minimize bias. Quantitative metrics such as click-through rates, bounce rates, and session duration are compared to determine which variant performs better. Statistical significance is crucial in A/B testing to determine if the differences in metrics between the variants are conclusive, and not merely coincidental.
On the other hand, variant comparison is a broader term that encompasses evaluating multiple variations or designs. It involves evaluating and analyzing different design elements, layouts, or concepts to identify the most effective or appealing option. Variant comparison can involve both qualitative analysis, such as direct user feedback, and quantitative analysis, such as prototype metrics measuring user interactions.
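As a rough illustration of the statistical-significance check mentioned above, here is a minimal sketch of a two-proportion z-test comparing click-through rates between two variants. It's a generic statistics example with hypothetical numbers, not something tied to Maze's reports.

```python
# A minimal sketch of a two-proportion z-test for comparing click-through
# rates (CTR) between two variants. All numbers below are hypothetical.
from math import sqrt
from statistics import NormalDist

clicks_a, visitors_a = 180, 2400   # version A (control)
clicks_b, visitors_b = 228, 2350   # version B (treatment)

ctr_a = clicks_a / visitors_a
ctr_b = clicks_b / visitors_b

# Pooled rate and standard error under the null hypothesis of no difference
pooled = (clicks_a + clicks_b) / (visitors_a + visitors_b)
se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

z = (ctr_b - ctr_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

print(f"CTR A = {ctr_a:.2%}, CTR B = {ctr_b:.2%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
# A p-value below your chosen threshold (commonly 0.05) suggests the observed
# difference is unlikely to be mere coincidence.
```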
Different approaches to variant comparison
There are different strategies you can use when defining which aspects to compare:
- Versioning: When testing completely different versions of a design. For example, to validate which user flow seems more intuitive to users.
- Multivariate: When testing variations within a design. For example, to validate which color or CTA copy resonates the most.
There are also different approaches to allocating users, or exposing them to different treatments (a short sketch illustrating both follows this list):
- Exclusive comparison: Each tester is assigned to only one version of the design. In product research, the terms Monadic, Between-subjects, and Between-groups can also be used to describe this approach.
- Alternating comparison: All testers are exposed to both versions, although in a different order. The terms Sequential, Within-subjects, Within-groups, and Consecutive are also used to describe this approach.
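To make the two allocation approaches concrete, here is a minimal conceptual sketch in Python. It's purely illustrative; it is not how Maze distributes participants internally.

```python
# A conceptual sketch of the two allocation approaches. Purely illustrative;
# the actual distribution is handled by your testing setup, not this code.
import random

VARIANTS = ["A", "B"]

def exclusive_assignment() -> str:
    """Exclusive (between-subjects): each participant sees exactly one variant."""
    return random.choice(VARIANTS)

def alternating_assignment() -> list[str]:
    """Alternating (within-subjects): each participant sees every variant,
    in a randomized order to reduce ordering bias."""
    order = VARIANTS.copy()
    random.shuffle(order)
    return order

print(exclusive_assignment())    # e.g. "B"
print(alternating_assignment())  # e.g. ["B", "A"]
```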
Option 1: Using the Variant Comparison block (Recommended)
The Variant Comparison block allows you to test and compare multiple design variants. This helps identify the best-performing designs in terms of usability and desirability.
To reduce bias, you can choose to show each participant all variants in a randomized order (alternating comparison), or show each participant only one variant (exclusive comparison).
| ✅ Advantages | ❌ Limitations |
|---|---|
| Allows you to keep a single maze | Only one design file can be linked per maze. Therefore, this approach requires you to use a single design file for all variants. |
| Allows you to hire testers from the maze panel | Because it’s not possible to add prototypes to a live maze, all variants must be ready before publishing the maze. |
| All your results in one place: simply compare the differences between blocks | |
Who can use this feature
The Variant Comparison block is an Organization plan feature.
Step 1: Add a Variant Comparison block
To create a new comparison, open your draft maze, or create a new maze.
In the blocks list, click Add block, then select the Variant comparison block from the drop-down.
Only one Variant comparison block can be added per maze.
Step 2: Define your approach
In the block options, pick a name for each of the variants to make it easier to tell them apart in the results (e.g. “CTA Blue” and “CTA Red”). These names won’t be visible to participants.
Click + Add variant to add more variants. You can add up to 5 variants.
Pick a variant distribution method:
- Exclusive: Participants are randomly split between variants, and see only one of them.
- Alternating: Participants see all variants in a randomized order.
Step 3: Add blocks for each of the variants
Next, add blocks for each of the variants.
You can add any block type, except screeners or other variant comparisons. For example, you could add a prototype or website test, an opinion scale, or an open question with dynamic follow-ups.
After setting up the first variant in a Variant Comparison block, use the Duplicate button. This will create a copy of that variant and duplicate all blocks within it.
Please note that it’s only possible to import one prototype per maze. After importing, the prototype will be linked to all prototype blocks across the maze. This means that the imported file must include all the flows you want to test.
Change the start screen to select a different flow within the design file. Learn how to test multiple flows using a single Figma prototype
Step 4: Preview, publish, and share
Before publishing your maze, preview and test it thoroughly until you are satisfied with the content and performance.
After previewing and publishing, share the maze link with the participants. Learn more about sharing mazes
The variant distribution will be handled in the background according to the method you’ve selected.
Step 5: Analyze results
In the Results dashboard, you’ll see a structure similar to the one in the builder. Here, the Variant Comparison block encompasses the blocks belonging to each of the variants.
For some block types (Prototype Test, Multiple Choice, Opinion Scale, Yes/No, Open Question) you can compare different versions side-by-side in one view.
For other block types, go through each block within the Variant Comparison to compare the results between them.
Learn more about analyzing results of a variant comparison block
Option 2: Changing the order/version manually on the same maze
In this approach, you’ll use a single maze with both versions (i.e. A and B).
| ✅ Advantages | ❌ Limitations |
|---|---|
| Allows you to keep a single maze | Only one design file can be linked per maze. Therefore, this approach requires you to use a single design file for both versions. |
| Same results & reports: you just need to compare the differences between blocks and versions | Because it’s not possible to add prototypes to a live maze, both versions must be ready before publishing the maze. |
| Allows you to hire testers from the maze panel | Manual process that requires tracking the number of respondents. It doesn’t guarantee an equal number of respondents on each version/order. |
Step 1: Set up your maze
Set up your design file so that it includes both versions you want to test.
Import the prototype into Maze and create two prototype blocks, one for Version A, the other for Version B.
Both blocks need to be set up using screens from the same file. Once you send the maze live, you won’t be able to change or refresh the file.
If you follow a consecutive approach (i.e. you’ll show participants one version, then the other), at this stage, Version A will appear before Version B.
Change the start screen to select a different flow within the design file. Learn how to test multiple flows using a single Figma prototype
For exclusive evaluations (i.e. each participant only sees one of the versions), you’ll initially hide the block corresponding to Version B.
To hide a block, hover over the block tile, click the ••• menu, and select Hide for participants.
Step 2: Share your maze & collect results (Part I)
After publishing the maze, share the maze link with the participants. Learn more about sharing mazes
You can use URL parameters to identify the version or audience of the maze, for instance ?version=AB or ?version=A.
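If you prepare these links programmatically, a minimal sketch could look like the following. The base URL is a placeholder; substitute your maze's actual share link.

```python
# A minimal sketch of appending a version parameter to a maze share link.
# The base URL below is a placeholder, not a real share link.
from urllib.parse import urlencode

BASE_LINK = "https://example.com/your-maze-share-link"

def tagged_link(version: str) -> str:
    """Return the share link with a ?version=... parameter attached."""
    return f"{BASE_LINK}?{urlencode({'version': version})}"

print(tagged_link("AB"))  # consecutive approach: Version A shown before B
print(tagged_link("A"))   # exclusive approach: Version A only
```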
Step 3: Edit the maze
Keep monitoring the results as they come in. Once you’ve received enough results on that version, open the Build tab to go back to the maze editor.
If you’re showing both versions consecutively, change the order of the blocks so that Version B appears first.
If you’re showing one version at a time, hide Version A and display version B.
When you’re ready, update the maze with the latest changes.
Step 4: Share your maze & collect results (Part II)
Share the link once again with participants. Learn more about sharing mazes
If you’ve used URL parameters, change them to reflect the current version, for instance ?version=BA or ?version=B.
Step 5: Analyze the results for both versions
You can now compare the results between the two versions.
If you’ve used URL parameters to identify the versions, they’ll appear as metadata in the Results dashboard, as well as the CSV export, if you’re on a paid plan.
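If you work with the CSV export, a rough sketch of splitting results by that parameter might look like this. The file name and the exact column holding the parameter are assumptions; check your export for the real names.

```python
# A rough sketch of splitting exported results by the ?version=... parameter.
# "results.csv" and the "version" column name are assumptions; check your
# actual export for the exact file and column names.
import pandas as pd

results = pd.read_csv("results.csv")
for version, group in results.groupby("version"):
    print(f"Version {version}: {len(group)} responses")
```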
Option 3: Using the Duplicate feature to compare prototypes
In this approach, you’ll use the duplicate feature to create two separate mazes: one for Version A, the other for Version B.
You can either display these versions sequentially (consecutive comparison) or one at a time (exclusive evaluation).
| ✅ Advantages | ❌ Limitations |
|---|---|
| It’s possible to use two separate design files. | You must take the appropriate steps to make sure that roughly the same number of participants tests both mazes. |
| | Distributing both mazes to participants is a more manual process than the alternative approaches described in this article. |
| | If using hired testers and placing panel orders for each maze, you may get the same testers in both mazes. This is especially true if you make both orders back-to-back, and/or target a more niche audience. |
| | It’s not possible to combine results for both mazes. |
Step 1: Set up your first maze
Create and set up a maze for Version A.
Import the relevant file in the prototype block(s).
Step 2: Set up the second maze
In your prototyping tool, create a new version of the design (e.g., change the color or the copy of the CTA button). You can either edit the existing file, or use a new file altogether.
Back in Maze, duplicate the first maze you created. This creates a new maze with the same missions and questions as Version A.
Open the duplicate maze: this will be Version B. Here, you have two options:
I. Use the same prototype
If you change the original prototype and then duplicate the maze, the new maze will also include the latest design changes. This is a good option when you want to use a different version of the same prototype you previously imported.
This option allows you to preserve the paths you previously set.
Refresh the prototype in Version B to get the latest changes you've made to the prototype.
II. Unlink prototype and import a new one
If you want to use a different file in version B, you can unlink the current prototype after duplicating. This will allow you to use a different prototype for version B. Learn more about unlinking prototypes
Keep in mind this option requires you to redo all paths across all prototype blocks, since unlinking a prototype also irreversibly removes all previously defined paths.
Step 3: Share your mazes & collect results
After publishing both mazes, share the links with the participants. Learn more about sharing mazes
At the moment, it’s not possible to randomize which maze participants will see, or to prevent testers from taking both mazes. This means that you must take the appropriate steps so that both versions are evenly distributed among the participants:
Link sharing: Split your audience into two and send different communications for each version (see the sketch after this list). Additionally:
- You can use URL parameters to identify the version or audience of the maze before manually distributing them to participants (e.g. ?version=A and ?version=B).
- While we don't directly support or endorse them, there are also third-party custom URL tools you can use to distribute maze share links, such as Linkly or Nimble Links.
Panel: The panel isn't a great fit for this approach, since there isn’t a way to prevent panel testers from taking both tests. This is especially true if you make both orders back-to-back, and/or target a more niche audience.
Prompt: When sharing mazes via an in-product prompt, you should add a parameter to the prompt URL, or choose different Amplitude cohorts as the target audience.
Reach: Alternate between assigning testers an 'A' or 'B' tag to create separate segments. When creating a campaign (one for maze A, the other for maze B), you can then use each of these segments as recipients.
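For the link-sharing route, here is a minimal sketch of splitting a contact list in two and pairing each half with a differently tagged link. All email addresses and URLs below are placeholders.

```python
# A minimal sketch of splitting an audience in two and assigning each half
# a different maze link. Emails and links below are placeholders.
import random

audience = [f"tester{i}@example.com" for i in range(1, 101)]
random.shuffle(audience)

half = len(audience) // 2
groups = {
    "https://example.com/maze-a-link?version=A": audience[:half],
    "https://example.com/maze-b-link?version=B": audience[half:],
}

for link, recipients in groups.items():
    print(f"{link} -> {len(recipients)} recipients")
```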
Step 4: Analyze the results for both versions
When testing is done, compare the results from both mazes.
While it’s not possible to directly consolidate results for both mazes, you can embed both reports side by side in your tool of choice to compare them easily.
If you're on a paid plan, you can also export and combine the CSV files with the results from both mazes.
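As an example, a minimal sketch of combining the two exports with pandas could look like this. The file names are placeholders, and the version column is added here for convenience rather than coming from Maze.

```python
# A minimal sketch of combining CSV exports from two separate mazes.
# File names are placeholders; the "version" column is added here, not by Maze.
import pandas as pd

version_a = pd.read_csv("maze_version_a.csv").assign(version="A")
version_b = pd.read_csv("maze_version_b.csv").assign(version="B")

combined = pd.concat([version_a, version_b], ignore_index=True)
combined.to_csv("combined_results.csv", index=False)
print(combined["version"].value_counts())
```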
Additional resources
- For additional tips, read our blog post: From idea to impact - Your guide to A/B testing prototypes
- See this case study to learn how the team at Braze validated their designs with an A/B test using Maze.
Still need help?
If you have any questions or concerns, please let our Support team know — we'll be happy to help!