Currently, each prototype test on Maze must have at least one defined path, and it is only considered complete once a tester reaches the end screen of the path.
This means that letting testers navigate through your prototype without a defined goal, as you might want to for concept testing, is not fully supported yet.
This article explores a couple of workarounds using a prototype block and some adaptations to your design file.
In this article:
- Before you start
- Free roaming/exploration for live website tests
- Should I use a single design file or a separate file for free-roam testing?
- Setting up your free-roam test
- Analyze your results
Before you start
- Before creating a free-roam test, consider the modifications you'll need to make to your design file and the impact they may have on your file's performance.
- Because participants will be manually ending the test without reaching the success screen as they typically would, results and reporting will appear skewed.
Free roaming/exploration for live website tests
The workarounds explored in this article are only needed when testing prototypes.
Unlike prototype tests, you don't have to pre-define paths when setting up a website test. To run a free-roam test on a live website, simply skip the path creation step.
Should I use a single design file or a separate file for free-roam testing?
When setting up your design file, you can either use the same prototype where you conduct your normal task-based usability tests, or use a separate prototype.
Option A: Use a single file
It's only possible to link one design file per maze. This means that, if you want the same maze to include mission blocks for both free-roam testing and task-based usability testing, you need to use the same design file.
However, this workaround may require adding more frames to your existing file, which could lead to performance concerns in larger files. Learn more about creating an optimized testing file.
If using a single design file for your mission and your free-roam blocks, you should also ensure that all relevant frames are accessible in Maze by adding multiple flows.
You might also need to change the start screen before setting your path, so that the free-roam block starts with the correct screen.
You should also note that adding a free-roam test alongside other mission blocks will skew the results data.
This shouldn't be a problem if your primary goal is to analyze screens and paths at the block level in your Results dashboard. This functionality will still work, as seen in the Analyze your results section below.
On the other hand, global report metrics and analyses are calculated at the maze level. Since free-roam results will overwhelmingly count either as an indirect success or as a give up (depending on the approach), they will drag down metrics such as the usability score.
Option B: Use a separate file
Creating a separate design file for your free-roam test allows you to keep the file size lower, since the file only needs to include the frames that are relevant for that test.
This approach also maintains the integrity of your results, since there is a complete separation between results and reporting for the free-roam tests and for other missions.
On the other hand, you would need to create a separate maze, since only one prototype can be imported per maze. This option therefore requires you to maintain two separate design files and mazes.
Setting up your free-roam test
Depending on the approach you choose, include the following elements in your design to allow testers to explore it without being given a specific task:
Approach A: In the task instructions, encourage testers to click "End task" once they've completed the test
- A "hidden pixel" element in the start screen that navigates to a placeholder end screen.
- A placeholder end screen linked to the hidden pixel interaction.
Participants won't interact with these elements; you'll only interact with them once, when setting up the mission, to mimic a two-screen path.
After importing the prototype into Maze, set up a single two-screen path that leads from the initial screen to the placeholder final screen.
Because you expect testers to click the End task button to complete the mission, it's particularly important that you give clear instructions about what they're expected to do.
Please note that the End task button only becomes available after participants click the prototype for the first time.
Approach B: In the prototype, include a clickable element leading to a generic success screen
- A banner or other element that testers should click to complete the mission once they've finished exploring or think they've completed the task.
- A generic success screen that's triggered when clicking the banner.
After importing the prototype into Maze, set up a single two-screen path that leads from the initial screen to the generic success screen.
Because testers will need to click the banner to complete the mission, it's particularly important that you give clear instructions about what they're expected to do on that block: preferably both on the label/element you've added to the design and in the task instructions.
Analyze your results
With this workaround, you won't be able to analyze your Results data as you normally would for a prototype block. Metrics such as the usability score or the misclick rate will be heavily skewed. That said, you'll still be able to analyze your testers' behavior by looking into paths and screens.
If you opted for Approach A, testers won't reach the end screen of the path you've set up. Because they end the mission by clicking End task, their sessions will be recorded as a give up.
If you opted for Approach B, the only expected path is the two-screen path you've set up leading to the generic success screen. Because testers will typically take unexpected paths before they reach the last screen of the mission, most relevant interactions will be recorded as an indirect success.
To avoid skewing your usability metrics, you may want to use a separate maze in order to conduct free-roam testing.
Path analysis
When analyzing paths in a free-roam test, make sure that the Give up tab (if you opted for Approach A) or the Indirect success tab (if you opted for Approach B) is selected, since this is where all your meaningful data will be located.
Dive into the aggregated path data to see which paths were most common among testers.
Open each path to analyze the respective heatmaps.
Screen analysis
Click each screen to see the heatmaps/clickmaps of your testers' interactions.
Tester analysis
At the bottom of the page, you can also see details from each individual tester session.
Still need help?
If you have any questions or concerns, please let our Support team know — we'll be happy to help!