When you test your maze with users, you might wonder how many testers you need. We recommend testing each maze with twenty or more users to get valid results.
Variability between users’ performance
With Maze, you run quantitative usability tests and collect metrics about how your designs perform with real users. As a standard practice, we recommend twenty testers to account for the variability that can exist between users.
Some people might be novice users of your product and will take more time to learn the interface. If you test with too few users, these differences in user performance can affect the validity of your results.
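As a rough illustration (with made-up completion times, not Maze data), the simulated study below shows how the average time on task swings widely with only a handful of testers and settles down as the sample grows:

```python
import random

random.seed(1)

# Hypothetical illustration: time on task (seconds) varies a lot between
# individual testers (experienced users ~30s, novices ~120s). These numbers
# are assumptions for the sketch, not measured data.
def simulate_tester():
    return random.gauss(30, 10) if random.random() < 0.7 else random.gauss(120, 40)

def average_time(n_testers):
    return sum(simulate_tester() for _ in range(n_testers)) / n_testers

# With very few testers the average jumps around from study to study;
# with twenty or more it stays much closer to the true mean.
for n in (3, 5, 20, 50):
    runs = [average_time(n) for _ in range(1000)]
    spread = max(runs) - min(runs)
    print(f"{n:>2} testers: averages ranged over ~{spread:.0f}s across 1000 simulated studies")
```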
Similarly, though to a lesser extent, some people may be new to usability testing and find the testing interface confusing at first. To account for the novelty of the experience, we recommend starting your maze with a simple task that gives users context.
External circumstances
When you conduct unmoderated remote testing (i.e., you send your testers a link, and they take it on their own), external circumstances might influence the results you get.
For example, your testers might leave the maze tab open when they get distracted by something else. This is natural behavior that happens in a live product too, so it's worth keeping in mind.
However, when measuring usability metrics, it can produce outliers that aren't design-related, such as a high time on screen or a high give-up rate. Having more people take your maze helps you account for this deviation if it happens.
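To see why sample size matters here, consider a made-up example: one distracted tester leaves the tab open for ten minutes while everyone else finishes in about thirty seconds. The values below are purely illustrative:

```python
# Hypothetical illustration: one distracted tester skews a small sample far
# more than a larger one. Both numbers are assumptions for the sketch.
typical = 30      # seconds, a made-up typical completion time
outlier = 600     # seconds, tab left open by a distracted tester

for n in (5, 20):
    times = [typical] * (n - 1) + [outlier]
    mean = sum(times) / n
    print(f"{n:>2} testers: average time on screen {mean:.0f}s (typical: {typical}s)")

# With 5 testers the single outlier drags the average up to ~144s;
# with 20 testers it only nudges it to ~59s.
```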
Confidence in the data
If your product is mission-critical and you want to be confident that issues have been uncovered before your design goes live, test with more users so that even harder-to-find problems have a chance to surface.
More testers mean greater confidence that your results are accurate. For mission-critical designs that can potentially affect a large number of users, test with as many users as you can.
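One rough way to reason about this is the classic problem-discovery model, where the chance that at least one of n testers encounters a problem affecting a proportion p of users is 1 - (1 - p)^n. The sketch below uses that model with illustrative values of p; it isn't a Maze calculation, just a back-of-the-envelope view of why rarer problems need larger samples:

```python
# Problem-discovery model: probability that at least one of n testers
# hits a usability problem that affects a proportion p of users.
def chance_of_uncovering(p, n_testers):
    return 1 - (1 - p) ** n_testers

# p = 0.30 stands in for a common problem, p = 0.10 for a harder-to-find one;
# both values are assumptions for illustration.
for p in (0.30, 0.10):
    for n in (5, 20, 50):
        print(f"p={p:.2f}, {n:>2} testers: "
              f"{chance_of_uncovering(p, n):.0%} chance of seeing it at least once")
```

With these illustrative numbers, a problem only one in ten users hits shows up in about 41% of five-person studies, but in roughly 88% of twenty-person studies, which is why larger samples matter most for mission-critical designs.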
Conclusion
In quantitative studies, the more users you test with, the better the chance that your results are accurate and that you've uncovered even the harder-to-find problems.
As a rule of thumb, we recommend testing with at least twenty users to account for the factors above. When your project is critical, or you want high confidence in the data, test with as many users as you can afford.