Usability testing goes beyond one-time fixes after shipping: it’s a powerful method for understanding your users’ behavior and preferences, and for validating solutions.
Regular product research provides invaluable insights into what’s working, what needs improvement, and which opportunities you may have overlooked. Without it, you risk spending time on something that doesn’t meet user needs, leading to wasted resources and additional costs.
But how do you tackle this? How do teams reflect on their research goals and design effective studies, especially if they’re unfamiliar with conducting research?
This topic was covered in the webinar ‘Designing Effective Usability Tests’, with Roberta Dombrowski, Research Partner at Maze. Review the full webinar recording
This guide recaps the key takeaways from the webinar and outlines the process of conducting usability testing effectively. It helps you identify opportunities and problems early on, making sure that not only are you building the thing right, but you’re building the right thing.
Step 1: Define goals & evaluation criteria
What specific tasks do you want participants to accomplish within your product? Examples include finding information, completing an online food order, or booking transportation.
Establish what you are looking to measure in your study, and identify clear success criteria to effectively gauge user performance. Consider what the expected outcome of each task should be.
Evaluation criteria can be classified into two primary categories: actions and attitudes.
Action-oriented criteria revolve around tangible metrics that reflect user engagement and interaction, such as mission success, bounce rate, misclick rate, and session duration when testing prototypes or websites. These metrics help evaluate the ease of use of your product.
On the other hand, attitude-oriented criteria focus on the subjective experiences and perceptions of your participants, including satisfaction, ease of use, clarity of the messaging, and confidence in the product.
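To make the action-oriented metrics concrete, here’s a minimal TypeScript sketch of how they could be computed from raw session data. The `Session` shape and its fields are assumptions for illustration only, not Maze’s data model; in practice, Maze reports these metrics for you automatically.

```typescript
// Illustrative only: this Session shape is an assumption for the example,
// not Maze's data model. Maze calculates these metrics automatically.
interface Session {
  completedMission: boolean; // did the participant finish the task?
  clicks: number;            // total clicks recorded during the task
  misclicks: number;         // clicks outside any interactive hotspot
  durationSeconds: number;   // time spent on the task
  bounced: boolean;          // left the first screen without interacting
}

function summarize(sessions: Session[]) {
  const n = sessions.length;
  if (n === 0) throw new Error("No sessions to summarize");
  const totalClicks = sessions.reduce((sum, s) => sum + s.clicks, 0);
  const totalMisclicks = sessions.reduce((sum, s) => sum + s.misclicks, 0);
  return {
    missionSuccessRate: sessions.filter(s => s.completedMission).length / n,
    bounceRate: sessions.filter(s => s.bounced).length / n,
    misclickRate: totalClicks > 0 ? totalMisclicks / totalClicks : 0,
    avgDurationSeconds:
      sessions.reduce((sum, s) => sum + s.durationSeconds, 0) / n,
  };
}
```

Pairing these numbers with the attitude-oriented answers below is what turns raw interaction data into a full picture of the experience.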
Step 2: Choose your approach
Consider the following dimensions when designing your test:
Moderated vs. Unmoderated: Moderated testing involves live sessions, offering deeper insights for complex sites or prototypes. In unmoderated testing, participants complete tasks autonomously, and sessions are analyzed at a later stage. Learn more about when to use unmoderated vs. moderated testing
Qualitative vs. Quantitative: Quantitative testing relies on numerical data, answering questions about "how much" or "how many". Qualitative testing delves into observations, comments, and feelings, providing insights into "why" something is happening. Maze supports both approaches: for instance, in an unmoderated maze test, you can add Opinion Scale or Multiple Choice blocks to gather quantitative data, and use Open Question blocks with dynamic AI follow-ups to dig further into those answers.
Remote vs. In-person: In-person testing provides the opportunity for more direct interaction, but may be more costly, logistically challenging, and time-consuming. Remote testing offers more flexibility and the ability to reach participants in different locations.
Each approach has its merits and drawbacks, and none is inherently better or worse. The key is to select the one that aligns best with your goals and resources.
Step 3: Outline your script & build your maze
The script is the backbone of your usability test, and it serves as a guide for designing a cohesive study.
Regardless of the chosen approach, the script should include any relevant context and instructions, warm-up/background questions, tasks to be performed, and wrap-up questions.
Warm-up questions
The script flow often begins with broader warm-up questions. Typical introductory questions include:
"Can you describe a day in your life in your current role?"
"What is the biggest challenge you face throughout your day?"
These are followed by questions designed to capture participants' current experiences and challenges:
"Can you describe your experience with similar products or services?"
"What are your main goals when using this type of product?"
Tasks
Next, create specific tasks for the participants. These tasks can take the form of a prototype test, a website test, or a card sorting exercise. For example:
"Find and apply a discount code to your shopping cart."
"Locate the help section and find information on returning a product."
Consider the order of tasks to avoid guiding users towards a specific success path, allowing for a genuine evaluation of their interaction with your product.
Make sure tasks are simple and clear so that participants understand what's expected of them. Aim for 3–4 tasks per study to avoid overwhelming your participants.
Learn best practices for writing mission tasks and descriptions
Wrap-up questions
Lastly, post-study questions allow participants to reflect on their experience and provide feedback on the tasks.
Here, you may use blocks such as Opinion Scale for ratings, or Open Question with dynamic AI follow-ups to dig further into the tester’s experience.
Examples:
"How easy was it to complete the tasks assigned? Please use a scale from 1 to 5, where 1 is very difficult and 5 is very easy."
"What was the most challenging part of the tasks, and why?"
"Do you have any suggestions for improving the process you just went through?"
Step 4: Set up your testing file
When testing prototypes, we highly recommend creating a dedicated Figma file for testing with Maze, to keep your testing environment clean and lightweight.
To do this, duplicate your Figma file and, in the new version, remove any pages, frames, assets, images, and elements not relevant to testing before importing.
Important guidelines:
- Include a note in the filename and/or within the testing file itself clearly stating that it is linked to a live maze and shouldn't be modified.
- Set a supported share permission for your Figma file; otherwise, the import will fail. Changing these settings to an unsupported option after import will prevent testers from opening the maze.
- Avoid making significant changes to a Figma file linked to a published maze, as these could cause the maze to stop working. For example, removing a screen or an interaction could make it impossible for participants to complete the maze.
Learn more tips about optimizing your Figma files for testing
Step 5: Import your prototype
Once you have a Figma file to test, log in to your account at app.maze.co and create your maze. Add a Prototype test block to the draft maze, and import your Figma prototype.
A few tips:
- The Figma plugin is a great option when importing very large files.
- Enable Clips to record participants as they go through your prototype test.
- Need to compare different versions of your prototype? Use a variant comparison block.
- Expected paths are the benchmark used to assess whether participants succeeded in the task.
Our team is currently working on a complete free-roam testing option, which will be released soon. In the meantime, you can find a workaround to set up your prototype to allow for path-less testing.
Step 6: Preview and publish
Reviewing your work before sharing a study with participants is a critical step to guaranteeing the effectiveness of your research. This involves identifying any typos, errors, or biases in the script or study build, as well as evaluating the clarity of questions and tasks.
Learn how to preview your unmoderated mazes
We recommend asking someone within your organization who isn’t directly involved in the project to review your study. Their fresh perspective may surface potential issues or areas for improvement that those closely involved have overlooked. Thoroughly previewing and testing your maze before publishing lets you address any problems and verify that the content and performance meet your expectations.
You can make some edits after publishing, but several changes are restricted to avoid misleading results due to modified variables.
Click Start Testing to set your maze live and choose from the various options for sharing with your testers.
Step 7: Share with testers
There are different approaches for distributing your maze to participants:
- Copy maze link: Share the maze link with internal or external testers via email, SMS, internal communication channels, etc. Learn more
- Hire from our Panel: Recruit participants from the Maze tester panel. Organization plan customers can use advanced filter criteria and screening questions to hire testers relevant to the target audience. Learn more
- In-product prompt: Add a popover to your website that directs users to the maze. This is also an effective way of recruiting new testers (see the sketch after this list). Learn more
- Send Reach campaign: Email your participant database and create segments for targeted audiences. Learn more
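For context on how an in-product prompt works, here’s a deliberately simplified, hypothetical TypeScript sketch of the underlying pattern: a small popover that links visitors to a maze. In practice, you’d use the prompt Maze configures for you rather than hand-rolling one; the URL, copy, and styling below are placeholders.

```typescript
// Hypothetical illustration only: Maze's in-product prompt is set up from
// within Maze, not hand-coded. This sketch just shows the general pattern
// of a popover inviting visitors to take a maze.
const MAZE_URL = "https://t.maze.co/your-maze-id"; // placeholder link

function showMazePrompt(): void {
  const prompt = document.createElement("div");
  prompt.style.cssText =
    "position:fixed;bottom:16px;right:16px;padding:16px;" +
    "background:#fff;border:1px solid #ccc;border-radius:8px;";
  prompt.innerHTML = `
    <p>Help us improve! Take a 5-minute usability test.</p>
    <a href="${MAZE_URL}" target="_blank" rel="noopener">Start the test</a>
  `;
  document.body.appendChild(prompt);
}

// Show the prompt once the page has loaded
window.addEventListener("load", showMazePrompt);
```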