Harness the capabilities of GPT-4 with the AI Follow-up block. This block digs deeper into your testers' answers, much like you would in a moderated session.
You'll start by defining an initial open-ended question. From there, the AI assistant generates up to three follow-up questions based on each tester's response. This lets you delve further into answers and uncover insights that might otherwise remain hidden.
When results start coming in, the AI assistant summarizes each tester's initial response and the follow-up conversation. Analyzing multiple answers gives you a clearer picture of the collective sentiment.
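If you're curious how this works conceptually, the sketch below shows the general idea using the OpenAI Python library: the model receives the initial question and the tester's answer, and is asked for one neutral follow-up. Maze handles all of this for you behind the scenes; the function name and prompt wording here are illustrative assumptions, not Maze's actual implementation.

```python
# Conceptual sketch only; Maze runs this logic for you behind the scenes.
# The function name, prompt wording, and example data are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set


def generate_follow_up(initial_question: str, tester_answer: str) -> str:
    """Ask GPT-4 for one neutral follow-up question about a tester's answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # model name shown for illustration
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a UX research assistant. Ask exactly one neutral, "
                    "unbiased follow-up question about the tester's answer."
                ),
            },
            {"role": "user", "content": f"Initial question: {initial_question}"},
            {"role": "user", "content": f"Tester's answer: {tester_answer}"},
        ],
    )
    return response.choices[0].message.content


# Example: one follow-up generated from a single tester's answer
print(generate_follow_up(
    "How was your experience setting up your first project?",
    "Mostly fine, but I got stuck on the permissions step.",
))
```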
In this article:
- Who can use this feature
- How confident can I be about the quality of the AI-generated follow-up questions?
- Adding an AI block
- Previewing your AI block
- Supported languages
- Understanding your AI block results
- Flagging follow-up questions and results
Who can use this feature
AI follow-ups are available on the Organization plan.
How confident can I be about the quality of the AI-generated follow-up questions?
We block content that goes against OpenAI’s usage policies.
Maze also gives the AI specific instructions to follow research best practices (a simplified illustration follows this list). These include, but aren't limited to:
- Not asking leading or biased questions. Questions are formulated neutrally and objectively, avoiding any language or phrasing that could steer participants towards a particular response or opinion.
- Only asking one question at a time. This helps avoid ambiguity and simplifies analysis, making it easier to draw meaningful conclusions from your research findings.
- Other survey/question design principles
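For illustration only, here's one way guidelines like these could be written out as instructions for the model. It's a simplified sketch in Python; the wording is our own and is not Maze's actual prompt.

```python
# Simplified illustration; not Maze's actual prompt.
# The guideline wording below is an assumption written for this example.
RESEARCH_GUIDELINES = """\
- Never ask leading or biased questions; stay neutral and objective.
- Ask exactly one question at a time.
- Keep each question short, specific, and easy to understand.
"""


def build_system_prompt(max_follow_ups: int = 3) -> str:
    """Combine the assistant's role with the research guidelines above."""
    return (
        "You are a UX research assistant asking follow-up questions.\n"
        f"Ask at most {max_follow_ups} follow-up questions in total.\n"
        f"Follow these rules:\n{RESEARCH_GUIDELINES}"
    )


print(build_system_prompt())
```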
Please note that AI models are still maturing, so it's not possible to fully guarantee they'll respond exactly as you intend.
That said, testers can stop the conversation at any time, and the reporting mechanism allows maze creators and testers to flag poor-quality questions.
We also recommend testing out your AI block setup via the Preview panel before sharing your maze with testers.
Adding an AI block
To add an AI block:
- In the blocks list of your draft maze, click Add block.
- Select AI follow-up from the drop-down.
- Enter the initial question.
- Define the desired number of follow-up questions, up to three.
Previewing your AI block
In the Preview panel on the right-hand side, you can test how this block will appear to testers.
After entering your question, click Simulate answer to have the AI assistant generate an answer the same way a tester would.
The option to simulate an answer is also available in the maze preview. Learn more about previewing your mazes.
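Conceptually, simulating an answer just asks the model to reply the way a typical tester might. The sketch below illustrates that idea with the OpenAI Python library; it isn't how the Preview panel is implemented, and the function name and prompt wording are assumptions.

```python
# Conceptual sketch only; the Preview panel does this for you when you click
# "Simulate answer". The function name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set


def simulate_tester_answer(question: str) -> str:
    """Ask GPT-4 to answer the way a typical usability-test participant might."""
    response = client.chat.completions.create(
        model="gpt-4",  # model name shown for illustration
        messages=[
            {
                "role": "system",
                "content": "Answer briefly, as a typical usability-test participant would.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(simulate_tester_answer("How was your experience setting up your first project?"))
```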
Testers always have the option to skip AI-generated follow-up questions.
Supported languages
GPT-4's primary language is English, but it can also handle other languages, since it was trained on a wide variety of multilingual internet text. This includes data in Spanish, French, German, Italian, Portuguese, Dutch, Russian, Chinese, Korean, and more.
However, while it can generate text in languages other than English, its proficiency and accuracy may vary across languages. We recommend previewing and testing your AI blocks to ensure they perform well in your desired language.
The exact number of languages GPT-4 supports can vary as OpenAI updates and improves the model. They've been working on expanding language support; future updates may bring increased proficiency in additional languages. Please refer to OpenAI's official documentation for the most up-to-date information on language support.
Understanding your AI block results
Sentiment
As testers complete your maze, you'll start seeing insights on the Results dashboard.
Select the AI follow-up block to see an analysis of the collective sentiment across all responses, split into Positive, Neutral, and Negative.
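If it helps to picture how the collective view comes together, the short sketch below simply tallies per-response sentiment labels into Positive, Neutral, and Negative counts. It's an illustration of the idea only, not Maze's implementation, and the sample labels are made up.

```python
# Minimal illustration of aggregating per-response sentiment into a collective
# view; not Maze's implementation. The sample labels below are made up.
from collections import Counter

# In Maze, each response's sentiment is attributed by the AI (and can be
# overridden by you); here we just use a hand-written sample list.
response_sentiments = ["Positive", "Positive", "Neutral", "Negative", "Positive"]

counts = Counter(response_sentiments)
total = len(response_sentiments)

for label in ("Positive", "Neutral", "Negative"):
    share = counts[label] / total * 100
    print(f"{label}: {counts[label]} responses ({share:.0f}%)")
```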
Themes
Themes enable you to label responses to an AI follow-up block, helping you identify recurring patterns across results.
Responses
Scroll down on the Results page to see AI-generated summaries for each tester's response on the block.
Click View conversation under each summary to see the exact answers to the initial and follow-up questions.
Use the sentiment drop-down if you need to override the AI-attributed sentiment of an individual response.
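If you find a mental model useful, you can think of each result as a small record: the initial answer, the follow-up exchange, an AI-generated summary, and a sentiment label that you can override. The sketch below is purely illustrative and is not Maze's actual data format.

```python
# Purely illustrative mental model of a single result; not Maze's data format.
from dataclasses import dataclass, field


@dataclass
class FollowUpResult:
    initial_question: str
    initial_answer: str
    follow_ups: list[tuple[str, str]] = field(default_factory=list)  # (question, answer) pairs
    summary: str = ""           # AI-generated summary shown on the Results page
    sentiment: str = "Neutral"  # AI-attributed; can be overridden via the drop-down


result = FollowUpResult(
    initial_question="How was your experience setting up your first project?",
    initial_answer="Mostly fine, but I got stuck on the permissions step.",
    follow_ups=[("What made the permissions step confusing?", "I couldn't find the invite option.")],
    summary="Setup went smoothly except for confusion around permissions.",
    sentiment="Neutral",
)

result.sentiment = "Negative"  # overriding the AI-attributed sentiment
print(result.sentiment, "|", result.summary)
```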
Maze report
Maze reports make it easy to analyze, share, and present your results data. Maze auto-generates a report for every live maze tested with at least one tester.
See a demo report with AI follow-up results
Flagging follow-up questions and results
If you're not happy with the AI-generated follow-up questions, summary, or sentiment analysis for a specific answer, you can report it. These reports help our team further refine the model, so follow-up questions become more relevant and summaries more accurate and concise.
To report an issue with the AI-generated content of the block:
- Open the More menu (•••) next to the answer.
- Select Report from the drop-down.
- Select your reason for reporting: poor-quality follow-up questions, summary, or sentiment.
- If possible, add more details about why you're reporting the AI-generated questions or results. This will help our team better understand the issue and train the model accordingly.
Still need help?
If you have any questions or concerns, please let our Support team know — we'll be happy to help!