Discussion instructions in AI Moderated studies let you customize how the AI moderator conducts follow-up conversations with participants. Think of it as briefing your moderator before the session begins.
Where to add discussion instructions
There are two types of discussion styles in Maze, and where you add discussion instructions depends on which type you're using.
Freeform goals: this discussion style allows the AI to drive the conversation freely. Discussion instructions are attached at the goal level, so your instruction shapes how the moderator explores the entire topic.
Structured style: this discussion style follows the specific questions you've written. Discussion instructions are attached at the question level and apply only when follow-ups are enabled, so your instruction shapes how the moderator probes after each individual question.
An instruction you add to one goal or question only applies there. It won't carry over to other parts of the study. This is intentional: it keeps the moderator's behavior accurate and predictable within each part of the conversation.
What's already built in
Before you add any custom discussion instructions, the AI moderator already follows core qualitative research principles out of the box:
- Explores rather than summarizes: The moderator invites participants to walk through their experience step-by-step instead of asking for a verdict. It asks one question at a time and uses the participant's own language when following up, so conversations feel natural and grounded.
- Follows the "why" thread: When a participant says something like "I liked it" or "I wouldn't use that," the moderator digs into the reasoning. When someone generalizes with "I always" or "it never works," it asks for a specific recent example.
- Stays neutral: No validation, no nudging. If there's an image on screen, the moderator refers to it neutrally and lets the participant describe what they see. It's comfortable with silence and won't rush to fill pauses.
- Respects boundaries: If a participant signals discomfort, the moderator backs off gracefully without pressing further.
When to use discussion instructions
Here are some common reasons teams add custom instructions:
Giving context the moderator doesn't have
"These participants were recruited because they cancelled their subscription in the last 30 days. You don't need to establish whether they cancelled. Focus on the events and feelings leading up to that decision."
"Our users typically check the dashboard 2-3 times a day. If someone mentions checking it significantly more or less than that, that's unusual and worth exploring."
Setting boundaries on what the moderator should avoid
"Do not mention any competitor names, even if the participant brings them up. Redirect by asking about their general experience instead."
"Never reference the pricing or cost of the product. If a participant asks about price, say that's outside the scope of this conversation and move on."
Accounting for cultural or audience context
"Participants are based in X country. They may express dissatisfaction indirectly, using phrases like 'it's okay' or 'it could be better.' Treat those as signals worth exploring, not as neutral responses."
Adding conditional follow-ups based on what participants reveal
"If a participant mentions they work in a team of more than 10 people, ask how decisions about new tools typically get made in their team."
Emphasizing a specific angle or sensitivity
"This study explores personal health routines. If a participant seems hesitant or uncomfortable at any point, acknowledge it warmly and offer to skip ahead. Do not press for details on medical conditions."
Templates
We've created a set of templates based on common qualitative research approaches. Applying a template doesn't just paste generic text: the system reads your discussion goal and uses the template's methodology to generate a discussion instruction tailored to that goal.
Templates are a starting point. Review and adjust them to fit your specific study context.
How reliably does the AI follow instructions?
We've tested discussion instructions extensively, and in the vast majority of sessions the AI moderator follows them as written. That said, here are some honest expectations:
- It's an AI, not a script: Just like prompting ChatGPT or Claude, the moderator interprets your instructions with some flexibility. Occasionally it may rephrase or reorder things slightly. The intent will be followed, but the exact wording may vary.
- Safety guardrails take priority: If your instruction conflicts with baseline research ethics (for example, asking the moderator to pressure a participant), the guardrail will override your instruction. This is by design.
- Clarity helps: The more specific and unambiguous your instruction, the more consistently it will be followed.
Best practices
Write instructions for the moderator, not the participant: You're briefing a colleague, not writing a questionnaire. Say "Ask the participant to describe…" rather than "Describe your experience with…"
Avoid duplicating your question list: If you've already added "How often do you use this feature?" as a study question, don't repeat it as a discussion instruction.
Favor specificity over thoroughness: Two sharp, concrete instructions outperform a wall of general guidance. The moderator already knows how to run a good interview, so your job is to tell it what's unique about this one.
Test with a dry run: Before launching, do a quick pilot session yourself. You'll quickly spot if an instruction is being misinterpreted or if something important is missing.
Still need help?
If you have any questions or concerns, please let our Support team know — we'll be happy to help!