Maze was built on a simple idea: if teams could move faster with user insights, they’d make better decisions.
That’s why we didn’t just streamline research; we rebuilt it around speed. From recruiting to testing to analysis, we integrated and accelerated every step. The result? Research became fast enough to actually happen. And that velocity became our edge.
Speed wasn’t just a feature. It was the unlock.
It still is.
But the definition of speed is changing again—and so is the way products get built.
AI is rewriting the rules of product development
LLMs are transforming how teams ideate, prototype, and ship.
Suddenly, decisions that used to take weeks now happen in hours. The opportunity to learn at that same pace is huge.
And yet, we’re seeing two troubling trends emerge.
First, some companies are skipping insights altogether—shipping feature after feature in the hope that something sticks. It’s velocity at the expense of understanding.
The result? Products that solve for output, not outcomes.
Second, and arguably worse, others are replacing real insight with automated noise. We’ve all seen it: AI-generated interviews that sound coherent but say nothing. Summaries that flatten nuance. ‘Insights’ no one can verify, trace, or trust.
Neither path builds better products. Both erode confidence—in research, in teams, and ultimately, in decisions.
That’s not a flaw in the tech. It’s a failure in how it’s being applied. Speed without trust isn’t a solution. It’s a liability.
Value, not hype
Maze has always taken the long view.
Two years ago, we quietly launched Follow-up—an AI-powered feature that lets you dynamically probe user responses in unmoderated tests. At the time, it was cutting-edge. But we didn’t brand it as ‘AI’ at launch.
Why?
Because we’ve never built for hype or headlines. We’ve built for value.
We believe tools should earn their place in your workflow—not demand your trust just because they’re new. That same belief is guiding what we’re building now.
We’re not building AI for the sake of AI. We’re building tools that make researchers more effective—and make research more credible.
Introducing AI moderator: your research co-pilot
So here’s the bit where I announce that yes, we’re building an AI moderator for user interviews.
But this isn’t a chatbot in a lab coat.
AI moderator is a purpose-built assistant that helps researchers:
- Ask better questions on the fly
- Follow up dynamically in real time
- Synthesize with transparency and traceability
- (And much more we’ll reveal in time…)
It’s not here to replace you—it’s here to amplify you.
In a recent Maze article on humans vs. AI-led research, we described a new, hybrid model of collaborative intelligence. AI moderator is that model in practice.
The insight behind the idea: What we understand about researchers
If you ask our users what they love most about Maze, the answer is almost always the same: it’s easy to use. But that ease isn’t accidental—it’s designed.
From day one, Maze was built for access. For speed. For the product team that needed insights yesterday, and the company just starting to scale research.
We’ve always believed that making research easier to run doesn’t mean making it less rigorous. It just means designing for the real world: where time is short, teams are stretched, and insights need to move at the speed of product.
We’ve done this by building for the lowest-maturity organizations first—then layering in depth. It’s why Maze feels simple and powerful. And it’s why ease is so hard to copy: it’s not a UI decision, it’s a product philosophy.
With AI moderator, we followed the same principle. We ran the research (of course we did—we’re a research company), and we heard one thing loud and clear: the biggest blocker to AI adoption in research isn’t capability—it’s trust.
Researchers told us:
“If I’m going to put this in front of my team, I need to know it won’t produce garbage insights. I need to stay in control.”
So that’s what we built. We call AI moderator a co-pilot for a reason:
- You stay in the driver’s seat
- Every quote is traceable
- Every insight is editable
- Every output is grounded in your goals, your context, your judgment
And because real insight doesn’t happen in isolation, AI moderator is part of something bigger: a full-stack research platform where moderated interviews, usability tests, card sorts, and survey threads live in one place.
So your insights connect. Compound. Scale. Without duct tape.
Because you’re not just running studies—you’re defending decisions. And credibility isn’t optional.
Why closed beta? Because context is non-negotiable
We’re releasing AI moderator in a closed beta—not because the tech isn’t ready, but because we know real research doesn’t happen in a vacuum.
It happens in complex teams, with organizational dynamics, shifting priorities, nuanced context, and real stakes.
This beta isn’t about testing features. It’s about validating fit.
Can this tool elevate your system? Can it scale your standards? Can it earn trust across your org?
Simply dropping an AI moderator and saying ‘right, that’s shipped’ would be irresponsible. AI is here to stay, and we believe it can accelerate insights and empower researchers to move faster and more strategically.
But only if organizations build AI tools with intentionality and trust at the forefront.
That’s what we’re building toward—and we want to build it with you.
Built for the full-stack researcher
If you’re reading this, you might be a full-stack researcher yourself.
You don’t just run studies. You architect systems. You coach teams. You influence roadmaps.
You connect speed with rigor. You turn research into infrastructure.
AI moderator is your leverage engine—not a shortcut, not a gimmick.
Because the future of research isn’t about doing more with less. It’s about scaling the value you already bring—with tools that respect the complexity of your work.
This isn’t just another feature launch. It’s a complete reframe, and a strategic step forward.
We’ve never seen research as a service function. We see it as infrastructure.
Because research doesn’t just deserve a seat at the table. It should shape the conversation. When it’s done both fast and right, research becomes the backbone of organizational decision-making—not a bottleneck to be bypassed.
So we’re building the infrastructure for AI-assisted research—without cutting corners.
We’re not trying to be the first to ship AI. We’re trying to be the first to get it right.
Want to help shape this?
We’re looking for forward-thinking researchers to join our closed beta for AI moderator.
If you’re building systems, pushing for influence, and ready to reimagine the role of AI in research—we want to work with you.
Let’s redefine what research can be. Together.
—Jo