Apr 19, 2022
Validating early concepts to improve the Maze Report
How Maze uses unmoderated concept testing to co-create a solution with users, prioritize product decisions, and gain stakeholder buy-in.
Task: Validate concepts to improve how users use the Maze Report
Timeline: April to July 2021
Role: Product Design Lead
Concept testing is about validating ideas with users early in the product development process to make informed decisions about what to build.
In this case study, we'll outline how we used unmoderated concept testing to co-create a solution with users, prioritize product decisions, and gain stakeholder buy-in.
1. Context
Maze is a product research platform that empowers teams to test, learn, and act rapidly. A key component of our product is the automated Maze Report, where the data collected from Maze tests are consolidated, formatted, and easily shared with team members.
We saw our users creating mazes and analyzing responses, but they weren't taking full advantage of the Maze Report to share their findings with the rest of their team—and we wanted to know why.
2. Understanding the problem
Before jumping into solutions, we took a step back to better understand how people organized their research, the tools they used, and the steps they took to share learnings with others. To start, we surveyed users who’d recently used the Maze Report, added a feedback form to the product to measure satisfaction, and gathered any related support tickets we could find in Zendesk.
We were also curious about how people interacted with the reports. So we used Hotjar and Amplitude to gather behavioral data on how often users visited reports, how they navigated through them, and how frequently they chose to 'Share report' with others.
3. Defining the problem
To make sense of it all, we organized the information into groups in Notion and created themes to summarize the data. Here are a few of the key learnings and takeaways that helped us get inside our users’ heads:
Finding #1: People wanted more control over how they shared their findings. We saw many people taking screenshots of their data, pasting them into a presentation tool, and then adding context, next steps, and other creative embellishments.
Finding #2: People share knowledge in a number of ways. Some respondents chose to present their findings in person or through a video presentation. Others chose to share via Slack, Notion, or email. While some people had a clear preference, over half (54%) were using a combination of methods. This told us that we needed to support both asynchronous and synchronous use cases for sharing research.
Finding #3: People process information in different ways, and sometimes data alone isn't enough to tell the full story. As a result, we needed to enable people to combine text and visual assets to efficiently communicate their learnings.
4. Potential solutions
The next step was to share what we’d learned with our product team so we could begin brainstorming solutions. We kept our users’ needs front and center to make sure our solutions stayed close to the actual problems people were experiencing.
We used the following statement as a springboard for ideating:
“How might we enable users to tell a better story with their data in the Maze Report?”
After individually generating possible solutions, we presented our ideas back to the group. Finally, we combined similar proposals and cast votes for the concepts we thought were the most promising.
The winners included adding:
1. More context: Let people add additional text, images, or quotes to their data to help highlight key insights, explain next steps, and create action items.
2. Executive summary: Provide a one-page list of top highlights summarizing the full report, along with options to personalize the presentation.
3. Build from scratch: Give users a blank canvas and let them select which blocks to include in a report to craft the story they want to tell.
We created simple mock-ups to communicate these ideas and kick-start the discussion. Our concepts were far from polished, but we were ready to get started with our Maze test.
The goal was to give the user just enough context to understand how this feature might work, while also leaving room for them to help us build on top of our proposal.
A Maze test aimed at understanding how to best improve our reports.
5. The Maze Test
There are lots of options for testing your concepts, from focus groups to moderated interviews. But a survey is by far the easiest and most efficient to set up. If you’re new to this space, check out our Maze guide to concept testing for an in-depth look at different methodologies and tools.
Next, we needed to select the appropriate survey testing methodology. The right choice depends on several factors, like your participant sample size, time, and budget constraints. In our case, we used protomonadic testing, where each respondent rates every concept individually before taking a comparative test to determine which variant they prefer (sketched in code after the list below).
When setting up our test, we considered the following factors:
- Include a sample image and explanation of each concept to clearly convey its purpose and how it would work.
- Capture both quantitative data (e.g. opinion scale, multiple choice) and qualitative data (e.g. open questions) to get a rounded view of a user’s perception of a concept.
- Ensure that all users see all three concepts before moving on to assess each concept individually. This helps reduce the risk of order bias.
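Maze handles this sequencing automatically, but to make the protomonadic flow concrete, here is a minimal sketch in Python. Everything in it (the function name, the step tuples, the per-participant shuffle) is illustrative rather than Maze's actual API:

```python
import random

# Hypothetical concept labels mirroring the three ideas we tested.
CONCEPTS = ["More context", "Executive summary", "Build from scratch"]

def build_protomonadic_flow(participant_id, seed=None):
    """Build a per-participant test plan: preview all concepts up front,
    rate each one individually in a randomized order (reducing order
    bias), then close with a single comparative preference question."""
    rng = random.Random(seed if seed is not None else participant_id)
    order = CONCEPTS[:]
    rng.shuffle(order)
    steps = [("preview", CONCEPTS)]                     # everyone sees all three first
    steps += [("rate", concept) for concept in order]   # monadic ratings, shuffled per person
    steps.append(("prefer", CONCEPTS))                  # comparative question at the end
    return {"participant": participant_id, "steps": steps}

if __name__ == "__main__":
    for step in build_protomonadic_flow("participant-001")["steps"]:
        print(step)
```

Seeding the shuffle per participant keeps each person's concept order stable across sessions while still randomizing the order across the panel.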
6. Results & next steps
Within one day of sending the Maze test to our users, we’d already collected 38 responses. So we brought the product team together to evaluate which concept we felt most confident pursuing. With a better-informed user voice in the room, we could discuss feasibility and viability trade-offs with far more nuance.
The maze results showed that "Concept A: Executive summary" had the highest combined weighted total, followed closely by "Concept B: Add context." However, after weighing scope, implementation, and time constraints, we decided to go with Concept B for the initial release, since it would be easier to build on in future iterations.
Quantitative and qualitative results captured from our Maze test.
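As an aside, a "combined weighted total" can be thought of as blending two signals: how each concept was rated individually, and how often it won the comparative preference question. Here is a minimal sketch of one such scoring scheme; the weights and response numbers below are hypothetical, not Maze's actual model:

```python
# Illustrative only: Maze computes these scores for you, and the weights and
# response data below are made up for the sake of the example.
RATING_WEIGHT = 0.6      # weight on the average opinion-scale rating (1-5)
PREFERENCE_WEIGHT = 0.4  # weight on the share of comparative-preference votes

responses = {
    "Concept A: Executive summary":  {"ratings": [4, 5, 4, 3, 5], "votes": 16},
    "Concept B: Add context":        {"ratings": [4, 4, 5, 4, 3], "votes": 14},
    "Concept C: Build from scratch": {"ratings": [3, 2, 4, 3, 3], "votes": 8},
}

total_votes = sum(data["votes"] for data in responses.values())

for concept, data in responses.items():
    avg_rating = sum(data["ratings"]) / len(data["ratings"])  # 1-5 scale
    vote_share = data["votes"] / total_votes                  # 0-1 share
    # Normalize the rating to 0-1 so both signals contribute comparably.
    score = RATING_WEIGHT * (avg_rating / 5) + PREFERENCE_WEIGHT * vote_share
    print(f"{concept}: combined weighted total = {score:.2f}")
```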
7. Key takeaways
- Set the stage: Introduce concept testing to your survey participants so they know what to expect from this type of test. It’s also a good idea to explain how many concepts they’ll review to give a clear idea of the time investment.
- Look out for bias: To avoid order bias, make sure the order in which concepts appear is randomized. Alternatively, when testing two or more concepts, use monadic testing and divide your concepts into separate mazes so each participant evaluates only one.
- Take advantage of unmoderated testing: By starting with unmoderated methods, we captured a wide range of learnings at scale while quickly sifting out the ideas that weren't worth pursuing. With our focus narrowed, we could then move to more time-intensive moderated sessions to dive deeper into specific topics.
- Keep users front and center: Human-centered design allows you to engage and collaborate with your users from the start. Avoid designing features solely from a business or technical standpoint.
The best solutions are found where desirability, feasibility, and viability meet.
8. Toolstack
- Notion (Knowledge base and compiling qualitative response data)
- Figma (Interface design)
- FigJam (Workshops)
- Hotjar (Understand user behavior across our product)
- Zendesk (Find supporting attitudinal data)
- Amplitude (Find supporting behavioral data)
- Autopilot (Communicate with users)