Chapter 3
Product experimentation: How to conduct and learn from experiments
By now, you have conducted product research to help inform the product strategy. You know exactly what your users’ pain points are and how the existing options on the market are solving them. It's now time to test your product hypotheses and determine the most effective solutions. Learn what makes a successful product experiment, when to run one, and how to implement the results.
What is product experimentation?
Product experimentation is the process of testing different hypotheses for product improvement through quantitative and qualitative data. These improvements can range from small design and color changes to new features. You can run many possible experiments to optimize your product, including A/B tests, usability tests, multivariate testing, focus groups, or user interviews.
What are the components of a successful product experiment?
Product experimentation is a key part of the product development process. You can build a culture of product experimentation by running experiments continuously throughout the product lifecycle. The goal is to make small changes to your product’s design or functionality to ensure you’re always delivering the best possible experience.
To conduct successful product experiments, you should focus on:
- Centering the user: The experiment itself should pass or fail based on the users’ feedback and adoption metrics, but the decision of what to experiment with should also come from the user. For example, you can conduct a voice of customer analysis or host quarterly customer feedback sessions to maintain user-centricity.
- Implementing cross-functional teams: All product and customer-facing teams should collaborate on product research and experimentation. Since they directly engage with the user, they can distill what users value. This also keeps them updated about the latest product advancements they can use as selling or retention points.
- Learning continuously: The beauty of product experimentation is that you learn how your users experience your product when presented with changes. You get to understand your users' needs and behaviors through continuous product discovery and research, which helps you develop the best possible solution based on evidence—not assumptions.
- Finding an easy-to-use testing platform: Testing tools let you run different types of experiments and review large amounts of data at scale. So, it’s crucial that you get a user testing platform like Maze that everyone on your team can use and that lets you gather insights on your users' behavior.
Your step-by-step guide to product experimentation
After you’ve completed the product research process, consider conducting product experiments on your live product. Here's how to do it:
1. Set your experiment goals
You need to set goals for your research and make sure they’re aligned with your product strategy and overall business objectives. Focus on running experiments that influence bigger goals. For example, if one of your business KPIs is to increase time on page, think about how your experiments put you closer to reaching the target.
Take a look at these example goals:
⛔ Implement a new website banner based on A/B testing results
⛔ Test three different ‘Learn more’ button placements on the mobile app
✅ Increase time on page by experimenting with different banner designs
✅ Increase the number of marketing qualified leads (MQLs) coming from mobile devices by testing multiple ‘Learn more’ button placements
The first two goals might end up driving an increase in time on page or MQLs, but unless you specify that aim, you may not be able to capture the real impact of your experiments.
To narrow down your results and keep your experiments on track, try setting a North Star goal and counter metrics.
- North Star goal: A strategic target that guides the team toward a common objective
- Counter metrics: Cautionary metrics to ensure you're on track to reach the North Star goal
For example, if your North Star goal is to reach 100 new daily sign-ups, you might use traffic rates, time on page, or 'learn more' button clicks as counter metrics. Pay close attention to your counter metrics while your experiments run; a significant drop in any of them can be an early warning that your North Star metric is at risk.
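If it helps to make this concrete, here's a minimal sketch of how you could codify a North Star goal with counter-metric guardrails. The metric names, values, and thresholds are illustrative assumptions, not figures from this guide:

```python
# Minimal guardrail sketch: track a North Star metric alongside counter metrics.
# All metric names, values, and thresholds below are illustrative assumptions.

north_star = {"metric": "daily_signups", "target": 100}

# Counter metrics with the minimum values you're willing to tolerate
# while an experiment is running.
counter_metrics = {
    "traffic_rate": {"current": 5400, "floor": 5000},
    "time_on_page_seconds": {"current": 47, "floor": 45},
    "learn_more_clicks": {"current": 310, "floor": 300},
}

def check_guardrails(counters: dict) -> list[str]:
    """Return the counter metrics that have dropped below their floor."""
    return [name for name, m in counters.items() if m["current"] < m["floor"]]

breached = check_guardrails(counter_metrics)
if breached:
    print(f"Pause and review the experiment; guardrails breached: {breached}")
else:
    print(f"Counter metrics healthy; keep tracking {north_star['metric']} "
          f"toward {north_star['target']}.")
```

A check like this can run on a schedule during an experiment so a dip in a counter metric surfaces before it shows up in your North Star number.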
2. Formulate a hypothesis
Remember, hypotheses aren’t siloed ideas. They’re well-researched statements that you’re trying to validate or debunk. “Well-defined goals should guide the experimental design and analysis—and having a clear and testable hypothesis is crucial for meaningful results,” says Sonal Srivastava, Senior UX Researcher at Amazon.
When you formulate a hypothesis for your product experiments, come up with a clear description of what you expect to happen based on previous product, market, and user research. Here’s how to do it:
- Determine the key elements you’re testing in your experiment (pricing strategies, specific features, design changes, new user paths)
- Outline the expected result and say what you believe is going to happen
- State your assumptions based on your expectations
- Write the hypothesis using this formula: "If [change/experiment], then [expected impact], because [assumption]"
For example, a hypothesis could look like this:
If we move the 'Learn more' button to the top right corner, then we expect users to click on it more. That's because findings from usability tests showed that most users clicked on the upper right side of the website when asked to sign up for the newsletter.
3. Choose your testing methods
Based on your goals, hypothesis, and resources, you should determine how you’ll conduct and measure your experiments. Use a mix of quantitative and qualitative data to make decisions. As Sonal explains, “Qualitative insights shed light on why users may prefer one option over the other, even if the quantitative results appear similar.”
The most common methods for product experimentation are:
- A/B testing (or split testing): In A/B testing, you split the audience into two groups and show each group a different version of your design to see which performs better. You can conduct A/B tests on your landing page copy and see which variant drives more conversions, test button placements, or test design ideas.
- Usability testing: Testing usability is crucial as it tells you how your users interact with your product and whether they find it intuitive and easy to use. Quantitative tests collect usability metrics like path completion, task completion rate, or error rate to assess performance. But you also need to conduct qualitative tests to listen to and observe how your users feel about and experience your product.
- Multivariate testing: Similar to A/B testing, this is the simultaneous evaluation of multiple variations of a product, feature, or design. Use multivariate testing to analyze the compound impact of different elements or variations of your product. This method helps you determine the highest-performing combination of changes (see the sketch after this list for how quickly those combinations add up).
- Tree testing: Evaluate the intuitiveness of your product’s information architecture or navigation structure. With tree testing, you can gather insights into user mental models and determine how easily they can find what they're looking for. For example, you can discover that your users expect to find your website's blog under 'resources' and not 'more.'
- User interviews: Talk to your users and ask them questions regarding your experiment. These conversations usually happen one-on-one with the participant and the moderator. During these sessions, you can ask open-ended questions following a structured script or allow the participant to speak aloud as they navigate your product.
- Fake door testing: This method helps you test the demand for a new feature or product without investing in its development. Add the new option to your website, implement a tracker, and send the people who click on it to a landing page explaining that the feature isn't available (yet). After a set time, you can analyze the number of clicks and see if there’s enough interest to start exploring the idea further.
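To make the difference between A/B and multivariate testing more concrete, here's a minimal sketch of how multivariate variants multiply. The elements and options below are illustrative assumptions, not recommendations from this guide:

```python
# Minimal multivariate-testing sketch: every combination of element options
# becomes one variant to test. The elements and options are made up.
from itertools import product

elements = {
    "headline": ["Start your free trial", "See it in action"],
    "button_color": ["green", "purple"],
    "button_placement": ["top right", "below the fold"],
}

# Each combination of options is one variant in the multivariate test.
variants = [dict(zip(elements, combo)) for combo in product(*elements.values())]

print(f"{len(variants)} variants to test")  # 2 x 2 x 2 = 8
for i, variant in enumerate(variants, start=1):
    print(f"Variant {i}: {variant}")
```

Even three elements with two options each already produce eight variants, which is why multivariate tests need considerably more traffic than a simple A/B test.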
Product tip💡
Use a usability testing tool like Maze to conduct usability tests and use the Clips feature to gather qualitative and quantitative insights. Record your participants' audio, video, and screen so you can paint a full picture of your product's usability.
4. Design variations for your experiment
You could design a low-fidelity paper sketch or a high-fidelity clickable design to run your experiment. Work closely with your design team to ensure previous user insights influence each variation. The method you're using for this experiment will determine the number of variations you'll test—and the teams that need to be involved in the design. Examples of potential experiment versions include:
- Testing landing page copy: Make mockups with different copy and CTAs
- Experimenting with colors: Get the design team to come up with different versions of the branding
- Designing flows: Ask the UX team to design a few versions of expected user paths
Use prototyping tools like Figma, Sketch, Marvel, InDesign, or Adobe XD to create these clickable versions of the experiment. Then, bring those designs into a testing tool like Maze to run your experiments.
Netflix is an example of a company that is constantly testing innovations. Juliette Aurisset, Director of Product Experimentation at Netflix, explains that "if every single Netflix member is, on average, in 20 experiments and each experiment has three variants, that's 3.5 billion Netflix experiences." So, everyone goes through a unique Netflix experience, and you can do the same at a smaller scale with your product.
5. Find the right audience
When testing your experiments, you want to get a representative sample of your actual and potential users. It's essential to determine who you'll be testing and how many people you'll need to ensure your experiment is viable. Try to gather a diverse group of research participants to avoid running a biased study.
A good way to conduct legitimate research is by assigning tests randomly. “Randomly assigning users to different experimental conditions helps minimize bias and ensures a fair comparison,” says Sonal. “Sufficient sample size is important to obtain reliable and statistically significant results. And, employing appropriate statistical methods to analyze the collected data helps for accurate interpretation.”
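To illustrate the sample-size point, here's a rough sketch of the standard two-proportion sample-size calculation. The baseline conversion rate and minimum detectable effect below are assumptions you'd swap for your own numbers:

```python
# Rough sample-size sketch for comparing two conversion rates (one per variant).
# The baseline rate and minimum detectable effect are illustrative assumptions.
from scipy.stats import norm

def sample_size_per_variant(baseline: float, mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per variant to detect an absolute lift of `mde`."""
    p1, p2 = baseline, baseline + mde
    p_avg = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    numerator = (z_alpha * (2 * p_avg * (1 - p_avg)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / mde ** 2) + 1

# Example: a 4% baseline conversion rate, hoping to detect a lift to 5%.
print(sample_size_per_variant(baseline=0.04, mde=0.01), "users per variant")
```

Even a small lift on a low baseline rate typically calls for thousands of users per variant, which is worth knowing before you commit to an experiment.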
In product experiments, unlike other types of research, you want to use your users as a primary source of information. So, make sure you conduct experiments using your current users as your audience.
6. Carry out controlled experiments
Once you've developed your designs and know who you're testing, you can start conducting experiments. You can run studies on your live website or use mockups or simulations. Launch your experiments in your testing tool and let them run until you've reached your planned sample size before reviewing the data.
If you're testing your live website or product, make sure you review your counter metrics frequently so your experiments don't hurt overall performance. Also, avoid exposing all users to a new idea at the same time. Carrying out controlled experiments ensures that a variation that goes wrong only affects a small share of your users.
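One common way to keep an experiment controlled is to expose only a small, stable slice of users to the new variant. Here's a minimal sketch using deterministic hashing; the experiment name and the 10% rollout share are illustrative assumptions:

```python
# Minimal controlled-rollout sketch: deterministically assign each user to the
# control or treatment group, exposing only a small share of traffic.
# The experiment name and 10% rollout are illustrative assumptions.
import hashlib

def assign_variant(user_id: str, experiment: str, rollout_percent: int = 10) -> str:
    """Return 'treatment' for a stable ~rollout_percent% of users, else 'control'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in the range [0, 100)
    return "treatment" if bucket < rollout_percent else "control"

# The same user always lands in the same bucket for a given experiment.
for user in ["user_001", "user_002", "user_003"]:
    print(user, assign_variant(user, "learn-more-button-placement"))
```

Because the assignment is derived from the user ID rather than stored session state, users keep seeing the same variant across visits, and widening the rollout is just a matter of raising the percentage.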
7. Track different metrics to determine the success of your experiment
When you build a culture of experimentation, you can monitor different metrics depending on the various types of experiments. Choose metrics that will determine the success of your experiment and help you decide which solution works best for your audiences, all while keeping the bigger goal in mind.
"I look beyond the primary metric to consider secondary metrics that may reveal additional insights. Analyzing user behavior, engagement, or other relevant aspects often uncovers nuanced differences between the options," says Sonal.
When multiple diverse audiences use your product, you can experiment with algorithms to offer a personalized product and experience. As Sonal explains, "While overall results may be similar, variations can exist among specific user segments. Understanding these differences lets you tailor the product to better meet the needs of different user groups."
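As a small illustration of that segment-level point, a simple group-by can surface differences that the overall numbers hide. The data, segments, and variants below are made up for the example:

```python
# Illustrative segment breakdown: overall conversion can look similar across
# variants while individual segments behave very differently. Data is made up.
import pandas as pd

results = pd.DataFrame({
    "variant":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "segment":   ["mobile", "mobile", "desktop", "desktop"] * 2,
    "converted": [1, 0, 0, 1, 1, 1, 0, 0],
})

overall = results.groupby("variant")["converted"].mean()
by_segment = results.groupby(["variant", "segment"])["converted"].mean()

print(overall)      # both variants convert at the same overall rate...
print(by_segment)   # ...but mobile favors B while desktop favors A
```

In a real analysis you'd have far more rows and segments, but the principle is the same: slice the results before declaring a tie.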
8. Analyze the data
Once you've completed your experiments, review the results. "Based on user interactions, behavior, or other desired outcomes, use statistical analysis to compare variations and assess the experiment's impact on the chosen metrics," explains Sonal. Look at the metrics in context and review the results with statistical comparisons.
If the results aren’t conclusive, meaning they’re too similar to call, you have a few options:
- Assess the statistical significance: Determine whether your results are reliable and not caused by chance (see the sketch after this list). "Even if the results appear similar, a narrower confidence interval and higher statistical significance provide more reliable findings," explains Sonal.
- Test again: In product-facing research, you should always ensure the user is the one making decisions. If you're in doubt, iterate and retest. “I recommend iterating and testing again if the results remain inconclusive. Refining the options or introducing new variations allows you to gather more data and validate findings," says Sonal.
- Consider additional factors: Assess what other factors might have influenced the results. “If you can’t determine a clear winner, I recommend considering other factors such as development costs, implementation complexity, or user impact. Assessing trade-offs and making decisions beyond immediate experiment results will help guide the next steps,” shares Sonal.
- Craft personalized user experiences (UX): If you get similar results on different groups, consider creating tailored experiences for each user based on their unique preferences. Conduct UX research before trying product experiments in this area to ensure you’re making the right assumptions.
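For the statistical significance check mentioned in the first option above, a two-proportion z-test is a common starting point. This is a minimal sketch, and the conversion counts are illustrative assumptions:

```python
# Minimal significance sketch for an A/B-style experiment: two-proportion z-test
# plus a confidence interval for the difference. The counts are illustrative.
from math import sqrt
from scipy.stats import norm

def compare_conversions(conv_a: int, n_a: int, conv_b: int, n_b: int,
                        alpha: float = 0.05) -> dict:
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion for the z-test under the "no difference" hypothesis
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pooled = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pooled
    p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value
    # Unpooled standard error for the confidence interval of the difference
    se_diff = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    margin = norm.ppf(1 - alpha / 2) * se_diff
    return {
        "lift": p_b - p_a,
        "p_value": p_value,
        "confidence_interval": (p_b - p_a - margin, p_b - p_a + margin),
        "significant": p_value < alpha,
    }

# Example: variant A converted 480 of 10,000 users, variant B 540 of 10,000.
print(compare_conversions(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000))
```

If the confidence interval is wide or straddles zero, that's a sign to keep the test running or iterate on the variations rather than call a winner.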
9. Present your conclusions
Your penultimate step is to set up a meeting with stakeholders to go through your findings. Explain the goals, methods used, audience size, and results. Then, list all the action steps for each team. Open the floor for feedback, and populate the backlog with new action items for sales, marketing, development, product, and research teams.
This phase is crucial to reflect on what you've achieved, keep all stakeholders in touch with the users’ preferences, and reinforce the importance of making user-centric product and business decisions.
10. Iterate and continue experimenting with your product
When you follow a continuous product discovery approach, product experimentation isn’t a one-time thing. Remember the Netflix example: this company frequently conducts product experiments to keep innovating and offering the best product experience possible.
Continuous product experimentation becomes a competitive advantage and can help you grow or stay at the top of your industry by listening to your customers first and often. "Product organizations need to truly understand their customers to be competitive," says Xiangyi Tang, Head of User Research at Pitch, in the 2023 Continuous Research Report.
Iterate and continue experimenting with your product to build solutions for your users. "With continuous research, product organizations can constantly course-correct to work on the right problem and provide the right solution," says Xiangyi.
When to run product experiments
Typically, you should run product experiments after you’ve conducted initial product, user, and design research. This way, you’ll have a better understanding of your target audience and confirmation that your concepts and solutions work. From there, it’s a matter of identifying which options work best.
Skipping previous research on your initial design or early minimum viable product can lead you to spend valuable resources testing ideas that aren’t aligned with your user base. Consider running experiments on your product when:
- Developing a new feature: Determine the best way to present the feature in your product. You can test the feature name, its design, the UX, or the functionality itself. Try A/B testing different names or colors or fake door testing your feature idea.
- Optimizing the user interface (UI): Gather your users’ opinions on your UI changes to implement the design they enjoy the most. If you’re using Maze, you can do this through Live Website Testing, In-Product Prompts, Surveys, or Prototype Testing before launch.
- Building a pricing strategy: Experiment with different price points and strategies and implement the ones that drive the most conversions. You can test various discounts or odd-even pricing. For example, offering plans at $9.99 instead of $10 might increase your sales.
- Testing copy to improve conversions: You may not be the best judge of your product's copy since you're already familiar with what it does and the terms you use internally. Your users need to clearly understand what you do and what they're getting out of your product. So, test your copy through surveys, five-second tests, and opinion scales.
- Improving the onboarding process: A high conversion rate means nothing if you can’t retain your customers. Experiment with your user onboarding process until you're confident you can get your new customers up to speed with your product.
Reap the benefits of running product experiments
Running experiments on your product helps you create products that serve your customers better and drive superior business growth. Additional benefits include:
1. Increase customer retention rates: By continuously listening to customer feedback, innovating on your product, and taking the time to understand the customer’s journey, product experimentation can boost your retention rate
2. Gain a competitive advantage: Running product experiments allows you to stay one step ahead of your competitors as you constantly evolve based on market and users' demands and industry trends
3. Shorten decision-making loops: When you focus on the user and make evidence-based decisions after experiments, you speed up the decision-making time and take innovative solutions to the market faster
Product experimentation also allows you to grow your organization's research maturity by implementing a culture of continuous improvement where the user is front and center.
To test fast and often, you need a product discovery tool that can support your research needs. With Maze, you can recruit from a diverse panel of participants and use different research methods to collect high-quality customer insights.
Continue learning about product research and how to conduct competitive product analysis to gain insights about the market and improve your product.
Frequently asked questions about product experimentation
What is product experimentation?
Product experimentation is the process of iterating on your product and evaluating the performance of those innovations. You can test the experiments through A/B testing, usability tests, multivariate testing, focus groups, or user interviews.
How do you set up a product experiment?
Follow these steps to set up a product experiment:
- Align your experiment goals with bigger strategic product and business objectives
- Formulate a hypothesis to validate your experiment
- Choose your testing methods to evaluate the success of your experiment
- Decide which metrics to track with your innovation
- Design variations for your experiments and assign them names
- Find an audience to test your experiments on
- Carry out controlled experiments with a sample size of your users
- Analyze the data comparing results with the goals
- Present your conclusions to key stakeholders and determine next steps
- Iterate and experiment on other aspects of your product
Why is product experimentation important?
Product experimentation helps businesses:
- Understand user needs and stay ahead of the competition
- Increase customer satisfaction and retention rates
- Use data to inform decisions and speed up decision-making
What are the methods of experimentation?
Some common methods for conducting product experimentation include:
- A/B testing or split testing
- Quantitative and qualitative usability tests
- Multivariate testing
- Tree testing
- User interviews
- Focus groups
- Fake door testing