A/B testing, also known as split testing, is a common method used to compare two variations of an element, such as a landing page or an email subject line, with the goal of increasing conversion or open rates.
Split testing can also be done earlier in the process to learn if a design will be effective with users once implemented live. In this article, we’re going to take a look at how Maze allows you to split test your design in a few easy steps.
What is A/B testing?
A/B testing is the process of testing two (A vs. B) or more variations of a design element to find out which one performs better. In design, split testing is used to validate a design hypothesis early in the product development process.
To get started, you'll need to create the first version of a design, also called the Control version. By changing one variable in the Control, such as the placement of a subscription field or the call-to-action (CTA), you'll create the second design version, also known as the Variation.
One of the most important things with split testing is to determine beforehand the metric you’ll measure success by when testing is complete. There are many insights you could get from an A/B test, and knowing what you’re measuring is essential.
How to A/B test your design with Maze 💁
With user testing tools like Maze, you can A/B test your prototype very easily. We currently support Figma, InVision, Marvel, and Sketch prototypes. By importing the first version of your prototype to Maze, you'll create the Control test.
💡 Tip: Learn more about creating your first maze.
To create the Variation, go back to your prototyping tool and make changes to the same prototype—no need to create a new one. We recommend changing just one variable so that the testing results are accurate and reflective only of that change. Here are some ideas on what you can test:
- Text: titles, descriptions, or CTAs
- Visual elements: images, icons, colors
- User flow: the path taken by the user to move through the prototype
Once you're done making the changes, go back to Maze and click on the Import new prototype version button inside your project. This will create a new maze with the changes you've made to your prototype.
Finish by defining missions for both mazes: write titles and descriptions, and lay out the paths you expect users to take. Once you're done, set both maze tests live, and share the link with your testers.
💡 Tip #1: Avoid testing with the same participants. For the best results, randomize the selection of testers for each test version.
💡 Tip #2: For a detailed guide on split testing with Maze, check our documentation.
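The randomization idea behind Tip #1 can be sketched in a few lines of Python. This is a hypothetical helper for illustration only—Maze distributes testers for you—and the function name and tester IDs are made up:

```python
import random

def assign_variant(tester_ids, seed=42):
    """Randomly split a pool of testers between Control (A) and Variation (B).

    Illustrative only: shows how an even, random split avoids the same
    participants seeing both versions. The seed keeps the split reproducible.
    """
    rng = random.Random(seed)
    shuffled = list(tester_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"A": shuffled[:half], "B": shuffled[half:]}

groups = assign_variant(["t1", "t2", "t3", "t4", "t5", "t6"])
```

Each tester lands in exactly one group, so no participant's familiarity with one version can bias their behavior in the other.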
Put your prototypes to the test with Maze
Maze's robust prototype testing enables you to validate usability with real users before building and investing valuable resources.
Analyze results and implement the winner 📊
As mentioned above, it's important to determine beforehand the metric you'll be measuring success by. With Maze, you get high-level results for each mission, such as success rate, misclick rate, and time spent on screen. You can also view metrics for individual sessions, like the duration of the test or all the clicks a tester made on a screen.
Before creating your A/B test, establish the success metric that matters most for your product.
For instance, when testing user flows, you can compare the success rate of each mission for both mazes, and also look at the paths users took most often. When testing CTAs, misclick rate and time spent on screen are key indicators of performance.
Once testing is done, compare the KPIs of the Control to the KPIs of the Variation. The design that performed better is the winner, and you'll have a clear understanding of what to implement.
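When the KPI is a success rate, it's worth checking that the gap between the two versions is larger than what chance alone would produce. One common way to do this is a two-proportion z-test; here's a hedged, self-contained Python sketch (the counts are made-up example numbers, not Maze output):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is the difference in success rates between
    Control (A) and Variation (B) likely to be more than random noise?

    Returns the z statistic and a two-tailed p-value computed from the
    standard normal CDF (via math.erf).
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical mission results: 30/50 testers succeeded on the Control,
# 42/50 on the Variation (60% vs. 84% success rate).
z, p = two_proportion_z(30, 50, 42, 50)
```

With these example numbers the p-value comes out well under 0.05, suggesting the Variation's higher success rate isn't just noise. With only a handful of testers per version, though, treat any such test as a rough sanity check rather than proof.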
💡 Tip: Consult our short guide to help you understand the results on your dashboard.
Evidence-based design decisions take into account real user preferences. By split testing your design at the prototype phase, you will create a product that addresses user needs from the start.
What's more, you'll avoid many costly back-and-forths between the design and development teams, and can iterate on the design before kicking off development.