Chapter 5

Evaluative UX research: Examples and methods for high-impact experiences

Learn how to test, refine, and perfect your designs using proven evaluative research methods and best practices.

What is evaluative research?

Evaluative research is a type of research used to assess a product or concept and collect data that helps improve your solution. It offers many benefits, including identifying whether a product works as intended and uncovering areas for improvement.

Also known as evaluation research or program evaluation, this kind of research is typically introduced in the early phases of the design process to test existing or new solutions. It continues to be employed in an iterative way until the product becomes ‘final’. “With evaluation research, we’re making sure the value is there so that effort and resources aren’t wasted,” explains Nannearl LeKesia Brown, UX Researcher at Figma.

According to Mithila Fox, Director of Product Research at Contentful, the evaluation research process includes various activities, like content testing and assessing accessibility or desirability. During UX research, evaluation can also be conducted on competitor products to understand what solutions work well in the current market before you start building your own.

“Even before you have your own mockups, you can start by testing competitors or similar products,” says Mithila. “There’s a lot we can learn from what is and isn't working about other products in the market.”

See your product through a new lens with Maze

Maze helps your team resolve product issues, optimize designs, and align with user expectations—all in a few clicks.

Evaluative research vs. evaluative research design

Evaluative research is often conflated with evaluative research design, but they’re not the same. The key difference between the two lies in their scope and focus.

Evaluative research focuses on assessing the usability, effectiveness, and performance of a product. Common methods used in evaluative research include usability testing, UX surveys, A/B testing, tree testing, and heuristic evaluations.

Evaluative research design, on the other hand, is a strategic framework that guides how evaluative research is conducted. It's used in various fields, including education, social sciences, and healthcare.

Evaluative research design includes three main types of evaluations: formative, summative, and outcome (more on these shortly).

Why is evaluative research important?

Evaluative research is crucial in UX design and research, providing insights to enhance user experiences, identify usability issues, and inform iterative design improvements. It helps you:

  • Refine and improve UX: Evaluative research allows you to test a solution and collect valuable feedback to refine and improve the user experience. For example, you can A/B test the copy on your site to maximize engagement with users.
  • Identify areas of improvement: Findings from evaluative research are key to assessing what works and what doesn't. You might, for instance, run usability testing to observe how users navigate your website and identify pain points or areas of confusion.
  • Align your ideas with users: Research should always be a part of the design and product development process. By allowing users to evaluate your product early and often, you'll know whether you're building the right solution for your audience.
  • Get buy-in: The insights you get from this type of research can demonstrate the effectiveness and impact of your project. Show this information to stakeholders to get buy-in for future projects.

When should you conduct evaluative research?

Evaluative research is an ongoing process throughout the product design and development cycle, from early design stages to post-launch. Here are some of the key times to conduct evaluative research:

  • Early design phases: Test initial designs to identify fundamental usability issues. This stage helps refine and direct the design process before further development.
  • During development: Formative evaluation is conducted during development to test and refine designs before finalization. It involves iterative rounds of testing and feedback, so the product is continuously improved based on how users interact with it.
  • Post-launch: After launching a product, evaluative research enables you to continue to monitor how users interact with it in real-world settings. This helps identify ongoing usability issues and areas for improvement. Regular usability testing and user feedback collection ensure the product evolves with user needs.
  • Final assessment: At the end of the design process, summative evaluation assesses the overall usability and effectiveness of the product. It benchmarks the new solution against previous versions or competitors, ensuring that the final product meets user expectations and performs well in its intended context.

Evaluative vs. generative research

Purpose

  • Evaluative research: Assesses and validates existing designs and concepts
  • Generative research: Explores user needs, behaviors, and motivations to inform new designs

When to use

  • Evaluative research: Throughout the product lifecycle (early design phases, during development, post-launch, and continuous improvement)
  • Generative research: Early stages of the design process, before product concepts are fully developed

Focus

  • Evaluative research: Determines whether a product meets user expectations and identifies usability issues
  • Generative research: Understands user problems, generates ideas, and explores new opportunities

Methods

  • Evaluative research: Surveys, tree testing, usability testing, A/B testing, closed card sorting
  • Generative research: Ethnographic studies, user interviews, surveys, focus groups, open card sorting

Data collected

  • Evaluative research: Quantitative and qualitative data on user interactions and experiences with the product
  • Generative research: Qualitative data on user needs, behaviors, and contexts

The difference between generative research and evaluative research lies in their focus: generative methods investigate user needs for new solutions, while evaluative research assesses and validates existing designs for improvements.

Generative and evaluative research are both valuable decision-making tools in a researcher's arsenal. Both should be employed throughout the product development process, as each helps you gather the evidence you need.

Generative research methods include:

  • Ethnographic studies
  • User interviews
  • Focus groups
  • Open card sorting

Evaluative research methods include:

  • Tree testing
  • Usability testing
  • A/B testing

Some research methods, such as UX surveys and user interviews, can be used in both generative and evaluative research, depending on the context and goals.

When creating the research plan, study the competitive landscape, target audience, needs of the people you’re building for, and any existing solutions. Depending on what you need to find out, you’ll be able to determine if you should run generative or evaluative research.

Mithila explains the benefits of using both research methodologies: “Generative research helps us deeply understand our users and learn their needs, wants, and challenges. On the other hand, evaluative research helps us test whether the solutions we've come up with address those needs, wants, and challenges.”

Tip ✨

Use generative research to bring forth new ideas during the discovery phase, then use evaluative research to test and monitor the product before and after launch.

Types of evaluative research design

There are three types of evaluative studies you can tap into: formative research, summative research, and outcome research. Although summative evaluations are often quantitative, they can also be part of qualitative research.

TL;DR: Run formative research to test and evaluate solutions during the design process, and conduct a summative evaluation at the end to evaluate the final product. Measure specific outcomes through outcome evaluations to understand the impact on user behavior and satisfaction.

Formative evaluation research

Formative research is conducted early and often during the design process, to test and improve a solution before arriving at the final design. Running a formative evaluation allows you to test and identify issues in the solutions as you’re creating them, and improve them based on user feedback.

Summative evaluation research

A summative evaluation helps you understand how a design performs overall. It's usually done at the end of the design process to evaluate its usability or detect overlooked issues. You can also use a summative evaluation to benchmark your new solution against a previous version or a competitor's product, and understand whether the final product meets expectations or needs further work. Summative evaluation can also be outcome-focused, assessing impact and effectiveness for specific outcomes, such as how a design influences conversion.

Outcome evaluation research

Outcome evaluation research assesses the effectiveness of a design by measuring the changes it brings about in specific user behaviors and satisfaction. This type of evaluation focuses on the short-term and long-term impacts on users, such as improved task completion rates or increased user engagement. Outcome evaluations help understand the real-world effectiveness of a product design and support data-driven decisions for future improvements.
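For example, an outcome evaluation might compare a metric before and after a redesign. Here's a tiny sketch of that comparison; the metric names and numbers are made up for illustration.

```python
# Hypothetical before/after measurements for an outcome evaluation.
baseline = {"task_completion": 0.62, "weekly_active_users": 4100}
post_redesign = {"task_completion": 0.74, "weekly_active_users": 4550}

# Report the relative change for each metric after the redesign.
for metric in baseline:
    change = (post_redesign[metric] - baseline[metric]) / baseline[metric]
    print(f"{metric}: {baseline[metric]} -> {post_redesign[metric]} ({change:+.1%})")
```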

5 key evaluative research methods

“Evaluation research can start as soon as you understand your user’s needs,” says Mithila. Here are five typical UX research methods to include in your evaluation research process:

Surveys

User surveys can provide valuable quantitative insights into user preferences, satisfaction levels, and attitudes toward a design or product. By gathering a large amount of data efficiently, surveys can identify trends, patterns, and user demographics to make informed decisions and prioritize design improvements.

There are many types of surveys to choose from, including:

  • Customer satisfaction (CSAT) surveys: Measure users' satisfaction with a product or service through a straightforward rating scale, typically ranging from 1 to 5
  • Net promoter score (NPS) surveys: Evaluate the likelihood of users recommending a product or service on a scale from 0 to 10, categorizing respondents as promoters, passives, or detractors
  • Customer effort score (CES) surveys: Focus on the ease with which users can accomplish tasks or resolve issues, providing insights into the overall user experience (see the scoring sketch after this list)
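To make these scores concrete, here's a minimal Python sketch of one common way to compute them from raw responses. The response lists are made up, and the exact conventions (counting 4–5 as "satisfied" for CSAT, averaging the ease rating for CES) vary between teams, so treat this as an illustration rather than a standard.

```python
# Minimal scoring sketch for common evaluative survey metrics.
# The response data and the CSAT/CES conventions are illustrative assumptions.

def csat(ratings):  # 1-5 satisfaction ratings
    """Percent of respondents rating 4 or 5 (one common CSAT convention)."""
    return 100 * sum(r >= 4 for r in ratings) / len(ratings)

def nps(ratings):  # 0-10 likelihood-to-recommend ratings
    """% promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

def ces(ratings):  # 1-5 (or 1-7) ease ratings
    """Average ease score; higher means less effort for the user."""
    return sum(ratings) / len(ratings)

# Hypothetical responses from a feedback survey
print(f"CSAT: {csat([5, 4, 3, 5, 2, 4]):.0f}%")    # 67%
print(f"NPS:  {nps([10, 9, 7, 6, 3, 9]):+.0f}")    # +17
print(f"CES:  {ces([4, 5, 3, 4, 5, 4]):.1f} / 5")  # 4.2
```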

🚀 Want to supercharge your surveys to maximize insights?
Maze AI enables you to get more from your feedback surveys with AI-powered capabilities. From asking the Perfect Questions, to digging deeper with Dynamic Follow-Up Questions—Maze AI makes uncovering in-depth insights simple.

Closed card sorting

Unlike open or hybrid card sorting, closed card sorting uses predefined categories to evaluate specific usability issues in an existing design.

By analyzing how participants group and categorize information, researchers can identify potential issues, inconsistencies, or gaps in the design's information architecture, leading to improved navigation and findability. The results offer quantitative insights into users' mental models and expectations.

Closed card sorting allows researchers to:

  • Understand how users categorize information
  • Identify mismatches between user expectations and the existing structure
  • Improve the overall user experience by ensuring that navigation aligns with user logic and behaviors
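One way to turn closed card sort results into those quantitative insights is a placement matrix: for each card, count how often participants placed it in each predefined category. Low agreement on a card usually points to a mismatch between your categories and users' mental models. The cards, categories, and participant data below are hypothetical; this is just a sketch of the analysis.

```python
from collections import Counter, defaultdict

# Hypothetical closed card sort: each participant assigns cards to predefined categories.
sorts = [
    {"Invoices": "Billing", "Password reset": "Account", "Refunds": "Billing"},
    {"Invoices": "Billing", "Password reset": "Account", "Refunds": "Support"},
    {"Invoices": "Billing", "Password reset": "Support", "Refunds": "Support"},
]

# Build a placement matrix: card -> category -> number of participants who chose it.
placements = defaultdict(Counter)
for sort in sorts:
    for card, category in sort.items():
        placements[card][category] += 1

# Agreement = share of participants who picked the most popular category for each card.
for card, counts in placements.items():
    top_category, top_count = counts.most_common(1)[0]
    agreement = top_count / len(sorts)
    print(f"{card:15} -> {top_category:8} ({agreement:.0%} agreement)")
```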

Tree testing

Tree testing, also known as reverse card sorting, is a research method used to evaluate the findability and effectiveness of information architecture. Participants are given a text-based representation of the website's navigation structure (without visual design elements) and are asked to locate specific items or perform specific tasks by navigating through the tree structure. This method helps identify potential issues such as confusing labels, unclear hierarchy, or navigation paths that hinder users' ability to find information.
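Concretely, the 'tree' is just a label hierarchy and each task has a correct destination, so results can be scored as a success rate (did the participant end up in the right place?) and, optionally, directness (did they get there without backtracking?). The tree, task, and participant paths in this sketch are hypothetical; dedicated tree-testing tools typically compute these metrics for you.

```python
# Minimal tree-testing scoring sketch. The navigation tree, task, and
# participant paths below are hypothetical.

# Text-only representation of the site's navigation (what participants see).
tree = {
    "Home": {
        "Products": {"Pricing": {}, "Integrations": {}},
        "Support": {"Contact us": {}, "Documentation": {}},
    }
}

# Task: "Find out how much the product costs." Correct destination: Pricing.
correct_destination = "Pricing"

# Each participant's clickstream through the hierarchy.
paths = [
    ["Home", "Products", "Pricing"],                     # success, direct
    ["Home", "Support", "Documentation"],                # failure
    ["Home", "Support", "Home", "Products", "Pricing"],  # success, backtracked
]

successes = [p[-1] == correct_destination for p in paths]
# Direct = reached the correct destination in the minimum number of steps (3 levels here).
direct = [p[-1] == correct_destination and len(p) == 3 for p in paths]

print(f"Success rate: {sum(successes) / len(paths):.0%}")  # 67%
print(f"Directness:   {sum(direct) / len(paths):.0%}")     # 33%
```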

Tree testing allows researchers to:

  • Evaluate the clarity and effectiveness of category labels
  • Identify problematic navigation paths and areas of user confusion
  • Optimize the overall information architecture to improve user findability and task completion rates

When comparing tree testing vs. card sorting, tree testing evaluates the effectiveness of a site's navigation by having users find specific items in a text-based hierarchy. On the other hand, card sorting involves organizing information into categories to reveal users' mental models and improve information architecture.

Usability testing

Usability testing involves observing and collecting qualitative and/or quantitative data on how users interact with a design or product. Participants are given specific tasks to perform while their interactions, feedback, and difficulties are recorded. This approach helps identify usability issues, areas of confusion, or pain points in the user experience.
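As a small illustration of the quantitative side, the sketch below turns observed sessions into two common usability metrics: task completion rate and average time on task (for successful attempts). The session records are hypothetical; in practice a testing tool or a spreadsheet captures this data for you.

```python
# Hypothetical usability-test observations: one record per participant per task.
sessions = [
    {"participant": "P1", "task": "Create a project", "completed": True,  "seconds": 48},
    {"participant": "P2", "task": "Create a project", "completed": True,  "seconds": 95},
    {"participant": "P3", "task": "Create a project", "completed": False, "seconds": 210},
    {"participant": "P4", "task": "Create a project", "completed": True,  "seconds": 61},
]

completed = [s for s in sessions if s["completed"]]

completion_rate = len(completed) / len(sessions)
avg_time_on_task = sum(s["seconds"] for s in completed) / len(completed)

print(f"Task completion rate: {completion_rate:.0%}")   # 75%
print(f"Avg time on task:     {avg_time_on_task:.0f}s") # 68s (successful attempts only)
```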

There are different types of usability testing, including:

  • Guerrilla testing: Quick and inexpensive, conducted in public places to gather immediate feedback from random users
  • Five-second test: Participants are shown a design for five seconds and then asked questions to capture their first impressions
  • First click testing: Evaluates the effectiveness of the first click a user makes to complete a task, crucial for understanding navigation efficiency
  • Session replay: Records and replays user sessions to analyze behavior and interactions with the design

💡 Recruiting participants for UX research can be tough
With the Maze Panel, you can quickly and easily recruit research participants that meet your criteria, from a pool of over 3 million participants.

A/B testing

A/B testing, also known as split testing, is an evaluative research approach that involves comparing two or more versions of a design or feature to determine which one performs better in achieving a specific objective. Users are randomly assigned to different variants, and their interactions, behavior, or conversion rates are measured and analyzed. A/B testing allows researchers to make data-driven decisions by quantitatively assessing the impact of design changes on user behavior, engagement, or conversion metrics.
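As a sketch of what "measured and analyzed" can look like, here's a minimal two-proportion z-test comparing conversion rates for two variants, using only the Python standard library. The visitor and conversion counts are made up, and in a real test you would fix your sample size and significance threshold before looking at the data.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results: visitors and conversions for each variant.
visitors_a, conversions_a = 1000, 120   # variant A: 12.0% conversion
visitors_b, conversions_b = 1000, 150   # variant B: 15.0% conversion

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled two-proportion z-test for the difference in conversion rates.
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
# With these made-up numbers, p is roughly 0.05: borderline evidence that B outperforms A.
```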

“This is the value of having a UX research plan before diving into the research approach itself. If we were able to answer the evaluative questions we had, in addition to figuring out if our hypotheses were valid (or not), I’d count that as a successful evaluation study. Ultimately, research is about learning in order to make more informed decisions—if we learned, we were successful.”

Nannearl LeKesia Brown, UX Researcher at Figma

Evaluative research question examples

To gather valuable data and make better design decisions, you need to ask the right research questions. Here are some examples of evaluative research questions:

Usability questions

  • How would you go about performing [task]?
  • How was your experience completing [task]?
  • How did you find navigating to [X] page?
  • Based on the previous task, how would you have preferred to complete this action instead?

Get inspired by real-life usability test examples and discover more usability testing questions in our guide to usability testing.

Product survey questions

  • How often do you use the product/feature?
  • How satisfied are you with the product/feature?
  • Does the product/feature help you achieve your goals?
  • How easy is the product/feature to use?

Discover more examples of product survey questions in our article on product surveys.

Closed card sorting questions

  • Were there any categories you were unsure about?
  • Which categories were you unsure about?
  • Why were you unsure about the [X] category?

Find out more in our complete card sorting guide.

Pro tip 💡

Need a jumping-off point for your research questions? Not sure where to start, or short on time? The Maze Question Bank is our attempt at the internet’s best open-source question repository—and it’s entirely free, too.

Evaluation research examples

Across UX design, research, and product testing, evaluative research can take several forms. Here are some ways you can conduct evaluative research:

Comparative usability testing

This example of evaluative research involves conducting usability tests with participants to compare the performance and user satisfaction of two or more competing design variations or prototypes.

You’ll gather qualitative and quantitative data on task completion rates, errors, user preferences, and feedback to identify the most effective design option. You can then use the insights gained from comparative usability testing to inform design decisions and prioritize improvements based on user-centered feedback.

Cognitive walkthroughs

Cognitive walkthroughs assess the usability and effectiveness of a design from a user's perspective.

In a cognitive walkthrough, evaluators step through a task from the user's perspective to identify potential points of confusion, decision-making challenges, or errors. You can then gather insights on user expectations, mental models, and information processing to improve the clarity and intuitiveness of the design.

Diary studies

Conducting diary studies gives you insights into users' experiences and behaviors over an extended period of time.

You provide participants with diaries or digital tools to record their interactions, thoughts, frustrations, and successes related to a product or service. You can then analyze the collected data to identify usage patterns, uncover pain points, and understand the factors influencing the user experience.

In the next chapters, we'll learn more about quantitative and qualitative research, as well as the most common UX research methods. We’ll also share some practical applications of how UX researchers use these methods to conduct effective research.

See your product through a new lens with Maze

Maze helps your team resolve product issues, optimize designs, and align with user expectations—all in a few clicks.

Frequently asked questions

What is evaluative research?

Evaluative research, also known as evaluation research or program evaluation, is a type of research you can use to evaluate a product or concept and collect data that helps improve your solution.

What is evaluative research design?

Evaluative research design is a structured approach to planning and conducting research that assesses the usability, effectiveness, and impact of a product. It includes formative evaluation to test and refine designs during development, summative evaluation to assess the final product's overall effectiveness, and outcome evaluation to measure long-term impacts on user behavior and satisfaction.

What are the goals of evaluative research?

Evaluative research assesses a design's effectiveness, identifies areas for improvement, and measures user satisfaction. It checks whether a design achieves its objectives and enhances the user experience, and it offers insights for future enhancements.

What is the difference between evaluative and formative research?

Evaluative research measures the success of a completed design, focusing on outcomes and user satisfaction. In contrast, formative research occurs during development, identifying issues and guiding iterative adjustments to meet user needs.