Chapter 3
How to overcome cognitive bias in user research
Overcoming cognitive biases and preventing them from damaging UX research is challenging, but this chapter is here to help. In this part of our guide, we'll explore how to identify and confront different cognitive biases, and guard against the subtle ways they can impact UX.
A crucial part of overcoming cognitive biases is being aware that they exist, then noting the ones that may be present in your own subconscious. Cognitive biases are our brains' way of making it easier to navigate the complex world we live in, by giving us 'shortcuts' to information or opinions. However, these shortcuts are often built on incomplete information, meaning they can result in misjudgments, false beliefs, or unfounded assumptions.
Biases are especially challenging to register in our own behavior or beliefs because of how deeply rooted in our subconscious they are. So how can you work to truly eliminate cognitive biases in UX research? Keep reading to find out.
What are the indications of cognitive biases in UX research?
Indications of cognitive bias in UX research can take many forms: how to spot a bias depends on which bias is in effect, what you're researching, and who you're working with. Biases can show up at any stage of the research process, from initial problem definition and product discovery to reporting and acting on your findings. While the signs and impact of cognitive biases vary, some broad indications to look out for include:
- The same, or similar, anecdotes are repeatedly referenced, with little evidence that the issue is widespread
- The same research methods are continuously used with the same group of participants
- Broad assumptions are made, or declared a pattern, without quantitative evidence
- Feedback is consistently one-sided, whether overly positive or overly negative
- External or environmental factors are disregarded when a problem arises, and focus is placed solely on an individual
- Only data or anecdotes that confirm existing opinions receive attention, rather than the whole picture
If you’re concerned insights may be biased, a great way to identify this is by triangulating qualitative feedback with quantitative data to see if there is an inconsistency between what people are saying and what the data shows they’re doing—if the two don’t align, this may be a sign to dig deeper into the reliability of the information you’re receiving.
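As a rough illustration of that triangulation step, here's a minimal sketch in Python using pandas. The participant IDs, column names, and values are hypothetical; the point is simply to join stated feedback with behavioral data and flag where the two disagree.

```python
import pandas as pd

# Hypothetical exports: what participants said, and what they actually did.
survey = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03", "p04"],
    "said_checkout_was_easy": [True, True, False, True],
})
behavior = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03", "p04"],
    "completed_checkout": [True, False, False, True],
    "seconds_on_checkout": [48, 212, 190, 61],
})

# Join the two sources on a shared participant ID.
merged = survey.merge(behavior, on="participant_id")

# Flag participants whose stated experience doesn't match their behavior;
# these mismatches are the places worth digging into further.
mismatch = merged[merged["said_checkout_was_easy"] & ~merged["completed_checkout"]]
print(mismatch[["participant_id", "seconds_on_checkout"]])
```

In practice you'd pull these tables from your survey tool and analytics export. The mismatched rows tell you where to dig deeper, not what conclusion to draw.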
How to overcome different types of cognitive biases in UX
How to overcome cognitive biases in user research depends on what effect that subconscious belief is having on your research—in this section of the guide, we’ll break down what to watch out for, and how to avoid different cognitive biases. Let’s get into it.
How to avoid creating leading questions and insights
Biases this helps address: anchoring bias, framing effect, serial-position effect, question-order bias, peak-end rule, clustering illusion
Providing objective insights is crucial during product development, to ensure recommendations are based on evidence rather than opinions. Collecting evidence-based insights depends heavily on the way your research is designed. Anchoring bias, the framing effect, and the clustering illusion can all creep in, from creating your research questions and conducting interviews through to reporting on results.
Keep it neutral
Start by keeping your research and interview questions neutral: avoid emotional or leading language, such as like/dislike or good/bad. Using these words can cue your participants to start thinking of something in a positive or negative light, without them naturally getting there on their own.
It's also good practice not to rely too heavily on these types of answers, because they're individually subjective. Focus on understanding users' needs rather than their opinions, and remember that what we think we want often isn't the same as what we need.
When it comes time to share your results, use objective language so your audience doesn't become subconsciously attached to the idea of the experience being positive or negative overall. If you do share emotionally charged insights or feedback, back them up with further reasoning, such as what the behavioral data shows.
Start broad, then get specific
To avoid leading questions and making assumptions about your participants' experiences, it's good practice to begin your research with a broad lens, then focus on specifics as you start to understand needs and behaviors.
For example, if you're asking about someone's travel experience, you can first ask “Have you traveled anywhere in the past 10 years?”, which gives them room to tell you whether they've traveled at all. Then you can narrow it down by asking, “Can you recall your most recent trip, and which apps or services you used to help plan it?”
Starting broad also helps mitigate the risk of assumptions. If you had first asked them to recall their most recent trip, you would be assuming that they travel, and that they have done so recently, but you may not know either of those facts yet. It's all too easy to ask an ordinary question and accidentally lead someone's answer, so start by casting a wide net and bring it in as you get closer to the answer.
Pre-define success as a team
Our brains naturally gravitate to data that supports our beliefs, or seems easiest to tell a story with. We look for meaning in data, and create meaning if there isn’t any. So determining your success metrics ahead of time can help to narrow your focus when it comes to analyzing results, and ensure reporting is objective.
Doing this as a whole team adds an extra layer of safety: everyone knows what's been agreed on and what the parameters are, and you have a chance to check cognitive biases together. To help combat subconscious cherry-picking of data, you should also, wherever possible, work with others to collaboratively review and analyze results, and confirm each other's interpretations of the insights.
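If it helps to make that agreement concrete, the pre-defined criteria can live somewhere the whole team can review and later check results against. Below is a minimal sketch in Python; the metric names and thresholds are illustrative assumptions, not recommendations.

```python
# Success criteria agreed with the team before any data is collected.
# Metric names and thresholds here are purely illustrative.
SUCCESS_CRITERIA = {
    "task_completion_rate": {"target": 0.80, "direction": ">="},
    "median_time_on_task_seconds": {"target": 90, "direction": "<="},
    "misclick_rate": {"target": 0.15, "direction": "<="},
}

def meets_target(metric: str, observed: float) -> bool:
    """Check an observed value against the pre-agreed target for a metric."""
    rule = SUCCESS_CRITERIA[metric]
    if rule["direction"] == ">=":
        return observed >= rule["target"]
    return observed <= rule["target"]

# Example: a 72% completion rate falls short of the agreed 80% target.
print(meets_target("task_completion_rate", 0.72))  # False
```

Because the targets are written down before analysis begins, whether a result counts as a success is no longer a judgment call made after seeing the data.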
Alternate order
While you'd be justified in assuming that a consistent question order for interviews or surveys is the most objective and fair approach, it can in fact be more helpful to alternate the order of questions or answer options where possible. Varying the order, sometimes referred to as counterbalancing, helps account for question-order bias or framing effects that could otherwise occur.
For example, if you've asked participants to choose their feedback from a long list of options, you could randomize the options so that people aren't always picking the first or last one. You can also switch up the order of interview questions where the order doesn't matter. Of course, sometimes order does matter: you might need question one for context, or question six as a second part of question five. Just be aware of the biases that could come into play here, and remember to check the data and back it up with other evidence as well.
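If your survey or testing tool doesn't randomize options for you, a short script can handle it. The sketch below, in Python with hypothetical option text and participant IDs, shuffles answer options per participant (seeded by participant ID so each person's order stays stable) and records which order was shown, so you can later check for position effects.

```python
import random

# Hypothetical answer options for a multiple-choice survey question.
OPTIONS = [
    "Searching for flights",
    "Comparing hotel prices",
    "Booking activities",
    "Managing my itinerary",
]

def presented_order(participant_id: str, options: list[str]) -> list[str]:
    """Shuffle the answer options for one participant.

    Seeding with the participant ID keeps each person's order stable across
    sessions while varying it between participants, so no single option
    always appears first or last.
    """
    rng = random.Random(participant_id)
    shuffled = options.copy()
    rng.shuffle(shuffled)
    return shuffled

# Record which order each participant saw, so analysis can later check
# whether an option's position influenced how often it was chosen.
for pid in ["p01", "p02", "p03"]:
    print(pid, presented_order(pid, OPTIONS))
```

For interview guides, the same idea applies manually: rotate the questions whose sequence doesn't matter between sessions, and note which order each participant received.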
How to avoid stealing the spotlight and making assumptions
Biases this helps address: confirmation bias, false-consensus effect, fundamental attribution error
It can be all too easy to fall into the trap of thinking your users will agree with you, or even to center yourself in the problem (or solution). Effects like the false-consensus effect and confirmation bias can hugely impact your ability to home in on which findings are accurate, but unbiased research means stripping away biases until you're left with the objective truth.
Note assumptions early on
The first step is to note your own assumptions. Doing this with your team is a great way to highlight the biases that exist within the team, and even on a wider organizational scale. It can be challenging to keep an objective stance on your own product(s), especially if you use them outside of work as well, but just because you know a lot about something doesn’t mean your thoughts or feelings are more accurate than someone else’s. By noting your assumptions, you and your team are recognizing and admitting any biases you may have, and taking an active approach to try to diminish them.
Determine research goals to inform recruitment and questions
Having clear, specific goals in mind for your research is another way to mitigate any biases which might impact participant recruitment or questions.
If you go into a research project without any goals, or with goals that are too broad or unclear, you could end up selecting the wrong participants and hindering the validity of your results. Cognitive biases like confirmation bias can lead to assumptions about user groups, meaning that, if your objectives aren't clear, you may end up recruiting people you subconsciously expect to confirm your existing beliefs.
Determining your goals will also help you identify and construct the right research questions to guide your work, because there's no value in gathering accurate answers if the questions were wrong in the first place.
Peer review your work
A second pair of eyes is always helpful, and in UX research even more so. Having your work reviewed by peers, particularly those outside your specific product area, not only brings a fresh perspective, but also helps catch any biases or assumptions that may have snuck in.
Just as it's challenging to proofread our own writing, spotting biases in our own work is a difficult feat. A review from someone who has no attachment to the decisions (i.e. isn't impacted by the results one way or the other) is an invaluable way to ensure objectives, questions, and analysis remain neutral.
Avoid false memories with co-analysis
Biases like the serial-position effect, peak-end rule, false-consensus effect, and even confirmation bias can subtly work their way into research, especially in moderated research methods like interviews or focus groups. False memories or biased recollection can arise when participants influence one another, and again during analysis of these sessions, when you struggle to remember everything exactly as it happened.
It's easy to slip into biases without realizing it, which is why it's so important to keep concise, continuous notes during UX research. Having a colleague present to listen in and take notes is another way to ensure you can reflect objectively on the session. You can also invite peers to help during analysis, revisiting recordings and sharing their interpretation of the experience.
Get comfortable with blank space
When you're excited about a project or keen to understand a problem, it can be challenging to sit back and let participants take their time. But sometimes users need space to gather their thoughts, recall a past experience, or fully articulate what they're trying to express.
Rather than rushing people through a research session to get to the crux, get comfortable with the silence and pauses that will come up in your sessions. If a participant recounts a very detailed experience, and you need it more concisely, paraphrase what you heard and allow them to confirm or adjust what you interpreted to ensure you’re gathering the right information. Ultimately, giving participants the space to form their answers and make decisions will give you a more accurate reflection of the situation.
How to combat experimenter and participant bias
Biases this helps address: empathy gap, social desirability bias
Ultimately, all cognitive biases relate back to the people involved in the research, whether that's the experimenter or the participants. Biases like social desirability bias are particularly noticeable in moderated research, but can occur in any type of research at any stage, from initial meetings with stakeholders to an insights share-out at the end of a project.
Limit reactions
One way to combat experimenter and participant bias is to limit your reactions as a researcher. This may sound a bit cold or standoffish, and of course you can forewarn participants that you may seem a little unresponsive, but it's important to stay neutral when going through any interview or product feedback process.
The ways a researcher's reactions can skew or influence a participant's behavior are endless. For example, if you react positively to positive feedback, a participant may be more likely to only tell you what's working well, or what they like about your product, because it got a warm response. Ironically, this would ultimately be less helpful to your product than if they were completely honest and told you what challenges they experienced.
Remember that actions speak louder than words
In research sessions, it's important not just to rely on what people say, but to watch what they do and how they do it. By paying attention to what people are doing, you'll be more accurately informed about where the experience starts to break down and become challenging.
It's also important to note that when you're face-to-face with someone, they're more likely to share positive feedback (rather than negative) to appear more socially desirable. Keep an eye out for words and phrases with emotional connotations, such as “I liked this”, as they may be influenced by bias. Dig deeper into how the sentiment links back to their experience: why did they like it, and what problem did it solve?
Provide space to recharge
The final reminder is for you, and it's often overlooked: remember your own wellbeing, as well as the wellbeing of participants and the research itself. Your energy matters just as much as that of the users you meet, and it can easily impact the research, as participants will be able to sense if you feel drained or unable to concentrate. So take breaks, ask for help, and lean on your colleagues when needed.
Tips to avoid cognitive biases in user research
A lot of the ways you can overcome cognitive biases ultimately come down to being aware of your own and your team's assumptions, and employing UX research best practices. Admitting biases can be challenging, so as the person conducting research, you're in a unique position to give people a safe, judgment-free space to air them. Some other tips to help avoid cognitive biases include:
List assumptions with your team early on
- Be open and discuss any preconceptions you and your team may have about your users and how they use the product. Assumptions don't have to be negative opinions, but they may be; this should be a safe space for everyone to bring their biases forward.
- Remember to start broad with your assumptions, then funnel down to specifics, such as what may impact particular research methods or questions, when it comes time to conduct the research.
Create a research plan with objectives and goals
- A detailed, carefully constructed research plan will help you and your team remember what you're hoping to learn, and who your participants should be. It provides a guiding framework for every step of the project, from research questions to participant recruitment and analysis.
- By outlining your goals early on, you can avoid inadvertently crafting your own narrative later, based on inaccurate or biased patterns found in data.
- Having a specific plan to refer back to also offers a reminder to everyone involved in the project, and allows you to more easily spot and call out biases that may pop up.
Conduct research together
- To help keep your observations unbiased, ask a colleague to join research and analysis sessions so you can review each other's work and hold each other accountable.
- Working together on a project not only reduces the risk of assumptions creeping in, but also provides a fresh perspective and a second pair of eyes to check for other errors.
- Most research and continuous discovery platforms, like Maze, allow multiple people to collaborate and work on research projects and analysis simultaneously.
Question people’s reasoning
- One of the best ways to help combat cognitive biases is simply asking for reasoning, or evidence, when someone states a belief or opinion.
- In day-to-day life, a lot of our sentiments and thoughts stem from assumptions or biases without us even realizing it. When it comes to research, this needs to be avoided, or it risks impacting your end product.
- By asking for clarifying information or for someone to provide data to back up the opinion, you can ensure every decision is made from an objective, unbiased place.
- When it comes to working with research participants, you can do this by triangulating qualitative feedback with quantitative data to note any gaps between what users say and what they do.
Refrain from oversharing details and emotions
- Comb through your research materials, including your research plan, testing script, and research questions, for any emotive wording or biased language. Remember the framing effect, and consider how your language could prime participants to think in certain ways.
- When it comes to moderating and conducting user research, focus on using neutral phrasing and body language, such as “Thanks for sharing” or “I appreciate your feedback”, to acknowledge participants' responses, rather than saying anything that sways them one way or the other.
Provide space for your participants
- As we discussed earlier in the guide, one often-missed tip for cultivating unbiased research sessions is to allow your participants time.
- You want to avoid influencing participants' answers at all costs, which may mean sitting in silence and giving people a chance to recall their experience, form their thoughts, or structure their ideas, rather than jumping to conclusions or finishing their sentences for them.
- And remember, if your participants appear fatigued or overwhelmed, remind them of their right to withdraw, or give them a break; answers impacted by a lack of energy are biased, too.
Make user feedback effective by removing cognitive biases
Removing biases from your research empowers you to deliver accurate insights to your product teams, increasing both the productivity of those teams and the accuracy and effectiveness of your product.
We live with cognitive biases in all areas of our life, but they can strongly hinder product development. By understanding your own biases and knowing how to spot them in others, you’re already well on your way to breaking down the biases that crop up in your UX research.
The fundamental goal of any product is to serve its users and provide an exceptional user experience. By keeping your research free of bias, you ensure the decisions that shape the product are accurate, honest, and impactful.
Frequently asked questions about overcoming cognitive biases in user research
How do you overcome cognitive biases in user research?
The first step to overcoming cognitive biases is being aware that they exist, then noting the ones that may be present in your own subconscious. Biases are especially challenging to spot because of how deeply rooted they are, but by looking out for certain signs, you can identify and combat them in UX research:
- Identify your own assumptions with your team early on
- Create a research plan with clear objectives and goals
- Conduct research and analysis with a colleague
- Question people’s reasoning and ask for evidence
- Refrain from oversharing details and emotions
- Provide space for your participants to form their own opinions