Have you ever launched a new product or feature, only to discover everything you expected users to do... they didn’t? Your team put in countless hours, but it didn’t pay off the way you thought it would—in fact, your user base is shrinking. Whether it’s the UI design, platform functionality, or navigation, something is causing a disconnect.
Enter: usability metrics.
What are usability metrics?
Usability metrics are the specific measurements and type of statistics used to review the usability of your product. They can track how quickly users complete certain tasks, how frequently they make mistakes, their overall satisfaction using the platform, and more. By reviewing different types of usability metrics, you can paint a picture of the user’s experience and understand the overall usability of your product.
Different usability testing methods provide different metrics. Determining which metrics to track is an important step when creating your usability testing plan, as you’ll be able to tailor tests to provide relevant insights.
Types of usability metrics
Usability testing metrics are usually divided into four categories—each providing insights into different areas of your product's usability. With a robust usability testing platform, you can automatically generate reports to digest and organize your metrics. This gives an at-a-glance view of how your product is performing for any key stakeholders—and helps prepare for future sprints.
To maximize your takeaways, include a mix of these four types of metric:
- Completion or success metrics: Measure whether users can complete tasks effectively. When attempting a task, the design can fail, succeed, or indirectly succeed.
- Duration metrics: Track the average time users spend on a particular screen, or review how long it takes users to perform a task. Time is a vital indicator of your design’s complexity—these metrics show how efficiently users navigate and operate your product.
- Error metrics: Unlike a technical error or bug, this refers to actions users perform on your website or app that don’t lead them to the expected solution—e.g. a user wants to log in but clicks on the ‘sign up’ button instead. Error metrics highlight areas of confusion in the user interface (UI) or convey challenges with functionality.
- Satisfaction metrics: Gauge how satisfied your users are with their overall experience using your product. Metrics like the system usability scale (SUS) or net promoter score (NPS) evaluate the overall sentiment after interacting with your product.
Remember 💡
When discussing metrics, particularly completion or success, you're measuring the effectiveness of the design, not the user. Bear this in mind as you write your plan, instructions, and report.
Why is measuring usability metrics important?
“The biggest benefit to conducting usability testing is that you get to build scalable products with a short learning curve, which then translates into satisfied users,” says Belén Ardiles, Product Designer at StockFink. This benefit can be more easily tapped into if you’re tracking usability metrics, as these will be the backbone of your decision-making process.
Remember: surveys and interviews can give you valuable qualitative insights, but they won’t provide broad statistics or necessarily translate into numbers how your users interact with and experience your product’s UI.
Let’s take a look at some of the other benefits of tracking usability metrics.
Get past the Aesthetic-Usability effect
Conducting usability and other forms of user testing reduces the risk of test participants falling for the Aesthetic-Usability effect. This effect is a cognitive bias that occurs when users perceive aesthetically pleasing designs as more usable. While it’s great if a product’s UI is beautiful, it also needs to be functional. Relying on a specific set of usability metrics ensures you have an objective approach to how your design performs, instead of relying on potentially subjective understandings of what users say they like.
Identify user-facing issues
When you're knee-deep in a project, it can be easy to lose sight of the bigger picture—the ultimate goal is to create a user-centric product. If you track usability metrics, you can more clearly identify pain points or areas for improvement within the user experience (UX).
For example, if a high percentage of users are unable to make a purchase, it may be an indication that your product's design or navigation needs to be adjusted to clarify the checkout process. By identifying these issues early on, your product team can steer focus and make data-backed decisions to create a more effective product.
We shouldn’t be afraid to talk directly to users and get them to point out design opportunities. Usability testing should be a mandatory step in design so we can come up with interfaces that communicate and reflect the vision and value the platform is expected to give.
Belén Ardiles, Product Designer at StockFink
Prioritize product improvements
Usability metrics help you decide which improvements are most important right now, and which can be focused on later. Let’s say one of your company’s OKRs is to increase conversion rates by 3% month-over-month. You get these insights from your usability studies:
- Users couldn’t find their account settings button in the main navigation, and 72% of users located it through search instead
- On average, people spent 15.7 seconds looking for the signup button, with 67% of users abandoning the task
- Most of the participants took an alternative path to find the pricing information, but this task still achieved a 90% indirect success rate
While all of those usability problems are important, there’s one metric which clearly has a direct impact on your company’s sign-up goals—and ultimately conversion rate—as users are abandoning the task. By identifying this failed completion metric, you can quickly review usability results and start rectifying the problem.
Get stakeholder buy-in on decisions
Sometimes different product stakeholders have different opinions. You might think the navigation bar should be black with white text, while a colleague believes it’ll stand out more as white with black text. The team vote is split 50/50, but luckily, you’ve got a usability test coming up. By tracking success metrics, you identify that 83% of users completed navigation quicker with option A. The decision is clear.
Belén Ardiles, Product Designer at StockFink, and her team had a similar experience. “I discovered that if we had developed the prototype as we designed it, we’d have failed to make the app truly intuitive to the end user,” said Belén. “Test participants experienced difficulties around handling alerts and taking proactive steps to get educated and mitigate future risk.”
When you conduct usability tests, your design and UX choices going forward are anchored in user-backed data, so the decision comes down to what’s working and what will benefit your users—rather than internal debates or personal preferences.
Track progress between iterations
Quantitative usability metrics mean you can easily compare results—see how the feature performed last time, and assess if the new design has improved following a change. When you conduct usability tests at regular intervals—from initial concept development to post-launch optimization—you start building a narrative of data that can be used to track your progress and overall success.
Product tip ✨
With Maze, you can continuously monitor your live website to catch trends or sudden dips in the user experience, and make adjustments accordingly.
Gain competitive advantage
Our 2023 Continuous Research Report shows 83% of respondents believe testing should happen at every stage of the product lifecycle. Yet, 78% believe their company doesn’t research enough—product teams see the value in ongoing usability testing, but aren’t able to use continuous research to make decisions throughout the development process.
Whether the barrier is limited resources, budget, or stakeholder investment, it’s clear that regularly monitoring usability metrics like customer satisfaction or engagement rates is invaluable. It can help you identify areas where competitors are falling short, and tailor your products to fill these gaps.
Key usability metrics and how to calculate them
When you start determining which usability metrics to track, start by revisiting your project objectives and consider which metrics will best inform those goals. There are a ton of different ways to track the results of usability studies, but these are the key metrics you can’t afford to skip:
The formulas below have been sourced from various platforms to reflect the most widely used formulations for calculating usability metrics. Remember to measure results against your platform's specific goals and benchmarks so you're optimizing for the metrics that matter most to your product and users.
1. Completion rate
Type: Success metric
The completion rate—also known as a success rate—allows you to assess the percentage of users that can navigate your product intuitively. Task completion is usually presented as a binary value of ‘1’ if users completed the task and ‘0’ if they didn’t.
“The usability metric that helped me the most was completion rate—it allowed me to validate whether or not the software’s main objective was met, and if users were finding it easy to follow the predetermined path. I used the user data to modify the app which added value and gave us an advantage against competitors,” says Belén.
How to measure the completion rate
To calculate the task completion rate, you should divide the number of completed tasks by the total number of given tasks and multiply it by 100.
Completion rate = (Number of completed tasks / total number of assigned tasks) x 100
You can calculate completion rate per user or by study. For example, if a participant gets a 10/10 task success, that user would have a 100% completion rate. But, if 8/10 users complete 3/10 tasks, and the other two have a perfect score, your usability study completion rate would be 44%.
Number of tasks completed = (8 x 3) + (2 x 10) = 44
Total number of assigned tasks = 10 x 10 = 100
Completion rate = (44 / 100) x 100 = 44%
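The worked example above can be sketched in code. This is a minimal, hypothetical helper (not a Maze API) that computes a study-wide completion rate from per-participant task results, where each task is recorded as ‘1’ for completed and ‘0’ for not completed:

```python
# Hypothetical helper: study-wide completion rate from per-participant
# task results (1 = completed, 0 = not completed).
def completion_rate(results_per_user):
    completed = sum(sum(tasks) for tasks in results_per_user)
    assigned = sum(len(tasks) for tasks in results_per_user)
    return completed / assigned * 100

# The example above: 8 users complete 3/10 tasks, 2 users complete 10/10.
study = [[1] * 3 + [0] * 7] * 8 + [[1] * 10] * 2
print(completion_rate(study))  # 44.0
```

The same function gives a per-user rate if you pass it a single participant’s results.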
Product tip ✨
If you’re using Maze, this metric is automatically calculated—you’ll see it presented in your usability study downloadable report.
1.1 Direct or indirect success
Type: Success metric
These metrics are an extension of the completion rate and tell you whether the user completed the task as you expected. For example, if you think someone will click on the signup button from the homepage and that’s exactly what they do, it’s a direct success. But, if they go to the login page first and click on the signup button from there, that’s an indirect success.
It’s a good measure of usability if the majority of your users can complete their tasks using the flow you designed. With a usability testing platform like Maze, you can set the expected paths you think users will take as they navigate your platform. If it coincides with the one users take, the result is a direct success.
How to measure direct or indirect success
To measure this metric you’ll need to analyze the expected path and the actions your user took. If you find a match, there was a direct success; if you don’t, it’s an indirect one. You can calculate the percentage of direct or indirect success with the following formulas:
Direct success rate = (Number of completed tasks with direct success / total number of completed tasks) x 100
Indirect success rate = (Number of completed tasks with indirect success / total number of completed tasks) x 100
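As a sketch of how those two formulas play out, here’s a hypothetical classifier that compares the path a user took against the expected path for each completed task (the paths and helper name are illustrative, not from any specific tool):

```python
# Hypothetical sketch: classify each completed task as a direct or
# indirect success by comparing the actual path to the expected path.
def success_rates(completions):
    # completions: list of (expected_path, actual_path) for completed tasks
    direct = sum(1 for expected, actual in completions if actual == expected)
    total = len(completions)
    return (direct / total * 100, (total - direct) / total * 100)

completions = [
    (["home", "signup"], ["home", "signup"]),            # direct
    (["home", "signup"], ["home", "login", "signup"]),   # indirect
    (["home", "signup"], ["home", "signup"]),            # direct
    (["home", "signup"], ["home", "search", "signup"]),  # indirect
]
print(success_rates(completions))  # (50.0, 50.0)
```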
1.2 Fail rate
Type: Success metric
This metric is also a way to categorize your completion rate. Unlike direct or indirect success, a user fails when they can’t solve a task and simply abandon it.
It's important to ensure tasks are achievable in the first place before tracking ‘fail’ usability metrics. For example, if a task is designed to be completed in five minutes but takes users an average of 10 minutes, this wouldn’t necessarily be a ‘fail’ usability metric, because they did complete the task, just outside the time parameters—and the average result suggests the original timeframe was unrealistic.
However, if a task requires users to enter information that is impossible to provide, this would be considered a ‘fail’ in your usability metrics. You’ll recognize this one if you’ve ever been asked to fill in a digital form with a required field that doesn't permit the correct input, e.g. being asked to provide a phone number in a text-only box.
How to measure fail rate
You can measure this by using usability testing tools that record users’ clicks during the usability test. Or, to calculate a fail rate, follow this formula:
Fail rate = (Number of failed tasks / total number of assigned tasks) x 100
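To make the formula concrete, here’s a minimal sketch (hypothetical outcome labels, not tied to any specific testing tool) that counts abandoned tasks against the total assigned:

```python
# Hypothetical sketch: fail rate from per-task outcomes, where a task
# counts as failed when the participant abandoned it without completing.
def fail_rate(outcomes):
    # outcomes: one label per assigned task, e.g. "direct", "indirect", "failed"
    failed = sum(1 for outcome in outcomes if outcome == "failed")
    return failed / len(outcomes) * 100

outcomes = ["direct", "indirect", "failed", "direct", "failed"]
print(fail_rate(outcomes))  # 40.0
```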
2. Time on screen
Type: Duration metric
Time on screen measures how long a user spends on a particular screen. Users spending a long time on a screen can be an indicator they can't find what they’re looking for (except for blog posts or pages with readable content, where the user might naturally spend more time). If you review time on screen alongside metrics like misclick rate, you can spot issues with the interface, labels, or layout of the page.
How to measure time on screen
Time on screen is usually calculated by your testing platform. It records the amount of time your user spends on the same screen before moving to a different one. However, if you’re doing this in person, you can calculate it using a stopwatch or clock.
3. Time on task
Type: Duration metric
Time on task measures the duration users take to complete a task; users taking too long to complete a task might indicate they can't find what they're looking for or are lost trying to complete the task.
Time on task isn’t a metric that you should review on its own. You need to contextualize the number and try to understand why it’s taking so long for users to complete a task. You want to know if users are taking longer than expected on a task because of the design, copy, instructions, or information architecture.
How to measure time on task
Similar to time on screen, your testing platform should calculate this as your users go through your test. It can be particularly useful to review screen recordings at the same time, to pinpoint where users are struggling. If you’re doing this in person, you can use a timer to manually measure the time your users spend on each task and have a moderator track where difficulties arise.
Product tip ✨
Maze can record usability metrics automatically for you during unmoderated usability evaluations. Try these templates for usability testing and create test projects within minutes.
4. Misclick rate
Type: Errors
The misclick rate is the percentage of clicks that land outside the hotspots or clickable areas of your product. Misclicks usually happen when your user finds your platform unintuitive or expects it to act similarly to other websites.
According to Jakob's Law, “users spend most of their time on other websites”, so they expect yours to function like the ones they already know. If it doesn’t, this can cause usability issues. Review the misclick rate along with a detailed click heatmap to see exactly where your users are clicking, and adjust your design accordingly to clarify.
How to measure misclick rate
To calculate your product’s misclick rate, you need to divide the number of misclicks by the total number of clicks and multiply it by 100.
Misclick rate = (Number of user misclicks / total number of user clicks) x 100
For example, if a user made 15 clicks during the test and three were misclicks, your misclick rate will be 20%.
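The same calculation can be sketched from raw click coordinates. This is a hypothetical example (the hotspot rectangles and helper are illustrative) that counts a click as a misclick when it lands outside every defined clickable area:

```python
# Hypothetical sketch: misclick rate from click coordinates, counting a
# click as a misclick when it falls outside every defined hotspot.
def misclick_rate(clicks, hotspots):
    # hotspots: list of (x_min, y_min, x_max, y_max) clickable rectangles
    def hits(x, y):
        return any(x0 <= x <= x1 and y0 <= y <= y1
                   for x0, y0, x1, y1 in hotspots)
    misclicks = sum(1 for x, y in clicks if not hits(x, y))
    return misclicks / len(clicks) * 100

hotspots = [(0, 0, 100, 40)]  # e.g. one button's bounding box
clicks = [(10, 10), (50, 20), (90, 30), (20, 5), (200, 300)]  # 1 misclick
print(misclick_rate(clicks, hotspots))  # 20.0
```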
5. Number of errors
Type: Errors
This is a pretty self-explanatory metric: it counts the errors your participants make while attempting to complete a task—accidental actions, slips, or oversights. The higher the error rate, the harder users find your product to use.
For example, let’s say your users can’t register because the platform has strict password requirements—if these prerequisites aren’t disclosed to the user, they’ll end up trying different combinations until they make it or give up. A high error rate here might indicate you need to disclose how many letters, numbers, and special characters the password should contain.
How to measure the number of errors
To measure this, you simply need to keep count of the times your users make a mistake while completing a task. Depending on the testing tool you’re using, you might get a detailed description of each error so you can classify them. Or, you might get this rate included in an overall usability score.
6. Task level satisfaction
Type: Satisfaction metric
To test task-level satisfaction, you should ask your users how satisfied they are with the task they attempted to complete. Do this by including a questionnaire of three to five questions at the end of each task.
Your questions shouldn’t be open-ended, and users should be able to answer using a scale, to gather quantitative results. Open-ended questions can be difficult to measure and may result in unclear or subjective responses. For example, asking "How satisfied were you with your experience?" may result in a range of responses that are difficult to compare or analyze.
This metric is relevant to measure user satisfaction and gather insights into what’s causing users to struggle or succeed. Frame questions in a specific, measurable way, such as “On a scale of 1-5 where one is difficult and five is easy, how easy was it to complete this task?" This allows users to answer in a way that is easy to compare and analyze across multiple respondents.
How to measure task and test level satisfaction
To assess task level satisfaction, use one of these questionnaires:
- After scenario questionnaire (ASQ): Ask three questions about the task they’ve just completed—measuring satisfaction with time, support, and ease of use
- Usability magnitude estimation (UME): Users need to assign a rating to tasks in just one question
- Single ease question (SEQ): Participants need to rate how easy it was to complete the task on a scale from one to seven
7. Test level satisfaction
Type: Satisfaction metric
To assess test level satisfaction, you should use a longer questionnaire (10+ questions) and share it with users as soon as they finish the test session. This metric allows you to understand how a user feels directly after interacting with your platform.
How to measure test level satisfaction
To calculate test level satisfaction, use one of these surveys:
- System usability scale (SUS): Ask your users to answer 10 questions at the end of the session and gauge their input on: whether they’d use it again, if they had the right support, and if they found it easy to use. If you’re using Maze, you can speed through setup with this system usability scale template.
- Standardized user experience percentile rank questionnaire (SUPR-Q): Measure the overall user experience after the test by asking 13 questions on usability, trust, appearance, and loyalty
- Computer system usability questionnaire (CSUQ): Ask 19 questions to your users for them to answer using a scale of one to seven, to get an overall satisfaction score
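Of these, SUS has a standard scoring method worth knowing: responses use a 1–5 scale, odd-numbered (positively worded) items contribute their score minus one, even-numbered (negatively worded) items contribute five minus their score, and the sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch:

```python
# Standard SUS scoring: odd items (indices 0, 2, ...) contribute
# (score - 1), even items contribute (5 - score); sum x 2.5 -> 0-100.
def sus_score(responses):
    # responses: one participant's answers to the 10 SUS items, in order
    assert len(responses) == 10
    total = sum(
        (score - 1) if i % 2 == 0 else (5 - score)
        for i, score in enumerate(responses)
    )
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (best possible)
print(sus_score([3] * 10))                         # 50.0 (all neutral)
```

Note the 0–100 result is not a percentage; scores above roughly 68 are generally considered above average.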
Should I be tracking usability metrics?
Depending on the usability testing methods you choose to conduct, you’ll measure a different set of metrics—but every type will offer some insight into your users and their experience with your product.
Here’s a quick recap of the usability metrics you should track, depending on the insights you want to get:
- Task performance: Completion rate, direct success rate, indirect success rate, failure rate
- Task duration: Time on screen, time on task
- Task accuracy: Misclick rate, number of errors
- User satisfaction: Task level satisfaction, test level satisfaction
If you want to calculate those metrics automatically, opt for a continuous product discovery tool like Maze: you can get all your usability testing results automatically turned into a ready-to-share report, which allows you to instantly act on the data.
The metrics matter
It's easy to forget to plan for usability metrics—after all, you'll gather data either way, right?
But if you want to maximize your insights, it's well worth taking the time to consider what types of metric will best inform your product and answer your research questions.
Once you've pinpointed this, you can design your usability test around these metrics, and start gathering data that offers a well-rounded, contextualized understanding of your product's usability.
Frequently asked questions about usability metrics
How can I measure website usability?
To measure website usability, you need to conduct usability studies. Here’s how:
- Define your testing objectives
- Determine the usability research methods
- Choose a testing tool
- Recruit participants
- Create test scenarios
- Build tests
- Conduct the study
- Review the results
- Implement changes
- Continue testing your live website for usability
What are the different types of usability metrics?
There are four different types of usability metrics:
- Completion or success metrics: Measure if users succeed or fail when completing test tasks
- Duration metrics: Track the time users take to finish tasks
- Errors: Measure the actions users take on tests that lead them to an unintended outcome
- Satisfaction metrics: Determine how satisfied your users are with your product and its UX
How can I improve user engagement on my website?
You can improve user engagement on your website by:
- Testing your site for usability and identifying bottlenecks
- Making your screens more intuitive
- Improving your page loading time
- Writing engaging and clear copy
- Creating long-form content and inviting users to interact in comments
How can I improve user flow on my website?
You can improve user flow on your website by getting to know your users and mapping the customer’s journey. By identifying their pain points and each touchpoint they have with your product, you’ll be able to provide better and more intuitive solutions that they’ll use and recommend. Study your expected paths, content hierarchy, and copy.