UX and Product Metrics: A Guide to Numbers Worth Measuring
Metrics inform us about countless aspects of our products. Without data, we wouldn’t be able to know if our users are satisfied, how many of them are onboarded, and if the product is profitable. Yet, you can track so many metrics that they would become one big noise if you decided to keep an eye on all of them. As a UX practitioner or Product Manager, you need to be strategic about picking the right metrics for your product and processes. In today’s episode, we will discuss the following:
What are the reasons to measure?
How do UX and product metrics differ?
UX metrics worth tracking.
Product metrics worth tracking.
And a word on NPS.
📣 We want to hear from you
It’s been 20 weeks since we launched Fundament. We are very grateful that you have trusted us and subscribed to our newsletter. No matter if you have been with us since Episode 1 or joined just today, we want to hear from you. Please take a minute and fill out this short anonymous survey. This data will help us tailor our future articles 🙌
Reasons to measure
There are numerous reasons to measure numbers from both UX and product perspectives. The three most obvious ones are to get a sense of whether the product is moving in the right direction, learn if it’s healthy, and if it’s profitable.
Thanks to the collected data, product people are able to make more informed, data-driven decisions. Metrics can also tell pretty early that there are flaws with the usability of the product and help prioritize design initiatives around areas with the biggest room for improvement. Lastly, metrics tell you how successful your product is.
The majority of metrics that we will discuss in this article make sense only when the product is operational, but some of them can be applied in pre-production phases such as product discovery and development.
Metrics are usually linked with KPIs or OKRs, so by nature, they are something we report to executives. Still, some metrics, such as Time Spent on Task or Error Rate, might remain internal and never be reported.
UX versus Product metrics
Despite the fact that UX Designers and Product Designers work closely with Product Managers, at the end of the day, they care about different factors. UX Designers are advocates for the users' voice, which means that their ultimate goal is to provide the most usable and user-friendly solution. In return, users are satisfied because they can fulfill their goals.
Satisfied users eventually stick with the product and pay for it. How many people create accounts, how many convert into paying users, and which features are used the most are what Product Managers care about. Most of these metrics are influenced by good (or bad) user experience, but their ultimate owners are the PMs.
UX metrics worth tracking
There are myriad UX metrics available, but it absolutely does not make sense to keep track of all of them. You have to be strategic in selecting metrics depending on the product maturity and team setting. Here’s the list of metrics that, based on my experience, are the ones worth measuring:
Customer Satisfaction Score (CSAT)
Customer Effort Score (CES)
Time spent on task
Error rate
Page load time
Customer Satisfaction Score (CSAT)
Customer Satisfaction Score is an easy metric to track. It basically tells how satisfied the customers (users) are with your product or service. This metric is usually owned by the Customer Success team but can easily be adopted as a UX metric owned by the UX Design team.
How to measure CSAT
In a survey, ask your users how satisfied they are with your product or service using a scale. Depending on how sensitive you want it to be, use a 1-5, 1-7, or 1-10 scale, where 1 would always be "least satisfied". After taking the measurement, simply average all the answers, and that's your Customer Satisfaction Score.
But if you do just that, you won’t know why your users are scoring their experience in a certain way. To learn that, add another question to the survey. Ask why they gave your product or service such a score in an open-ended question. These comments will tell you why people love (or dislike) your product and in which areas there’s room for improvement.
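The calculation above is just an arithmetic mean over the survey answers. As a minimal sketch (the responses below are hypothetical, on a 1-5 scale):

```python
def csat(scores):
    # CSAT as described above: the mean of all survey answers
    return sum(scores) / len(scores)

# Hypothetical survey responses on a 1-5 scale
responses = [5, 4, 4, 3, 5, 2, 4]
print(round(csat(responses), 2))  # 3.86
```

Some teams instead report the percentage of "satisfied" answers (the top two scale points), but the plain average is the simplest place to start.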
When to measure CSAT
Depending on the maturity and complexity of the product or service, the measurement of CSAT can be triggered differently. In complex, rich-in-features products, you might want to measure CSAT for specific user flow and trigger an inline survey after a user goes through the flow a couple of times. If your product is an internal tool, you can email your users with a link to the survey on a regular basis, e.g., quarterly or monthly.
Customer Effort Score (CES)
Customer Effort Score is an easy metric to deploy and track. It indicates how easy it is to interact with a certain feature or the whole product.
How to measure CES
Similarly to CSAT, you can start by creating a simple survey consisting of two questions. In the first one, you ask how easy it was to interact with a certain feature on a scale of 1-5, 1-7, or 1-10. In the second one, you can ask what factors influenced such a score. If you ask just for the score, this metric becomes The Single Ease Question (SEQ).
When to measure CES
Typically, you want to trigger the survey after a user interacts with a brand-new feature or goes through a specific flow that you need to gather some data about. The main difference between CSAT and CES is that CES can also be utilized in a usability testing study. During such a session, once a participant finishes a task, you can ask how easy it was for them. If the score is low for the majority of participants, it indicates that the user flow might be overcomplicated and difficult to comprehend.
Time spent on task
Another useful and easy-to-implement metric is Time spent on task, sometimes called Task completion time. It’s basically the amount of time a user needs to complete a certain task expressed in minutes and seconds.
How to measure Time spent on task
There are two methods of measurement. You can time it yourself using a stopwatch during a moderated user testing session, or you can ask the participant of an unmoderated session to report the time. In some scenarios, you might want to go a bit deeper and do intervals, taking measurements for different parts of the user flow separately.
When to measure Time spent on task
The only scenario where it's worth taking this measurement is a task-based usability testing session. Per Jeff Sauro's recommendation, when reporting times after the study, the geometric mean or the median works better than the arithmetic mean, because task times are typically skewed by a few slow participants.
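Following that recommendation, here is a minimal sketch with hypothetical timings (in seconds) showing why the arithmetic mean can mislead:

```python
import statistics

# Hypothetical task completion times in seconds; one participant was
# much slower, which is common and pulls the arithmetic mean upward.
times = [42, 55, 48, 180, 51, 60]

arith = statistics.mean(times)               # inflated by the outlier
median = statistics.median(times)            # robust central tendency
geo_mean = statistics.geometric_mean(times)  # requires Python 3.8+

print(median)  # 53.0
```

The median (53.0 s) and geometric mean both sit well below the arithmetic mean here, which better reflects a typical participant.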
Error rate
This metric tells you how many errors users make on average before completing a task. A high error rate indicates issues with the usability of the tested prototype.
How to measure Error rate
Count all the errors a participant makes in the user testing session and classify them by severity afterward. Look for U-turns, unintentional clicks, and clicking on wrong items. Calculate the error rate by dividing total errors by total tasks and multiplying by 100.
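The formula above is a simple ratio. A minimal sketch with hypothetical session counts:

```python
def error_rate(total_errors, total_tasks):
    # Errors per task, expressed as a percentage, per the formula above
    return total_errors / total_tasks * 100

# Hypothetical session: 3 errors observed across 5 tasks
print(error_rate(3, 5))  # 60.0
```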
When to measure Error rate
A task-based user testing study is the best scenario to measure the Error rate. It will work for both moderated and unmoderated variants.
Page load time
For some, putting this metric on the UX metrics list might not be an obvious choice. However, the time a user has to wait until the page is loaded and ready to interact with has a major influence on the overall user experience. According to a study by Portent, a load time between 0-4 seconds is best for optimum conversion rates. Google’s Core Web Vitals will score your website Good if the Largest Contentful Paint (LCP) is under 2.5 seconds and Poor if it’s above 4 seconds.
What's important to remember is that this metric is very technical and largely influenced by non-design factors, so its ultimate owner would be either a Tech Lead or a PM.
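The Core Web Vitals thresholds cited above can be expressed as a tiny classifier. This sketch only encodes the two LCP cut-offs mentioned in this section; Core Web Vitals as a whole covers more signals:

```python
def lcp_rating(lcp_seconds):
    # Thresholds from Google's Core Web Vitals for Largest Contentful Paint
    if lcp_seconds <= 2.5:
        return "Good"
    if lcp_seconds <= 4.0:
        return "Needs Improvement"
    return "Poor"

print(lcp_rating(2.1))  # Good
```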
Product metrics worth tracking
Numerous product metrics are just waiting to be tracked, but in today's episode, I narrowed the list down to six worth tracking:
Customer Acquisition Cost (CAC),
Customer Lifetime Value (CLTV),
Stickiness (DAU and MAU),
Retention rate,
Feature usage,
MRR and ARR.
Depending on the product maturity and its specifics, you might want to track more than just these six metrics, and that’s absolutely fine.
To track them, you can use specialized software such as Qualtrics, Pendo, Amplitude, Mixpanel, Datadog, or a custom backend solution.
Customer Acquisition Cost (CAC)
Customer Acquisition Cost (CAC) tells how much, on average, it costs to win a new customer. To calculate CAC, sum up the total cost of marketing and sales and divide it by the number of new customers.
This metric goes hand in hand with CLTV (Customer Lifetime Value), and you should look at them in parallel. If your CLTV is low and CAC is high, you might need to find more cost-effective methods of winning new customers.
Customer Lifetime Value (CLTV)
Customer Lifetime Value (CLTV/CLV) represents the total amount of money a customer is expected to spend on a company's products or services over the course of their relationship. To calculate CLTV, take the average order value and multiply it by the purchase frequency and the average lifetime. For example, if you run a SaaS product and the subscription cost is $25 monthly, and an average customer lifetime is two years, your CLTV is $600.
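Both formulas are straightforward to compute. A minimal sketch using the SaaS example above ($25/month over a two-year average lifetime) and hypothetical acquisition costs:

```python
def cac(marketing_cost, sales_cost, new_customers):
    # Total acquisition spend divided by customers won
    return (marketing_cost + sales_cost) / new_customers

def cltv(avg_order_value, purchases_per_year, avg_lifetime_years):
    # Average order value x purchase frequency x lifetime
    return avg_order_value * purchases_per_year * avg_lifetime_years

# The article's example: $25/month subscription, two-year lifetime
print(cltv(25, 12, 2))  # 600

# Hypothetical: $8,000 marketing + $2,000 sales for 50 new customers
print(cac(8000, 2000, 50))  # 200.0
```

If CAC ($200) stays well below CLTV ($600), acquisition is paying for itself; if the numbers invert, it's time to revisit your channels.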
Stickiness (DAU and MAU)
Daily active users and Monthly active users are product engagement metrics that show the number of users your product has daily and monthly. An active user is anyone who opens or interacts with the product within the given day or month.
DAU can quickly reveal the effect of introducing new features or removing old ones. MAU, on the other hand, can be a good indicator of the overall health of the product.
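The two numbers are often combined into a single "stickiness" ratio, DAU divided by MAU, which estimates the share of monthly users who show up on a typical day. A minimal sketch with hypothetical counts:

```python
def stickiness(dau, mau):
    # DAU/MAU ratio as a percentage: how many monthly users
    # return on a given day
    return dau / mau * 100

# Hypothetical: 1,200 daily actives out of 6,000 monthly actives
print(round(stickiness(1200, 6000), 1))  # 20.0
```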
Retention rate
Once you start seeing new customers coming every day, you will feel like you did a great job, and everything should now go smoothly. But what if they run away from your product just after a few days and never come back?
That's what the Retention rate represents. The higher it is, the more customers stay with your product after joining. If it's low, it's probably the best time to shift the effort from customer acquisition to other initiatives and look for improvements in your product that would make your users stay.
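One common way to calculate it (an assumption here, since the article doesn't spell out a formula) is to take the customers at the end of a period, subtract those acquired during the period, and divide by the customers you started with:

```python
def retention_rate(end_customers, new_customers, start_customers):
    # ((E - N) / S) * 100: share of starting customers still around
    return (end_customers - new_customers) / start_customers * 100

# Hypothetical quarter: started with 1,000, gained 150, ended with 950
print(retention_rate(950, 150, 1000))  # 80.0
```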
Feature usage
This metric helps in tracking which features are used most frequently and by whom. Features on the top of your chart are the ones that bring the most value to your customers and deserve special treatment.
It also can tell which features are not being touched by the users very often, which may indicate a particular feature does not bring enough value or has usability flaws. Be careful about making any calls just by looking at this metric. Some features, by their nature, will not be visited very frequently. For example, administrative features are used quite rarely, and it does not mean that you should sunset them.
MRR and ARR
Monthly Recurring Revenue and Annual Recurring Revenue are business metrics that let you project the total revenue generated by your product every month and every year. These numbers are important for two reasons. Firstly, they help in financial forecasting and planning investments, ergo planning product initiatives. Secondly, these are hard numbers that are easy to share with, and understand by, stakeholders and shareholders.
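For a subscription product, MRR is the sum of active subscriptions times their monthly prices, and ARR is simply MRR times twelve. A minimal sketch with hypothetical plan data:

```python
def mrr(subscribers_by_plan):
    # Sum of (subscriber count * monthly price) across all plans
    return sum(count * price for count, price in subscribers_by_plan)

def arr(monthly_recurring_revenue):
    # Annualized recurring revenue
    return monthly_recurring_revenue * 12

# Hypothetical plans: (subscribers, monthly price in USD)
plans = [(120, 25), (40, 79)]
monthly = mrr(plans)
print(monthly, arr(monthly))  # 6160 73920
```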
A word on NPS
Net Promoter Score is a metric representing customer loyalty. To measure NPS for your business, deploy a survey with the question, “On a scale of 0-10, how likely are you to recommend our product to your friend or colleague?” where 0 means not likely at all and 10 is very likely. People choosing 0-6 are called Detractors, 7-8 Passives, and 9-10 Promoters. To calculate the final score, use the following formula:
Net promoter score = total % of promoters – total % of detractors
Yes, you are right. An NPS can have a negative value.
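The formula above maps directly to code. A minimal sketch with a hypothetical batch of answers to the 0-10 question, chosen so the result comes out negative:

```python
def nps(scores):
    # Promoters score 9-10, Detractors 0-6, per the definition above
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) / len(scores) * 100

# Hypothetical responses: 2 promoters, 4 detractors, 2 passives
print(nps([9, 7, 6, 3, 10, 2, 5, 8]))  # -25.0
```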
There's some controversy around treating NPS as a UX metric. To learn why it should not be your only way to measure customer sentiment (or, even worse, usability), head over to these two articles:
Net Promoter Score Considered Harmful (and What UX Professionals Can Do About It) by Jared M. Spool
Net Promoter Score (NPS) is not harmful. Believing in silver bullets is by Aga Szóstek
Further reading:
A Guide to Task-Based UX Metrics by Jeff Sauro, PhD, Jim Lewis, PhD
11 Website Page Load Time Statistics [+ How to Increase Conversion Rate] by Kristen Baker
♻️ Share this article
If you found this post useful or entertaining, consider sharing it with one of your design or product management pals!
💼 Job Alert
Quale is actively searching for a Freelance UX Researcher. This is a remote position (Poland). For more details, check this LinkedIn post.
This is not an advertisement or an endorsement.
🛠️ Tool of the week
Alphredo
With this online tool, you can create transparent colors to match the opaque ones from your current palette. You can customize the background and saturation and export colors in HSLA, HEX, or JSON format. Alphredo was made by Adam Ruthendorf-Przewoski.