Measuring user experience

A system for continuous measurement of the product UX

Results

We now have a system to continuously measure the user experience of our products, and a process to ensure timely follow-up on the feedback. It is a great addition to the other user research methods we practise.

Over the last two years, we gathered 11 908 quantitative and 2 062 qualitative responses from our engaged visitors. Every month we cluster the responses into themes (e.g. general content quality, UI, tuition fees) to inform our product strategy and detect issues.

11 908
Quantitative responses

2 062
Meaningful qualitative responses

My role

This project started as a hackday activity and grew into one of the metrics we use to measure success in the Engineering department.

In the different phases of this project, colleagues from the UX, Product, Engineering, Big Data and Management teams helped me with labelling visitors' feedback, automating data cleanup, and providing suggestions for improvement.

Context

Studyportals is a leading education choice platform with over 44M visitors in 2020 alone. The platform lists over 160K courses from more than 5K educational institutions across six portals, such as Mastersportal, Bachelorsportal and PhDportal.

Studyportals' mission is to make sure that no student misses out on an education opportunity due to a lack of information. So far, Studyportals has helped at least 485 000 students find their education.

Challenge

There is a lot of subjectivity when it comes to measuring design quality and user satisfaction. As the lead of the UXD team, I was always interested in measuring success more objectively. When this project started, we already had a robust system to measure conversions, but we lacked a stable user-focused counter-metric to ensure the long-term success of the product.

Choosing the metric

We wanted to go further than NPS. We had already experimented for a few years with different ways to measure user satisfaction, but none of them covered all the aspects we cared about. The scores also fluctuated too much, which made it hard to see how our changes affected the product.

Our choice fell on the Standardised User Experience Percentile Rank Questionnaire (SUPR-Q). It focuses on four aspects of user satisfaction: usability, trustworthiness, loyalty and appearance. And it contains only eight items, so visitors can complete it quickly.

To understand the scores better, we also added one qualitative question at the end.
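For illustration, here is a minimal sketch of how a raw SUPR-Q score could be aggregated, assuming the commonly documented scoring in which the 0–10 likelihood-to-recommend item is halved before averaging with the seven 5-point items; the percentile rank itself comes from a normative database and is not reproduced here.

```python
# Sketch of raw SUPR-Q aggregation (assumed scoring: seven 5-point items
# plus the 0-10 likelihood-to-recommend item divided by two, then averaged).
# The percentile rank requires a normative database and is out of scope here.
from statistics import mean

def raw_supr_q(item_scores: list[float], recommend_score: float) -> float:
    """item_scores: seven 5-point Likert answers; recommend_score: 0-10 scale."""
    if len(item_scores) != 7:
        raise ValueError("Expected seven Likert items")
    return mean(item_scores + [recommend_score / 2])

# Example: a fairly satisfied respondent
print(raw_supr_q([4, 5, 4, 4, 5, 3, 4], recommend_score=8))  # 4.125
```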

Setup

To capture feedback from visitors who are browsing our website, we set up a simple intercept survey via Hotjar.

We show the survey to a small percentage of engaged visitors to reach 400–600 complete responses each month. That way, we don't distract most of our users and have a relatively low margin of error (below 2%) every month.
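As a rough back-of-the-envelope check of that sample size, the sketch below computes a 95% confidence margin of error for a mean score; the sample size and standard deviation in it are illustrative assumptions, not our actual figures.

```python
# Rough margin-of-error check for a monthly sample (illustrative numbers only).
import math

def margin_of_error(std_dev: float, n: int, z: float = 1.96) -> float:
    """95% confidence margin of error for a mean."""
    return z * std_dev / math.sqrt(n)

# Assuming ~500 complete responses and a standard deviation of ~20 points
# on a 0-100 scale, the margin of error stays under 2 points.
print(round(margin_of_error(std_dev=20, n=500), 2))  # ~1.75
```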

Hotjar records each respondent's country, device, OS, browser version and the page on which the feedback was given. This allows us to segment the responses, detect issues and understand the feedback better.
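A minimal sketch of how such an export could be segmented with pandas; the file name and column names (device, country, supr_q) are hypothetical, not Hotjar's actual export schema.

```python
# Segment survey responses by device and country (hypothetical column names).
import pandas as pd

responses = pd.read_csv("hotjar_export.csv")  # hypothetical export file

segments = (
    responses
    .groupby(["device", "country"])
    .agg(avg_score=("supr_q", "mean"), responses=("supr_q", "size"))
    .sort_values("avg_score")
)
print(segments.head(10))  # the lowest-scoring segments surface issues first
```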

Data cleanup and processing

Performing routine tasks every month is rarely a gratifying experience, so we automated big chunks of data cleanup and enrichment with Python.

For example, to simplify calculations, partial responses are removed. We also noticed that qualitative responses shorter than ten characters are rarely useful, so the script labels those responses automatically. This way, clustering of 500 qualitative responses takes only an hour every month.
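A simplified sketch of that cleanup step, assuming a pandas workflow with hypothetical file and column names; the real script also does enrichment not shown here.

```python
# Remove partial responses and flag very short qualitative answers
# (column and file names are hypothetical).
import pandas as pd

LIKERT_COLUMNS = [f"q{i}" for i in range(1, 9)]   # the eight survey items
MIN_USEFUL_LENGTH = 10                            # characters

df = pd.read_csv("hotjar_export.csv")

# Drop partial responses: any row missing one of the eight item scores
df = df.dropna(subset=LIKERT_COLUMNS)

# Label very short qualitative answers so they can be skipped
# during manual clustering
df["comment"] = df["comment"].fillna("")
df["useful_comment"] = df["comment"].str.strip().str.len() >= MIN_USEFUL_LENGTH

df.to_csv("cleaned_responses.csv", index=False)
```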

Reporting

The first iteration of the report was a huge Excel file with notable quotes and score graphs. However, it required every stakeholder to dive in and play around with Excel filters. As not everyone enjoyed that, I looked into different ways of communicating the results.

The next major iteration was a few-page report in Confluence with a fixed monthly structure.

Every month, together with all Product Owners and Designers, we go over the previous month's report and align on potential action points. These reports are also useful for quarterly and yearly planning sessions, as they provide a quick snapshot of what we do well and what we can improve.

Impact

The survey became part of our product discovery process and of the company metrics: a low-effort user research method that brings both the numbers and the quotes appreciated by stakeholders of various backgrounds.

We were able to detect and fix more bugs, issues and outdated content with less effort.

Everyone in the company is exposed to user research outcomes every month.

Reflection

Additional references

  1. Leading and Lagging Measures in UX
  2. Net Promoter Score Considered Harmful (and What UX Professionals Can Do About It)
  3. Best practices for graphing & displaying data
  4. Quantifying The User Experience: Practical Statistics For User Research
  5. An overview of the various questionnaires that measure software and website quality (pages 70–72)
