Despite being a small team of UX designers, we involved an impressive number of respondents in our user research sessions and surveys to make sure we were building the right things.
- First impression and preference tests
- Visual appeal surveys
- Card sorting
- SUPR-Q scores
- A/B tests
- Heatmaps and recordings
- 470+ user tests and interviews
- 12 design sprints
- Product-Market fit surveys
- Custom intercept surveys
- SUPR-Q qualitative comments
As you can see above, we used a wide range of user research methods, from usability testing and interviews to A/B tests and card sorting.
In most cases, we combined findings from the different research methods to get more reliable results. A good example of this was the research we conducted during the redesign of Mastersportal.
I led the UXD team at Studyportals, where we built a robust user research practice. I contributed to the research setup, interviewed respondents, and analysed and synthesised research findings. Additionally, I focused on automating user recruitment and on better ways to present and document research findings.
Studyportals is a leading education choice platform, visited by over 44M visitors in 2020 alone. The platform lists over 160K courses from more than 5K educational institutions on six portals, such as Mastersportal, Bachelorsportal, and PhDportal.
Studyportals' mission is to make sure that no student misses out on an education opportunity due to a lack of information. Studyportals has helped at least 485,000 students find their education.
The rapid growth of the Engineering and UXD teams posed a range of challenges. The biggest one was staying ahead of the development teams when it came to user research. We already ran bi-weekly user tests, but we mainly tested what was already released, which led to the risk of developing features our visitors didn't need.
Below you can see a simplified user research flow we had at the time, with approximate sample-size guidance for each research method.
Having the bi-weekly user test sessions was a great way to improve our products and the design maturity of the organisation. However, we didn't always have enough topics to test every two weeks, and it was putting a significant strain on the team.
We wanted to find a sustainable way to integrate a diverse range of user research methods in the different stages of product development.
Our qualitative research activities fell into two, at times overlapping, categories:
- Tactical, feature-based and detail-focused tests
- Strategic, long-term research
To reduce the subjectivity of our findings, we combined qualitative and quantitative methods to see what students do and what they say.
As our websites are visited by millions of students each month, it's easy to get fresh input about their experiences with our product. Our intercept surveys often combined quantitative and qualitative questions. A few notable examples were:
- Standardised User Experience Percentile Rank Questionnaire (explained here).
- Product-Market fit survey (by Sean Ellis).
- Visitors' goal and stage in the educational programme search journey.
Product-Market fit survey
When we asked our visitors in March 2021 how they would feel if they could no longer use our product, 82% of the respondents stated they would be very disappointed, well above the 40% benchmark.
Follow-up questions were useful to confirm our findings from the qualitative feedback we got from SUPR-Q.
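The Product-Market fit score itself is simple arithmetic: the share of respondents who answer "very disappointed" to the Sean Ellis question, compared against the 40% benchmark. A minimal sketch (the function name and the sample distribution below are illustrative, not our actual survey data):

```python
from collections import Counter

def pmf_score(responses):
    """Share of respondents answering 'very disappointed' to the
    Sean Ellis question: 'How would you feel if you could no longer
    use this product?'"""
    counts = Counter(responses)
    return counts["very disappointed"] / len(responses)

# Hypothetical sample of 100 survey answers
sample = (
    ["very disappointed"] * 82
    + ["somewhat disappointed"] * 13
    + ["not disappointed"] * 5
)

score = pmf_score(sample)
print(f"PMF score: {score:.0%}")            # 82%
print("Above 40% benchmark:", score > 0.40)  # True
```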
Design sprints proved to be a fast way to understand and test complex ideas with the added bonus of involving stakeholders in the design process.
We ran our first design sprint in 2016, a few months after Jake Knapp and John Zeratsky published the book about their process.
In the same year, I presented my experience facilitating design sprints at three companies and two conferences. In 2017, we ran five design sprints in parallel.
Screen recording viewing sessions
Another way to engage the engineering team and stakeholders in user research was 2-hour Hotjar recording viewing sessions. The main goals of those sessions were to find out:
- Where the users get stuck.
- The opportunities for improvement and bugs.
User recruitment was a shared task in the UXD team. It often required a lot of effort and delivered unpredictable results. That's why we automated the majority of the recruitment tasks. Check out my presentation about the user recruitment automation setup we used to get non-trained respondents for our research sessions.
We usually started the recruitment process 2–3 weeks before a session to have enough participants. To reach non-professional respondents, we used Facebook ads, Hotjar polls and our email newsletter.
Different research sessions required participants with different backgrounds, so we used personas to construct our screening surveys. To save respondents' time, we placed the critical questions first. Only after the consent form was signed, we would ask for personal details.
In recent years we mostly performed remote user research. The main reason was that it better replicates the environment in which our users actually use our products than in-person sessions do. Additionally, it allowed us to reach far more students around the world. And with the help of Calendly, we saved a lot of time on session scheduling and reminders.
Just doing more user research was never the final goal of the UXD team. Instead, we wanted to build a design culture that ensures that we help students find their dream education.
Measuring design maturity is tricky and often based on self-reported surveys. In our case, we got a few more objective achievements:
- Everyone in the company was exposed to user research outcomes at least once every month.
- Every story larger than a quick fix was evaluated with ICE methodology (Impact, Confidence, Effort). The confidence component is directly connected to the research conducted.
- The UX and Product team conducted user research sessions and defined product strategy based on the research outcomes.
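One common formulation of the ICE score multiplies impact by confidence and divides by effort, so that better-researched stories (higher confidence) rank higher for the same impact. A sketch under that assumption (the scales, story names, and numbers are hypothetical):

```python
def ice_score(impact, confidence, effort):
    """ICE prioritisation: higher impact and confidence raise the score,
    higher effort lowers it. All inputs assumed on a 1-10 scale."""
    return impact * confidence / effort

# Hypothetical backlog items: research-backed confidence changes the ranking
stories = {
    "Redesign search filters": ice_score(impact=8, confidence=7, effort=5),
    "Fix broken footer link":  ice_score(impact=2, confidence=9, effort=1),
}

for name, score in sorted(stories.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

The division by effort is one of several conventions; some teams average the three components instead. The key point from our process is that the confidence input came from user research, not gut feeling.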
- Pragmatism goes a long way in introducing new processes, but it's important to set principles that shouldn't be broken. For example, allowing stakeholders to participate in only part of a design sprint can discredit the results.
- No-shows are expensive for your credibility. Recruit more respondents than you need and pay more than participants’ hourly rate.
- Asking respondents to provide a phone number allowed us to filter out less interested candidates and reduced no-shows.
- Sometimes quantitative research opens doors for more qualitative methods.