Usability tests check if people can use a product. Duh. But what does “usable” mean? Steve Krug, author of Don’t Make Me Think, summed it up: “It really just means making sure that something works well: that a person of average ability and experience can use the thing—whether it’s a website, a toaster, or a revolving door — for its intended purpose without getting hopelessly frustrated.”
This guide shares an overview of the processes and a few tips for those who have just started out, as well as for those constantly planning and polishing their usability studies.
What is usability testing?
Usability testing is a user experience research method, usually a qualitative one, that sits at the core of UX research. It mainly checks a product’s ease of use and functionality.
A typical (potential) user of any digital product should be able to interact with it without frustration and easily accomplish their goals. Think of the findings and insights from usability testing as a polestar guiding product-related decisions: not only design, but also strategy and feature prioritization, to mention a few.
No perfect standard says how to do it. Usually, you’ll have to fine-tune and customize this method to fit the context. The most important first step: Learn the basics of usability testing, start doing tests and make them an essential part of your UX process.
What is the purpose of usability testing?
It not only serves as a method for discovering usability issues but also indicates what works. Pay close attention, though: truly intuitive, self-explanatory steps, interactions, and interface elements might not get noticed or mentioned at all by the participant during a usability test.
- Identifying main issues in the usability of a product
- Checking if users understand the steps to carry out a task and the navigation
- Observing how easily and quickly they accomplish tasks
- Validating the value proposition of an app or website – do your potential customers understand it?
- Testing competitors’ solutions. (Even without a test-ready prototype or a website in the initial phase of product development, you can get ahead by testing competitors’ solutions with the target group to gain insights on what to do better. Check how we did exactly that in one of our case studies.)
What NOT to use it for?
- Testing out the emotions and associations your design and visuals draw
- Getting quantitative data on usage
- Identifying preferences between versions – visuals or copy
- Validating desirability
- Estimating market demand and value.
👉 Pro tip: Does our product really meet a need? Are we targeting the right market? If you want to find answers to these questions, don’t expect to get them from usability testing; it’s neither the right time nor the right method for them. Ideally, the product team has already answered them. If not, take a step back. In such cases, professional market research may work better for starters.
When to do it?
The basic answer: any and every time during the product cycle.
- When forming a concept (you can even test low-effort paper prototypes or competitors’ sites or apps)
- Beginning of a project – test the current solution you want to improve or rethink
- UX design or redesign phase
- During development
- Follow up
- Digging into a strange piece of quantitative data
📌 Example: 82% of your users drop off on the checkout page of a food supplement webshop. We know the numbers but we don’t know the whys. Running a few usability tests focused on the checkout page can show the underlying reasons. It might turn out the form asks for card data too early, even before users get the chance to apply discounts.
Aim to make usability testing a recurring activity in your UX design process. Do it as often as you can, in all stages of the product cycle. If you’re only on board for a short time to tackle a focused issue, run tests throughout the project. Something can always improve the product, so don’t forget the lean UX cycle: think, make, check.
How many tests to do?
When it comes to one-on-one tests such as these, you might wonder: what makes a representative sample size? According to Jakob Nielsen, testing five people in one cycle works, as around 85% of the problems tend to surface by then. After a round, prioritize the issues, work on them, iterate, and test again.
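The five-user guideline comes from a simple probability model (Nielsen and Landauer): if each participant uncovers any given problem with probability p (around 31% on average in their data), then n participants surface a 1 − (1 − p)ⁿ share of the problems. A quick sketch of the arithmetic:

```python
def problems_found(n: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n participants,
    assuming each participant finds a given problem with probability p
    (p = 0.31 is the average Nielsen and Landauer reported)."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems")
```

With p = 0.31, five participants already surface roughly 84–85% of the problems, while going from ten to fifteen buys you very little — which is why iterating in small rounds beats one big study.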
Another popular framework, RITE (Rapid Iterative Testing and Evaluation), promotes faster processes and solutions. It favors running a single test and fixing the problems that surface immediately afterward.
Applying this framework allows you to test much more frequently, quickly and flexibly.
The image below illustrates the number of problems arising with usability test participants over time:
Failures prevented the user from completing the level, while errors just meant trouble and frustration for the user. The vertical dashed lines signify the iterations. Six rapid iterations got the prototype error-free by the twelfth test.
Feel like reading more about MVPs and iterations? Nóri has recently written an article about product lovability in the early stage – check it out!
What type of user test should you do?
First, adjust the scale, rigor, and frequency of usability testing to your project. Your approach and methodology might change considerably between working at an agency on a short-term project and being part of a product team handling the iOS app segment for the long run.
Always consider multiple factors, and stay mindful of your resources before committing to a certain approach or method.
In-person or remote? Monitored or unmonitored? No one approach always works better than the other.
For some projects, you learn the most when you just go out wherever your target audience members hang out – the mall, the airport, the coolest new café – and do quick-and-dirty testing to get early feedback (guerrilla research).
Some products merit a large-scale controlled study. Some even build their own usability labs with two-way mirrors and other fancy equipment.
If the sheer thought of investing in the construction of a usability lab alone discourages you, aim for a cute budget lab.
It might seem tedious at first: all the recruitment, preparation, set-up, analysis and report for one short little test, but it will save effort and even money in the long run.
Just go for it! Even if you can only run informal guerrilla tests in the street or the mall, that still works far better than no testing. Your effort will reap actionable insights from potential users.
How? Testing early and often will put you on the right track, giving users something they want and can easily use. That avoids costly changes post- or mid-development, or worse, after launch.
With live products, testing out ideas first lets you know what function or interaction is worth the effort to include and also what to let go of if it no longer provides users any value.
Here’s your step-by-step guide:
Step 1: Plan your study
To properly plan ahead, first clarify:
- The product to test: Are you onboarding a mobile app, a website filtering system, or a kiosk interface prototype?
- The platform: When testing a mobile app, determine if the OS matters and if it might bias the study. If so, let participants choose the OS to test on. This approach works best since, for example, a back button on Android devices can matter a lot in the user experience.
- Research objectives: One objective can be to check whether users understand the passwordless registration and login; another, whether they can easily navigate to the product detail page. Turn your high-level objectives into concrete research questions. Don’t try to cover the whole story: concentrate on a few tasks, pages, areas, and assumptions to test, and stick to them.
📌 Example: Don’t test the whole eBay site. You won’t have enough qualitative data to compare between participants, and a session doesn’t allow enough time for meaningful results unless you focus on a particular area.
- Target audience
- Who are they?
- How many participants do you need?
- How are you going to reach them?
- What will you give as a gratuity? (money, gift card, discount, tickets, lifetime access to your product, etc.)
- How would you make sure they come from the target audience?
If you need HR representatives to test a remote hiring platform and need to check their experience, current position and qualifications, make a screener with specific questions.
- Remote or in-person: Base your decision on the type of product, project scope, objectives and target audience. Choose the one that proves more feasible (technically and financially).
📌 Example: If you target people with high-demand jobs and salaries, either go to them or schedule a remote test. A neurosurgeon won’t have time to come to your location and test for an hour. Adapt to their lifestyle.
- Write a test plan
- Setting (where, when, which devices)
- Roles (moderator, observer, note-taker)
👉 Pro tip: Have no more than two people in the room with your participant. A bigger audience can lead to frustration and intimidation.
- Write a script:
- Tell the users the test’s purpose and how they might help with their honest feedback to ease their minds about the “test” situation.
- The term “usability test” is misleading. UX professionals always test the product, never the users. Emphasize that the participant does not cause failure; they cannot do anything wrong.
- Adjust the usability test script according to participants’ prior knowledge. Some explanation is required on what a prototype is if they have never even heard of the concept, but don’t overdo it for an IT guy.
- Come up with a realistic scenario. We want to hear about the users’ personal reactions to the product, we’re not usability evaluators writing reviews. Help them immerse themselves and engage in the testing.
- Come up with the exact tasks you want your participants to perform. Usually, I write down the exact wording of the tasks but often modify them on the go based on the participant’s answers to the intro questions. They should find the scenarios and tasks as realistic as possible.
- Include intro questions and a mini interview. It builds rapport between you (the moderator) and the test participant, and it provides an opportunity to validate user personas. With enough data on real users (your participants), you can revisit the personas and make them more accurate.
📌 Example: “As a homeowner with a garden, you are looking to install an automated irrigation system so you don’t have to schedule your daily routine around watering. You googled the term with your city name, and stumbled upon this website.”
Step 2: User test participants
Recruitment always makes for hard, tiring work. It can make you feel like a TV commercial, trying to reach out and “sell” the opportunity for usability testing to as many people from the target audience as possible.
What makes user testing hard?
First, many users have never heard of this UX research activity, and the whole thing might sound suspicious and cumbersome. There’s no way around this, so experiment with what works best as a recruitment script – on the phone or in email – to evoke trust rather than suspicion or confusion.
Finding the right person
When deciding who to target, keep a clear persona or a certain group of people in mind. The main rule of thumb here would be to find someone who has not tried your app before. Your developers will obviously not give you the feedback needed.
That said, having a landscape architect try out a cryptocurrency exchange application probably won’t bring many impactful insights. You’ll need someone with some level of expertise for a more specialized product.
To make sure your potential user test participant fits the criteria, make a screener. You can have an online form or pose the screening questions directly on the phone. The work is hardly done then, however: you’ll still have to sort through all these people, keep track of everyone, handle your inbox, and schedule test sessions.
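At its simplest, a screener is a handful of closed questions checked against your criteria. The questions, answer options, and the auto-qualify logic below are made up for the remote-hiring-platform example mentioned earlier; treat it as a sketch and swap in your own:

```python
# Hypothetical screener for the remote-hiring-platform example.
# Each rule maps a screener question to the set of acceptable answers.
CRITERIA = {
    "What is your current role?": {"HR manager", "Recruiter", "Talent acquisition lead"},
    "How often do you hire or interview candidates remotely?": {"Weekly", "Monthly"},
    "Have you used our platform before?": {"No"},  # exclude existing users
}

def qualifies(answers: dict) -> bool:
    """Return True only if every answer matches the target criteria."""
    return all(answers.get(question) in accepted
               for question, accepted in CRITERIA.items())

candidate = {
    "What is your current role?": "Recruiter",
    "How often do you hire or interview candidates remotely?": "Weekly",
    "Have you used our platform before?": "No",
}
print(qualifies(candidate))  # this candidate passes the screener
```

In practice you would run this over form responses to shortlist people automatically, then still sanity-check the shortlist by hand before scheduling.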
👉 Pro tip: Always schedule a couple more than needed, since usually 10-20% of scheduled tests are canceled by participants.
If you have the financial resources and can’t or don’t want to recruit, use a third-party service, like PingPong, to do it for you, selecting people from their database who match your criteria. Then prepare for a remote test with a time limit, and for participants who don’t need much explanation of the process, since they voluntarily signed up for the test.
Step 3: Lead the test
This might sound very “researchy”, but make sure to test the usability test. Take it for a dry run or do a pilot. The pilot participant doesn’t have to come from the target audience. Run the session with a colleague or annoy your friends, just try out your plan from A to Z.
You will surely find some glitches in the prototype or vagueness in the task instructions, or that you have counted on a 30-minute usability test when the pilot actually took 51.
- Have a quiet room booked for the session.
- Remind the user of the appointment.
- Print the scripts.
- Set up the equipment and the recording – check again that everything works properly.
- Welcome the user.
- Introduce yourself.
- Summarize the procedure (“We’re going to test a food delivery application specializing in Asian food”).
- Emphasize that you are testing the app, not the participant.
- Implement the “think out loud” protocol.
- Remind the participant that we want to know how they would use the product at home without help.
- Present the consent form and NDA if needed.
- Share some practical info and give the participant the opportunity to ask questions: “The test is going to take one hour.”, “If you would like to stop the test, please let me know.”, “Do you have any questions about the test? Can we start?”
During the test
- Scenario: Make the scenario specific and realistic, something the member of the target audience might actually do in their own life, like setting the scene for a situational game. Thoroughly paint the picture so the participant can easily imagine the scenario.
📌 Example: “Imagine that as marketing manager of a company, you must choose the newsletter service provider your company will use”.
- Warm-up questions
- Needs: Where does the problem arise? What are they looking for?
- Experience: How do they use similar products?
- Intentions: Why do they use those products?
- Knowledge: What do they know about the topic?
- Tasks: We at our UX company always give the participants tasks to focus their attention on some area or aspect of the interface, whether an overall impression of the landing page or the act of buying a shoe on an e-commerce website. There are three types of tasks:
- Broadly interpreted: These instructions show how users start using the application and figure out what it does. “Discover the ways you could use this application to enhance your cycling performance.” “Explore this landing page and see what this company offers.”
- Tasks related to a certain goal: These tasks aim to test certain processes. “Buy a TV!” “Sign up!”
- Tasks related to specific interface elements: “Where can you subscribe to a newsletter here?” “How can you put an ebook in the basket on this screen?”
Handy psychology basics
Some practice in psychology and interviewing comes in handy. The main principles and approaches below also help therapists during sessions:
- Be non-judgemental.
- Be non-directive (not “Where do you find the chat function?” but rather “What would you do if you needed help or wanted to contact the developers?”)
- Don’t show reactions indicating whether participants are doing the “right thing” or not, remain neutral without overdoing the poker face.
- Make your users feel at ease.
- Probe participants to say everything on their minds. We naturally hold back and don’t say everything out loud. Imagine even an hour with no filter, saying everything you think of out loud. What a mess, right? Before the user starts to compliment the research room decor, specify that we don’t want to hear ALL thoughts, just those in connection with the product and the experience of the testing.
- Pick up on non-verbal cues (frowning, fidgeting, biting lips etc.)
- Don’t be afraid to sit with silence.
- Paraphrase (“So you are saying you expected this button to take you to the checkout page, right?”)
- Ask back (“Will clicking this button pause or exit the game?” – “What do you think it will do?”)
Steve Krug’s “Things a therapist would say” highlights many situations that occur very often in usability testing, offering possible verbal reactions from the moderator’s side. In my experience, these happen pretty regularly, so it merits preparation with some standard questions and reactions for these difficult situations.
After the test
Post-test questions matter a lot, especially for participants who seemed frustrated; they give everyone a chance to ease up by the end. Often, feeling like the test subject (in place of the product) causes frustration or angst in participants.
It happens that even if you try your hardest to emphasize that they are not the ones being tested, they will still remain tense until the very end of the session.
To comfort your participant, lead into the post-test questions by asking how they felt and how they perceived the experience. A bit of post-test chit-chat signals that the “serious” part of the test has finished and they can now really share their honest opinion.
This really opens some people up: even if they spoke sparingly during the test, they might give a lot of feedback during the post-test questions.
Other concrete post-test questions:
- Have I forgotten to ask you about anything? / Would you like to mention anything?
- On a scale from 1 to 10, how easy was it to use? (You might break this down by task)
- What three main things did you like most about the app or this experience?
- What are the three main things that need improvement, confused you, or that you didn’t like?
- Did it lack anything?
In addition, you can go back to the problematic parts of the test, or the really fast ones where you couldn’t quite tell what the participant was thinking. If you want to go hardcore, recruit a lot of testers and run the System Usability Scale (SUS) after each test to gather a mass of quantitative data from usability testing.
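If you do collect SUS responses, scoring them is mechanical: each of the ten items is answered on a 1–5 scale, odd-numbered (positively worded) items contribute (answer − 1), even-numbered (negatively worded) items contribute (5 − answer), and the total is multiplied by 2.5 to land on a 0–100 score. A minimal sketch:

```python
def sus_score(answers: list[int]) -> float:
    """Standard System Usability Scale score from ten 1-5 responses.
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(answers) != 10 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("SUS needs ten answers on a 1-5 scale")
    total = sum(
        (a - 1) if i % 2 == 0 else (5 - a)  # 0-based index: even index = odd item
        for i, a in enumerate(answers)
    )
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible answers -> 100.0
```

Note that a SUS score is not a percentage: 68 is generally considered about average, so interpret scores against that benchmark rather than against 50.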
Also, don’t forget to thank the users afterward as well, and assure them their feedback proved useful and valuable.
What if things go wrong?
Don’t worry. Glitches happen. They happen to everyone, us included, especially at the worst, most inconvenient times. They happen because we work with different devices, prototypes, recording equipment, and types of software.
Speak honestly. Tell your participant you need to fix something or deal with an issue, but not at the expense of their time. Always keep to the agreed-upon time. If the glitch causes a major setback, politely ask the participant if they have a bit more time.
Most importantly: Don’t panic!
Brace yourself. These typical examples show what can go wrong in a user test and how to set them right:
- Recording cut off or not working. ➡️ Carry on with the test even if the recording is off. Take notes later.
- Prototype does not work properly, missing interactions, loops. ➡️ Steer it in the right direction by coming up with different tasks and guiding the user back to a familiar place. If the prototype totally doesn’t work, use the time on your hands to conduct an interview. You might find out something very important.
- The user seems very frustrated and cannot keep up the “think out loud” protocol. ➡️ Ask probing questions. If that doesn’t work, they very likely still feel frustrated; emphasize once again that they cannot do anything wrong, and help them imagine themselves on their own, naturally interacting with the product.
Comfort the user
Rule Number One: Assure your participant that they shoulder no blame. Never let the participants helping you with their valuable feedback feel frustrated.
- Treat participants like MVPs (most valuable person in this case) even if they may prove difficult like anyone else.
- Thank them.
- Make them feel comfortable.
- Emphasize we are not testing them – repeat if you see them act and react like in an oral exam.
Step 4: Analyze & Report
A successful testing session doesn’t mark the finish line. The next part matters just as much. You need to make sense of it, group the feedback, prioritize it and report back to your team or your client in a straightforward manner.
Analysis and reporting usually go hand-in-hand as we don’t have time to rewatch everything twice. Also, our analysis and interpretations go into the report as well with some other raw data, like user quotes.
What type of reporting do you need?
Lots of projects don’t require presenting huge spreadsheets and 15-page reports. After a round of tests, you can share the test notes and talk it through, providing only a more detailed report when a bigger cycle ends. It might mean five or six iterations and three or four sprints etc.
This always depends on your product roadmap, product strategy, and how much research you set out to do. Plan a strategy, plan ahead and think it through.
Your reports also provide an opportunity to include qualitative observations. Did a user look baffled or impatient during a certain step? If you forgot to ask how they felt, or if they didn’t admit to something clearly visible in their facial expressions, the observations might end up highly subjective and the interpretations purely hypothetical.
Getting actionable insights from the test comes foremost. We don’t test for usability problems to end up in a report; we test so we can iterate on the product and improve its usability.
Research system or simple test reports?
For a longer project with many iterations and constant weekly usability tests, it might be worth building a research system to synthesize your endless observations and insights into a searchable, consistent repository. This proves extra useful when you work on a single product for the long run, rather than getting involved only for quick fixes over four weeks.
A research system represents a treasure for everyone involved in the product team (and also satisfies their OCD-like tendencies). In the short, agency-like model, however, building such a system just wastes resources.
👉 Pro tip: For a leaner approach, try to include only the essentials. Task, Observation, Location (page, step), Issue Severity.
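As an illustration of that lean approach, a finding can be as little as one record with those four fields. The field names, severity levels, and example entries below are made up for illustration, not a standard:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One usability-test observation, using the lean fields from the tip:
    Task, Observation, Location, Issue Severity. Illustrative only."""
    task: str
    observation: str
    location: str   # page or step where it happened
    severity: int   # hypothetical scale: 1 = cosmetic ... 4 = blocker

findings = [
    Finding("Checkout", "Looked for a discount field before entering card data",
            "Payment step", 4),
    Finding("Sign-up", "Hesitated at the passwordless login",
            "Login screen", 2),
]

# Prioritize: most severe issues first, ready to report back to the team.
for f in sorted(findings, key=lambda f: -f.severity):
    print(f"[sev {f.severity}] {f.task} / {f.location}: {f.observation}")
```

Even a spreadsheet with these four columns works; the point is that every observation lands in the same structure, so issues can be sorted and prioritized instead of living in scattered notes.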
Prioritize usability issues
To avoid dumping a big mass of findings on your client or team, make sure to prioritize the issues.
If possible, set up a meeting after a round of tests, so you can help stakeholders make sense of the results and answer emerging questions.
To help them better understand and ideate on solutions, make your findings more comprehensible by providing snippets of test recordings, including screenshots and quotes from the participants. That will shift the nature of the usability tests from a very abstract and mysterious research activity to a tangible method contributing to informed product decisions.
It might also help organize the vast amount of feedback, information, and insights from usability testing sessions. Develop your own standards and system.
Distinguishing between usability issues, positive findings, feature requests, and general observations makes a good starting point.
There’s no “one size fits all” solution for reporting. It really depends on your client/team and how you work together as well as the type of usability tests you’re running.
Fear not: it turns out less complicated than it may seem. No one runs a perfect usability test, and we all make mistakes.
Remember, no usability test is pointless; you always learn something that might help you improve the product. Just do usability tests often and include other stakeholders. This lets them see the value of the method and makes their mindset more user-centered.
Usability testing can serve as a firm basis for the research you do on a digital product; however, it works best complemented with other UX research methods and quantitative data.
To see how to complement usability testing for the best insights at the right time, read about some other main UX research methods here.
Also, check out our book on product design where we explain our usability testing methods and experiences through three case studies.