Perform usability testing

1. Recruit testers who are as similar to your target audience as possible.

Ask actual customers to take the test, post a request for testers on sites like Craigslist or TaskRabbit, or employ the services of a user-testing company with its own panel. Recruit a sample of at least 5 testers. To get statistically reliable quantitative data, you’ll need at least 20 testers.
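
To see why sample size matters, here is a minimal Python sketch (the numbers are illustrative) computing an adjusted-Wald 95% confidence interval for a task completion rate, an interval commonly recommended for small usability samples:

```python
import math

def adjusted_wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% adjusted-Wald confidence interval for a completion rate."""
    p = (successes + z**2 / 2) / (n + z**2)
    margin = z * math.sqrt(p * (1 - p) / (n + z**2))
    return max(0.0, p - margin), min(1.0, p + margin)

# With 5 testers, 4 of whom complete the task, the plausible range is wide...
print(adjusted_wald_ci(4, 5))    # roughly (0.36, 0.98)
# ...while 20 testers narrow it considerably.
print(adjusted_wald_ci(16, 20))  # roughly (0.58, 0.93)
```

With 5 testers you can still spot the big qualitative problems, but as the intervals show, you need closer to 20 before percentages like completion rate become trustworthy.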

2. Decide which type of user test you want to run: moderated or unmoderated testing.

Moderated testing is when you sit in on the session with the tester, talking directly with them to give instructions and ask questions. Any video-calling tool with screen-share features works for moderated testing; Zoom and Google Meet are widely used options, though recording sessions for later review may require a paid plan depending on the tool. Note that moderating requires restraint, neutral reactions, careful word choices, and skill in getting the test to unfold in the right way.

Unmoderated testing is when the tester performs the test on their own by following a prepared instruction list, without outside interference. If you don’t have much experience moderating user tests, unmoderated testing may be the better option.

3. Write your test script - a series of tasks, sub-goals, and questions - to gradually lead your testers from start to finish.

Look at your web analytics to identify common flows that your users follow, and translate those pathways into step-by-step tasks. Write tasks as objectives for the testers to work toward, not exact instructions like “Go here” or “Click this.” For example, instead of “Click the Search icon and type ‘winter jacket’,” write “Find a jacket you would consider buying.” Supply relevant information or context in your tasks wherever it’s needed for the tester to understand what you’re asking of them. Avoid jargon and words or phrases specific to your company; write plainly, in terms a user in your target audience will understand. You can include an impression test or “five-second test” at the beginning to gauge people’s first impressions and test your brand imagery, communication, and home page effectiveness.

4. Collect data from screen captures and voice recordings of the tests.

Screen-capture videos let you watch exactly how the testers interacted with your site or app, where they went, and what they did. Voice-recorded narration lets you understand what the testers were thinking as they went through the test, and why they liked or disliked certain things. Face recording lets you observe testers’ facial expressions and reactions as they interact with your site or app. Metrics you can collect through video analysis include time to complete a task, the proportion of testers who complete a task, and the number of clicks required.
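
To show how these metrics fall out of session review, here is a minimal Python sketch; the `sessions` records and their field names are hypothetical stand-ins for whatever notes or exports your recording tool gives you:

```python
from statistics import mean

# Hypothetical session records: one dict per tester per task,
# taken from notes made while reviewing the recordings.
sessions = [
    {"task": "checkout", "completed": True,  "seconds": 74,  "clicks": 9},
    {"task": "checkout", "completed": False, "seconds": 210, "clicks": 23},
    {"task": "search",   "completed": True,  "seconds": 31,  "clicks": 4},
]

for task in sorted({r["task"] for r in sessions}):
    runs = [r for r in sessions if r["task"] == task]
    done = [r for r in runs if r["completed"]]
    # Time and clicks are averaged over completed attempts only.
    print(f"{task}: completed {len(done)}/{len(runs)}, "
          f"mean time {mean(r['seconds'] for r in done):.0f}s, "
          f"mean clicks {mean(r['clicks'] for r in done):.1f}")
```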

5. Collect supplementary data as well, like usability scores, completion rates, and survey responses.

Measure overall system usability with standardized questionnaires such as SUS, PSSUQ, SUPR-Q, or UMUX. Measure the usability of an individual task with the Single Ease Question (SEQ). Collect the users’ final thoughts and major takeaways with wrap-up surveys after they have finished interacting with the test site or app. Use a combination of question types, such as written-response, slider-rating, and multiple-choice, to get a good mix of qualitative and quantitative insights.
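
As a concrete example of one of these instruments, here is a short Python sketch of the standard SUS scoring formula, which takes ten responses on a 1-5 scale (odd-numbered items are positively worded, even-numbered items negatively worded) and yields a 0-100 score:

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring: ten 1-5 responses -> a 0-100 score.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0 is item 1 (odd)
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Example: a fairly positive set of responses scores 72.5.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 3, 4, 2]))
```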

6. Analyze your results by looking for the discrepancies between how you intended things to be used and how people actually used them.

Pay attention to what people say, do, and look at when they first see your site. What the users say during the test is valuable, but don’t just take them at their word; compare what they say to what they do. Observe whether they’ve misunderstood anything, or whether they experience small hiccups that they don’t bother to mention aloud. Use survey responses and metrics to identify which users had the worst experience, or which tasks were the hardest, so you can cut straight to the chase and watch those recordings first.
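
As one way to cut straight to the chase, here is a minimal Python sketch (the task names, SEQ responses, and completion flags are all hypothetical) that ranks tasks from hardest to easiest so you know which recordings to watch first:

```python
# Hypothetical per-task results: SEQ responses (1 = very difficult,
# 7 = very easy) and completion flags collected during the study.
results = {
    "search":   {"seq": [6, 7, 5, 6, 7], "completed": [True, True, True, True, True]},
    "checkout": {"seq": [2, 4, 3, 2, 5], "completed": [True, False, True, False, True]},
}

def difficulty(data: dict) -> tuple[float, float]:
    """Mean SEQ score and completion rate; lower values mean a harder task."""
    return (sum(data["seq"]) / len(data["seq"]),
            sum(data["completed"]) / len(data["completed"]))

# Hardest tasks first: lowest mean SEQ, ties broken by completion rate.
for name, data in sorted(results.items(), key=lambda kv: difficulty(kv[1])):
    mean_seq, rate = difficulty(data)
    print(f"{name}: mean SEQ {mean_seq:.1f}, completion {rate:.0%}")
```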

7. Note down issues and list 1 or 2 broad topics that each issue relates to. For example, search, navigation, checkout, language, or images.

See how often each topic comes up within and across sessions to understand the relative importance and frequency of different issues. Write each issue on a post-it note, then sort the notes at the end as a visual aid for spotting big groupings. You should also compare these findings to users’ stated opinions from their videos and survey responses, to see what they considered important or what stuck out most in their minds.
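
If you log issues digitally instead of (or alongside) the post-it sort, a few lines of Python can do the tallying; the session IDs and topic labels below are hypothetical:

```python
from collections import Counter

# Hypothetical issue log: one (session_id, topic) pair per issue
# noted while reviewing the recordings.
issues = [
    (1, "navigation"), (1, "search"), (1, "navigation"),
    (2, "checkout"),   (2, "navigation"),
    (3, "language"),   (3, "navigation"), (3, "checkout"),
]

total_mentions = Counter(topic for _, topic in issues)
# Deduplicating (session, topic) pairs counts each session at most once.
sessions_affected = Counter(topic for _, topic in set(issues))

for topic, count in total_mentions.most_common():
    print(f"{topic}: {count} mentions across {sessions_affected[topic]} sessions")
```

A topic that appears once per session across many sessions usually matters more than one mentioned repeatedly by a single tester, which is why the sketch reports both counts.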

8. Watch for positive moments as well as critiques, so you can accentuate what you’re doing right and not change or remove what your users like.

While the main goal of usability testing is to identify and fix what’s not working, you can also learn what is working well. Users may make explicitly positive comments aloud, but sometimes it’s just that they’ve breezed through something without difficulty and have obviously understood everything correctly. Don’t assume that a lack of explicitly negative comments means everything is perfect, though. Always pay attention to what the user is saying and what they are doing.

9. Account for your own biases and blind spots by having someone else look at the results, review the videos, and compare notes.

If possible, get a second set of eyes on the results. Any one individual will have biases that may influence what they notice in the data, how they interpret it, and/or how they prioritize the findings. This can be especially true if the person analyzing the results is the same person who made the designs. Evaluating your own creations objectively is hard!

10. Save key video moments for use when demonstrating a point to your partners or investors.