Run and score a task-level survey
1. Run the After-Scenario Questionnaire (ASQ) to measure task-level satisfaction; the ASQ has demonstrated reliability, sensitivity, and concurrent validity.
The ASQ consists of three statements that a participant scores from 1 (strongly disagree) to 7 (strongly agree) after completing a task:
- Overall, I am satisfied with the ease of completing the tasks in this scenario.
- Overall, I am satisfied with the amount of time it took to complete the tasks in this scenario.
- Overall, I am satisfied with the support information (online help, messages, documentation) when completing the tasks.
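An ASQ result is typically summarized as the average of the three item ratings. A minimal sketch, where `asq_score` is a hypothetical helper (not part of any standard library) that also tolerates skipped items:

```python
def asq_score(responses):
    """Average a participant's three 7-point ASQ item ratings.

    Items left blank can be passed as None and are excluded
    from the average.
    """
    answered = [r for r in responses if r is not None]
    if not answered:
        raise ValueError("at least one ASQ item must be answered")
    if any(not 1 <= r <= 7 for r in answered):
        raise ValueError("ASQ ratings must be between 1 and 7")
    return sum(answered) / len(answered)


# One participant's ratings for ease, time, and support information:
print(asq_score([6, 5, 4]))        # 5.0
print(asq_score([7, None, 5]))     # 6.0 (support item skipped)
```

Averaging rather than summing keeps the score on the same 1-7 scale as the individual items, which makes results easy to compare across participants.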
2. Run the NASA Task Load Index (NASA-TLX) to assess the perceived workload of a task, system, or team's performance.
The NASA-TLX is broken into two parts. In the first, the participant rates the effort involved in the task across six categories: mental demand, physical demand, temporal demand, performance, effort, and frustration. In the second, they weight the categories by choosing the more relevant one in each of 15 pairwise comparisons.
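The overall workload score combines the two parts: each category's rating is multiplied by its weight (the number of times it was chosen in the 15 pairwise comparisons), and the total is divided by 15. A minimal sketch with illustrative (made-up) ratings and weights:

```python
TLX_CATEGORIES = [
    "mental", "physical", "temporal",
    "performance", "effort", "frustration",
]


def tlx_weighted_score(ratings, weights):
    """Overall NASA-TLX workload on a 0-100 scale.

    ratings: per-category scores on a 0-100 scale.
    weights: per-category counts from the 15 pairwise
             comparisons; they must sum to 15.
    """
    if sum(weights.values()) != 15:
        raise ValueError("pairwise-comparison weights must sum to 15")
    return sum(ratings[c] * weights[c] for c in TLX_CATEGORIES) / 15


ratings = {"mental": 70, "physical": 10, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 4,
           "performance": 2, "effort": 3, "frustration": 1}

print(round(tlx_weighted_score(ratings, weights), 2))  # 57.33
```

Because the weights sum to 15, the weighted score stays on the same 0-100 scale as the raw ratings.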
3. Run the Subjective Mental Effort Questionnaire (SMEQ) to determine how much mental effort a task requires.
The SMEQ has one scale and correlates highly with SUS scores, completion time, completion rates, and errors.
4. Run the Usability Magnitude Estimation (UME) survey to have users assign each task a number proportional to its perceived difficulty.
For example, a task rated 100 is perceived as twice as difficult as a task rated 50.
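Because UME estimates are ratios rather than points on a fixed scale, they are commonly summarized with a geometric mean instead of an arithmetic one. A minimal sketch, where `ume_geometric_mean` is a hypothetical helper:

```python
import math


def ume_geometric_mean(scores):
    """Geometric mean of UME difficulty estimates.

    The geometric mean respects the ratio nature of magnitude
    estimates: doubling every score doubles the summary.
    """
    if not scores:
        raise ValueError("need at least one estimate")
    if any(s <= 0 for s in scores):
        raise ValueError("UME estimates must be positive")
    return math.exp(sum(math.log(s) for s in scores) / len(scores))


# Three participants' difficulty estimates for the same task:
print(ume_geometric_mean([50, 100, 200]))  # 100.0
```

With an arithmetic mean the same data would yield about 116.7, overweighting the one large estimate; the geometric mean treats "twice as hard" and "half as hard" symmetrically.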
5. Run the Single Ease Question (SEQ) survey to determine task-level satisfaction.
The SEQ consists of one question after a task, asking the user to rate the task on a 7-point scale from Very Difficult to Very Easy.
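SEQ results for a task are usually summarized as the mean rating across participants. A minimal sketch, where `seq_summary` is a hypothetical helper:

```python
def seq_summary(ratings):
    """Mean SEQ rating on the 7-point scale.

    1 = Very Difficult, 7 = Very Easy; higher is better.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    if any(not 1 <= r <= 7 for r in ratings):
        raise ValueError("SEQ ratings must be between 1 and 7")
    return sum(ratings) / len(ratings)


# Four participants rated the same task:
print(seq_summary([5, 6, 7, 6]))  # 6.0
```

Keeping the summary on the original 1-7 scale makes it easy to compare tasks against each other and against published SEQ averages.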
6. Assign a number to the task experience, such as the number of problems encountered or the number of steps to complete a task, to see where you can improve.
Tracking this number across releases shows whether design changes have improved the UX; fixing the bottlenecks it exposes can improve conversions and, ultimately, revenue.