Overview
Usability testing is a structured technique in which real users perform real tasks, allowing you to identify usability problems, collect qualitative and quantitative data, and gauge user satisfaction with the product.
You might want to conduct usability testing when:
- Exploring design concepts to gauge potential value
- Designing a new product or redesigning an existing product
- Deciding between different design options
- Comparing competing products based on usability
- Verifying design changes
- Iterating as part of an ongoing development process
Who’s involved?
- Session facilitator
- Participant
- Optional: 1–2 observers (e.g. a project team member or key stakeholder)
How To
I. Plan and Prepare
- Identify and prioritize goals
Understand what aspects of the design you want to evaluate.
- Choose and develop tasks
Determine the tasks the user will perform during the evaluation. Tasks should be selected to provide the data you need based on the goals (a sketch pairing tasks with goals follows this list).
- Identify user groups and select participants
Determine distinct user groups based on their unique needs.
- Determine logistics and schedule
Select a place where users would naturally use the product, if possible.
- Develop Facilitator Guide and participant task list
Conduct a dry run of the tasks to refine the specific task wording and overall flow.
- Prepare prototype/product to be evaluated
This may include creating any test data or user accounts, checking hardware and software readiness, or reviewing the prototype flow.
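One lightweight way to keep tasks tied to goals is to write the test plan down as data. The sketch below (Python, with hypothetical tasks, goals, and field names) pairs each task prompt with the goal it informs and the data to capture:

```python
# A minimal test-plan sketch pairing each task with the goal it informs
# and the data to capture. All tasks, goals, and names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    prompt: str                 # what the participant is asked to do (not how)
    goal: str                   # the testing goal this task informs
    data_to_capture: list[str] = field(default_factory=list)

test_plan = [
    Task(prompt="Order a replacement filter for your instrument.",
         goal="Can users complete a reorder without assistance?",
         data_to_capture=["completion", "time on task", "errors"]),
    Task(prompt="Locate last month's usage report.",
         goal="Is the reporting navigation discoverable?",
         data_to_capture=["path taken", "first click", "comments"]),
]

for t in test_plan:
    print(f"{t.prompt}  ->  {t.goal}")
```

Note that the prompts say what to do, not how, in line with the first tip below.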
Tips:
- Tasks should provide just enough detail about what participants need to do, not how to do it.
- Arrange tasks in a logical sequence.
- Participants should represent all identified user groups.
- 7 ± 2 participants (per user group) are usually enough, but 1 or 2 are better than none (see the quick calculation after these tips).
- Familiarize yourself with the current state of the prototype or product to understand any gaps, limits, or known issues. That way you can be prepared to guide participants around or through them, based on the goals of the test.
- If you plan to record the sessions, be sure to follow any policies, rules, or practices around informed consent.
- Determine your strategy to compensate participants (or not), and plan accordingly.
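The "7 ± 2" rule of thumb lines up with the problem-discovery model often attributed to Nielsen and Landauer, in which the share of problems found by n participants is roughly 1 - (1 - p)^n, where p is the probability that a single participant encounters a given problem. A quick calculation, assuming the often-cited average of p ≈ 0.31 (your product and tasks may differ):

```python
# Estimated share of usability problems uncovered by n participants,
# using the cumulative discovery model 1 - (1 - p)**n.
# p = 0.31 is an often-cited average, not a measurement of your product.
p = 0.31

for n in (1, 2, 5, 7, 9):
    found = 1 - (1 - p) ** n
    print(f"{n} participant(s): ~{found:.0%} of problems found")
# Prints ~31%, ~52%, ~84%, ~93%, ~96%: returns diminish past a handful.
```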
II. Run the Session
- Welcome participant and provide an overview of the session following your Facilitator Guide
The Facilitator Guide should include details on participant consent, the think-aloud protocol, the focus of the test, and any up-front demographic questions.
- Conduct the review by asking participants to perform the series of tasks provided
Encourage participants to think aloud as they work, and be sure to note any important observations or comments (one way to structure those notes is sketched after this list). Ask probing questions to help uncover critical expectations, motivations, needs, etc.
- Discuss impressions
Ask any follow-up questions, seek additional design recommendations, and address any open questions.
- Close the session
Provide any closing remarks and thank the participant.
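Notes are easier to pool and analyze later if every observation is captured in the same shape. Here is a minimal sketch of one possible format; the file name, fields, and categories are illustrative, not a standard:

```python
# One possible shape for session notes: a flat record per observation,
# appended to a shared CSV so notes from all sessions can be pooled.
# Field names and the 'kind' categories are illustrative only.
import csv
import datetime

FIELDS = ["participant", "task", "timestamp", "kind", "note"]

def log_observation(writer, participant, task, kind, note):
    # 'kind' might be: error, quote, workaround, suggestion
    writer.writerow({
        "participant": participant,
        "task": task,
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "kind": kind,
        "note": note,
    })

with open("session_notes.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if f.tell() == 0:  # brand-new file: write the header row first
        writer.writeheader()
    log_observation(writer, "P03", "Task 2", "quote",
                    "I expected the report to be under 'History'.")
```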
Tips:
- Remain neutral and avoid defending the solution.
- Ask open-ended questions to better understand the participant’s view of what’s working, what’s not, and most importantly, why, such as “What did you expect to happen?” and “What are you trying to do?”
- It is okay to let the participant struggle or fail. Use the moment to understand why the solution isn’t working and what changes to the design might help.
III. Analyze and Report
- Compile and review all of the notes and recordings
- Analyze, summarize and categorize key observations, trends, patterns, and comments that align to the original testing goals
Note anything unexpected but insightful (a first-pass frequency tally is sketched after this list).
- Document the set of prioritized findings and recommendations
In general, findings are facts supported by data; recommendations are suggestions for improvement.
- Share the report with developers, the project team, or other key stakeholders
- Work with the project team to determine any next steps in response to the report
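If observations were logged in a consistent shape (as in the earlier note-taking sketch), a first-pass frequency count per theme takes only a few lines. A minimal example with hypothetical themes and participant IDs, counting how many distinct participants hit each issue:

```python
# First-pass analysis: count distinct participants per theme so
# findings can be ordered by frequency. Data here is hypothetical.
from collections import defaultdict

notes = [  # (participant, theme)
    ("P01", "could not find the usage report"),
    ("P02", "could not find the usage report"),
    ("P03", "could not find the usage report"),
    ("P01", "error message unclear"),
    ("P04", "error message unclear"),
]

total = len({participant for participant, _ in notes})
participants_by_theme = defaultdict(set)
for participant, theme in notes:
    participants_by_theme[theme].add(participant)

for theme, who in sorted(participants_by_theme.items(),
                         key=lambda kv: -len(kv[1])):
    print(f"{len(who)}/{total} participants: {theme}")
```

Frequency alone is not severity; weigh counts against impact when prioritizing, as the tips below note.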
Tips:
- After each session, consider taking some quick notes on observations, potential trends or areas to explore in upcoming sessions.
- Based on the audience for the report, consider supporting findings with details on frequency, impact, severity, etc.
- If you recorded the sessions using video, consider creating a compilation of key moments to support high priority findings.
Tips for Life Sciences
- Strive to make the tasks, task scenario, and supporting data as realistic and meaningful as possible to mimic real-world work.
- Consider the complexity of the work being performed – it may not have a linear flow.
- Try to conduct the session in the environment where the product will be used. If in a lab, consider any general environmental/safety implications (e.g. personal protective equipment).
Resources
- Template - Facilitator Guide (doc) - A template with sample questions to help you plan and conduct usability tests.