Usability Testing New Search and Filter Functionality
Aim. Identify usability issues with new search and filter functionality, and make suggestions for changes.
Planning and running usability tests
Qualitative data analysis
Quantitative data analysis
Summative Usability Testing
Analyse performance and satisfaction to determine if search and filter functionality meets usability objectives
A more flexible search and filtering tool had been developed to replace the navigation bar in a fraud-detection interface that required users (typically analysts) to enter a complete and exact ID to find specific cases and events. We wanted to improve effectiveness, efficiency, and satisfaction with the new functionality. We first needed to test which issues users experienced, and under what circumstances.
I ran seven usability tests to assess how well the new functionality supported users in completing routine tasks across three scenarios that relied on search and filter.
With colleagues, I observed user interactions on specific tasks, from which I identified patterns of behaviour, errors, bugs, and usability issues. Tests were run on a laptop in a single room, with a moderator positioned a few feet from the participant at roughly a 45-degree angle and, optionally, a second researcher sitting behind, similar to Rubin's 'simple single-room set-up' (2008). I obtained informed consent to video- and audio-record the sessions, which I later transcribed for analysis.
Results were compared with customer feedback later in the year, which confirmed many of the issues identified in usability testing.
At first, I struggled to turn the generic user activities I was given into defined scenarios. As part of a small Design Team established only a month earlier, I also found it hard to convey to developers the value of investing time in a solid test plan with specific tasks, one that would keep the test consistent for each participant and produce measurable results. I addressed this by walking developers through their scenarios as if they were participants, working with them to break user activities down into steps and create goal-focused user flows.
Another challenge was convincing developers that not every flaw needs usability testing to be found and fixed. Usability testing often surfaces problems that would not otherwise have been obvious, but those insights can be masked by issues a heuristic review could have caught beforehand. In the end, I treated this first encounter with usability testing at the company as an opportunity to validate recommended changes with data and evidence.
Usability testing uncovered the extent to which the new functionality supported users' search and filter goals, and how users went about achieving them. These results lent objectivity to design decisions. The quantitative analysis and reporting also make a good exercise in demonstrating ROI in productivity: successive sprints of usability testing can be compared to show how usability improves across iterations. To this end, I analysed:
Task efficiency (ratios of task completion and time)
Ease based on difficulty, speed, and severity of consequences (from "completed with no difficulty" to "completed with major difficulty or required intervention")
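The task-efficiency metrics above can be sketched in code. This is a minimal, hypothetical example: the record structure, task names, and numbers are illustrative assumptions, not the study's actual data. It computes a completion ratio and a time-based efficiency (goals achieved per unit of time), two common ways of operationalising efficiency.

```python
# Hypothetical sketch of task-efficiency metrics; field names and data
# are illustrative, not taken from the actual study.
from dataclasses import dataclass

@dataclass
class TaskResult:
    participant: str
    task: str
    completed: bool
    seconds: float  # time on task

def completion_rate(results):
    """Share of task attempts that ended in success."""
    return sum(r.completed for r in results) / len(results)

def time_based_efficiency(results):
    """Mean of (success / time) across attempts: goals achieved per second."""
    return sum((1 if r.completed else 0) / r.seconds for r in results) / len(results)

# Illustrative session records for a single task.
results = [
    TaskResult("P1", "find-case-by-id", True, 42.0),
    TaskResult("P2", "find-case-by-id", True, 65.0),
    TaskResult("P3", "find-case-by-id", False, 120.0),
]

print(f"Completion rate: {completion_rate(results):.0%}")
print(f"Time-based efficiency: {time_based_efficiency(results):.4f} goals/sec")
```

Comparing these two numbers across testing sprints is one straightforward way to show whether iterations are actually improving productivity.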
I listed, described, grouped, and prioritised the usability issues to be addressed, making recommendations based on severity ratings from 0 (irritant or superficial issue, non-critical to the workflow) to 3 (critical issue that seriously impairs use and cannot be overcome by the user). In preparation for further testing, this revealed the elements we could target to reduce the cognitive and physical effort users expend on their search and filtering needs within the system.
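The grouping and prioritisation step can be sketched as a small script. The issue entries are invented for illustration, and the labels for severities 1 and 2 are assumed (the source defines only the endpoints, 0 and 3).

```python
# Hypothetical sketch of severity-based prioritisation; issue entries are
# illustrative, and labels for severities 1 and 2 are assumptions.
from collections import defaultdict

SEVERITY_LABELS = {
    0: "irritant or superficial, non-critical to workflow",
    1: "minor issue (assumed label)",
    2: "major issue (assumed label)",
    3: "critical: seriously impairs use, no user workaround",
}

issues = [
    {"id": "U1", "summary": "filter label unclear", "severity": 1},
    {"id": "U2", "summary": "search returns no partial matches", "severity": 3},
    {"id": "U3", "summary": "icon slightly misaligned", "severity": 0},
]

def prioritise(issues):
    """Group issues by severity rating, highest severity first."""
    groups = defaultdict(list)
    for issue in issues:
        groups[issue["severity"]].append(issue)
    return dict(sorted(groups.items(), reverse=True))

for severity, group in prioritise(issues).items():
    print(f"Severity {severity} — {SEVERITY_LABELS[severity]}")
    for issue in group:
        print(f"  {issue['id']}: {issue['summary']}")
```

Ordering the backlog this way puts the issues that block users outright at the top, ahead of irritants that can wait for a later sprint.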