
Usability testing omni-search functionality

Aim

Identify usability issues with the new search and filter functionality for a machine learning platform for monitoring financial crime, and make suggestions for changes.

Problem

 

A more flexible search and filtering tool had been developed to replace the navigation bar in a fraud-detection interface. The existing search tool required users (typically analysts) to enter a complete and exact ID to find specific cases and events. We wanted the new search functionality to improve effectiveness, efficiency, and satisfaction.

I ran seven usability tests to gauge how well the new functionality supported users in completing routine tasks across three scenarios that relied on search and filter.

Process

1

Scenario development

Potential issues were condensed into 3 scenarios consisting of 4 tasks each for the usability test.

2

Usability testing

7 participants were recruited from 6 departments at the company for usability testing.

3

Reporting

A detailed report documented the procedure, participants, results, and recommendations.

1. Scenario development

Search and filter functionality in the fraud-detection platform is complex and varied, resulting in a long list of candidate features for testing. We condensed potential issues into scenarios that focussed on what we hoped to observe. These scenarios captured user tasks related to specific research questions and were given as instructions to participants.

 

The first step in creating these scenarios was to set specific, measurable goals describing user activities (as opposed to the platform's existing structure). These goals were used only by the researchers and designers; they were not shown to participants. From them, we developed scenarios of four tasks each, narrowed down to those that would yield the most insight into the aspects of the search and filter functionality of greatest concern to users.

2. Usability testing

The scenarios were completed by seven participants using a demo of the most recent version of the platform on a laptop. Very few participants are needed for usability studies such as this: six participants have been shown to discover 87% of a website’s usability issues (Tullis & Albert, 2013), and five is often cited as the “magic number”, since adding more users tends to uncover the same issues (Nielsen, 2005).
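For intuition, the diminishing returns behind this “magic number” can be modelled with Nielsen and Landauer’s problem-discovery curve. The Python sketch below assumes their commonly cited average per-participant discovery rate of about 31%; that rate is an illustrative assumption, not a figure from this study.

    # Problem-discovery curve: proportion of issues found by n participants,
    # where L is the probability a single participant uncovers a given issue.
    # L = 0.31 is the commonly cited average (an assumption, not our data).
    L = 0.31

    for n in range(1, 11):
        found = 1 - (1 - L) ** n
        print(f"{n:2d} participants -> ~{found:.0%} of issues found")

Under this model, five participants find roughly 84% of issues, and each additional participant adds progressively less.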

 

The usability tests took place on a laptop in a single room, with me as moderator positioned a few feet from the participant at roughly a 45-degree angle and another colleague seated behind, similar to Rubin’s ‘simple single-room set-up’ (2008). I obtained informed consent to video- and audio-record the sessions, which I later transcribed for analysis.

 

A colleague and I took handwritten notes on success rates, completion times, actions, bugs, issues, and comments. We compared our notes with each other’s and against the recordings, and from this data I identified patterns of behaviour, errors, bugs, and usability issues.

3. Reporting

I analysed and reported on the following:

  • Success rates

  • Completion times

  • Task efficiency (the ratio of successful completions to time on task)

  • Ease of completion, rated by difficulty, speed, and severity of consequences (from "completed with no difficulty" to "completed with major difficulty or required intervention")

I listed, described, grouped, and prioritised the usability issues to be addressed, making recommendations based on severity ratings from 0 (an irritant or superficial issue, non-critical to workflow) to 3 (a critical issue that seriously impairs use and cannot be overcome by the user). In preparation for further testing, this revealed elements we could target to reduce the cognitive and physical effort the system demanded of users when searching and filtering.
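To make the analysis concrete, here is a minimal Python sketch of how the metrics and severity ratings combine. All task names, timings, and issues are hypothetical examples, not results from this study.

    # Hypothetical per-task results: (task, completed, time on task in seconds).
    tasks = [
        ("Find a case by partial ID", True, 42),
        ("Filter events by date range", True, 95),
        ("Combine two filters", False, 180),
    ]

    success_rate = sum(completed for _, completed, _ in tasks) / len(tasks)
    # Task efficiency: successful completions per minute of task time.
    efficiency = sum(completed for _, completed, _ in tasks) / (
        sum(seconds for *_, seconds in tasks) / 60
    )

    # Hypothetical issues with severity ratings, 0 (superficial) to 3 (critical).
    issues = [
        ("No feedback when a search returns nothing", 2),
        ("Filter state lost when navigating back", 3),
        ("Placeholder text truncated", 0),
    ]

    # Prioritise the most severe issues first for recommendations.
    for issue, severity in sorted(issues, key=lambda item: item[1], reverse=True):
        print(f"severity {severity}: {issue}")

    print(f"Success rate: {success_rate:.0%}; efficiency: {efficiency:.2f}/min")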


Outcomes

 

Usability testing revealed the extent to which the new functionality supported users’ search and filter goals, and how users went about achieving them. These results lent objectivity to design decisions. They were also compared with customer feedback later in the year, which confirmed many of the issues identified in usability testing.

 

The quantitative analysis and reporting also make a good exercise in demonstrating return on investment in productivity: successive sprints of usability testing can be compared to show improvements to usability across iterations.
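A minimal sketch of such a cross-sprint comparison follows; the metrics and figures are hypothetical, chosen only to show the shape of the comparison.

    # Hypothetical summary metrics from two sprints of usability testing.
    sprints = {
        "sprint 1": {"success_rate": 0.58, "mean_time_s": 140.0},
        "sprint 2": {"success_rate": 0.83, "mean_time_s": 95.0},
    }

    # Report the change in each metric between consecutive sprints.
    for metric in ("success_rate", "mean_time_s"):
        before = sprints["sprint 1"][metric]
        after = sprints["sprint 2"][metric]
        print(f"{metric}: {before} -> {after} ({after - before:+.2f})")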

Challenges and learnings

 

At first, I struggled to turn the generic user activities I was given into well-defined scenarios. As part of a small Design Team established only a month earlier, I found it hard to communicate to developers the value of taking the time to create a solid test plan with specific tasks, one that would keep the test consistent for every participant and produce measurable results. I dealt with this by walking developers through the scenarios as if they were participants, working with them to break user activities down into steps and create goal-focused user flows.

Another challenge was convincing developers that not every flaw needs usability testing to be identified and removed. Usability testing offers insight into problems that may not otherwise be obvious, but these can be masked by issues that a heuristic review would have caught beforehand. In the end, I treated the company's first encounter with usability testing as an opportunity to validate recommended changes with data and evidence.
