
Evaluation of a Data Exploration Tool with a Cognitive Walkthrough

Aim. Encourage uptake of a data exploration tool created and used by Africa's Voices Foundation (AVF) by evaluating and improving its usability.

My Role

Co-operate with the developer to make UI changes based on a cognitive walkthrough and on-site validation with an intended user group; create training material and guidance for intended end-users.


Methods: Cognitive Walkthrough, Stakeholder Interviews, Focus Group, Recommendations Matrix


The evaluation had three aims:

  1. Identify usability issues, focussing on new and infrequent users

  2. Validate these issues with intended end-users

  3. Make recommendations for improvement

Problem Statement

Africa's Voices Foundation (AVF) wanted to allow its partners to use a bespoke tool for exploring textual data (SMS and tweets), but the system had not been built with these users in mind. The tool, now called 'Scouta', was originally designed to perform functions that were specific to the skills and needs of AVF researchers during the infancy of the organisation. AVF had attempted to introduce Scouta to one of its partners in Kenya, but it wasn't adopted. AVF also wanted research assistants in Somalia to use Scouta, but it was hard for users to understand.



I used a cognitive walkthrough to systematically assess the interface in its most up-to-date form at the time. This produced a report covering the frequency of usability challenges, their severity, possible solutions, and the proportion of challenges across all three tasks, colour-coded for frequency (blue) and severity (red).

While the developer made changes based on the cognitive walkthrough, I set out to establish the system requirements for our partner, Well Told Story (WTS). I wanted to understand why previous versions had not been used and to obtain a list of the most pressing requirements and concerns. To this end, I conducted stakeholder interviews and ran a focus group discussion with intended users at WTS, showing them an iterated version of the interface. 




The aim was to keep the main existing functions of the system (message filtering, associations list, graphics, and filtered messages), but to make these functions more intuitive.

I discussed challenges and considered solutions with the developer. To support this, I compiled a two-page document setting out the requirements and recommending specific changes to meet them. I also compiled a recommendations matrix of potential UI changes, prioritised by: (a) their perceived importance for users; (b) the limitations imposed by data structure and technology dependencies; and (c) the intended purpose of the tool.
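The prioritisation logic behind such a matrix can be sketched in code. This is a hypothetical illustration, not AVF's actual tooling: the criteria mirror the three factors above, but the 1–5 scale, the additive score, and the example recommendations are illustrative assumptions.

```typescript
// Hypothetical sketch of scoring a recommendations matrix.
interface Recommendation {
  change: string;
  importance: number;  // (a) perceived importance for users, 1-5
  feasibility: number; // (b) fit within data/technology constraints, 1-5
  alignment: number;   // (c) fit with the tool's intended purpose, 1-5
}

function prioritise(recs: Recommendation[]): Recommendation[] {
  // Simple additive score; highest-priority changes come first.
  const score = (r: Recommendation) =>
    r.importance + r.feasibility + r.alignment;
  return [...recs].sort((a, b) => score(b) - score(a));
}

// Example entries (invented for illustration).
const ranked = prioritise([
  { change: "Add explicit 'apply filters' button", importance: 5, feasibility: 4, alignment: 5 },
  { change: "Rename technical labels", importance: 4, feasibility: 5, alignment: 4 },
  { change: "Live-updating graphics", importance: 3, feasibility: 1, alignment: 3 },
]);
console.log(ranked[0].change);
```

In practice the weighting was a qualitative judgement rather than a formula, but making the three criteria explicit kept the discussion with the developer focused on trade-offs rather than preferences.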


Changes to Scouta were presented to the lead researcher at WTS for feedback, which was positive.


I created a user guide for WTS explaining Scouta's elements and functions, the changes made between versions, tips for use, and FAQs. The interface, together with the video tutorials and written guidelines I subsequently created, was also used by research assistants working with AVF.


The usability of a system depends on its context of use: who is using it, what for, where, and why. In its original form, Scouta was designed to meet an internal need: visually exploring large databases of Swahili and Sheng SMS and demographic data through networks of words. This made it difficult to convince WTS to take on and use the tool, and I needed to sensitively shine a light on why WTS had not adopted it, using a retrospective evaluation to highlight the issues.


Due to costs, schedules, and technical restrictions, solutions were often elusive. For example, loading interrupted the user for more than 30 seconds (sometimes minutes), and this happened every time a user (de)selected an option. Ideally, a system would update automatically with every interaction, but the volume of data (over a million SMS) meant that updating the results on every click had a profoundly negative impact on usability. After discussing this with the developer and understanding why it was happening, we agreed on a solution: the system would only reload the results once the user pressed a dedicated button to indicate they were ready. This added an extra step (a button press) but saved several minutes of loading time.
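The interaction pattern we settled on can be sketched as follows. This is a minimal illustration of the idea, not Scouta's actual code: filter (de)selections only mutate local state, and the expensive query over the full dataset runs once, when the user explicitly asks for it.

```typescript
// Sketch of deferring an expensive reload behind an explicit "apply" action.
class FilterPanel {
  private pending = new Set<string>();
  public reloadCount = 0; // counts how many expensive reloads have run

  // (De)selecting an option is instant: it only updates local state.
  toggle(option: string): void {
    if (this.pending.has(option)) this.pending.delete(option);
    else this.pending.add(option);
  }

  // The slow query runs once, on demand, when the user presses "apply".
  apply(): string[] {
    this.reloadCount += 1;
    // Stand-in for the real query over the message database.
    return [...this.pending].sort();
  }
}

const panel = new FilterPanel();
panel.toggle("county:Nairobi");
panel.toggle("gender:female");
panel.toggle("county:Nairobi"); // deselected again before applying
panel.apply(); // one reload covers all three interactions
console.log(panel.reloadCount);
```

Three interactions trigger a single reload, which is exactly the trade-off described above: one extra press in exchange for minutes of saved waiting.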
