

Concept-testing a token-based authentication system

Aim: Concept-test a proposed token-based authentication system, and compare reactions to it with how people perceive passwords and password-management systems.



Password strength depends on the password design process. Since humans are typically responsible for designing their own passwords – with the unreasonable expectation that we create a new, secure password for every system requiring a login – password strength is often low. Solutions exist, but users still rely heavily on weak password strategies. One proposed solution, not yet released, involves carrying multiple wearable devices that continuously authenticate the user's presence.

The Principal Investigator (PI) for this project wanted to know what we can reasonably expect a user to manage in terms of carrying physical devices rather than relying on passwords and password management strategies.

Process: Grounded Theory


Grounded Theory (GT) is a systematic, inductive qualitative technique for developing a theory (rather than testing an existing theory) that can later be tested with quantitative techniques. It relies on constant data comparison during three overlapping coding phases.


Open coding

Open coding involves identifying categories that describe the data. I open-coded the first six interviews twice (for intra-rater reliability) and then continued up to sixteen interviews. This phase informed the initial coding frame.


Axial (pattern) coding

Axial coding involves defining hierarchical relationships between codes. Related codes were grouped together and structured diagrammatically in mind maps, mini frames, and Action Paradigm Models to understand the data.


Selective (focussed) coding

Selective coding involves defining the process by which categories relate to a core idea, and using this to develop a theory. New data are collected until theoretical saturation is reached; I stopped collecting data at 20 interviews.
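The saturation stopping rule can be sketched as a simple check: treat each interview as the set of open codes it yields, and stop once several consecutive interviews contribute no new codes. This is an illustrative sketch only – the code sets and the window size below are hypothetical, not taken from the study.

```python
def saturation_point(interviews, window=3):
    """Return the 1-based index of the interview after which `window`
    consecutive interviews added no new codes, or None if never reached.

    `interviews` is a list of sets, each holding the open codes
    identified in one interview (hypothetical data).
    """
    seen = set()
    quiet = 0  # consecutive interviews yielding no new codes
    for i, codes in enumerate(interviews, start=1):
        new = codes - seen
        seen |= codes
        if new:
            quiet = 0
        else:
            quiet += 1
            if quiet == window:
                return i - window  # last interview that added a new code
    return None

# Hypothetical code sets for six interviews; no new codes after the third:
interviews = [
    {"convenience", "trust"},
    {"trust", "loss-anxiety"},
    {"convenience", "responsibility"},
    {"trust"},
    {"responsibility"},
    {"loss-anxiety"},
]
print(saturation_point(interviews))  # → 3
```

In practice saturation is a researcher's judgement rather than a mechanical rule, but the sketch captures the logic of continuing until interviews stop yielding new categories.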


Twenty semi-structured interviews were conducted at the University of Cambridge Computer Laboratory, in which participants (members of the public responding to an online advert) interacted with low-fidelity prototypes: two polymorph "authentication tokens" and several plastic "token-unlocking devices". Participants were asked to make choices between items to prompt comments about authentication tokens and to uncover what criteria a token-based authentication system would need to meet to increase acceptability. We wanted to know what was important to potential users in terms of usability and security.

The interviews were video and audio recorded, allowing me to transcribe them. Transcripts were analysed using a qualitative method for systematic, inductive, and empirical theory-creation: Grounded Theory.

The transcripts from the first six interviews were subject to trial open coding, followed by a consistency check using double coding (for intra-rater reliability) and blind coding (for inter-rater reliability), which produced a coding frame. Open coding proper then restarted from the beginning, covering the first sixteen interviews. The aim was to find commonalities in the data that reflected categories and revealed a set of themes. I then grouped codes into conceptual categories that reflected relationships (axial coding). As well as blind coding, I sought feedback on my interpretations of the data from two "inquiry auditors" (a research team member and a colleague at a different university). The final phase (selective coding) involved analysing code clusters with the aim of describing the data in terms of an underlying process.
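Agreement between two coders on a blind-coded sample is often quantified with Cohen's kappa, which corrects raw agreement for chance. The study does not state which statistic was used, so the measure and the segment codings below are assumptions for illustration only.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one code per segment."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of segments coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten transcript segments by two blind coders:
a = ["trust", "risk", "trust", "cost", "risk", "trust", "cost", "risk", "trust", "risk"]
b = ["trust", "risk", "trust", "risk", "risk", "trust", "cost", "risk", "cost", "risk"]
kappa = cohens_kappa(a, b)  # (0.8 - 0.36) / (1 - 0.36) = 0.6875
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance; qualitative coding checks like the one described above typically aim for values conventionally read as "substantial" agreement.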




I designed, ran, and transcribed 20 semi-structured interviews. A diverse range of participants was recruited to capture as many concepts as possible and reach theoretical saturation.


Coding phases

I reduced the data (open coding), organised the resulting categories into patterns (axial coding), and developed an explanatory theory (selective coding) when no new categories emerged (saturation).


Published article

The token-based results and insights were published in a 2016 academic paper – Responsibility and Tangible Security: Towards a Theory of User Acceptance of Security Tokens.



The outcome is a logic paradigm (a model) of the resulting theory, which represents patterns in the data. The most salient aspects of this theory are positioned within “a story” to help the reader relate the categories and sub-categories to a central theme.

The resulting theory was that the tangibility of authentication tokens increases users' perceived responsibility, both for mitigating security risks and for managing physical items. The more devices a user must carry, the greater the potential inconvenience and the perceived risk of relying on physical items. This is anxiety-provoking, and implies that continuous authentication via multiple carried devices is unlikely to be accepted. The full theory was presented in a published paper.

Challenges and learnings


I learned a great deal about qualitative research techniques, which can go far beyond basic category formation and can be empirical and systematic. I learned how to re-conceptualise data and present it in a different way to understand the relationship between concepts. I enjoyed the bottom-up nature of the process and working with colleagues to develop the resulting theory.


Challenges centred on the practicalities of: obtaining a range of participants within a reasonable time frame; creating uniform, low-fidelity prototypes; and the involved process of performing Grounded Theory analysis on the data. These challenges were all met, resulting in a thorough, systematic review of the problem space.
