Over the past few months we’ve been working on a quantitative research tool to complement the work that we’re doing in the user research lab. Our aim was to build something that we could test with hundreds, if not thousands, of people, remotely and securely. We wanted to test different versions of a webpage and have the ability to slice and dice the data in a number of ways.
Start with user needs
As with everything at GDS, we started with user needs. We spent a lot of time working through our requirements and, in particular, thinking about the data. This resulted in a life-sized sketch of the prototype, which showed the connections between data sets and meant that everyone on the team was clear about the end goals.
We recently blogged about how the Identity Assurance team is testing our prototype with members of the public. The prototype takes people through the journey of selecting and registering with an identity provider to access a digital government service. We’ve just completed our 23rd round in the research lab, which means that we’ve watched more than 130 users interact with our service.
Instrumenting the analytics
Working with prototypes at an early stage allows us to do things we won’t be doing with the live service. For example, we took the initial part of the user journey (from the sign-in/register page through to choosing an identity provider) and configured the pages to capture the detailed analytics that we needed. From a research perspective, this allowed us to do two things. We used event tracking to measure how users interacted with pages (e.g. clicks on links), and extra tagging code called custom variables, which enabled us to segment the data.
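As a rough illustration of this kind of instrumentation, here is a sketch using the classic Google Analytics (ga.js) command queue. The slot number, names and values are hypothetical, not taken from the study:

```javascript
// Classic Google Analytics (ga.js) command queue; before the
// library loads it behaves as a plain array of pending commands.
var _gaq = _gaq || [];

// Custom variable for segmenting the data, e.g. which explanation
// page variant this visitor saw. Slot 1, session scope (2).
_gaq.push(['_setCustomVar', 1, 'explanationVariant', 'video', 2]);

// Event tracking for a click on an identity provider link:
// category, action and label are illustrative names.
_gaq.push(['_trackEvent', 'identity-providers', 'click', 'provider-a']);
```

Reports in the analytics tool can then be filtered on the custom variable to compare behaviour across segments.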
We also added an extra step into the process: when people finished interacting with the prototype we asked them to answer survey questions about their experience. This allowed us to marry people’s opinions of the prototype with their actual behaviour, giving us a richer understanding of usage.
Finally, we wanted to test the most effective way of communicating the concept of identity assurance to users, so we showed each user one of three explanation pages – a text, image or video page – to see whether any produced higher comprehension levels.
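A minimal sketch of how that kind of three-way split might be done, assuming participants are assigned a variant at random on arrival (the function and variant names are hypothetical):

```javascript
// Hypothetical variant assignment for the explanation-page test:
// each participant sees one of three pages, chosen at random.
var variants = ['text', 'image', 'video'];

function pickVariant() {
  // Uniform random choice across the three variants.
  return variants[Math.floor(Math.random() * variants.length)];
}

var assigned = pickVariant();
```

The assigned variant would then be recorded (for example in a custom variable) so that comprehension results can be compared across the three groups.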
Analysing the data
We emailed the study to a demographically representative sample of the UK online population. The study was live for ten days and 1,690 members of the public took part.
Within our analytics tool we created a series of advanced segments and custom reports, which allowed us to fully interrogate the results. We then used our research questions and hypotheses as the framework through which to explore the data. The analysis took half a day to do and the results went up on the wall for everyone to see.
On the whole, the data confirmed the behaviour that we’ve been seeing repeatedly in our fortnightly testing. For example, that:
- driving licence, passport and current account are the most popular forms of identity evidence
- people gravitate towards identity providers that they know and trust
- when people understand the concept they’re more willing to engage with the process
- an introductory video helps to aid comprehension
It’s exciting to have tested part of the identity assurance user experience with so many people at this stage and encouraging that the quantitative results support and build upon our qualitative work.
What we did next
As a result of the study, we:
- tried to get more people to watch the video
- created a new video and supporting text
- explored further the use of different verification methods, particularly identity questions
- ran a second quantitative test to understand more about people’s choice of identity provider
- documented lessons learnt about the process of setting up the research tool
The test worked well and the data behaved as we expected it to. We’ve now got a more robust view of how people behave and feel at the start of the identity assurance user journey and we’ll use that knowledge to iterate the prototype further.
This exercise demonstrates the added value to digital services when researchers and analysts work together to create richer and more robust insights.
If you have any questions or want to find out more, please respond to this post.