GOV.UK Verify

https://identityassurance.blog.gov.uk/2016/03/24/how-we-work-with-experts-to-make-gov-uk-verify-better/

How we work with experts to make GOV.UK Verify better

Categories: Policy

We take the protection of GOV.UK Verify and the security of our users and their data very seriously - we've posted previously about how we keep GOV.UK Verify secure.

GOV.UK Verify is Pan Government Accredited - an ongoing independent process to confirm we are effectively identifying and managing risks to the service. Our work to secure the service and protect users’ privacy and security is never finished. We appreciate that the range and nature of threats are constantly developing, as are the technologies and methods for protecting digital services against attack.

As part of our work to constantly develop and improve GOV.UK Verify, we make sure the service reflects the very best current knowledge about how to protect digital services.

Risk is something we assess at all levels (not just technical or operational), giving consideration to the service we offer, how that service affects users and their data, and the consequences of actions and incidents that may occur. People often think of technical mitigations for risk, such as cryptography, but that is only part of the solution; we also need to consider people and processes.

Much of the change and innovation in the way we secure the service comes from within the GOV.UK Verify ecosystem of users, certified companies and government services. However, we also look to subject matter experts outside of government to challenge our thinking. In this blog post we thought we’d share a little about how we have learnt - and continue to learn - from these experts.

We work with security experts in other government departments, such as CESG (the National Technical Authority), and non-government groups, such as OASIS, as well as having our own experts. We take a structured approach to identifying risks and designing effective ways to mitigate and manage them.

Security feedback

The more eyes there are on a service, the better it gets as improvements and alternative approaches are suggested and developed. The feedback that has helped us develop GOV.UK Verify has come from a variety of sources: industry; other governments; and international standards organisations.

Last summer a group of researchers and security experts published a paper entitled Toward Mending Two Nation-Scale Brokered Identification Systems. The paper provided a critique, in terms of privacy and security, of both GOV.UK Verify and the identity scheme operated by Connect.Gov in the United States.

At the time we wrote about how the paper raised some interesting issues about how federated identity assurance systems like GOV.UK Verify work, and how users' privacy can best be protected. We invited one of the authors of the paper to join our independent Privacy and Consumer Advisory Group, and we’ve continued the conversation over the following months to explore the issues raised in the paper. We’ve found this process very useful, and we look forward to continuing to share knowledge and learning with the academic community and other experts as GOV.UK Verify continues to expand and improve.

The issues discussed in the paper centred on a particular type of potential security breach in relation to the GOV.UK Verify hub, and the idea of having such a hub. The hub is the part of GOV.UK Verify which allows communication between the user, the certified company, and the service on GOV.UK.

The overall design of GOV.UK Verify means that the amount of data passing through the hub is very limited, and it doesn’t store personal data. The hub enables a user to choose which certified company they want to use, and passes the user to that certified company. The certified company verifies the person’s identity. The certified company passes the user’s name, address, date of birth and gender (known as the matching dataset) back to the hub, along with a message to say that the person’s identity has been verified to the required level of assurance, for onward transmission to the service the person wants to use.

None of the data used to prove the person’s identity leaves the certified company. Once the person has accessed the service they want to use, no further data is passed back to the hub - the hub doesn’t see any data about what the user did once they accessed the service.
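
As a purely illustrative sketch of this data flow (the class names, fields and "LOA2" label below are assumptions for illustration, not the actual message format GOV.UK Verify uses), the information the hub relays can be thought of as just the matching dataset plus a confirmation of the level of assurance:

```python
from dataclasses import dataclass


@dataclass
class MatchingDataset:
    """The limited set of attributes described above as passing through the hub."""
    name: str
    address: str
    date_of_birth: str
    gender: str


@dataclass
class HubAssertion:
    """What the hub forwards to the government service: the matching dataset plus
    confirmation that the identity was verified to the required level of assurance.
    The evidence used for verification stays with the certified company."""
    matching_dataset: MatchingDataset
    level_of_assurance: str  # hypothetical label, e.g. "LOA2"
    verified: bool


# Hypothetical example of the only personal data relayed onwards by the hub.
assertion = HubAssertion(
    matching_dataset=MatchingDataset(
        name="Jane Example",
        address="1 Example Street, Exampletown",
        date_of_birth="1980-01-01",
        gender="F",
    ),
    level_of_assurance="LOA2",
    verified=True,
)
```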

A big part of the reason for designing the hub in this way (rather than, for example, building a large database containing everyone’s identity information and linking together data about each person’s use of government services) was to minimise the value to potential fraudsters of the data passing through the hub. There is relatively little value in gaining access to data about a person's name, address and date of birth; that information can be obtained much more easily from other places, such as the electoral register or social media accounts. This means that although certain types of breach may be theoretically possible, it is highly unlikely that anyone would be motivated to execute them, because the effort involved would significantly outweigh the benefit.

We’re always working to understand as fully as possible all the potential breaches that could affect our service, whether or not they have yet happened or are likely to happen. We are constantly iterating our approach to respond to new threats. The paper considered a particular type of potential breach that could theoretically allow someone to gain control of the GOV.UK Verify hub, enabling them to direct the user to use their credentials for a purpose not evident to the user (such as gaining access to the user’s records within a government service). It also considered a scenario where the hub is ‘curious’ and becomes a tool for observing the data that passes through it. We were already aware of these types of threats and had measures in place to mitigate them. But we don’t assume we know all the answers and are always keen to learn from outside expert opinion, so we met with one of the paper’s authors on multiple occasions to discuss the issues raised from a technical and policy perspective.

This type of research and engagement helps us validate our thinking, and it also shows us how security threats may be perceived elsewhere, helping us improve the way we explain our work.

What we’re doing

In the short term, lessons from the paper have fed into our thinking about how we can continuously improve the way we govern the federation, and how we monitor for industrialised attacks on GOV.UK Verify.

The authors of the paper proposed some specific measures we could consider, and we have built some of the outcomes of those discussions into our work towards our objectives for taking GOV.UK Verify from beta to live. We’ve incorporated this into our ongoing risk management and accreditation work.

The paper highlighted some advanced cryptographic techniques that may be of use, such as sharing an ephemeral key from the government service provider, which can be used to add an extra layer of encryption and so prevent user attributes being visible at the hub.
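
To illustrate the general idea only - this is a minimal sketch, not the construction proposed in the paper and not how GOV.UK Verify is implemented - the following example uses X25519 key agreement and AES-GCM from the Python cryptography package. The government service generates a per-transaction ephemeral key pair, the certified company encrypts the user attributes to the ephemeral public key, and the hub relays only ciphertext it cannot read. All function names and parameters are assumptions made for the example.

```python
# Illustrative sketch of an "extra layer" of encryption that hides attributes
# from an intermediary relaying the message. Requires the 'cryptography' package.
import json
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def _derive_key(shared_secret: bytes) -> bytes:
    # Derive a 256-bit AES key from the Diffie-Hellman shared secret.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"hub-blinding-example").derive(shared_secret)


def encrypt_for_service(service_public: X25519PublicKey, attributes: dict) -> dict:
    """Certified company side: encrypt attributes so the hub relays only ciphertext."""
    sender_ephemeral = X25519PrivateKey.generate()
    key = _derive_key(sender_ephemeral.exchange(service_public))
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(attributes).encode(), None)
    return {
        "sender_public": sender_ephemeral.public_key().public_bytes(
            Encoding.Raw, PublicFormat.Raw),
        "nonce": nonce,
        "ciphertext": ciphertext,
    }


def decrypt_at_service(service_private: X25519PrivateKey, envelope: dict) -> dict:
    """Government service side: recover the attributes the hub could not read."""
    sender_public = X25519PublicKey.from_public_bytes(envelope["sender_public"])
    key = _derive_key(service_private.exchange(sender_public))
    plaintext = AESGCM(key).decrypt(envelope["nonce"], envelope["ciphertext"], None)
    return json.loads(plaintext)


# Per-transaction ephemeral key pair generated by the government service and
# shared (via the hub) with the certified company.
service_ephemeral = X25519PrivateKey.generate()
envelope = encrypt_for_service(
    service_ephemeral.public_key(),
    {"name": "Jane Example", "date_of_birth": "1980-01-01"},
)
assert decrypt_at_service(service_ephemeral, envelope)["name"] == "Jane Example"
```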

What remains to be proven is the suitability of such solutions in a production identity system such as GOV.UK Verify. This is a limitation the paper’s authors recognise.

We currently use state-of-the-art privacy technologies and techniques, and we are interested in adopting newer, more advanced ones as they become viable - something the researchers behind this paper may be able to help us achieve.

From the start, we have known that there will be sophisticated and well-funded attackers who aim to be able to operate undetected for long periods of time. This thinking informed the overall design of GOV.UK Verify, our approach to security and the decision to take a federated approach - using a range of certified companies and building a hub to manage communications between users, certified companies and services, rather than building one single identity database.

It's almost impossible to prevent all attacks on an online service, particularly as criminals can draw on huge resources when motivated to do so. Therefore, as well as working to prevent attacks through the way we design and run GOV.UK Verify, we also have to build our monitoring capability so we can detect attempted attacks when they take place. This means we can mitigate the type of attacks discussed in the paper by making sure we can detect these industrialised attacks as they appear and close them down rapidly, protecting the user.

We already have monitoring capability that lets us look for patterns of abuse, anomalies in usage, or simply odd behaviour that points towards these sophisticated industrialised attacks. This includes tools to automate some of the attack scenarios mentioned in the paper, helping us build a detailed understanding of how we would detect and resolve breaches should they happen, protecting our users and their transactions in an ever-evolving threat landscape. We’ll be working to improve this capability in response to developing threats and technologies over time.
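
As a hypothetical illustration of the kind of signal such monitoring might track - not a description of the tooling GDS actually runs - a simple sliding-window check can flag an unusual spike in failed verification attempts:

```python
from collections import deque
from time import time
from typing import Optional


class FailureSpikeMonitor:
    """Flag an unusually high rate of failed verification attempts within a
    sliding time window - one crude example of an anomaly signal."""

    def __init__(self, window_seconds: int = 300, threshold: int = 50):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self.failures: deque = deque()  # timestamps of recent failures

    def record_failure(self, timestamp: Optional[float] = None) -> bool:
        """Record a failed attempt; return True if the recent rate looks anomalous."""
        now = timestamp if timestamp is not None else time()
        self.failures.append(now)
        # Drop failures that have fallen outside the window.
        while self.failures and self.failures[0] < now - self.window_seconds:
            self.failures.popleft()
        return len(self.failures) > self.threshold


# Hypothetical usage: feed in failure events and alert when the threshold is crossed.
monitor = FailureSpikeMonitor(window_seconds=300, threshold=50)
if monitor.record_failure():
    print("Possible industrialised attack: investigate this traffic pattern")
```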

Working with industry and open standards bodies

The privacy of our users comes first in everything we do, so hearing about new approaches to the protection of personal data as it is used in identity federations such as GOV.UK Verify is very important to us.

Our engagement outside of government regarding technology and security has always been an important feature of GOV.UK Verify’s development and will continue to be so. The initial SAML profile and hub design were developed with support from experts at Microsoft, Oracle and IBM to make sure that we designed something industry could easily adopt. More recently we have briefed the OASIS Security (SAML) Technical Committee and the FIDO Alliance regarding our architecture, and we continue to share our learning with the European Commission as we move towards cross-border access to online services.

We’re designing and building GOV.UK Verify to respond to a complex set of problems and we don’t claim to have all the answers. Working with - and learning from - a wide variety of experts helps us make GOV.UK Verify better for users. We welcome the input of experts like the authors of this paper, and if that’s you, we’d encourage you to get in touch with us.


4 comments

  1. Comment by Nicholas Bohm

    Mistakes can happen in the best systems. If someone else is wrongly verified to be me, and I suffer loss, who is liable to compensate me? Or, putting it more generally, how are fraud and error risk allocated?

    • Replies to Nicholas Bohm

      Comment by Rebecca Hales

      Hi Nicholas

      GOV.UK Verify is designed to protect users from loss of data and identity fraud. It’s faster and more secure than other methods of verifying your identity and verifies an individual to the level of assurance that modern digital services need. However, as you acknowledge, threats in this area are rapidly developing and no system can ever be 100% secure.

      A certified company is liable in the same way as any other data controller if it's hacked and personal data is compromised.

      If a certified company fails to comply with the published government standards for identity assurance and, as a result of this breach of contractual obligations, asserts a verified identity relating to someone who is not the user, the certified company is responsible for the losses.

      Certified companies are also responsible if someone who is not the user is wrongly granted access to their verified identity account because the certified company has failed to comply with the published government standards.

      If a certified company has complied with the government standards for identity assurance, meeting all contractual obligations, but a user is still successful in creating a fraudulent verified identity account, the certified company is not liable.

      The government standards are constantly under review and we update them regularly to make sure they reflect the evolving nature of criminal behaviour. In the event of fraudulent activity occurring despite all contractual requirements being met by a certified company, we would work with the government service and with law enforcement partners to iterate the standards to protect against further threats to the delivery of online services in future.

  2. Comment by MarkK

    So if the certified company is not liable for a loss, it must be the only remaining party, namely the one who mandated the system: GDS. This scenario has been written out of the promised privacy impact assessment on the curious grounds that it would only occur if a crime were committed. The victim (and indeed relying party) will be unaware which IdP was used. The principles call for someone to turn to (albeit only for users, but the person might be a genuine user via a different IdP), yet the suggested Ombudsman remains 'under consideration' despite the proposed go-live next month.

    There may also be fraudulent claims 'that it wasn't me', but the level provided only claims to be balance of probability, not beyond reasonable doubt.

    • Replies to MarkK

      Comment by Rebecca Hales

      As explained in the comment above, there are clear circumstances where the certified company is responsible for loss.

      We will be publishing the privacy impact assessment soon. As Orvokki has explained in previous comments, this won't include issues related to fraud but we will be blogging shortly about what kind of fraud our standards prevent.

      All certified companies are obliged to implement a complaints process, which is reviewed by GDS as part of their acceptance prior to going live. The dispute resolution function mentioned in the principles - which has previously been referred to as an Ombudsman - is handled by GDS. At this time we do not expect to need to recruit an individual to fulfil that function, but that decision will be reviewed regularly in response to service monitoring and user needs.