James Stewart has published a post about how GDS decides when it's ok not to publish source code. The identity assurance programme operates within the approach outlined by James.
We do publish information about our design, but we don’t publish code that would reveal specifics about our implementation of the design. As James explains, 'we don’t publish information about the implementation of the design because it would allow people to create a duplicate and practise hacking it without our being able to detect that activity.'
To give an example of how we publish information about our design, we published our SAML profile in November 2013 whilst we were in alpha. We're planning to update it soon to reflect our most up-to-date thinking following our private beta. Using an open standard such as SAML 2.0 allows departments working with us to use a variety of products, including open source solutions, to interoperate with us. We've checked to make sure that open source products can be used to integrate with us.
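To give a flavour of the kind of message the SAML 2.0 standard defines, here is a minimal sketch that builds an AuthnRequest using only Python's standard library. The entity ID, destination URL and request ID are placeholders, and this is an illustration of the open standard rather than anything taken from our published profile or our implementation.

```python
# Illustrative only: a minimal SAML 2.0 AuthnRequest built with the Python
# standard library. All identifiers and URLs below are placeholders.
import datetime
import uuid
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
ET.register_namespace("samlp", SAMLP)
ET.register_namespace("saml", SAML)

def build_authn_request(issuer: str, destination: str) -> bytes:
    """Return a serialised AuthnRequest for the given issuer and IdP endpoint."""
    request = ET.Element(
        f"{{{SAMLP}}}AuthnRequest",
        {
            "ID": "_" + uuid.uuid4().hex,  # unique message identifier
            "Version": "2.0",
            "IssueInstant": datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"),
            "Destination": destination,
        },
    )
    issuer_element = ET.SubElement(request, f"{{{SAML}}}Issuer")
    issuer_element.text = issuer
    return ET.tostring(request, encoding="utf-8")

# Example usage with placeholder endpoints:
print(build_authn_request(
    issuer="https://service.example.gov.uk",
    destination="https://idp.example.com/SAML2/SSO",
).decode("utf-8"))
```

Because the message format is an open standard, any product that can produce and consume messages like this, open source or otherwise, can interoperate with the service.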
We do want to make our work as transparent as possible, and we will over time release parts of our code that we think are safe and useful to publish. We will be looking at this issue more over the coming months once we have completed the work to launch our service into public beta.
4 comments
Comment by Tom Halligan posted on
> it would allow people to create a duplicate and practise hacking it without our being able to detect that activity.
Just a few things:
1: You won't stop people at least ATTEMPTING to attack the service whether you open-source the code or not.
2: Somebody in your team or panel of experts is highly likely to miss something. It happens - I don't need to provide a list of high-profile bugs which have led to security breaches since you just need to wait about a week and another one will pop up.
3: Since people will attack the service regardless, and somebody working on the code is virtually guaranteed to mess something up, isn't the only practical, sane solution to open source the code and have as many eyes as possible looking through it to catch as many security holes as they can?
By playing the 'security through obscurity' game - all you're doing is betting that nobody is going to uncover something you missed. It won't work - it NEVER works. Even with open-source code, a large-enough code-base is going to have some number of bugs / security issues. At least people can find them and fix them, however - and it also means that interested individuals can contribute and improve the software that we are likely going to HAVE to use whether we care or not.
Attacks will happen - all you're doing by hiding the code is reducing the number of guards on shift.
Don't get me wrong, I think the GDS have done an amazing job and you've obviously got a good team together - but I do disagree with your approach on this one.
Comment by Janet Hughes posted on
Hi Tom, thanks for commenting. We don't disagree with the principles you've expressed here at all, but we do think there are some specific exceptions to these general rules. Also what I've written here isn't a static position; I'm just setting out where we are now. We're going to be looking at this again once we've completed the work we needed to do to get to public beta.
Comment by Otmane El Rhazi posted on
Hi everyone, just a question: is it possible to request a copyright licence for that code? I mean, wrap it up as an innovation (if that is the case); then it would be easy to open-source the code. Regarding security and assurance against hacking, any code that involves access or login should be strongly encrypted with a single or double key of 128 bytes at least.
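For readers wondering what encrypting login material with a 128-bit symmetric key looks like in practice, here is a minimal sketch using the Fernet recipe from the Python `cryptography` package (which combines a 128-bit AES key with an HMAC key). It is purely illustrative and is not how the identity assurance programme's service is implemented.

```python
# Illustrative only: authenticated symmetric encryption of a session token
# using Fernet (AES-128-CBC plus HMAC-SHA256) from the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # urlsafe base64 secret: 128-bit AES key + 128-bit HMAC key
fernet = Fernet(key)

token = fernet.encrypt(b"session-id=abc123")  # authenticated ciphertext
plaintext = fernet.decrypt(token)             # raises InvalidToken if tampered with
assert plaintext == b"session-id=abc123"
```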
Comment by Dr Peter Hawkes posted on
Remember the maxim of the late Professor David J Wheeler of the Cambridge Computer Laboratory:-
"Encryption only spreads Security"