Hi all, and welcome to OxCERT’s new blog. Here we will try to keep you up to date with any developments and projects we are involved in regarding security at the University of Oxford. We’ll also take the chance to comment on recent trends, debates and news stories. We hope you enjoy reading, find it useful, and we look forward to receiving your comments and feedback.
There has been an interesting debate on SSL interception following a thread on the Oxford IT staff mailing list that started with the (in)security of open wifi hotspots. Rather than following up on the mailing list, I thought this might be a good topic to start off our blog, so here goes!
Some good points have already been raised surrounding SSL and man-in-the-middle (MITM) attacks, but I think some points have also been overlooked. First of all, the attacker has to be able to intercept and relay the conversation (so has to be “listening” in-line), which makes unencrypted wi-fi access points a great point of attack. Secondly, and the key point in my view, the “security” of SSL-based transactions relies on the user doing some work. For typical web-based SSL transactions, the attacker only has to convince the user to accept their own certificate instead of that of the genuine service. They don’t necessarily have to do a good job of “spoofing” the genuine certificate; they just have to get the user to accept their own! Modern web browsers attempt to make this experience seamless for the end user with the infamous “padlock”. The theory goes that any certificate not signed by a genuine certificate authority (CA) should throw up a warning to the user, who will then sensibly back away, thank technology for saving them from fraud and pop out to the shops instead. However, experience tells us that things are often not as straightforward as this, and the (mis?)use of SSL certificates today does seem to raise more questions than answers.
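To make the “somebody has to do the verification work” point concrete, here is a minimal sketch of what a careful client does, using Python’s standard ssl module. The hostname and helper name are purely illustrative; the point is that the default context insists on a trusted CA chain and a matching hostname, which is exactly what a MITM attacker presenting their own certificate will fail.

```python
import socket
import ssl

# create_default_context() enables the checks a careful client needs:
# the certificate chain must lead to a trusted CA, and the name in the
# certificate must match the host we asked for.
ctx = ssl.create_default_context()

def fetch_peer_cert(host, port=443):
    """Connect over TLS and return the verified peer certificate.

    If a MITM presents an untrusted or self-signed certificate, the
    handshake raises ssl.SSLCertVerificationError instead of silently
    succeeding -- the programmatic equivalent of heeding the browser
    warning rather than clicking through it.
    """
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()
```

The interesting part is what the defaults are: `verify_mode` is `CERT_REQUIRED` and `check_hostname` is on, so the lazy attacker’s certificate is rejected unless someone deliberately turns those checks off.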
For a start, the whole idea of the “padlock” has been debated for some time now, along with the notion that the padlock signifies that the site is “secure”. You and I may know differently, but this is the message that has been projected to users: padlock == secure. Of course the padlock has nothing to do with the security of the actual site, but it is supposed to help assure Alice that she is talking to Bob after all and that the communication between the two will be protected. Sadly there are numerous flaws in this process too. Although browsers may alert the user to an invalid certificate, how many users will ignore, or simply not notice, the warning and just “click through”? Should the user implicitly trust the web browser’s root certificates? I for one have been to perfectly legitimate sites whose certificate was signed by an “untrusted” CA, and browsers themselves can be compromised by the bad guys. The situation is not helped by the fact that the economics of security have led to a situation where SSL certificates are pretty easy to get hold of without particularly thorough verification from registration and certificate authorities. All this before any mention of the recent fraudulent certificates issued by the Comodo CA as a result of compromised accounts at one of their registration authorities.
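As a small illustration of one thing the browser is actually checking behind the padlock, here is a sketch in Python of a classic red flag: a self-signed certificate, where the issuer and subject are the same entity. The certificate dicts below are made up, in the shape returned by `ssl.SSLSocket.getpeercert()`.

```python
# Made-up certificates, in the dict format Python's
# ssl.SSLSocket.getpeercert() returns for a verified peer.
self_signed = {
    "subject": ((("commonName", "www.example.ac.uk"),),),
    "issuer": ((("commonName", "www.example.ac.uk"),),),  # same as subject!
    "notAfter": "Jan  1 00:00:00 2030 GMT",
}
legit = {
    "subject": ((("commonName", "www.example.ac.uk"),),),
    "issuer": ((("commonName", "Some Trusted CA"),),),
    "notAfter": "Jan  1 00:00:00 2030 GMT",
}

def is_self_signed(cert):
    """A self-signed certificate names itself as its own issuer --
    exactly what a lazy MITM attacker is likely to present."""
    return cert.get("subject") == cert.get("issuer")

print(is_self_signed(self_signed))  # → True
print(is_self_signed(legit))        # → False
```

This is, of course, only one check among several (chain of trust, expiry, hostname match); the point is that each one is invisible to the user until it fails, and then everything hinges on what the user does with the warning.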
The whole problem of trust and certificates has never been an easy one, but the desire for competitive advantage and ease of use over genuine security hasn’t helped. One question that runs through this whole issue is how far technology should go towards creating a “seamless” experience for the user, and how much we should rely on user education. Applying a technological solution is difficult: whatever solution you come up with will add complexity and bring with it its own set of problems (some of which are described above). And if users become entirely dependent on technology to maintain their security, is that a good thing anyway? I can’t help but think that relying solely on technology will create a situation where we all waste our time looking for the silver bullet, the “sliced bread equivalent for the Internet”. We will probably waste a lot of money looking, too. I’m no futurologist, but I’m going to stick my neck out and say that this ain’t gonna happen, and the sooner we stop looking, the better!
On the other hand, as has been pointed out, we can’t expect all users to know about SSL certificates and we can’t educate them all overnight. Security is seen as an inconvenience, and users want the most convenient solution. As anyone who went to Ross Anderson’s talk at the OERC the other night will be aware, economics and competition mean that vendors will choose usability and convenience over good security practice (see his paper at ftp://ftp.deas.harvard.edu/techreports/tr-03-11.pdf for more details on the topic).
So what should we do? What can we do? Not “breaking” SSL would be a good start, IMHO. As I have described above, it is difficult enough as it is without encouraging users to routinely ignore valid SSL certificates for the service they are accessing. It has always seemed slightly perverse to me to take the one tool that users can use to validate the authenticity of a site and encrypt the ensuing traffic, and defeat the purpose of it in the name of “security”. What do we mean by “security” anyway? It may be argued that SSL interception will help “secure” your network, but what benefits does it actually give you compared with the costs, both financial and in terms of invasion of privacy? It’s important to ask what you are trying to achieve and why. Do the benefits of the solution you are looking at justify the costs? Have you considered that the costs aren’t just the purchase price? Yes, there may be benefits to SSL interception in certain environments, and I am no expert in the subject, but I would hope that anyone thinking of implementing it would exhaust all other options first and think very carefully about the legal and wider security implications.

In terms of malware, a lot can be gained simply from examining network flows and DNS lookups if you know what you are looking for. If you are looking at traffic content, most of the malware/botnets we are currently tracking still use plain HTTP for command and control (C&C) – they save the encryption for the stuff they really want to protect! It also helps if you have a dedicated security/incident response team monitoring the network for badness, I might add 😉 If you really want to limit malicious traffic on your network, there are malicious domain and IP lists that can be used, or you can allow access only to lists of known good sites. It’s also important to consider what you are trying to protect. For example, what are the implications for you, and the services you run, if user machines get compromised?
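As a sketch of the “malicious domain lists” approach, here is roughly what matching DNS lookups against a blocklist looks like, assuming you can already export queries as (client, domain) pairs. The domains and blocklist entries below are invented for illustration.

```python
# Invented blocklist -- in practice you would load a published feed
# of known C&C domains rather than hard-code entries.
MALICIOUS_DOMAINS = {"evil-c2.example", "botnet-cc.example"}

def flag_suspect_lookups(dns_log):
    """Return (client, domain) pairs where a client looked up a
    known-bad domain or any subdomain of one."""
    hits = []
    for client_ip, domain in dns_log:
        labels = domain.lower().rstrip(".").split(".")
        # Check the domain itself and every parent suffix, so
        # "update.evil-c2.example" matches the "evil-c2.example" entry.
        for i in range(len(labels)):
            if ".".join(labels[i:]) in MALICIOUS_DOMAINS:
                hits.append((client_ip, domain))
                break
    return hits

log = [
    ("10.0.0.5", "www.ox.ac.uk"),
    ("10.0.0.9", "update.evil-c2.example"),
]
print(flag_suspect_lookups(log))  # → [('10.0.0.9', 'update.evil-c2.example')]
```

The appeal of this approach is that it needs no interception of encrypted traffic at all: the DNS lookup happens in the clear before any SSL session is established.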
And last, but by no means least, how do we help users secure their own machines? The vast majority of incidents OxCERT sees still involve people not running AV, not patching their machines, running pirated software, and so on. There is no easy solution for this either, of course, which is why I think one thing we can always do is educate our users. It’s true we can’t educate everyone overnight, but if education isn’t the way forward in the University of Oxford, then something is wrong. Take what opportunities you can to make your users aware of the risks, but don’t just impose restrictive technology on them in the name of security. Security doesn’t have to be about inconvenience – it can also be about helping users achieve what they want. If users only want convenience and security == inconvenience, perhaps we need to change the way we present security to our users?