To Phish or Not To Phish? Part 1

That is the question…

About eighteen months ago I wrote a blog post on the price of phish. Since then, phishing has not only remained a problem but grown into a significant threat to aspects of the University’s business. This led to some pretty drastic measures two weeks ago, when access to Google Docs was temporarily blocked from within the University network. Robin’s excellent blog post on the issue last week gave further details and, perhaps unsurprisingly, generated a fair amount of interest. Some of the comments and responses were interesting and well balanced, others were well meaning but not well informed, and some were just plain wrong. But I think it is to OxCERT’s credit that they have been so open about what happened and have welcomed all comments: good, bad and ugly. As a very brief summary to follow up Robin’s post, I thought it would be worth starting off by clearing up a few details:

  1. The action to block Google Docs was not a knee-jerk reaction but a temporary, measured response: an attempt to limit the impact of a very current threat that was in danger of spiralling out of control and having a significant effect on critical University services (i.e. email). If you are dealing with a burst water pipe, the first thing you do is turn off the water.
  2. Whatever measures are in place to detect and prevent spam/phishing, it only takes one person to respond to one email for there to be a risk of significant escalation. Once an account is compromised it can be used to spam internally, so users are more likely to receive the phish and more likely to respond, because it comes from a genuine Oxford account (sad but true). You therefore get a snowball effect, and in this case the snowball was getting pretty extreme.

    Snowball effect

  3. OxCERT have been monitoring and dealing with compromised accounts for as long as the team has existed (since 1994, I believe). It is true that in the last two years phishing has become an increased problem, but it isn’t a new problem. However, the attackers are able to adjust their methods and use those that get the best results. There are countless services based all over the world allowing users (and bad guys) to set up free web-based forms, and there are many compromised websites that the attackers also use. Where these are little-known, little-used services or unheard-of personal websites, it is very easy to block access to them to protect our own users and prevent them from filling in the forms. Similarly, we can observe phishing runs and prevent users replying to the addresses used by the attackers. OxCERT have detected and dealt with many compromised accounts over the years but, in doing so, we have prevented many, many more from being compromised and, importantly, have therefore done our bit to ensure that the University’s email service has remained available to legitimate users whilst protecting other valuable assets. However, the attackers can get much better results by using Google Docs because they know we can’t just block access to Google (permanently, anyway!).
  4. The cost of any security control shouldn’t outweigh the benefit. However, coming up with accurate costs of doing something, or of not doing it, can be very difficult. This is why you need a security team that is prepared to make difficult decisions when dealing with incidents. These decisions are subjective (not everyone will agree with the action) and they are based on individual circumstances rather than being a blanket response to a given situation. The point is that they are reasoned decisions that can be justified.
  5. It is important, when dealing with security incidents, to continue monitoring both the threat and the impact of any security controls; when the cost of a control outweighs its benefit, it is time to change course and do something different. Ultimately, that is the reason the Google blocks were lifted after only 2.5 hours.

The point of this is not to make excuses, or even to argue either way as to whether the right decision was made, but rather to demonstrate that phishing remains a difficult problem to deal with and that (as is so often the case) the conditions favour the bad guys. It is very cheap for them to do, they only need a very small success rate to make it worthwhile and, if they make a mistake, there is little to no impact on them. They have everything to gain and nothing to lose, unlike the targets (the University of Oxford in this case), for whom it is the opposite.

It is also worth pointing out that OxCERT didn’t have to blog about this and make it so public. It might make a nice headline that Oxford has blocked Google, and it gives some people the opportunity to air their grievances or tell us what we should have done. But the truth is that, in the end, Google Docs was only unavailable for a very limited period of time and the number of users who actually noticed (compared with the total number of users, at least) was pretty low. Communication of OxCERT’s action happened at the time and also after the incident, and all users who complained received a direct response explaining exactly what had happened and why. All of the responses to that particular communication indicated that users understood and supported our actions. If it had been left at that, the chances are there would have been little or no coverage of this incident, but I think it is good to be as open as possible, to allow debate and to make as many people as possible (including Google) aware of the problems we face. To that extent, Robin’s blog has been very successful.

A Google Docs Phishing Form

However, I did want to make it clear that all security decisions are thought about extremely carefully, and I think it is fair to say that all of the legitimate ideas put forward in response to Robin’s blog have been considered. Some may be possible but too expensive; others may be impractical for technical, political or social reasons. The University of Oxford is a complicated place when it comes to IT and there are numerous constraints on what the central IT Services department can and can’t do. That said, there is always more that can be done and we will continue to look at both technical and social means to improve our prevention, detection and response to all incidents and threats, including phishing. For the purposes of this particular blog post, however, the area I am interested in exploring is training and awareness, specifically the idea of phishing our own users. The very thought of this has some people here up in arms, but I’d like to discuss further the idea of awareness when it comes to phishing and understand the opinions and objections of others. If you do too, see part 2 of this blog post.


4 Responses to “To Phish or Not To Phish? Part 1”

  1. Ian says:

    I think if Google Docs is hard to block, it’s not that hard to filter out of email. A reasonable and easily implemented measure might be to add a SpamAssassin rule that gives any URL matching “docs.google.com/.*formkey” a high score. While there might be false positives, they will be small in number and there are easy workarounds (such as fishing the email back out of your spam folder).
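
    Something along these lines in a SpamAssassin local.cf would probably do it; the rule name and score below are illustrative placeholders rather than a tested rule:

        # Untested sketch: score up any URI that points at a Google Docs form
        uri      OX_GDOCS_FORM  /docs\.google\.com\/.*formkey/i
        score    OX_GDOCS_FORM  5.0
        describe OX_GDOCS_FORM  Link to a Google Docs form, commonly abused for credential phishing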

    Harder to implement (and more contentious) would be to pass incoming emails through a process which attempts to detect phishy text and/or links and, if triggered, inserts a text warning into the body of the email. Gmail does this via their HTML user interface – which you can see if you look at phishy emails in your junk mail folder. You can ignore any incoming email that’s PGP signed or encrypted, as spammers don’t (currently) sign their missives.
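
    As a very rough, untested sketch of that second idea (the regex, warning text and function name are purely illustrative, not a production filter):

        # Sketch only: prepend a warning to plain-text parts that contain a
        # link which looks like a Google Docs form.
        import re
        from email import message_from_bytes, policy

        PHISHY_URL = re.compile(r"https?://docs\.google\.com/\S*formkey", re.IGNORECASE)
        WARNING = ("*** WARNING: this message links to an external web form. "
                   "IT Services will never ask for your password. ***\n\n")

        def tag_if_phishy(raw_message: bytes) -> bytes:
            msg = message_from_bytes(raw_message, policy=policy.default)
            # Leave signed mail untouched, as rewriting it would break the signature
            if msg.get_content_type() == "multipart/signed":
                return raw_message
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    body = part.get_content()
                    if PHISHY_URL.search(body):
                        part.set_content(WARNING + body)
            return msg.as_bytes()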

    • Jonathan Ashton says:

      Thanks Ian, though as I mentioned, most of the technical measures suggested have been (or are continually being) considered. In terms of response, OxCERT have to work within the constraints that exist at the time. The main focus of this blog post, however, is intended to be awareness training and education.

  2. Mark Johnson says:

    I don’t think phishing your own users is a bad idea, but it’s a big social challenge: how you deal with users who fall for it. If the idea is to directly address users who fall victim to your phish, my experience of things like this is that some people have a serious fear of appearing stupid, and may even go as far as to flat-out deny that it was them.

    An alternative would be to display a message after submission informing them that they’ve just taken part in a security test, with information about avoiding scams in the future (this avoids the potential embarrassment of a person telling them they fell for the trick), but this might confuse users and make the problem worse (“Oh, this is just another one of those forms from IT, no harm in filling it out then”).

    If you were to use it as a method of identifying areas to target “general” security training, it might work.

    • Jonathan Ashton says:

      Hi Mark,

      I think it would be important to display a message to the user rather than have any other action taken. As far as I can see, that would reduce the embarrassment factor compared with current practices (i.e. no one else need know or be involved). It also provides the opportunity to deliver an immediate message, potentially increasing the impact.