That is the question…
Part 1 of this blog post gave a summary of some of the issues we face when trying to detect, prevent and respond to phishing attacks. The upshot is that it isn't easy. Technical controls are theoretically possible, but in any organisation there are technical, financial, political and social constraints. One thing we can all do is try to educate our own users, but is there any point? And how can we do so in a way that is both effective and measurable?
The whole issue of security awareness and education is often a contentious one. Awareness programs tend to be compliance-driven, focussing on areas like data protection, and it is hard to know whether this type of activity has any meaningful effect on actual security. When it comes to phishing there are also some pretty good arguments against training and awareness. Despite having a (hopefully) intelligent population here at the University of Oxford, the incidents of recent weeks and months show that there are still plenty of gullible users, and this comes at a significant cost to the University. But even if a phishing attack involving 1000 emails leaves us with up to 10 compromised accounts, that still means 99% of users didn't respond. On top of that, a significant number of people here now actively report phishing to us, and this is one of the main "sensors" we use to detect attacks. So awareness is pretty good, right? How do you improve on 99%? Even if you do want to send out warnings, how can you effectively warn people and explain the difference between legitimate emails and phishing emails? Phishing emails, after all, are designed to look like real emails, and every discussion we've had here about warning emails has been lengthy. How do you know whether anyone reads them or cares? What is the right message to get across? How do you get the message across accurately and succinctly while still providing all the relevant information? And, importantly, how do you make it not appear to be a phish itself?
Perhaps, then, it isn't worth bothering, and we should focus instead on technology-based solutions? Well, better technology certainly needs to be explored, but I wonder whether looking at it like this is actually skewing the argument. If we were talking about other security controls we'd probably be thinking in a more targeted way. For example, if you wanted to do some penetration testing for SQL injection vulnerabilities you wouldn't necessarily just run your tests against every machine you run. First you'd probably audit your network to find out where SQL databases are available, run some initial testing to determine which of those might be vulnerable, and then target the more aggressive testing at those particular instances where the risk is especially high or where vulnerabilities are suspected. So can awareness training also be more focused, and can we target the users who are most vulnerable in a way that is measurable? I think, with phishing at least, the answer is probably yes, and here is why and how…
Not many users fall for phishing twice, so why not target your own users? What better way of finding out who in your organisation is vulnerable to phishing than actually phishing them before the bad guys do? That way you can actually target the 1%. Recently, however, the very suggestion of this idea amongst the IT community here led to some major objections, so I'd like to understand in more detail what some of those objections are. First, though, I thought it would be worth presenting the arguments as I see them. Again, if this were some other security vulnerability or control and we were looking to do penetration testing, it probably wouldn't even be an issue. So what is the difference with users? By phishing its own users, an organisation can obtain genuine metrics based on those who report the phishing, those who do nothing and those who actually fall for the scam. Not only can you provide some meaningful information on the effectiveness of your program, you can target the training at the users who need it most. And, just as for the bad guys, it is cheap, very simple to do and effective, so it is something you can repeat on a regular basis. You could even throw in an incentive, such as entering those who report the emails as phishing into some sort of prize draw. This isn't something new we've just dreamt up either: it is promoted by Lance Spitzner of SANS as part of their "Securing the Human" program. If you want a more detailed overview of how to do it, see Lance's webinar. There are also many other examples of similar campaigns, and numerous tools available such as Wombat Security's PhishGuru, PhishMe and PhishLine.
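To make the metrics idea concrete, here is a minimal sketch in Python of how the outcomes of a simulated campaign might be tallied into the three categories above, with the "fell for it" group picked out for targeted training and a reporter chosen at random for a prize draw. The addresses, outcome labels and data format are entirely hypothetical; real tools like those mentioned above handle this for you.

```python
import random
from collections import Counter

# Hypothetical outcome log from one simulated phishing campaign:
# one record per recipient, recording the action they took.
# Labels follow the three categories in the post: "reported" the email,
# took "no_action", or "fell" for it (clicked / submitted credentials).
outcomes = {
    "alice@example.ac.uk": "reported",
    "bob@example.ac.uk": "no_action",
    "carol@example.ac.uk": "fell",
    "dave@example.ac.uk": "reported",
    "erin@example.ac.uk": "no_action",
}

def campaign_metrics(outcomes):
    """Summarise a campaign as (count, percentage) per category."""
    counts = Counter(outcomes.values())
    total = len(outcomes)
    return {action: (n, round(100 * n / total, 1)) for action, n in counts.items()}

def follow_up_targets(outcomes):
    """Users who fell for the phish: the ones to target with training."""
    return sorted(u for u, action in outcomes.items() if action == "fell")

def prize_draw(outcomes, seed=None):
    """Pick one reporter at random, as an incentive to report phishing."""
    reporters = sorted(u for u, action in outcomes.items() if action == "reported")
    return random.Random(seed).choice(reporters) if reporters else None

print(campaign_metrics(outcomes))
print(follow_up_targets(outcomes))  # ['carol@example.ac.uk']
```

Repeating the same tally across successive campaigns is what gives you the trend: whether the "fell" percentage actually shrinks after targeted training.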
A few people, though, have commented that they would have major objections to such an approach, stating that it would be a "step too far". Not all of these concerns have been spelled out, but those I am aware of surround privacy, and eroding the trust users have in IT staff if we were seen to be trying to trick them. Let's deal with privacy first, the main concern being that the names of users replying to phishing emails would be revealed to management or others. Well, that might be a legitimate concern. After all, we don't want users thinking we are just trying to catch them out and get them into trouble. But I think that is fairly easy to overcome by only reporting repeat offenders across phishing campaigns. There are plenty of other ways to get the message across to people without the issue having to go via their own managers. Some have expressed concerns that users will actually give us their passwords. Whether this is a problem or not is debatable: if users do reply to us with their password, we would reset it and get them to create a new one, just as we do whenever a user reveals their password currently. But, in any case, the exercise could easily be set up so that we never receive that particular information.
In terms of trust, then, again I can see the argument, but I don't believe it is an insurmountable problem. For example, any awareness campaign could be announced to all users at the very beginning, and as long as we communicate clearly and effectively with users who respond, I don't see a major problem. When OxCERT replied to all the users who complained that we had temporarily blocked Google Docs and explained the reasons, almost every one of them fully understood our actions. Besides, the point here is that we don't want users to trust emails asking for credentials regardless of where they come from, and surely it is better for users that we try to phish them before the bad guys do? Of course there would undoubtedly be some people who would just be unhappy, but we get this anyway. One senior academic here recently verbally abused our help centre staff and called the whole of IT Services b******s for making him/her change his/her password. This was after he/she had responded to a phishing scam. I'm not sure we should be going out of our way to avoid upsetting people like this.
An effective solution or major mistake?
So we have discussed a means of carrying out a form of penetration testing that is very cheap, easy and effective. At worst it will provide genuinely meaningful metrics that could be used to assess the state of our human defences and also, over a period of time, to demonstrate any improvement (or not) in those defences based on the measures we take. At best it will allow us to target our training at our most vulnerable users, allowing them to protect themselves and, whilst we are at it, to protect University assets. Yes, there may be pitfalls, but none that I can think of that can't be overcome with a little thought, some advice, and lessons learned from those who have carried out this type of activity already. So, we could carry on arguing over how to word warnings about phishing emails that no-one really reads anyway, or whether or not to put links in emails. We could continue to play whack-a-mole with compromised accounts whilst everyone else tells us what we should be doing. We could, and should, explore technological solutions, but that will inevitably take time and may not improve things anyway. Or we could do something different, cheap and effective. I would welcome your thoughts on which it should be.