FIRST Technical Colloquium, Amsterdam: day one


Last week one of us attended a FIRST technical colloquium, generously hosted by Cisco in their offices in the suburbs of Amsterdam. Somewhat unusually, this was the second FIRST TC of the year to be held in Europe; nevertheless the event was well-attended, unsurprisingly with a strong presence from the Dutch teams and from Cisco themselves.

Proceedings started with a talk on Cuckoo Sandbox, an open-source tool for automated malware analysis. This is a topic of some interest to us: we have been intending for some time to set up a malware analysis system of our own, but commercial systems can be extremely expensive and we lack the resources to develop one from scratch. Cuckoo comes across as well-suited to our requirements, with a good and ever-expanding feature set. Unlike some commercial vendors we’ve previously encountered, the speaker was happy to admit to some of the limitations of sandboxing, not least that malware authors may include code to detect when they are running in a sandboxed environment and adjust the malware’s behaviour accordingly. He also stressed the importance of making effective use of the information gained through use of the software.
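For those curious what the automation looks like in practice, Cuckoo ships with a small REST API server (api.py); the sketch below is a minimal illustration of submitting a sample and fetching the report, assuming the API is listening on its default localhost:8090 – the filename is invented.

```python
# Minimal sketch: drive Cuckoo Sandbox via its bundled REST API (api.py).
# Assumes the API server is listening on localhost:8090; "suspect.exe" is
# an invented example filename.
import requests

CUCKOO_API = "http://localhost:8090"

def submit_sample(path):
    """Submit a suspect binary for automated analysis; returns a task ID."""
    with open(path, "rb") as sample:
        resp = requests.post(CUCKOO_API + "/tasks/create/file",
                             files={"file": (path, sample)})
    resp.raise_for_status()
    return resp.json()["task_id"]

def fetch_report(task_id):
    """Fetch the finished JSON report: behavioural traces, dropped files, etc."""
    resp = requests.get("%s/tasks/report/%d" % (CUCKOO_API, task_id))
    resp.raise_for_status()
    return resp.json()

print(submit_sample("suspect.exe"))
```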

Next was a talk from Seth Hanford of Cisco on the development of version 3 of the Common Vulnerability Scoring System (CVSS). The current version, CVSS v2, was launched in 2007 and is widely used in the security industry, not least by us in assessing which vulnerability announcements merit our sending bulletins to IT staff in the university. Nevertheless, experience has shown that the system is not perfect and presents some opportunities for confusion, and it is hoped that version 3 can address these problems.
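For a flavour of how mechanical the current system is, here is the published CVSS v2 base equation as a small Python function; the weights in the comment are taken from the v2 specification.

```python
# The published CVSS v2 base equation. Metric arguments are the numeric
# weights from the v2 specification, e.g. AV:N = 1.0, AC:L = 0.71,
# Au:N = 0.704, and C/I/A none = 0.0, partial = 0.275, complete = 0.660.
def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# AV:N/AC:L/Au:N/C:P/I:P/A:P -- a typical remote code execution profile
print(cvss2_base(1.0, 0.71, 0.704, 0.275, 0.275, 0.275))  # -> 7.5
```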

This was followed by some talks on DNS-related issues. First was Paul Vixie of ISC, perhaps best known as former maintainer of the BIND nameserver software, as co-founder of the original Realtime Blackhole List anti-spam measure, and as self-confessed holder of the record for the “most CERT advisories due to a single author”. Paul’s talk was on Response Policy Zones (RPZ), a feature added to recent versions of BIND as a means of providing a “DNS firewall”, allowing DNS server maintainers to prevent client access to systems based on domain name rather than IP address. This is a more advanced implementation of something that we have done at the University’s central nameservers for over eight years, and something which we are keen to explore further over the coming months. A second talk on RPZ followed, exploring the practicalities of implementation and operation.
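As a rough sketch of the idea: RPZ policy is expressed as ordinary DNS records inside a specially-named zone that the resolver consults before answering. The generator below is our own illustration – the zone apex, SOA and NS values are invented – but the CNAME-to-root convention (meaning “answer NXDOMAIN”) is standard RPZ.

```python
# Sketch: build a BIND Response Policy Zone from a blocklist. A record of
# "<domain> CNAME ." in an RPZ means "answer NXDOMAIN for this name"; the
# SOA and NS values below are invented for the example.
import time

HEADER = """$TTL 300
@ IN SOA localhost. hostmaster.example.ac.uk. {serial} 3600 600 86400 300
@ IN NS localhost.
"""

def build_rpz(bad_domains):
    lines = [HEADER.format(serial=int(time.time()))]
    for domain in bad_domains:
        lines.append("%s CNAME ." % domain)      # the domain itself
        lines.append("*.%s CNAME ." % domain)    # and everything beneath it
    return "\n".join(lines) + "\n"

print(build_rpz(["phish-form.example.com", "malware-c2.example.net"]))
```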

Continuing the DNS theme was Henry Stern of Cisco, discussing passive DNS logging. Passive DNS is something that we have been aware of for several years, through use of an external service to determine how the relationship between domain names and IP addresses has changed over time. Such a service relies on capturing the responses given by recursive nameservers, then anonymising and collating that data. We are purely a “consumer” at present but are being encouraged to collect data ourselves at the university nameservers and contribute it to the project, provided that any personally-identifiable information has been removed. Cisco have taken the idea further and are logging queries within their internal network, purely for internal use, capturing some four billion lookups per day. Naturally this requires considerable effort to reduce the volumes of captured data to a level at which useful queries can be run in a matter of seconds.
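Conceptually the collection side is straightforward; below is a minimal sketch using scapy, with the capture filter as an assumption – a production sensor would buffer, deduplicate and strip any client-identifying data before records left the machine.

```python
# Sketch: log name-to-address mappings from DNS responses captured on a
# resolver's interface. The client's address is deliberately not recorded:
# passive DNS projects collate what was answered, not who asked.
from scapy.all import sniff, DNS, DNSRR

def log_response(pkt):
    if pkt.haslayer(DNS) and pkt[DNS].qr == 1:  # qr=1: this is a response
        for i in range(pkt[DNS].ancount):
            rr = pkt.getlayer(DNSRR, i + 1)     # answer records, 1-indexed
            if rr is not None:
                print(rr.rrname, rr.type, rr.rdata)

# BPF filter: replies leaving port 53; requires capture privileges.
sniff(filter="udp and src port 53", prn=log_response, store=False)
```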

The following talk was on Visual Malware Analysis, which works on the principle that humans are much better at taking in information visually: inputs from common analysis tools are used to produce diagrams representing the behaviour of malware. Nevertheless there is significant complexity even to relatively simple malware, and it would take practice to be able to make effective use of the information presented in this form.
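As a toy illustration of the principle, behavioural reports reduce naturally to graphs; the event list below is entirely invented, and the output is Graphviz DOT that can be rendered with `dot -Tpng`.

```python
# Sketch: render an (invented) behavioural trace as a Graphviz digraph;
# pipe the output through `dot -Tpng` to get a picture an analyst can scan.
events = [
    ("sample.exe", "writes",       "C:/Windows/Temp/evil.dll"),
    ("sample.exe", "sets_autorun", "HKLM/Software/Microsoft/Windows/CurrentVersion/Run"),
    ("sample.exe", "connects_to",  "198.51.100.7:80"),
    ("evil.dll",   "injects_into", "explorer.exe"),
]

def to_dot(events):
    lines = ["digraph behaviour {", "  rankdir=LR;"]
    for src, action, dst in events:
        lines.append('  "%s" -> "%s" [label="%s"];' % (src, dst, action))
    lines.append("}")
    return "\n".join(lines)

print(to_dot(events))
```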

The final talk of the day was by two members of Cisco’s own CSIRT team, entitled “Re-writing the CSIRT Playbook”. Despite being much larger than OxCERT, they still admit to being understaffed, and are gathering data from a variety of systems spread around the globe. They described the reasons for moving from a commercial Security Information and Event Management (SIEM) system to infrastructure they have built in-house, before discussing their “playbooks”. Essentially these describe the rules and actions to be taken under particular circumstances, making it clear which steps require a human to make decisions before action is taken – for instance, if a member of staff above a particular level of seniority is involved.
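We are left to guess at Cisco’s internal format, but a playbook entry reduces conceptually to a detection query, an action, and an escalation rule; the field names and thresholds in this sketch are entirely our own invention.

```python
# Sketch: a playbook "play" as plain data -- what to look for, what to do,
# and when a human must sign off first. Field names and thresholds invented.
PLAYBOOK = [
    {
        "name": "outbound-spam-burst",
        "query": "mail_logs | sender in internal_accounts | rate > 500/hour",
        "action": "suspend_account",
        "needs_human": lambda ctx: ctx.get("seniority", 0) >= 8,
    },
]

def run_play(play, context):
    if play["needs_human"](context):
        print("ESCALATE to an analyst before acting:", play["name"])
    else:
        print("Auto-execute:", play["action"])

run_play(PLAYBOOK[0], {"seniority": 9})  # senior staff involved -> escalate
```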

This ended the official talks for the day, but a drinks reception followed, offering opportunities for some networking before we headed back to our respective hotels in the city centre. For the second day of the meeting, see http://blogs.it.ox.ac.uk/oxcert/2013/04/18/first-tc-ams2/

Posted in FIRST Conference | Comments Off

To Phish or Not to Phish? Part 2

That is the question…

Part 1 of this blog post gave a summary of some of the issues we face when trying to detect, prevent and respond to phishing attacks. The upshot is that it isn’t easy. Technical controls are theoretically possible, but in any organisation there are technical, financial, political and social constraints. One thing we can all do is try to educate our own users – but is there any point, and how can we do so in a way that is both effective and measurable?

The whole issue of security awareness and education is often a contentious one. Awareness programmes tend to be compliance-driven, focussing on areas like data protection, and it is hard to know whether this type of activity has any meaningful effect in terms of actual security. When it comes to phishing there are also some pretty good arguments against training and awareness. Despite having a (hopefully) intelligent population here at the University of Oxford, incidents of recent weeks and months show that there are still plenty of gullible users, and this comes at a significant cost to the University. But even if we are dealing with up to 10 compromised accounts following an attack involving 1,000 phishing emails, it still means that 99% of users didn’t respond. On top of that, a significant number of people here now actively report phishing to us, and this is one of the main “sensors” we use to detect attacks. So awareness is pretty good, right? How do you improve on 99%? Even if you do want to send out warnings, how can you effectively warn people and explain the difference between legitimate emails and phishing emails? Phishing emails, after all, are designed to look like real emails, and every discussion we’ve had here about warning emails has been lengthy. How do you know whether anyone reads them or cares? What is the right message to get across? How do you get the message across accurately and succinctly while providing all the relevant information? Importantly, how do you make it not appear to be a phish itself?

Perhaps then it isn’t worth bothering, and we should instead focus on technology-based solutions? Well, better technology certainly needs to be explored, but I wonder whether looking at it like this is actually skewing the argument. If we were talking about other security controls we’d probably think in a more targeted way. For example, if you wanted to do some penetration testing for SQL injection vulnerabilities you wouldn’t necessarily just run your tests against every machine you run. First of all you’d probably audit your network to find out where SQL databases are available, run some initial testing to determine which of those might be vulnerable, and then target the more aggressive testing at those particular instances where the risk is especially high or where vulnerabilities are suspected. So can awareness training also be more focused, and can we target the users who are most vulnerable in a way that is measurable? I think, with phishing at least, the answer is probably yes – and here is why and how.

Why Phish?

Taken from http://www.flickr.com/photos/patrickgage/1620195364/

Not many users fall for phishing twice, so why not target your own users? What better way of finding out who in your organisation is vulnerable to phishing than actually phishing them before the bad guys do? That way you can actually target the 1%. Recently, however, the very suggestion of this idea amongst the IT community here led to some major objections, so I’d like to understand in more detail what some of those objections are. First of all, though, I thought it would be worth presenting the arguments as I see them. Again, if this were some other security vulnerability/control and we were looking to do penetration testing, it probably wouldn’t even be an issue. So what is the difference with users? By phishing its own users an organisation can obtain genuine metrics based on those that report phishing, those that do nothing and those that actually fall for the scam. Not only can you provide some meaningful information on the effectiveness of your programme, you can target the training at the users who need it the most. And, just as for the bad guys, it is cheap, very simple to do and effective, so it is something you can repeat on a regular basis. You could even throw in an incentive, like entering those who report the emails as phish into some sort of prize draw. This isn’t something new we’ve just dreamt up either: it is promoted by Lance Spitzner of SANS as part of their “Securing the Human” program. If you want a more detailed overview of how to do it, see Lance’s webinar. There are also many other examples of similar campaigns, and numerous tools available to use, such as Wombat Security’s PhishGuru, PhishMe and PhishLine.
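To make the “measurable” claim concrete, the arithmetic involved is trivial; all figures in this sketch are invented.

```python
# Sketch: the three outcomes of an internal phishing test, and the metrics
# you would track from one campaign to the next. All figures invented.
def campaign_metrics(sent, reported, clicked):
    ignored = sent - reported - clicked
    return {
        "click_rate": 100.0 * clicked / sent,    # who needs targeted training
        "report_rate": 100.0 * reported / sent,  # your human "sensors"
        "ignore_rate": 100.0 * ignored / sent,
    }

print(campaign_metrics(sent=1000, reported=150, clicked=10))
# -> {'click_rate': 1.0, 'report_rate': 15.0, 'ignore_rate': 84.0}
```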

Concerns

Privacy is a balance

A few people, though, have commented that they would have major objections to such an approach, stating that it would be a “step too far”. Not all of these concerns have been qualified, but those that I am aware of surround privacy, and eroding the trust that users have in IT staff if we were seen to be trying to trick users. Let’s deal with privacy first, the main concern here being that the names of users replying to phishing emails would be revealed to management or others. Well, that might be a legitimate concern. After all, we don’t want users thinking we are just trying to catch them out and get them into trouble. But I think that is fairly easy to overcome by only reporting repeat offenders of phishing campaigns. There are plenty of other ways to get the message across to people without the issue having to go via their own managers. Some have expressed concerns that users will actually give us their passwords. Whether this is a problem or not is debatable – if users do reply to us with their password we would reset it and get them to create a new one, as we do whenever a user reveals their password currently. But, in any case, the technique could easily be set up so that we don’t receive that particular information.

In terms of trust then, again, I can see the argument, but I don’t believe it is an insurmountable problem. For example, any awareness campaign could be announced to all users at the very beginning, and as long as we communicate clearly and effectively with users who respond I don’t see a major problem. When OxCERT replied to all users who complained that we had temporarily blocked Google Docs and explained the reasons, almost every one of them fully understood the reasons for our actions. Besides, the point here is that we don’t want users to trust emails asking for credentials regardless of where they come from – and surely it is better for users that we try to phish them before the bad guys do? Of course there would undoubtedly be some people who would just be unhappy, but we get this anyway. One senior academic here recently verbally abused our help centre staff and called the whole of IT Services b******s for making him/her change his/her password. This was after he/she had responded to a phishing scam. I’m not sure we should be going out of our way to avoid upsetting people like this.

An effective solution or major mistake?

So we have discussed a means of carrying out some form of penetration testing that is very cheap, easy and effective. At worst it will provide genuinely meaningful metrics that could be used to assess the state of our human defences and also (over a period of time) to demonstrate any improvement (or not) in those defences based on the measures that we take. At best it will allow us to target our training at our most vulnerable users, allowing them to protect themselves and protect University assets whilst we are at it. Yes, there may be pitfalls, but nothing that I can think of that can’t be overcome with a little thought, advice and learning of lessons from those who have carried out this type of activity already. So, we could carry on arguing over how to word warnings about phishing emails that no-one really reads anyway, or whether or not to put links in emails. We could continue to play whack-a-mole with compromised accounts whilst everyone else tells us what we should be doing. We could, and should, explore technological solutions, but that will inevitably take time and may not improve things anyway. Or we could do something different, cheap and effective. I would welcome your thoughts on which it should be.

Posted in Information Security | 11 Comments

To Phish or Not To Phish? Part 1

That is the question…

About eighteen months ago I wrote a blog post on the price of phish. Since then, phishing has continued not only to remain a problem but to grow as a significant threat to aspects of the University’s business. This led to some pretty drastic measures being taken two weeks ago, when access to Google Docs was temporarily blocked from within the University network. Robin’s excellent blog post on the issue last week gave further details and, perhaps unsurprisingly, generated a fair amount of interest. Some of the comments and responses were interesting and well balanced, others were well meaning but not well informed, and some were just plain wrong. But I think it is to OxCERT’s credit that they have been so open about what happened and welcomed all comments: good, bad and ugly. As a very brief summary to follow up Robin’s post, I thought it would be worth starting off by clearing up a few details:

  1. The action to block Google Docs was not a knee-jerk reaction but a temporary, measured response: an attempt to limit the impact of a very current threat that was in danger of spiralling out of control and having a significant effect on critical University services (i.e. email). If you are dealing with a burst water pipe, the first thing you do is turn off the water.
  2. Whatever measures are in place to detect and prevent spam/phishing, it takes only one person to respond to one email for there to be a risk of significant escalation. Once an account is compromised it can be used to spam internally, so users are more likely to receive the phish and more likely to respond, since it comes from a genuine Oxford account (sad but true). You therefore get a snowball effect, and in this case the snowball was getting pretty extreme.

    Snowball effect

  3. OxCERT have been monitoring and dealing with compromised accounts for as long as they have existed (since 1994, I believe). It is true that in the last two years phishing has become an increased problem, but it isn’t a new problem; the attackers simply adjust their methods and use those that get the best results. There are countless services based all over the world allowing users (and bad guys) to set up free web-based forms, and there are many compromised websites that the attackers also use. Where these are little-known, little-used services or unheard-of personal websites, it is very easy to block access to them to protect our own users and prevent them from filling in the forms. Similarly we can observe phishing runs and prevent users replying to the addresses used by the attackers. OxCERT have detected and dealt with many compromised accounts over the years, but in doing so we have prevented many, many more from being compromised and, importantly, have therefore done our bit to ensure that the University’s email service has remained available to legitimate users whilst protecting other valuable assets. However, the attackers can get much better results by using Google Docs, because they know we can’t just block access to Google (permanently, anyway!).
  4. The cost of any security control shouldn’t outweigh the benefit. However, coming up with accurate costs of doing something – or of not doing it – can be very difficult. This is why you need a security team that is prepared to make difficult decisions when dealing with incidents. These decisions are subjective (not everyone will agree with the action) and they are based on individual circumstances rather than being a blanket response to a given situation. The point is that they are reasoned decisions that can be justified.
  5. It is important, when dealing with security incidents, to continue to monitor both the threat and the impact of any security controls, and when the cost of a control outweighs the benefit it is time to change and do something different. Ultimately that is the reason the Google blocks were lifted after only 2.5 hours.

The point of this is not to make excuses, or even to argue either way as to whether the right decision was made, but rather to demonstrate that phishing remains a difficult problem to deal with and (as is so often the case) the conditions favour the bad guys. It is very cheap for them, they need only a very small success rate to make it worthwhile and, if they make a mistake, there is little to no impact on them. They have everything to gain and nothing to lose – unlike the targets (the University of Oxford in this case), for whom it is the opposite. It is also worth pointing out that OxCERT didn’t have to blog about this and make it so public. It might make a nice headline that Oxford has blocked Google, and it gives some people the opportunity to air their grievances or tell us what we should have done. But the truth is that, in the end, Google Docs was only unavailable for a very limited period of time, and the number of users who actually noticed (compared with the total number of users, at least) was pretty low. Communication of OxCERT’s action happened at the time and also after the incident, and all users who complained received a direct response explaining exactly what had happened and why. All of the responses to that particular communication indicated that users understood and supported our actions. If it had been left at that, the chances are there would have been little or no coverage of this incident, but I think it is good to be as open as possible, to allow debate, and also to make as many people as possible (including Google) aware of the problems we face. To that extent Robin’s blog has been very successful.

A Google Docs Phishing Form

However, I did want to make it clear that all security decisions are thought about extremely carefully, and I think it is fair to say that all of the legitimate ideas that have been put forward in response to Robin’s blog have been considered. Some may be possible but too expensive; others may be impractical for technical, political or social reasons. The University of Oxford is a complicated place when it comes to IT, and there are numerous constraints on what the central IT Services department can and can’t do. That said, there is always more that can be done, and we will continue to look at both technical and social means to improve our prevention, detection and response to all incidents and threats – including phishing. For the purposes of this particular blog post, however, the area I am interested in exploring is training and awareness, specifically the idea of phishing our own users. The very thought of this has some people here up in arms, but I’d like to discuss further the idea of awareness when it comes to phishing and understand the opinions and objections of others. If you do too, see part 2 of this blog post.

Posted in Information Security | 4 Comments

FIRST TC 2013 (Lisbon)

Organisers of the conference dinner had a novel approach to network security

During late January one member of OxCERT was able to attend the annual spring Technical Colloquium, organised jointly by FIRST and TF-CSIRT. This event provides an opportunity to meet up with, hear presentations from, and discuss matters of interest with a variety of people involved in network security and incident response. It is always particularly valuable for OxCERT because of the larger academic presence than at many of the other FIRST events we attend. The meeting ran over four days, although OxCERT only attended the first three, as the tutorials organised for the last day were not as relevant to our core activities as the rest of the meeting.

Monday began with a closed session of the Trusted Introducer (TI), a group of which OxCERT is not a member, so our involvement began on the Monday afternoon with the TF-CSIRT meeting. This provided an opportunity for teams to give a brief update on their activities and some of the things they have been involved with over the last few months, including tools they have developed, particular challenges they were facing and other things they felt relevant to the community.

Of particular note were updates from a major UK ISP regarding the volume of targeted phishing they have been seeing – rising by a factor of 20 over the space of six months last year. It is interesting that this is very similar to what we have seen, even though we are in a very different industry. They observed that it is not immediately obvious why an attacker might wish to steal such credentials – it is much clearer that University credentials have value than those for a typical ISP mail service. They were also hoping to find other people to collaborate with in dealing with phishing; this could lead to a project of benefit to us and to others, depending on where it leads.

Other updates of interest included tools for devolved acquisition of forensic data – a challenge we currently face when dealing with incidents outside of IT Services: how can we easily, safely and securely obtain forensic images when they are needed, without having to physically take a particular system away for imaging? There were also tools for the automated scanning of sites like pastebin.com for data of interest to incident responders – in recent times pastebin has been used to publicise attacks and to disseminate stolen information, so such tools are extremely useful to the incident response community.
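As a sketch of what such a scanner looks like – the raw-paste URL scheme and polling approach are assumptions, and a real deployment would respect the site’s terms of service and use whatever API access is offered:

```python
# Sketch: poll recent pastes for strings of local interest (the Oxford
# domain and netblock here). fetch_recent_paste_ids() is a stub for
# whatever listing mechanism or API the site actually offers.
import re
import time
import requests

WATCH = re.compile(r"ox\.ac\.uk|129\.67\.", re.IGNORECASE)
SEEN = set()

def fetch_recent_paste_ids():
    """Site-specific and stubbed here: scrape or query the site's API."""
    return []

def check_paste(paste_id):
    raw = requests.get("https://pastebin.com/raw/%s" % paste_id, timeout=10)
    if raw.ok and WATCH.search(raw.text):
        print("Possible data of interest in paste", paste_id)

while True:
    for paste_id in fetch_recent_paste_ids():
        if paste_id not in SEEN:
            SEEN.add(paste_id)
            check_paste(paste_id)
    time.sleep(60)
```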

The afternoon continued with a look at darknets – unallocated blocks of address space that are monitored to see what unexpected traffic hits them. This time it was darknets with a small twist: netblocks were chosen deliberately because they are obvious misspellings of the common RFC1918 private address ranges, and the traffic arriving at them was dumped. This revealed the amount of sensitive data that is leaked by careless typing of IP addresses into devices, and reminds us all of the need to take care when configuring such systems.
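To make the trick concrete, here is a rough guess at how candidate “fat-finger” netblocks might be enumerated – the single-digit-slip model is our own assumption about their method:

```python
# Sketch: enumerate plausible one-digit mistypings of common RFC1918
# prefixes (e.g. 192.168 -> 192.169, 192.148, ...). Candidates that are
# public, routable and otherwise unused could serve as "typo darknets".
COMMON_PRIVATE = ["10.0.0", "172.16.0", "192.168.0", "192.168.1"]

def one_digit_slips(prefix):
    slips = set()
    for i, ch in enumerate(prefix):
        if not ch.isdigit():
            continue
        for digit in "0123456789":
            if digit == ch:
                continue
            candidate = prefix[:i] + digit + prefix[i + 1:]
            octets = candidate.split(".")
            if any(len(o) > 1 and o[0] == "0" for o in octets):
                continue  # skip odd leading-zero forms like 092.168.0
            vals = [int(o) for o in octets]
            if vals[0] > 0 and all(v <= 255 for v in vals):
                slips.add(candidate)
    return sorted(slips)

for p in COMMON_PRIVATE:
    print(p, "->", one_digit_slips(p)[:6], "...")
```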

The day’s talks concluded with a discussion with the developers of RTIR on new features in their incident response handling system. Although we do not use this tool, it is useful to see what it can do, so that we can evaluate whether we should look to introduce any similar features into our own systems.

The conference dinner was a good opportunity to meet up with and talk to people working in similar roles in other organisations, and I had several useful discussions with people about areas they were investigating and projects they were undertaking. Of particular interest was a discussion about logging tools and event correlation, an area in which I’d like to do further work when time permits.

Tuesday consisted of a selection of plenary talks from a variety of people well known within the security community. Of particular note were talks from Cisco’s security team, covering some of the challenges of intrusion detection systems, correlation of logs from multiple sources, and finding the resources to be able to deal with incidents. One thing that was particularly striking was that, even within a large company that supplies pre-built tools for intrusion detection and pre-defined netflow analysis, the security team still relies heavily on custom-written signatures. Also noteworthy was the very high proportion of incidents they now detect using deep packet inspection (around 90% of the incidents on their network) – this mirrors very well the changes we’ve seen in our own malware detection methods over the past 4-5 years. They also covered some new technologies, some of which may be appropriate within Oxford to help protect users from some classes of threat.

Panasonic spoke about some of the challenges for internet service providers of doing incident response with an increasing number of connected devices, many of which the person with the internet service contract may have no control over. Examples such as smart meters, internet-connected pacemakers and cars were given as devices which, for good reason, the person in possession may not be permitted – or may not wish – to take responsibility for patching! Of course, although we are not an ISP, if such devices become ubiquitous it is fairly likely that some of them will end up on our network, and we will need to have a plan for how to cope with them.

This was followed by an overview of iOS security, device encryption and some of the common misconceptions. Ken van Wyk has worked hard to analyse and discover exactly how iOS’s whole-device encryption system works and what its limitations are. This was extremely interesting, as I did not previously understand how this worked, and I know that the Information Security team are going to be doing some work in this area imminently.

Wednesday consisted of a whole-day hands-on session from Internet Initiative Japan. This consisted of performing a forensic investigation of a particular targeted attack, and trying to identify exactly what exploit had been used. This was extremely useful, as we do perform such analyses ourselves on occasion (although generally in slightly less depth). The session covered the use of several tools I had not used in the past, and I look forward to having the time to investigate them further and, if they prove helpful, sharing them with my colleagues.

Posted in FIRST Conference | Comments Off

Google Blocks

We recently felt it necessary to take, temporarily, extreme action for the majority of University users: we blocked Google Docs.

Why would we do such a thing, you might well ask. Surely Google Docs is a perfectly legitimate site, widely used by staff and students as part of their work and personal lives?

We know that. Unfortunately, it is also frequently used for illegal activities; importantly, illegal activities which threaten the security of the University’s systems and data.

Background: phishing attacks

Taken from http://www.flickr.com/photos/patrickgage/1620195364/

Many readers will be aware that over the past few years, phishing has been a major problem for us. Not the sort of phishing in which the attacker gets hold of online banking details – in general that’s a matter for users, their banks, and law enforcement. What we care about are phishing attacks harvesting credentials for University systems, in particular email accounts.

In general the attackers are seeking accounts from which to send out spam. Lots of spam. Universities tend to have well-connected email systems which are generally considered reputable by other email providers. In the absence of effective monitoring, it is easy for over a million messages to be sent out before someone happens to notice. Once a compromised account is closed off, the attackers simply move on to another. Every so often they need to send out a batch of phishing emails to snare some more accounts from which to advertise their little blue pills or whatever.

For a successful phishing attack, the attackers need some means of capturing login credentials. Not so long ago, they’d simply ask for users to reply to the phishing email, including their details. These days that approach is less common, and most attacks bear a link to a web form. The forms are hosted anywhere they can find – perhaps a compromised webserver, perhaps one of the world’s many free webform hosting providers.

Now, Mom & Pop’s Free Form Farm is unlikely to be used by many legitimate users, and if we were to block access to it, it’s unlikely to be a big deal for anyone but the phishers. But as well as all the small providers, there’s a big one: Google Docs.

Google Docs and phishing

One of many recent phishing pages on Google Docs

Google Docs has many advantages. One significant one is that millions of people use it for perfectly law-abiding purposes. Another is that traffic is encrypted. Many educational establishments will have some capability for filtering traffic to malicious URLs as it flows through their network. That’s easy with unencrypted traffic. If the site uses SSL, then you have to do some kind of SSL interception. Straightforward on a corporate network full of tightly-managed systems. Much harder on a network full of student machines, visitor laptops and the like, and in our opinion, something to be avoided.

So how can you stop your users reaching the phishing forms? Assuming that the phishing emails get past all your anti-spam and anti-malware defences, you essentially need to ask Google nicely if they could take the form down. That part is simple enough – Google’s own security team have advised us that the best way is to use the “Report abuse” link at the bottom of each page.

Unfortunately, you then need to wait for them to take action. Of late that seems typically to take a day or two; in the past it’s been much longer, sometimes on a scale of weeks. Most users are likely to visit the phishing form when they first see the email. After all it generally requires “urgent” action to avoid their account being shut down. So the responses will be within a few hours of the mails being sent, or perhaps the next working day. If the form is still up, they lose. As do you – within the next few days, you’re likely to find another spam run being dispatched from your email system.

Recent attacks

Over the past few weeks there has been a marked increase in phishing activity against our users. Now, we may be home to some of the brightest minds in the nation. Unfortunately, their expertise in their chosen academic field does not necessarily make them an expert in dealing with such mundane matters as emails purporting to be from their IT department. Some users simply see that there’s some problem, some action is required, carry it out, and go back to considering important matters such as the mass of the Higgs Boson, or the importance of the March Hare to the Aztecs. Granted, many, if not most of our users do spot the scams, and do nothing (or better, warn us about it). But as with most spam, it only takes a small proportion to respond for the attacks to be worthwhile. And we have tens of thousands of users. Despite all attempts at user education, some will inevitably respond. We see a good mix: first-year “digital native” undergraduates, ancillary staff, emeritus professors.

The recent attacks have often seen us dealing with several account compromises within a short length of time. We are keen to see that compromises and associated spam runs do not adversely impact the University’s “reputation” with external email services such as Hotmail and GMail. We have had problems in the past in which Hotmail have rejected all mail from us over a period of many days, owing to too high a proportion of the mail from us being marked as spam. Such incidents can cause major disruption to legitimate University business, especially given the number of sites which make use of Live@edu and other outsourced email solutions. Spam is not the only threat to University business from an account compromise, of course – something the University of East Anglia know all too well.

Blocking Google Docs

Almost all the recent attacks have used Google Docs URLs, and in some cases the phishing emails have been sent from an already-compromised University account to large numbers of other Oxford users. Seeing multiple such incidents the other afternoon tipped things over the edge. We considered these to be exceptional circumstances and felt that the impact on legitimate University business by temporarily suspending access to Google Docs was outweighed by the risks to University business by not taking such action. While this wouldn’t be effective for users on other networks, in the middle of the working day a substantial proportion of users would be on our network and actively reading email. A temporary block would get users’ attention and, we hoped, serve to moderate the “chain reaction”.

It is fair to say that the impact on legitimate business was greater than anticipated, in part owing to the tight integration of Google Docs into other Google services. This was taken into account along with changes to the threats and balance of risks over the course of the afternoon, and after around two and a half hours, the restrictions on access to Google Docs were removed.

What next?

We appreciate and apologise for the disruption this caused for our users. Nevertheless, we must always think in terms of the overall risk to the University as a whole, and we certainly cannot rule out taking such action again in future, although our thresholds for doing so may be somewhat higher. We are meanwhile investigating several possible technical measures for reducing the risks to the University with less impact on legitimate network usage, and will be reviewing our emergency communications procedures.

We will also be pressing Google to be far more responsive, if not proactive, regarding abuse of their services for criminal activities. Google’s persistent failure to put a halt to criminal abuse of their systems in a timely manner is having severe consequences for us, and for many other institutions. If OxCERT are alerted to criminal abuse of a University website, we would certainly aim to have it taken down within two working hours, if not substantially quicker. Even outside official hours there is a good chance of action being taken. We have to ask why Google, with the far greater resources available to them, cannot respond better. Indeed much, if not all, of the process could be entirely automated – part of Google’s corporate culture is that their programmers and sysadmins should automate common tasks so that they can devote their efforts to more interesting matters. Google may not themselves be being evil, but their inaction is making it easier for others to conduct evil activities using Google-provided services.

Posted in Email, General Security, Google | 90 Comments

Information Security Policy – So What?

In July 2012 the University Council officially approved a new information security policy but what does this mean in practical terms?  Well, for a start, it means that each University department must formulate their own information security policy and it is the responsibility of the head of department to make sure this happens.  So, what is an information security policy anyway and why do departments need their own?

Different goals

It’s not just to make work for people, that is for sure, and the infosec team are here to help with any questions and (where possible) practical advice. Of course resources are an issue, which is why we’re putting our guidance and helpful tools into an information security toolkit. This is currently being re-drafted and converted to fit the new IT Services web pages, but in the meantime you can find it via the OUCS site. There is, of course, a simple reason why each department should have its own policy: one size does not fit all. It would be great to be able to write one “policy” that tells every member of the University what they need to do to be “secure”, but the fact is that that just doesn’t work – especially in an environment as devolved and diverse as the University of Oxford. Each department (or equivalent) has its own management structure, its own operational goals and objectives, and hence its own security requirements. Information security isn’t just about locking down data and IT systems so that bad guys can’t get in. It is about assessing assets, requirements and risk, and then making informed decisions. Requirements for confidentiality, integrity and availability will vary hugely depending on what you are doing. Confidentiality, for example, is likely to be a much higher priority for those carrying out research using patient-identifiable data than for, say, the University’s main web page, where availability is probably the most important thing.

This brings us back to the question of what an information security policy actually is. Well, the terminology can be argued about (there are specific definitions of “rule”, “regulation” and “policy” within the University) but it doesn’t matter what you call it. In this context an information security policy simply means something that is approved at the highest level of seniority within the department, that states your goals for information security, your commitment to achieving those goals, and the framework within which you will manage information security. To that end we’ve provided a very simple template to work from, which can be found in the toolkit. This may well be accompanied by other policies and procedures, but it is the bare minimum that is required.

What is the point of that, you may ask? Well, for any information security programme to be effective and successful you need to know what it is you are trying to achieve and have a plan for how you are going to achieve it. Visible support from senior management is essential, as is officially identifying roles and responsibilities. Ultimately information security is about measuring risk and making informed decisions, so it is important to know who is responsible for making those decisions. And that is why having a local policy is important. It provides the framework from which everything else will happen and defines where the buck stops! What it doesn’t do is tell you how those objectives will be met, and although the further down the hierarchy you go the closer to methodology you get, I prefer to keep policy and procedure separate. That might not fit all departments, and examples of staff “procedures” will be provided via the toolkit shortly. However, I find that assigning responsibility for writing and authorising procedures to meet the policy means that you can have a concise and robust policy that will not need to be frequently changed, and procedures that can be altered, added to or scrapped as the threat landscape and technologies change.

Speed limit policy

One of the main arguments I hear against this approach is that you can’t have a policy in place that you can’t enforce. I can see why people say this and understand what they mean, but I don’t think it is necessarily true. Certainly policies may be less effective if you can’t actively enforce them but, to me, a security policy is simply a security control like any other. It is just one layer of the onion. Regardless of where you are in terms of being able to actively monitor and enforce policy, without knowing what you are trying to achieve how will you ever get there? A policy that is communicated to all makes people aware of their responsibilities, and also that there may be consequences to their actions. Take speed limits on motorways, for example. Everyone knows the limit is 70mph, but it is impossible for the police to enforce this for every road user. Does that mean we shouldn’t have speed limits? No. Instead, some road users stick to the limit anyway (or at least to a speed they don’t think they’ll be punished for), and the police implement other controls to encourage people to comply, varying from speed traps to cardboard cutouts of police cars. Everyone who uses the roads, however, is aware of the legal limit, is aware that there is a consequence to breaking it, and knows that the consequence may be worse if the impact of their speeding is worse (e.g. they have an accident).

Security policies are no different.  Start with the policy which says where you want to be and build your other controls and practices around that to help give you assurance that you are doing the right things.  Some things will be more of a priority and will require more assurance than others.  However just because a “keep out” sign and a lock might be better doesn’t mean that a “keep out” sign on its own is pointless.

Posted in Information Security | 1 Comment

BYOD: Major Risk or Latest Bandwagon?

Go to any security event these days and you will find any number of information security managers, vendors or company directors all nodding in agreement that mobile devices and bring your own device (BYOD) are among the current big security threats. But at all the events I’ve been to recently, not one person has been able to quantify what the actual risk is. Despite not really understanding what the problem is, however, everyone is keen to fix it, and you can bet your life there is someone willing to sell you a product that will “secure” your mobile devices.

But here at the University of Oxford BYOD isn’t exactly a new thing! How many people out there using the Oxford network first connected their own device back in the early nineties? Today we have people connecting not just one device but many of their own devices (and University-owned mobile devices to boot). Think of any product you like and I’m sure you’ll find it on our network, and that it will have been there since it existed. Despite this, we are still here.

So what really is the risk? There is certainly a concern over data protection and potential breaches involving confidential and/or personal information, though from the conversations I’ve had, most people are worried about the use of email. This makes me ask: a) isn’t this an email issue rather than a mobile device issue? and b) why are you using unsecured email for dealing with confidential data in the first place? How many people who are worried about email on mobile devices would happily log into untrusted, public kiosk machines wherever they are in the world? Of course there are people out there who are perhaps handling larger confidential datasets on their tablets or smartphones, but the fact that someone uses an iPad seems to worry people much more than someone using a laptop, regardless of how it is used.

I’m not saying these things are exactly the same, or that there aren’t different vulnerabilities with different devices, but does the fact that someone is using a “mobile device” inherently increase the risk to the point where we need a one-size-fits-all approach to securing such devices? There is an argument that people will take more care of their own expensive devices, which contain primarily their own information – perhaps the way forward is a policy saying that confidential data can be used on mobile devices so long as you keep naked photos of yourself on the same device!

One threat that often seems to get overlooked (although not within the technical security community, I might add) is that of malware. This seems to be forgotten amongst the worry surrounding perceived data protection issues, but if you wanted to target large volumes of information on an organisation’s mobile devices, would you go out and start stealing individual smartphones, or would you try to infect hundreds of them and collect the data from your living room? Come to think of it, you’d probably target the data, not the device, so things like email (again) are a prime target. On the other hand, how many casual thefts of tablets and smartphones result in data actually being exposed, compared with devices being stolen to be wiped and sold?

Of course we need to protect personal and confidential data wherever it may be, and with the powers of the ICO to issue fines, the impact of not doing so can be considerable. Despite that, I hear many comments suggesting that damage to reputation is the key concern, and I agree that this is something to take into account. However, fewer people seem to consider the reputational impact of constantly telling our users what they can’t do. We live in a world where the expectations of our users are soaring. People want access 24 hours a day, 7 days a week, and they want it from whichever device they are using, wherever they are in the world. Surely meeting these demands and expectations should be one of (if not the) major influences on our decision making and policy? Don’t we need to be thinking of ways to provide secure access to information regardless of device, location and time? After all, if we concentrate on securing particular devices, our measures will surely be out of date in a few years’ time.

Yes, there are different and sometimes increased risks with mobile devices, but there are also many benefits. And of course there are different environments (we don’t operate in a homogeneous, managed or locked-down one, and the risks to us are different from those to, say, banks). Of course we should secure information, and encryption certainly has a part to play in that. But we also need to concentrate on providing access to information and making it accessible in a way that allows people to work in the way that a modern, mobile society demands.

So the next time someone tells you that you need to be securing mobile devices, ask them if they know why.

Posted in Information Security | Comments Off

2012 FIRST Conference

Two members of OxCERT attended the annual conference of FIRST (Forum of Incident Response and Security Teams). This year it was held in the Hilton Hotel in St. Julian’s, Malta, situated in the heart of the Portomaso waterfront, just fifteen minutes from the UNESCO World Heritage city of Valletta.

On the day prior to the conference start, OxCERT attended a meeting of educational and research networks, the Academic CSIRT Meeting, hosted by TERENA. The aim of this second Academic CSIRT meeting (the first having been held during the 2011 conference) was to discuss issues affecting CSIRTs whose constituencies include National Research and Education Networks (NRENs), universities, research institutions and other related organisations. Andrea Kropacova from the Czech Republic’s NREN gave a few inspiring presentations, including ‘Academic Security Policies’ and ‘Trends in security incidents’, which stimulated constructive and ongoing discussion among the audience. Of the five sources of security incident information she frequently mentioned in her talk, two (Shadowserver and Team Cymru) are already widely used by our team; the others (UCEprotect, DShield and NASK Polska) give us a chance to learn about new sources. A team from Brazil, part of a network connecting 15 countries of Latin America, also attracted quite some interest: they work on malicious activity monitoring, incident handling and providing assistance to CSIRTs.

On the first full day of the conference, we enjoyed two plenary talks, ‘IT Security @ European Commission’ and ‘DigiNotar Crisis’, given by Francisco García Morán and Aart Jochem respectively. Discussion over the coffee break revealed the significance of actual incidents as critical support for policy making, especially for the EC, but also for any organisation working to put information security policies in place. As is traditional, the conference split into separate tracks in the afternoons. One talk covered how to examine the network activity of PoisonIvy and detect its command-and-control servers, which should prove very useful for our own monitoring.

The second session of Tuesday morning was given by Jean-Christophe Le Toquin from Microsoft. His main message was that security teams should go national and be cross-disciplinary – to quote: “find a hammer and break your own silo!”. In the ‘Technical Foundations’ track in the afternoon, Christopher Smithee from Lancope, Inc. talked about detecting advanced persistent threats (APTs) using netflow, which requires different approaches than traditional means – mainly monitoring the interactions of your own internal systems to spot compromises.
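One simple interpretation of that advice, sketched below with an invented flow format and invented address ranges: baseline each internal host’s “fan-out” to other internal hosts and alert on sudden jumps, since lateral movement tends to produce exactly that pattern.

```python
# Sketch: flag internal hosts whose "fan-out" (count of distinct internal
# peers) jumps well above their own baseline -- one simple way flow data
# exposes lateral movement. Flow records and ranges are invented.
from collections import defaultdict
from ipaddress import ip_address, ip_network

INTERNAL = ip_network("10.0.0.0/8")

def internal_fanout(flows):
    peers = defaultdict(set)
    for src, dst in flows:
        if ip_address(src) in INTERNAL and ip_address(dst) in INTERNAL:
            peers[src].add(dst)
    return {host: len(p) for host, p in peers.items()}

def alerts(today, baseline, factor=5, floor=20):
    return [h for h, n in today.items()
            if n >= floor and n > factor * baseline.get(h, 1)]

today = internal_fanout([("10.0.1.5", "10.0.2.%d" % i) for i in range(60)])
print(alerts(today, {"10.0.1.5": 4}))  # -> ['10.0.1.5']
```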

On the Wednesday evening we moved by coach to the magnificent Mdina (the old capital of Malta) for the conference banquet. After a drinks reception at the top of this 2,700-year-old town, we enjoyed a lovely dinner with live music. A long chat with our friends from Warwick and Janet certainly brightened up the night.

On Thursday we enjoyed a talk by Chad Greene, a representative from Facebook, who convinced the audience that Facebook had been compromised – though the “compromise” was in fact only a test of Facebook’s incident handling procedures. Something on a smaller scale would certainly be a useful exercise to test a CERT’s readiness.

During a networking break with CERT Polska we learned of many useful detection techniques they employ, particularly on peer-to-peer ZeuS malware, and found that they also host an IT security conference.

A hardware vendor showed off a sophisticated Portable Malware Lab, which uses their proprietary virtualisation to run the malware lab operating system alongside a fully functional Windows or Ubuntu OS, each having a dedicated processor for its own use. A very useful product, but with a $15K price tag it is likely out of reach for many CERT teams.

The final day of the conference began with Lance Spitzner from the SANS Institute exploring ‘The Past, Present and Future of Surviving the World of Security’. The focus of the talk was on educating users and communicating effectively, and it encouraged phishing assessments of an organisation’s own staff as an educational tool.

In all, it was a great conference which will keep inspiring and motivating greater advancement in IT security throughout the world. We look forward to attending the 2013 FIRST conference.

Posted in FIRST Conference | Comments Off

OxCERT probes, and firewalling

Note for external users: This post relates to a service that OxCERT offers to units within the University in the form of occasional port scans for ports related to particular known threats that we are tracking. This post looks at the various ways that people can use these scans within their units and some of the pitfalls they may experience.

In recent months, we have become aware that there is a great deal of variety in the ways that units within the University are using our occasional port scans. This variety can be very healthy, and allows various units to focus on particular areas that they consider to be a threat. It is worth observing, though, that the various approaches lead to very different results from the scans. It is therefore important that the units in question are fully aware of the impact of their choices when deciding how to approach them, and that all their staff are aware of those decisions. All the approaches we’ve seen have different merits, so we see it as worthwhile to look at what these are and what they can tell you. We have also seen a few common pitfalls which we may be able to help people avoid. These hints may also be relevant to anybody considering purchasing any form of network scanning service.

One common approach is to treat our scanning hosts as identical to any other systems located within the University network. This is useful for the following purposes:

  • validation of firewall rules
  • identification of potentially vulnerable systems accessible outside of your subnet

However this doesn’t help in:

  • identifying vulnerable hosts behind your firewall
  • validating any additional firewall rules you may have for hosts outside of the University network

Another thing to think about here is whether your firewall has some form of rate limiting on port scans. Such functionality can be useful in reducing the log noise from brute-force attempts; however, it is liable to leave a false sense of security when applied to network scans, as large parts of the subnet may be falsely listed as having nothing listening.
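One way a unit can check this for itself is to compare a fast scan of its own range against a deliberately slow one; the sketch below shows the slow half (the port, the pacing and the TEST-NET example range are arbitrary choices). Only point such a thing at networks you are responsible for.

```python
# Sketch: a deliberately slow TCP "connect" sweep of your own subnet, for
# comparison against a fast scan -- if the slow pass finds open ports the
# fast pass missed, your firewall's scan rate-limiting is masking results.
import socket
import time

def probe(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def slow_sweep(hosts, port, delay=5.0):
    open_hosts = []
    for host in hosts:
        if probe(host, port):
            open_hosts.append(host)
        time.sleep(delay)  # stay under typical rate-limit thresholds
    return open_hosts

hosts = ["192.0.2.%d" % i for i in range(1, 255)]  # TEST-NET example range
print(slow_sweep(hosts, 445))
```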

Another approach is to treat the scanning range as though it is entirely outside of the University network. This will obviously help in similar ways to the previous case; however, it may cause you to miss certain hosts if they are presenting vulnerable services to the Oxford network but not to the outside world. I would suggest that unless your network structure is particularly unusual (for instance, a network which presents far more services to the outside world than to the Oxford network) this approach is unlikely to be beneficial.

The final approach (which we have seen from a few units) is to treat our probe systems as though they are internal to their network. This has the following advantages for a unit:

  • it may pick up hosts that would otherwise not be detected because they are only accessible internally and are vulnerable to a particular threat; this is particularly relevant if the unit has a flat (or nearly flat) internal network structure (i.e. there are few internal-to-internal firewall rules).

It also has a few disadvantages:

  • it doesn’t serve to double-check whether firewall rules are being correctly applied
  • it may be technically hard and of limited benefit to do this for a network where different classes of system are physically segregated
  • it may lead to unexpected panic when results are presented, if other IT staff within the unit are not aware that the firewall is exempted for these hosts
  • we may get in touch about hosts we would not otherwise need to if they are protected by your firewall

In general we do not mind what approach is being taken, and consider that the key requirement is to understand the impact of the decisions you are taking, and what that will mean for the results of the scans we perform.

Additionally, we would urge all firewall operators to check what rate limiting, if any, they have in place, as this is a prime candidate for producing false-negative results when we scan your networks – remember, it is not unusual for attackers to use non-systematic scans (from many IP addresses), with the result that they may get around rate limiting even if we cannot. Whilst it might be beneficial to use a whole /24 for our scans, thereby avoiding this issue, it is very hard to justify the use of already-scarce IPv4 address space merely for this purpose.

Posted in Uncategorized | Comments Off

Musings on Mac Malware

Apple Store, Fifth Avenue

Apple: under attack day and night


Over the past couple of weeks, OxCERT have been somewhat overwhelmed by Mac malware. This isn’t quite the first time we’ve dealt with problems on Macs – we’ve seen several compromised over the years through weak or exposed ssh credentials, and others infected as a result of installing pirated software. But with Flashback, the game has changed forever. We are seeing huge numbers of attacks of the sort that Windows users have had to contend with for years. Apple users, and indeed Apple themselves, just have not been ready. We are dealing with what is probably the biggest outbreak since Blaster struck the Windows world all the way back in the summer of 2003. That time OxCERT dealt with around 1000 incidents; we have seen several hundred Flashback incidents and they keep on coming.

What is Flashback?

Flashback is not in fact that new; it has been around in various forms since September 2011. Like much malware, multiple variants exist, as the attacks evolve to exploit new vulnerabilities, avoid detection and adapt to new purposes. Early versions required user interaction in order to execute, but in recent weeks the malware has been exploiting a vulnerability in Java, allowing for “drive-by” exploits where all a user has to do is to visit a webpage hosting malicious content (perhaps via a third-party advertisement).

Once on the system, Flashback gives the attackers the ability to do pretty much whatever they like with it, at least until someone stops them; it will depend on the particular variant and what the command-and-control systems tell it. In the Windows world, a common approach has been to capture users’ keystrokes and other information from the system in order to gain access to their online banking. Others may be interested in what other resources they can access, whether on the compromised system itself, or via resources to which the user has access. Some University users have access to some very sensitive data.

The Java vulnerability is one that Oracle fixed on 14 February. Anyone using Oracle’s auto-update mechanisms would have received this update shortly after, but under OS X, Java is distributed by Apple and most users have to wait for Apple to release the update. In this case, Apple did not release it until 3 April, some seven weeks later, by which time the vulnerability was being widely exploited. The reason behind this delay is unclear, but it is not a one-off: Apple have consistently lagged weeks behind Oracle on Java updates. For all we know there may be good operational reasons as to why it takes so long for Apple to release the updates, but as is usual they have been completely silent on the matter.

Java is not the only application being exploited – there are reports of others being targeted, for instance via emails containing malicious Word attachments. All too familiar in the Windows world, of course.

“But Macs don’t get viruses!”

Sadly far too many users still appear to be under the misapprehension that “Macs don’t get viruses”, in spite of decades of evidence to the contrary. Indeed, at the time of writing, Apple themselves still state that a Mac “doesn’t get PC viruses”. Technically true, perhaps, but very misleading: PCs get PC viruses, and Macs get Mac viruses, which may be extremely similar to those common on PCs, in spite of the “built-in defences”. (Note also the claim that “Apple responds quickly by providing software updates and security enhancements” – as we’ve seen, this depends very much on your definition of “quickly”.)

There was perhaps a time when the threat of viruses was sufficiently low that Mac users didn’t have to worry too much about having antivirus software installed, but that time is long gone. Apple’s “built-in defences” weren’t saving users from Flashback infections. It’s true that recent versions of OS X have a built-in anti-malware capability, but it is extremely limited and no substitute for a proper third-party antivirus system. Sophos is widely used and supported in the University, but no doubt most of the major players have equally good solutions.

What should Mac users do to protect themselves?

Really, it’s a case of taking the same precautions as are required on a Windows system:

  • Install antivirus, and ensure it updates frequently, preferably several times a day.
  • Keep the operating system up-to-date (but see below) – ensure Software Update checks on a daily basis, and that security-related updates are applied promptly.
  • Keep third-party applications up-to-date, especially anything that may handle untrusted data from the Internet. Browsers (eg Firefox, Chrome, Opera), mail clients (eg Thunderbird, Outlook), Flash, Java, Acrobat, Office.
  • Be wary. Don’t open email attachments you don’t expect, especially if from unknown senders. Only download software from trusted sources.
  • Enable the built-in firewall.
  • Disable or remove software you don’t use. Under OS X 10.7 (Lion), Apple now do this automatically with Java.
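For Flashback specifically, the indicator checks published in vendor write-ups (e.g. F-Secure’s removal instructions) can be scripted; this sketch merely wraps those two widely-published `defaults read` checks, detects only the known variants, and is no substitute for proper antivirus.

```python
# Sketch (macOS only): run the two "defaults read" checks from published
# Flashback write-ups. If both report the key does not exist (non-zero
# exit), those variants are absent; any output warrants investigation.
import os
import subprocess

CHECKS = [
    ["defaults", "read", "/Applications/Safari.app/Contents/Info",
     "LSEnvironment"],
    ["defaults", "read", os.path.expanduser("~/.MacOSX/environment"),
     "DYLD_INSERT_LIBRARIES"],
]

for cmd in CHECKS:
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print("OK (key not set):", cmd[2])
    else:
        print("SUSPICIOUS -- investigate:", cmd[2], result.stdout.strip())
```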

There is however a nasty catch with operating system updates, of which many users will be unaware: Apple security support lifetimes are much shorter than in the Windows world. This is an issue which we discuss further in a second post.

Ultimately though, the game has changed for Mac users. They can no longer sit smugly thinking that few people are going to bother attacking them – Macs are being attacked on a very significant scale, and complacency is asking for trouble. Mac malware has gone mainstream, and will likely remain so.

Posted in Apple, General Security | 10 Comments