Open Heart(bleed) Surgery

If you haven’t heard by now of the so-called “Heartbleed” Internet security bug that last week sent the Internet security community into something of a frenzy, then you probably don’t need to worry and almost certainly won’t be reading this!  If, however, you use the Internet and watch the news, you may want to read on.


“Heartbleed” is the name given to a recently discovered flaw in a specific implementation of one of the world’s most widely used Internet security protocols, SSL/TLS.  The software in question, called OpenSSL, is used to protect sensitive data (such as usernames, passwords and payment details) sent backwards and forwards between your computer and “secure” websites.  It is hard to know precisely how many websites are affected by this vulnerability, but it is estimated that about two thirds of the world’s websites use OpenSSL and that around 17% of sites are vulnerable to this bug.  That is about half a million websites and, since they may have been vulnerable ever since the bug was introduced into the software (as far back as 2011), it is rightly being treated as a pretty big deal.  As renowned security expert Bruce Schneier put it: “On the scale of 1 to 10, this is an 11.”

Unsurprisingly, Heartbleed has attracted a lot of attention, but it has also bred confusion, with conflicting advice, for example, on whether and when to change passwords.  Worrying and panicking won’t do anyone any good, however, so what are the risks, what should you do, and what is the University of Oxford doing in response?

What are the Risks?

Well, the good news is that once the problem was noticed the response was pretty effective, with many major service providers having patched the vulnerability already.  The trouble is that implementations of OpenSSL may have been vulnerable for over two years.  How much of a problem this actually is nobody really knows at the moment, but the risk is that cyber-criminals may have been aware of the vulnerability before the good guys were.  So far, though, there have been no reports of widespread exploitation (either before or after the bug was announced) and, although an attack against a website you use could have disclosed sensitive information (such as passwords or payment details), it would be more difficult for attackers to target specific information.  In other words, even if vulnerable sites you use were exploited, it is far from certain that any of your details will have been exposed.  I’ve no intention of explaining how the exploit works, but if you want a decent, non-technical explanation as to why this is the case then look no further than xkcd.

So what should I do?

Well, as mentioned, first of all don’t panic.  Changing passwords is a good idea (and we’ll come to that in a bit) but apart from that there isn’t much you can do about what has already happened.  What you can do is take this opportunity to improve your online security practices.  Remember that this vulnerability is not a weakness in the underlying protocols that secure our Internet traffic, but a vulnerability in software that implements them.  In other words, human error (you can forget conspiracy theories in this case.  No, really!).  This is, perhaps, a timely reminder that we shouldn’t take security and privacy online for granted and that we can all play a part in protecting ourselves from the risks.  Good security happens in layers!  If you don’t use good, unique passwords for different sites and don’t use 2-factor authentication where it is available, then now might be a good time to start.  Many are advising users to adopt a password manager such as LastPass or KeePass as they change their passwords; the sketch below shows the kind of password such a tool generates for you.  Similarly, now is the time to start following good standard advice like regularly checking your bank statements.
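
Purely as an illustration (a minimal Python sketch, not tied to any particular product), this is the sort of strong, per-site password a password manager creates and remembers so that you don’t have to:

```python
# Illustrative only: generate a strong random password of the kind a
# password manager would create and remember for each site.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits and symbols."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a different password on every run
```

The point is not the code but the practice: a long, random, unique password per site means one compromised service cannot unlock the others.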

Keep Calm and Use the Toolkit

You should also be aware that this vulnerability is very likely to lead to an increase in phishing scams.  Since pretty much everyone who uses the Internet is being asked to change their passwords, the bad guys are likely to want a piece of this action and use the opportunity to send round fake emails asking for passwords and/or linking to fake sites.  Be aware of this threat and, if you are in any doubt as to whether an email (or phone call for that matter!) is legitimate, then ask someone technical for help (perhaps your local IT support staff or the IT Services help desk).

If you want advice on good practice when it comes to online security (including how to spot phishing emails) then why not check out our information security website or, better still, book on one of our lunchtime courses which cover what you need to know and do.

So should I change my passwords?

Yes, it is probably a good idea, but before you change your password for any individual site you might first want to check:

  1. Was the site affected;
  2. Has the organisation patched its systems;
  3. Have they changed their SSL certificates; and
  4. Have they told you it has been fixed?

It can sometimes be hard to get clear information on this, but one site has come up with a decent list of well-known organisations and summarised their position.  If you want to check the third point for yourself, the sketch below shows one way to do it.
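
This is a rough, unofficial illustration in Python (the hostname is just an example you substitute yourself): it fetches a site’s certificate and checks whether it was issued after Heartbleed was publicly disclosed on 7 April 2014.

```python
# Illustrative check: has this site's TLS certificate been issued since
# the Heartbleed disclosure? (A reissued certificate is a good sign, but
# not proof on its own that the site has been fully fixed.)
import socket
import ssl

DISCLOSURE = ssl.cert_time_to_seconds("Apr  7 00:00:00 2014 GMT")

def cert_reissued_since_disclosure(host: str, port: int = 443) -> bool:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notBefore' is a string such as 'Apr  9 00:00:00 2014 GMT'.
    return ssl.cert_time_to_seconds(cert["notBefore"]) > DISCLOSURE

print(cert_reissued_since_disclosure("example.com"))
```

A certificate issued after the disclosure suggests the operator has rotated its keys; one issued before it, on a site known to have been vulnerable, is a reason to hold off changing your password there.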

What about my University passwords and what is the University doing about this problem?

Webauth

Well, for the last week we’ve been assessing the scale of the problem within Oxford and, where possible, applying fixes.  The response from both central IT Services and amongst the many IT support staff across the departments and colleges has been swift and impressive.  The University takes your security and privacy online very seriously.  The good news is that most of the central services that deal with passwords (that we’ve assessed so far, anyway) weren’t vulnerable to this attack.  This includes Nexus (email and calendaring), Webauth (used for Single Sign On) and the VPN.  However, Oxford is a very complex organisation when it comes to IT, so let’s not break out the champagne and look smug just yet.  Because some of the backend systems that interact with our main services were running vulnerable versions of OpenSSL, it is possible that some credentials may have been exposed.  I ought to stress at this point that we believe the actual risk that any passwords have been exposed on a large scale to be very low.  However, wherever we perceive that this has been a possibility, we are making users change their passwords.  I’ve tried to summarise the position on a “per credential-type” basis below:

Single Sign On (SSO)/Oxford passwords

These are the passwords you use for Nexus and for SSO-protected resources.  Neither Webauth, Nexus nor the Shibboleth service is affected by this vulnerability, nor is the production SMTP service that is used by some for sending mail.  However, a test SMTP environment was vulnerable and, although this isn’t used directly to handle any live credentials, there is a theoretical attack that could have affected those who use the SMTP service.  There is no evidence this has happened and we think the risk is extremely low.  Nonetheless, if you fall into this category we will be expiring your password and contacting you to ask you to change it as a precaution.

For everyone else: you should change your password if you are at all concerned, or if you use it anywhere else.

Remote Access Passwords

These are the passwords used (mostly) for the VPN service which, again, was not directly vulnerable.  However, one of the backend systems that deals with credentials was vulnerable for a limited period and, if you changed or set a remote access password within that period (approximately the last year), then a successful attack is also theoretically possible.  Again, we feel the risk is very low, but this does affect a greater number of users than for SSO passwords, so we will also be expiring those potentially affected passwords and contacting users.

For everyone else – change your password if you are at all concerned and/or if you use the same password elsewhere.

HFS passwords

HFS is the backup service offered by IT Services for staff and postgraduate students. Again the primary service is unaffected by the vulnerability but, similarly to remote access passwords, it is possible passwords could have been exposed via a supporting service.  Again there is little risk that this could be used in any meaningful attack and, as it happens, the HFS service already automatically renegotiates passwords with the client software and so we are considering the merits of making sure this happens sooner than usual.

In other words there is nothing you need to do – affected passwords will be changed automatically and you won’t even notice.

What else?

Of course this only covers central services and the University operates in a very devolved way.  Unfortunately we can’t answer questions about all services offered by departments and colleges so if you want to know more you should ask your department and/or college.

What about other sensitive data?

Indeed, this vulnerability does not just affect passwords, and the University runs many systems that handle personal data, financial data and other confidential information.  We are continuing to investigate all central services to see whether or not they could have been vulnerable to this bug, and we’ll report further when we have all the information we need.  In the meantime there is no evidence that any of your sensitive or personal information has been placed at risk.

To Summarise

This is clearly a very serious security bug and it has had a significant and far-reaching effect on service providers all over the Internet.  However, the bug has a fix which has already been widely deployed and, whilst we don’t yet know the overall impact, the worst-case scenario doesn’t seem to be the most likely outcome.  We should all take this as an opportunity to improve our online security practices and to take responsibility for our own security and privacy as far as possible.  Within the University we are taking the vulnerability very seriously: we are investigating the potential impact as thoroughly as possible and, where we see any risk to end-users, taking appropriate action.  We will continue to do so, along with all of the other activities we carry out to protect your security and privacy online.


TRANSITS I Workshop, Prague

At the end of November I attended the TERENA TRANSITS I workshop in Prague. TRANSITS I is aimed at those who have recently joined a CERT or who have been tasked with creating a new CERT. Attendees at the workshop came from a variety of organisations across Europe and beyond. Members of European CERT/CSIRT teams have developed the course and kindly volunteer their time to deliver the content; TRANSITS is also supported by ENISA (the European Network and Information Security Agency). Overall I found this to be a useful and informative few days. The TRANSITS course is a valuable resource for anyone joining or setting up a CERT team for the first time, with modules on the operational, organisational, technical and legal issues faced by a CERT team.

Operational

Image of Prague Castle

The operational module covered the incident handling process of a CERT. Incident handling is the bread and butter of a CERT’s working day and it was interesting to hear how other CERTs approach this. Also discussed were various tools that can be used to collate information on threats and guide the process of turning a vulnerability alert into an advisory which can be published, something that we do on almost a daily basis. We are hoping to implement one of these tools, Taranis, in the future.

Organisational

This module covers where a CERT sits within the structure of its organisation. It is important for any team to have a firm grasp of its mission, its raison d’être, as this informs all further decisions. OxCERT’s mission is defined as:

“To protect the integrity of the University backbone network and to keep services running”

This also defines our constituency; those that connect to the backbone network of the University of Oxford. Leading on from this we also need the tools and the authority to carry out our mission. One example of such a tool is the ability to block from the network hosts that may threaten the integrity or availability of services for other University users.

Technical

The technical module contains an overview of the various threats a CERT can expect to deal with.  Among those that we unfortunately see on a day-to-day basis are keylogging malware, SQL injection and botnets, to name but a few. The module also gives an overview of various tools and resources that can be used to deal with these threats.

Prague Castle

Legal

Laws and guidance are often updated so it is essential to keep up to date and ensure you are working on the correct side of the law, especially as our work often leads us into situations where it would be easy to overstep the mark. It was also particularly interesting to compare the different legal requirements affecting teams across Europe. It is helpful to bear this in mind particularly when travelling, as an activity that is legal in one country may not be in another.

This module also discusses the issue of disclosure, i.e. what information to disclose, to whom, and when? Inevitably this will be a mixture of policy and per-incident pragmatism, but it is a topic that is worth consideration by all CERT teams.

Apart from the taught materials, the course also gave us an opportunity to meet members of other CERTs, to network and to exchange PGP keys (to sign later). I found the course presents a good overview of CERT activities and provides a suitable starting point for a recent or would-be CERT member.


Farewell to XP (part 2)

In the first part of this post, I looked at the background to the end of support for Windows XP in April 2014. In this (somewhat delayed, apologies) second part I will consider what those in the University will need to do if they are still using Windows XP, although hopefully much of the content will be equally useful for those elsewhere who are still maintaining XP systems. I will assume that readers are not in a position to consider putting off the problem through Microsoft’s Custom Support programme.

Microsoft are not continuing full support after April


Microsoft aren’t making a full U-turn, sorry.
CC BY-SA 3.0 by http://commons.wikimedia.org/wiki/User:Kingroyos

Since I wrote the first post there has been a slight relaxation in policy by Microsoft: support for Microsoft anti-malware products on Windows XP has been extended until July 2015. It is important to note that this is not the same as Microsoft extending full security support for Windows XP, despite what has been reported in some news articles (at the time of writing this states “Microsoft has decided to continue providing security updates for the ageing Windows XP operating system until 2015”).

Microsoft are simply adding a limited amount of protection and probably little that will not be offered anyway through third-party antivirus products which continue to support Windows XP after April. Note that Microsoft’s own blog post states “Our research shows that the effectiveness of antimalware solutions on out-of-support operating systems is limited.”

Our advice is that this changes nothing: continue pressing ahead with your upgrade and/or mitigation plans, as described in the remainder of this post.

What should people in the University do?

At the time of writing, Windows XP remains in widespread use around the University, although hopefully IT staff should have been aware of the end-of-support date for a year or more and upgrade plans are well under way. It is inevitable, however, that there will be parts of the University where it simply will not be possible to complete the process of migration away from XP in time. Moreover there will be other areas where XP simply must remain in use, as no other realistic option exists. So what should staff in this position be doing? As mentioned in part 1, “nothing” is not an option!

Risk assessment and prioritisation of upgrades

The most important thing to do in this situation is to determine where the greatest risks lie and to prioritise accordingly. For the purposes of this article I shall consider only the risks posed by the systems currently running Windows XP, although these must be assessed in the wider context of the overall risks in each department and college. Concentrating all efforts on upgrading XP systems and neglecting everything else is almost certainly not the path of wisdom; your “business as usual” activities are just that.

What is most likely to be attacked?

The vast majority of incidents handled by OxCERT can be attributed to one of three main causes: vulnerabilities in the user, vulnerabilities in public-facing services, and vulnerabilities in desktop systems and applications. To the disappointment of IT staff everywhere, replacing Windows XP will do little or nothing for the vulnerabilities in users: they will continue to make the same mistakes as before, for instance responding to phishing emails, or executing malicious email attachments. While local services have been targeted in the past (e.g. Blaster, Conficker), Windows XP is not normally considered an appropriate platform for public-facing services, so it is the third category that merits attention.

The major attack vectors against a desktop system are those which are likely to handle untrusted data from the outside world. For the vast majority of users, such data will mostly come through their web browser or their email client. Malicious content may trigger vulnerabilities in the core operating system, in the web browser or email client, in libraries and components used to handle particular types of content (for instance image display), in additional Microsoft software (e.g. Silverlight, Office) or in third-party software (such as Java and Flash). It is worth remembering that Internet Explorer 8 is the last version of Internet Explorer to be supported on XP, limiting the amount that can be done to keep an up-to-date Microsoft web browser on an XP-based machine.

Not all of the installed software will lose support next April. Given the size of the remaining XP userbase, many third parties will likely continue to support their own software on the platform for some time yet, including some Microsoft applications. Note that extended support for Office 2003 will end at the same time as that for Windows XP, so you’ll just have to get used to that ribbon, sorry. Importantly, most anti-virus vendors won’t cut support immediately: for University users, Sophos have committed to supporting XP until at least September 2015. Antivirus won’t come close to protecting against all attacks (it never did) but is nevertheless well worth having.

Clearly you will need to prioritise upgrades for some desktop users over others. Determining which users should be upgraded first will depend on local circumstances. You may go for senior and high-profile staff first on account of the confidential data they are handling. Then again, they may be those complaining loudest if something doesn’t work, so you may choose to start with users who are more accepting of the inevitable teething problems.

Specialist systems


Does it sometimes feel like you’re between a rock and a hard place?

What about the more difficult cases? Inevitably there will be some systems which are particularly problematic, if not impossible, to upgrade. Firstly, Windows XP installations are embedded into many devices, for example vending machines and scanners. Such systems may run a full XP installation, or they may run Windows Embedded. It is important to distinguish the two, not least because of the different support lifecycles: XP Embedded is supported until the end of 2016; indeed NT Embedded 4.0 remains supported until the end of August 2014. How, and indeed whether, updates are delivered and applied is up to the manufacturer of the device, as are other security measures. Updates which are critical for desktop systems may well be irrelevant in the context of a particular embedded system.

If a device is not using Windows Embedded, however, the April deadline applies. If networked, such devices are vulnerable to attack, and indeed we have seen vending machines on unfirewalled public IP addresses which have been infected with malware. Embedded devices are not the only awkward cases. We are aware of scientific and medical equipment costing six- or seven-figure sums which is controlled from XP desktops. Upgrading is frequently not an option; indeed in some cases the original vendor is no longer trading.

Avoid unnecessary risks

With such systems we advise considering their essential usage. What software needs to run on the XP system? What, if any, network connectivity is required? For some systems it may be appropriate to disconnect from the network entirely. Beware, though, that this may simply shift the risks. If switching from file transfer over the network to file transfer via removable media, bear in mind that removable media may harbour infections. A system that is permanently offline will not be running up-to-date antivirus, barring very frequent manual updates. Infections on removable media can be partially mitigated by disabling Autorun and Autoplay (some additional information is available for IT staff within the University); a sketch of the registry change involved follows.
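
For those who prefer to script the change, this is a minimal sketch using Python’s standard winreg module (on the Python 2 installs typical of the XP era the module is called _winreg; Group Policy or a .reg file achieves the same thing, and administrative rights are needed either way):

```python
# Disable Autorun/Autoplay for all drive types by setting the well-known
# NoDriveTypeAutoRun policy value to 0xFF.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
```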

If a system does need to retain network connectivity then consider placing it on a strictly-firewalled network segment. Consider applying a “default-deny” policy in both directions. For instance the only access required may be to a staging area on a local fileserver, in which case the only additional traffic expected might be with the local DNS resolvers and authentication systems.
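
To make that concrete, here is a minimal sketch, with entirely hypothetical addresses, of what such a default-deny policy might look like expressed as Linux iptables rules generated from an allow-list; the same idea translates to whatever firewall actually guards the segment:

```python
# Hypothetical example: an XP instrument segment allowed to reach only a
# staging fileserver (SMB) and the local DNS resolver; everything else
# is dropped in both directions.
ALLOWED_FLOWS = [
    # (source,         destination,    proto, port, comment)
    ("10.20.30.0/28", "10.0.0.10/32", "tcp", 445, "staging fileserver"),
    ("10.20.30.0/28", "10.0.0.53/32", "udp", 53,  "local DNS resolver"),
]

rules = [
    "iptables -P FORWARD DROP  # default-deny",
    "iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT",
]
for src, dst, proto, port, comment in ALLOWED_FLOWS:
    rules.append(
        f"iptables -A FORWARD -s {src} -d {dst} -p {proto} "
        f"--dport {port} -j ACCEPT  # {comment}"
    )
print("\n".join(rules))
```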

Don’t forget the human risks – your precautions are futile if your users simply work around them because they see it as necessary in order to get their work done, for instance by reinstalling the software you removed, or by plugging a network cable back in. Be sure that possible usage cases have been considered as early as possible, and ensure that users understand why actions are needed. You’re not doing it to be awkward but to minimise the risks to their equipment and data, while limiting the inconvenience to them in their work.

It takes all the running you can do, to keep in the same place


Does it seem like you’re getting anywhere?
Image from Flickr by [MilitaryHealth], licensed under CC BY 2.0.

When you’ve finally dealt with that last Windows XP system (and the last Office 2003 installation), congratulations. Sadly, you’re unlikely to get much of a rest, as you’ll soon need to start worrying about the next one. End of support for Windows Server 2003 is in July 2015, Windows Vista in 2017.
Sometimes no explicit resourcing is required because you move to newer versions as part of natural system replacement cycles, but this will not always be the case, especially when dealing with software support lifetimes shorter than that of the hardware. It pays to ensure that your superiors are aware well in advance of when major upgrades need to be carried out, so that with luck the necessary resources can be made available in good time. Plan early, plan well, and stay safe.


Farewell to XP (part 1)

8 April 2014 marks the end of an era for many IT staff, and users too. After over 12 years, Microsoft will finally be terminating support for Windows XP, arguably its most successful operating system ever.

A little history


After over twelve years, the end of the line for Windows XP is fast approaching

Windows XP was released in August 2001, and it’s worth reflecting briefly on how different things were. A fairly typical PC might have a single-core 32-bit processor running at around 1GHz, 256MB RAM and 30GB storage. Windows NT and 2000 had achieved some popularity in business environments, but the old Windows-on-DOS platform dominated (though the less said about Windows Me the better). Away from the University network, domestic broadband was still something of a novelty and most users were still on dialup.

Meanwhile, Apple were a niche player, perhaps best-known for the translucent CRT-based iMac, and only a few adventurous types had tried the new OS X 10.0, still in need of its training wheels. The iPod had yet to be released, and few people had ever heard of smartphones. The cellphone market was dominated by Nokia, producing handsets optimised for making telephone calls. The dominant web browser was Internet Explorer 5; a few people still stuck with Netscape. Sites such as Facebook, Twitter or GMail remained years away; Wikipedia had fewer than 10,000 pages and few had yet heard of it.

Past security problems

Security threats were not unknown, although rarely financially motivated: users might be tricked into opening a picture of a tennis player, releasing a store of emails, while the previous month had seen the Code Red worm infect hundreds of thousands of webservers before attempting, unsuccessfully, to attack the White House. The world had yet to experience the shock of the real-world attacks of September the eleventh.

It is perhaps not so surprising that Windows XP was not written with security in mind from the start. Of course the original Windows XP would evolve significantly, with three service packs offering substantial improvements in security and stability. From the University’s point of view, Service Pack 2 perhaps made the greatest difference, in that by default the Windows firewall was now enabled. The Blaster worm and its derivatives had resulted in over one thousand infections across the University network in a matter of days. This one simple change made such widespread network-based attacks far less likely; indeed the only attacks on a comparable scale we’ve seen since targeted another operating system entirely.

What are the risks?

Doing nothing is really not an option. Each month’s Microsoft updates include fixes for multiple vulnerabilities in Windows. Some will have been identified by Microsoft, and some by other “white hat” researchers, but others are found first by the bad guys (“zero-days”), and only become known to Microsoft once successful exploitation is discovered. Should an attacker who finds a zero-day vulnerability in Windows XP today use it straight away? Almost certainly not: if Microsoft haven’t found it in twelve years, are they likely to do so within the next few months? Use it now, while XP remains under support, and Microsoft are likely to investigate and fix it as soon as possible. If the exploit isn’t used in anger until after 8 April, it may still be investigated and fixed in supported Windows releases, but Windows XP users may be sitting ducks indefinitely.

Ending support

XP was subsequently followed by newer offerings, with much-enhanced security features built in from the ground up. The unloved Vista was released almost seven years ago and was followed in 2009 by the far more popular Windows 7, then again last year by the radically different and much-criticised Windows 8. Retaining any degree of support for four such differing releases is clearly a substantial overhead even for a business the size of Microsoft. There comes a time at which they must decide enough is enough and cut support.

I’m not aware of any other mainstream operating system which has retained support for such a long time, and so far, the nearest competitors have also been Microsoft products: Windows 2000 managed ten and a half years; Windows 98 managed eight years (after being granted a reprieve two years earlier). Windows Server 2003 will get twelve years. Red Hat Enterprise Linux will in time manage slightly longer, as the two most recent versions are scheduled to reach thirteen years early in the next decade.

Will this really be the end of support for XP?


If you’re rich enough, you may avoid it.
Image from Flickr by [garydenness] licensed under CC BY-NC-SA 2.0

Possibly. There is precedent for Microsoft granting a stay of execution: it happened with Windows 98. Support for Windows 98 was originally planned to end in January 2004, but after vocal protests, it was extended for a further two and a half years, until July 2006. In late 2003, Windows 98’s share of the install base was probably comparable to Windows XP’s share today, and shrank considerably during the extra thirty months.

It’s not impossible that Microsoft will do something similar this time, but we simply cannot afford to work on the assumption that it will. The situation is not really comparable. July 2006 was just over eight years since the release of Windows 98; we are already past twelve years with XP. And if I were Microsoft I would be very keen to avoid the perception of “crying wolf” over end of support dates. Last-minute extensions are a great way to annoy those who have put considerable effort into ensuring that they are ready for the originally-announced date, and encourage people to ignore the issue in future.

Even without a stay of execution, April will not be quite the end … if you’re rich. Microsoft’s Custom Support programme will offer patches for critical vulnerabilities to those who can afford them. But prices are in the “if you have to ask how much, you can’t afford it” league: initial fees are estimated at $200 or more per system per year to retain access to critical updates, with minimum system counts that will almost certainly render the programme unaffordable within the University. Since Microsoft will continue to produce the updates, they could decide to offer fixes more widely in the event of a particularly virulent infection, but would they actually do so? Perhaps they would as a goodwill gesture if a vulnerability were threatening the overall stability of the global Internet, but for lesser threats I really wouldn’t want to bet on it. Play safe and upgrade.

What should people in the University do?

The short, flippant answer is of course “upgrade”. But of course in reality it is not that simple, and the answer is a lengthy article in itself. I will therefore address this in detail in a second post.


Cruelty to cats: Apple’s new security support policy?

Smilodon skull

Is Apple hoping that their own big cats will soon go the way of Smilodon?

On Tuesday of last week, Apple proudly proclaimed the launch of their latest and greatest operating system, OS X 10.9 Mavericks. After over 12 years, they’ve finally run out of big cats and moved on to Californian placenames. What’s more, they’ve even removed one of the obstacles to upgrading by making the new release available free of charge. But, as a few others have noted, there appears to be a nasty sting in the tail if you look more closely.

Among the many security advisories released by Apple on Tuesday is a slight oddity: there’s one named OS X Mavericks v10.9, released for “Mac OS X v10.6.8 and later”. Listed are over 40 separate security fixes in OS X 10.9. Clearly these can’t be fixes for bugs in 10.9, since it’s just released; they are fixes for security problems in older versions of OS X. There are no security bundles or point releases which keep you on the old release; the message seems to be that everyone should upgrade to Mavericks. As far as Apple is concerned, those big cats are on the road to extinction.

Can we be sure? No. We have no inside view of what goes on among the corridors and conference rooms of Cupertino. But we can make an educated guess on the basis of the information available. Not least because this situation is strangely familiar. Compare the security advisory for OS X Mavericks v10.9 with that for iOS 7, or indeed earlier releases of iOS. The bugs may differ, but the overall structure is the same, and we know what the support position is with iOS: if you want security patches, you run the latest version. It’s free, so what’s stopping you? Your chosen device turns out not to be supported any more? Tough. The Apple Store is that way; go and be a good little capitalist consumer.

Apple’s policy on security support

Apple don’t appear ever to have issued any official public statement regarding security support for OS X. Nevertheless in recent years a pattern has been established, which can be extrapolated to predict the likely future position. Security fixes would appear for the current version of OS X and for the previous version, although some private comments suggested that support for the previous version was not guaranteed. Occasionally fixes might even appear for the previous-but-one release, especially since the Flashback malware struck in early 2012. The past few months have seen a handful of updates for 10.6.8, including Java (a vulnerability in which led to the Flashback outbreak), Safari and Quicktime, though nothing in the underlying operating system.

So why not upgrade?

Are you ready to upgrade yet?

Are you ready to upgrade yet?

You may ask why anyone would not want to upgrade to Mavericks? After all, it’s free. In 2012 I paid £20.99 to upgrade a Snow Leopard system to Lion; back in 2005 it cost me nearly sixty pounds to go from Panther to Tiger. The financial barrier to updating no longer exists.

I can think of several reasons why one might not want to upgrade, at least not yet:

Mavericks doesn’t support your hardware

You can’t really escape this one. Apple publish a minimum hardware specification for Mavericks. It’s similar, but not identical to, the requirements for Mountain Lion. There are certainly quite a few systems around which cannot be upgraded from Lion to Mountain Lion, including several in my department, although some people were simply waiting for the release of the new MacBook Pros before buying new hardware.

You avoid “dot zero” releases

It’s common for any new major software version to come with a whole load of interesting new bugs. Many people in the past have tended to wait until at least 10.n.2 before upgrading, because they don’t wish to be the ones effectively completing Apple’s beta testing. The bugs aren’t necessarily trivial, for instance the LDAP authentication bug that came with 10.7.0 which allowed users to authenticate successfully regardless of the password entered. That was no mere “teething problem” but revealed a fundamental flaw in Apple’s quality assurance.

Your applications don’t run on Mavericks


The California surf isn’t for everyone just yet

Not every software vendor is involved in Apple’s beta program and able to have updates available the moment a new release appears. Here in the university, three such applications are our network backup system (based on IBM’s Tivoli Storage Manager or TSM), Sophos Anti-Virus, and our whole disk encryption service.

In the past it has taken months for IBM to release an official TSM backup client for a new OS X release. A client for an older release might work correctly, but there is a risk of unexpected problems, and it won’t be officially supported by IBM. We can allow users to back up at their own risk but still need to conduct some local testing. It would be irresponsible for us to let users back up without having a reasonable degree of confidence that they will be able to successfully restore their data should the need arise. [Update, 4 November: the HFS team seem confident that there are no major problems, although there remains no official support from IBM]

Depending on the application, the failure mode may or may not be immediately apparent. We have heard of one University computer being rendered unusable following an attempt to upgrade in spite of advice not to upgrade until an application incompatibility can be resolved.

Before someone starts advocating Time Machine and Filevault, yes, they have their uses, especially for a home user, but are not necessarily appropriate in our environment.

A critical feature has been removed in Mavericks

Features come and go with each release. The ones that disappear aren’t necessarily well-publicised prior to release day. As an example, a friend has reasons to depend upon SyncServices and was somewhat disgruntled to find it gone in Mavericks. Finding an appropriate alternative takes time and effort.

You don’t have the connectivity to upgrade yet

Mavericks is a 5.29GB download. 5GB is a lot larger than a typical security update, even compared with some of the large updates Apple have pushed out in the past. Some people are on slow or metered connections. In many rural areas, at least in the UK, the download might take several hours, during which the network may be effectively unusable for any other purpose. For people travelling, it may be several times larger than their monthly cellular data allowance or what can be downloaded over a hotel wifi connection overnight. In my case I can purchase extra allowance for my 3G stick, but it would cost me £75 to do so even if everything worked perfectly. And as a major research university we have people doing fieldwork in areas of the world that can only dream of such good connectivity.

You don’t have the time to upgrade yet

Again, a big one for a university. For a typical home user, it’s fairly straightforward to set the download running, and perhaps spend a few hours sorting out a few niggles of the new release. Great for them, but it doesn’t necessarily scale. It takes significant time and effort to upgrade a classroom full of systems. If you weren’t expecting to have to upgrade them until OS X 10.10 appears on the horizon (next summer?) then the necessary resources are devoted elsewhere. Upgrading might disrupt teaching, experiments, even examinations. Months of work may need to go into the set up and testing of a new release before it can be deployed.

Now, you may say that Apple aren’t much interested in the enterprise market, and I wouldn’t disagree with you. Nevertheless they have, historically, had a huge customer base within the educational sector. It wasn’t so long ago that support for the AppleTalk networking protocol was a key requirement of the university’s backbone network.

I can’t upgrade yet; what should I do to protect my computer?

As usual it’s all about risk. Do what you reasonably can in order to protect your computer, your information, and yourself. There is no such thing as “completely safe”, but you can take measures to reduce the probability of bad things happening. We cannot predict what the next major attack against OS X will be, but the more possible risks that are addressed, the less likely it is to hit you.

Applications and plugins

Mountain Lion

How do you stay safe with a Mountain Lion?

Bear in mind that a high proportion of attacks target vulnerabilities in applications, not the underlying operating system. For instance, Flashback, the most widespread malware seen for Macs in recent years, targeted a vulnerability in Java. At the time, Java was supplied through Apple, and updates frequently appeared many weeks after their release by Oracle; this has subsequently changed. Many applications will continue to receive updates, possibly for a few years yet, but some will not, and it is important to understand where the risks lie.

The most vulnerable applications are those which can receive information directly from arbitrary places in the outside world. Generally those will be your web browser and email client, together with plugins and helper applications used to handle certain kinds of content: Java, Flash, Quicktime, PDFs, Office documents.
Without a clear statement from Apple as to which they will still support on older releases, we must make an educated guess based on the evidence currently available.

Apple released updates for Safari (and the underlying Webkit library used by other applications handling web-based input) for OS X 10.7 and 10.8 last week, so there are reasonable chances that this won’t immediately be a problem.

However, it is possible that Apple Mail is now supported only on 10.9, given the inclusion of several mail-related vulnerabilities on the list of fixes in 10.9. Unless you’re particularly keen on Apple Mail you may wish to consider a different email client such as Thunderbird, or simply using webmail, until you upgrade to Mavericks.

Flash is not shipped by Apple so will likely remain supported by Adobe for the time being. Despite their change in policy after Flashback, Apple have still been distributing Java updates as soon as they are released by Oracle; given the negative publicity about Flashback it is likely they will continue doing so for the time being. The situation with Quicktime is less certain.

PDF handling is by default done through Preview.app; as part of the core operating system it is likely that this may not receive further updates on 10.7 or 10.8; perhaps there is some value in considering a switch to using Adobe’s PDF reader on these platforms. For Office files, consider Microsoft Office (available at preferential rates for many University members), or the free (in multiple senses of the word) LibreOffice. If you are switching to third-party applications for particular filetypes, ensure they are configured as the default.

Follow good practice

A lot comes down to the good practice that we advocate all the time. Install antivirus software – it doesn’t guarantee 100% protection but is a lot better than nothing, and Sophos is available for free for members of the university. Ensure that all software is checking for updates on a regular basis, at least once a week (and much more frequently in the case of antivirus). Make sure any available updates get installed promptly. Consider using a firewall. OS X includes a basic software firewall: ensure it is enabled. A hardware firewall may offer better protection; many University colleges and departments have a firewall in place, and standard domestic broadband routers generally include at least a basic firewall capability. Exercise caution in opening email attachments, even if they appear to come from someone you know, or in downloading software from untrusted sources.

Plan on upgrading eventually

Finally, bear in mind that despite these measures, you still lack security support for the core operating system. Following the above advice is a stopgap measure that will prevent some (and possibly most) attacks, and buys you some time, but not infinite time – consider it as advice to tide you over for perhaps a few months, but certainly not years. You still need to plan to upgrade at some point, but at a time that better suits you and your work, not Apple’s marketing department.

If you have hardware that can’t run Mavericks, and can’t afford Apple’s latest hardware offerings any time soon, remember that alternative operating systems do exist. There is a software company based in Redmond who will gladly sell you an operating system for any Mac released in the last seven years, though avoid Windows XP or you’ll find yourself in a similar situation next April. If you are more adventurous, free alternatives exist.

Take care and stay safe.


2013 FIRST Conference

View of a park in Bangkok

Two members of OxCERT attended the 25th annual conference of FIRST (the Forum of Incident Response and Security Teams), held at the Conrad Hotel in the bustling city of Bangkok, Thailand. This year’s hosts were ThaiCERT, the Electronic Transactions Development Agency and the Ministry of Information and Communication Technology. It was a packed schedule over five days, but here are some of the highlights.

The conference kicked off in grand style on Monday morning with opening remarks from her Excellency Ms. Yingluck Shinawatra, Prime Minister of Thailand. The Prime Minister welcomed us all to Thailand and discussed the benefits that the Internet can bring to all people and that security is necessary to preserve those benefits.

The Prime Minister’s appearance was followed by the first keynote speech of the conference, given by James Pang, discussing Interpol’s role in facilitating international police cooperation to combat cyber crime. According to Interpol, around the world 14 people fall victim to cyber crime every second.

The second day began with opening remarks from Chris Gibson and a quick video showing the fantastic job ThaiCERT did in organising this year’s football tournament. The first session of the day was a keynote speech from Dr. Paul Vixie of the Internet Systems Consortium. Paul talked about some of the botnet takeovers he has been involved in and some of the problems associated with sharing information from those takeovers. To address these problems the ISC has created the Security Information Exchange, designed to be a scalable framework for information sharing; this may be a useful resource for us in the future.

View of rooftops in Bangkok

On Wednesday morning Jeff Bollinger, Brandon Enright and Matthew Valites from Cisco gave a presentation titled “Winning the game with the right playbook”. During this interesting talk the team from Cisco highlighted the importance of going beyond predefined reports generated by security equipment to create succinct reports tailored to the individual environment.

The talk went on to discuss the use of Splunk to aggregate data from a variety of sources based on common fields such as timestamp and IP address. We collect information from multiple sources and much of it is queried and correlated by hand, so a tool such as Splunk that can do that correlation for us could be very useful; the sketch below illustrates the idea.
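
Purely as an illustration of the kind of correlation such a tool automates, here is a toy Python sketch joining two invented log sources on a shared IP address within a five-minute window (the file names and fields are made up for the example):

```python
# Flag firewall log entries whose source IP also triggered an IDS alert
# within five minutes. Each CSV is assumed to have 'timestamp' (ISO
# format) and 'ip' columns.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def load(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            row["timestamp"] = datetime.fromisoformat(row["timestamp"])
            yield row

alerts_by_ip = defaultdict(list)
for alert in load("ids_alerts.csv"):
    alerts_by_ip[alert["ip"]].append(alert)

for entry in load("firewall.csv"):
    for alert in alerts_by_ip.get(entry["ip"], []):
        if abs(entry["timestamp"] - alert["timestamp"]) <= WINDOW:
            print(f"correlated: {entry['ip']} at {entry['timestamp']}")
```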

After lunch Tomasz Bukowski from NASK/CERT Polska and Arseny Levin and Rami Kogan from Trustwave Spiderlabs gave talks about various types of malware and some of the techniques malware authors use. It’s helpful for us to have a good understanding of the way different pieces of malware behave so we stand a better chance of detecting them on our network.

Wednesday night was the night of the conference banquet. This year we were driven through Bangkok to the Siam Niramit theatre (holder of the Guinness world record for the tallest stage), where we were treated to an impressive show based on Thai history and culture, complete with a live elephant! After the show we had a delicious Thai meal before heading back to the hotel.

Statue near a temple in Bangkok

On Thursday morning John Kristoff of Team Cymru gave a presentation on security issues related to IPv6. As we all know, IPv6 is going to come into mainstream use sooner or later and it is likely to be well worth the time and effort to be prepared from a security point of view when it does.

Michael Jordon from Context finished the day with an interesting talk on using AI to detect malicious domains from registrar information. He described a proof of concept he has been developing which uses Bayes’ theorem to estimate how likely a domain is to be malicious; the toy sketch below shows the underlying calculation. The idea of using artificial intelligence for this sort of purpose is an interesting one, although as the field is still in its infancy we may have to wait some time before we can practically make use of it.
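
The feature and the numbers here are invented purely for illustration, but this minimal Python sketch shows the Bayes’ theorem calculation at the heart of such an approach:

```python
# Bayes' theorem: P(malicious | feature) =
#     P(feature | malicious) * P(malicious) / P(feature)
def p_malicious_given_feature(p_feature_given_mal: float,
                              p_feature_given_ben: float,
                              p_mal: float) -> float:
    p_ben = 1.0 - p_mal
    p_feature = p_feature_given_mal * p_mal + p_feature_given_ben * p_ben
    return p_feature_given_mal * p_mal / p_feature

# Invented example: 60% of known-bad domains were registered in the last
# 24 hours, versus 2% of known-good ones, and 1% of new domains are bad.
print(p_malicious_given_feature(0.60, 0.02, 0.01))  # ~0.23
```

A real classifier would combine many such features (registrar, name entropy, registration age and so on), but the arithmetic is the same.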

The final day of the conference was a short one. Lauri Korta-Parn and Masako Someya from the Cyber Defense Institute Inc. gave a talk on improving the cybersecurity capabilities of critical infrastructure. The talk began with some examples of cyber attacks targeting critical infrastructure around the world, including the Stuxnet worm which targeted uranium processing facilities in Iran.

We may not be processing uranium at the University but we do have IP based control systems for various pieces of equipment and must ensure that they are properly secured.

Finally all that remained was to say goodbye to the other delegates and have a final look around Bangkok before heading back to the airport for the long flight home. Overall this has been a very interesting and informative conference and has given me plenty of food for thought. FIRST and ThaiCERT have done an excellent job and I’m sure everyone will be looking forward to next year in Boston!


Apple support lifetimes strike again

Wednesday saw the official launch of Apple’s iOS version 7, the operating system behind the iPhone, iPad and iPod Touch. But as with some previous updates, there’s a bit of a sting in the tail.

I’ve complained about Apple’s security support in the past, in the context of desktops. When it comes to phones and tablets, things appear even worse. Apple have never, to the best of my knowledge, issued any official statement about security support for versions of iOS, but all past evidence has suggested that once a new major version is released, support for earlier versions ceases entirely. There is certainly no reason to believe that things are any different with iOS 7.

What won’t run iOS 7?

iOS 7 will run on all current Apple phones and tablets, as one might expect, and many older devices – as far back as the iPad 2 and iPhone 4. Support for the venerable iPhone 3GS has finally been terminated, probably on the grounds that its 256MB RAM is insufficient for the demands of the new release. The 3GS was released in June 2009, long enough ago that many who purchased it will by now have gone through at least one phone upgrade cycle. Nevertheless, it remained a current Apple product until the iPhone 5 was released a year ago.

With the iPod Touch, things are a little different. While the iPhone 4 has 512MB RAM, the fourth-generation iPod Touch, which was released around the same time, comes with half that; consequently it is not supported by iOS 7. This is a product which Apple officially discontinued just four months ago.

It doesn’t even end there. Recently-discontinued models can frequently be found on the Refurbished Store. While I found nothing yesterday on the UK store, on the US store, they had five different models of 4th generation iPod Touch available. Complete, it is claimed, with one-year warranty:


Apple US Refurbished Store, 19 September 2013.

I’m no lawyer, and I’ve not seen the small print that comes with these devices, but I’d like to know the legal position if Apple refuse to fix known security vulnerabilities under the warranty.

Apple have done similar things with iOS devices in the past. For instance, software support for the iPhone 3G was suddenly dropped in March 2011, about 32 months after its initial release, and 8 months after they ceased selling it. Support for the original iPad was dropped with the release of iOS 6 a year ago, eighteen months after the product was discontinued.

What are the risks?

How dangerous is it to run an unsupported operating system on a mobile device? As is so often the case in the world of security, it depends.

New iOS releases typically fix a large number of vulnerabilities, and iOS 7 is no exception. It is likely that Apple has known of many of these for months but prefer to bundle updates together, unless there is a pressing reason to issue them earlier (such as widespread exploitation in the wild).

Windows desktops remain the target of choice for malware authors, but other platforms do get attacked, as with the OS X Flashback virus. And as time progresses, the population of vulnerable devices increases. While ardent Apple fans may rush out to get the latest Apple products, many older devices will get sold on or given to friends and family. It may be difficult to produce successful, profitable malware for iOS, but that’s not to say it’s impossible, and if something major does strike, antivirus is not going to save users. Malware for Android certainly exists in spite of the hugely fragmented version base. With iOS, one can be sure of tens of millions of devices still running iOS6 (or earlier), some of which will be used for activities such as online banking or credit card purchasing, which are of particular interest to criminals.

Personally, I’d want to minimise the amount of my personal data (and indeed anyone else’s) exposed to an unsupported system, and handle anything sensitive on a fully-featured desktop or laptop computer, at the expense of convenience. Others may judge the risks differently, but I do wonder just how many users are even aware?

What should one buy and when?

If going down the Apple i-device route, then without any official end-of-support announcements, all one can do is try and predict the time to buy which is likely to give the longest period of support. Watch out for new products offering a significant performance increase (for instance a doubling of internal RAM – not to be confused with the gigabytes of flash storage), or with a significant architectural change (for instance, the new 5S is the first with a 64-bit processor). Buy the latest model, soon after its release. Last year’s model may still be available for less money, but will probably lose support at least a year earlier.

It’s worth briefly noting that things are different in the Android world. Multiple major releases of Android are simultaneously supported, which is good news. Less so is the reliance in many cases on the handset manufacturer and (frequently) your chosen carrier in order to get updates. Often users are lucky to receive any updates whatsoever, especially out of the initial contract period. Android malware is widespread even if much of it is relatively benign.

What should Apple be doing?

I don’t expect Apple to be able to support all devices forever. Clearly the need to support old devices should not stand in the way of innovation and improvement. There are overheads to supporting multiple releases simultaneously, in terms of managing security patches (although many will be common to multiple releases), and in running an app store where not all apps will run on an older release. These are not insoluble problems, especially to such a wealthy company, but ultimately a business will want to see a return on such an investment.

Where is the return in supporting older devices? Consumers have already bought them. An unsupported device may still provide revenue for Apple through purchases of music, video and apps, but if the user will purchase those irrespective of support, why bother with the expense? If the consumer does encounter problems, persuade them to buy a shiny new device. As long as consumers are unaware of the risks, are aware but accept the risks, or are aware and promptly buy a new i-Device, the incentive isn’t there. What will hit them is bad publicity. That surrounding Flashback did result in some changes with regard to security support for OS X, and we note that occasionally, but not consistently, security updates still appear for Snow Leopard as well as for Lion and Mountain Lion. It may take a comparable outbreak on iOS to get Apple to change their attitude to the platform, and sooner or later, such an outbreak is likely to hit.

What I would like to see is a commitment to providing an operating system with full security support for a minimum period of time for every device. For mobile devices, perhaps four years after initial release, and two years after last sale, and for desktops and laptops, seven years from initial release and five years after last sale.

But I won’t wait up.


Aaaarrrggghhhh – ye be hacked!

Ahoy me hearties! Talk like a pirate day it may be, but thar be good reasons why it’s not a good idea t’ act like one online.  Pillagin’ the Internet for booty might seem attractive t’ some bilge-sucking scallywags and ye can see why sometimes.  A recent study by Ofcom cites cost, availability o’ desired material and convenience as the main reasons why people choose t’ hornswaggle when it comes t’ online content, and more than half of ye Internet users have downloaded or streamed infringin’ material at some point.  The Ofcom study also suggests that infringement notifications or technical measures, as foreseen by the Digital Economy Act, were unlikely to change behaviour, but avast ye, for here be a couple o’ good reasons why your booty may turn out t’ be cursed!


Image from Flickr by [skylervm] licensed under CC BY-NC-SA 2.0

For starters the University takes its response t’ copyright infringement notices seriously (even if some users don’t) but they incur a cost t’ process and respond t’.  That cost be passed on t’ a user’s college who are then left t’ decide if and how t’ discipline the scurvy bilge rats.  That might well be walkin’ the plank but is more likely t’ be a monetary fine which exceeds the value o’ the pieces of eight ye might have saved by not payin’ for the content in the first place.

Secondly, copyright infringement can be a great way t’ scuttle your computer and get yourself hacked.  If ye be downloadin’ software from untrustworthy sites then how do ye know what it does, and whether or not some scurvy dog has altered it t’ do something nasty?  If you’re usin’ a knocked-off operatin’ system (Windows or OS X which your messmate has given ye, for example) then ye won’t be able t’ repel boarders or batten down those (security) hatches.  Not bein’ able t’ apply critical security updates leaves ye floatin’ like a sittin’ duck whenever ye do anythin’ online. And thar be some truly fearsome buccaneers out thar! Disreputable sites servin’ unlicensed films, music and books can also be breedin’ grounds for exploits and malware distribution.

So, if ye want to stay shipshape then, by all means talk like a pirate, but don’t act like one.  Savvy?

If ye be navigating t’ Oxford this term why not draw alongside on Thurs 24 Oct 12:30-13:30 t’ learn how t’ secure your PC or Mac, and read more about how t’ protect yourself on the InfoSec website?

Posted in Information Security | 1 Comment

Content filtering and the University

The issue of web content filtering is one which crops up every so often within the University. Do we give our users the freedom to visit any site they like, provided that University regulations (and the law of the land) are not broken, or should technical restrictions be put in place, for instance to stop them from viewing offensive content, or simply to stop them wasting entire afternoons on Facebook?

The subject of filtering is attracting considerable attention at a national level at present, with the Government and the major ISPs seemingly being at loggerheads over the matter.

The University’s position

The University’s position on web content-filtering is, as with many things, essentially to devolve the matter. Very little is done at a central level, and to some staff this comes as something of a surprise. The only exceptions are for certain malicious content, for instance domain names used by botnet controllers, or phishing sites which pose a particular threat to the University. Even there, the restrictions are not applied to all University networks, and we must be very careful to minimise the possibility of false positives.
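To give a flavour of how such blocking can work at the DNS level (a sketch only, and not necessarily how the University’s own blocks are implemented), a resolver supporting Response Policy Zones, such as BIND 9.8 or later, can be told to return NXDOMAIN for known botnet controller domains. The zone name and the blocked domain below are invented for illustration:

```
; rpz.example.local - illustrative response policy zone for a BIND resolver,
; enabled in named.conf with:  response-policy { zone "rpz.example.local"; };
$TTL 300
@                    IN SOA  localhost. hostmaster.localhost. (
                             2014041401 12h 15m 3w 2h )
                     IN NS   localhost.

; In RPZ semantics a CNAME to the root (".") means "answer NXDOMAIN",
; so lookups of the (invented) controller domain below simply fail.
badbotnet.example    IN CNAME .
*.badbotnet.example  IN CNAME .
```

Blocking at the resolver has the advantage that a block can be lifted, or applied only to particular networks, without touching individual machines – consistent with the caution described above.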

Our constituent colleges and departments are given the freedom to do their own thing. From the centre we have limited visibility as to what is done at a local level, other than by asking. Recent (unscientific) enquiries have given us some idea. In many cases the answer is nothing. Some others do some filtering of security threats only, one or two seek to limit access to Facebook and not much else, and a handful impose fairly stringent restrictions. From the responses, it seems departments are more likely to do so than our constituent colleges, perhaps reflecting concerns over confidential data and staff productivity, while colleges prefer not to impose restrictions on their student accommodation.

Some history

A university web proxy error message from around 2000

Back in 1999, the University introduced an intercepting web caching proxy, more for political and financial reasons than for technical ones. At the time the University was severely constrained by limited bandwidth on its external connection (somewhat less than many have on domestic broadband these days), there was no immediate prospect of an upgrade, and transatlantic traffic would frequently slow to a crawl by mid-afternoon. Initially there were one or two experiments with content filtering, but these were rapidly withdrawn after objections were raised. Some security-related blocks (for instance against the Code Red worm) proved extremely useful and caused little or no disruption to legitimate traffic. Bandwidth-limiting of certain content also proved fairly successful at controlling the limited available bandwidth without blocking content entirely, and offered a great degree of flexibility. Very few complaints were received, and following a major upgrade to the University’s connectivity, the restrictions (and later the proxy itself) were removed.

These days, if asked to advise regarding content filtering within the University network, how would we answer? As with many topics in security, the short answer is, “it depends”. Many of the perceived advantages introduce new risks. To a great extent it depends on what the college or department is looking to achieve, but it is worth noting that technical measures are frequently a poor solution to social problems. Doing almost anything can lead to accusations of “censorship” or denying “academic freedom” – indeed, there were a handful of dissenters when we introduced email antivirus filtering over a decade ago.

Malicious content

Blocking malicious content is relatively uncontroversial. Nevertheless it can be prone to false positives. For instance, you probably don’t want to block a news site entirely because its pages pull in malicious adverts from a third-party site – and your users will object if you do. Nor do you necessarily want to block an entire domain because of one malicious item. We’ve known the entire ox.ac.uk domain to be blacklisted by one product on account of one of hundreds of servers within the University hosting “malicious” content. The offending content was a “white-hat” tool which had been offered at the same location for well over a decade. Actions need to be proportionate to the threat.

Nevertheless, most malware can be blocked without your users ever noticing. They’re more likely to notice blocks against phishing sites – indeed we’ve received occasional complaints from our users that our blocks are preventing them from “verifying their account” or whatever the phishers are asking them to do. Ensure your users are presented with a clear, informative error message, preferably one specific to phishing attacks.
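As a sketch of that last point (assuming a Squid proxy; the file path, ACL name and ERR_PHISHING template are all invented for illustration), a tailored error page can be tied to a list of known phishing domains:

```
# squid.conf fragment: block known phishing domains with a tailored error page
acl phishing_sites dstdomain "/etc/squid/phishing-domains.txt"

# ERR_PHISHING is a custom template placed in Squid's errors directory,
# explaining that the site is a phish and whom to contact if users disagree
deny_info ERR_PHISHING phishing_sites
http_access deny phishing_sites
```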

Where do your users want to go today?

Things get trickier once you start blocking access to content that your users really want to reach. They’ll try and find a way round it, or find a friend who can. You might notice and block their workaround. They’ll soon find another. A colleague of ours assures us that many freshers will be perfectly capable of defeating any content-filtering on their college network – they’ve learned about such things in order to defeat tighter restrictions on their school (and possibly home) networks. When we tried blocking access to Napster and similar, we rapidly realised just how many services existed that offered our users a workaround – and those were just the ones that used HTTP on port 80. It soon turned into a huge game of “whack-a-mole” that we were bound to lose.

Driving traffic towards anonymising services, VPNs, Tor and so forth may present a bigger risk than the problem you are trying to address. Malicious traffic may no longer be blocked by firewalls or intrusion prevention systems, or detected by OxCERT’s monitoring. If a Tor user accidentally configures their system as an exit node, you may find that an IP address on your network becomes the apparent source of external users’ traffic. Perhaps not a major concern if it allows foreign users to bypass the censorship of an oppressive regime, but very much your problem if it results in accusations of copyright infringement or accessing of child sexual abuse content.

Recreational versus “work” usage

Some workplaces desire to prevent students and/or staff from accessing sites unrelated to their work. For some that may just be a restriction on access to Facebook or Twitter. But not everyone’s usage of social media will always be recreational: it’s not uncommon to use them for publicity purposes, specialist news items and suchlike. Or perhaps, while not directly related to the job, they’re nevertheless beneficial – for instance the local bus company uses Facebook to post service updates. In severe weather, staff may regard access to timely information regarding transport, school closures and so forth as essential.

There is a risk that introducing filtering will upset users, especially if they consider it to interfere with work, or what they consider to be “reasonable” recreational usage. Policies need to be clearly communicated in advance, preferably with the reasoning behind them. The process for requesting exceptions needs to be straightforward, transparent and quick.

The strictest policy would be a default-deny, restricting users to accessing only those sites required as part of their work. Enumerating all such sites may be difficult, especially when webpages commonly pull in content from other sites.
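In proxy terms a default-deny policy is only a few lines – here a Squid-flavoured sketch with an invented whitelist path – but the hard part is maintaining the list:

```
# squid.conf sketch of a default-deny policy: only enumerated sites allowed
acl worksites dstdomain "/etc/squid/worksites.txt"   # one domain per line
http_access allow worksites
http_access deny all                                 # everything else blocked
```

Every third-party host that a permitted page pulls content from – CDNs, analytics, embedded media – must also be enumerated, which is precisely the difficulty just described.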

As an example, consider a member of staff who was in the habit of playing a particular online game in his lunchbreak. One day this resulted in his desktop becoming infected with a virus, via exploitation of a Java vulnerability.

The “knee-jerk” reaction in some organisations might be to impose stringent restrictions on usage of anything but directly work-related sites. But in some roles that might be extremely difficult – the staff may need to access all sorts of sites as part of their job. Do you really want the overhead of dealing with requests for exceptions all the time? What is the problem you are actually trying to address? Why did this system get infected? While the site in question was recreational in nature, the user had no reason not to trust a site they’d used dozens of times before. On this occasion it happened to deliver a Java exploit, most likely through third-party content. Why did that exploit succeed? Because a critical Java patch had not been applied. Rather than putting resources into content filters, strict restrictions, and all the problems those bring, perhaps they should be directed at better, more timely patch management.

Clearly that will not always be appropriate. Desktops used to control a nuclear power plant probably shouldn’t be able to access arbitrary internet content.

Adult content

With apologies to Botticelli

Attempts to filter pornographic/offensive/adult content are not uncommon, but how is that defined? “I know it when I see it” won’t wash when configuring your firewall. Despite the launch of the .xxx domain, the internet is not conveniently segregated into “porn sites” and “acceptable content”. Many sites, Wikipedia included, may have some content deemed offensive but a lot considered perfectly acceptable. Trying to compile a list of “sites containing porn” is futile enough; trying to compile a list of every offending URL will be impossible.

Better filtering might use some kind of heuristics, but even so, where is the boundary between acceptable and unacceptable content? Attitudes vary hugely depending on culture, context, and indeed the individual. Nudity is extremely common in the Fine Arts (some very explicit content is openly displayed in the Musée d’Orsay in Paris). Some blocked sites may be ones most would consider perfectly legitimate, yet users may be too uncomfortable to request exceptions – examples frequently cited are sites dealing with issues of health or sexuality.

As one IT officer puts it: if a college blocks pages, you risk press accusations of censorship; if it doesn’t block anything, you risk stories about Oxford allowing students to browse smut. Damned if you do, damned if you don’t. And this article will no doubt be damned by some content filters for use of the word “damned”. Such “profanity”-based systems have, often rightfully, received a lot of flak over the years. Systems which block information about Scunthorpe, news of Hilary Swank, or discussion of Cleopatra’s bathing arrangements simply because they contain offensive strings are frankly unfit for purpose.
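A toy example makes the failure mode obvious; the banned strings below are deliberately mild stand-ins, but the Scunthorpe case works in exactly the same way with a stronger word:

```python
# Naive substring-based "profanity" filter, for illustration only
BANNED = {"ass", "swank"}   # mild stand-ins for a real blocklist

def naive_block(text: str) -> bool:
    lowered = text.lower()
    return any(banned in lowered for banned in BANNED)

# Perfectly innocent phrases fall foul of the substring test:
for phrase in ["Hilary Swank wins an Oscar",
               "Classics lecture notes",
               "Cleopatra bathed in asses' milk"]:
    print(phrase, "->", "BLOCKED" if naive_block(phrase) else "allowed")
```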

Illegal content

Many of the large domestic ISPs in the UK have taken action to block content which is illegal (at least under UK law) to access, based on a list of content managed by the Internet Watch Foundation (IWF). This certainly helps to guard innocent users against accidentally stumbling across some deeply unpleasant content, and will likely deter some of those with no more than a casual curiosity regarding such material, but as mentioned above, those sufficiently determined will find a way round the restrictions.

The underlying Cleanfeed technology behind the blocks is certainly ingenious but far from perfect. In 2008 the blacklisting of an item on Wikipedia drew considerable attention to the system, caused problems for many legitimate users of Wikipedia, and became a textbook example of the Streisand Effect. The Cleanfeed system has subsequently been used to impose blocks beyond its original remit, for instance in order to comply with court orders to block piracy sites, and may be leveraged for further purposes according to the whims of future courts or governments.

What alternatives exist?

Warn users then let them proceed at their own risk?

The alternatives depend on the reasons for wanting filtering in place. Clearly, doing nothing will be technically possible, but may not go down well from a political point of view. If you’re primarily worried about bandwidth utilisation, some form of traffic shaping may be acceptable (a minimal sketch follows below). If you’re concerned about people viewing offensive content in a general computer room, a simple approach that has worked in the past is to print out the IT Regulations in a large font, highlight the relevant clauses, and stick them on the wall as a reminder. If you have concerns about under-18s on your network as part of a summer school, you may be able to shift the responsibility onto the summer school organisers.

If the concern is over staff “wasting” time on Facebook during working hours, take a step back. Are they getting the work done to the satisfaction of their line managers, and if so, is a little recreational internet usage actually a problem? If they’re not working hard enough, are technical measures really the most appropriate solution? There are many other distractions that may affect staff performance, and a lot of them would never be considered a matter for IT to deal with.

One department told us that while they block malicious content, trying to view content in other categories (e.g. “adult”, “illegal drugs”, “gambling”) will simply produce a warning message; users can then proceed at their own risk.
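On the traffic-shaping option mentioned above, a token-bucket queue on the outbound interface is often all that is needed. A minimal Linux sketch, in which the interface name and rate are placeholders for your own values:

```
# Shape (not block) traffic leaving eth0 to 10 Mbit/s with a token-bucket
# filter; traffic over the rate is queued and delayed rather than dropped
tc qdisc add dev eth0 root tbf rate 10mbit burst 32kb latency 400ms
```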

Conclusions

We certainly don’t wish to give the impression that content-filtering is always to be avoided. As with many things, there are pros and cons, and this post has concentrated on the negative aspects which may not immediately be apparent. What seems like a good idea in the first instance may have significant ramifications. What we do suggest is that those involved in determining policy are fully apprised of both the advantages and disadvantages, and appreciate that in most cases a perfect solution will be impossible to achieve. Users obviously need to be made aware of policies and revisions, whether enforced by technical or social means, and of any monitoring in place, including details of what is logged, to whom it is visible, and under what circumstances it will be used.

How well will people be informed in the case of government-mandated filtering imposed by the major ISPs? I’m not hopeful. My domestic broadband comes through a relatively small ISP which, to the best of my knowledge, currently imposes no filtering whatsoever. My cellular data provider, however, does filter “adult” content by default (not that I have found this to be a problem), and I don’t recall them going to any effort to ensure I was aware of this, of the possible consequences, or of the procedures required for opting out. If the government get their way with “default-on” filtering, whether domestic ISPs will do any better remains to be seen.

Posted in Web Security | Comments Off

FIRST Technical Colloquium, Amsterdam: day two


For the first day of the meeting, see http://blogs.it.ox.ac.uk/oxcert/2013/04/18/first-tc-ams1/.

The second day began with a talk by Martijn van der Heide of KPN-CERT on information sharing following a botnet takedown. This led to considerable discussion as to what actions are considered appropriate or even lawful, particularly when the source of the data is somewhat “questionable”. All too often law enforcement prove unable or unwilling to take action, but it is clear that people’s personal data has been stolen, and it is natural to feel that they should be alerted.

Next was a talk from Trend Micro regarding a “customer” of theirs in the so-called “underground” internet – those dealing in malware and stolen data – and the interaction between “underground” tools and the systems operated by the anti-virus industry. This gives some visibility into underground activities and potential upcoming threats.

This was followed by a talk on Spamhaus, providers of blacklisting information primarily regarding spam sources. They are keen that security teams and network administrators should have access to details of spam sources within their own networks. We currently receive some of this information indirectly, but may be able to obtain additional detail via Spamhaus themselves, allowing us to deal better with the problem.
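For those unfamiliar with how such blacklists are consumed, a DNSBL is queried with an ordinary DNS lookup: reverse the octets of the address and append the list’s zone. A minimal Python sketch (zen.spamhaus.org is Spamhaus’s public aggregate zone; free use is subject to volume limits):

```python
import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Check an IPv4 address against a DNS blacklist.

    DNSBLs are queried by reversing the address's octets and appending
    the list's zone: 127.0.0.2 becomes 2.0.0.127.zen.spamhaus.org.
    An answer means the address is listed; NXDOMAIN means it is not.
    """
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

# 127.0.0.2 is the conventional test address that DNSBLs always list
print(dnsbl_listed("127.0.0.2"))   # expect True
print(dnsbl_listed("127.0.0.1"))   # expect False
```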

Paul Vixie then spoke again, splitting his talk into two sections. The first concerned general internet public safety, in particular the abuse of legitimate DNS servers in traffic amplification attacks. These result from several problems: firstly, many ISPs do not have adequate checks in place to ensure that traffic leaving a customer’s network uses source addresses belonging to that customer. Secondly, DNS generally uses UDP, which is stateless. Thirdly, a DNS server may return answers many times the size of the original request. Fourthly, many DNS servers will answer queries from any client, either because they are recursive nameservers deliberately or accidentally configured to do so, or because they are authoritative nameservers whose function is to be globally accessible. In combination, these mean that DNS servers can easily be abused as part of very large denial-of-service attacks, sometimes amounting to tens of gigabits of traffic in total. Most authoritative DNS servers, BIND included, can be secured against such abuse by taking advantage of recently-introduced response rate-limiting (RRL) features. Paul showed a bandwidth graph from a major DNS provider, with a twenty-fold reduction in their outgoing traffic as soon as RRL was enabled.
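For BIND operators, enabling RRL is a small configuration change once running a version with the feature (9.9.4 onwards, or earlier patched builds). The values below are illustrative rather than a recommendation:

```
options {
    // Response Rate Limiting: cap the number of identical responses
    // sent per second to any one client netblock, so an authoritative
    // server is far less useful as an amplifier for spoofed queries
    rate-limit {
        responses-per-second 5;
        window 5;
    };
};
```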

The second part of Paul’s talk concerned the ISC’s Security Information Exchange, a mechanism to share security-related information in realtime between trusted partners, and the mechanisms and formats ISC are using to receive and provide data.

After lunch in Cisco’s excellent canteen, and interesting conversation with some of their CSIRT staff, the first speaker was from NATO on “CSIRT Knowledge Management Needs”. This presentation considered the requirements for effective information exchange between security teams, including issues such as data quality, trust of other teams and effective integration of internal and external information sources.

This was followed by Jaeson Schultz of Cisco discussing “bitsquatting” attacks: the possibility that single-bit memory errors in computer systems result in DNS queries for domain names differing by a single bit from the intended name, for instance twitte2.com instead of twitter.com. Real-world data show small numbers of lookups for such domains, and some have indeed been registered. Significant volumes of attacks are unlikely, however, as there are much greater returns to be had via more conventional attacks.
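Enumerating the bitsquats of a name is trivial, which is presumably how the registered examples were found. A small Python sketch:

```python
import string

# Characters that can legally appear in a hostname label
ALLOWED = set(string.ascii_lowercase + string.digits + "-")

def bitsquats(label: str) -> set[str]:
    """Return variants of `label` that differ by exactly one flipped bit,
    keeping only those that are still valid hostname characters."""
    variants = set()
    for i, ch in enumerate(label):
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped != ch and flipped in ALLOWED:
                variants.add(label[:i] + flipped + label[i + 1:])
    return variants

# Flipping bit 6 of the final 'r' (0x72) in "twitter" yields '2' (0x32),
# giving the twitte2 example from the talk
print(sorted(bitsquats("twitter")))
```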

MyCERT, the national team for Malaysia, gave a presentation describing the tools they use for handling security incidents. Like us they find phishing a major problem, although the attacks they see concentrate on credentials for bank accounts rather than email. Indeed they have developed their own browser add-on to defend users against phishing sites, albeit only those attacking Malaysian banks. They also described searching for defaced websites in Malaysia using information from Zone-H and other sources, a process very familiar to us.

Godert Jan van Manen of Northwave then described analysis of the Torpig (aka Sinowal) malware, an information stealer which we have seen in numerous incidents around the University along with the related Mebroot rootkit. He described the behavioural information that could be learned through a detailed analysis, including the domains used for the control channels.

The final talk of the event was from the Chief Security Architect at ING, a major Dutch multinational bank, looking at the mechanisms behind fraudulent financial transactions and the tactics employed by banks to counter them. He described the importance of behavioural analysis in distinguishing malicious usage from legitimate. This felt particularly relevant to one who had inadvertently triggered his bank’s anti-fraud mechanisms while trying to book the train to Amsterdam to attend the meeting!

In all, this technical colloquium proved a very worthwhile event, with many talks on areas of particular relevance to the team’s activities, giving us plenty of “food for thought” with regard to additions and improvements to our own systems and processes. We thank Cisco for hosting the meeting, look forward to attending future FIRST events, and hope that they will run as smoothly.

Posted in FIRST Conference | Comments Off