TRANSITS I Workshop, Prague

At the end of November I attended the TERENA TRANSITS I workshop in Prague. TRANSITS I is aimed at those who have recently joined a CERT or who have been tasked with creating a new CERT. Attendees at the workshop came from a variety of organisations across Europe and beyond. Members of European CERT/CSIRT teams have developed the course and kindly volunteered their time to deliver the content; TRANSITS is also supported by ENISA (European Network and Information Security Agency). Overall I found this to be a useful and informative few days. The TRANSITS course is a valuable resource for anyone joining or setting up a CERT team for the first time, with modules covering the operational, organisational, technical and legal issues faced by a CERT team.


Image of Prague Castle

The operational module covered the incident handling process of a CERT. Incident handling is the bread and butter of a CERT’s working day and it was interesting to hear how other CERTs approach this. Also discussed were various tools that can be used to collate information on threats and guide the process of turning a vulnerability alert into an advisory which can be published, something that we do on almost a daily basis. One of these tools, Taranis, we are hoping to implement in the future.


The organisational module covers where a CERT sits within the structure of its organisation. It is important for any team to have a firm grasp of its mission, its raison d’être, as this informs all further decisions. OxCERT’s mission is defined as:

“To protect the integrity of the University backbone network and to keep services running”

This also defines our constituency: those who connect to the backbone network of the University of Oxford. Leading on from this, we also need the tools and the authority to carry out our mission. One example of such a tool is the ability to block from the network any host that threatens the integrity or availability of services for other University users.


The technical module contains an overview of the various threats a CERT can expect to deal with.  Among those that we unfortunately see on a day-to-day basis are keylogging malware, SQL injection and botnets, to name but a few. The module also gives an overview of various tools and resources that can be used to deal with these threats.

Prague Castle


The legal module stressed that laws and guidance are often updated, so it is essential to keep up to date and ensure you are working on the correct side of the law, especially as our work often leads us into situations where it would be easy to overstep the mark. It was also particularly interesting to compare the different legal requirements affecting teams across Europe. This is worth bearing in mind particularly when travelling, as an activity that is legal in one country may not be in another.

This module also discusses the issue of disclosure: what information to disclose, to whom, and when? Inevitably this will be a mixture of policy and per-incident pragmatism, but it is a topic worth consideration by all CERT teams.

Apart from the taught materials, the course also gave an opportunity to meet members of other CERTs, to network and to exchange PGP keys (to sign later). I found the course presented a good overview of CERT activities and provided a suitable starting point for a recent or would-be CERT member.

Farewell to XP (part 2)

In the first part of this post, I looked at the background to the end of support for Windows XP in April 2014. In this (somewhat delayed, apologies) second part I will consider what those in the University will need to do if they are still using Windows XP, although hopefully much of the content will be equally useful for those elsewhere who are still maintaining XP systems. I will assume that readers are not in a position to consider putting off the problem through Microsoft’s Custom Support programme.

Microsoft are not continuing full support after April

Microsoft aren’t making a full U-turn, sorry.
CC BY-SA 3.0 by

Since I wrote the first post there has been a slight relaxation in policy by Microsoft: support for Microsoft anti-malware products on Windows XP has been extended until July 2015. It is important to note that this is not the same as Microsoft extending full security support for Windows XP, despite what has been reported in some news articles (one of which, at the time of writing, states “Microsoft has decided to continue providing security updates for the ageing Windows XP operating system until 2015”).

Microsoft are simply adding a limited amount of protection, and probably little that would not be offered anyway by third-party antivirus products that will continue to support Windows XP after April. Note that Microsoft’s own blog post states: “Our research shows that the effectiveness of antimalware solutions on out-of-support operating systems is limited.”

Our advice is that this changes nothing: continue pressing ahead with your upgrade and/or mitigation plans, as described in the remainder of this post.

What should people in the University do?

At the time of writing, Windows XP remains in widespread use around the University, although hopefully IT staff should have been aware of the end-of-support date for a year or more and upgrade plans are well under way. It is inevitable, however, that there will be parts of the University where it simply will not be possible to complete the process of migration away from XP in time. Moreover there will be other areas where XP simply must remain in use, as no other realistic option exists. So what should staff in this position be doing? As mentioned in part 1, “nothing” is not an option!

Risk assessment and prioritisation of upgrades

The most important thing to do in this situation is to determine where the greatest risks lie and to prioritise accordingly. For the purposes of this article I shall consider only the risks posed by the systems currently running Windows XP, although these must be assessed in the wider context of the overall risks in each department and college. Concentrating all efforts on upgrading XP systems and neglecting everything else is almost certainly not the path of wisdom; your “business as usual” activities are just that.

What is most likely to be attacked?

The vast majority of incidents handled by OxCERT can be attributed to one of three main causes: vulnerabilities in the user, vulnerabilities in public-facing services, and vulnerabilities in desktop systems and applications. To the disappointment of IT staff everywhere, replacing Windows XP will do little or nothing for the vulnerabilities in users: they will continue to make the same mistakes as before, for instance responding to phishing emails or executing malicious email attachments. While local services have been targeted in the past (e.g. by Blaster and Conficker), Windows XP is not normally considered an appropriate platform for public-facing services, so it is the third category that merits attention.

The major attack vectors against a desktop system are those which are likely to handle untrusted data from the outside world. For the vast majority of users, such data will mostly come through their web browser or their email client. Malicious content may trigger vulnerabilities in the core operating system, in the web browser or email client, in libraries and components used to handle particular types of content (for instance image display), in additional Microsoft software (e.g. Silverlight, Office) or in third-party software (such as Java and Flash). It is worth remembering that Internet Explorer 8 is the latest version of Internet Explorer supported on XP, which limits the options for keeping an up-to-date Microsoft web browser on an XP-based machine.

Not all of the installed software will lose support next April. Given the size of the remaining XP userbase, many third parties will likely continue to support their own software on the platform for some time yet, including some Microsoft applications. Note that extended support for Office 2003 will end at the same time as that for Windows XP, so you’ll just have to get used to that ribbon, sorry. Importantly, most anti-virus vendors won’t cut support immediately: for University users, Sophos have committed to supporting XP until at least September 2015. Antivirus won’t come close to protecting against all attacks (it never did) but is nevertheless well worth having.

Clearly you will need to prioritise upgrades for some desktop users over others. Determining which users should be upgraded first will depend on local circumstances. You may go for senior and high-profile staff first on account of the confidential data they are handling. Then again, they may be those complaining loudest if something doesn’t work, so you may choose to start with users who are more accepting of the inevitable teething problems.

Specialist systems

Does it sometimes feel like you’re between a rock and a hard place?

What about the more difficult cases? Inevitably there will be some which are particularly problematic, if not impossible, to upgrade. Windows XP installations are embedded into many devices, for example vending machines and scanners. Such systems may run a full XP installation, or they may run Windows Embedded. It is important to distinguish the two, not least because of the different support lifecycles: XP Embedded is supported until the end of 2016; indeed NT Embedded 4.0 remains supported until the end of August 2014. How, and indeed whether, updates are delivered and applied is up to the manufacturer of the device, as are other security measures. Updates which are critical for desktop systems may well be irrelevant in the context of a particular embedded system.

If a device is not using Windows Embedded, however, the April deadline applies. If networked, such devices are vulnerable to attack, and indeed we have seen vending machines on unfirewalled public IP addresses which have been infected with malware. Embedded devices are not the only problematic cases: we are also aware of scientific and medical equipment costing six- or seven-figure sums which is controlled from XP desktops. Upgrading is frequently not an option; indeed in some cases the original vendor is no longer trading.

Avoid unnecessary risks

With such systems we advise considering their essential usage. What software needs to run on the XP system? What, if any, network connectivity is required? For some systems it may be appropriate to disconnect from the network entirely. Beware, though, that this may simply shift the risks. If switching from file transfer over the network to file transfer via removable media, bear in mind that removable media may harbour infections, and that a permanently offline system will not be running up-to-date antivirus, barring very frequent manual updates. Infections spread via removable media can be partially mitigated by disabling Autorun and Autoplay (some additional information is available for IT staff within the university).
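For the curious, the Autorun mitigation boils down to a single documented registry value, `NoDriveTypeAutoRun`, where the bitmask `0xFF` disables Autorun for every drive type. The Python sketch below is illustrative only: it computes that policy and applies it (per-user) only when actually run on Windows. Group Policy or a plain `.reg` file achieves the same result; test any such change on a non-critical machine first.

```python
import sys

# Documented Windows policy key controlling Autorun behaviour.
AUTORUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def autorun_policy():
    """Return (key path, value name, DWORD data) that disables Autorun
    for all drive types (0xFF sets the bit for every drive class)."""
    return AUTORUN_KEY, "NoDriveTypeAutoRun", 0xFF

if sys.platform == "win32":
    import winreg  # only available on Windows
    key_path, name, data = autorun_policy()
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, data)
```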

If a system does need to retain network connectivity then consider placing it on a strictly-firewalled network segment. Consider applying a “default-deny” policy in both directions. For instance the only access required may be to a staging area on a local fileserver, in which case the only additional traffic expected might be with the local DNS resolvers and authentication systems.
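As an illustration of such a “default-deny” segment, the sketch below generates iptables-style rules that permit only a staging fileserver, a DNS resolver and an authentication server, then drop everything else in both directions. The addresses are placeholders from the RFC 5737 documentation ranges, not real systems; a real deployment would of course be configured on the firewall itself rather than generated like this.

```python
# Hypothetical essential services for an isolated XP segment
# (host, protocol, port) - documentation-range addresses only.
ESSENTIAL_SERVICES = [
    ("192.0.2.10", "tcp", 445),  # staging share on a local fileserver (SMB)
    ("192.0.2.53", "udp", 53),   # local DNS resolver
    ("192.0.2.88", "tcp", 88),   # local authentication (Kerberos)
]

def default_deny_rules(segment):
    """Build iptables-style rules: allow the essentials, then drop the rest."""
    rules = [
        f"-A FORWARD -s {segment} -d {host} -p {proto} --dport {port} -j ACCEPT"
        for host, proto, port in ESSENTIAL_SERVICES
    ]
    # Default-deny in both directions for the XP segment.
    rules.append(f"-A FORWARD -s {segment} -j DROP")
    rules.append(f"-A FORWARD -d {segment} -j DROP")
    return rules

for rule in default_deny_rules("198.51.100.0/28"):
    print(rule)
```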

Don’t forget the human risks: your precautions are futile if your users simply work around them because they see it as necessary in order to get their work done, for instance by reinstalling the software you removed, or by plugging a network cable back in. Be sure that possible usage cases have been considered as early as possible, and ensure that users understand why these measures are needed. You’re not doing it to be awkward, but to minimise the risks to their equipment and data while trying to minimise the inconvenience to their work.

It takes all the running you can do, to keep in the same place

Does it seem like you’re getting anywhere?
Image from Flickr by [MilitaryHealth], licensed under CC BY 2.0.

When you’ve finally dealt with that last Windows XP system (and the last Office 2003 installation), congratulations. Sadly, you’re unlikely to get much of a rest, as you’ll soon need to start worrying about the next one. End of support for Windows Server 2003 is in July 2015, Windows Vista in 2017.
Sometimes no explicit resourcing is required because you move to newer versions as part of natural system replacement cycles, but this will not always be the case, especially when dealing with software support lifetimes shorter than that of the hardware. It pays to ensure that your superiors are aware well in advance of when major upgrades need to be carried out, so that with luck the necessary resources can be made available in good time. Plan early, plan well, and stay safe.

Farewell to XP (part 1)

8 April 2014 marks the end of an era for many IT staff, and users too. After over 12 years, Microsoft will finally be terminating support for Windows XP, arguably its most successful operating system ever.

A little history

After over twelve years, the end of the line for Windows XP is fast approaching

Windows XP was released in August 2001, and it’s worth reflecting briefly on how different things were. A fairly typical PC might have a single-core 32-bit processor running at around 1GHz, 256MB RAM and 30GB storage. Windows NT and 2000 had achieved some popularity in business environments, but the old Windows-on-DOS platform dominated (though the less said about Windows Me the better). Away from the University network, domestic broadband was still something of a novelty and most users were still on dialup.

Meanwhile, Apple were a niche player, perhaps best known for the translucent CRT-based iMac, and only a few adventurous types had tried the new OS X 10.0, still in need of its training wheels. The iPod had yet to be released, and few people had ever heard of smartphones. The cellphone market was dominated by Nokia, producing handsets optimised for making telephone calls. The dominant web browser was Internet Explorer 5; a few people still stuck with Netscape. Sites such as Facebook, Twitter or Gmail remained years away; Wikipedia had fewer than 10,000 pages and few had yet heard of it.

Past security problems

Security threats were not unknown, although rarely financially motivated: users might be tricked into opening a supposed picture of a tennis player, unleashing a flood of emails, while the previous month had seen the Code Red worm infect hundreds of thousands of webservers before attempting, unsuccessfully, to attack the White House. The world had yet to experience the shock of the real-world attacks of September the eleventh.

It is perhaps not so surprising that Windows XP was not written with security in mind from the start. Of course the original Windows XP would evolve significantly, with three service packs offering substantial improvements in security and stability. From the University’s point of view, Service Pack 2 perhaps made the greatest difference, in that by default the Windows firewall was now enabled. The Blaster worm and its derivatives had resulted in over one thousand infections across the University network in a matter of days. This one simple change made such widespread network-based attacks far less likely; indeed the only attacks on a comparable scale we’ve seen subsequently attacked another operating system entirely.

What are the risks?

Doing nothing is really not an option. Each month’s Microsoft updates include fixes for multiple vulnerabilities in Windows. Some will have been identified by Microsoft, and some by other “white hat” researchers, but others are found first by the bad guys (“zero-days”), and only become known to Microsoft once successful exploitation is discovered. For any attacker finding a zero-day vulnerability in Windows XP today, should they use it now? Almost certainly not: if Microsoft have had twelve years to identify it, are they likely to do so within the next few months? If alerted to it while XP remains under support, they are likely to investigate and fix it as soon as possible. If the exploit isn’t used in anger until after 8 April, it may still be investigated and fixed in supported Windows releases, but Windows XP users may be sitting ducks indefinitely.

Ending support

XP was subsequently followed by newer offerings, with much-enhanced security features built in from the ground up. The unloved Vista was released almost seven years ago and was followed in 2009 by the far more popular Windows 7, then again last year by the radically different and much-criticised Windows 8. Retaining any degree of support for four such differing releases is clearly a substantial overhead even for a business the size of Microsoft. There comes a time at which they must decide that enough is enough and cut support.

I’m not aware of any other mainstream operating system which has retained support for such a long time, and so far, the nearest competitors have also been Microsoft products: Windows 2000 managed ten and a half years; Windows 98 managed eight years (after being granted a reprieve two years earlier). Windows Server 2003 will get twelve years. Red Hat Enterprise Linux will in time manage slightly longer, as the two most recent versions are scheduled to reach thirteen years early in the next decade.

Will this really be the end of support for XP?

If you’re rich enough, you may avoid it.
Image from Flickr by [garydenness] licensed under CC BY-NC-SA 2.0

Possibly. There is precedent for Microsoft granting a stay of execution: it happened with Windows 98. Support for Windows 98 was originally planned to end in January 2004, but after vocal protests it was extended for a further two and a half years, until July 2006. In late 2003, Windows 98’s share of the install base was probably comparable to Windows XP’s share today, and it shrank considerably during the extra thirty months.

It’s not impossible that Microsoft will do something similar this time, but we simply cannot afford to work on the assumption that it will. The situation is not really comparable. July 2006 was just over eight years since the release of Windows 98; we are already past twelve years with XP. And if I were Microsoft I would be very keen to avoid the perception of “crying wolf” over end of support dates. Last-minute extensions are a great way to annoy those who have put considerable effort into ensuring that they are ready for the originally-announced date, and encourage people to ignore the issue in future.

Even without a stay of execution, April will not be quite the end … if you’re rich. Microsoft’s Custom Support programme will offer patches for critical vulnerabilities to those who can afford them. But prices are in the “if you have to ask how much, you can’t afford it” league. Initial fees are estimated at $200 or more per system per year to retain access to critical updates, and minimum-purchase requirements will almost certainly render the programme unaffordable within the University. Since Microsoft will continue to produce the updates, they could decide to offer fixes more widely in the event of a particularly virulent infection, but would they actually do so? Perhaps, as a goodwill gesture, if a vulnerability were threatening the overall stability of the global internet, but for lesser threats I really wouldn’t want to bet on it. Play safe and upgrade.
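To see how quickly those fees mount up, here is the arithmetic at the reported entry-level estimate. This is a sketch only: the real programme involves negotiated pricing, minimum volumes, and fees that reportedly rise in later years.

```python
def custom_support_cost(systems, per_system_usd=200, years=1):
    """Estimated Custom Support fees at a flat entry-level rate (USD)."""
    return systems * per_system_usd * years

# A department with 250 remaining XP desktops, for the first year alone:
print(custom_support_cost(250))  # 50000, i.e. $50,000
```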

What should people in the University do?

The short, flippant answer is of course “upgrade”. But of course in reality it is not that simple, and the answer is a lengthy article of itself. I will therefore address this in detail in a second post.

Cruelty to cats: Apple’s new security support policy?

Smilodon skull

Is Apple hoping that their own big cats will soon go the way of Smilodon?

On Tuesday of last week, Apple proudly proclaimed the launch of their latest and greatest operating system, OS X 10.9 Mavericks. After over 12 years, they’ve finally run out of big cats and moved on to Californian placenames. What’s more, they’ve even removed one of the obstacles to upgrading by making the new release available free of charge. But, as a few others have noted, there appears to be a nasty sting in the tail if you look more closely.

Among the many security advisories released by Apple on Tuesday is a slight oddity: there’s one named OS X Mavericks v10.9, released for “Mac OS X v10.6.8 and later”. Listed are over 40 separate security fixes in OS X 10.9. Clearly these can’t be fixes for bugs in 10.9, since it’s just released; they are fixes for security problems in older versions of OS X. There are no security bundles or point releases which keep you on the old release; the message seems to be that everyone should upgrade to Mavericks. As far as Apple is concerned, those big cats are on the road to extinction.

Can we be sure? No. We have no inside view of what goes on among the corridors and conference rooms of Cupertino. But we can make an educated guess on the basis of the information available. Not least because this situation is strangely familiar. Compare the security advisory for OS X Mavericks v10.9 with that for iOS 7, or indeed earlier releases of iOS. The bugs may differ, but the overall structure is the same, and we know what the support position is with iOS: if you want security patches, you run the latest version. It’s free, so what’s stopping you? Your chosen device turns out not to be supported any more? Tough. The Apple Store is that way; go and be a good little capitalist consumer.

Apple’s policy on security support

Apple don’t appear ever to have issued any official public statement regarding security support for OS X. Nevertheless in recent years a pattern has been established, which can be extrapolated to predict the likely future position. Security fixes would appear for the current version of OS X and for the previous version, although some private comments suggested that support for the previous version was not guaranteed. Occasionally fixes might even appear for the previous-but-one release, especially since the Flashback malware struck in early 2012. The past few months have seen a handful of updates for 10.6.8, including Java (a vulnerability in which led to the Flashback outbreak), Safari and Quicktime, though nothing in the underlying operating system.

So why not upgrade?

Are you ready to upgrade yet?

You may ask why anyone would not want to upgrade to Mavericks. After all, it’s free. In 2012 I paid £20.99 to upgrade a Snow Leopard system to Lion; back in 2005 it cost me nearly sixty pounds to go from Panther to Tiger. The financial barrier to upgrading no longer exists.

I can think of several reasons why one might not want to upgrade, at least not yet:

Mavericks doesn’t support your hardware

You can’t really escape this one. Apple publish a minimum hardware specification for Mavericks. It’s similar, but not identical to, the requirements for Mountain Lion. There are certainly quite a few systems around which cannot be upgraded from Lion to Mountain Lion, including several in my department, although some people were simply waiting for the release of the new MacBook Pros before buying new hardware.

You avoid “dot zero” releases

It’s common for any new major software version to come with a whole load of interesting new bugs. Many people in the past have tended to wait until at least 10.n.2 before upgrading, because they don’t wish to be the ones effectively completing Apple’s beta testing. The bugs aren’t necessarily trivial, for instance the LDAP authentication bug that came with 10.7.0 which allowed users to authenticate successfully regardless of the password entered. That was no mere “teething problem” but revealed a fundamental flaw in Apple’s quality assurance.

Your applications don’t run on Mavericks

The California surf isn’t for everyone just yet

Not every software vendor is involved in Apple’s beta program and able to have updates available the moment a new release appears. Here in the university, three such applications are our network backup system (based on IBM’s Tivoli Storage Manager or TSM), Sophos Anti-Virus, and our whole disk encryption service.

In the past it has taken months for IBM to release an official TSM backup client for a new OS X release. A client built for an older release might work correctly, but there is a risk of unexpected problems, and it won’t be officially supported by IBM. We can allow users to back up at their own risk but still need to conduct some local testing: it would be irresponsible for us to let users back up without a reasonable degree of confidence that they will be able to successfully restore their data should the need arise. [Update, 4 November: the HFS team seem confident that there are no major problems, although there remains no official support from IBM]

Depending on the application, the failure mode may or may not be immediately apparent. We have heard of one University computer being rendered unusable following an attempt to upgrade in spite of advice not to upgrade until an application incompatibility can be resolved.

Before someone starts advocating Time Machine and Filevault, yes, they have their uses, especially for a home user, but are not necessarily appropriate in our environment.

A critical feature has been removed in Mavericks

Features come and go with each release. The ones that disappear aren’t necessarily well-publicised prior to release day. As an example, a friend has reasons to depend upon SyncServices and was somewhat disgruntled to find it gone in Mavericks. Finding an appropriate alternative takes time and effort.

You don’t have the connectivity to upgrade yet

Mavericks is a 5.29GB download. 5GB is a lot larger than a typical security update, even allowing for some of the large updates Apple have pushed out in the past. Some people are on slow or metered connections. In many rural areas, at least in the UK, the download might take several hours, during which the network may be effectively unusable for any other purpose. For people travelling, it may be several times larger than their monthly cellular data allowance or what can be downloaded over a hotel wifi connection overnight. In my case I can purchase extra allowance for my 3G stick, but it would cost me £75 to do so even if everything worked perfectly. And as a major research university we have people doing fieldwork in areas of the world that can only dream of such good connectivity.
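A rough back-of-the-envelope calculation illustrates the point (ignoring protocol overhead, and assuming the link runs flat out for the whole transfer):

```python
def download_hours(size_gb, link_mbps):
    """Hours to fetch size_gb gigabytes over a steady link of link_mbps
    megabits per second, treating 1 GB as 1024 MB and ignoring overhead."""
    size_megabits = size_gb * 1024 * 8  # GB -> MB -> megabits
    return size_megabits / link_mbps / 3600.0

# The 5.29 GB Mavericks image over a 2 Mbit/s rural ADSL line:
print(round(download_hours(5.29, 2.0), 1))  # about 6 hours
```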

You don’t have the time to upgrade yet

Again, a big one for a university. For a typical home user, it’s fairly straightforward to set the download running, and perhaps spend a few hours sorting out a few niggles of the new release. Great for them, but it doesn’t necessarily scale. It takes significant time and effort to upgrade a classroom full of systems. If you weren’t expecting to have to upgrade them until OS X 10.10 appears on the horizon (next summer?) then the necessary resources are devoted elsewhere. Upgrading might disrupt teaching, experiments, even examinations. Months of work may need to go into the set up and testing of a new release before it can be deployed.

Now, you may say that Apple aren’t much interested in the enterprise market, and I wouldn’t disagree with you. Nevertheless they have, historically, had a huge customer base within the educational sector. It wasn’t so long ago that support for the AppleTalk networking protocol was a key requirement of the university’s backbone network.

I can’t upgrade yet; what should I do to protect my computer?

As usual it’s all about risk. Do what you reasonably can in order to protect your computer, your information, and yourself. There is no such thing as “completely safe”, but you can take measures to reduce the probability of bad things happening. We cannot predict what the next major attack against OS X will be, but the more possible risks that are addressed, the less likely it is to hit you.

Applications and plugins

Mountain Lion

How do you stay safe with a Mountain Lion?

Bear in mind that a high proportion of attacks target vulnerabilities in applications, not the underlying operating system. For instance Flashback, the most widespread malware seen for Macs in recent years, targeted a vulnerability in Java. At the time, Java was supplied through Apple, and updates frequently appeared many weeks after their release by Oracle; this has subsequently changed. Many applications will continue to receive updates, possibly for a few years yet, but some will not, and it is important to understand where the risks lie.

The most vulnerable applications are those which can receive information directly from arbitrary places in the outside world. Generally those will be your web browser and email client, together with plugins and helper applications used to handle certain kinds of content: Java, Flash, Quicktime, PDFs, Office documents.
Without a clear statement from Apple as to which they will still support on older releases, we must make an educated guess based on the evidence currently available.

Apple released updates for Safari (and the underlying Webkit library used by other applications handling web-based input) for OS X 10.7 and 10.8 last week, so there are reasonable chances that this won’t immediately be a problem.

However, it is possible that Apple Mail will now receive fixes only on 10.9, given the inclusion of several mail-related vulnerabilities in the list of fixes for 10.9. Unless you’re particularly keen on Apple Mail you may wish to consider a different email client such as Thunderbird, or simply use webmail, until you upgrade to Mavericks.

Flash is not shipped by Apple so will likely remain supported by Adobe for the time being. Despite their change in policy after Flashback, Apple have still been distributing Java updates as soon as they are released by Oracle; given the negative publicity about Flashback it is likely they will continue doing so for the time being. The situation with Quicktime is less certain.

PDF handling is by default done through Preview; as part of the core operating system, this is unlikely to receive further updates on 10.7 or 10.8, so there may be some value in considering a switch to Adobe’s PDF reader on these platforms. For Office files, consider Microsoft Office (available at preferential rates for many University members) or the free (in multiple senses of the word) LibreOffice. If you are switching to third-party applications for particular filetypes, ensure they are configured as the default.

Follow good practice

A lot comes down to the good practice that we advocate all the time. Install antivirus software – it doesn’t guarantee 100% protection but is a lot better than nothing, and Sophos is available for free for members of the university. Ensure that all software is checking for updates on a regular basis, at least once a week (and much more frequently in the case of antivirus). Make sure any available updates get installed promptly. Consider using a firewall. OS X includes a basic software firewall: ensure it is enabled. A hardware firewall may offer better protection; many University colleges and departments have a firewall in place, and standard domestic broadband routers generally include at least a basic firewall capability. Exercise caution in opening email attachments, even if they appear to come from someone you know, or in downloading software from untrusted sources.

Plan on upgrading eventually

Finally, bear in mind that despite these measures, you still lack security support for the core operating system. Following the above advice is a stopgap that will prevent some (and possibly most) attacks, and buys you some time, but not infinite time – consider it advice to tide you over for perhaps a few months, but certainly not years. You still need to plan to upgrade at some point, but at a time that better suits you and your work, not Apple’s marketing department.

If you have hardware that can’t run Mavericks, and can’t afford Apple’s latest hardware offerings any time soon, remember that alternative operating systems do exist. There is a software company based in Redmond who will gladly sell you an operating system for any Mac released in the last seven years, though avoid Windows XP, otherwise you will find yourself in a similar situation next April. If you are more adventurous, free alternatives exist.

Take care and stay safe.

Posted in Apple, General Security

2013 FIRST Conference

View of a park in Bangkok

Two members of OxCERT attended the 25th annual conference of FIRST (Forum of Incident Response and Security Teams), held at the Conrad Hotel in the bustling city of Bangkok, Thailand. This year’s hosts were ThaiCERT, the Electronic Transactions Development Agency and the Ministry of Information and Communication Technology. It was a packed schedule over five days, but here are some of the highlights.

The conference kicked off in grand style on Monday morning with opening remarks from Her Excellency Ms. Yingluck Shinawatra, Prime Minister of Thailand. The Prime Minister welcomed us all to Thailand and discussed the benefits the Internet can bring to all people, and the security necessary to preserve those benefits.

The Prime Minister’s appearance was followed by the first keynote speech of the conference, given by James Pang, discussing Interpol’s role in facilitating international police cooperation to combat cyber crime. According to Interpol, 14 people around the world fall victim to cyber crime every second.

The second day began with opening remarks from Chris Gibson and a quick video showing the fantastic job ThaiCERT did in organising this year’s football tournament. The first session of the day was a keynote speech from Dr. Paul Vixie of the Internet Systems Consortium. Paul talked about some of the botnet takeovers he has been involved in and some of the problems associated with sharing information from those takeovers. To address these problems the ISC has created the Security Information Exchange, which is designed to be a scalable framework for information sharing; this may be a useful resource for us in the future.

View of rooftops in Bangkok

On Wednesday morning Jeff Bollinger, Brandon Enright and Matthew Valites from Cisco gave a presentation titled “Winning the game with the right playbook”. During this interesting talk the team from Cisco highlighted the importance of going beyond predefined reports generated by security equipment to create succinct reports tailored to the individual environment.

The talk went on to discuss the use of Splunk to aggregate data from a variety of sources based on common fields such as timestamp and IP address. We collect information from multiple sources and much of it is queried and correlated by hand, a service such as Splunk that could manipulate that information could be very useful.
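The kind of correlation described can be sketched in a few lines of Python; the log records, field names and IP addresses below are entirely invented for illustration, and a real deployment would of course use Splunk (or similar) rather than hand-rolled code.

```python
from collections import defaultdict

# Hypothetical log records from two different sources, sharing an "ip" field.
firewall_events = [
    {"ip": "192.0.2.10", "ts": "2013-06-26T10:00Z", "action": "blocked"},
    {"ip": "192.0.2.99", "ts": "2013-06-26T10:02Z", "action": "allowed"},
]
ids_alerts = [
    {"ip": "192.0.2.10", "ts": "2013-06-26T10:01Z", "sig": "botnet-c2"},
]

# Group every record by source IP so an analyst sees one combined view
# per host instead of querying each source separately.
by_ip = defaultdict(list)
for source, records in [("firewall", firewall_events), ("ids", ids_alerts)]:
    for rec in records:
        by_ip[rec["ip"]].append((source, rec))

print(len(by_ip["192.0.2.10"]))  # two correlated events for this host
```

The same idea extends naturally to correlating on timestamps within a window, which is essentially what the talk described doing at scale.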

After lunch Tomasz Bukowski from NASK/CERT Polska and Arseny Levin and Rami Kogan from Trustwave Spiderlabs gave talks about various types of malware and some of the techniques malware authors use. It’s helpful for us to have a good understanding of the way different pieces of malware behave so we stand a better chance of detecting them on our network.

Wednesday night was the night of the conference banquet; this year we were driven through Bangkok to the Siam Niramit theatre (holder of the Guinness world record for the tallest stage). At the theatre we were treated to an impressive show based on Thai history and culture, complete with a live elephant! After the show we had a delicious Thai meal before heading back to the hotel.

Statue near a temple in Bangkok

On Thursday morning John Kristoff of Team Cymru gave a presentation on security issues related to IPv6. As we all know, IPv6 will come into mainstream use sooner or later, and it is likely to be well worth the time and effort to be prepared from a security point of view when it does.

Michael Jordon from Context finished the day with an interesting talk on using AI to detect malicious domains from registrar information. He described a proof of concept he has been developing that uses Bayes’ theorem to determine how likely a domain is to be malicious. The idea of using artificial intelligence for this sort of purpose is an interesting one, although as the field is still in its infancy we may have to wait some time before we can practically make use of it.
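The underlying calculation can be sketched as follows. This is a toy illustration of naive Bayes only: the features, prior and likelihoods are all invented for the sketch, and bear no relation to the actual proof of concept described in the talk.

```python
# Toy illustration of Bayes' theorem applied to domain classification.
# All probabilities below are made up purely for demonstration.

def p_malicious(features, prior, likelihoods):
    """P(malicious | features) via naive Bayes.

    likelihoods maps feature -> (P(f | malicious), P(f | benign)).
    """
    p_mal, p_ben = prior, 1.0 - prior
    for f in features:
        l_mal, l_ben = likelihoods[f]
        p_mal *= l_mal
        p_ben *= l_ben
    return p_mal / (p_mal + p_ben)

# Invented registrar-style features with invented likelihoods:
likelihoods = {
    "registered_days_ago<30": (0.6, 0.05),
    "privacy_protected_whois": (0.5, 0.2),
}

score = p_malicious(list(likelihoods), prior=0.1, likelihoods=likelihoods)
print(round(score, 3))  # a domain showing both features looks suspicious
```

Even with a low prior (here 10% of domains assumed malicious), two features that are individually common among malicious domains push the posterior well above one half.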

The final day of the conference was a short one. Lauri Korta-Parn and Masako Someya from the Cyber Defense Institute Inc. gave a talk on improving the cybersecurity capabilities of critical infrastructures. The talk began with some examples of cyber attacks targeting critical infrastructure around the world, including the Stuxnet worm, which targeted uranium enrichment facilities in Iran.

We may not be processing uranium at the University but we do have IP based control systems for various pieces of equipment and must ensure that they are properly secured.

Finally all that remained was to say goodbye to the other delegates and have a final look around Bangkok before heading back to the airport for the long flight home. Overall this has been a very interesting and informative conference and has given me plenty of food for thought. FIRST and ThaiCERT have done an excellent job and I’m sure everyone will be looking forward to next year in Boston!

Posted in FIRST Conference

Apple support lifetimes strike again

Wednesday saw the official launch of Apple’s iOS version 7, the operating system behind the iPhone, iPad and iPod Touch. But as with some previous updates, there’s a bit of a sting in the tail.

I’ve complained about Apple’s security support in the past, in the context of desktops. When it comes to phones and tablets, things appear even worse. Apple have never, to the best of my knowledge, issued any official statement about security support for versions of iOS, but all past evidence has suggested that once a new major version is released, support for earlier versions ceases entirely. There is certainly no reason to believe that things are any different with iOS 7.

What won’t run iOS 7?

iOS 7 will run on all current Apple phones and tablets, as one might expect, and on many older devices – as far back as the iPad 2 and iPhone 4. Support for the venerable iPhone 3GS has finally been terminated, probably on the grounds that its 256MB of RAM is insufficient for the demands of the new release. The 3GS was released in June 2009, long enough ago that many who purchased it will by now have gone through at least one phone upgrade cycle. Nevertheless, it remained a current Apple product until the iPhone 5 was released a year ago.

With the iPod Touch, things are a little different. While the iPhone 4 has 512MB of RAM, the fourth-generation iPod Touch, which was released around the same time, comes with half that; consequently it is not supported by iOS 7. This is a product which Apple officially discontinued just four months ago.

It doesn’t even end there. Recently-discontinued models can frequently be found on the Refurbished Store. While I found nothing yesterday on the UK store, on the US store, they had five different models of 4th generation iPod Touch available. Complete, it is claimed, with one-year warranty:

Apple US Refurbished Store, 19 September 2013.

I’m no lawyer, and I’ve not seen the small print that comes with these devices, but I’d like to know the legal position if Apple refuse to fix known security vulnerabilities under the warranty.

Apple have done similar things with iOS devices in the past. For instance, software support for the iPhone 3G was suddenly dropped in March 2011, about 32 months after its initial release, and 8 months after they ceased selling it. Support for the original iPad was dropped with the release of iOS 6 a year ago, eighteen months after the product was discontinued.

What are the risks?

How dangerous is it to run an unsupported operating system on a mobile device? As is so often the case in the world of security, it depends.

New iOS releases typically fix a large number of vulnerabilities, and iOS 7 is no exception. It is likely that Apple have known of many of these for months but prefer to bundle updates together, unless there is a pressing reason to issue them earlier (such as widespread exploitation in the wild).

Windows desktops remain the target of choice for malware authors, but other platforms do get attacked, as with the OS X Flashback virus. And as time progresses, the population of vulnerable devices increases. While ardent Apple fans may rush out to get the latest Apple products, many older devices will get sold on or given to friends and family. It may be difficult to produce successful, profitable malware for iOS, but that’s not to say it’s impossible, and if something major does strike, antivirus is not going to save users. Malware for Android certainly exists in spite of the hugely fragmented version base. With iOS, one can be sure of tens of millions of devices still running iOS6 (or earlier), some of which will be used for activities such as online banking or credit card purchasing, which are of particular interest to criminals.

Personally, I’d want to minimise the amount of my personal data (and indeed anyone else’s) exposed to an unsupported system, and handle anything sensitive on a fully-featured desktop or laptop computer, at the expense of convenience. Others may judge the risks differently, but I do wonder just how many users are even aware?

What should one buy and when?

If going down the Apple i-device route, then without any official end-of-support announcements, all one can do is try and predict the time to buy which is likely to give the longest period of support. Watch out for new products offering a significant performance increase (for instance a doubling of internal RAM – not to be confused with the gigabytes of flash storage), or with a significant architectural change (for instance, the new 5S is the first with a 64-bit processor). Buy the latest model, soon after its release. Last year’s model may still be available for less money, but will probably lose support at least a year earlier.

It’s worth briefly noting that things are different in the Android world. Multiple major releases of Android are simultaneously supported, which is good news. Less so is the reliance in many cases on the handset manufacturer and (frequently) your chosen carrier in order to get updates. Often users are lucky to receive any updates whatsoever, especially out of the initial contract period. Android malware is widespread even if much of it is relatively benign.

What should Apple be doing?

I don’t expect Apple to be able to support all devices forever. Clearly the need to support old devices should not stand in the way of innovation and improvement. There are overheads to supporting multiple releases simultaneously, in terms of managing security patches (although many will be common to multiple releases), and in running an app store where not all apps will run on an older release. These are not insoluble problems, especially to such a wealthy company, but ultimately a business will want to see a return on such an investment.

Where is the return in supporting older devices? Consumers have already bought them. An unsupported device may still provide revenue for Apple through purchases of music, video and apps, but if the user will purchase those irrespective of support, why bother with the expense? If the consumer does encounter problems, persuade them to buy a shiny new device. As long as consumers are unaware of the risks, are aware but accept the risks, or are aware and promptly buy a new i-Device, the incentive isn’t there. What might change this is bad publicity: that surrounding Flashback did result in some changes with regard to security support for OS X, and we note that occasionally, but not consistently, security updates still appear for Snow Leopard as well as for Lion and Mountain Lion. It may take a comparable outbreak on iOS to get Apple to change their attitude to the platform, and sooner or later such an outbreak is likely to hit.

What I would like to see is a commitment to providing an operating system with full security support for a minimum period of time for every device. For mobile devices, perhaps four years after initial release, and two years after last sale, and for desktops and laptops, seven years from initial release and five years after last sale.

But I won’t wait up.

Posted in Apple, General Security

Aaaarrrggghhhh – ye be hacked!

Ahoy me hearties! Talk like a pirate day it may be but thar be good reasons why it’s not a good idea t’ act like one online.  Pillagin’ the Internet for booty might seem attractive t’ some bilge-sucking scallywags and ye can see why sometimes.  A recent study by Ofcom cites cost, availability o’ desired material and convenience as the main reasons why people choose t’ hornswaggle when it comes t’ online content and more than half of ye Internet users have downloaded or streamed infringin’ material at some point.  The Ofcom study also suggests that infringement notifications or technical measures, as foreseen by the Digital Economy Act, were unlikely to change behaviour but avast ye for here be a couple o’ good reasons why your booty may turn out t’ be cursed!


Image from Flickr by [skylervm] licensed under CC BY-NC-SA 2.0

For starters the University takes its response t’ copyright infringement notices seriously (even if some users don’t) but they incur a cost t’ process and respond t’.  That cost be passed on t’ a user’s college who are then left t’ decide if and how t’ discipline the scurvy bilge rats.  That might well be walkin’ the plank but is more likely t’ be a monetary fine which exceeds the value o’ the pieces of eight ye might have saved by not payin’ for the content in the first place.

Secondly copyright infringement can be a great way t’ scuttle your computer and get yourself hacked.  If ye be downloading software from untrustworthy sites then how do you know what it does and whether or not some scurvy dog has altered it t’ do something nasty?  If you’re usin’ a knocked off operatin’ system (Windows or OS X which your messmate has given ye for example) then ye won’t be able t’ repel boarders or batten down those (security) hatches.  Not bein’ able t’ apply critical security updates leaves ye floatin’ like a sittin’ duck whenever ye do anythin’ online. And thar be some truly fearsome buccaneers out thar! Disreputable sites servin’ unlicensed films, music and books can also be breedin’ grounds for exploits and malware distribution.

So, if ye want to stay shipshape then, by all means talk like a pirate, but don’t act like one.  Savvy?

If ye be navigating t’ Oxford this term why not draw alongside on Thurs 24 Oct 12:30-13:30 t’ learn how t’ secure your PC or Mac, and read more about how t’ protect yourself on the InfoSec website?

Posted in Information Security

Content filtering and the University

The issue of web content filtering is one which crops up every so often within the University. Do we give our users the freedom to visit any content they like, provided that University regulations (and the law of the land) are not broken, or should technical restrictions be put in place, for instance to stop them from viewing offensive content or just to stop them wasting entire afternoons on Facebook?

The subject of filtering is attracting considerable attention at a national level at present, with the Government and the major ISPs seemingly being at loggerheads over the matter.

The University’s position

The University’s position on web content-filtering is, as with many things, essentially to devolve the matter. Very little is done at a central level, and to some staff this comes as something of a surprise. The only exceptions are for certain malicious content, for instance domain names used by botnet controllers, or phishing sites which pose a particular threat to the University. Even there, the restrictions are not applied to all University networks, and we must be very careful to minimise the possibility of false-positives.

Our constituent colleges and departments are given the freedom to do their own thing. From the centre we have limited visibility as to what is done at a local level, other than by asking. Recent (unscientific) enquiries have given us some idea. In many cases the answer is nothing. Some others do some filtering of security threats only, one or two seek to limit access to Facebook and not much else, and a handful impose fairly stringent restrictions. From the responses, it seems departments are more likely to do so than our constituent colleges, perhaps reflecting concerns over confidential data and staff productivity, while colleges prefer not to impose restrictions on their student accommodation.

Some history

A university web proxy error message from around 2000

Back in 1999, the University introduced an intercepting web caching proxy, more for political and financial reasons than for technical ones. At the time the University was severely constrained by limited bandwidth on its external connection (somewhat less than many have on domestic broadband these days), there was no immediate prospect of an upgrade, and transatlantic traffic would frequently slow to a crawl by mid-afternoon. Initially there were one or two experiments with content filtering, but most were rapidly withdrawn after objections were raised. Some security-related blocks (for instance against the Code Red worm) proved extremely useful and caused little or no disruption to legitimate traffic. Bandwidth-limiting of certain content also proved fairly successful at controlling the limited available bandwidth without blocking content entirely, and offered a great degree of flexibility. Very few complaints were received, and following a major upgrade to the University’s connectivity, the restrictions (and later the proxy itself) were removed.

These days, if asked to advise regarding content filtering within the University network, how would we answer? As with many topics in security, the short answer is, “it depends”. Many of the perceived advantages introduce new risks. To a great extent it depends on what the college or department is looking to achieve, but it is worth noting that technical measures are frequently a poor solution to social problems. Doing almost anything can lead to accusations of “censorship” or denying “academic freedom” – indeed, there were a handful of dissenters when we introduced email antivirus filtering over a decade ago.

Malicious content

Blocking malicious content is relatively uncontroversial. Nevertheless it can be prone to false positives. For instance, you probably don’t need to block news articles entirely because they pull in malicious adverts from a third-party site, and your users will object if you do. Nor do you necessarily want to block an entire domain because of one malicious item. We’ve known the University’s entire domain be blacklisted by one product on account of one of hundreds of servers within the University hosting “malicious” content. The offending content was a “white-hat” tool which had been offered at the same location for well over a decade. Actions need to be proportionate to the threat.

Nevertheless, most malware can be blocked without your users ever noticing. They’re more likely to notice blocks against phishing sites – indeed we’ve received occasional complaints from our users that our blocks are preventing them from “verifying their account” or whatever the phishers are asking them to do. Ensure your users are presented with a clear, informative error message, preferably one specific to phishing attacks.

Where do your users want to go today?

Things get trickier once you start blocking access to content that your users really want to reach. They’ll try and find a way round it, or find a friend who can. You might notice and block their workaround. They’ll soon find another. A colleague of ours assures us that many freshers will be perfectly capable of learning how to defeat any content-filtering on their college network – they’ve learned about such things in order to defeat tighter restrictions on their school (and possibly home) networks. When we tried blocking access to Napster and similar, we rapidly realised just how many services existed that offered our users a workaround – and those were just the ones that used HTTP on port 80. It rapidly turns into a huge game of “whack-a-mole” that we were bound to lose.

Driving traffic towards anonymising services, VPNs, Tor and so forth may present a bigger risk than the problem you are trying to address. Malicious traffic may no longer be blocked by firewalls or intrusion prevention systems, or detected by OxCERT’s monitoring. If a Tor user accidentally configures their system as an exit node, you may find that an IP address on your network becomes the apparent source of external users’ traffic. Perhaps not a major concern if it allows foreign users to bypass the censorship of an oppressive regime, but very much your problem if it results in accusations of copyright infringement or accessing of child sexual abuse content.

Recreational versus “work” usage

Some workplaces desire to prevent students and/or staff from accessing sites unrelated to their work. For some that may just be a restriction on access to Facebook or Twitter. But not everyone’s usage of social media will always be recreational: it’s not uncommon to use them for publicity purposes, specialist news items and suchlike. Or perhaps, while not directly related to the job, they’re nevertheless beneficial – for instance the local bus company uses Facebook to post service updates. In severe weather, staff may regard access to timely information regarding transport, school closures and so forth as essential.

There is a risk that introducing filtering will upset users, especially if they consider it to interfere with work, or what they consider to be “reasonable” recreational usage. Policies need to be clearly communicated in advance, preferably with the reasoning behind them. The process for requesting exceptions needs to be straightforward, transparent and quick.

The strictest policy would be a default-deny, restricting users to accessing only those sites required as part of their work. Enumerating all such sites may be difficult, especially when webpages commonly pull in content from other sites.

As an example: consider a member of staff who was in the habit of playing a particular online game in his lunchbreak. One day, this resulted in his desktop becoming infected with a virus as a result of exploitation of a Java vulnerability.
The “knee-jerk” reaction in some organisations might be to impose stringent restrictions on usage of anything but directly work-related sites. But in some roles that might be extremely difficult – the staff may need to access all sorts of sites as part of their job. Do you really want the overheads of dealing with requests for exceptions all the time? What is the problem you are actually trying to address? Why did this system get infected? While the site in question was recreational in nature, the user had no reason not to trust a site they’d used dozens of times before. On this occasion it happened to deliver a Java exploit, most likely through third-party content. Why did that exploit succeed? Because a critical Java patch had not been applied. Rather than putting resources into content-filters and strict restrictions and all the problems those bring, perhaps they should be directed at better, more timely patch management.
Clearly that will not always be appropriate. Desktops used to control a nuclear power plant probably shouldn’t be able to access arbitrary internet content.

Adult content

With apologies to Botticelli

Attempts to filter pornographic/offensive/adult content are not uncommon, but how is that defined? “I know it when I see it” won’t wash when configuring your firewall. Despite the launch of the .xxx domain, the internet is not conveniently segregated into “porn sites” and “acceptable content”. Many sites, Wikipedia included, may have some content deemed offensive but a lot considered perfectly acceptable. Trying to compile a list of “sites containing porn” is futile enough; trying to compile a list of every offending URL will be impossible.

Better filtering might use some kind of heuristics, but even so, where is the boundary between acceptable and unacceptable content? Attitudes vary hugely depending on culture, context, and indeed individuals. Nudity is extremely common in the Fine Arts (some very explicit content is openly displayed in the Musée d’Orsay in Paris). Some blocked sites you may consider perfectly legitimate but users may be too uncomfortable to request exceptions – examples frequently cited are sites dealing with issues of health or sexuality.

As one IT officer puts it: if a college blocks pages, you risk press accusations of censorship; if it doesn’t block anything, you risk stories about Oxford allowing students to browse smut. Damned if you do, damned if you don’t. And this article will no doubt be damned by some content filters for use of the word “damned”. Such “profanity”-based systems have, often rightfully, received a lot of flak over the years. Systems which block information about Scunthorpe, news of Hilary Swank, or discussion of Cleopatra’s bathing arrangements simply because they contain offensive strings are frankly unfit for purpose.
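The failure mode of such systems is easy to demonstrate. The sketch below implements the naive substring matching that crude filters use; the blocklist entries are illustrative only, and no real product's word list is being reproduced here.

```python
def is_blocked(text: str, blocklist: list[str]) -> bool:
    """Naive substring matching, as used by crude profanity-based filters."""
    lowered = text.lower()
    return any(word in lowered for word in blocklist)

# Illustrative blocklist entries; real filters carry far longer lists.
blocklist = ["ass", "sex"]

# Perfectly innocent text triggers the filter (the "Scunthorpe problem"):
for name in ["Essex", "Sussex", "Middlesex", "classic literature"]:
    print(name, is_blocked(name, blocklist))  # every one is blocked
```

Every item printed is blocked, despite none of them being remotely offensive, which is precisely why substring-based filtering keeps blocking place names and surnames.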

Illegal content

Many of the large domestic ISPs in the UK have taken action to block content which is illegal (at least under UK law) to access, based on a list of content managed by the Internet Watch Foundation (IWF). This certainly helps to guard innocent users against accidentally stumbling across some deeply unpleasant content, and will likely deter some of those with no more than a casual curiosity regarding such material, but as mentioned above, those sufficiently determined will find a way round the restrictions.

The underlying Cleanfeed technology behind the blocks is certainly ingenious but far from perfect. In 2008 the blacklisting of an item on Wikipedia drew considerable attention to the system, caused problems for many legitimate users of Wikipedia, and became a textbook example of the Streisand Effect. The Cleanfeed system has subsequently been used to impose blocks beyond its original remit, for instance in order to comply with court orders to block piracy sites, and may be leveraged for further purposes according to the whims of future courts or governments.

What alternatives exist?

Warn users then let them proceed at their own risk?

The alternatives depend on the reasons for wanting filtering in place. Clearly, doing nothing will be technically possible, but may not go down well from a political point of view. If you’re primarily worried about bandwidth utilisation, some form of traffic shaping may be acceptable. If you’re concerned about people viewing offensive content in a general computer room, a simple approach that has worked in the past is to print out the IT Regulations in a large font, highlight the relevant clauses, and stick it on the wall as a reminder. If you have concerns about under-18s on your network as part of a summer school, you may be able to shift the responsibility onto the summer school organisers.

If the concern is over staff “wasting” time on Facebook during working hours, take a step back. Are they getting their work done to the satisfaction of their line managers, and if so, is a little recreational internet usage actually a problem? If they’re not working hard enough, are technical measures really the most appropriate solution? There are many other distractions that may affect staff performance, and a lot of them would never be considered a matter for IT to deal with.

One department told us that while they block malicious content, trying to view content in other categories (e.g. “adult”, “illegal drugs”, “gambling”) will simply produce a warning message; users can then proceed at their own risk.


We certainly don’t wish to give the impression that content-filtering is always to be avoided. As with many things, there are pros and cons, and this post has concentrated on the negative aspects which may not immediately be apparent. What seems like a good idea in the first instance may have significant ramifications. What we do suggest is that those involved in determining policy are fully apprised of both the advantages and disadvantages, and appreciate that in most cases a perfect solution will be impossible to achieve. Users obviously need to be made aware of policies and revisions, whether enforced by technical or social means, and of any monitoring in place, including details of what is logged, to whom it is visible, and under what circumstances it will be used.

How well will people be informed in the case of government-mandated filtering imposed by the major ISPs? I’m not hopeful. My domestic broadband is obtained through a relatively small ISP who, to the best of my knowledge, impose no filtering whatsoever. My cellular data provider, however, does filter “adult” content by default (not that I have found this to be a problem). I don’t recall them going to any effort to ensure I was aware of this, of the possible consequences, or of the procedures required for opting out. If the government get their way with “default-on” filtering, whether domestic ISPs will do any better remains to be seen.

Posted in Web Security

FIRST Technical Colloquium, Amsterdam: day two

For the first day of the meeting, see

The second day began with a talk by Martijn van der Heide of KPN-CERT on information sharing following a botnet takedown. This led to considerable discussion as to what actions were considered appropriate or even lawful, particularly if the source of the data is somewhat “questionable”. All too often law enforcement prove unable or unwilling to take action, but it is clear that people’s personal data has been stolen and it is natural to feel that they should be alerted.

Next was a talk from Trend Micro regarding a “customer” of theirs in the so-called “underground” internet – those dealing in malware and stolen data – and the interaction between “underground” tools and systems operated by the AV industry, which gives some visibility into underground activities and potential upcoming threats.

This was followed by a talk on Spamhaus, providers of blacklisting information primarily regarding spam sources. They are keen that security teams and network administrators should have access to details of spam sources within their own networks. We are currently getting some information indirectly, but may be able to obtain additional information from Spamhaus themselves, allowing us to better deal with the problem.

Paul Vixie then spoke again, splitting his talk into two sections. The first concerned general internet public safety, in particular the problem of legitimate DNS servers being used as part of traffic amplification attacks. These result from several problems: firstly, many ISPs do not have adequate checks in place to ensure that traffic leaving a customer’s network uses source addresses belonging to that customer. Secondly, DNS generally uses UDP, which is stateless. Thirdly, DNS servers may return answers many times the size of the original request. Fourthly, many DNS servers will accept queries from any client, either because they are recursive nameservers deliberately or accidentally configured to do so, or because they are authoritative nameservers whose function is to be globally accessible. The combination means that DNS servers can easily be abused as part of very large denial-of-service attacks, sometimes amounting to tens of gigabits of traffic in total. Most authoritative DNS servers, BIND included, can be secured against such abuse by taking advantage of recently-introduced response rate-limiting (RRL) features. Paul showed a bandwidth graph from a major DNS provider, with a twenty-fold reduction in their outgoing traffic as soon as RRL was enabled.
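For illustration, enabling RRL in BIND is a matter of a few lines in named.conf (the feature is built into recent BIND 9 releases, and available for earlier ones via the RRL patch). The numbers below are purely illustrative, not tuning recommendations:

```
options {
    // Limit the rate of identical responses sent to any one
    // client network, measured over a sliding window in seconds.
    rate-limit {
        responses-per-second 5;
        window 5;
    };
};
```

Legitimate clients rarely repeat the same query many times a second, so modest limits like these blunt amplification attacks while leaving normal resolution untouched.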

The second part of Paul’s talk concerned the ISC’s Security Information Exchange, a mechanism to share security-related information in realtime between trusted partners, and the mechanisms and formats ISC are using to receive and provide data.

After lunch in Cisco’s excellent canteen, and interesting conversation with some of their CSIRT staff, the first speaker was from NATO on “CSIRT Knowledge Management Needs”. This presentation considered the requirements for effective information exchange between security teams, including issues such as data quality, trust of other teams and effective integration of internal and external information sources.

This was followed by Jaeson Schultz of Cisco discussing “bitsquatting” attacks: the possibility that single-bit memory errors in computer systems result in DNS queries for domain names differing by a single bit from the intended name. Real-world data show small numbers of lookups for such domains, and some have indeed been registered by third parties. Significant volumes of attacks are nevertheless unlikely, as there are much greater returns to be had from more conventional attacks.
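The candidate domains are easy to enumerate: flip each bit of each character and keep the results that are still valid hostnames. A short sketch (restricted to lowercase names for simplicity):

```python
import string

# characters permitted in a (lowercase) hostname label
VALID = set(string.ascii_lowercase + string.digits + "-")

def bitsquat_variants(domain: str) -> set[str]:
    """Return every domain obtained by flipping one bit of one
    character, keeping only variants that remain valid hostnames."""
    variants = set()
    for i, ch in enumerate(domain):
        if ch == ".":              # leave label separators alone
            continue
        for bit in range(8):
            flipped = chr(ord(ch) ^ (1 << bit))
            if flipped in VALID and flipped != ch:
                variants.add(domain[:i] + flipped + domain[i + 1:])
    return variants
```

A defender can register or monitor these variants for a high-value domain; an attacker would do the same in reverse.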

MyCERT, the national team for Malaysia, gave a presentation describing the tools they use for handling security incidents. Like us they find phishing a major problem, though the attacks they see concentrate on credentials for bank accounts rather than email. Indeed they have developed their own browser addon to defend users against phishing sites, albeit only those attacking Malaysian banks. They also described searching for defaced websites in Malaysia using information from Zone-H and other sources, a process very familiar to us.

Godert Jan van Manen of Northwave then described analysis of the Torpig (aka Sinowal) malware, an information stealer which we have seen in numerous incidents around the University along with the related Mebroot rootkit. He described the behavioural information that could be learned through a detailed analysis, including the domains used for the control channels.

The final talk of the event was from the Chief Security Architect at ING, a major Dutch multinational bank, looking at the mechanisms behind fraudulent financial transactions and the tactics employed by banks to counter them. The importance of behavioural analysis in distinguishing malicious usage from legitimate was described. This felt particularly relevant to the attendee who inadvertently triggered his bank’s anti-fraud mechanisms while trying to book the train to Amsterdam to attend the meeting!
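The behavioural idea can be illustrated with a deliberately toy example (nothing like a real bank's model, which weighs many signals such as device, location and timing): flag a transaction whose amount is far outside the customer's own history.

```python
from statistics import mean, pstdev

def anomaly_score(amount: float, history: list[float]) -> float:
    """Toy behavioural check: z-score of a transaction amount
    against this customer's past amounts. Higher means more
    unusual for this particular customer."""
    if len(history) < 2:
        return 0.0                       # not enough history to judge
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return 0.0 if amount == mu else float("inf")
    return abs(amount - mu) / sigma

history = [20.0, 35.0, 25.0, 30.0]       # a customer's routine spending
# a routine purchase scores low; a sudden large transfer scores high
routine = anomaly_score(27.0, history)
unusual = anomaly_score(900.0, history)
```

The point of per-customer modelling is that the same transfer may be entirely normal for one account and highly suspicious for another.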

In all, this technical colloquium proved a very worthwhile event, with many talks on areas of particular relevance to the team’s activities, giving us plenty of “food for thought” with regard to additions and improvements to our own systems and processes. We thank Cisco for hosting the meeting, and look forward to attending future FIRST events and hope that they will run as smoothly.

Posted in FIRST Conference | Comments Off

FIRST Technical Colloquium, Amsterdam: day one

Last week one of us attended a FIRST technical colloquium, generously hosted by Cisco in their offices in the suburbs of Amsterdam. Somewhat unusually, this was the second FIRST TC of the year to be held in Europe; nevertheless the event was well-attended, unsurprisingly with a strong presence from the Dutch teams and from Cisco themselves.

Proceedings started with a talk on Cuckoo Sandbox, an open-source tool for automated malware analysis. This is a topic of some interest to us, as we have been intending for some time to set up our own malware analysis system, but commercial systems can be extremely expensive and we lack the resources to develop our own from scratch. Cuckoo comes across as well-suited to our requirements, with a good and ever-expanding featureset. Unlike some commercial vendors we’ve previously encountered, the speaker was happy to admit some of the limitations of sandboxing, not least that malware authors may include code to detect when they are running in a sandboxed environment and adjust the malware’s behaviour accordingly. He also stressed the importance of making effective use of the information gained through use of the software.

Next was a talk from Seth Hanford of Cisco on the development of version 3 of the Common Vulnerability Scoring System (CVSS). The current version was launched in 2007 and is widely used in the security industry, not least by us in assessing which vulnerability announcements merit bulletins to IT staff across the university. Nevertheless, experience has shown that the system is not perfect and presents some opportunities for confusion, and it is hoped that version 3 will address these problems.
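For reference, the current (v2) base score is a fixed arithmetic formula over metric weights published in the specification, which is why scores are reproducible across vendors. A sketch of the base-score calculation:

```python
# CVSS v2 base-metric weights, from the public specification.
AV = {"N": 1.0, "A": 0.646, "L": 0.395}    # Access Vector
AC = {"L": 0.71, "M": 0.61, "H": 0.35}     # Access Complexity
AU = {"N": 0.704, "S": 0.56, "M": 0.45}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Confidentiality/Integrity/Availability impact

def cvss2_base(av: str, ac: str, au: str, c: str, i: str, a: str) -> float:
    """Compute the CVSS v2 base score from the six base metrics."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)
```

So the classic worst case, AV:N/AC:L/Au:N/C:C/I:C/A:C, scores 10.0, while a remotely exploitable flaw with partial impacts (C:P/I:P/A:P) scores 7.5.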

This was followed by some talks on DNS-related issues. First was Paul Vixie from ISC, perhaps best known as former maintainer of the BIND nameserver software, co-founder of the original Realtime Blackhole List anti-spam measure, and as self-confessed holder of the record for the “most CERT advisories due to a single author”. Paul’s talk was on Response Policy Zones (RPZ), a feature added to recent versions of BIND as a means of providing a “DNS firewall”, allowing DNS server maintainers to prevent client access to systems based on domain name rather than IP address. This is a more advanced implementation of something that we have done at the University’s central nameservers for over eight years, and something which we are keen to explore further over the coming months. A second talk on RPZ followed, exploring the practicalities of implementation and operation.
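The effect of an RPZ rule can be sketched as a policy check at lookup time. This toy version (illustrative only, not BIND's implementation) mirrors the two most common trigger types: an exact query-name rule and a wildcard rule covering all subdomains:

```python
from typing import Optional

def rpz_action(qname: str, policy: dict) -> Optional[str]:
    """Return the policy action for a query name, if any rule matches.
    Policy keys are either exact names ("bad.example") or
    wildcards covering subdomains ("*.bad.example")."""
    name = qname.rstrip(".").lower()
    if name in policy:                    # exact QNAME trigger
        return policy[name]
    labels = name.split(".")
    for i in range(1, len(labels)):      # walk up towards the root
        parent = ".".join(labels[i:])
        if "*." + parent in policy:      # wildcard trigger
            return policy["*." + parent]
    return None

# an example policy: rewrite lookups for a malicious domain to NXDOMAIN
policy = {"bad.example": "NXDOMAIN", "*.bad.example": "NXDOMAIN"}
```

A resolver applying the policy would answer NXDOMAIN (or a redirect) instead of the real data, so infected clients never learn their controller's address.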

Continuing the DNS theme was Henry Stern of Cisco, discussing passive DNS logging. Passive DNS is something that we have been aware of for several years, through use of an external service to determine how the relationship between domain names and IP addresses has changed over time. Such a service relies on capturing the responses given by recursive nameservers, anonymising and collating that data. We are purely a “consumer” at present but are being encouraged to collect data ourselves at the university nameservers and contribute it to the project, provided that any personally-identifiable information has been removed. Cisco have taken the idea further and are logging the queries within their internal network, purely for internal use, logging some four billion lookups per day. Naturally this requires considerable effort to reduce the volumes of captured data to a level at which useful queries can be run in a matter of seconds.
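The collation step is conceptually simple: fold each observed answer into a record keyed on (name, type, answer), keeping first-seen and last-seen timestamps and a count, and discarding the client addresses. A sketch of the idea (not any particular passive-DNS product):

```python
from dataclasses import dataclass

@dataclass
class PdnsRecord:
    """Aggregated view of one (name, type, answer) tuple. Client
    IPs are discarded at capture time, which is what makes the
    aggregated data shareable."""
    first_seen: float
    last_seen: float
    count: int

def collate(observations, store=None):
    """Fold raw (timestamp, qname, rtype, rdata) response tuples
    into the aggregated store."""
    store = {} if store is None else store
    for ts, qname, rtype, rdata in observations:
        key = (qname.lower(), rtype, rdata)
        rec = store.get(key)
        if rec is None:
            store[key] = PdnsRecord(ts, ts, 1)
        else:
            rec.first_seen = min(rec.first_seen, ts)
            rec.last_seen = max(rec.last_seen, ts)
            rec.count += 1
    return store
```

Queries against the store then answer questions like “which IP addresses has this domain pointed at, and when?”, which is invaluable when tracking fast-moving malicious infrastructure.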

The following talk was on Visual Malware Analysis, which works on the principle that humans are much better at taking in information visually: it produces diagrams representing the behaviour of malware from the outputs of common analysis tools. Nevertheless there is significant complexity even to relatively simple malware, and it would take practice to make effective use of the information presented in this form.

The final talk of the day was by two members of Cisco’s own CSIRT team, entitled “Re-writing the CSIRT Playbook”. Despite being much larger than OxCERT, they still admit to being understaffed, and are gathering data from a variety of systems spread around the globe. They described the reasons for moving from a commercial Security Information and Event Management (SIEM) to infrastructure they have built inhouse, before discussing their “playbooks”. Essentially these describe the rules and actions to be taken under particular circumstances, making it clear which steps require a human to make decisions before action is taken – for instance, if a member of staff above a particular level of seniority is involved.
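A playbook entry can be thought of as a detection rule paired with an action and an escalation condition. The following sketch is our own illustration of that structure (the names and schema are hypothetical, not Cisco's actual playbook format):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Play:
    """One playbook entry: what to look for, what to do, and when
    a human must sign off before the action is taken."""
    name: str
    matches: Callable[[dict], bool]
    action: str
    needs_human: Callable[[dict], bool] = lambda event: False

def dispatch(event: dict, playbook: List[Play]) -> List[str]:
    """Return the actions for an event; plays whose escalation
    rule fires are queued for analyst review instead of running."""
    actions = []
    for play in playbook:
        if play.matches(event):
            if play.needs_human(event):
                actions.append(f"REVIEW:{play.name}")
            else:
                actions.append(play.action)
    return actions

playbook = [
    Play("malware-beacon",
         matches=lambda e: e.get("type") == "c2-traffic",
         action="quarantine-host",
         # example escalation: senior staff involved -> human decision
         needs_human=lambda e: e.get("seniority", 0) >= 8),
]
```

Encoding the rules this way makes the automated/manual boundary explicit and auditable, which was precisely the point of the talk.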

This ended the official talks for the day, but a drinks reception followed, offering opportunities for some networking before we headed back to our respective hotels in the city centre. The second day of the meeting is covered in a separate post.
