2014 FIRST Conference: Monday

Park Street Church from Boston Common

Once again it’s time for the annual FIRST Conference. This year it’s in Boston, one of the oldest cities in North America and packed with history at almost every turn. The venue is the Park Plaza hotel, previously the venue of the 1994 FIRST Conference, almost an eternity ago when it comes to computer security.

The conference started yesterday evening with a reception, a welcome opportunity to catch up with familiar faces and to meet new people. This year sees a record turnout, with well over seven hundred attendees.

The presentations began early Monday morning with a special keynote by two members of the FBI Boston Division concerning the response to the Boston Marathon bombing in April 2013, a sombre reminder that for some, security is literally a life-and-death matter.

Tributes to the Boston Marathon victims, Arlington Street Church, August 2013

Following a break, the presentations split into three streams. Often two or more presentations of interest will coincide and today was no exception; I eventually decided upon those I considered to offer the most to OxCERT as a whole, in spite of considerable personal interest in the alternatives. First up was David Bianco of FireEye, speaking on Enterprise Security Monitoring. This introduced such topics as the “Cyber Kill Chain” and the “Pyramid of Pain”, and how best to use them to gain the most insight and threat intelligence. I followed this with Pawel Pawlinski of the Polish national CERT on automated data processing, a topic of considerable interest to us as we struggle to keep on top of the information we receive.

Marathon Sports, Boylston Street

After lunch, two members of the JPCERT Coordination Center discussed the problems of open DNS resolvers and their approach to mitigating the problem with the aid of a simple check website. This was followed by Ben April of Trend Micro on Bitcoin for the Incident Responder, a good introduction to the best-known “crypto-currency” and the workings of transactions.
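The essence of such a resolver check can be sketched in a few lines: send a single recursive query to the server under test and see whether it answers from outside its own network. The Python below is purely illustrative (and assumes Python 3); the resolver address is a placeholder, and a production checker such as JPCERT’s would be considerably more careful.

    import socket
    import struct

    def is_open_resolver(server_ip, qname="example.com", timeout=3):
        """Send a recursive DNS query for qname and report whether the
        server answers it, i.e. behaves as an open resolver."""
        # DNS header: ID, flags (RD=1), QDCOUNT=1, other counts zero
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # Question: length-prefixed labels, QTYPE=A (1), QCLASS=IN (1)
        question = b"".join(bytes([len(p)]) + p.encode() for p in qname.split("."))
        question += b"\x00" + struct.pack(">HH", 1, 1)
        query = header + question

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(query, (server_ip, 53))
            response, _ = sock.recvfrom(4096)
        except socket.timeout:
            return False  # no reply: not openly recursing for us
        finally:
            sock.close()

        flags = struct.unpack(">H", response[2:4])[0]
        answers = struct.unpack(">H", response[6:8])[0]
        return bool(flags & 0x0080) and answers > 0  # RA set and answered

    print(is_open_resolver("192.0.2.53"))  # substitute the resolver under test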

Following a panel discussion on “Developing Cybersecurity Risk Indicators – Metrics” was what for me proved the most interesting talk of the day, from Steve Zaccaro, professor of psychology at George Mason University. This stressed the need not only for effective teamwork within incident response teams but to take teamwork a stage further, with effective working between multiple teams. Steve followed this by leading a “Birds of a Feather” session on CSIRT Effectiveness, which prompted much discussion as to approaches that do and do not work among the wide variety of incident response teams represented. These two sessions, and subsequent discussions with other delegates, have for me prompted much reflection on OxCERT’s own processes and our working relationships with other teams, both within IT Services and across the University.


FIRST Technical Colloquium 2014, Amsterdam

In April two members of OxCERT were fortunate enough to attend the FIRST Technical Colloquium in Amsterdam, kindly hosted by Cisco at their Campus offices. The event was well attended by representatives from national CERTs and SOC teams, including a significant presence from Cisco themselves. As always the talks were both interesting and informative; this post will touch on a few of the highlights.

Jeremy Junginger of Cisco gave an enlightening and entertaining talk entitled Threat Actor Techniques. He discussed the ‘workflow’ of an attack, detailing how an attacker can use an initial foothold to gain further privileges. Based on a real-world scenario, Jeremy’s hands-on demonstration of privilege escalation gave the audience a unique (yet somewhat chilling) insight into how simple, everyday choices made by system and security administrators can quickly lead to the complete takeover of an otherwise locked-down system. Emphasising the value of ‘lateral movement’ within a compromised network, Jeremy quickly directed his attack around standard defences rather than through them, eventually leading to a compromise of administrator credentials and exfiltration of arbitrary data within the short timeframe of the presentation; free of the obligation to present and explain, the attack could have been successful in under 10 earth minutes.

Begijnhof

By contrast, Dave Jones, also from Cisco, gave a presentation on mitigating attacks that target administrator or ‘root’ credentials. This followed on neatly from his talk at the Bangkok FIRST conference. Dave focused on the application of two-factor and multi-factor security and how widely it can, and arguably must, be deployed in order to preserve the sanctity of administrative privilege. Whilst many of the techniques presented should not be new information to most security professionals, few of us can truly claim to follow all of them as rigorously as we should and being reminded to keep our house in order is no bad thing!

Henry Stern of Farsight gave an interesting talk about DNStap, a tool which allows for efficient logging of DNS transactions without the need for packet capture. The capturing stage of traditional DNS monitoring has always proven the most resource-intensive, as many of the system functions involved are fundamentally blocking in nature. DNS logging at gigabit line speeds is challenging enough, and traditional approaches simply do not scale efficiently enough once the 10G barrier is breached. DNStap achieves its goal of efficient DNS logging by integrating directly with the DNS server software itself, bypassing the need to create and analyse intermediate packet captures; this approach supports many common implementations and may represent the future standard approach to the hard problems of tracking and monitoring malicious domains, such as the ‘fast-flux’ algorithms employed by the Gameover-ZeuS malware networks.

Seth Hanford, also of Cisco, talked about CVSS (Common Vulnerability Scoring System) version 3. Classification of vulnerabilities may not feature in most security professionals’ top ten most interesting subjects, but every single vulnerability report and security bulletin you read will refer to that standard CVSS number somewhere in the reference trail. The integrity and relevance of the CVSS system has kept it in regular use by the entire IT industry for over a decade; having a standard way to quickly assess the severity of a given vulnerability is very valuable and something which OxCERT regularly make use of.
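As an illustration of what that standard number encodes, the sketch below (Python, purely illustrative) splits a CVSS v3-style base vector into its component metrics. Note that at the time of writing version 3 is still being finalised, and real scoring uses the full formula rather than a simple parse.

    # Illustrative only: unpack a CVSSv3-style base vector into metrics.
    BASE_METRICS = {
        "AV": "Attack Vector", "AC": "Attack Complexity",
        "PR": "Privileges Required", "UI": "User Interaction",
        "S": "Scope", "C": "Confidentiality",
        "I": "Integrity", "A": "Availability",
    }

    def parse_vector(vector):
        """Split e.g. 'CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H'
        into a dict of metric -> value."""
        parts = vector.split("/")
        assert parts[0].startswith("CVSS:"), "missing version prefix"
        metrics = dict(part.split(":") for part in parts[1:])
        unknown = set(metrics) - set(BASE_METRICS)
        assert not unknown, "unexpected metrics: %s" % unknown
        return metrics

    vec = "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    for metric, value in parse_vector(vec).items():
        print(BASE_METRICS[metric] + ":", value)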

Martin Lee, again of Cisco, gave a presentation about a concept of great debate within Cisco; in fact it is literally postered across many of the walls of the Cisco campus: the “Internet of Things”. Distinct from, yet intertwined with, the network of servers and information content we know and understand, the Internet of Things refers to the growing percentage of networked devices which are real-world functional objects. With the advent of IPv6 and aggressive conservation of IPv4 via NAT, everything from your phone and smartwatch to your fridge and air conditioning is becoming globally addressable, and therefore accessible. The recently publicised attacks against Smart TVs with internet connectivity are a haunting vision of things to come; as the costs of storage and processing power continue to fall steadily we can expect to see connectivity become a pervasive element of nearly all electrical appliances. Martin went on to highlight some of the benefits of this expansion: smart buildings that can monitor and regulate their own power usage and temperature, intelligent transport networks that can reconfigure to avoid congestion. Of course, being a security talk, the meat of the presentation consisted of the potential risks of creating a network of newly automated devices with the influence of the operator strongly diminished; what if a malicious person attacked your data centre’s environmental systems and switched off the alerts, promptly followed by the air conditioning? A reliance upon a tiny ARM9 core and a Broadcom wireless chip to tell you about fifty million pounds’ worth of burning silicon seems foolish in this scenario.

huis

Overall, attendance at the TC was thoroughly worthwhile, and confirmed to us the value of the smaller format as compared to the FIRST Conference. The more intimate surroundings permitted the exchange of some frank questions and answers that may not have found expression in a wider setting, and the talks certainly gave our delegates plenty to think about and report back on.


Gameover for P2P Zeus?

“Nuclear explosion” by tzunghaor (http://openclipart.org/detail/166696/nuclear-explosion-by-tzunghaor)

Over the past few days you may have spotted headlines in the press that appear to claim the UK has two weeks to save itself from a massive cyber attack. You may be asking: what is this threat, and what is the University doing about it? Excellent questions, but let’s start from the beginning.

On the 2nd of June the UK National Crime Agency announced that it had, as part of an international collaboration, disrupted the Gameover Zeus botnet (aka GOZeus or P2PZeus), hindering the ability of infected machines to communicate with one another or the criminals behind the botnet.

What is Gameover Zeus?

Gameover Zeus is malware that allows criminals to completely control an infected computer. It’s typically used to steal online banking credentials and has also been used to spread Cryptolocker ransomware.

Once a computer is infected (usually via a malicious email attachment or by visiting a website that drops malware) it will attempt to join a peer-to-peer network of other compromised machines, becoming a bot in the botnet.

Instructions from the botherders (criminals who run the botnet) are passed to the bots via the p2p network, effectively masking the location of the command and control infrastructure. Stolen information is passed back from the bots in the same way. This greatly frustrates any attempt to shut down the botnet: take out one C&C server and another will spring up and join the network.

Instructions from the botherders are cryptographically signed. Otherwise it would be possible to impersonate a C&C server and send out an instruction to have the malware deactivate itself. That would be nice, but life’s not that easy.
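To see why the signing matters, consider the sketch below (Python, using the pyca/cryptography library; the keys and the command are entirely hypothetical). Without the botherders’ private key, a forged instruction simply fails verification and the bots ignore it.

    # Why signed botnet commands defeat impersonation: a defender without
    # the botherders' private key cannot forge a valid instruction.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    botherder_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = botherder_key.public_key()  # baked into every bot

    command = b"SELF-DESTRUCT"          # the instruction we'd love to send
    forged_signature = b"\x00" * 256    # we don't hold the private key

    try:
        public_key.verify(forged_signature, command,
                          padding.PKCS1v15(), hashes.SHA256())
        print("bot accepts the command")
    except InvalidSignature:
        print("bot silently drops the forged command")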

Simplified diagram of the Gameover Zeus botnet
Clipart courtesy of openclipart.org

What happens in two weeks’ time?

Details are scarce as to exactly how the takeover has been achieved and the NCA cautions that the bad guys are likely to regain control soon. However, they estimate that we have a grace period of approximately two weeks before the Gameover botnet comes back into use.

The NCA is encouraging everyone to use this time to extricate existing Gameover Zeus infections and also to ensure their machines are as resilient as possible.

Once the criminals regain control they may make a concerted effort to infect more machines. They may also seek to update their malware to prevent this happening again; without more details about the takedown it is difficult to guess.

What is the University doing about this?

OxCERT have been tackling Zeus malware in its various forms since approximately 2008 and we will be looking into what we can do to detect even more infections in the future.

We welcome the increased attention on Zeus, which is – and always has been – a serious problem. However we are unlikely to be in a much worse situation in two weeks’ time than we were before the takedown.

For now our advice remains the same: ensure you use supported operating systems and software (and keep them up to date). Install appropriate anti-virus and, again, keep it up to date. Most importantly of all, remain vigilant; in particular, beware of unsolicited emails with attachments or web links.

Also bear in mind that unscrupulous individuals may seek to take advantage of the public anxiety surrounding Zeus. If you receive a notification that you are infected please take a moment to verify the source of the information.

Jim Linwood http://www.flickr.com/photos/brighton/2153602543/

To Summarise 

Zeus is a serious global problem and we’re pleased to see an international effort to tackle it. But in the meantime, stay safe and don’t panic.


On Reflection

Emerging Denial-of-Service Attacks and You

Image source: Flickr (George Ellenburg), under CC BY-NC-SA 2.0

“Those who cannot remember the past are doomed to repeat it”

With the threat of the Heartbleed crisis steadily diminishing due to a worldwide effort to patch and secure SSL, the attention of the security community must return to the issues displaced by the sheer severity of that infamous bug. Shortly before the announcement of the Heartbleed vulnerability, those with exceptional memories may recall a number of increasingly concerning reports centred around certain UDP protocols and their susceptibility to abuse. Denial-of-Service (DoS) attacks of unfamiliar patterns and rapidly expanding capability were witnessed, exploiting holes in long-established and familiar internet protocols to terrible effect. In the wake of several successful attacks against security actors, US-CERT compiled and published Alert TA14-017A; today we explore the conclusions of this report and the nature of the threat it describes.

The DoS problem

CERT teams globally have become painfully aware of the increasing complexity of DoS techniques, as the perpetrators seek to evade or overwhelm the extensive technical countermeasures now in place to mitigate traditional DoS tools. In the wake of many well-publicised (and successful) DoS campaigns against commercial and political entities, the enormous technical focus applied to the DoS problem has done much to limit the effectiveness of traditional exhaustion attacks.

Copyright © 1999 – 2014, Arbor Networks, Inc. All rights reserved.

Even ‘Distributed’-DoS attacks that make use of hundreds or even thousands of compromised ‘bots’ to sustain the assault can be strongly attenuated by cunning use of ‘tarpitting’ to slow down attacking machines, or via complex VM-based decoy systems. At a higher level, manipulation of Domain Name Service configuration can ‘blackhole’ much of the malicious traffic into digital oblivion, or even reverse the attack back against the systems of the perpetrators.

These techniques have always pivoted upon availability of resources; the side capable of marshalling more was generally more likely to prevail. As with all things, this delicate balance was destined to change.

The changing face of evil

In a twist of irony, it was the ever-faithful DNS protocol that gave researchers a first glimpse into a new protocol abuse vector of potentially surpassing potency; the method was termed ‘DNS amplification’. A legitimate TCP/IP exchange can be thought of almost as a normal conversation between friends; each person speaks in turn according to understood rules, each makes an effort not to talk over the other nor to monopolise the conversation. Responses are roughly in proportion to questions, “Hey, how are you?” “I’m fine thanks, you?”.

An amplified exchange is more like a police officer asking for “License and registration,” a short inquiry prompting you to hand over a proportionally huge amount of data in return; in the digital realm, an innocent DNS server will respond to a terse inquiry with an immensely involved response. DNS is, after all, fundamentally an information service; why should it not provide any and all data asked of it?

DNS Protocol Amplification

Further, the more authoritative the DNS server is with respect to the rest of the network, the more data it will return in its responses; the security protocol DNSSEC actually worsens this situation, as a DNSSEC-equipped server will respond with its entire cryptographic profile as well. Fortunately this can be mitigated via rate-limiting of responses by conscientious administrators, but it highlights how even systems designed to improve network integrity can be turned to nefarious purpose.
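The idea behind rate-limiting responses is simple enough to sketch: give each client prefix a small per-second budget of responses and drop the excess. The Python below is a toy illustration of the concept only; real implementations, such as the response-rate-limiting features in modern DNS servers, are far more discriminating.

    # Toy response-rate-limiter: a token bucket per client /24.
    import time
    from collections import defaultdict

    class ResponseRateLimiter:
        def __init__(self, rate=5):  # responses per second per prefix
            self.rate = rate
            self.allowance = defaultdict(lambda: float(rate))
            self.last_seen = defaultdict(time.monotonic)

        def allow(self, client_ip):
            prefix = ".".join(client_ip.split(".")[:3])  # crude /24 bucket
            now = time.monotonic()
            elapsed = now - self.last_seen[prefix]
            self.last_seen[prefix] = now
            # refill the bucket, capped at the per-second rate
            self.allowance[prefix] = min(self.rate,
                                         self.allowance[prefix] + elapsed * self.rate)
            if self.allowance[prefix] < 1:
                return False  # drop (or truncate) this response
            self.allowance[prefix] -= 1
            return True

    limiter = ResponseRateLimiter()
    print([limiter.allow("192.0.2.7") for _ in range(8)])  # first five pass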

Pictured: early chargen daemon

In an amplification attack, an attacker manipulates a common protocol like DNS into this highly asymmetric exchange, a small transmission of data provoking a far larger response. Another classic example is the ‘chargen’ service, which is highly asymmetric by design.

Chargen responds to a connection with lines upon lines of ASCII, originally intended for network and application testing; viewed through the lens of an ‘amplifier’, chargen multiplies incoming data by a factor of several hundred.

DNS itself will offer a response over fifty times as large as the requesting data; other protocols such as NTP can amplify even more strongly. A UDP service is like a person who simply talks and does not listen, not keeping track of the conversation or the other participants, simply sending the data they believe is needed without acknowledgement or verification; if you say to a UDP speaker “Hey, tell me your life story” they will happily ramble on long after you fall asleep or leave.
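Measuring the amplification factor for yourself is straightforward, provided you test only against a resolver you operate; the sketch below (Python 3, with a placeholder resolver address) builds a minimal ‘ANY’ query by hand and compares request and response sizes. Real attacks typically also attach an EDNS0 option to lift the conventional 512-byte UDP response limit and amplify further.

    # Rough measure of DNS amplification: size of an 'ANY' query versus
    # the size of the response. Test only against your own resolver.
    import socket
    import struct

    def dns_any_query(qname):
        header = struct.pack(">HHHHHH", 0x4242, 0x0100, 1, 0, 0, 0)  # RD=1
        labels = b"".join(bytes([len(p)]) + p.encode() for p in qname.split("."))
        return header + labels + b"\x00" + struct.pack(">HH", 255, 1)  # ANY, IN

    query = dns_any_query("example.com")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(query, ("192.0.2.53", 53))  # placeholder: your test resolver
    response, _ = sock.recvfrom(65535)
    print("sent %d bytes, received %d bytes, amplification x%.1f"
          % (len(query), len(response), len(response) / len(query)))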

Through a glass, darkly

Initially this would seem to be little more than a curiosity; of what benefit is the potential to direct greater traffic against oneself? In an ideal connected world this would remain a peculiarly self-destructive form of DoS with little potential for propagation; unfortunately the systems that underpin the internet are far from perfect. Services that rely upon UDP are much more open to abuse than their TCP cousins, as they offer the opportunity to ‘reflect’ internet traffic via simple source forgery.

image source: nsfocus.com

UDP Spoofing and DNS amplification

By transmission of a small UDP request with a victim’s IP substituted as the source, a service is compelled to respond in an amplified manner, and direct this enlarged response against an arbitrary victim.

Returning to our conversation analogy, an amplified reflection attack is like ordering fifty pizzas from ten different shops, all delivered to the same house. A short phone call is all it takes; the pizza shops do not verify that the house number they deliver to is the one that ordered the pizza, they just show up with a pile of boxes and expect the ‘customer’ to accept them. Multiply this by a few hundred pizza shops and a few thousand pizzas per second and you have an idea of how disruptive an rDDoS can be.

img-source Flickr under CC BY-SA 2.0

And they’re all topped with double anchovies and lutefisk

This victim could be an individual, a website or an entire commercial or national entity, with a Denial-of-Service condition as the result. When this task is automated and divided amongst the many thousands of compromised hosts that make up the average ‘botnet’, the result is a Reflected Distributed Denial-of-Service (rDDoS) attack of previously unseen capability, rendering websites and networks all but inaccessible for the duration of the attack with minimal overhead required on the part of the attacker and a strong degree of anonymity thrown into the bargain.

This remains possible because many ISPs and NSPs worldwide still fail to observe best practice; it would not be possible to forge the source address of an internet packet if upstream routers applied simple network ingress filtering as described in the IETF’s BCP38 document. By rejecting packets that appear to originate from outside the proper network, the global reach and impact of UDP reflection attacks would be sharply curtailed. Sadly, global compliance with this practice seems far off.
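The filtering logic itself is trivial, which makes the patchy deployment all the more frustrating: a BCP38-compliant edge router simply refuses to forward any outbound packet whose source address does not belong to the network behind it. A minimal sketch of that decision in Python, with an illustrative customer prefix:

    # The essence of BCP38 ingress/egress filtering.
    import ipaddress

    CUSTOMER_PREFIX = ipaddress.ip_network("192.0.2.0/24")  # illustrative

    def permit_egress(source_ip):
        """True if a packet with this source may leave the network."""
        return ipaddress.ip_address(source_ip) in CUSTOMER_PREFIX

    print(permit_egress("192.0.2.10"))   # True: genuinely ours
    print(permit_egress("203.0.113.5"))  # False: spoofed source, drop it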

Anatomy of an NTP-Reflection Attack

Weaving together reflection and amplification handily eliminates the extreme resource deficit a lone attacker faces when hoping to DoS a small corporate entity or educational institution; rather than rely upon his own connectivity resources in pitched battle against a better-equipped target, even a minor malcontent is able to leverage significant bandwidth against his prey by co-opting the greater resources of a third party.

Multiply this capability by thousands of attacking machines in a distributed attack, and we witness the immense 400Gb/sec firestorms that so recently slammed CloudFlare and Spamhaus, causing difficulties for even these security supergiants until the traffic could be brought under control.

With great power…

The emergence of Reflected DDoS is of particular concern to organisations with sophisticated backbone infrastructures such as the University; the sad reality is we now live in a connected world where 300Gbps DoS attacks can be considered ‘the norm’. Remember that orange graph at the top of this blog? The bar for 2014 would be over four times higher and the year is far from over. However spirited and effective the defence of our own systems and users may be, this new strain of DDoS co-opts legitimate systems on the University network into ‘attacking’ external organisations and networks. This presents a significant risk to the public-facing image of the University, as well as our reputation with JANET and its constituents, and could result in connectivity issues for the wider University-assigned IP address spaces.

The grim future that awaits us all

A clear burden of responsibility falls upon us to ensure that our significant technological resources are not leveraged to attack unwitting organisations on the wider internet, most of whom will lack the resources to defend themselves from the sheer volume of traffic these attacks can direct.

With the combined bandwidth of our Janet connections, the University systems could easily translate into a multi-Gigabit firehose of UDP DoS traffic if successfully abused. This cannot be allowed to happen.

Our response

It seems clear that the risk presented by this form of abuse has risen high enough to merit a proactive mitigation. In accordance with OxCERT’s mandated responsibilities towards the integrity of the University infrastructure, specific traffic blocks will be enacted across all units, services and sponsored connections. Measures are in place to minimise the impact on legitimate traffic and services. OxCERT’s strategy compares with analogous actions taken by ISP- and NSP-level CERT entities as part of a concerted global effort to diminish the effectiveness of rDDoS campaigns, as the true danger of inaction against this threat becomes clearer.

The onus is upon us to act responsibly in the face of the evolving challenges to network and information security, and as ever the priority must lie with overall service integrity and the protection of the University’s good standing. We expect any detriment to service levels to be minor or negligible, particularly in contrast to the benefits realised by strengthening and consolidating the framework upon which those services ultimately depend. It is worth noting that even a few vulnerable machines within a unit – for example NTP servers responding to Mode 6 queries – are fully capable of saturating the connection for that unit, effectively cutting the organisation off from the rest of the internet and the University, while simultaneously propagating a DoS against an external entity.

In a way we as service and network providers must approach this problem with a similar attitude to fire prevention; it is everyone’s problem, and everyone’s responsibility. As Smokey the Bear says, “Only you can prevent Reflected Distributed Denial-of-Service attacks”.

img source: Wikimedia Commons

Smokey hates unsecured NTP servers.

As ever, we must strive to strike the proper balance between security and usability; service limitations will be imposed only in cases where we anticipate no adverse impact to legitimate University business. We strongly encourage colleges and departments to limit their externally-accessible services to those which are both necessary and properly secured, and OxCERT will work with units to help ensure everyone’s goals are achieved.


Open Heart(bleed) Surgery

If you haven’t heard by now of the so-called “Heartbleed” Internet security bug that last week sent the Internet security community into something of a frenzy, then you probably don’t need to worry and almost certainly won’t be reading this!  For those of us who use the Internet and watch the news however you may want to read on.


“Heartbleed” is the name given to a recently discovered flaw in a specific implementation of one of the world’s most widely used Internet security protocols, SSL/TLS. Called OpenSSL, the software is used to protect sensitive data (such as usernames, passwords, payment details etc.) sent backwards and forwards between your computer and “secure” websites. Although it is hard to know precisely how many websites are affected by this vulnerability, it is estimated that about two thirds of the world’s websites use OpenSSL and that around 17% of sites are vulnerable to this bug. That is about half a million websites and, since they may have been vulnerable since the bug was introduced into the software (as far back as 2011), it is rightly being treated as a pretty big deal. As renowned security expert Bruce Schneier put it: “On the scale of 1 to 10, this is an 11.”
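For system administrators, a sensible first question is which OpenSSL a given machine is actually running: the vulnerable releases were 1.0.1 through 1.0.1f, while 1.0.1g and the older 0.9.8 and 1.0.0 branches were unaffected. One quick, if partial, check from Python:

    # Which OpenSSL is this system's Python linked against?
    # Versions 1.0.1 through 1.0.1f were vulnerable to Heartbleed.
    import ssl

    print(ssl.OPENSSL_VERSION)  # e.g. 'OpenSSL 1.0.1f 6 Jan 2014' -> patch!

    # Caveat: this covers one binary only; web servers, mail servers and
    # VPN daemons may link their own copies and must be checked separately.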

Unsurprisingly, Heartbleed has attracted a lot of attention, but this has led to confusion amongst many, with, for example, conflicting advice on whether and when to change passwords. Worrying and panicking, however, won’t do anyone any good, so what are the risks, what should you do, and what is the University of Oxford doing in response?

What are the Risks?

Well the good news is that once the problem was noticed the response has been pretty effective with many major service providers having patched the vulnerability already.  The trouble is that implementations of OpenSSL may have been vulnerable for over two years.  How much of a problem this actually is nobody really knows at the moment, but the risk is that cyber-criminals may have been aware of the vulnerability before the good guys were.  So far though, there have been no reports of widespread exploitation (either before or after the bug was announced) and, although an attack against a website you use could have disclosed sensitive information (such as passwords, payment details etc.) it would be more difficult for attackers to target specific information.  In other words, even if vulnerable sites you use were exploited it is far from certain that any of your details will have been exposed.  I’ve no intention of explaining how the exploit works but if you want a decent, non-technical, explanation as to why this is the case then look no further than xkcd.

So what should I do?

Well, as mentioned, first of all don’t panic. Changing passwords is a good idea (and we’ll come to that in a bit) but apart from that there isn’t much you can do about what has already happened. What you can do is to take this opportunity to improve your online security practices. Remember that this vulnerability is not a weakness in the underlying protocols that secure our Internet traffic, but a vulnerability in software that implements them. In other words, human error (you can forget conspiracy theories in this case. No, really!). This is, perhaps, a timely reminder that we shouldn’t take security and privacy online for granted and that we can all play a part in protecting ourselves from the risks. Good security happens in layers! If you don’t use good, unique passwords for different sites and don’t use 2-factor authentication where it is available then now might be a good time to start. Many are advising users to adopt a password manager such as LastPass or KeePass as they change their passwords. Similarly, now is the time to start following good standard advice like regularly checking your bank statements.

Keep Calm and Use the Toolkit

You should also be aware that this vulnerability is very likely to lead to an increase in phishing scams. Since pretty much everyone who uses the Internet is being asked to change their passwords, the bad guys are likely to want a piece of this action and use the opportunity to send round fake emails asking for passwords and/or linking to fake sites. Be aware of this threat and, if you are in any doubt as to whether an email (or phone call for that matter!) is legitimate, then ask someone technical for help (perhaps your local IT support staff or the IT Services help desk).

If you want advice on good practice when it comes to online security (including how to spot phishing emails) then why not check out our information security website or, better still, book on one of our lunchtime courses which cover what you need to know and do.

So should I change my passwords?

Yes it is probably a good idea but before you change your password for any individual site you might first want to check:

  1. Was the site affected;
  2. Has the organisation patched its systems;
  3. Have they changed their SSL certificates; and
  4. Have they told you it has been fixed?

It can sometimes be hard to get clear information on this but one site has come up with a decent list of well-known organisations and summarised their position.

What about my University passwords and what is the University doing about this problem?

Webauth

Well, for the last week we’ve been assessing the scale of the problem within Oxford and, where possible, applying fixes. The response from both central IT Services and amongst the many IT support staff across the departments and colleges has been swift and impressive. The University takes your security and privacy online very seriously. The good news is that most of the central services that deal with passwords (that we’ve assessed so far, anyway) weren’t vulnerable to this attack. This includes Nexus (email and calendaring), Webauth (used for Single Sign-On) and the VPN. However, Oxford is a very complex organisation when it comes to IT, so let’s not break out the champagne and look smug just yet. Because some of the backend systems that interact with our main services were running vulnerable versions of OpenSSL, it is possible that some credentials may have been exposed. I ought to stress at this point that we believe the actual risk that any passwords have been exposed on a large scale to be very low. However, wherever we perceive that this has been a possibility we are making users change their passwords. I’ve tried to summarise the position on a “per credential-type” basis below:

Single Sign On (SSO)/Oxford passwords

These are the passwords you use for Nexus and for SSO-protected resources. Neither Webauth, Nexus nor the Shibboleth service is affected by this vulnerability, nor is the production SMTP service that is used by some for sending mail. However, a test SMTP environment was vulnerable and, although this isn’t used directly to handle any live credentials, there is a theoretical attack that could have affected those that use the SMTP service. There is no evidence this has happened and we think the risk is extremely low. Nonetheless, if you fall into this category we will be expiring your password and contacting you to ask you to change it as a precaution.

For everyone else: you should change your password if you are concerned at all, or if you use it anywhere else.

Remote Access Passwords

These are the passwords used (mostly) for the VPN service which, again, was not directly vulnerable. However, one of the backend systems that deals with credentials was vulnerable for a limited time period and, if you changed or set a remote access password within that period (approximately the last year), then a successful attack is also theoretically possible. Again, we feel the risk is very low but this does affect a greater number of users than for SSO passwords, so we will also be expiring those potentially affected passwords and contacting users.

For everyone else – change your password if you are at all concerned and/or if you use the same password elsewhere.

HFS passwords

HFS is the backup service offered by IT Services for staff and postgraduate students. Again, the primary service is unaffected by the vulnerability but, as with remote access passwords, it is possible passwords could have been exposed via a supporting service. Again, there is little risk that this could be used in any meaningful attack and, as it happens, the HFS service already automatically renegotiates passwords with the client software, so we are considering the merits of making sure this happens sooner than usual.

In other words there is nothing you need to do – affected passwords will be changed automatically and you won’t even notice.

What else?

Of course this only covers central services and the University operates in a very devolved way.  Unfortunately we can’t answer questions about all services offered by departments and colleges so if you want to know more you should ask your department and/or college.

What about other sensitive data?

Indeed this vulnerability does not just affect passwords and the University runs many systems that handle personal data, financial data and other confidential information.  We are continuing to investigate all central services to see whether or not they could have been vulnerable to this bug.  We’ll therefore be reporting further when we have all the information we need.  In the meantime there is no evidence that any of your sensitive or personal information has been placed at risk.

To Summarise

This is clearly a very serious security bug and has had a significant and far-reaching effect on service providers all over the Internet. However, the bug has a fix which has already been widely deployed and, whilst we don’t yet know the overall impact, the worst case scenario doesn’t seem to be the most likely outcome. Nevertheless, we should all take this as an opportunity to improve our online security practices and ensure that we take responsibility for our own security and privacy as far as is possible. Within the University we are taking the vulnerability very seriously, which is demonstrated by the fact that we are investigating the potential impact as thoroughly as possible and, where we see any risk to end-users, taking appropriate action. We will continue to do so along with all of the other activities we carry out to protect your security and privacy online.


TRANSITS I Workshop, Prague

At the end of November I attended the TERENA TRANSITS I workshop in Prague. TRANSITS I is aimed at those who have recently joined a CERT or who have been tasked with creating a new CERT. Attendees at the workshop came from a variety of organisations across Europe and beyond. Members of European CERT/CSIRT teams have developed the course and kindly volunteered time to deliver the content; TRANSITS is also supported by ENISA (the European Network and Information Security Agency). Overall I found this to be a useful and informative few days: the TRANSITS course is a valuable resource for anyone joining or setting up a CERT team for the first time, with modules on the operational, organisational, technical and legal issues faced by a CERT team.

Operational

Image of Prague Castle

The operational module covered the incident handling process of a CERT. Incident handling is the bread and butter of a CERT’s working day and it was interesting to hear how other CERTs approach this. Also discussed were various tools that can be used to collate information on threats and to guide the process of turning a vulnerability alert into a publishable advisory, something that we do on almost a daily basis. One of these tools, Taranis, is one we hope to implement in the future.

Organisational

This module covers where a CERT sits within the structure of its organisation. It is important for any team to have a firm grasp of its mission, its raison d’être, as this informs all further decisions. OxCERT’s mission is defined as:

“To protect the integrity of the University backbone network and to keep services running”

This also defines our constituency; those that connect to the backbone network of the University of Oxford. Leading on from this we also need the tools and the authority to carry out our mission. One example of such a tool is the ability to block from the network hosts that may threaten the integrity or availability of services for other University users.

Technical

The technical module contains an overview of the various threats a CERT can expect to deal with.  Among those that we unfortunately see on a day-to-day basis are keylogging malware, SQL injection and botnets, to name but a few. The module also gives an overview of various tools and resources that can be used to deal with these threats.

Prague Castle

Legal

Laws and guidance are often updated so it is essential to keep up to date and ensure you are working on the correct side of the law, especially as our work often leads us into situations where it would be easy to overstep the mark. It was also particularly interesting to compare the different legal requirements affecting teams across Europe. It is helpful to bear this in mind particularly when travelling, as an activity that is legal in one country may not be in another.

This module also discusses the issue of disclosure, i.e. what information to disclose, to whom, and when? Inevitably this will be a mixture of policy and per-incident pragmatism, but it is a topic worth consideration by all CERT teams.

Apart from the taught materials, the course also gave an opportunity to meet members of other CERTs, to network and to exchange PGP keys (to sign later). I found the course presents a good overview of CERT activities and provides a suitable starting point for a recent or would-be CERT member.


Farewell to XP (part 2)

In the first part of this post, I looked at the background to the end of support for Windows XP in April 2014. In this (somewhat delayed, apologies) second part I will consider what those in the University will need to do if they are still using Windows XP, although hopefully much of the content will be equally useful for those elsewhere who are still maintaining XP systems. I will assume that readers are not in a position to consider putting off the problem through Microsoft’s Custom Support programme.

Microsoft are not continuing full support after April

Microsoft aren’t making a full U-turn, sorry.
CC BY-SA 3.0 by http://commons.wikimedia.org/wiki/User:Kingroyos

Since I wrote the first post there has been a slight relaxation in policy by Microsoft: support for Microsoft anti-malware products on Windows XP has been extended until July 2015. It is important to note that this is not the same as Microsoft extending full security support for Windows XP, despite what has been reported in some news articles (at the time of writing this states “Microsoft has decided to continue providing security updates for the ageing Windows XP operating system until 2015”).

Microsoft are simply adding a limited amount of protection and probably little that will not be offered anyway through third-party antivirus products which continue to support Windows XP after April. Note that Microsoft’s own blog post states “Our research shows that the effectiveness of antimalware solutions on out-of-support operating systems is limited.”

Our advice is that this changes nothing: continue pressing ahead with your upgrade and/or mitigation plans, as described in the remainder of this post.

What should people in the University do?

At the time of writing, Windows XP remains in widespread use around the University, although hopefully IT staff should have been aware of the end-of-support date for a year or more and upgrade plans are well under way. It is inevitable, however, that there will be parts of the University where it simply will not be possible to complete the process of migration away from XP in time. Moreover there will be other areas where XP simply must remain in use, as no other realistic option exists. So what should staff in this position be doing? As mentioned in part 1, “nothing” is not an option!

Risk assessment and prioritisation of upgrades

The most important thing to do in this situation is to determine where the greatest risks lie and to prioritise accordingly. For the purposes of this article I shall consider only the risks posed by the systems currently running Windows XP, although these must be assessed in the wider context of the overall risks in each department and college. Concentrating all efforts on upgrading XP systems and neglecting everything else is almost certainly not the path of wisdom; your “business as usual” activities are just that.

What is most likely to be attacked?

The vast majority of incidents handled by OxCERT can be attributed to one of three main causes: vulnerabilities in the user, vulnerabilities in public-facing services, and vulnerabilities in desktop systems and applications. To the disappointment of IT staff everywhere, replacing Windows XP will do little or nothing for the vulnerabilities in users: they will continue to make the same mistakes as before, for instance responding to phishing emails, or executing malicious email attachments. While local services have been targeted in the past (e.g. Blaster, Conficker), Windows XP is not normally considered an appropriate platform for public-facing services, so it is the third category that merits attention.

The major attack vectors against a desktop system are those which are likely to handle untrusted data from the outside world. For the vast majority of users, such data will mostly come through their web browser or their email client. Malicious content may trigger vulnerabilities in the core operating system, in the web browser or email client, in libraries and components used to handle particular types of content (for instance image display), in additional Microsoft software (e.g. Silverlight, Office) or in third-party software (such as Java and Flash). It is worth remembering that Internet Explorer 8 is the latest version of Internet Explorer to be supported by XP, limiting the amount that can be done to keep an up-to-date Microsoft web browser on an XP-based machine.

Not all of the installed software will lose support in April. Given the size of the remaining XP userbase, many third parties will likely continue to support their own software on the platform for some time yet, including some Microsoft applications. Note that extended support for Office 2003 will end at the same time as that for Windows XP, so you’ll just have to get used to that ribbon, sorry. Importantly, most anti-virus vendors won’t cut support immediately: for University users, Sophos have committed to supporting XP until at least September 2015. Antivirus won’t come close to protecting against all attacks (it never did) but is nevertheless well worth having.

Clearly you will need to prioritise upgrades for some desktop users over others. Determining which users should be upgraded first will depend on local circumstances. You may go for senior and high-profile staff first on account of the confidential data they are handling. Then again, they may be those complaining loudest if something doesn’t work, so you may choose to start with users who are more accepting of the inevitable teething problems.

Specialist systems

Does it sometimes feel like you’re between a rock and a hard place?

What about the more difficult cases? Inevitably there will be some which are particularly problematic, if not impossible, to upgrade. Firstly, Windows XP installations are embedded into many devices, for example vending machines and scanners. Such systems may run a full XP installation, or they may run Windows Embedded. It is important to distinguish the two, not least because of the different support lifecycles: XP Embedded is supported until the end of 2016; indeed, NT Embedded 4.0 remains supported until the end of August 2014. How, and indeed if, updates are delivered and applied is up to the manufacturer of the device, as are other security measures. Updates which are critical for desktop systems may well be irrelevant in the context of a particular embedded system.

If a device is not using Windows Embedded, however, the April deadline applies. If networked, such devices are vulnerable to attack, and indeed we have seen vending machines on unfirewalled public IP addresses which have been infected with malware. Embedded systems won’t be the only problematic cases: we are also aware of scientific and medical equipment costing six or seven figure sums which is controlled from XP desktops. Upgrading is frequently not an option; indeed in some cases the original vendor is no longer trading.

Avoid unnecessary risks

With such systems we advise considering their essential usage. What software needs to run on the XP system? What, if any, network connectivity is required? For some systems it may be appropriate to disconnect from the network entirely. Beware, though, that this may simply shift the risks. If switching from file transfer over the network to file transfer via removable media, bear in mind that removable media may harbour infections. A system that is permanently offline will not be running up-to-date antivirus, barring very frequent manual updates. Infections on removable media can be partially mitigated by disabling Autorun and Autoplay (some additional information is available for IT staff within the university), as sketched below.
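As an illustration, the value Microsoft documents for disabling Autorun on all drive types is the registry entry NoDriveTypeAutoRun = 0xFF. A hedged sketch using Python 3’s winreg module (run as Administrator; on XP-era Python 2 the module was named _winreg, and Group Policy remains the preferable route where available):

    # Disable Autorun for all drive types via the documented policy value.
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

    with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
    print("Autorun disabled for all drive types (takes effect at next logon)")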

If a system does need to retain network connectivity then consider placing it on a strictly-firewalled network segment. Consider applying a “default-deny” policy in both directions. For instance the only access required may be to a staging area on a local fileserver, in which case the only additional traffic expected might be with the local DNS resolvers and authentication systems.

Don’t forget the human risks: your precautions are futile if your users simply work around them because they see it as necessary in order to get their work done, for instance by reinstalling the software you removed, or by plugging a network cable back in. Be sure that possible usage cases have been considered as early as possible, and ensure that users understand why actions are needed. You’re not doing it to be awkward but to minimise the risks to their equipment and data, while trying to minimise the inconvenience to them in their work.

It takes all the running you can do, to keep in the same place

Does it seem like you’re getting anywhere?
Image from Flickr by [MilitaryHealth], licensed under CC BY 2.0.

When you’ve finally dealt with that last Windows XP system (and the last Office 2003 installation), congratulations. Sadly, you’re unlikely to get much of a rest, as you’ll soon need to start worrying about the next one. End of support for Windows Server 2003 is in July 2015, Windows Vista in 2017.
Sometimes no explicit resourcing is required because you move to newer versions as part of natural system replacement cycles, but this will not always be the case, especially when dealing with software support lifetimes shorter than that of the hardware. It pays to ensure that your superiors are aware well in advance of when major upgrades need to be carried out, so that with luck the necessary resources can be made available in good time. Plan early, plan well, and stay safe.


Farewell to XP (part 1)

8 April 2014 marks the end of an era for many IT staff, and users too. After over 12 years, Microsoft will finally be terminating support for Windows XP, arguably its most successful operating system ever.

A little history

After over twelve years, the end of the line for Windows XP is fast approaching

Windows XP was released in August 2001, and it’s worth reflecting briefly on how different things were. A fairly typical PC might have a single-core 32-bit processor running at around 1GHz, 256MB RAM and 30GB storage. Windows NT and 2000 had achieved some popularity in business environments, but the old Windows-on-DOS platform dominated (though the less said about Windows Me the better). Away from the University network, domestic broadband was still something of a novelty and most users were still on dialup.

Meanwhile, Apple were a niche player, perhaps best-known for the translucent CRT-based iMac, and only a few adventurous types had tried the new OS X 10.0, still in need of its training wheels. The iPod had yet to be released, and few people had ever heard of smartphones. The cellphone market was dominated by Nokia, producing handsets optimised for making telephone calls. The dominant web browser was Internet Explorer 5; a few people still stuck with Netscape. Sites such as Facebook, Twitter or GMail remained years away; Wikipedia had fewer than 10,000 pages and few had yet heard of it.

Past security problems

Security threats were not unknown, although rarely financially motivated: users might be tricked into opening a picture of a tennis player, releasing a store of emails, while the previous month had seen the Code Red worm infect hundreds of thousands of webservers before attempting, unsuccessfully, to attack the White House. The world had yet to experience the shock of the real-world attacks of September the eleventh.

It is perhaps not so surprising that Windows XP was not written with security in mind from the start. Of course the original Windows XP would evolve significantly, with three service packs offering substantial improvements in security and stability. From the University’s point of view, Service Pack 2 perhaps made the greatest difference, in that by default the Windows firewall was now enabled. The Blaster worm and its derivatives had resulted in over one thousand infections across the University network in a matter of days. This one simple change made such widespread network-based attacks far less likely; indeed the only attacks on a comparable scale we’ve seen subsequently attacked another operating system entirely.

What are the risks?

Doing nothing is really not an option. Each month’s Microsoft updates include fixes for multiple vulnerabilities in Windows. Some will have been identified by Microsoft, and some by other “white hat” researchers, but others are found first by the bad guys (“zero-days”), and only become known to Microsoft once successful exploitation is discovered. For any attacker finding a zero-day vulnerability in Windows XP today, should they use it now? Almost certainly not: if Microsoft have had twelve years to identify it, are they likely to do so within the next few months? If alerted to it while XP remains under support, they are likely to investigate and fix it as soon as possible. If the exploit isn’t used in anger until after 8 April, it may still be investigated and fixed in supported Windows releases, but Windows XP users may be sitting ducks indefinitely.

Ending support

XP was subsequently followed by newer offerings, with much-enhanced security features built in from the ground up. The unloved Vista was released almost seven years ago and was followed in 2009 by the far more popular Windows 7, then again last year by the radically different and much-criticised Windows 8. Retaining any degree of support for four such differing releases is clearly a substantial overhead even for a business the size of Microsoft. There comes a time at which they must decide enough is enough and cut support.

I’m not aware of any other mainstream operating system which has retained support for such a long time, and so far, the nearest competitors have also been Microsoft products: Windows 2000 managed ten and a half years; Windows 98 managed eight years (after being granted a reprieve two years earlier). Windows Server 2003 will get twelve years. Red Hat Enterprise Linux will in time manage slightly longer, as the two most recent versions are scheduled to reach thirteen years early in the next decade.

Will this really be the end of support for XP?

If you’re rich enough, you may avoid it.
Image from Flickr by [garydenness] licensed under CC BY-NC-SA 2.0

Possibly. There is precedent for Microsoft granting a stay of execution: it happened with Windows 98. Support for Windows 98 was originally planned to end in January 2004, but after vocal protests it was extended for a further two and a half years, until July 2006. In late 2003, Windows 98’s share of the install base was probably comparable to Windows XP’s share today, and it shrank considerably during the extra thirty months.

It’s not impossible that Microsoft will do something similar this time, but we simply cannot afford to work on the assumption that it will. The situation is not really comparable. July 2006 was just over eight years since the release of Windows 98; we are already past twelve years with XP. And if I were Microsoft I would be very keen to avoid the perception of “crying wolf” over end of support dates. Last-minute extensions are a great way to annoy those who have put considerable effort into ensuring that they are ready for the originally-announced date, and encourage people to ignore the issue in future.

Even without a stay of execution, April will not be quite the end … if you’re rich. Microsoft’s Custom Support programme will offer patches for critical vulnerabilities to those who can afford them. But prices are in the “if you have to ask how much, you can’t afford it” league. Initial fees are estimated at $200 or more per system per year to retain access to critical updates, with minimum system counts that will almost certainly render the programme unaffordable within the University. Since Microsoft will continue to produce the updates, they could decide to offer fixes more widely in the event of a particularly virulent infection, but would they actually do so? Perhaps they would as a goodwill gesture if a vulnerability were threatening the overall stability of the global internet, but for lesser threats I really wouldn’t want to bet on it. Play safe and upgrade.

What should people in the University do?

The short, flippant answer is of course “upgrade”. But of course in reality it is not that simple, and the answer is a lengthy article of itself. I will therefore address this in detail in a second post.


Cruelty to cats: Apple’s new security support policy?

Smilodon skull

Is Apple hoping that their own big cats will soon go the way of Smilodon?

On Tuesday of last week, Apple proudly proclaimed the launch of their latest and greatest operating system, OS X 10.9 Mavericks. After over 12 years, they’ve finally run out of big cats and moved on to Californian placenames. What’s more, they’ve even removed one of the obstacles to upgrading by making the new release available free of charge. But, as a few others have noted, there appears to be a nasty sting in the tail if you look more closely.

Among the many security advisories released by Apple on Tuesday is a slight oddity: there’s one named OS X Mavericks v10.9, released for “Mac OS X v10.6.8 and later”. Listed are over 40 separate security fixes in OS X 10.9. Clearly these can’t be fixes for bugs in 10.9, since it’s just released; they are fixes for security problems in older versions of OS X. There are no security bundles or point releases which keep you on the old release; the message seems to be that everyone should upgrade to Mavericks. As far as Apple is concerned, those big cats are on the road to extinction.

Can we be sure? No. We have no inside view of what goes on among the corridors and conference rooms of Cupertino. But we can make an educated guess on the basis of the information available. Not least because this situation is strangely familiar. Compare the security advisory for OS X Mavericks v10.9 with that for iOS 7, or indeed earlier releases of iOS. The bugs may differ, but the overall structure is the same, and we know what the support position is with iOS: if you want security patches, you run the latest version. It’s free, so what’s stopping you? Your chosen device turns out not to be supported any more? Tough. The Apple Store is that way; go and be a good little capitalist consumer.

Apple’s policy on security support

Apple don’t appear ever to have issued any official public statement regarding security support for OS X. Nevertheless a pattern has been established in recent years, from which the likely future position can be extrapolated. Security fixes would appear for the current version of OS X and for the previous version, although some private comments suggested that support for the previous version was not guaranteed. Occasionally fixes might even appear for the previous-but-one release, especially since the Flashback malware struck in early 2012. The past few months have seen a handful of updates for 10.6.8, including Java (a vulnerability in which led to the Flashback outbreak), Safari and Quicktime, though nothing in the underlying operating system.

So why not upgrade?

Are you ready to upgrade yet?

You may ask why anyone would not want to upgrade to Mavericks. After all, it’s free. In 2012 I paid £20.99 to upgrade a Snow Leopard system to Lion; back in 2005 it cost me nearly sixty pounds to go from Panther to Tiger. The financial barrier to upgrading no longer exists.

I can think of several reasons why one might not want to upgrade, at least not yet:

Mavericks doesn’t support your hardware

You can’t really escape this one. Apple publish a minimum hardware specification for Mavericks. It’s similar to, but not identical to, the requirements for Mountain Lion. There are certainly quite a few systems around which cannot be upgraded from Lion to Mountain Lion, including several in my department, although some people were simply waiting for the release of the new MacBook Pros before buying new hardware.

You avoid “dot zero” releases

It’s common for any major new software version to come with a whole load of interesting new bugs. Many people have tended to wait until at least 10.n.2 before upgrading, because they don’t wish to be the ones effectively completing Apple’s beta testing. The bugs aren’t necessarily trivial: consider the LDAP authentication bug that shipped with 10.7.0, which allowed users to authenticate successfully regardless of the password entered. That was no mere “teething problem” but revealed a fundamental flaw in Apple’s quality assurance.

Your applications don’t run on Mavericks

The California surf isn’t for everyone just yet

Not every software vendor is involved in Apple’s beta program and able to have updates available the moment a new release appears. Here in the university, three such applications are our network backup system (based on IBM’s Tivoli Storage Manager or TSM), Sophos Anti-Virus, and our whole disk encryption service.

In the past it has taken months for IBM to release an official TSM backup client for a new OS X release. A client built for an older release might work correctly, but carries a risk of unexpected problems and won’t be officially supported by IBM. We can allow users to back up at their own risk but still need to conduct some local testing: it would be irresponsible for us to let users back up without a reasonable degree of confidence that they will be able to restore their data successfully should the need arise. [Update, 4 November: the HFS team seem confident that there are no major problems, although there remains no official support from IBM]

Depending on the application, the failure mode may or may not be immediately apparent. We have heard of one University computer being rendered unusable following an attempt to upgrade, in spite of advice not to upgrade until an application incompatibility could be resolved.

Before anyone starts advocating Time Machine and FileVault: yes, they have their uses, especially for a home user, but they are not necessarily appropriate in our environment.

A critical feature has been removed in Mavericks

Features come and go with each release. The ones that disappear aren’t necessarily well-publicised prior to release day. As an example, a friend has reasons to depend upon SyncServices and was somewhat disgruntled to find it gone in Mavericks. Finding an appropriate alternative takes time and effort.

You don’t have the connectivity to upgrade yet

Mavericks is a 5.29GB download. That is a lot larger than a typical security update, even compared with some of the large updates Apple have pushed out in the past. Some people are on slow or metered connections. In many rural areas, at least in the UK, the download might take several hours, during which the network may be effectively unusable for any other purpose. For people travelling, it may be several times larger than their monthly cellular data allowance, or than what can be downloaded over a hotel wifi connection overnight. In my case I can purchase extra allowance for my 3G stick, but it would cost me £75 even if everything worked perfectly. And as a major research university we have people doing fieldwork in areas of the world that can only dream of such connectivity.
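To put rough numbers on that, here is a quick Python estimate of the download time at a few nominal line rates (the speeds chosen are illustrative, and real-world throughput will usually be lower):

```python
# Rough download-time estimates for the 5.29GB Mavericks installer.
# Line rates are nominal; real-world throughput will usually be lower.
SIZE_GB = 5.29
SIZE_BITS = SIZE_GB * 8 * 1000**3  # decimal gigabytes to bits

for label, mbps in [("rural ADSL", 2), ("typical ADSL", 8), ("FTTC", 40)]:
    hours = SIZE_BITS / (mbps * 1000**2) / 3600
    print(f"{label:>12} at {mbps:>2} Mb/s: {hours:.1f} hours")

# Output:
#   rural ADSL at  2 Mb/s: 5.9 hours
# typical ADSL at  8 Mb/s: 1.5 hours
#         FTTC at 40 Mb/s: 0.3 hours
```

Six hours of saturated ADSL is an evening’s connectivity gone, and on a metered connection the cost is measured in pounds rather than hours.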

You don’t have the time to upgrade yet

Again, a big one for a university. For a typical home user it’s fairly straightforward to set the download running, and perhaps spend a few hours sorting out the niggles of the new release. Great for them, but it doesn’t necessarily scale. It takes significant time and effort to upgrade a classroom full of systems. If you weren’t expecting to have to upgrade them until OS X 10.10 appeared on the horizon (next summer?), then the necessary resources are devoted elsewhere. Upgrading might disrupt teaching, experiments, even examinations. Months of work may need to go into the setup and testing of a new release before it can be deployed.

Now, you may say that Apple aren’t much interested in the enterprise market, and I wouldn’t disagree with you. Nevertheless they have, historically, had a huge customer base within the educational sector. It wasn’t so long ago that support for the AppleTalk networking protocol was a key requirement of the university’s backbone network.

I can’t upgrade yet; what should I do to protect my computer?

As usual it’s all about risk. Do what you reasonably can in order to protect your computer, your information, and yourself. There is no such thing as “completely safe”, but you can take measures to reduce the probability of bad things happening. We cannot predict what the next major attack against OS X will be, but the more possible risks that are addressed, the less likely it is to hit you.

Applications and plugins

Mountain Lion

How do you stay safe with a Mountain Lion?

Bear in mind that a high proportion of attacks target vulnerabilities in applications, not the underlying operating system. For instance, Flashback, the most widespread malware seen for Macs in recent years, targeted a vulnerability in Java. At the time, Java was supplied through Apple, and updates frequently appeared many weeks after their release by Oracle; this has subsequently changed. Many applications will continue to receive updates, possibly for a few years yet, but some will not, and it is important to understand where the risks lie.

The most vulnerable applications are those which can receive information directly from arbitrary places in the outside world. Generally those will be your web browser and email client, together with plugins and helper applications used to handle certain kinds of content: Java, Flash, Quicktime, PDFs, Office documents.
Without a clear statement from Apple as to which they will still support on older releases, we must make an educated guess based on the evidence currently available.
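One practical first step is simply knowing what is installed. A minimal Python sketch along the following lines (assuming the standard system-wide plug-in location and the usual version key; some plug-ins may use other keys) lists browser plug-ins and their versions so you know which need watching:

```python
# List system-wide browser plug-ins on OS X with their versions, so you
# know which of them need watching for updates.
# ASSUMPTIONS: the standard /Library/Internet Plug-Ins location and the
# usual CFBundleShortVersionString key; some plug-ins may use other keys.
import plistlib
from pathlib import Path

for bundle in sorted(Path("/Library/Internet Plug-Ins").glob("*.plugin")):
    info = bundle / "Contents" / "Info.plist"
    try:
        with open(info, "rb") as f:
            version = plistlib.load(f).get("CFBundleShortVersionString", "?")
    except (OSError, plistlib.InvalidFileException):
        version = "?"
    print(f"{bundle.name}: {version}")
```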

Apple released updates for Safari (and the underlying Webkit library used by other applications handling web-based input) for OS X 10.7 and 10.8 last week, so there are reasonable chances that this won’t immediately be a problem.

However, it is possible that Apple Mail is now supported only on 10.9, given that several mail-related vulnerabilities appear on the list of fixes in 10.9. Unless you’re particularly keen on Apple Mail you may wish to consider a different email client such as Thunderbird, or simply using webmail, until you upgrade to Mavericks.

Flash is not shipped by Apple so will likely remain supported by Adobe for the time being. Despite their change in policy after Flashback, Apple have still been distributing Java updates as soon as they are released by Oracle; given the negative publicity about Flashback it is likely they will continue doing so for the time being. The situation with Quicktime is less certain.

PDF handling is by default done through Preview.app; as part of the core operating system, it may well not receive further updates on 10.7 or 10.8, so perhaps there is some value in considering a switch to Adobe’s PDF reader on those platforms. For Office files, consider Microsoft Office (available at preferential rates to many University members), or the free (in multiple senses of the word) LibreOffice. If you are switching to third-party applications for particular filetypes, ensure they are configured as the default.

Follow good practice

A lot comes down to the good practice that we advocate all the time. Install antivirus software – it doesn’t guarantee 100% protection, but it is a lot better than nothing, and Sophos is available free of charge to members of the university. Ensure that all software checks for updates on a regular basis, at least once a week (and much more frequently in the case of antivirus), and make sure any available updates are installed promptly. Consider using a firewall: OS X includes a basic software firewall, so ensure it is enabled. A hardware firewall may offer better protection; many University colleges and departments have one in place, and standard domestic broadband routers generally include at least a basic firewall capability. Exercise caution in opening email attachments, even if they appear to come from someone you know, and in downloading software from untrusted sources.
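On the firewall point, the state of the built-in application firewall can be checked from the command line. The sketch below wraps the socketfilterfw utility in Python; the path and flags are as found on recent OS X releases, but treat this as a sketch rather than a definitive recipe:

```python
# Report whether the OS X application firewall is enabled.
# ASSUMPTION: socketfilterfw lives at this path and supports --getglobalstate,
# as on recent OS X releases; enabling it requires root privileges.
import subprocess

SFW = "/usr/libexec/ApplicationFirewall/socketfilterfw"

def firewall_enabled() -> bool:
    out = subprocess.run([SFW, "--getglobalstate"],
                         capture_output=True, text=True).stdout.lower()
    # Typical output: "Firewall is enabled. (State = 1)"
    return "enabled" in out and "disabled" not in out

if not firewall_enabled():
    print("Firewall is OFF: enable it in System Preferences > Security,")
    print(f"or as root: {SFW} --setglobalstate on")
```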

Plan on upgrading eventually

Finally, bear in mind that despite these measures, you still lack security support for the core operating system. Following the above advice is a stopgap that will prevent some (and possibly most) attacks, and buys you some time, but not infinite time – consider it advice to tide you over for perhaps a few months, but certainly not years. You still need to plan to upgrade at some point, but at a time that better suits you and your work, not Apple’s marketing department.

If you have hardware that can’t run Mavericks, and can’t afford Apple’s latest hardware offerings any time soon, remember that alternative operating systems do exist. There is a software company based in Redmond who will gladly sell you an operating system for any Mac released in the last seven years, though avoid Windows XP or you will find yourself in a similar situation next April. If you are more adventurous, free alternatives exist.

Take care and stay safe.

Posted in Apple, General Security | 1 Comment

2013 FIRST Conference

View of a park in Bangkok

Two members of OxCERT attended the 25th annual conference of FIRST (Forum of Incident Response and Security Teams), held at the Conrad Hotel in the bustling city of Bangkok, Thailand. This year’s hosts were ThaiCERT, the Electronic Transactions Development Agency and the Ministry of Information and Communication Technology. It was a packed schedule over five days, but here are some of the highlights.

The conference kicked off in grand style on Monday morning with opening remarks from Her Excellency Ms. Yingluck Shinawatra, Prime Minister of Thailand. The Prime Minister welcomed us all to Thailand and discussed the benefits the Internet can bring to all people, and the need for security to preserve those benefits.

The Prime Minister’s appearance was followed by the first keynote speech of the conference, given by James Pang, discussing Interpol’s role in facilitating international police cooperation to combat cyber crime. According to Interpol, 14 people around the world fall victim to cyber crime every second.

The second day began with opening remarks from Chris Gibson and a quick video showing the fantastic job ThaiCERT did in organising this year’s football tournament. The first session of the day was a keynote speech from Dr. Paul Vixie of the Internet Systems Consortium. Paul talked about some of the botnet takeovers he has been involved in, and some of the problems associated with sharing information from those takeovers. To address these problems the ISC has created the Security Information Exchange, designed to be a scalable framework for information sharing; this may be a useful resource for us in the future.

View of rooftops in Bangkok

On Wednesday morning Jeff Bollinger, Brandon Enright and Matthew Valites from Cisco gave a presentation titled “Winning the game with the right playbook”. During this interesting talk the team from Cisco highlighted the importance of going beyond predefined reports generated by security equipment to create succinct reports tailored to the individual environment.

The talk went on to discuss the use of Splunk to aggregate data from a variety of sources based on common fields such as timestamp and IP address. We collect information from multiple sources, and much of it is queried and correlated by hand; a service such as Splunk that could do that manipulation for us could be very useful.
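As a toy illustration of the kind of join involved (not Cisco’s playbook, just a sketch with invented field names and records), correlating an IDS alert with DHCP lease data on a shared IP address field might look like this in Python:

```python
# Toy correlation of two log sources on a common field (IP address).
# Field names and records are invented for the example; in practice the
# data would come from Splunk or similar, not be hard-coded.
from collections import defaultdict

ids_alerts = [
    {"ts": "2013-06-19T10:02:11", "src_ip": "192.0.2.15", "sig": "EXE download"},
]
dhcp_leases = [
    {"ts": "2013-06-19T09:55:03", "ip": "192.0.2.15", "mac": "00:11:22:33:44:55"},
]

# Index one source by the join key, then walk the other.
leases_by_ip = defaultdict(list)
for lease in dhcp_leases:
    leases_by_ip[lease["ip"]].append(lease)

for alert in ids_alerts:
    for lease in leases_by_ip[alert["src_ip"]]:
        print(f'{alert["ts"]} {alert["sig"]}: '
              f'{alert["src_ip"]} -> {lease["mac"]}')
```

The value of a tool like Splunk is doing exactly this sort of indexing and joining automatically, at scale, across every source you feed it.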

After lunch Tomasz Bukowski from NASK/CERT Polska and Arseny Levin and Rami Kogan from Trustwave Spiderlabs gave talks about various types of malware and some of the techniques malware authors use. It’s helpful for us to have a good understanding of the way different pieces of malware behave so we stand a better chance of detecting them on our network.

Wednesday night was the night of the conference banquet; this year we were driven through Bangkok to the Siam Niramit theatre (holder of the Guinness World Record for the tallest stage). At the theatre we were treated to an impressive show based on Thai history and culture, complete with a live elephant! After the show we had a delicious Thai meal before heading back to the hotel.

Statue near a temple in Bangkok

On Thursday morning John Kristoff of Team Cymru gave a presentation on security issues related to IPv6. As we all know, IPv6 is going to come into mainstream use sooner or later and it is likely to be well worth the time and effort to be prepared from a security point of view when it does.

Michael Jordon from Context finished the day with an interesting talk on using AI to detect malicious domains from registrar information. He described a proof of concept he has been developing which uses Bayes’ theorem to determine how likely a domain is to be malicious. The idea of using artificial intelligence for this sort of purpose is an interesting one, although as the field is still in its infancy we may have to wait some time before we can practically make use of it.
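Few implementation details were given beyond the use of Bayes’ theorem, but the general flavour is roughly as follows – a hand-rolled naive Bayes sketch in Python over invented registrar features and probabilities, not Michael’s actual code:

```python
# Naive-Bayes-flavoured sketch: P(malicious | registrar features).
# All features, likelihoods and the prior are invented for illustration;
# the proof of concept described in the talk may differ substantially.
import math

# P(feature | malicious), P(feature | benign) -- imaginary training data.
LIKELIHOODS = {
    "privacy_protected_whois": (0.70, 0.20),
    "registered_last_7_days":  (0.60, 0.05),
    "bulk_registrar":          (0.50, 0.10),
}
P_MALICIOUS = 0.01  # prior: assume 1% of newly seen domains are malicious

def p_malicious(features):
    # Work in log space to avoid underflow; assume features are independent.
    log_mal = math.log(P_MALICIOUS)
    log_ben = math.log(1 - P_MALICIOUS)
    for feat, (p_m, p_b) in LIKELIHOODS.items():
        present = feat in features
        log_mal += math.log(p_m if present else 1 - p_m)
        log_ben += math.log(p_b if present else 1 - p_b)
    # Bayes' theorem, normalised over the two classes.
    return 1 / (1 + math.exp(log_ben - log_mal))

print(p_malicious({"privacy_protected_whois", "registered_last_7_days"}))
# ~0.19: suspicious, but the weak prior keeps the posterior modest.
```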

The final day of the conference was a short one. Lauri Korta-Parn and Masako Someya from the Cyber Defense Institute Inc. gave a talk on Improving Cybersecurity Capabilities of Critical Infrastructures. The talk began with some examples of cyber attacks targeting critical infrastructure around the world, including the Stuxnet worm, which targeted uranium enrichment facilities in Iran.

We may not be processing uranium at the University but we do have IP based control systems for various pieces of equipment and must ensure that they are properly secured.

Finally all that remained was to say goodbye to the other delegates and have a final look around Bangkok before heading back to the airport for the long flight home. Overall this has been a very interesting and informative conference and has given me plenty of food for thought. FIRST and ThaiCERT have done an excellent job and I’m sure everyone will be looking forward to next year in Boston!

Posted in FIRST Conference | Comments Off