Scam Emails Offering Legal Practice Course Funding

We’ve just been made aware of scam emails received by students at other universities who have completed their law degrees. The scam falsely claims that True Personal Injury Solicitors is a government body that can assist students by partially funding their LPC course.

There are no reports as yet of the scam being received by Oxford students, but law students should keep an eye out. For more details, see the Solicitors Regulation Authority Alert.

Posted in Current Threats | 1 Comment

2014 FIRST Conference: Friday

Imperial Ballroom, Boston Park Plaza

The final day of the conference began with a keynote from Bruce Schneier of Co3 Systems, generous sponsors of Wednesday’s banquet, entitled “The Roles of People and Technology in Incident Response”. He discussed the types of attack seen today, and the contribution of network effects (and of vendor lock-in) to the IT market; these are arguably less of a problem in the security market, especially when it comes to incident response tools, but it remains hard to identify the best products, and they are generally not the most commercially successful. He went on to discuss how bad humans can be at dealing with risk, especially when it comes to investing in mitigation against things that might never happen. Nevertheless, there is a growing realisation that security incidents are a matter not of “if” but of “when”, and management are more willing to invest when they are scared. During questions Bruce touched on the subject of encryption, stating that while one-click email encryption with PGP exists, it is one click too many for most users.

Sailing on the Charles River

For the final talks I attended, Mikko Karikytö of Ericsson gave a high-level overview of an incident involving telecommunications fraud through one of their partners. This was followed by Jake Kouns and Carsten Eiram on “Evidence Based Risk Management and Incident Response”. While we may often be critical of the time it takes major software vendors to patch vulnerabilities, the situation can be far worse with manufacturers of SCADA (supervisory control and data acquisition) systems, who are relatively new to the security concepts long learned by the major IT companies. In one case a delay of 451 days was observed between the reporting of a vulnerability and patches being released.

Prudential Tower and Quest Eternal

The conference closed with a summary of some of the activities during the conference, thanks to all involved in making it a success, and not least the raffle for numerous vendor prizes. As is traditional, Masato Terada presented the final results of his attempts to meet all conference attendees, describing this in the manner of a spreading malware infection, complete with CVSS scores and data in the STIX format for the exchange of threat information.

As usual the conference has been a great success, and has included a number of enlightening talks on a huge range of topics, as well as the opportunity to meet people from a wide range of countries, organisations and perspectives. For me, some of the most interesting presentations have concerned the non-technical aspects of incident response, in particular effective collaboration between multiple teams, and the importance of regular incident response drills covering a range of scenarios, so that the organisation can respond more effectively when a major incident strikes for real.

Posted in FIRST Conference | Comments Off

2014 FIRST Conference: Thursday

Downtown Boston from the Arnold Arboretum

Day four of the conference started with a keynote from Intel’s Malcolm Harkins, “Business Control Vs. Business Velocity – Practical Considerations for Business Survivability in the Information Age”. This looked at the relationship between security teams and the needs of their businesses as a whole, with a philosophy of “protect to enable”. If security measures are seen by users as obstructive, they will work around them and potentially increase the overall business risk.

Johan Berggren of Google then spoke about digital forensics, in particular a tool named GRR (Google Rapid Response) devised to enable system forensics to be run across their systems, regardless of operating system or physical location, without the need for additional physical resources. Olivier Thonnard of Symantec followed this with a talk on the evolution of targeted attacks over the past three years. These themes then continued with Junghoon Oh of AhnLab looking at forensic analysis of the lateral movement of a targeted attack in a Windows environment, using some of the methods discussed earlier in the week.

Paul Revere: effective communicator

Peter O’Dell of Swan Island Networks spoke on the theme of “Cyber Security for Board of Directors and Senior Management”, looking at how to ensure that appropriate attention is given to cybersecurity risks at the top level within an organisation, with clear and effective communication of the risks, and realistic cost-effective proactive measures that can reduce them.

The final talk of the day looked at pBot botnets, something of an unusual family in that they take control of webservers, as opposed to the desktop and laptop systems targeted by most botnets. Vulnerabilities in popular content management systems such as WordPress and Joomla are exploited using remote file inclusion attacks to take control of the systems, with a command and control infrastructure based upon IRC but generally running on ports more usually associated with other protocols.

USS Constitution

Presentations concluded early for the day in order to make way for the FIRST Annual General Meeting. This is always an important part of the conference, and the need for all teams to be represented, either in person or by proxy, is repeatedly stressed. As well as elections for the steering committee, this year saw the approval of a major change to the structure of the organisation. Reports were presented on all major aspects of FIRST’s activities. Of particular interest to me was a comment by Seth Hanford of Cisco regarding the well-known Common Vulnerability Scoring System. Back in April, the Heartbleed bug struck, prompting Bruce Schneier to comment “On the scale of 1 to 10, this is an 11.” The current version of CVSS (version 2) scored Heartbleed a mere 5.0 (out of 10), which served both to highlight the need for an updated system and to demonstrate that a single numeric score cannot always summarise the full risk and impact of a particular vulnerability.
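
The CVSS v2 base-score arithmetic is public, and a few lines of Python reproduce that 5.0 from Heartbleed’s vector (AV:N/AC:L/Au:N/C:P/I:N/A:N); the weights below are the published v2 metric values, as a sketch rather than a full scoring implementation:

```python
# CVSS v2 base score, following the published specification's equations.
def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Heartbleed (CVE-2014-0160): AV:N/AC:L/Au:N/C:P/I:N/A:N
heartbleed = cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.275, i=0.0, a=0.0)
print(heartbleed)  # 5.0
```

The partial-confidentiality weight (0.275) is what drags the score down: the formula has no way to express that the “partial” information leaked might include private keys.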

Posted in FIRST Conference | Comments Off

2014 FIRST Conference: Wednesday

Charles River and city skyline

The third full day of the conference began with a keynote presentation by Andy Ozment of the Department of Homeland Security, entitled “The Role of DHS in Securing our Nation’s Cyberspace”, exploring the business of protecting US government, businesses and critical national infrastructure, and the challenges of outreach at board level – a question raised far too frequently is “why would anyone want to hack us?”.

Next for me was a talk on open-source security issues, followed by one on identification of the “root” cause of reported incidents. The aim of this project is to produce a simple taxonomy through which, with the aid of a flowchart, the underlying cause of a security incident can quickly be identified as belonging to a number of basic categories, including zero-day exploits and socially-engineered vulnerabilities. We already use a system of standard incident categories which are based on the consequences of incidents; such a taxonomy should help us to record at a basic level the cause of each incident too, although inevitably a substantial number are likely to be of unknown cause.

Faneuil Hall

After lunch was a talk by Paul Vixie on the Operations Security Trust project, which aims to create a thriving community of trusted security colleagues through which sensitive and confidential information can be shared, without fear that the information may be used irresponsibly. I followed this with a talk by Pascal Arends of Fox-IT entitled “Investigator of Interest – Our Philosophy of Adaptive Incident Response to Turn the Tables During an Investigation”. This considered how to respond effectively to a major intrusion while unsure of its extent, or of the extent to which the attackers are watching your response. Some tactics give far less away than others: for example, running tools such as tcpdump on a compromised server may be readily visible to the attackers; taking a copy of the traffic through a network tap is less noticeable but requires a temporary disconnection of the link; enabling a SPAN port on a switch will likely go unnoticed.

The penultimate talk of the day was one by Robert Pitcher from the Canadian Government regarding security exercises, and how to ensure that all those likely to be involved in response to a real-life incident can become familiar with their role through table-top exercises and incident simulations. This proved a most illuminating talk; it is evident that the University’s response to major incidents has at times been less than perfect and there is definite value in being better prepared so that when such incidents do strike, we can respond more quickly and effectively.

The concluding talk covered a malware analysis framework named Dorothy2. Malware analysis is a topic of particular interest to us, and while this may not be our chosen path as we develop our capabilities, it is interesting to hear about the alternatives available.

Not quite the Boston Symphony Orchestra...

Traditionally, the Wednesday evening of the FIRST conference is the conference banquet, and previous years are hard acts to follow, not least after the elephants last year. This year’s chosen venue was the seemingly more sedate Boston Symphony Hall, a gentle walk from the hotel through the Back Bay area of the city. The dinner itself took place in the main auditorium, and while it was only natural that there be a musical theme to the after-dinner entertainment, few of us quite knew what to expect. We were treated to a performance by local band Decades by Dezyne, featuring a variety of popular soul and R&B numbers, several costume changes and one or two surprise “guest appearances” including James Brown. In all a most enjoyable evening’s entertainment.

Posted in FIRST Conference | Comments Off

2014 FIRST Conference: Tuesday

Massachusetts State House

Day two of the conference began with a keynote from Gene Spafford, professor of computer science at Purdue University. Gene was a keynote speaker on FIRST’s previous visit to Boston for the 1994 conference, and compared the situation today with that of twenty years ago. Incident response teams are less the equivalent of the fire brigade and more that of janitors, always clearing up other people’s mess. He sees the security industry as applying layer upon layer of defences, trying to address critical deficiencies in computer systems and in the previous layers of defences, with systems ultimately collapsing under the sheer weight of patches; there is too little incentive for software authors to produce secure systems in the first place.

I followed the keynote with a couple of technical talks, the first by Tim Slaybaugh of Northrop Grumman entitled “Pass-the-Hash: Gaining Root Access to Your Network”. This described means of obtaining and replaying password hashes on Windows systems, avoiding the need to crack the hashes in the first place, and how to detect where such tools have been used. The following talk, by Tomasz Bukowski of CERT Polska, looked at sinkholing domains as a means of subverting malware command and control channels, and identifying infected systems and examining their behaviour. OxCERT make frequent use of a basic form of sinkholing, as well as making use of data provided by other organisations maintaining sinkholes, but we could take the process significantly further.
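
At its simplest, a sinkhole is just resolution you control: answer known-bad names with an address of your own and note who asked. A minimal sketch of that logic (the domain list and addresses below are purely illustrative, not any real feed):

```python
# Minimal DNS sinkhole logic: answer known-bad names with an address
# we control, and record which client asked (a likely infection).
SINKHOLE_IP = "192.0.2.1"                    # address we control (illustrative)
MALICIOUS = {"evil.example", "c2.example"}   # feed of known C&C domains

infected = []  # (client, queried name) pairs seen hitting the list

def resolve(client_ip, qname):
    """Return our answer for qname, sinkholing known-bad domains."""
    if qname.lower().rstrip(".") in MALICIOUS:
        infected.append((client_ip, qname))  # flag the querying host
        return SINKHOLE_IP
    return None  # hand off to normal resolution

resolve("203.0.113.7", "c2.example.")
print(infected)  # [('203.0.113.7', 'c2.example.')]
```

In production this sits behind a real resolver (for instance via response-policy zones), but the principle is exactly this lookup-and-log.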

Swan Boat, Public Garden

After lunch, Mitsuaki Akiyama of NTT-CERT discussed “honeytokens”, taking the concept of “honeypot” systems further to use decoy credentials, database entries and documents to track malicious behaviour and to explore the links between attackers. This was followed by Peter Kruse talking on the Tinba banking trojan, its means of propagation and the intelligence that can be gained on the command and control infrastructure and on those behind it. While Tinba was new to me, we encounter similar malware on almost a daily basis. Finally, a group funded by the US Department of Energy discussed the challenges of data-sharing and the conversion of data between the many different formats in use by different teams.

“Make way for ducklings”, Public Garden

The final talks of the day were a sequence of “lightning” talks, run to a strict five-minute time limit, offering a brief insight into a wide range of topics, including the activities of several teams around the globe, the challenges of scaling vulnerability identifiers to cope with more than 10,000 vulnerabilities per year, and FIRST regular Masato Terada on his annual project to meet as many conference attendees as possible. This was followed by a vendor showcase reception, offering the opportunity to socialise and to speak to a wide range of security vendors, both familiar and new to us, about the products and services on offer.

Posted in FIRST Conference | Comments Off

2014 FIRST Conference: Monday

Park Street Church from Boston Common

Once again it’s time for the annual FIRST Conference. This year it’s in Boston, one of the oldest cities in North America and packed with history at almost every turn. The venue is the Park Plaza hotel, previously the venue of the 1994 FIRST Conference, almost an eternity ago when it comes to computer security.

The conference started yesterday evening with a reception, a welcome opportunity to catch up with familiar faces and to meet new people. This year sees a record turnout, with well over seven hundred attendees.

The presentations began early Monday morning with a special keynote by two members of the FBI Boston Division concerning the response to the Boston Marathon bombing in April 2013, a sombre reminder that for some, security is literally a life-and-death matter.

Tributes to the Boston Marathon victims, Arlington Street Church, August 2013

Following a break, the presentations split into three streams. Often two or more presentations of interest will coincide and today was no exception; I eventually decided upon those I considered to offer the most to OxCERT as a whole in spite of considerable personal interest in the alternatives. First up was David Bianco of FireEye, speaking on Enterprise Security Monitoring. This introduced such topics as the “Cyber Kill Chain” and the “Pyramid of Pain”, and how best to use them to gain the most insight and threat intelligence. I followed this with Pawel Pawlinski of the Polish national CERT on automated data processing, a topic of considerable interest to us as we struggle to keep on top of the information we receive.

Marathon Sports, Boylston Street

After lunch, two members of the JPCERT Co-ordination Center discussed the problems of open DNS resolvers and their approach to mitigation of the problem with the aid of a simple check website. This was followed by Ben April of Trend Micro on Bitcoin for the Incident Responder, a good introduction to the best-known “crypto-currency” and the workings of transactions.
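
The check itself is simple in principle: send a resolver a recursive query for a name it is not authoritative for, and see whether it answers with the RA (recursion available) flag set. A rough standard-library sketch of that approach (the general technique, not JPCERT’s actual implementation; only probe resolvers you are responsible for):

```python
import socket
import struct

def build_query(name, qtype=1):
    """Build a minimal DNS query for `name` with the RD (recursion
    desired) bit set; qtype 1 is an A-record query."""
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode()
                     for p in name.rstrip(".").split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def recursion_available(response):
    """True if the RA bit (0x0080) is set in a DNS response header."""
    flags = struct.unpack(">H", response[2:4])[0]
    return bool(flags & 0x0080)

def is_open_resolver(ip, timeout=2.0):
    """Ask `ip` to recurse for an external name; an answer with RA set
    suggests an open resolver."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(build_query("example.com"), (ip, 53))
        data, _ = s.recvfrom(512)
        return recursion_available(data)
    except socket.timeout:
        return False
    finally:
        s.close()
```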

Following a panel discussion on “Developing Cybersecurity Risk Indicators – Metrics” was what for me proved the most interesting talk of the day, from Steve Zaccaro, professor of psychology at George Mason University. This stressed the need not only for effective teamwork within incident response teams but to take teamwork a stage further, with effective working between multiple teams. Steve followed this by leading a “Birds of a Feather” session on CSIRT Effectiveness, which prompted much discussion as to approaches that do and do not work among the wide variety of incident response teams represented. These two sessions, and subsequent discussions with other delegates, have for me prompted much reflection on OxCERT’s own processes and our working relationships with other teams, both within IT Services and across the University.

Posted in FIRST Conference | Comments Off

FIRST Technical Colloquium 2014, Amsterdam

In April two members of OxCERT were fortunate enough to attend the FIRST Technical Colloquium in Amsterdam, kindly hosted by Cisco at their Campus offices. The event was well attended by representatives from national CERTs and SOC teams, including a significant presence from Cisco themselves. As always the talks were both interesting and informative; this post will touch on a few of the highlights.

Jeremy Junginger of Cisco gave an enlightening and entertaining talk entitled Threat Actor Techniques. He discussed the ‘workflow’ of an attack, detailing how an attacker can use an initial foothold to gain further privileges. Based on a real-world scenario, Jeremy’s hands-on demonstration of privilege escalation gave the audience a unique (yet somewhat chilling) insight into how simple, everyday choices made by system and security administrators can quickly lead to the complete takeover of an otherwise locked-down system. Emphasising the value of ‘lateral movement’ within a compromised network, Jeremy quickly directed his attack around standard defences rather than through them, eventually leading to a compromise of administrator credentials and exfiltration of arbitrary data within the short timeframe of the presentation; free of the obligation to present and explain, the attack could have been successful in under 10 earth minutes.


By contrast, Dave Jones, also from Cisco, gave a presentation on mitigating attacks that target administrator or ‘root’ credentials. This followed on neatly from his talk at the Bangkok FIRST conference. Dave focused on the application of two-factor and multi-factor security, and how widely it can, and arguably must, be deployed in order to preserve the sanctity of administrative privilege. Whilst many of the techniques presented should not be new information to most security professionals, few of us can truly claim to follow all of them as rigorously as we should, and being reminded to keep our house in order is no bad thing!

Henry Stern of Farsight gave an interesting talk about DNStap, a tool which allows for efficient logging of DNS transactions without the need for packet capture. The capturing stage of traditional DNS monitoring has always proven the most resource-intensive, as many of the system functions involved are fundamentally blocking in nature. DNS logging at gigabit line speeds is challenging enough, and traditional approaches simply do not scale efficiently enough once the 10G barrier is breached. DNStap achieves its goal of efficient DNS logging by integrating directly with the DNS server software itself, bypassing the need to create and analyse intermediate packet captures; this approach supports many common implementations and may represent the future standard approach to the hard problems of tracking and monitoring malicious domains, such as the ‘fast-flux’ techniques employed by the Gameover-ZeuS malware networks.

Seth Hanford, also of Cisco, talked about CVSS (Common Vulnerability Scoring System) version 3. Classification of vulnerabilities may not feature in most security professionals’ top ten most interesting subjects, but every single vulnerability report and security bulletin you read will refer to that standard CVSS number somewhere in the reference trail. The integrity and relevance of CVSS have kept it in regular use by the entire IT industry for over a decade; having a standard way to quickly assess the severity of a given vulnerability is very valuable and something which OxCERT regularly make use of.

Martin Lee, again of Cisco, gave a presentation on a concept of great debate within Cisco (it is literally postered across many of the walls of the Cisco campus): the “Internet of Things”. Distinct from, yet intertwined with, the network of servers and information content we know and understand, the Internet of Things refers to the growing percentage of networked devices which are real-world functional objects. With the advent of IPv6 and the aggressive conservation of IPv4 via NAT, everything from your phone and smartwatch to your fridge and air conditioning is becoming globally addressable, and therefore accessible. The recently publicised attacks against Smart TVs with internet connectivity are a haunting vision of things to come; as the costs of storage and processing power continue to fall steadily, we can expect connectivity to become a pervasive element of nearly all electrical appliances. Martin went on to highlight some of the benefits of this expansion: smart buildings that can monitor and regulate their own power usage and temperature, and intelligent transport networks that can re-configure themselves to avoid congestion. Of course, being a security talk, the meat of the presentation consisted of the potential risks of creating a network of newly automated devices with the influence of the operator strongly diminished: what if a malicious person attacked your data centre’s environmental systems and switched off the alerts, promptly followed by the air conditioning? Relying upon a tiny ARM9 core and a Broadcom wireless chip to tell you about fifty million pounds’ worth of burning silicon seems foolish in this scenario.


Overall, attending the TC was thoroughly worthwhile, and confirmed to us the value of the smaller format as compared to the full FIRST Conference. The more intimate surroundings permitted the exchange of some frank questions and answers that might not have found expression in a wider setting, and the talks certainly gave our delegates plenty to think about and report back on.

Posted in Uncategorized | Comments Off

Gameover for P2P Zeus?


Over the past few days you may have spotted headlines in the press that appear to claim the UK has two weeks to save itself from a massive cyber attack. You may be asking: what is this threat, and what is the University doing about it? Excellent questions, but let’s start from the beginning.

On the 2nd of June the UK National Crime Agency announced that it had, as part of an international collaboration, disrupted the Gameover Zeus botnet (aka GOZeus or P2PZeus), hindering the ability of infected machines to communicate with one another or the criminals behind the botnet.

What is Gameover Zeus?

Gameover Zeus is malware that allows criminals to completely control an infected computer. It’s typically used to steal online banking credentials and has also been used to spread Cryptolocker ransomware.

Once a computer is infected (usually via a malicious email attachment or by visiting a website that drops malware) it will attempt to join a peer-to-peer network of other compromised machines, becoming a bot in the botnet.

Instructions from the botherders (criminals who run the botnet) are passed to the bots via the p2p network, effectively masking the location of the command and control infrastructure. Stolen information is passed back from the bots in the same way. This greatly frustrates any attempt to shut down the botnet: take out one C&C server and another will spring up and join the network.

Instructions from the botherders are cryptographically signed. Otherwise it would be possible to impersonate a C&C server and send out an instruction to have the malware deactivate itself. That would be nice, but life’s not that easy.
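
The principle is ordinary public-key signing: every bot carries the public key and can verify commands, but only the botherders hold the private key needed to create them. A toy illustration with textbook RSA (tiny illustrative parameters, nothing like real key sizes, and real systems sign a hash of the message rather than the message itself):

```python
# Textbook RSA with tiny toy parameters (p=61, q=53), purely to show
# the asymmetry: bots can check commands but cannot forge them.
n, e = 3233, 17     # public key, baked into every bot
d = 2753            # private key, held only by the botherders

def sign(m):                  # botherder side: requires the private key
    return pow(m, d, n)

def verify(m, sig):           # bot side: public key only
    return pow(sig, e, n) == m

command = 65                  # stand-in for a hashed instruction (< n)
sig = sign(command)
assert verify(command, sig)   # genuine instruction accepted
assert not verify(64, sig)    # tampered instruction rejected
```

Since a bot only ever holds (n, e), even full reverse-engineering of the malware does not let a defender mint a “deactivate” command.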

Simplified diagram of the Gameover Zeus botnet

What happens in two weeks’ time?

Details are scarce as to exactly how the takeover has been achieved and the NCA cautions that the bad guys are likely to regain control soon. However, they estimate that we have a grace period of approximately two weeks before the Gameover botnet comes back into use.

The NCA is encouraging everyone to use this time to eradicate existing Gameover Zeus infections and also to ensure their machines are as resilient as possible.

Once the criminals regain control they may well make a concerted effort to infect more machines. They may also seek to update their malware to prevent this happening again; without more details about the takedown it is difficult to guess.

What is the University doing about this?

OxCERT have been tackling Zeus malware in its various forms since approximately 2008 and we will be looking into what we can do to detect even more infections in the future.

We welcome the increased attention on Zeus, which is – and always has been – a serious problem. However, we are unlikely to be in a much worse situation in two weeks’ time than we were before the takedown.

For now our advice remains the same: ensure you use supported operating systems and software (and keep them up to date); install an appropriate anti-virus product, again keeping it up to date; and, most importantly of all, remain vigilant, particularly for unsolicited emails with attachments or web links.

Also bear in mind that unscrupulous individuals may seek to take advantage of the public anxiety surrounding Zeus. If you receive a notification that you are infected please take a moment to verify the source of the information.

Jim Linwood

To Summarise 

Zeus is a serious global problem and we’re pleased to see an international effort to tackle it. But in the meantime, stay safe and don’t panic.

Posted in General Security | Comments Off

On Reflection

Emerging Denial-of-Service Attacks and You

img-source Flickr (George Ellenburg) under CC BY-NC-SA 2.0

“Those who cannot remember the past are doomed to repeat it” (George Santayana)

With the threat of the Heartbleed crisis steadily diminishing due to a worldwide effort to patch and secure SSL, the attention of the security community must return to the issues displaced by the sheer severity of that infamous bug. Shortly before the announcement of the Heartbleed vulnerability, those with exceptional memories may recall a number of increasingly concerning reports centred on certain UDP-based protocols and their susceptibility to abuse. Denial-of-Service (DoS) attacks of unfamiliar patterns and rapidly expanding capability were witnessed, exploiting holes in long-established and familiar internet protocols to terrible effect. In the wake of several successful attacks against security actors, US-CERT compiled and published Alert TA14-017A; today we explore the conclusions of this report and the nature of the threat it describes.

The DoS problem

CERT teams globally have become painfully aware of the increasing complexity of DoS techniques, as the perpetrators seek to evade or overwhelm the extensive technical countermeasures now in place to mitigate traditional DoS tools. In the wake of many well-publicised (and successful) DoS campaigns against commercial and political entities, the enormous technical focus applied to the DoS problem has done much to limit the effectiveness of traditional exhaustion attacks.

Copyright © 1999 - 2014, Arbor Networks, Inc. All rights reserved.

Even ‘distributed’ DoS attacks that make use of hundreds or even thousands of compromised ‘bots’ to sustain the assault can be strongly attenuated by cunning use of ‘tarpitting’ to slow down attacking machines, or via complex VM-based decoy systems. At a higher level, manipulation of Domain Name System configuration can ‘blackhole’ much of the malicious traffic into digital oblivion, or even reverse the attack back against the systems of the perpetrators.

These techniques have always pivoted upon availability of resources; the side capable of marshalling more was generally more likely to prevail. As with all things, this delicate situation would be destined to change.

The changing face of evil

In a twist of irony, it was the ever-faithful DNS protocol that gave researchers a first glimpse into a new protocol abuse vector of potentially surpassing potency; the method was termed ‘DNS amplification’. A legitimate TCP/IP exchange can be thought of almost as a normal conversation between friends: each person speaks in turn according to understood rules, and each makes an effort not to talk over the other nor to monopolise the conversation. Responses are roughly in proportion to questions: “Hey, how are you?” “I’m fine thanks, you?”

An amplified exchange is more like a police officer asking for “License and registration,” a short inquiry prompting you to hand over a proportionally huge amount of data in return; in the digital realm, an innocent DNS server will respond to a terse inquiry with an immensely involved response. DNS is, after all, fundamentally an information service; why should it not provide any and all data asked of it?

DNS Protocol Amplification

Further, the more authoritative the DNS server is with respect to the rest of the network, the more data it will return in its responses; the security protocol DNSSEC actually worsens this situation, as a DNSSEC-equipped server will respond with its entire cryptographic profile as well. Fortunately this can be mitigated via rate-limiting of responses by conscientious administrators, but it highlights how even systems designed to improve network integrity can be turned to nefarious purpose.

Pictured: early chargen daemon

In an amplification attack, an attacker manipulates a common protocol like DNS into this highly asymmetric exchange, a small transmission of data provoking a far larger response. Another classic example is the ‘chargen’ service, which is highly asymmetric by design.

Chargen responds to a connection with lines upon lines of ASCII, originally intended for network and application testing; viewed through the lens of an ‘amplifier’, chargen multiplies incoming data by a factor of several hundred.
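
The classic pattern is easy to reproduce: 72-character lines that each start one position further through the printable ASCII set, emitted endlessly to any connection. A sketch of the generator, following the traditional RFC 864 rotating pattern:

```python
# The classic rotating chargen pattern (RFC 864): 72-character lines
# stepping through the 95 printable ASCII characters.
CHARSET = "".join(chr(c) for c in range(32, 127))  # ' ' .. '~'

def chargen_line(i):
    """Return the i-th line of the rotating pattern: 72 characters,
    starting one position further into the set than the line before."""
    doubled = CHARSET + CHARSET        # makes wrap-around slicing trivial
    start = i % len(CHARSET)
    return doubled[start:start + 72]

for i in range(3):
    print(chargen_line(i))
```

One small connection triggers this stream indefinitely, which is precisely what makes the service so attractive as an amplifier.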

DNS itself will offer a response over fifty times as large as the requesting data; other protocols such as NTP can amplify even more strongly. A UDP service is like a person who simply talks and does not listen, keeping no track of the conversation or the other participants, simply sending the data they believe is needed without acknowledgement or verification; say to a UDP speaker “Hey, tell me your life story” and they will happily ramble on long after you fall asleep or leave.
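
Some back-of-envelope arithmetic shows why this matters; the bandwidth amplification factors below are those published in US-CERT Alert TA14-017A (DNS is quoted there as a range of 28 to 54, so the upper bound is used):

```python
# Back-of-envelope reflection arithmetic using bandwidth amplification
# factors from US-CERT Alert TA14-017A.
FACTORS = {"DNS": 54, "chargen": 358.8, "NTP (monlist)": 556.9}

uplink_mbps = 10  # a lone attacker's modest spoofing-capable uplink

for proto, factor in sorted(FACTORS.items(), key=lambda kv: kv[1]):
    print(f"{proto:14} {uplink_mbps * factor:7.0f} Mbps arriving at the victim")
```

Ten megabits of spoofed NTP monlist queries thus lands over five gigabits per second on the victim, with none of it traceable directly to the attacker’s own address.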

Through a glass, darkly

Initially this would seem to be little more than a curiosity; of what benefit is the potential to direct greater traffic against oneself? In an ideal connected world this would remain a peculiarly self-destructive form of DoS with little potential for propagation; unfortunately the systems that underpin the internet are far from perfect. Services that rely upon UDP are much more open to abuse than their TCP cousins, as they offer the opportunity to ‘reflect’ internet traffic via simple source forgery.

UDP Spoofing and DNS amplification

By transmission of a small UDP request with a victim’s IP substituted as the source, a service is compelled to respond in an amplified manner, and direct this enlarged response against an arbitrary victim.

Returning to our conversation analogy, an amplified reflection attack is like ordering fifty pizzas from ten different shops, all delivered to the same house. A short phone call is all it takes: the pizza shops do not verify that the house they deliver to is the one that ordered the pizza; they just show up with a pile of boxes and expect the 'customer' to accept them. Multiply this by a few hundred pizza shops and a few thousand pizzas per second and you have an idea of how disruptive an rDDoS can be.


And they’re all topped with double anchovies and lutefisk

This victim could be an individual, a website, or an entire commercial or national entity, with a Denial-of-Service condition as the result. When this task is automated and divided amongst the many thousands of compromised hosts that make up the average 'botnet', the result is a Reflected Distributed Denial-of-Service (rDDoS) attack of previously unseen capability. Websites and networks are rendered all but inaccessible for the duration of the attack, with minimal overhead required on the part of the attacker and a strong degree of anonymity thrown into the bargain.

This remains possible because many ISPs and NSPs worldwide still fail to observe best practice; it is not possible to substantively alter the source of an internet packet if the upstream routers apply simple network ingress filtering as described in the IETF’s BCP38 document. By rejecting packets that appear to originate from outside the proper network, the global reach and impact of UDP reflection attacks would be sharply curtailed. Sadly, global compliance with this practice seems far off.

Anatomy of an NTP-Reflection Attack

Weaving together reflection and amplification handily eliminates the extreme resource deficit a lone attacker faces when hoping to DoS a small corporate entity or educational institution; rather than rely upon his own connectivity resources in pitched battle against a better-equipped target, even a minor malcontent is able to leverage significant bandwidth against his prey by co-opting the greater resources of a third party.

Multiply this capability by thousands of attacking machines in a distributed attack, and we witness the immense 400Gb/sec firestorms that so recently slammed CloudFlare and Spamhaus, causing difficulties for even these security supergiants until the traffic could be brought under control.

With great power…

The emergence of Reflected DDoS is of particular concern to organisations with sophisticated backbone infrastructures such as the University; the sad reality is we now live in a connected world where 300Gbps DoS attacks can be considered ‘the norm’. Remember that orange graph at the top of this blog? The bar for 2014 would be over four times higher and the year is far from over. However spirited and effective the defence of our own systems and users may be, this new strain of DDoS co-opts legitimate systems on the University network into ‘attacking’ external organisations and networks. This presents a significant risk to the public-facing image of the University, as well as our reputation with JANET and its constituents, and could result in connectivity issues for the wider University-assigned IP address spaces.

The grim future that awaits us all

A clear burden of responsibility falls upon us to ensure that our significant technological resources are not leveraged to attack unwitting organisations on the wider internet, most of whom will lack the resources to defend themselves from the sheer volume of traffic these attacks can direct.

With the combined bandwidth of our Janet connections, the University systems could easily translate into a multi-Gigabit firehose of UDP DoS traffic if successfully abused. This cannot be allowed to happen.

Our response

It seems clear that the risk presented by this form of abuse has risen high enough to merit a proactive mitigation. In accordance with OxCERT’s mandated responsibilities towards the integrity of the University infrastructure, specific traffic blocks will be enacted across all units, services and sponsored connections. Measures are in place to minimise the impact on legitimate traffic and services. OxCERT’s strategy compares with analogous actions taken by ISP- and NSP-level CERT entities as part of a concerted global effort to diminish the effectiveness of rDDoS campaigns, as the true danger of inaction against this threat becomes clearer.

The onus is upon us to act responsibly in the face of the evolving challenges to network and information security, and as ever the priority must lie with overall service integrity and the protection of the University’s good standing. We expect any detriment to service levels to be minor or negligible, particularly in contrast to the benefits realised by strengthening and consolidating the framework upon which those services ultimately depend. It is worth noting that even a few vulnerable machines within a unit – for example NTP servers responding to Mode 6 queries – are fully capable of saturating the connection for that unit, effectively cutting the organisation off from the rest of the internet and the University, while simultaneously propagating a DoS against an external entity.
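The Mode 6 queries mentioned above are easy to recognise on the wire, which is what makes targeted blocking feasible: the mode lives in the low three bits of the first byte of every NTP packet. A hedged sketch of how such a filter identifies them (field layout per the NTP packet format; the helper names are ours):

```python
# The first byte of an NTP packet packs Leap Indicator (2 bits),
# Version (3 bits) and Mode (3 bits). Mode 6 marks a control query,
# the kind abused for reflection; a filter can spot it without
# parsing the rest of the packet.
def first_byte(leap=0, version=2, mode=6):
    """Assemble the LI/VN/Mode byte of an NTP header."""
    return (leap << 6) | (version << 3) | mode

def ntp_mode(packet):
    """Extract the mode field from a raw NTP packet."""
    return packet[0] & 0x07

query = bytes([first_byte()]) + b"\x00" * 11  # minimal control header
print(ntp_mode(query))  # 6
```

A border filter matching mode 6 (and the notorious mode 7 'monlist') traffic can neutralise these reflectors while leaving ordinary mode 3/4 client-server time synchronisation untouched.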

In a way we as service and network providers must approach this problem with a similar attitude to fire prevention; it is everyone’s problem, and everyone’s responsibility. As Smokey the Bear says, “Only you can prevent Reflected Distributed Denial-of-Service attacks”.


Smokey hates unsecured NTP servers.

As ever, we must strive to strike the proper balance between security and usability; service limitations will be imposed only in cases where we anticipate no adverse impact to legitimate University business. We strongly encourage colleges and departments to limit their externally-accessible services to those which are both necessary and properly secured, and OxCERT will work with units to help ensure everyone’s goals are achieved.

Posted in General Security | Comments Off

Open Heart(bleed) Surgery

If you haven’t heard by now of the so-called “Heartbleed” Internet security bug that last week sent the Internet security community into something of a frenzy, then you probably don’t need to worry and almost certainly won’t be reading this!  For those of us who use the Internet and watch the news however you may want to read on.


“Heartbleed” is the name given to a recently discovered flaw in a specific implementation of one of the world’s most widely used Internet security protocols, SSL/TLS.  The affected software, called OpenSSL, is used to protect sensitive data (such as usernames, passwords, payment details etc.) sent backwards and forwards between your computer and “secure” websites.  Although it is hard to know precisely how many websites are affected by this vulnerability, it is estimated that about two thirds of the world’s websites use OpenSSL and that around 17% of sites are vulnerable to this bug.  That is about half a million websites and, since they may have been vulnerable since the bug was introduced into the software (as far back as 2011), it is rightly being treated as a pretty big deal.  As renowned security expert Bruce Schneier put it: “On the scale of 1 to 10, this is an 11.”

Unsurprisingly, Heartbleed has attracted a lot of attention, but this has led to confusion for many, with, for example, conflicting advice on whether and when to change passwords.  Worrying and panicking, however, won’t do anyone any good, so what are the risks, what should you do, and what is the University of Oxford doing in response?

What are the Risks?

Well the good news is that once the problem was noticed the response has been pretty effective with many major service providers having patched the vulnerability already.  The trouble is that implementations of OpenSSL may have been vulnerable for over two years.  How much of a problem this actually is nobody really knows at the moment, but the risk is that cyber-criminals may have been aware of the vulnerability before the good guys were.  So far though, there have been no reports of widespread exploitation (either before or after the bug was announced) and, although an attack against a website you use could have disclosed sensitive information (such as passwords, payment details etc.) it would be more difficult for attackers to target specific information.  In other words, even if vulnerable sites you use were exploited it is far from certain that any of your details will have been exposed.  I’ve no intention of explaining how the exploit works but if you want a decent, non-technical, explanation as to why this is the case then look no further than xkcd.
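For the curious, the essence of the bug can be mimicked in a few lines. This is a hedged toy model only: the real code is C inside OpenSSL's TLS heartbeat handling, and every name below is invented. A heartbeat request carries a payload *and* a separate length field; the broken code trusted the stated length and echoed that many bytes from memory, spilling whatever sat beyond the real payload.

```python
# Toy model of the Heartbleed over-read. MEMORY stands in for the
# process heap: the genuine payload followed by adjacent secrets.
MEMORY = bytearray(b"HEARTBEAT-PAYLOADsecret-key-material-and-passwords")

def heartbeat_vulnerable(payload_start, claimed_len):
    # Bug: no check that claimed_len matches the payload actually
    # received, so the echo reads past it into adjacent memory.
    return bytes(MEMORY[payload_start:payload_start + claimed_len])

def heartbeat_fixed(payload, claimed_len):
    # The patch: silently discard requests whose stated length
    # exceeds the payload actually received.
    if claimed_len > len(payload):
        return b""
    return payload[:claimed_len]

real_payload = b"HEARTBEAT-PAYLOAD"      # 17 bytes actually sent
leak = heartbeat_vulnerable(0, 40)       # ...but 40 bytes claimed
print(leak)                              # spills bytes beyond the payload
print(heartbeat_fixed(real_payload, 40)) # b'' - request rejected
```

The attacker never chooses *which* memory leaks, only how much, which is why wholesale exploitation disclosed data indiscriminately rather than targeting specific secrets.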

So what should I do?

Well, as mentioned, first of all don’t panic.  Changing passwords is a good idea (and we’ll come to that in a bit) but apart from that there isn’t much you can do about what has already happened.  What you can do is take this opportunity to improve your online security practices.  Remember that this vulnerability is not a weakness in the underlying protocols that secure our Internet traffic, but a vulnerability in software that implements them.  In other words, human error (you can forget conspiracy theories in this case.  No, really!).  This is, perhaps, a timely reminder that we shouldn’t take security and privacy online for granted, and that we can all play a part in protecting ourselves from the risks.  Good security happens in layers!  If you don’t use good, unique passwords for different sites and don’t use 2-factor authentication where it is available, then now might be a good time to start.  Many are advising users to start using a password manager such as LastPass or KeePass when changing their passwords.  Similarly, now is the time to start following good standard advice like regularly checking your bank statements.

Keep Calm and Use the Toolkit

You should also be aware that this vulnerability is very likely to lead to an increase in phishing scams.  Since pretty much everyone who uses the Internet is being asked to change their passwords, the bad guys are likely to want a piece of this action and use the opportunity to send round fake emails asking for passwords and/or linking to fake sites.  Be aware of this threat and, if you are in any doubt as to whether an email (or phone call for that matter!) is legitimate, then ask someone technical for help (perhaps your local IT support staff or the IT Services help desk).

If you want advice on good practice when it comes to online security (including how to spot phishing emails) then why not check out our information security website or, better still, book on one of our lunchtime courses which cover what you need to know and do.

So should I change my passwords?

Yes it is probably a good idea but before you change your password for any individual site you might first want to check:

  1. Was the site affected;
  2. Has the organisation patched its systems;
  3. Have they changed their SSL certificates; and
  4. Have they told you it has been fixed?

It can sometimes be hard to get clear information on this but one site has come up with a decent list of well-known organisations and summarised their position.
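The ordering of those checks matters: change a password on a site that is still vulnerable, or whose compromised certificate has not been replaced, and the *new* password may be exposed too. The four-step checklist above can be sketched as a simple decision rule (function and parameter names are ours, for illustration):

```python
def safe_to_change_password(was_affected, patched, cert_reissued, confirmed_fixed):
    """Only change a password once the fix is complete: changing it
    over a still-vulnerable connection could expose the new one as
    well. Mirrors the four checks in the list above."""
    if not was_affected:
        return True  # site never ran the broken OpenSSL; change freely
    return patched and cert_reissued and confirmed_fixed

# A site that has patched but not replaced its certificate is not
# yet safe, since a stolen private key would still work:
print(safe_to_change_password(True, True, False, True))  # False
```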

What about my University passwords and what is the University doing about this problem?

Webauth

Well, for the last week we’ve been assessing the scale of the problem within Oxford and, where possible, applying fixes.  The response from both central IT Services and amongst the many IT support staff across the departments and colleges has been swift and impressive.  The University takes your security and privacy online very seriously.  The good news is that most of the central services that deal with passwords (that we’ve assessed so far anyway) weren’t vulnerable to this attack.  This includes Nexus (email and calendaring), Webauth (used for Single Sign On) and VPN.  However, Oxford is a very complex organisation when it comes to IT, so let’s not break out the champagne and look smug just yet.  Because some of the backend systems that interact with our main services were running vulnerable versions of OpenSSL, it is possible that some credentials may have been exposed.  I ought to stress at this point that we believe the actual risk that any passwords have been exposed on a large scale to be very low.  However, wherever we perceive that this has been a possibility, we are making users change their passwords.  I’ve tried to summarise the position on a per-credential-type basis below:

Single Sign On (SSO)/Oxford passwords

These are the passwords you use for Nexus and for SSO-protected resources.  Neither Webauth, Nexus nor the Shibboleth service is affected by this vulnerability, nor is the production SMTP service that is used by some for sending mail.  However, a test SMTP environment was vulnerable and, although this isn’t used directly to handle any live credentials, there is a theoretical attack that could have affected those that use the SMTP service.  There is no evidence this has happened and we think the risk is extremely low.  Nonetheless, if you fall into this category we will be expiring your password and contacting you to ask you to change it as a precaution.

For everyone else: you should change your password if you are concerned at all, or if you use it anywhere else.

Remote Access Passwords

These are the passwords used (mostly) for the VPN service which, again, was not directly vulnerable.  However, one of the backend systems that deals with credentials was vulnerable for a limited period and, if you changed or set a remote access password within that period (approximately the last year), then a successful attack is also theoretically possible.  Again, we feel the risk is very low, but this does affect a greater number of users than for SSO passwords.  So we will also be expiring those potentially affected passwords and contacting users.

For everyone else – change your password if you are at all concerned and/or if you use the same password elsewhere.

HFS passwords

HFS is the backup service offered by IT Services for staff and postgraduate students. Again the primary service is unaffected by the vulnerability but, similarly to remote access passwords, it is possible passwords could have been exposed via a supporting service.  Again there is little risk that this could be used in any meaningful attack and, as it happens, the HFS service already automatically renegotiates passwords with the client software and so we are considering the merits of making sure this happens sooner than usual.

In other words there is nothing you need to do – affected passwords will be changed automatically and you won’t even notice.

What else?

Of course this only covers central services and the University operates in a very devolved way.  Unfortunately we can’t answer questions about all services offered by departments and colleges so if you want to know more you should ask your department and/or college.

What about other sensitive data?

Indeed, this vulnerability does not just affect passwords, and the University runs many systems that handle personal data, financial data and other confidential information.  We are continuing to investigate all central services to see whether or not they could have been vulnerable to this bug.  We’ll therefore be reporting further when we have all the information we need.  In the meantime there is no evidence that any of your sensitive or personal information has been placed at risk.

To Summarise

This is clearly a very serious security bug and it has had a significant and far-reaching effect on service providers all over the Internet.  However, the bug has a fix which has already been widely deployed and, whilst we don’t yet know the overall impact, the worst-case scenario doesn’t seem to be the most likely outcome.  We should all take this as an opportunity to improve our online security practices and to take responsibility for our own security and privacy as far as is possible.  Within the University we are taking the vulnerability very seriously: we are investigating the potential impact as thoroughly as possible and, where we see any risk to end-users, taking appropriate action.  We will continue to do so, along with all of the other activities we carry out to protect your security and privacy online.

Posted in Information Security | Comments Off