Emerging Denial-of-Service Attacks and You
“Those who cannot remember the past are doomed to repeat it”
With the threat of the Heartbleed crisis steadily diminishing thanks to a worldwide effort to patch and secure SSL, the attention of the security community must return to the issues displaced by the sheer severity of that infamous bug. Those with exceptional memories may recall that, shortly before the announcement of the Heartbleed vulnerability, a number of increasingly concerning reports centred on certain UDP protocols and their susceptibility to abuse. Denial-of-Service (DoS) attacks of unfamiliar pattern and rapidly expanding capability were witnessed, exploiting holes in long-established and familiar internet protocols to terrible effect. In the wake of several successful attacks against security actors, US-CERT compiled and published Alert TA14-017A; today we explore the conclusions of this report and the nature of the threat it describes.
The DoS problem
CERT teams globally have become painfully aware of the increasing complexity of DoS techniques, as the perpetrators seek to evade or overwhelm the extensive technical countermeasures now in place to mitigate traditional DoS tools. In the wake of many well-publicised (and successful) DoS campaigns against commercial and political entities, the enormous technical focus applied to the DoS problem has done much to limit the effectiveness of traditional exhaustion attacks.
Even Distributed DoS (DDoS) attacks that make use of hundreds or even thousands of compromised ‘bots’ to sustain the assault can be strongly attenuated by cunning use of ‘tarpitting’ to slow down attacking machines, or via complex VM-based decoy systems. At a higher level, manipulation of Domain Name System configuration can ‘blackhole’ much of the malicious traffic into digital oblivion, or even reverse the attack back against the systems of the perpetrators.
These techniques have always pivoted upon availability of resources; the side capable of marshalling more was generally more likely to prevail. As with all things, this delicate balance was destined to change.
The changing face of evil
In a twist of irony, it was the ever-faithful DNS protocol that gave researchers a first glimpse of a new protocol-abuse vector of surpassing potency; the method was termed ‘DNS amplification’. A legitimate TCP/IP exchange can be thought of almost as a normal conversation between friends: each person speaks in turn according to understood rules, and each makes an effort not to talk over the other nor to monopolise the conversation. Responses are roughly in proportion to questions: “Hey, how are you?” “I’m fine thanks, you?”
An amplified exchange is more like a police officer asking for “License and registration,” a short inquiry prompting you to hand over a proportionally huge amount of data in return; in the digital realm, an innocent DNS server will respond to a terse inquiry with an immensely involved response. DNS is, after all, fundamentally an information service; why should it not provide any and all data asked of it?
DNS Protocol Amplification
Further, the more authoritative the DNS server is with respect to the rest of the network, the more data it will return in its responses; the security extension DNSSEC actually worsens the situation, as a DNSSEC-equipped server appends signatures and key material to its responses, inflating them further. Fortunately this can be mitigated by conscientious administrators through response rate-limiting, but it highlights how even systems designed to improve network integrity can be turned to nefarious purpose.
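To appreciate just how lopsided the exchange is, consider how little it takes to ask a DNS server a question. The following sketch hand-builds a minimal DNS query using only the Python standard library; the transaction ID and query name are purely illustrative, and no packet is actually sent anywhere.

```python
import struct

def build_dns_query(name: str, qtype: int = 255) -> bytes:
    """Build a minimal DNS query packet (qtype 255 = ANY, the classic amplifier)."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # transaction ID (arbitrary)
                         0x0100,   # flags: standard query, recursion desired
                         1,        # QDCOUNT: one question
                         0, 0, 0)  # no answer/authority/additional records
    # Encode the name as length-prefixed labels: 7"example" 3"com" 0
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # class IN
    return header + question

query = build_dns_query("example.com")
print(len(query))  # the entire question fits in a few dozen bytes
```

A query this small can elicit kilobytes of answer from a well-stocked (or DNSSEC-signed) zone, which is the entire basis of the amplification figure discussed below.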
Pictured: early chargen daemon
In an amplification attack, an attacker manipulates a common protocol like DNS into this highly asymmetric exchange, a small transmission of data provoking a far larger response. Another classic example is the ‘chargen’ service, which is highly asymmetric by design.
Chargen responds to a connection with lines upon lines of ASCII, originally intended for network and application testing; viewed through the lens of an ‘amplifier’, chargen multiplies incoming data by a factor of several hundred.
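The pattern chargen emits is simple to reproduce: RFC 864 describes 72-character lines drawn from a window that slides one character along the printable ASCII set with each line. A minimal sketch of that generator (the line count and width defaults are illustrative):

```python
# Sketch of the chargen pattern (RFC 864): fixed-width lines drawn from a
# rotating window over the 95 printable ASCII characters.
PRINTABLE = "".join(chr(c) for c in range(0x20, 0x7F))  # ' ' .. '~'

def chargen_lines(n: int, width: int = 72):
    """Yield n chargen-style lines, each shifted one character from the last."""
    doubled = PRINTABLE * 2  # lets the window wrap around the end of the set
    for i in range(n):
        start = i % len(PRINTABLE)
        yield doubled[start:start + width]

for line in chargen_lines(3):
    print(line)
```

The service streams these lines for as long as the ‘conversation’ lasts, which is why a single small datagram aimed at a chargen port can provoke such a torrent in reply.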
DNS itself will offer a response over fifty times as large as the requesting data; other protocols such as NTP can amplify even more strongly. A UDP service is like a person who simply talks and does not listen, keeping no track of the conversation or the other participants, simply sending the data they believe is needed without acknowledgement or verification; say to a UDP speaker “Hey, tell me your life story” and they will happily ramble on long after you fall asleep or leave.
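This absence of any handshake is easy to demonstrate. The loopback sketch below (addresses and payload are illustrative) shows a UDP receiver accepting a datagram from a sender it has never connected to; crucially, the source address it reports is read straight from the packet header, taken entirely on faith.

```python
import socket

# A UDP receiver performs no handshake: it simply binds and waits.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))           # let the OS pick a free port
receiver.settimeout(2)
port = receiver.getsockname()[1]

# The sender never connects; it just fires a datagram at the address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"tell me your life story", ("127.0.0.1", port))

data, source = receiver.recvfrom(1024)
# 'source' comes straight from the packet's IP header; the receiver has no
# way to verify it -- which is exactly the field a reflection attack forges.
print(data, source)
sender.close()
receiver.close()
```

Contrast this with TCP, where the three-way handshake forces the claimed source to prove it can actually receive replies before any data flows.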
Through a glass, darkly
Initially this would seem to be little more than a curiosity; of what benefit is the potential to direct greater traffic against oneself? In an ideal connected world this would remain a peculiarly self-destructive form of DoS with little potential for propagation; unfortunately the systems that underpin the internet are far from perfect. Services that rely upon UDP are much more open to abuse than their TCP cousins, as they offer the opportunity to ‘reflect’ internet traffic via simple source forgery.
UDP Spoofing and DNS amplification
By transmitting a small UDP request with the victim’s IP substituted as the source address, an attacker compels the service to respond in an amplified manner and to direct that enlarged response at an arbitrary victim.
Returning to our conversation analogy, an amplified reflection attack is like ordering fifty pizzas from ten different shops, all delivered to the same house. A short phone call is all it takes; the pizza shops do not verify that the house they deliver to is the one that placed the order, they just show up with a pile of boxes and expect the ‘customer’ to accept them. Multiply this by a few hundred pizza shops and a few thousand pizzas per second and you have an idea of how disruptive an rDDoS can be.
And they’re all topped with double anchovies and lutefisk
This victim could be an individual, a website or an entire commercial or national entity, with a Denial-of-Service condition as the result. When this task is automated and divided amongst the many thousands of compromised hosts that make up the average ‘botnet’, the result is a Reflected Distributed Denial-of-Service (rDDoS) attack of previously unseen capability, rendering websites and networks all but inaccessible for the duration of the attack with minimal overhead required on the part of the attacker and a strong degree of anonymity thrown into the bargain.
This remains possible because many ISPs and NSPs worldwide still fail to observe best practice; it is not possible to substantively alter the source of an internet packet if the upstream routers apply simple network ingress filtering as described in the IETF’s BCP38 document. By rejecting packets that appear to originate from outside the proper network, the global reach and impact of UDP reflection attacks would be sharply curtailed. Sadly, global compliance with this practice seems far off.
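The principle behind BCP38 ingress filtering is straightforward: an edge router should only forward packets whose source address could legitimately have originated behind it. A minimal sketch of that check, using Python’s `ipaddress` module (the customer prefix is an illustrative documentation range):

```python
import ipaddress

# BCP38-style ingress filtering: only forward packets whose claimed source
# belongs to the customer prefixes known to sit behind this edge router.
CUSTOMER_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]  # illustrative

def ingress_permits(source_ip: str) -> bool:
    """Return True if the claimed source could legitimately originate here."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(ingress_permits("192.0.2.55"))    # True  -- genuine customer address
print(ingress_permits("198.51.100.7"))  # False -- forged source, dropped
```

Applied at the network edge, this single check prevents a compromised host from ever placing a forged victim address on the wire, neutralising reflection at its origin.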
Anatomy of an NTP-Reflection Attack
Weaving together reflection and amplification handily eliminates the extreme resource deficit a lone attacker faces when hoping to DoS a small corporate entity or educational institution; rather than rely upon his own connectivity resources in pitched battle against a better-equipped target, even a minor malcontent is able to leverage significant bandwidth against his prey by co-opting the greater resources of a third party.
Multiply this capability by thousands of attacking machines in a distributed attack, and we witness the immense 300-400Gb/sec firestorms that recently slammed Spamhaus and CloudFlare, causing difficulties for even these security supergiants until the traffic could be brought under control.
With great power…
The emergence of Reflected DDoS is of particular concern to organisations with sophisticated backbone infrastructures such as the University; the sad reality is we now live in a connected world where 300Gbps DoS attacks can be considered ‘the norm’. Remember that orange graph at the top of this blog? The bar for 2014 would be over four times higher and the year is far from over. However spirited and effective the defence of our own systems and users may be, this new strain of DDoS co-opts legitimate systems on the University network into ‘attacking’ external organisations and networks. This presents a significant risk to the public-facing image of the University, as well as our reputation with JANET and its constituents, and could result in connectivity issues for the wider University-assigned IP address spaces.
The grim future that awaits us all
A clear burden of responsibility falls upon us to ensure that our significant technological resources are not leveraged to attack unwitting organisations on the wider internet, most of whom will lack the resources to defend themselves from the sheer volume of traffic these attacks can direct.
With the combined bandwidth of our Janet connections, the University systems could easily translate into a multi-Gigabit firehose of UDP DoS traffic if successfully abused. This cannot be allowed to happen.
It seems clear that the risk presented by this form of abuse has risen high enough to merit a proactive mitigation. In accordance with OxCERT’s mandated responsibilities towards the integrity of the University infrastructure, specific traffic blocks will be enacted across all units, services and sponsored connections. Measures are in place to minimise the impact on legitimate traffic and services. OxCERT’s strategy compares with analogous actions taken by ISP- and NSP-level CERT entities as part of a concerted global effort to diminish the effectiveness of rDDoS campaigns, as the true danger of inaction against this threat becomes clearer.
The onus is upon us to act responsibly in the face of the evolving challenges to network and information security, and as ever the priority must lie with overall service integrity and the protection of the University’s good standing. We expect any detriment to service levels to be minor or negligible, particularly in contrast to the benefits realised by strengthening and consolidating the framework upon which those services ultimately depend. It is worth noting that even a few vulnerable machines within a unit – for example NTP servers responding to Mode 6 queries – are fully capable of saturating the connection for that unit, effectively cutting the organisation off from the rest of the internet and the University, while simultaneously propagating a DoS against an external entity.
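Administrators wishing to check their own NTP servers can see how small the problematic probe is. The sketch below constructs an NTP mode 6 ‘readvar’ control query per the NTP control-message layout; it only builds the 12-byte packet (in practice one would send it to UDP port 123 and inspect the size of any reply, but no traffic is generated here).

```python
import struct

def build_mode6_readvar() -> bytes:
    """Build an NTP mode 6 'readvar' control query -- the 12-byte probe that
    vulnerable servers answer with a disproportionately large reply."""
    first = (0 << 6) | (2 << 3) | 6  # LI=0, version=2, mode=6 (control)
    opcode = 2                       # READVAR: read status variables
    return struct.pack(">BBHHHHH",
                       first, opcode,
                       0,   # sequence number
                       0,   # status
                       0,   # association ID (0 = the server itself)
                       0,   # payload offset
                       0)   # payload byte count

pkt = build_mode6_readvar()
print(len(pkt), hex(pkt[0]))  # a 12-byte probe; first byte is 0x16
```

A server that answers this (or the notorious mode 7 ‘monlist’ query) from arbitrary addresses is a ready-made amplifier, and should have its control queries restricted to trusted hosts.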
In a way we as service and network providers must approach this problem with a similar attitude to fire prevention; it is everyone’s problem, and everyone’s responsibility. As Smokey the Bear says, “Only you can prevent Reflected Distributed Denial-of-Service attacks”.
Smokey hates unsecured NTP servers.
As ever, we must strive to strike the proper balance between security and usability; service limitations will be imposed only in cases where we anticipate no adverse impact to legitimate University business. We strongly encourage colleges and departments to limit their externally-accessible services to those which are both necessary and properly secured, and OxCERT will work with units to help ensure everyone’s goals are achieved.