DDoS attacks don’t need to be large-scale “server busters” to cause devastation. They are often “hit and run” affairs – coming and going over a prolonged period, aiming to generate collateral damage, disarray, and long-term trouble. Dealing with these attack patterns is as difficult as repeatedly switching between peacetime and wartime footing.
Hit and Run DDoS Attacks
Hit and Run DDoS is a series of short bursts of high-volume network or application attacks that occur periodically and usually last 20-60 minutes each. These attacks last long enough to bring down servers and then disappear, only to come back again in the next 12-48 hours. The cycle can continue for days or weeks, putting IT in turmoil and sending disruptive ripples throughout the organization.
Hit and Run DDoS attacks aim to exploit a soft spot in many anti-DDoS solutions, which are built around the idea of ‘on-demand activation.’ Such solutions work well in countering prolonged attacks, but are impractical for recurring short DDoS bursts due to the overhead required for each activation.
From GRE tunneling, to DNS re-routing, to simply turning on ‘DDoS Protection’ mode – all these services require a manual trigger, which prevents them from being immediately effective. At the beginning of an attack, this setup time translates into downtime, and in some cases can last as long as (or longer than) the actual attack itself.
Imagine activating and deactivating your defenses for each salvo and you’ll understand what Hit and Run DDoS is all about. These attacks do not just target server resources. With Hit and Run, the attackers are working to exhaust the people who maintain these servers – their standing within the organization, and even their health and sanity.
Imagine having to onboard and offboard scrubbing servers every 48 hours…
Why Always-On is Not Enough
In theory, Hit and Run DDoS attacks could be mitigated by an ‘Always On’ solution, which would eliminate the need for recurrent setups. This sounds easy enough, but the question is: how does this ‘Always On’ affect user experience? And the answer differs from vendor to vendor – but in most cases you can expect some collateral damage.
This should come as no surprise, as there are fundamental architectural differences between services built to counter DDoS, and services designed to serve large amounts of clean traffic. For one, just by adding another hop between the website and its visitors, you create latency. Typically this is offset by caching, and optimized distribution over widespread PoPs. However, most DDoS protection services are built for protection, not content delivery, and don’t offer such features.
Moreover, by keeping DDoS protection in “active mode,” visitors are generally subject to constant scrubbing, which causes service disruptions as a result of both scrubbing challenges and false positives.
One way or another, just by forcing you to migrate to an active scrubbing center, attackers have already achieved their goal – degradation of service, even in the quiet periods between attacks.
Barriers are not often designed for leniency
Rapid Detection for Setup Automation
To effectively counter short-burst DDoS attacks you need to identify an attack as early as possible – automatically turning on protection mode and then turning it off as the attack recedes. This early identification is actually the greatest challenge.
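The activate/deactivate logic described above can be sketched as a simple hysteresis toggle: protection flips on as soon as suspicion crosses a threshold, but only flips off after a sustained quiet period, so recurring bursts don't cause churn. This is a minimal illustration, not the vendor's actual mechanism; the class name, thresholds, and score scale are all assumptions.

```python
import time

class ProtectionToggle:
    """Hypothetical sketch of automatic protection-mode switching with hysteresis."""

    def __init__(self, on_threshold=0.8, off_quiet_seconds=300):
        self.on_threshold = on_threshold            # suspicion score that triggers protection
        self.off_quiet_seconds = off_quiet_seconds  # quiet time required before standing down
        self.active = False
        self.last_suspicious = 0.0

    def update(self, suspicion_score, now=None):
        """Feed in a 0..1 suspicion score; return the current protection state."""
        now = time.time() if now is None else now
        if suspicion_score >= self.on_threshold:
            self.active = True                      # attack suspected: protect immediately
            self.last_suspicious = now
        elif self.active and now - self.last_suspicious >= self.off_quiet_seconds:
            self.active = False                     # attack has receded: stand down
        return self.active
```

The asymmetry is deliberate: activation is instant, deactivation is lazy, which matches the hit-and-run pattern of bursts returning before an operator would have finished tearing protection down.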
For example, in application layer DDoS attacks, most vendors suggest monitoring request rates. In truth, request rates only tell a small part of the story, as a relatively slow “under the radar” resource-consuming page (like a search page) can bring the web server to its knees, without raising any of the request rate alarms.
So, if you see your server breeze through 500 requests per second and assume that it will take another 500 to bring it down, keep in mind that those requests probably include a lot of “light” calls to static resources and cached objects. Attackers are smart and will throttle their own request rates to stay under detection thresholds. In fact, we’ve seen servers brought down by a relatively modest request rate (30-50 requests per second), when those requests were all aimed at a specific CPU- or I/O-intensive resource.
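The point above – that a raw request counter misses low-rate attacks on expensive pages – can be illustrated by weighting each request by its backend cost instead of counting requests equally. The endpoint costs and budget below are made-up examples; in a real deployment they would come from measured CPU/IO time per URL.

```python
from collections import deque

# Hypothetical relative costs per endpoint (illustrative numbers only).
ENDPOINT_COST = {
    "/static": 1,
    "/index": 2,
    "/search": 50,   # CPU/IO-heavy resource-consuming page
}

class CostMeter:
    """Sketch of cost-weighted load tracking over a sliding window of requests."""

    def __init__(self, window_size=1000, alarm_budget=2000):
        self.events = deque(maxlen=window_size)  # cost of each recent request
        self.alarm_budget = alarm_budget         # total cost that raises the alarm

    def record(self, path):
        self.events.append(ENDPOINT_COST.get(path, 1))

    def overloaded(self):
        # 40 hits on /search (cost 50 each) trip this alarm long before
        # any raw request-rate threshold would notice anything unusual.
        return sum(self.events) >= self.alarm_budget
```

With these illustrative numbers, 500 static requests stay well under budget, while a few dozen search-page requests do not – exactly the 30-50 requests-per-second scenario described above.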
Rather than relying on request rates, it is better to profile your traffic and monitor bot visitors. After all, with the exception of API traffic, bots are not supposed to hit the website at high rates, and since they do not go to sleep or follow marketing campaign links, their traffic pattern is usually predictable.
And yet, for truly accurate early warnings, even bot detection is not enough…
Managing the Emergency Button
The potential for accurate early DDoS attack identification lies in your ability to classify and identify anomalies in traffic patterns. For this you need to gather information, identifying normal usage patterns and benchmarking against them to distinguish suspicious behavior.
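The benchmarking step described above – learn normal usage, then flag deviations – can be shown with a minimal baseline-and-anomaly sketch, assuming we keep a rolling history of one traffic metric (say, bot hits per minute) and score new values against the learned mean. A real system would track many metrics and account for seasonality; this only shows the core idea.

```python
import statistics

class Baseline:
    """Minimal sketch of learning a traffic baseline and flagging outliers."""

    def __init__(self, z_threshold=3.0):
        self.history = []
        self.z_threshold = z_threshold  # how many standard deviations counts as anomalous

    def learn(self, value):
        self.history.append(value)

    def is_anomalous(self, value):
        if len(self.history) < 30:      # need enough data to benchmark against
            return False
        mean = statistics.fmean(self.history)
        stdev = statistics.pstdev(self.history) or 1.0  # guard flat baselines
        return abs(value - mean) / stdev > self.z_threshold
```

The "experience" mentioned above lives in choosing which statistics to baseline and where to set the threshold – the mechanics of the comparison itself are this simple.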
Experience plays a large role in figuring out which website statistics get skewed under attack, and this is a large part of our DDoS protection solution. By understanding the day-to-day realities of our clients and network, we make it our job to keep our (virtual) hand ready to automatically manage our clients’ DDoS Emergency button.
Eldad Chai, Director of Product Management