When I was growing up, I was always drawn to arcades. Weren't we all? I played pinball and beat my friends at Street Fighter and Tekken – unless they used Eddy, my Tekken weak spot. But one of my favorites was Operation Wolf, a game where you shot the bad guys who kept coming at you with bigger and badder weapons. That is, until you "sustained a lethal injury" and died, or simply couldn't continue.
In Operation Wolf you got points for shooting the bad guys before they got you, while avoiding the civilians. The civilians, probably hostages fleeing the warzone, appeared out of nowhere in a steady stream, and if you mistakenly shot one you lost points. That element elevates these first-person shooter (FPS) games from a mere test of hand-eye reaction time to a game with a slight mental challenge.
The challenge was to distinguish the bad guys from the civilians and to shoot the bad guys before they shot you. Shooting too fast meant a false positive, killing a civilian, while hesitating too long resulted in a false negative: letting the bad guy attack you.
Over the years I kept gaming and continued to run into those friendly fire issues in FPS games – think of the hostage rescue levels of cult favorite Counter-Strike. Little did I imagine how much my future work in information security would have in common with them. After all this time, I'm still shooting the bad guys, still needing to do it as fast as possible and still having to watch out for the good guys.
What's changed is the battlefield itself, and the terminology. The battlefield is now the Imperva Incapsula cloud security service, defending against web application attacks, automated fraud and distributed denial of service (DDoS) attacks. The bad guys are attack vectors like SQL injection, cross-site scripting, automated fraud executed by malicious bots, and DDoS attempts. The civilians are legitimate traffic: site visitors and good bots like Googlebot. When you block legitimate traffic, you make a false positive mistake.
So what IS a false positive?
Wikipedia defines a false positive error, or false positive (commonly called a false alarm), as a result that indicates a given condition has been fulfilled when it actually hasn't. In cyber security, wherever you go (and I've been in several parts of this industry) you encounter false positives and learn to dread them. A few examples:
- A legitimate file detected by an antivirus or host intrusion prevention system (HIPS) as a threat, and consequently quarantined or worse. Such a file may be flagged because it uses the same encryption/obfuscation algorithm as a known malicious file, contains certain strings that trigger an alert, or uses certain Windows API calls that the antivirus (AV) software or HIPS identifies as malicious.
- A network spike in an anomaly detection system caused by a user who innocently procrastinated and submitted a lot of work at 2 a.m. on Friday. The system is configured to monitor for anomalies, but is not trained enough to distinguish between an attacker who breached the file server, and Jane from accounting who had to work overtime to meet her deadline.
- An email flagged as spam, or even blocked, by an email filtering service aimed at stopping phishing attacks, just because it triggered some heuristic rules. It may use certain keywords that raise the suspicion level, such as "Hi Ben, I've got an offer for you," even though it is a legitimate email from a friend who actually does have an offer for me – one that doesn't involve ordering prescription drugs online …
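The network anomaly example can be sketched as a simple baseline-deviation check. This is an illustrative toy (the numbers, threshold and single-feature model are assumptions); real systems track far more signals:

```python
import statistics

def is_anomalous(history, observed, z_threshold=3.0):
    """Flag traffic volume that deviates strongly from the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return (observed - mean) / stdev > z_threshold

# Typical overnight upload volumes (MB) seen on the file server
baseline = [5, 8, 6, 7, 5, 9, 6, 8]

# Jane's 2 a.m. deadline push looks exactly like an exfiltration spike
print(is_anomalous(baseline, 400))  # True: flagged, even though it is benign
```

The detector cannot tell malicious exfiltration from honest overtime; both are just "way above baseline," which is precisely how this class of false positive is born.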
False positives in web application security
Let's look at a simple example of one of the most common web application attacks. The examples here are simple ones, but the same false positive prevention issue arises with sophisticated attacks.
Suppose we want to stop SQL injection attacks. When blocking such attacks, there are several vectors an attacker may use to inject commands into a SQL query. Let's assume the application's server-side code builds its query by concatenating the "zoneid" POST parameter directly into the SQL string.
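A minimal sketch of that vulnerable query-building pattern, written in Python for illustration (the table and column names are inferred from the description below, not taken from real code):

```python
def build_query(zoneid):
    # Vulnerable pattern: user input is concatenated straight into the SQL string,
    # so anything the client sends becomes part of the query itself.
    return "SELECT name, phone FROM shops WHERE zoneid = " + zoneid

print(build_query("4"))
print(build_query("4 UNION SELECT login,password FROM users"))
```

With the second call, the attacker's parameter value rewrites the query instead of merely filling in a value.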
And the attacker uses a vector that sends the following parameter value for the “zoneid” POST parameter:
4 UNION SELECT login,password FROM users
This basically injects a command to the server, saying: “In addition to the shop names and phones from the shops table, add the logins and passwords from the users table.”
When blocking attacks it's also important to note that the number of variations is practically infinite. For example, the above attack may also be sent as one of the following:
4 UNION SELECT password,login FROM users
4 UNION SELECT login,CONCAT(password,'') FROM users
4 UNION /* aaaaaaaa */ SELECT login,/*some comment*/password FROM users
That's why, when blocking such a vector, we need to look for a pattern rather than an exact match. In this case, we may write a rule using wildcards, or better yet regular expressions, to give us more flexibility when looking for matches. For simplicity, I am leaving the text between the commands blank in this example:
UNION _____ SELECT _____ FROM _____
This "rule" would block any payload containing union .. select .. from .., which covers most UNION-based SQL injection vectors. In most cases it would even work. However, in some cases, to use the gaming analogy, you're shooting civilians.
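The wildcard rule can be expressed as a regular expression. This is a deliberately simplified sketch, not a production signature:

```python
import re

# Simplified signature: UNION ... SELECT ... FROM, case-insensitive
rule = re.compile(r"union.*select.*from", re.IGNORECASE | re.DOTALL)

payloads = [
    "4 UNION SELECT login,password FROM users",
    "4 UNION SELECT password,login FROM users",
    "4 UNION SELECT login,CONCAT(password,'') FROM users",
    "4 UNION /* aaaaaaaa */ SELECT login,/*some comment*/password FROM users",
]
for payload in payloads:
    print(bool(rule.search(payload)))  # True for every variant
```

The wildcards absorb the comments, reordered columns and nested functions, so one pattern catches all the variants listed above.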
How so? Let's say that I'm writing to a friend, via a web page, about Union Square in San Francisco, and I choose to say that the "Union Square area has a great selection of restaurants, with food from countless cuisines." Which is true, and also legitimate. However, checked against the rule we just created, it will be flagged as malicious and may even be blocked:
UNION Square area has a great SELECTion of restaurants, with food FROM countless cuisines.
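Running that innocent sentence through the same simplified, case-insensitive pattern shows the false positive:

```python
import re

# The wildcard rule UNION ... SELECT ... FROM as a case-insensitive regex
rule = re.compile(r"union.*select.*from", re.IGNORECASE | re.DOTALL)

comment = ("UNION Square area has a great SELECTion of restaurants, "
           "with food FROM countless cuisines.")
print(bool(rule.search(comment)))  # True: a legitimate comment gets flagged
```

The keywords appear as ordinary English inside "Union," "selection" and "from," which is all the naive pattern needs to fire.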
As you can see, that simple rule would now need to become much more complicated and much more specific. In the process of turning it into a zero percent false positive rule, it would likely need to be split into dozens of more specific rules.
False positives in DDoS protection
The last example is a good illustration of string matching and processing. In DDoS protection, the main problem is distinguishing between legitimate and attacking traffic in real time. For example, let's say I'm an online retailer selling arcade machines with an average request rate of 200 requests per second. To increase traffic I launch a campaign on national TV, social media and web promotions. All of a sudden I have spikes of 3,000 requests per second. Some of this activity may be attributed to an attack, and therefore blocked.
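A purely threshold-based detector illustrates the dilemma. The baseline and spike numbers come from the example above; the multiplier is an assumed illustration:

```python
def looks_like_attack(current_rps, baseline_rps, multiplier=10):
    # Naive rule: flag anything far above the historical baseline rate
    return current_rps > baseline_rps * multiplier

# A successful TV campaign pushes a 200 rps shop to 3,000 rps ...
print(looks_like_attack(3000, 200))  # True: flash-crowd traffic misread as DDoS
```

A flash crowd and a volumetric attack look identical to a rate threshold, which is why rate alone cannot be the deciding signal.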
So what do we do?
Back to Operation Wolf. After pumping all my lunch money into arcade machines, there was a plot twist: I eventually got Operation Wolf for my – if I recall correctly – Atari console. As a result, I got to play the game at home with a joystick. It was half the fun, but it didn't cost any coins. With practice I got much better, and I learned not to stress myself into prematurely shooting bad guys on sight. I learned that, although it was challenging, I needed to be less trigger happy and more careful when squeezing the trigger, so I wouldn't shoot civilians and lose precious points.
I learned to recognize patterns that helped me make quicker and wiser decisions: the AK-47 shapes the bad guys carried, the nurse uniforms of some of the civilians, and the distinct silhouettes of the different characters. I spent a lot of time learning the differences and shortening the time it took me to validate that I was taking a good shot.
Then, one day, I beat Operation Wolf.
Fast forward to today: I now manage the security research team at Incapsula, and part of my responsibility is to reduce false positive alerts to practically zero. Finding the different patterns we can use to classify traffic in real time is much like identifying the patterns of attackers in the video games. A big difference is the learning process. Whereas I spent an enormous amount of time perfecting my Operation Wolf skills, in our services we have traffic flowing to hundreds of thousands of sites, and clusters of heavy duty servers that take care of most of the training and learning, requiring our experts to intervene only in edge cases, for new vulnerabilities, and when improving the system itself.
Here are a few examples of how we’re working to achieve zero percent false positives:
Tight process — Our process is very thorough. Each rule added to our system gets reviewed by a board of our security experts to make sure nothing slips through.
Testing and crowdsourcing — We never commit a rule as a blocking rule without first testing it as a "silent" rule. Given the size of our network, after a relatively short time we gather enough data to ensure that all vectors being alerted on are genuine attack attempts. In effect, we use crowdsourcing to filter out false positives.
Iterating — If a rule produces any false positives, it is either modified or split into more specific rules and kept in silent mode. A rule is promoted to blocking only once it shows zero percent false positives. In some cases that takes several iterations, as more false positive data flows in.
Specific cases — Our IncapRules system consists of very granular rules that can be applied at the level of a specific site or URL to handle certain situations. These rules can be adjusted for special edge cases where a client's web application sends highly irregular data, or where immediate action is needed at a very granular level.
For example, we had a website which used the string "--" (double dash) as a delimiter. This may cause false positives in certain cases, since "--" is also a SQL comment marker, meaning "ignore anything after the double dash," which is useful in SQL injections.
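The silent-rule testing and iteration workflow described above can be sketched roughly as follows. This is an illustrative model only, not the actual IncapRules engine:

```python
import re

def evaluate_silent_rule(rule, traffic_samples, is_attack):
    """Run a candidate rule in 'silent' mode: record hits, block nothing."""
    hits = [s for s in traffic_samples if rule(s)]
    false_positives = [s for s in hits if not is_attack(s)]
    return len(hits), len(false_positives)

# Hypothetical candidate signature and tiny traffic sample
candidate = re.compile(r"union.*select.*from", re.I | re.S).search
samples = [
    "4 UNION SELECT login,password FROM users",                       # real attack
    "UNION Square has a great SELECTion of food FROM many cuisines",  # benign
]
known_attacks = {samples[0]}

hits, fps = evaluate_silent_rule(candidate, samples, lambda s: s in known_attacks)
print(hits, fps)  # 2 1 -> one false positive: the rule stays silent and is refined
```

Only when a candidate's false positive count stays at zero across enough real traffic would it be promoted from silent to blocking mode.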
Specific to DDoS
To prevent such false positive blocking in our systems, we first conduct extensive client classification behind the scenes to determine which types of traffic are legitimate and which are not (such as traffic generated by DDoS bots), even before an actual attack starts. This makes mitigation easier once a real attack begins.
In addition, when mitigating attacks, we gradually increase the level of our challenges by first querying clients suspected to be attacking bots. If they fail those initial challenges we may force them to pass a CAPTCHA test, or even block them.
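That gradual escalation can be sketched as a simple challenge ladder. The challenge names and the fixed order are an illustrative simplification of the flow described above:

```python
def mitigation_step(failed_challenges):
    """Escalate gradually: transparent check first, CAPTCHA next, block last."""
    ladder = ["javascript challenge", "captcha", "block"]
    return ladder[min(failed_challenges, len(ladder) - 1)]

print(mitigation_step(0))  # javascript challenge
print(mitigation_step(1))  # captcha
print(mitigation_step(2))  # block
```

Legitimate browsers clear the cheap transparent check without the user noticing, so the expensive, user-visible measures are reserved for clients that keep failing.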
We also use our massive client reputation database gathered from our CDN, which helps us automate mitigation by blocking entire botnets when they attempt an attack.
That leaves as little as possible to be blocked purely on the basis of thresholds being met, although thresholds do help us optimize the mitigation process.
False positives, although extremely hard to distinguish in certain cases, are being driven down to practically zero percent through a multitude of technologies and procedures. And gaming is fun.
Would you like to write for our blog? We welcome stories from our readers, customers and partners. Please send us your ideas: firstname.lastname@example.org