A few weeks ago I hosted a webinar titled A Network Administrator's Guide to Web Application Security. During the webinar I covered:

  • An overview of recent web security events occurring in 2014
  • Web application threats and common attack types
  • How to defend your website against today’s common threats
  • An overview of automated tools you can use to help simplify your website security

A recorded version of the full webinar is available here.

During the webinar there were a large number of questions, so we decided to create a blog post from the Q&A session. Here’s what was asked, and how I answered.


Question: You mentioned several security solutions during your presentation. Are these available as services or software-as-a-service (SaaS) solutions?

Orion Cassetto: Yes. Many of the suggested solutions are available via SaaS.

Vulnerability scanners deployed in a SaaS model are pretty common and many of the large vendors offer them.

Web application firewalls (WAFs) as-a-service started to emerge four or five years ago and are well vetted at this point. They provide a valuable security service with little in the way of management overhead.

Source code analysis as a service exists but isn't very popular. This is because many organizations view their source code as a mission-critical piece of intellectual property and are therefore reluctant to upload it to a third party for scanning.

Question: If 61% of traffic is automated, why am I not seeing it in my analytic data?

OC: Many people rely on Google Analytics as their sole source of information about their website’s traffic. Google Analytics works by using a JavaScript code snippet placed in the pages to be tracked. Since only around 1% of bots run JavaScript (according to Incapsula’s bot report), the remaining 99% won’t trigger this tracking mechanism and thus will not appear in analytic results.
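To see this gap yourself, you can compare raw server access logs against what JavaScript-based analytics report. Here is a minimal sketch; the log lines, log format, and user-agent hints are illustrative examples, not data from the webinar:

```python
import re

# Hypothetical combined-format access-log lines; real logs vary by server configuration.
LOG_LINES = [
    '1.2.3.4 - - [10/Oct/2014:13:55:36] "GET / HTTP/1.1" 200 2326 "-" "Mozilla/5.0 (Windows NT 6.1) Chrome/37.0"',
    '5.6.7.8 - - [10/Oct/2014:13:55:37] "GET /robots.txt HTTP/1.1" 200 120 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"',
    '9.9.9.9 - - [10/Oct/2014:13:55:38] "GET /wp-login.php HTTP/1.1" 404 50 "-" "python-requests/2.4.0"',
]

# Substrings commonly seen in automated clients' user agents.
BOT_HINTS = ("bot", "crawler", "spider", "python-requests", "curl", "wget")

def looks_like_bot(log_line):
    """Crude user-agent check: JS-based analytics never count these visitors."""
    ua_match = re.search(r'"([^"]*)"$', log_line)  # last quoted field is the user agent
    ua = ua_match.group(1).lower() if ua_match else ""
    return any(hint in ua for hint in BOT_HINTS)

bots = sum(looks_like_bot(line) for line in LOG_LINES)
print(f"{bots} of {len(LOG_LINES)} requests look automated")  # 2 of 3 requests look automated
```

Server logs record every request, so even the simplest user-agent filter like this surfaces traffic a client-side tracking snippet will never see.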

Question: Which approach do you recommend, white or black box?

OC: I recommend both. White and black box approaches each have their benefits. Black box is cheap and easy to perform; it can help identify issues like misconfiguration, while white box helps provide vulnerability locations in the code itself. A “grey box” approach, where both models are implemented, can help remove inherent false positives from the white-box approach by validating them with the scanning results.

In any event you'll need to fix these vulnerabilities once you have found them. This takes time and development resources. I recommend deploying a WAF in front of applications for immediate protection, and using a grey box approach to fix the vulnerabilities afterward. This affords users an instantly improved security posture and the luxury of time to fix vulnerabilities at their own pace.

Whitebox vs Blackbox

Question: Wouldn’t routing traffic through a service like yours for security add latency because the data has to travel farther?

OC: If the WAF or service in question does not include a content delivery network (CDN), then yes, it would induce latency. Incapsula and similar services are built on top of CDNs which are able to optimize, cache, and serve website content out of local data centers—thus greatly reducing page loading times. Using a service with a CDN built-in should result in a net reduction of latency and faster-loading web pages.

Question: I have a next-gen firewall (NGFW), why would I need a WAF?

OC: Excellent question. An NGFW is essentially an evolution of the stateful firewall that is application aware. It has a context of applications, and can identify and block them based on specific traffic patterns and application fingerprinting techniques. This does not mean, however, that it understands application traffic and application logic.

A WAF, on the other hand, is a security device designed to inspect web traffic (HTTP/S) for any signs of malicious requests and web attacks.

These two solutions should be deployed together. A well-protected application will have a network or NGFW as well as a web application firewall.

Question: My ISP provides DDoS protection for free. Why would I need a third-party solution?

OC: That depends on what they are providing to you. It’s possible that you might not need one. However, it’s really on you to determine if that is the case or not. I urge you to investigate their offering in terms of its capacity to protect and the types of attacks it can defend against.

Attacks fall into two general categories: application layer and network layer. Most ISP and hosting company-provided solutions will protect against network layer (layers 3 and 4 on the OSI model) attacks, but do nothing to protect against application layer attacks (layer 7). Application layer attacks are usually smaller in size, but more complex. And even a small attack of this type could take down your website or application.

You should also identify the level of protection they are providing you in terms of capacity for mitigation. This is usually expressed in gigabits per second (Gbps). Many ISP-provided solutions will guarantee protection against attacks up to 1 Gbps, 2 Gbps, or 4 Gbps. This may seem like a lot, but with today’s average attack size at 10 Gbps, it does very little to actually ensure website availability in the face of even a modest attack.
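To make the application-layer side of this concrete, here is a minimal sliding-window rate limiter of the kind a WAF or mitigation service applies per client to absorb layer 7 floods. The class name, limits, and window are hypothetical choices for illustration:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds for each client IP."""

    def __init__(self, limit=100, window=10.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # client IP -> timestamps of recent requests

    def allow(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        while q and now - q[0] > self.window:  # evict timestamps outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: drop, delay, or challenge this request
        q.append(now)
        return True

# Tiny demo: 3 requests per second allowed, the 4th within the window is refused.
limiter = RateLimiter(limit=3, window=1.0)
results = [limiter.allow("1.2.3.4", now=t) for t in (0.0, 0.1, 0.2, 0.3)]
print(results)  # [True, True, True, False]
```

Real services layer many more signals on top of raw rates (reputation, JavaScript and cookie challenges, behavioral analysis), but per-client budgets like this are the basic building block.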

Question: What are the costs associated with white and black box tools? Is one of them cheaper?

OC: Licensing fees aside, what you need to look at is how much the tool costs to run. That includes both finding a vulnerability and remediating it. Black box tools are easy to run, but do not pinpoint vulnerabilities in the code. They also require an application to be in production before they can be run. This means that you have to wait until the end of a software development life cycle (SDLC) to use them.

The white box approach, on the other hand, can begin the day coding begins. For some tools you must wait until code is buildable in order to scan it, but either way you can begin identifying and remediating vulnerabilities sooner in the SDLC when the cost to fix each bug is much lower. For this reason, I feel that white box tools have a lower total cost of ownership.

This graph, by Capers Jones in 1996, may be old, but should help illustrate my point:

Source: Applied Software Measurement, Capers Jones, 1996

Furthermore, I recommend implementing a WAF first, because it is the quickest and easiest way to obtain a healthy security posture. Once it’s deployed, vulnerabilities will be virtually patched, allowing you to be protected while you use a white or black box approach to identify and remediate vulnerabilities.

Question: Can Incapsula be used with cloud environments like Amazon Web Services (AWS)?

OC: Absolutely. We have hundreds of thousands of enterprise websites protected by our service. These are spread across all types of environments including AWS, shared hosting, managed hosting, on-premises hosting, etc. AWS deployments are a great candidate for our service. We can even gracefully handle sites which elastically scale with Amazon ELB.

Question: You mentioned cleaning input before sending it into an application (to deal with XSS and SQLi). How do you do that?

OC: Ultimately you need to sanitize user-supplied information before sending it to your application. More information about how to validate and sanitize against SQLi and XSS can be found here.
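For readers who want a concrete starting point, here is a minimal Python sketch of the two standard defenses: parameterized queries for SQLi, and output escaping for XSS. The table and sample strings are illustrative only:

```python
import html
import sqlite3

# In-memory DB for illustration; any DB-API driver supports parameterized queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "Robert'); DROP TABLE users;--"

# SQLi defense: bind values through placeholders instead of string concatenation,
# so the driver treats the input strictly as data, never as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

# XSS defense: escape user-supplied data before embedding it in HTML output.
comment = '<script>alert("xss")</script>'
safe = html.escape(comment)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The same two rules generalize to any stack: never build queries by concatenating user input, and always encode user data for the context (HTML, attribute, URL) where it is rendered.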

Question: How do you tell the difference between good and bad bots?

OC: The first step is to identify that it is a bot. This can be done by looking at things like the HTTP request headers: in which order, and at what frequency, is a visitor accessing information? Which browser and client is it using? Does it support cookies or JavaScript? Can it pass a CAPTCHA? Is it interacting with the web server in a way that is indicative of human behavior?

The easiest way to determine what that bot is there for is to use a third-party tool. There are many low-cost options available which can help you do this. Many security tools like application delivery controllers or WAFs include bot mitigation as part of their service.

Bad Bots vs Good Bots
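One widely documented way to confirm that a self-declared "good" crawler is genuine is forward-confirmed reverse DNS; Google publishes this method for verifying Googlebot. A minimal sketch, with helper names of my own choosing (the live check requires network access):

```python
import socket

# Google's documented hostname suffixes for genuine Googlebot crawlers.
TRUSTED_SUFFIXES = (".googlebot.com", ".google.com")

def host_is_trusted(hostname, suffixes=TRUSTED_SUFFIXES):
    """Pure check: does a reverse-DNS hostname belong to a trusted crawler domain?"""
    return hostname.endswith(suffixes)

def is_verified_googlebot(ip):
    """Reverse-resolve the IP, check the domain, then forward-confirm it."""
    try:
        host = socket.gethostbyaddr(ip)[0]
        if not host_is_trusted(host):
            return False  # anyone can spoof a Googlebot User-Agent string
        return ip in socket.gethostbyname_ex(host)[2]  # forward lookup must match
    except socket.error:
        return False

print(host_is_trusted("crawl-66-249-66-1.googlebot.com"))  # True
print(host_is_trusted("evil.example.com"))                 # False
```

The forward-confirmation step matters: checking the User-Agent string or even the reverse DNS alone is spoofable, but an attacker cannot make Google's forward DNS resolve back to their own IP.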

Question: Does the cyber security space mimic the physical security space, where people are often only prompted to invest in a security system after an incident has occurred?

OC: We definitely see some of that, especially with DDoS attacks. Some organizations believe that they will not be a target and thus investing in a solution would be unnecessary.

Unfortunately, security through obscurity is not a very good security practice. DDoS attack motivations range from political activism, extortion, and gaining a competitive edge to masking other hacking activities and even plain old malice. For this reason, it's possible to be a target even if your website is relatively unknown.

Based on findings from our recent DDoS impact survey, the average cost of a DDoS attack is $40,000 per hour. This number is made up of lost revenue, reputational damage, support costs, etc.

In most cases it’s cheaper for you to plan for DDoS attacks ahead of time than to succumb to one. It’s quite common for organizations to end up being burnt by hackers and to immediately launch a project to find a DDoS mitigation tool.

Question: You have quantified how much it costs to be the victim of a DDoS attack. How much does it cost to protect a business from these attacks?

OC: Solutions range from free or cheap network layer protections, which may be included as part of an existing product or service, to several hundreds of thousands of dollars per year for large, multi-national companies with expansive online estates. It is almost always cheaper to purchase a solution than to deal with website and infrastructure outages during DDoS attacks.


If you missed the presentation and would like to view a recorded version of the full webinar, it is available here.


Would you like to write for our blog? We welcome stories from our readers, customers and partners. Please send us your ideas: blog@incapsula.com