

Web Content Filtering at 100Gbps: Which Architecture is Right for You?


July 27, 2017
By Special Guest
Sven Olav Lund, Senior Product Manager at Napatech

People around the world watch a billion hours of YouTube content every single day. That’s an astronomical amount of data, consumed by a hugely diverse audience, and it’s just one example of how easily accessible almost any kind of content has become. However, organizations may not want their members gaining access to certain kinds of content, so they use web filters to block content they deem inappropriate or productivity-draining.

And because speed remains king on the internet, organizations now face the prospect of filtering at 100 Gbps. What options are available today, and which is right for your organization?

The Challenge: Filtering at Internet Speed

As internet traffic grows, networks need ever higher speeds to maintain service levels and capacity. In telecom networks serving hundreds of thousands of users, 100 Gbps links are being introduced to keep up with demand. The market has matured around web content filtering solutions for 1 Gbps and 10 Gbps, but filtering at 100 Gbps poses a whole new set of challenges.

It takes an incredible amount of processing power to filter web content at this speed, and the traffic must be distributed across the available processing resources. This is usually achieved with hash-based 2-tuple or 5-tuple flow distribution keyed on subscriber IP addresses. In telecom core networks, however, subscriber IP addresses are carried inside GTP tunnels, so GTP support is required for efficient load distribution.
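
As a rough illustration of how this works, the sketch below (Python, with hypothetical function and field names; real deployments do this in NIC hardware or a load balancer) hashes the subscriber IP carried inside the GTP tunnel and maps each flow to one of a fixed number of processing workers.

```python
import hashlib

NUM_WORKERS = 24  # e.g. 24 x 10 Gbps ports in the stacked solution

def worker_for_packet(outer_src_ip, inner_src_ip=None):
    """Pick a processing worker for a packet.

    For plain IP traffic a 2-tuple/5-tuple hash over the outer header is
    enough; for GTP-encapsulated telecom traffic the subscriber IP sits in
    the inner header, so that is what must be hashed to keep all of a
    subscriber's flows on the same worker.
    """
    key = inner_src_ip if inner_src_ip is not None else outer_src_ip
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_WORKERS

# Two packets from the same subscriber (same inner IP) land on the same
# worker even though the GTP tunnel endpoints (outer IPs) differ.
print(worker_for_packet("10.0.0.1", inner_src_ip="100.64.3.7"))
print(worker_for_packet("10.0.0.2", inner_src_ip="100.64.3.7"))
```

Without GTP awareness, the hash would be computed over the outer tunnel addresses, and traffic from many subscribers could pile onto a few workers.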

Two Types of Filtering Architecture

To address these challenges, two types of architecture are available for providing the processing resources and load distribution. The first is a stacked, distributed server solution.

In this method, three COTS servers equipped with a total of twelve 2 x 10 Gbps standard NICs are paired with a high-end load balancer and 24 cables for the 10 Gbps links. The load balancer connects in-line with the 100 Gbps link and distributes traffic to the 10 Gbps ports on the standard servers; it must support GTP and flow distribution based on subscriber IP addresses. Because the load balancer cannot guarantee perfectly even load distribution, overcapacity is needed on the distribution side, and 24 x 10 Gbps links is a reasonable choice: three standard servers, each with four 2 x 10 Gbps standard NICs, provide 240 Gbps of traffic capacity in total (3 x 4 x 2 x 10 Gbps).
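
A few lines of arithmetic, using only the figures above, show where the 240 Gbps of distribution capacity comes from and how much headroom it leaves over the full-duplex 200 Gbps worst case discussed below (a sketch, not a sizing guide).

```python
# Stacked solution capacity, using the figures from the text.
servers = 3
nics_per_server = 4      # 2 x 10 Gbps standard NICs per server
ports_per_nic = 2
port_speed_gbps = 10

total_capacity = servers * nics_per_server * ports_per_nic * port_speed_gbps
full_duplex_load = 2 * 100   # 100 Gbps in each direction

print(f"Distribution capacity: {total_capacity} Gbps")              # 240 Gbps
print(f"Headroom over full duplex 100G: "
      f"{total_capacity / full_duplex_load:.1f}x")                  # 1.2x
```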

The downside of this method is the expense of the load balancer, although this is offset by the reasonable cost of the standard COTS servers and NICs. The solution also involves many components and complex cabling, the rack space required is relatively large, and system management is complex due to the multi-chassis design.

The second method is a single, consolidated server solution, which combines load distribution, 100G network connectivity and the total processing power in one server. This solution requires a COTS server and two 1 x 100G Smart NICs. Since up to 200 Gbps of traffic must be processed within the same server system, the server needs many cores for parallel processing; for example, a server with 48 CPU cores can run up to 96 flow-processing threads in parallel using hyper-threading. To fully use the CPU cores, the Smart NIC must support load distribution to as many threads as the server system provides, and to ensure balanced use of those cores it must support GTP tunneling. The Smart NIC should deliver these features at full throughput under a full-duplex 100 Gbps traffic load, for any packet size.
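
As a back-of-the-envelope check (a sketch using only the example figures above), dividing the worst-case 200 Gbps of full-duplex traffic across the 96 hyper-threads shows what each flow-processing thread must sustain.

```python
# Consolidated solution: per-thread load, using the figures from the text.
cpu_cores = 48
threads_per_core = 2                 # hyper-threading
worker_threads = cpu_cores * threads_per_core

full_duplex_load_gbps = 2 * 100      # 100 Gbps in each direction

per_thread_gbps = full_duplex_load_gbps / worker_threads
print(f"{worker_threads} worker threads, "
      f"~{per_thread_gbps:.2f} Gbps per thread")   # ~2.08 Gbps
```

This only holds if the Smart NIC spreads flows evenly across all 96 threads; any skew raises the load on the busiest threads accordingly.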

This consolidated method has several advantages. First, the cabling is simple because only a single chassis is involved. Second, it provides one-stop system management, with no complex dependencies between multiple chassis. Third, the footprint in the server rack is small, which reduces rack space hosting expenses.

Comparing Solutions

The technical aspects of a web content filtering solution for 100 Gbps are clearly important, but total cost of ownership should also be a serious consideration. Here are some significant parameters for operations expenditure (OPEX) and capital expenditure (CAPEX) calculations; a rough cost-model sketch follows the lists below:

OPEX

  • Support and warranty
  • Rack space hosting expenses
  • Power usage, including cooling, for:
    • NICs
    • Servers
    • Load balancers

CAPEX

  • Price of Smart NICs or standard NICs
  • Price of software
  • Price of servers
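
One rough way to compare the two architectures is to plug these parameters into a simple cost model. In the sketch below every price, wattage and hosting rate is a hypothetical placeholder, not vendor data; only the structure of the calculation is meant to carry over.

```python
# Hypothetical TCO sketch: all figures are placeholders to be replaced
# with your own quotes for servers, NICs, load balancers and hosting.
def tco(capex, power_watts, rack_units, years=3,
        kwh_price=0.12, rack_unit_per_month=30):
    energy = power_watts / 1000 * 24 * 365 * years * kwh_price
    hosting = rack_units * rack_unit_per_month * 12 * years
    return capex + energy + hosting

stacked = tco(
    capex=3 * 8_000 + 12 * 1_500 + 40_000,  # 3 servers + 12 NICs + load balancer
    power_watts=3 * 500 + 800,              # servers plus load balancer
    rack_units=3 * 2 + 2,
)
consolidated = tco(
    capex=15_000 + 2 * 10_000,              # 1 server + 2 Smart NICs
    power_watts=700,
    rack_units=2,
)
print(f"Stacked: ~${stacked:,.0f}   Consolidated: ~${consolidated:,.0f}")
```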

Which solution is right for your organization? It depends on your use case. Consider your application's CPU requirements when making the decision, because the two solutions have significant cost differences. As content consumption continues to rise, web content filtering at 100 Gbps is a necessity today. If a simpler, less expensive consolidated model appeals to you, consider the load distribution and full throughput that Smart NICs offer.

About the author: Sven Olav Lund is a Senior Product Manager at Napatech and has over 30 years of experience in the IT and telecom industry. Prior to joining Napatech in 2006, Sven Olav was a Software Architect for home media gateway products at Triple Play Technologies. From 2002 to 2004 he worked as a Software Architect for mobile phone platforms at Microcell / Flextronics ODM and later at Danish Wireless Design / Infineon AG. Sven Olav started his career as a Software Engineer architecting and developing software for various gateway and router products at Intel and Case Technologies. He has an MSc degree in Electrical Engineering from the Technical University of Denmark.




Edited by Maurice Nagle








