Replacing Central Router with a Next-Generation Firewall?

One of my readers sent me this question:

After reading this blog post and a lot of blog posts about the zero-trust model versus security zones, what do you think about replacing L3 data center core switches with high-speed next-generation firewalls?

Long story short: just because someone writes about an idea doesn’t mean it makes sense. Some things are better left in PowerPoint.

Let’s start with raw numbers (and Fermi estimates). You probably need an order of magnitude more bandwidth within the data center than going out of the data center. If you have a 10GE WAN connection, you probably need 50+ Gbps of core bandwidth in your data center (web hosting companies are an obvious exception).
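
A quick back-of-the-envelope sketch of that math (a minimal example; the numbers are illustrative assumptions, not measurements):

```python
# Fermi estimate of east-west (intra-DC) vs. north-south (WAN) bandwidth.
# All numbers are illustrative assumptions, not measurements.

wan_gbps = 10          # assumed WAN/Internet uplink
ew_to_ns_ratio = 5     # assume 5-10x more intra-DC traffic than WAN traffic

core_gbps_needed = wan_gbps * ew_to_ns_ratio
print(f"WAN uplink:            {wan_gbps} Gbps")
print(f"Estimated core demand: {core_gbps_needed}+ Gbps east-west")

# Compare with a single modern ToR switch (~1 Tbps of line-rate L3 forwarding)
tor_gbps = 1000
print(f"A single ToR switch covers that roughly {tor_gbps // core_gbps_needed} times over")
```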

A single ToR switch can give you 1+ Tbps of line-rate layer-3 forwarding performance. In reality, if you’d redesign most data centers with state-of-the-art equipment, you’d probably be left with 2 ToR switches and a single rack of servers.

I repeated my “Who has more than 2000 VMs” poll @ Interop Las Vegas. The result: a few people in a packed room (way less than 10% of the audience). Afterwards, I did a reality check with Chris Wahl, and he told me most of his customers are below 3000 VMs.

In comparison, you can get next-generation firewalls that work at 100 Gbps speeds, but they would probably be more expensive than the rest of your data center (Palo Alto needs 400 processors to get that performance, which is probably more cores than most companies need to run their application workloads).

Next, placing a firewall in the middle of your data center makes absolutely no sense from the security perspective. You need a full-blown next-generation firewall either at the edge of your data center (the traditional architecture), or close to every VM (microsegmentation approach), but not between internal VLANs – if an intruder breaks into a VLAN shared by multiple applications, it’s game over anyway.
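
To make the placement argument more tangible, here’s a minimal sketch of the difference in policy granularity; the VLAN names, VM names and rules are hypothetical, not a real firewall API:

```python
# Illustrative only -- hypothetical names and rules, not a real firewall API.

# Firewall between internal VLANs: one coarse rule. Every VM in "web-vlan"
# can reach every VM in "db-vlan", so a single compromised web VM can reach
# all databases, even those belonging to other applications.
vlan_policy = [
    {"src": "web-vlan", "dst": "db-vlan", "port": 3306, "action": "allow"},
]

# Microsegmentation: rules are attached to individual VMs (vNICs), so a
# compromised web-1 still cannot reach db-2, which serves another application.
microseg_policy = [
    {"src_vm": "web-1", "dst_vm": "db-1", "port": 3306, "action": "allow"},
    {"src_vm": "web-2", "dst_vm": "db-2", "port": 3306, "action": "allow"},
    # implicit default deny covers everything else, including web-1 -> db-2
]
```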

Also consider that you’ll probably gain nothing by deep inspection of backup traffic, SQL queries or application JSON/RPC calls. Who will configure all that stuff? And don’t forget that packet filters might often be good enough for intra-DC traffic generated by applications that use static port numbers (get lost, Microsoft Outlook).
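
As an illustration of the “packet filters might be good enough” point, here’s a minimal stateless 5-tuple filter sketch; the subnets and port numbers are hypothetical examples:

```python
# Minimal stateless packet-filter sketch -- often enough for intra-DC traffic
# generated by applications using static port numbers.
# The subnets and ports below are hypothetical examples.
from ipaddress import ip_address, ip_network

RULES = [
    # (source subnet,           destination subnet,        dst port)
    (ip_network("10.1.0.0/24"), ip_network("10.2.0.0/24"), 5432),   # app -> DB
    (ip_network("10.2.0.0/24"), ip_network("10.9.0.0/24"), 10000),  # DB -> backup
]

def permit(src: str, dst: str, dport: int) -> bool:
    """Stateless 5-tuple check: no deep inspection, just address/port matching."""
    return any(
        ip_address(src) in s and ip_address(dst) in d and dport == p
        for s, d, p in RULES
    )

print(permit("10.1.0.5", "10.2.0.7", 5432))  # True  -- allowed app -> DB query
print(permit("10.1.0.5", "10.2.0.7", 22))    # False -- everything else dropped
```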

Finally, if you really need full visibility into the traffic within your data center (which you won’t get with a central firewall anyway, because you’d be missing all intra-VLAN traffic), deploy NetFlow or sFlow on your virtual switches… or, if you have a really big budget, go for Gigamon’s Visibility Fabric (hint: they run tapping VMs in promiscuous mode on every ESXi host).
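
If all you’re after is visibility (“who talks to whom”), the flow records exported by the virtual switches already contain the answer. A toy aggregation sketch with made-up records (a real deployment would feed a proper NetFlow/sFlow collector, not a script like this):

```python
# Toy "who talks to whom" aggregation over flow records exported by virtual
# switches. The records are made up; real deployments use a NetFlow/sFlow
# collector, not this script.
from collections import Counter

flow_records = [
    # (src VM IP,  dst VM IP,  dst port, bytes)
    ("10.1.0.5", "10.2.0.7", 5432, 120_000),
    ("10.1.0.5", "10.2.0.7", 5432,  80_000),
    ("10.3.0.9", "10.1.0.5",  443,  15_000),
]

traffic = Counter()
for src, dst, dport, nbytes in flow_records:
    traffic[(src, dst, dport)] += nbytes

for (src, dst, dport), total in traffic.most_common():
    print(f"{src} -> {dst}:{dport}  {total} bytes")
```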

Want to know more about microsegmentation?

Listen to the NSX Microsegmentation podcast with Brad Hedlund and Palo Alto Virtual Firewalls podcast with Christer Swartz, or watch my Virtual Firewalls webinar.

11 comments:

  1. True, surveillance and visibility are as important as authenticity and access control.

    The problem with this is that the InfoSec Mordacs claim the above is a risk. Much like an ostrich.
  2. If E/W forwarding performance is not an issue, why doesn't it make sense from the security perspective? Each application in a separate VLAN/subnet/security zone and FW as the default gateway for each. How is this different from the "every application an independent tenant" model that you advocate?
    Replies
    1. If you deploy each tier of each application in an individual VLAN, then a centralized firewall makes perfect sense. How often have you seen that? ;) ... and how many VLANs would you need in a typical enterprise data center?
    2. It depends ;)

      I don't think many people have seen each tier of each application deployed in its own VLAN, but I have seen the dedicated VLAN/security zone per app deployments. Also, similar non-critical apps can be grouped in the same security zone if the FW rules get too complex.
  3. And then there is the asymmetric routing situation if not done correctly. Flows could get dropped if not following the exact same path in both directions all the time, including redundancy scenarios.
  4. I realize this is petty, but: optional in Microsoft Exchange 2010, and required in Exchange 2013, all Microsoft Outlook traffic runs over a single port, tcp/443. So no need to flip MS the finger anymore for their rpc/tcp port mappings.
    Replies
    1. This is not petty, this is AWESOME NEWS! Thanks for sharing ;)
  5. That's all correct and very educating, thanks!

    However, it looks like your reader wasn't attentive enough or was just thinking out loud, as the original blog post stated exactly: In smaller or branch offices, it might actually cost less to use the same device for Internet traffic and internal segmentation.
    -- which I fully support personally, from both operational complexity and cost efficiency standpoints.
    Replies
    1. As always, the answer is "it depends", in this case on the E-W (intra-site) versus N-S (inter-site) traffic. I would definitely agree with that assessment for most small offices, but the firewall-only design isn't exactly new in that particular scenario (we've been using Cisco ASA to do that for years).
  6. I work for a company with about 250 Prod VMs, 350 Dev VMs, and 1200 desktops. I am working on a network segmentation project which includes an NGFW in the data center, but we will only be running IPS/IDS with application controls for security policy. At the Internet edge we will run the full-service NGFW stack with advanced threat / URL, etc. Our InfoSec group has been quick to suggest a zone for just about every server, which essentially moves VLAN routing away from the cores and onto the firewall.

    I am fighting this by suggesting 3 zones, with a "small" number of zones within those zones. For example: USER ZONE (L3 switch, includes MPLS routers for all user branch offices) with a routed connection to the data center NGFW / DEV ZONE (L3 switch) with a routed connection to the data center NGFW / PRODUCTION ZONE (L3 switch) with a routed connection to the data center NGFW.
    This would give us 3 large zones and the ability to write policy between any combination of them - USER <--> DEV, DEV <--> PROD, PROD <--> USER.

    Servers deemed sensitive enough for their own smaller zones will all be dual-homed, with an L2 backup network with access to media servers.


  7. Hi Ivan,
    Quoting from the article: "Also consider that you’ll probably gain nothing by deep inspection of backup traffic, SQL queries or application JSON/RPC calls. Who will configure all that stuff?"

    Even with microsegmentation we need to configure the policies at the VM NIC level. So the challenge is to have these rules configured in the ToRs/spines vs. configuring the rules in the VMs themselves.

    Isn't it?

    Sincerely,
    Sudarsan.D