
Using DNS Names in Firewall Rulesets

My friend Matthias Luft sent me an interesting tweet a while ago.

All I could say in 160 characters was “it depends”. Here’s a longer answer.

Read this first

I’m looking at this challenge from the data center perspective: how do you build firewall rules that protect hosts within your data center (for north-south or east-west traffic)?

There’s another, more interesting (= harder) challenge: how do you allow your clients controlled access to external services? I would not use DNS in that scenario, as I don’t trust third-party DNS servers enough to use their data in my security rules. For more details, see the comments.

The Problem

Application developers and architects don’t think in IP addresses and port numbers (one would hope); they want to express their needs in terms of “these hosts have to talk to those other hosts using this software package/library”.

On the other hand, all firewall products have packet filters working on IP addresses and TCP/UDP ports hidden deep down within their bowels.

Some firewalls go way beyond that and inspect session content as well, but in the end, if you dig deep enough you’ll always find a packet filter and/or a session table.

The fundamental questions to ask are thus:

  • Who does the mapping between groups of hosts and IP addresses?
  • How is that mapping performed?
  • When is the mapping done? In real time or offline?
  • And finally, what/where is your single source of truth?

In this blog post we’re focusing exclusively on the mapping of host groups into IP address sets. Mapping software packages into TCP/UDP port numbers or URL patterns is a totally different can of worms.

The Extremes

In traditional firewall management the mapping is done manually by the firewall administrator, often using dubious sources of truth (Excel spreadsheets, assumed knowledge, guesswork, traffic traces…).

At the other extreme, most decent cloud management platforms perform the mapping automatically, using the cloud orchestration system as the single source of truth. For more details, listen to the excellent podcast with Brad Hedlund explaining how the VMware NSX distributed firewall does its job.

TL&DL summary: Looking through the NSX Manager GUI, it looks like the NSX distributed firewall uses VM groups or portgroups to enforce security policies. In reality, these definitions are compiled into sets of IP addresses using vCenter data and pushed into the distributed firewalls as packet filters that change dynamically every time a VM is started, stopped, or assigned to a different security group.

OpenStack security groups are doing the same operations behind the scenes using iptables and ipset when implemented on Linux.
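The compilation step both platforms perform can be sketched in a few lines of Python. This is a minimal illustration, not any vendor’s actual implementation; the inventory, group names, and port are hypothetical.

```python
# Sketch: expand group-based security rules into IP-address rules, the
# way NSX or OpenStack compile group definitions into packet filters.
# All names and addresses below are made-up examples.

inventory = {            # hostname -> IP address (the orchestrator's view)
    "web-01": "10.0.1.11",
    "web-02": "10.0.1.12",
    "db-01": "10.0.2.21",
}

groups = {               # security group -> member hostnames
    "frontend": ["web-01", "web-02"],
    "backend": ["db-01"],
}

policy = [
    # (source group, destination group, TCP port)
    ("frontend", "backend", 5432),
]

def compile_policy(policy, groups, inventory):
    """Expand group-level rules into (src IP, dst IP, port) filter entries."""
    rules = []
    for src_grp, dst_grp, port in policy:
        for src in groups[src_grp]:
            for dst in groups[dst_grp]:
                rules.append((inventory[src], inventory[dst], port))
    return rules

print(compile_policy(policy, groups, inventory))
```

Whenever a VM is started, stopped, or moved between groups, the orchestrator reruns this expansion and pushes the resulting entries into the packet filters.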

Best of both worlds?

Is there a middle ground? Could you use DNS names to translate human-readable rules into packet filters? The traditional answer was “no, because I don’t trust DNS”. OK, let’s look at some details:

  • Most decently-managed enterprise environments have some sort of IPAM solution serving as the single source of addressing truth (note I said “decently managed” ;)
  • The same IPAM solution is used to generate the data populating the DNS zones;
  • The DNS server uses a read-only set of IPAM data to answer queries;
  • The DNS server is (hopefully) running in a highly protected zone on a redundant set of servers using anycast IP addresses for seamless failover;
  • In Windows environments, the DNS server gets its data straight from AD.

Based on all of the above, you still don’t trust DNS data? OK, you can stop reading.

Does it make sense to use DNS data in real time and build IP address sets on a firewall based on DNS queries? Definitely not in the data plane (on-the-fly), but the control plane approach is perfectly doable: the firewall could recheck DNS mappings when TTLs expire and adjust the firewall rule sets. But what if you want to be even more static than that?
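The control-plane approach boils down to a simple refresh loop: re-resolve the names used in the ruleset when their TTLs expire, and adjust the packet filters only when the answers change. A minimal sketch, with the resolver injected as a function (in production it would be a real DNS query that also returns the record TTL); all names are hypothetical:

```python
# Control-plane sketch: re-resolve DNS names referenced in the ruleset
# and report which address sets changed, so the firewall rules can be
# adjusted. The 'resolve' callable is injected to keep the example
# self-contained; a real implementation would query DNS and honor TTLs.

def refresh_mappings(names, resolve, cache):
    """Return {name: (old_ips, new_ips)} for every changed mapping."""
    changes = {}
    for name in names:
        new_ips = frozenset(resolve(name))
        old_ips = cache.get(name, frozenset())
        if new_ips != old_ips:
            changes[name] = (old_ips, new_ips)
        cache[name] = new_ips
    return changes

# Simulated resolver standing in for real DNS answers
answers = {"db.example.com": ["10.0.2.21", "10.0.2.22"]}
cache = {"db.example.com": frozenset({"10.0.2.21"})}
print(refresh_mappings(["db.example.com"], lambda n: answers[n], cache))
```

Only the names whose answers changed show up in the result, which is exactly the set of firewall rules that need updating.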

My recent work with Ansible (while creating scenarios for my network automation workshop) gave me an interesting idea that might work for traditional non-cloudy environments:

  • Define the security policies in human-readable terms;
  • Transform those policies into a YAML model (or define them as YAML objects, they are pretty readable);
  • Use Ansible DNS lookups to convert hostnames into IP addresses;
  • Create firewall rules from security policies and DNS data;
  • Compare new firewall rules with existing ones and report the changes (including changes in DNS lookup results);
  • When a security engineer approves the changes, push them into firewalls.
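The pipeline above can be sketched end-to-end in a few dozen lines. This is an illustration of the idea, not a finished tool: the policy entries, hostnames, and the `resolve` stub are hypothetical (in Ansible the lookup would be done with a DNS lookup plugin instead of the stub).

```python
# End-to-end sketch of the proposed pipeline: a policy model (shown as a
# Python dict instead of the YAML file), hostname-to-IP resolution, rule
# generation, and a diff against the currently deployed rules that a
# security engineer would review before anything is pushed.

policy = [
    {"name": "app-to-db", "src": "app.example.com",
     "dst": "db.example.com", "port": 3306},
]

def resolve(hostname):
    # Stub standing in for a real DNS lookup; keeps the sketch
    # self-contained and deterministic.
    table = {"app.example.com": ["192.0.2.10"],
             "db.example.com": ["192.0.2.20"]}
    return table[hostname]

def build_rules(policy):
    """Expand hostname-based policies into (src IP, dst IP, port) rules."""
    rules = set()
    for entry in policy:
        for src_ip in resolve(entry["src"]):
            for dst_ip in resolve(entry["dst"]):
                rules.add((src_ip, dst_ip, entry["port"]))
    return rules

def diff_rules(current, desired):
    """Report additions/removals for the security engineer to approve."""
    return {"add": desired - current, "remove": current - desired}

current = {("192.0.2.10", "198.51.100.20", 3306)}  # rule built from a stale DNS answer
print(diff_rules(current, build_rules(policy)))
```

The diff output is the change report from the fifth step: it surfaces DNS-driven changes (like the stale address above) before they’re pushed into the firewalls.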

Would that work? Would your security policy allow you to do that? Do you think this is better than managing firewall rules in Notepad? Please write a comment!

15 comments:

  1. I think this would only work if the firewall keeps track of all DNS entries in the configuration, regularly refreshes cached DNS entries honoring the TTL setting, and updates IP flow rules every time a change happens. Otherwise you will have traffic blackholing every time someone changes the IP address of the website.
    As a side note, since Cisco's ASAs follow a similar approach (resolve hostnames when rules are created and install IP addresses), it makes it really difficult to create firewall rules for SaaS services (O365, Salesforce). And the official workaround is to include the huge ranges (/18, /19) that those SaaS services use (hoping that they use them exclusively). Needless to say, it requires an update every time they add a new hosting location.

    1. There's another cool trick you can use there (assuming your firewall supports it): extract the server name from the TLS certificate sent to the client during the TLS handshake. Not sure how many firewall vendors support that.

    2. I know a firewall vendor that, at last check (~2 years ago), was able to do that; however, they were not able to combine it with certificate validation (CA-based). I'll let you see the issue with that....

    3. Brett Wolmarans, 10 October 2016 16:34

      F5 can do this. The ADC part of F5 sees this naturally and can push it to the F5 network firewall.

  2. Hi Ivan,
    How would this approach work in case DNS load-balancing is used in the environment?

    1. Use a different hostname that lists all potential IP addresses used for DNS-based load balancing. Obviously this doesn't work with third-party services (see the comment by Michael Kashin).

  3. The scenario you describe is pretty similar to what I built in the past. In our case we didn't use DNS, though; we used Puppet classes instead (we were using Puppet for server management). So developers could express what they needed in the form of "src: frontend_svca, dst: backend_b, port: tcp/12345". Ansible would then expand frontend_svca and backend_b using the data in PuppetDB and deploy the address book and the policy on the firewall. Every time a new frontend or backend was deployed/decommissioned, the playbook would be run and the address book would be updated. In this scenario only new policies had to be approved; if a new server was deployed, everything could be deployed automatically.

  4. Palo Alto does this with FQDN objects. A DNS name is configured in the FQDN object used in a security policy. Once committed, the management plane performs the DNS lookup and the resulting IP address(es) are pushed to the data plane (PAN-OS 7.1 allows 32 IP addresses for each FQDN object). The result is then rechecked every 30 minutes by default.

    https://www.google.com/patents/US8621556

  5. What happens if multiple websites use the same IP address (I think that's common with cloud services today) and we want to block one website while allowing another?

    1. Then you need a firewall with L7 DPI capabilities, so it can look into the TLS certificate or the HTTP GET request.

  6. I see 2 main points here.

    - Ivan's point is to manage "on-site" security policy with names rather than IP addresses. It's definitely the way to go. I'll argue that the best way I see now is container micro-services, where developers are responsible for defining their policies for IP communications. It's not perfect, because devs were big fans of "chmod -R 777" on Unix, so I fear they are not to be completely trusted to define security policies.

    - On the other side, most comments are asking for filtering "*.download.windowsupdate.com" on the firewall. This particular issue, filtering website domains, can easily be managed with a good old web proxy. It's way better than any firewall tricks. Moreover, you can authenticate the requests at the proxy level.

  7. Do you happen to have any hints for dealing with people who don't trust DNS? I'm not talking only about data; there's a population that doesn't trust DNS at all (to the point that they don't publish data into DNS; reliability is part of their argument).

    1. None, apart from hoping they'll go down the route of the dinosaurs (together with COBOL apps and a few other things). I know a large global company that had a perfect scale-out application infrastructure and destroyed the whole thing by using IP addresses in configuration files spread across all hosts "because you can't trust an internal DNS server". Makes you cry...

  8. I think using DNS to update firewall rules always lacks accuracy, even if you see your DNS as a source of truth. You have to take DNS propagation time into account, but how long does that take? 4 hours? 24 hours? More? On the one hand, the client's TTL may time out faster than the firewall control plane's TTL, and clients will try to connect to the new IP and fail! On the other hand, it's an issue vice versa, too. Either way, the DNS approach ends in a multifaceted issue for changes.
    I guess the only valid "dynamic" way for internal destinations is to use a service discovery system like Consul. These directories have a clear view of the instances available for a service (plus the TCP/UDP port used). If an instance is removed, the firewall control plane can update the rule.
    In the case of external destinations, looking into certificate details is a valid approach, but only for TLS/SSL-secured connections.


