Using DNS Names in Firewall Rulesets
My friend Matthias Luft sent me an interesting tweet a while ago:
@ioshints What’s your take on firewall rule sets & IP addresses vs. hostnames?
— Matthias Luft (@uchi_mata) August 16, 2016
All I could say in 140 characters was “it depends”. Here’s a longer answer.
Read this first
I’m looking at this challenge from the data center perspective: how do you build firewall rules that protect hosts within your data center (for north-south or east-west traffic)?
There’s another, more interesting (= harder) challenge: how do you allow your clients controlled access to external services? I would not use DNS in that scenario, as I don’t trust third-party DNS servers enough to use their data in my security rules. For more details, see the comments.
The Problem
Application developers and architects don’t think in IP addresses and port numbers (one would hope); they want to express their needs in terms of “these hosts have to talk to those other hosts using this software package/library”.
On the other hand, all firewall products have packet filters working on IP addresses and TCP/UDP ports hidden deep down within their bowels.
Some firewalls go way beyond that and inspect session content as well, but in the end, if you dig deep enough you’ll always find a packet filter and/or a session table.
The fundamental questions to ask are thus:
- Who does the mapping between groups of hosts and IP addresses?
- How is that mapping performed?
- When is the mapping done? In real time or offline?
- And finally, what/where is your single source of truth?
In this blog post we’re focusing exclusively on the mapping of host groups into IP address sets. Mapping software packages into TCP/UDP port numbers or URL patterns is a totally different can of worms.
The Extremes
In traditional firewall management, the mapping is done manually by the firewall administrator, often using dubious sources of truth (Excel spreadsheets, assumed knowledge, guesswork, traffic traces…).
At the other extreme, most decent cloud management platforms perform the mapping automatically, using the cloud orchestration system as the single source of truth. For more details, listen to the excellent podcast with Brad Hedlund explaining how the VMware NSX distributed firewall does its job.
TL&DL summary: Viewed through the NSX Manager GUI, it looks like the NSX distributed firewall uses VM groups or portgroups to enforce security policies. In reality, these definitions are compiled into sets of IP addresses using vCenter data and pushed into the distributed firewalls as packet filters that change dynamically every time a VM is started, stopped, or assigned to a different security group.
OpenStack security groups do the same behind the scenes, using iptables and ipset when implemented on Linux.
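As an illustration, here’s a minimal Ansible sketch of that trick on a Linux box (the host group, set name, and addresses are made up): the “group of hosts” becomes an ipset, and a single iptables rule references the whole set.

```yaml
- hosts: linux_firewall
  become: true
  tasks:
    # -exist makes the command safe to re-run if the set already exists
    - name: Create the address set for the host group
      command: ipset create web-servers hash:ip -exist

    - name: Add the current group members to the set
      command: ipset add web-servers {{ item }} -exist
      loop:
        - 10.1.1.10
        - 10.1.1.11

    # One rule covers every member of the set; changing group
    # membership never touches the rule itself
    - name: Allow HTTP to any member of the set
      ansible.builtin.iptables:
        chain: FORWARD
        protocol: tcp
        destination_port: "80"
        match_set: web-servers
        match_set_flags: dst
        jump: ACCEPT
```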
Best of both worlds?
Is there a middle ground? Could you use DNS names to translate human-readable rules into packet filters? The traditional answer was “no, because I don’t trust DNS”. OK, let’s look at some details:
- Most decently-managed enterprise environments have some sort of IPAM solution serving as single source of addressing truth (note I said “decently managed” ;)
- The same IPAM solution is used to generate data populating DNS zones;
- The DNS server uses a read-only copy of the IPAM data to answer queries;
- The DNS server is (hopefully) running in a highly protected zone on a redundant set of servers using anycast IP addresses for seamless failover;
- In Windows environments, the DNS server gets its data straight from Active Directory.
Based on all of the above, you still don’t trust DNS data? OK, you can stop reading.
Does it make sense to use DNS data in real time and build IP address sets on a firewall based on DNS queries? Definitely not in the data plane (on-the-fly), but the control plane approach is perfectly doable: the firewall could recheck DNS mappings when TTLs expire and adjust the firewall rule sets. But what if you want to be even more static than that?
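A control-plane refresh could be as simple as a periodically scheduled playbook. Here’s a hedged sketch (the hostname and set name are made up; it assumes the community.general collection and dnspython for the dig lookup):

```yaml
- hosts: linux_firewall
  become: true
  tasks:
    # The lookup runs on the control node; the packet filter itself
    # never issues DNS queries in the forwarding path
    - name: Re-resolve the hostname (returns whatever DNS says right now)
      set_fact:
        app_addresses: "{{ lookup('community.general.dig', 'app.example.com', wantlist=True) }}"

    - name: Refresh the address set with the current answers
      command: ipset add app-servers {{ item }} -exist
      loop: "{{ app_addresses }}"
    # A real implementation would also prune addresses no longer
    # returned by DNS, e.g. by building a fresh set and using ipset swap
```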
My recent work with Ansible (while creating scenarios for my network automation workshop) gave me an interesting idea that might work for traditional non-cloudy environments (sketched in the playbook after this list):
- Define the security policies in human-readable terms;
- Transform those policies into a YAML model (or define them as YAML objects, they are pretty readable);
- Use Ansible DNS lookups to convert hostnames into IP addresses;
- Create firewall rules from security policies and DNS data;
- Compare new firewall rules with existing ones and report the changes (including changes in DNS lookup results);
- When a security engineer approves the changes, push them into firewalls.
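Here’s a minimal sketch of that pipeline. It assumes the community.general collection (for the dig lookup, which needs dnspython) and a hypothetical firewall-rules.j2 Jinja2 template; all hostnames and rule attributes are made up:

```yaml
- hosts: localhost
  gather_facts: false
  vars:
    # Steps 1-2: the human-readable policy captured as YAML objects
    security_policy:
      - name: web-to-db
        src: web01.example.com
        dst: db01.example.com
        port: 3306
  tasks:
    # Step 3: resolve hostnames into IP addresses
    - name: Resolve hostnames in the security policy
      set_fact:
        resolved_policy: >-
          {{ resolved_policy | default([]) + [ item | combine({
             'src_ip': lookup('community.general.dig', item.src),
             'dst_ip': lookup('community.general.dig', item.dst) }) ] }}
      loop: "{{ security_policy }}"

    # Step 4: render the candidate rules from the resolved policy
    - name: Generate candidate firewall rules
      template:
        src: firewall-rules.j2    # hypothetical Jinja2 rule template
        dest: ./candidate-rules.cfg
```

Running the playbook with --check --diff produces the change report for steps 5 and 6 (including changed DNS lookup results); once the security engineer approves the diff, run it again without --check to generate and push the rules.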
Would that work? Would your security policy allow you to do that? Do you think this is better than managing firewall rules in Notepad? Please write a comment!
As a side note, Cisco ASAs follow a similar approach (hostnames are resolved when rules are created and the resulting IP addresses installed), which makes it really difficult to create firewall rules for SaaS services (O365, Salesforce). The official workaround is to include the huge address ranges (/18, /19) those SaaS services use, hoping they use them exclusively. Needless to say, the rules need an update every time the SaaS provider adds a new hosting location.
How would this approach work when DNS load balancing is used in the environment?
https://www.google.com/patents/US8621556
- Ivan's point is to manage "on-site" security policy with names rather than IP addresses. It's definitely the way to go. I'd argue the best approach right now is container microservices, where developers are responsible for defining the policies for their IP communications. It's not perfect: devs were big fans of "chmod -R 777" on Unix, so I fear they can't be completely trusted to define security policies.
- On the other hand, most comments ask for filtering "*.download.windowsupdate.com" on the firewall. That particular problem, filtering web domains, is easily handled with a good old web proxy, which beats any firewall trick. Moreover, you can authenticate the requests at the proxy level.
I guess the only valid "dynamic" approach for internal destinations is to use a service discovery system like Consul. These directories have a clear view of the instances available for a service (plus the TCP/UDP ports they use). If an instance is removed, the firewall control plane can update the rule (see the sketch below).
For external destinations, inspecting certificate details is a valid approach, but only for TLS-secured connections.
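A minimal sketch of the Consul idea mentioned above, using Ansible's uri module against Consul's catalog API (the service name, Consul address, and set name are made up):

```yaml
- hosts: linux_firewall
  become: true
  tasks:
    - name: Query Consul for live instances of the service
      uri:
        url: http://consul.example.com:8500/v1/catalog/service/billing
        return_content: true
      register: consul_reply
      delegate_to: localhost

    # ServiceAddress may be empty when it equals the node address,
    # hence the fallback to Address
    - name: Refresh the address set from the service catalog
      command: ipset add billing-servers {{ item.ServiceAddress | default(item.Address, true) }} -exist
      loop: "{{ consul_reply.json }}"
```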
Apologies for bringing up an old topic, but I'm curious whether your thinking on this has changed.
We are seeing more business applications that depend on cloud-based services (such as licensing servers) that don't have static IP addresses and therefore rely on firewall rules that use DNS hostnames.
Nothing fundamental has changed since 2016 ;) It still all depends on whether you trust DNS.
If you want to do things fast, you have to filter on IP addresses, and maybe use DNS to change the ACLs in the background.
However, if you're already doing deep packet inspection, you could of course use the TLS negotiation to figure out the real server name (and maybe even check its certificate), or, as someone wrote in the comments, "use the good old web proxy".