Having spent my career in various IT security roles, I have always bounced thoughts around with Ivan on the overlap between networking and security (and, more recently, cloud and containers). One of the hot challenges on that boundary, and the topic of this blog post, comes up regularly in network/security discussions: microsegmentation and host-based firewalls (HBFs).
New technologies like NSX-T, Tetration, or security-group functionality in public clouds have made this topic come up even more often recently. This post will not discuss the details of individual products (for more information watch the NSX, AWS and Azure webinars), but rather different aspects of firewall rule design based on your chosen firewall technology (centralized vs. microsegmentation). As always, your mileage/requirements/risk appetite will vary, and I hope this post provides relevant input for your evaluation.
RFC7288 provides reflections on host firewalls and describes the security benefits of using firewalls in general. When it comes to security benefit from technology, I usually like to introduce my view on security benefit and its implications: Security Benefit = Security Posture + Security Functionality.
The security posture of any technology is a prerequisite to gaining security benefits from it. There are plenty of examples where security technology, which was supposed to provide security benefits, actually resulted in a net reduction of the overall security posture. OpenSSL (providing transport encryption) with Heartbleed or the various RCE vulnerabilities in endpoint protection solutions (1,2,3) come to mind as some of many examples.
The analysis of the security posture of a product/technology is a topic beyond this post (we covered some aspects in Building Network Automation Solutions). Still, make sure to look at the security posture of the products on your evaluation shortlist. Even in recent years, security researchers found vulnerabilities in all types of networking products - from firewalls to Host TCP stacks.
The security benefit from network filtering solutions (and we are purely talking filtering on Layer 3/4) is based on restricting access to and from systems. There is comprehensive guidance on which specific types of traffic your firewall (or packet filter) should be filtering (e.g. RFC4890 or NIST SP 800-41)… but in my opinion, RFC1825 contains the most relevant quote for firewall rules: “[…] many dislike their presence because they restrict connectivity”. The fact that systems need to communicate, and that it is tough to distinguish legitimate from malicious communication, leads to several fundamental truths that create intrinsically hard challenges:
- An allowlist approach, only allowing defined communication relationships, provides the most security benefit.
- Numerous systems need to communicate with many external services.
- Various modern software engineering/deployment/operation approaches require broad Internet access.
- IP addresses, the building blocks of network filtering, are more ephemeral than ever.
- Network filtering is implemented by network/firewall engineers, while the knowledge about required communication relationships lies with application engineers.
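As a toy illustration of the allowlist principle, the following Python sketch models a default-deny Layer 3/4 filter: a flow passes only if an explicitly defined rule matches. The networks, hosts, and ports are hypothetical and chosen purely for illustration.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass(frozen=True)
class Rule:
    src: str    # source network in CIDR notation
    dst: str    # destination network in CIDR notation
    dport: int  # Layer 4 destination port

# Hypothetical allowlist: only explicitly defined flows are permitted.
ALLOWLIST = [
    Rule("10.1.0.0/24", "10.2.0.10/32", 443),   # app tier -> web backend
    Rule("10.1.0.0/24", "10.2.0.11/32", 5432),  # app tier -> database
]

def is_allowed(src_ip: str, dst_ip: str, dport: int) -> bool:
    """Default-deny: a flow is allowed only if some rule matches it."""
    return any(
        ip_address(src_ip) in ip_network(r.src)
        and ip_address(dst_ip) in ip_network(r.dst)
        and dport == r.dport
        for r in ALLOWLIST
    )
```

Everything not on the list is dropped, which is exactly why allowlists provide the most security benefit and, at the same time, why they hurt when required communication relationships are unknown.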
Those truths apply to host-based and central firewalls to only slightly different extents. They also explain why almost every firewall rule audit I have performed, participated in, or heard of quickly produced a large number of outdated rules whose original purpose nobody could remember.
Based on those truths, I find these guiding principles for firewall ruleset design very helpful:
- Start with little filtering granularity.
- Design the rule change process with a high degree of automation in mind. Modern approaches to automated firewall management can bring network engineering and application engineering closer together while improving documentation and traceability of rules (examples here and here).
- Automation can drive self-service, expiration, and automated approval processes for rule changes.
- Automation and increasing granularity require a high degree of overall IT engineering maturity. Often firewalls are thought of as a way to bring structure and order to an environment. That will not work when the existing business processes are not in order and require unknown communication relationships.
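One way to make ownership and expiration machine-enforceable is to attach that metadata to every rule. The sketch below is illustrative and not tied to any product; the field names and the `expired_rules` helper are assumptions about how such a process could be modeled.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ManagedRule:
    description: str  # what the rule is for (documentation lives with the rule)
    owner: str        # application team accountable for the rule
    expires: date     # every rule carries an expiration date by policy

def expired_rules(rules: list[ManagedRule], today: date) -> list[ManagedRule]:
    """Rules past their expiration date: candidates for automated
    removal, or for re-approval by the owning application team."""
    return [r for r in rules if r.expires < today]
```

Running such a sweep periodically turns the painful "nobody knows what this rule does" audit finding into a routine, owner-driven re-approval workflow.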
To bring it all together:
- Start with little filtering granularity.
- Implement automated processes around that filtering.
- Improve overall process and engineering maturity to support automation and increasing filtering granularity.
These considerations result in an essential requirement for your filtering technology of choice: administrative interfaces that allow a proper level of automation and central management - whether it is vendor-provided or home-grown.
And finally, circling back to the host-based focus of this post: While host-based firewalls inherently allow for a greater level of filtering granularity, the more relevant question is whether the management of your HBF solution and your operational maturity will allow you to leverage/reach this greater granularity.
I have seen successful large-scale deployments of host-based firewalls on client/endpoint systems. The rules there are often quite simple and rarely change:
- Block any inbound access
- Allow outbound access
- Allow inbound access from your endpoint administration segments (which will most likely be necessary and at least should be well known and not change often).
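This three-rule endpoint policy is simple enough that an automated pipeline could render it directly into a concrete ruleset. The sketch below emits nftables-flavored pseudo-syntax (not exact nftables grammar) from a hypothetical list of administration segments:

```python
# Hypothetical endpoint-administration segments (documentation prefixes).
ADMIN_SEGMENTS = ["192.0.2.0/24", "198.51.100.0/24"]

def render_endpoint_ruleset(admin_segments: list[str]) -> str:
    """Render the three-rule endpoint policy as nftables-style text:
    default-deny inbound, allow admin segments in, allow all outbound."""
    lines = [
        "chain input { policy drop;",
        # Only the well-known administration segments may connect inbound.
        *[f"  ip saddr {seg} accept" for seg in admin_segments],
        "}",
        "chain output { policy accept; }",
    ]
    return "\n".join(lines)
```

Because the rules rarely change, regenerating and redistributing this ruleset is a low-friction, fully automatable process, which is a large part of why these client-side deployments succeed.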
Server environments usually don’t conform to this communication pattern, making firewall rule design more difficult. With the fundamentals covered in this post, we will dive into more details in the future, such as common network security patterns for servers or the challenge that an attacker can disable a host-based firewall on a compromised system.
We would be very excited to hear your comments and disagreements, get links to other sources, and receive more questions! Finally, you might want to watch this presentation, which provides a more network engineering-focused perspective on large-scale firewall operation.