Considerations for Host-based Firewalls (Part 1)

This is a guest blog post by Matthias Luft, Principal Platform Security Engineer @ Salesforce, and a regular ipSpace.net guest speaker.

Having spent my career in various IT security roles, I have always bounced thoughts around with Ivan on the overlap between networking and security (and, more recently, cloud and containers). One of the hot challenges at that boundary that regularly comes up in network/security discussions is the topic of this blog post: microsegmentation and host-based firewalls (HBFs).

New technologies like NSX-T, Tetration, or security-group functionality in public clouds have made this topic come up even more often recently. This post will not discuss the details of individual products (for more information watch the NSX, AWS and Azure webinars), but rather different aspects of firewall rule design depending on your chosen firewall technology (centralized vs. microsegmentation). As always, your mileage/requirements/risk appetite will vary, and I hope this post provides relevant input for your evaluation.

RFC7288 offers reflections on host firewalls and describes the security benefits of using firewalls in general. When it comes to the security benefit of any technology, I like to introduce my view on what that benefit consists of: Security Benefit = Security Posture + Security Functionality.

Security Posture

The security posture of any technology is a prerequisite to gaining security benefits from it. There are plenty of examples where security technology that was supposed to provide security benefits actually resulted in a net reduction of the overall security posture; OpenSSL (providing transport encryption) with Heartbleed and the various RCE vulnerabilities in endpoint protection solutions (1,2,3) come to mind.

The analysis of the security posture of a product/technology is beyond the scope of this post (we covered some aspects in Building Network Automation Solutions). Still, make sure to look at the security posture of the products on your evaluation shortlist: even in recent years, security researchers have found vulnerabilities in all types of networking products, from firewalls to host TCP stacks.

Security Functionality

The security benefit of network filtering solutions (and we are talking purely about filtering on Layers 3/4) comes from restricting access to and from systems. There is comprehensive guidance on which specific types of traffic your firewall (or packet filter) should be filtering (e.g. RFC4890 or NIST SP 800-41)… but in my opinion, RFC1825 contains the most relevant quote for firewall rules: “[…] many dislike their presence because they restrict connectivity”. The fact that systems need to communicate, and that it is tough to distinguish legitimate from malicious communication, leads to several fundamental truths that pose intrinsically hard challenges:

  • An allowlist approach, only allowing defined communication relationships, provides the most security benefit.
  • Numerous systems need to communicate with many external services.
  • Various modern software engineering/deployment/operation approaches require broad Internet access.
  • IP addresses, the building blocks of network filtering, are more ephemeral than ever.
  • Network filtering is implemented by network/firewall engineers, while the knowledge about required communication relationships lies with application engineers.

Those truths apply to host-based and centralized firewalls to only slightly different extents. They also explain why I have hardly ever performed, participated in, or heard of a firewall rule audit that did not quickly turn up a high number of outdated rules that nobody could explain anymore.

Based on those truths, I find these guiding principles for firewall ruleset design very helpful:

  • Start with little filtering granularity.
  • Design the rule change process with a high degree of automation in mind. Modern approaches to automated firewall management can bring network engineering and application engineering closer together while improving documentation and traceability of rules (examples here and here).
  • Automation can drive self-service, expiration, and automated approval processes for rule changes (see the sketch after this list).
  • Automation and increasing granularity require a high degree of overall IT engineering maturity. Firewalls are often thought of as a way to bring structure and order to an environment; that will not work when the existing business processes are not in order and rely on unknown communication relationships.
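
To make the self-service/expiration idea concrete, here is a minimal sketch in Python (with made-up field names, not any specific product's data model) of rules that carry an owner, a justification, and an expiration date, so that stale entries can be flagged automatically:

    from dataclasses import dataclass
    from datetime import date
    from typing import List

    @dataclass
    class FirewallRule:
        # Every rule records who asked for it and why, so a later audit
        # never again finds rules nobody can explain.
        src: str
        dst: str
        port: int
        owner: str
        justification: str
        expires: date

    def expired(rules: List[FirewallRule], today: date) -> List[FirewallRule]:
        # Candidates for automated removal or a renewal request to the owner.
        return [r for r in rules if r.expires < today]

    rules = [
        FirewallRule("10.1.2.0/24", "10.9.9.10", 5432,
                     owner="team-payments",
                     justification="app servers -> billing DB",
                     expires=date(2020, 6, 30)),
    ]
    for r in expired(rules, date.today()):
        print(f"expired: {r.src} -> {r.dst}:{r.port}, notify {r.owner}")

Whether an expired rule is removed automatically or merely triggers a renewal request to its owner is a risk-appetite decision.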

To bring it all together:

  • Start with little filtering granularity.
  • Implement automated processes around that.
  • Improve overall process and engineering maturity to support automation and filtering granularity.

These considerations result in an essential requirement for your filtering technology of choice: administrative interfaces that allow a proper level of automation and central management, whether vendor-provided or home-grown.
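
As a sketch of what such automation can look like, the snippet below pushes a rule to a central manager over a REST interface; the endpoint and payload format are entirely hypothetical stand-ins for your vendor's or home-grown API:

    import json
    import urllib.request

    # Hypothetical central firewall manager API; made up for illustration.
    API = "https://fw-manager.example.com/api/v1/rules"

    rule = {
        "src": "10.1.2.0/24",
        "dst": "10.9.9.10",
        "port": 5432,
        "owner": "team-payments",
        "expires": "2020-06-30",
    }

    request = urllib.request.Request(
        API,
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        print(response.status)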

And finally, circling back to the host-based focus of this post: while host-based firewalls inherently allow for a greater level of filtering granularity, the more relevant question is whether the management of your HBF solution and your operational maturity will allow you to leverage/reach this greater granularity.

Conclusion

I have seen successful large-scale deployments of host-based firewalls on client/endpoint systems. The rules there are often quite simple and rarely change (see the sketch after the list):

  • Block any inbound access.
  • Allow outbound access.
  • Allow inbound access from your endpoint administration segments (which will most likely be necessary, and at least should be well known and not change often).
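
Such a ruleset could be rendered, for example, as nftables commands generated from Python (a sketch; 10.0.100.0/24 stands in for your endpoint administration segment):

    # Render the endpoint policy above as nftables commands.
    ADMIN_SEGMENT = "10.0.100.0/24"  # placeholder administration segment

    commands = [
        "nft add table inet endpoint",
        # Default-deny inbound; replies to outbound connections stay allowed.
        "nft add chain inet endpoint input "
        "'{ type filter hook input priority 0; policy drop; }'",
        "nft add rule inet endpoint input ct state established,related accept",
        # Only the administration segment may initiate inbound connections.
        f"nft add rule inet endpoint input ip saddr {ADMIN_SEGMENT} accept",
    ]
    print("\n".join(commands))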

Server environments usually don’t conform to this communication pattern, making firewall rule design more difficult. With the fundamentals covered in this post, we will dive into more details in the future, such as common network security patterns for servers or the challenge that an attacker can disable a host-based firewall on a compromised system.

We would be very excited to hear your comments and disagreements, get links to other sources, and receive more questions! Finally, you might want to watch this presentation, which provides a more network-engineering-focused perspective on large-scale firewall operation.


8 comments:

  1. In my opinion, one can start protecting critical network infrastructure from the very beginning.

    We know which DNS servers a protected host uses, don’t we? We also know that local DNS servers are usually forwarders.

    So it is possible to filter outgoing DNS in the host security policy, permitting only the known servers. The same applies to NTP servers.
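
    A minimal sketch of that egress policy, expressed as nftables commands generated from Python (the server addresses are placeholders):

        # Allow outbound DNS and NTP only to the known servers.
        DNS_SERVERS = ["10.0.0.53", "10.0.1.53"]  # placeholder addresses
        NTP_SERVERS = ["10.0.0.123"]

        cmds = [
            "nft add table inet endpoint",
            "nft add chain inet endpoint output "
            "'{ type filter hook output priority 0; policy accept; }'",
        ]
        for server in DNS_SERVERS:
            cmds.append(f"nft add rule inet endpoint output ip daddr {server} udp dport 53 accept")
        for server in NTP_SERVERS:
            cmds.append(f"nft add rule inet endpoint output ip daddr {server} udp dport 123 accept")
        # Anything else speaking DNS/NTP gets dropped.
        cmds.append("nft add rule inet endpoint output udp dport { 53, 123 } drop")
        print("\n".join(cmds))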

  2. Good blog!

    I have a cynical comment to make: I have NEVER found a customer application team that could tell me all the servers they are using and their IP addresses, let alone the ports they use. The few times I've gotten actual documentation produced by the vendor who installed the app, or by the in-house team who wrote it, the alleged documentation covered things like DB schemas, but not network flows.

    It's worse than that. My coworkers tell me that actually tracking down app owners is all too often excruciatingly painful, in part because there is no list of owners and contact info. And often the owners don't know much beyond their next support or license renewal date.

    This not only hinders making FW rules, it really hinders troubleshooting app problems. Yet no site I've seen is proactive about it.

    This has made me a big believer in products like Tetration and in getting the actual flows documented, both to secure the servers and to be prepared for application troubleshooting.

  3. My company has a customer who had an external audit, and the auditors reported the lack of OS-based firewalls (across multiple types of OSes). Nowadays the customer has security rules at the OS level, but almost any application issue during new implementations first leads to blaming the network team for communication problems.

  4. I personally don't think that built-in or AV/host-based firewall solutions are reliable. I have seen system admins and application developers turn off the OS/host-based firewall because they were convinced it was the reason some app stack was not working, blaming every error in their app on the host firewall. Any breach of the OS can also end with the intruder turning off the built-in firewall (I have seen an intruder cripple the AV/HIPS agent on an OS so badly that even re-installing the agent didn't work and we had to restore the whole OS from backup). For me, the firewall should always be implemented in the network, outside the host.

    Likewise, microsegmentation is better implemented outside the hypervisor, since threats like VM escape are possible and common (many of them zero-day and unknown). If a technology like VEPA (Ivan wrote about it years ago) were supported on hypervisors, we could put security back into network switch hardware; I know about the TCAM limitations, but the performance is much better than OVS+DPDK and other fancy, impossible-to-implement kernel-bypass/offload methods. I think vendors like VMware don't like technologies like VEPA because they want to sell NSX. I tried for two months to find a decent microsegmentation solution for vSphere without any success; everything in the VMware world ends with NSX.

  5. In my experience, flow data like NetFlow can help you find out how app stacks communicate with each other, so you can then place ACLs for enforcement. Last week I was tasked with putting some VACLs into a web-app DMZ VLAN. I asked the app developers and the people supporting the applications whether there was any connectivity between those web servers; they told me there was definitely no connection between them, and that the servers only need to talk to their back-end DB, which sits on another VLAN/segment protected by a classical multi-context firewall. I checked my NetFlow logs for the last three months and found a great many API calls between those web servers in the DMZ. Had I trusted those guys, I could have created a full-blown disaster and taken the whole business down. When I told them about all the API calls, they said: really?
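
    A sketch of that kind of check, assuming the flow collector can export records to CSV (the file name and column names are made up):

        import csv
        import ipaddress

        # Flag flows where both endpoints sit in the DMZ VLAN, i.e. the
        # server-to-server traffic the application team said did not exist.
        DMZ = ipaddress.ip_network("192.0.2.0/24")  # placeholder DMZ prefix

        with open("flows.csv") as export:  # hypothetical NetFlow export
            for row in csv.DictReader(export):
                src = ipaddress.ip_address(row["src_addr"])
                dst = ipaddress.ip_address(row["dst_addr"])
                if src in DMZ and dst in DMZ:
                    print(f"{src} -> {dst}:{row['dst_port']}")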

  6. Perhaps a paradigm shift is due for firewalls in general? I'm thinking out loud here, but what if we had a protocol by which a host could dynamically request that upstream firewall(s) open inbound access on its behalf; the hosts themselves would then automatically inform the security device which ports they need/want opened upstream. The firewall would still have the ability to permit or deny the traffic based on its larger policy definition. This would address both problems: admins not knowing which ports an application needs, and "stale" policies left on the firewalls.

    These higher-level firewall policies could be more of a grouped concept, similar to how web filtering and IPS are managed by site, application, or signature category. Or, alternatively, they could be more granular, by specific app/port...

    Permission time limits could also be requested by the hosts and negotiated; timeouts would still apply unless the permission is renewed, so a timeout not only closes the "session" but also closes the permission for that application, and thus the policy rule itself.

    Host firewalls capable of negotiating via this protocol could be set to mirror the allowed inbound ports once approval is granted, or could just be left wide open, trusting the upstream firewall for all security. Of course, identity verification between host and firewall is important in this model, as you wouldn't want false requests made from false hosts on another's behalf; but if the host only opens its own firewall ports after it has requested and received approval for the policy from the external firewall, you would have an added layer of protection.

    Perhaps some simple OS-level application-to-port verification could be done too...

    I'm sure there are many drawbacks to this idea, not the least of which is scale, but I wonder if we're getting to the point where we need to get away from thinking in terms of lists of ports/apps and statically creating rules in general.

    Too crazy? :)

  7. I kept saying "ports", but the request could additionally and optionally include a standards-based application identifier that a firewall can match against its own app signatures... perhaps a bit of an extensible model like YANG, where the two sides could exchange capability information to allow that.

    Application developers would be motivated to properly document and register application network requirements under this model, as not all organizations may allow port-only requests in their upstream higher-level policy frameworks. Falsified requests would be caught by vendor signature matching if the application does not look like what it claimed to be.

    Firewall logs would then show not only the traffic itself but also the policy permission requests, giving you an idea of which applications have actually been running on machines and how those policy needs change as services spin up and down and request renewals of their policies. DHCP options/multicast groups could help connect hosts to firewalls, and as long as the requests are supported by the higher-level policy, a host can be reasonably assured its policies will be permitted and traffic will flow...

    This may actually just be Rule 11 proving itself true, because this isn't all that different from 802.1X with RADIUS accounting... maybe we just call it FAC :)

  8. I think Illumio's approach is an interesting way to tackle microsegmentation. Previously they did data center segmentation, but according to their website it appears they are dipping their toes into the host pool as well.

    What I found so great about their DC solution is that if an admin decided to turn off the local firewall, the agent software turned it right back on; and if the admin turned off the agent, the other hosts immediately blocked that machine until it was compliant again.
