Before we start: if you’re new to my blog (or stumbled upon this blog post by accident) you might want to read Considerations for Host-Based Firewalls for a brief overview of the challenge, and my explanation of why flow-tracking tools cannot be used to auto-generate firewall policies.
As expected, the “you cannot do it” post on LinkedIn generated numerous comments, ranging from good ideas to borderline ridiculous attempts to fix a problem that has been proven to be unfixable (see also: perpetual motion).
Blog posts in this series
- Considerations for Host-based Firewalls (Part 1)
- Why Don't We Have Dynamic Firewall Policies
- Using Flow Tracking to Build Firewall Rulesets... and Halting Problem
- Fixing Firewall Ruleset Problem For Good (this post)
- Considerations for Host-based Firewalls (Part 2)
You could use flow-tracking tools for discovery purposes. Absolutely true. Is it worth the price of a Tetration installation? You tell me…
You could use flow-tracking tools to find unexpected flows. Another good one. Assuming your desired firewall policy is documented in a machine-readable way, you could automatically check whether the observed flows should be permitted, and point out the discrepancies. Is this idea practical? As always it depends.
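The discrepancy check described above is straightforward once the policy exists in machine-readable form. Here’s a minimal sketch of the idea; the flow and policy representation (tuples of source, destination, and port) is a deliberate simplification I made up for illustration purposes -- real policies would use address prefixes, port ranges, and protocols.

```python
# Sketch: compare observed flows against a machine-readable firewall policy.
# The (source, destination, port) tuple format is an illustrative assumption.

def find_discrepancies(observed_flows, allowed_policy):
    """Return observed flows that the documented policy would not permit."""
    return [flow for flow in observed_flows if flow not in allowed_policy]

# Hypothetical documented policy: web tier -> app tier, app tier -> database
allowed_policy = {
    ("web", "app", 8443),
    ("app", "db", 5432),
}

# Flows reported by a flow-tracking tool
observed_flows = [
    ("web", "app", 8443),
    ("app", "db", 5432),
    ("web", "db", 5432),   # unexpected: web tier talking to the database
]

for flow in find_discrepancies(observed_flows, allowed_policy):
    print("unexpected flow:", flow)
```

The interesting (and hard) part is not this comparison but producing and maintaining the `allowed_policy` data set -- which is exactly the documentation problem discussed in the rest of this post.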
You could use observed flows as the starting point to build your firewall ruleset. In theory, yes. In practice, you wouldn’t know whether all the observed flows are legitimate, and you’d still need an in-depth understanding of the application architecture to transform the observed flows into firewall rules that can deal with failure scenarios like the ones I described in the original blog post. Let me remind you that the only reason we started walking down this intractable path was:
> I have NEVER found a customer application team that can tell me all the servers they are using, their IP addresses, let alone the ports they use.
At this point, you could go back to the drawing board and try to add another layer of convolution to your perpetual motion machine (after all, ingenious people tried to solve the original problem for over a thousand years), or you could admit that you have a people/process problem that cannot be solved by throwing heaps of magic technology at it.
> Insanity: doing the same thing over and over again and expecting different results.
The only way (I can see) to have sane, consistent, and up-to-date firewall rules is to fix the application deployment process and embed the required security rules in application deployment recipes (example).
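To make the idea concrete, here’s a sketch of what “security rules embedded in a deployment recipe” could look like. The recipe structure and rule attributes are hypothetical (real recipes would likely be YAML for Ansible, Terraform, or a CI/CD pipeline); the point is that the rules live next to the deployment definition and can be turned into firewall statements mechanically.

```python
# Hypothetical deployment recipe with embedded security rules, expressed as
# a Python data structure for illustration. Tier names, ports, and the rule
# format are all made-up examples.

recipe = {
    "application": "order-processing",
    "servers": ["app01", "app02"],
    "security_rules": [
        {"direction": "inbound",  "peer": "web-tier", "proto": "tcp", "port": 8443},
        {"direction": "outbound", "peer": "db-tier",  "proto": "tcp", "port": 5432},
    ],
}

def render_rules(recipe):
    """Turn embedded security rules into (hypothetical) firewall statements."""
    return [
        f"permit {r['proto']} {r['direction']} {r['peer']} port {r['port']}"
        for r in recipe["security_rules"]
    ]

for line in render_rules(recipe):
    print(line)
```

Because the rules are part of the recipe, they get versioned, reviewed, and deployed together with the application -- there’s no separate firewall change request to drift out of sync.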
But the application teams have no idea what they need. No problem. You have network security experts on hand, and they can work with the application team to come up with the required security rules. All you need to make this work is a firm rule enforced by top-level IT management: “no automated deployment recipe, no deployment”.
But they will put “permit any any” in the security rules to make it work. I hope your application development process includes code review (and if it doesn’t, you have bigger problems on your hands anyway). Make deployment recipe review a mandatory part of the code review process.
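Part of that review can even be automated: a lint step in the deployment pipeline could reject recipes containing any-any rules before a human ever looks at them. A minimal sketch, assuming a made-up rule format with `src`, `dst`, and `port` fields:

```python
# Sketch of an automated review gate that rejects "permit any any" rules in
# a deployment recipe. The rule format is a hypothetical example.

def lint_rules(rules):
    """Return a list of review findings; an empty list means the rules pass."""
    findings = []
    for i, rule in enumerate(rules):
        if rule.get("src") == "any" and rule.get("dst") == "any":
            findings.append(f"rule {i}: 'permit any any' is not allowed")
    return findings

good = [{"src": "web-tier", "dst": "app-tier", "port": 8443}]
bad  = [{"src": "any", "dst": "any", "port": "any"}]

print(lint_rules(good))   # []
print(lint_rules(bad))
```

A check like this doesn’t replace human review, but it stops the laziest workarounds automatically and keeps the reviewers focused on rules that at least pretend to be specific.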
But everyone will scream at the OPS team, and the applications will be deployed anyway. If you’re big enough to have this problem (and it cannot be solved over a beer and a pizza), you probably have some sort of risk assessment and management in place. Maybe it’s time to submit a report to that team and make it their problem?
But even though you could eventually clean up the Augean Stables of application deployment, what should you do with the legacy applications? Letting them gradually disappear would be the ideal solution. If that doesn’t work, ask a simple question: “Is that problem worth solving?”, and if you think it is, the next question should be “What would be the simplest good-enough solution?”. Without the answers to these two questions, you’ll be an easy mark for the next snake-oil vendor with a glitzy slide deck.