Make Every Application an Independent Tenant
Traditional data centers are usually built in a very non-scalable fashion: everything goes through a central pair of firewalls (and/or load balancers) with thousands of rules that no one really understands; servers in different security zones are hanging off VLANs connected to the central firewalls.
Some people love to migrate the whole concept intact to a newly built private cloud (which immediately becomes server virtualization on steroids) because it’s easier to retain existing security architecture and firewall rulesets.
The traditional security architecture with a few large security zones is inherently flawed – we’re usually forced to group servers from different applications in the same security zone regardless of each application’s security requirements, coding quality, or level of server hardening.
The worst application in the set becomes the weakest link – once the intruder breaks into that application server, he usually has a free lunch within the security zone (unless you deployed strict layer-2 security measures – you did implement all of them, didn’t you?).
Why would we ever agree to use such a stupid architecture? Who said we agreed – we were forced by the physical limits of the VLAN-based architecture (4K VLANs, sometimes as low as 256) and firewalls (number of security zones and logical interfaces). Throw the usual IT silos in the mix and it’s obvious why it’s easier to add another (somewhat misplaced) server into an existing security zone than to ask the networking and security teams to create a new application-specific set of zones.
What you really should do when deploying applications in a private or public cloud is to make every application an independent tenant. The actual terminology and data objects you use don’t really matter – it’s important that each application gets its own independent set of security zones and its own firewall(s) with its own set of easy-to-understand rules.
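As a rough illustration, this is what the per-application model might look like when expressed as data; the names and structure are purely hypothetical and not tied to any particular cloud platform or firewall product:

from dataclasses import dataclass, field

@dataclass
class Rule:
    src_zone: str
    dst_zone: str
    services: list

@dataclass
class ApplicationTenant:
    # Each application carries its own zones and its own small, readable
    # ruleset, enforced by the tenant's own (virtual) firewall, instead of
    # sharing a handful of giant zones behind one central firewall.
    name: str
    zones: list = field(default_factory=list)
    rules: list = field(default_factory=list)

webshop = ApplicationTenant(
    name="webshop",
    zones=["web", "app", "db"],
    rules=[
        Rule("web", "app", ["HTTPS"]),
        Rule("app", "db", ["MySQL"]),
    ],
)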
The divide-and-conquer approach to cloud-based application security has obvious benefits. Each application becomes totally isolated from all other applications (apart from well-controlled inter-application dependencies); someone breaking into one application would have limited opportunities to attack other unrelated applications.
The drawbacks? There’s a “slight” management issue, as you’ll have to deal with tens or hundreds of small firewalls instead of a single hard-to-understand monstrosity. Also, don’t even try to go down this route with physical firewalls – you need virtual appliances that you can deploy under reasonable licensing terms (a license for a Palo Alto virtual firewall costs just a few K$), ideally based on the total consumed bandwidth, not on the number of instances.
It's a similar argument to the question of who controls the vSwitch: the network guys or the application/virtualization/etc. guys? In the end it requires policy enforcement controlled by the SMEs, and the ability to act on it - not an application guy deciding he wants the VM wide open, or an ESX admin deciding it's easier to trunk all 4096 VLANs to a port than to isolate things.
This is just the opinion of a network architect who has seen the network support staff constantly called about connectivity issues that usually are firewall related. Your experience may be different.
The main benefit of application-specific firewalls is that they are applied over a smaller scope, so they are more understandable /and/ understandable in isolation (assuming sources and destinations are referenced by zone/tenancy and not by IP address - see below). I'd argue that the majority of centralised firewall policies today are not only full of security holes, but that, because the scope they cover is so big (and because an IP address is only a loose binding to a machine's identity), they are also impossible to audit and those holes impossible to find.
Additionally, I think a huge number of firewall additions and changes are due to the deployment and movement of machines. These only cause firewall rule changes because the rules reference the machines by IP address and not by security zone/tenancy. In a zone/tenancy-based setup, all that would be required would be to assign the new machine to the correct zone. Re-addressing a machine should require no more work than the re-addressing itself - no firewall changes at all.
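A minimal sketch of that idea (hypothetical names, not tied to any product): rules key off zones, and machines are just entries in a membership table, so re-addressing or moving a machine touches only the mapping, never the ruleset:

# Zone membership is the only thing that changes when machines move.
zone_members = {
    "WebServers":         {"10.1.1.10", "10.1.1.11"},
    "ApplicationServers": {"10.1.2.10"},
}

def readdress(old_ip, new_ip):
    """Re-address a machine: update its zone membership, leave the rules alone."""
    for members in zone_members.values():
        if old_ip in members:
            members.remove(old_ip)
            members.add(new_ip)

readdress("10.1.1.11", "10.1.3.11")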
Finally, I think this sort of model can lead to a new way of applying security rules. At the moment it's very ad hoc, particularly when moving from a loosely firewalled dev environment into production. If this security model were already applied in the dev environment, the zone-based rules would be built as part of the development process. When it comes time to move into production, the "ruleset" to be audited would already exist, and could look like this:
WebServers -> Application Servers: HTTP, HTTPS
Application Servers -> Database Servers: MySQL
+generic admin rules
The policy is easily auditable without any reference to physical infrastructure or IP addresses. Even security professionals with limited networking knowledge can track the flows. If the production policy later requires a change then this request is no longer an ad hoc request to a firewall support team who implement it blindly. Instead it's a request to the security team for an amendment to production policy. Addition, removal, or movement of machines requires no policy amendments at all.
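To make that audit/implementation split concrete, here is a hypothetical sketch of how such a zone-level policy could be expanded into address-level permits at deployment time (zone names and addresses are invented for illustration):

# The zone-level ruleset is the auditable source of truth; concrete permits
# are derived from it and from current zone memberships whenever a machine
# is added, removed, or re-addressed.
policy = [
    ("WebServers",         "ApplicationServers", ["HTTP", "HTTPS"]),
    ("ApplicationServers", "DatabaseServers",    ["MySQL"]),
]

members = {
    "WebServers":         ["192.0.2.10", "192.0.2.11"],
    "ApplicationServers": ["192.0.2.20"],
    "DatabaseServers":    ["192.0.2.30"],
}

def compile_policy(policy, members):
    """Expand zone-level rules into concrete (src, dst, service) permits."""
    return [
        (src, dst, svc)
        for src_zone, dst_zone, services in policy
        for src in members.get(src_zone, [])
        for dst in members.get(dst_zone, [])
        for svc in services
    ]

for permit in compile_policy(policy, members):
    print(permit)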
Why wouldn't we want this architecture? I've been waiting for it for _years_.
I need your clarification on a couple of queries I had:
1. When you say each application is a tenant, I believe you are referring to firewalls and security policies.
2. Elsewhere on this site, you have indicated that each tenant will have a separate routing table. If that's the case, how will the different tiers talk to each other? For example, if the web server is in one VRF and the database/application servers are in another VRF, how will the web server talk to the other servers? IP routing happens in the context of a single VRF only, so we may need a device (switch/router) shared among all these tenants that can route between them. The benefit is then just the isolation of having a separate routing table for each tier, but eventually, somewhere in the network, that isolation has to go away for routing to happen.
Please let me know your thoughts.
Sincerely,
Sudarsan.D
#2 - Yes, you'll need something to either link the tenant networks to external networks, or provide access to common services. There are several ways to make that secure.