Make Every Application an Independent Tenant

Traditional data centers are usually built in a very non-scalable fashion: everything goes through a central pair of firewalls (and/or load balancers) with thousands of rules that no one really understands; servers in different security zones are hanging off VLANs connected to the central firewalls.

Some people love to migrate the whole concept intact to a newly built private cloud (which immediately becomes server virtualization on steroids) because it’s easier to retain existing security architecture and firewall rulesets.

The traditional security architecture with a few large security zones is inherently flawed – we’re usually forced to group servers from different applications in the same security zone regardless of each application’s security requirements, coding quality, or level of server hardening.

The worst application in the set becomes the weakest link – once the intruder breaks into that application server, he usually has a free lunch within the security zone (unless you deployed strict layer-2 security measures – you did implement all of them, didn’t you?).

Why would we ever agree to use such a stupid architecture? Who said we agreed – we were forced into it by the physical limits of VLAN-based architectures (4K VLANs, sometimes as few as 256) and of firewalls (the number of security zones and logical interfaces they support). Throw the usual IT silos into the mix and it’s obvious why it’s easier to add another (somewhat misplaced) server to an existing security zone than to ask the networking and security teams to create a new application-specific set of zones.

What you really should do when deploying applications in a private or public cloud is to make every application an independent tenant. The actual terminology and data objects you use don’t really matter – it’s important that each application gets its own independent set of security zones and its own firewall(s) with its own set of easy-to-understand rules.
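
To make this more concrete, here’s a minimal sketch in Python of what a per-application tenant definition could look like. All names and structures are hypothetical and not tied to any particular cloud platform or firewall product; the point is simply that each application carries its own zones and its own small ruleset, with no references to anybody else’s servers.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Rule:
        src_zone: str            # zone within this tenant (or "external")
        dst_zone: str
        tcp_ports: List[int]     # allowed TCP ports

    @dataclass
    class ApplicationTenant:
        name: str
        zones: List[str]                                  # the application's own security zones
        rules: List[Rule] = field(default_factory=list)   # the application's own firewall rules

    # Each application is described in isolation - nothing in this definition
    # refers to another application's servers or to a shared corporate zone.
    webshop = ApplicationTenant(
        name="webshop",
        zones=["web", "app", "db"],
        rules=[
            Rule("external", "web", [443]),
            Rule("web", "app", [8080]),
            Rule("app", "db", [3306]),
        ],
    )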

The divide-and-conquer approach to cloud-based application security has obvious benefits. Each application becomes totally isolated from all other applications (apart from well-controlled inter-application dependencies); someone breaking into one application would have limited opportunities to attack other unrelated applications.

The drawbacks? There’s a “slight” management issue, as you’ll have to deal with tens or hundreds of small firewalls instead of a single hard-to-understand monstrosity. Also, don’t even try to go down this route with physical firewalls – you need virtual appliances that you can deploy under reasonable licensing terms, ideally based on total consumed bandwidth rather than on the number of instances (a license for a Palo Alto virtual firewall costs just a few K$).
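
The management issue is bearable only if you automate the whole thing. Here’s a rough sketch of the idea in Python; the tenant definitions are hypothetical and push_to_firewall is a placeholder for whatever your virtual firewall’s management API or configuration-management tool provides. The point is that every application ends up with its own tiny, independently auditable ruleset, all generated and deployed by the same loop.

    # Hypothetical per-application rules: (source zone, destination zone, service)
    tenants = {
        "webshop":  [("external", "web", "tcp/443"),
                     ("web", "app", "tcp/8080"),
                     ("app", "db", "tcp/3306")],
        "intranet": [("external", "web", "tcp/443"),
                     ("web", "db", "tcp/5432")],
    }

    def render_policy(app, rules):
        """Turn one application's rules into a small, human-readable ruleset."""
        lines = [f"# firewall policy for application '{app}'"]
        lines += [f"permit {src} -> {dst} {svc}" for src, dst, svc in rules]
        lines.append("deny any -> any")          # per-application default deny
        return "\n".join(lines)

    def push_to_firewall(app, policy):
        # Placeholder: call the virtual firewall's API or hand the rendered
        # policy to a configuration-management tool here.
        print(f"--- pushing policy to the '{app}' firewall ---\n{policy}\n")

    for app, rules in tenants.items():
        push_to_firewall(app, render_policy(app, rules))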

10 comments:

  1. Great post! So eventually this is the thing people wanted from private VLANs, besides the VLAN scaling limitation, the administrative nightmare, the finer granularity, etc...
  2. The big question/argument is around the control of these firewalls. Will the application/infrastructure teams finally support the security of their applications? There will be a significant turf war for the control of the firewall/security instances.
    Replies
    1. In many cases this is already happening. For large-scale data centers and SOA-type setups with lots of communication between internal applications, they don't use hardware firewalls for that; they just rely on host-based firewalling. It's all automated via normal systems-automation tools like Puppet/Chef/etc. The security guys can be involved and "approve" the methodologies, but then it's automated. I know quite a few guys working at large-scale web companies who have no security people and no dedicated hardware security devices apart from those required for PCI compliance.

      It's a similar argument to who controls the vswitch, the network guys or the application/virtualization/etc. guys? In the end it requires policy enforcement controlled by the SMEs. Not an application guy deciding he wants the VM wide open, or an ESX admin deciding it's easier to just trunk all 4096 VLANs to a port than to isolate things, and being able to act on that.
  3. it's not clear to me whether this is a lob at automated policy enforcement (and a discussion of vendor products in this space) or whether this is a half opening for a discussion around actual application level hosting and virtualization technologies that go to finer levels of virtualization granularity than VMs.
    Replies
    1. It's neither - you're trying to read too much between the lines ;) It's just my view of the long-term direction we should be taking (until PaaS takes over and IaaS becomes history, at which point we'll probably have the same arguments one layer higher in the stack).
    2. i didn't read anything into it. i'm not that bright. i just didn't get the gist of the post. ;)
  4. While a competent security team would agree with this approach, I don't see it as scalable or sustainable. I've seen business units using the cloud simply because they don't need to go through the hurdles of requesting firewall rules between components in their app and services provided by existing apps on the network. In a world where everyone is demanding auto-provisioning and self-service, I don't see how you could have every tiered app in its own security silo. My new app, having multiple dependencies on multiple services, would need a dozen drop-down selections to make sure that the orchestrator opens the access through multiple security groups. Then, I imagine each of those security requests will need approval before the orchestrator deploys?

    In a perfect security world, everything is protected; but that almost ALWAYS means process and approval overhead, which frustrates the business units that are generating revenue. The other end of the spectrum is wide open and vulnerable everywhere... I think most agree that's bad. The best balance I've seen is protecting the databases and anything categorized under regulatory compliance - the keys to the kingdom - then a relaxed policy between the web/app/storage/backup tiers. Yes, a compromised system has access to other systems, but they haven't gotten to the keys. DDoS is an issue, and that can be mitigated without an insane number of firewalls everywhere. If you have the US nuclear codes on your systems, yes, this approach makes sense.

    This is just the opinion of a network architect who has seen the network support staff constantly called about connectivity issues that usually are firewall related. Your experience may be different.
    Replies
    1. Who says the firewall rules would need to be that complex? It sounds to me like you've leapt to the idea that an application-specific firewall would need to totally secure that application, but that needn't be the case if it isn't today.

      The main benefit of application-specific firewalls is that they are applied over a smaller scope, so they are more understandable /and/ understandable in isolation (assuming sources and destinations are referenced by zone/tenancy and not by IP address - see below). I'd argue that the majority of centralised firewall policies today are not only full of security holes, but, because the scope they cover is so big (and because an IP address is only a loose binding to a machine's identity), they are also impossible to audit and the holes impossible to discover.

      Additionally, I think a huge number of firewall additions and changes are due to the deployment and movement of machines. These only cause firewall rule changes because the rules reference the machine by IP address and not by security zone/tenancy. In a zone/tenancy-based setup, all that would be required would be to assign the new machine to the correct zone. Re-addressing a machine should require no more work than simply re-addressing it.

      Finally, I think this sort of model can lead to a new way of applying security rules. At the moment it's very ad hoc, particularly when moving from a loosely firewalled dev environment into production. If this security model were already applied in the dev environment, the zone-based rules would be built as part of the development process. When it comes time to move into production, the "ruleset" to be audited would already exist, and could look like this:

      WebServers -> Application Servers: HTTP, HTTPS
      Application Servers -> Database Servers: MySQL
      +generic admin rules

      The policy is easily auditable without any reference to physical infrastructure or IP addresses. Even security professionals with limited networking knowledge can track the flows. If the production policy later requires a change, the request is no longer an ad hoc request to a firewall support team who implement it blindly; instead it's a request to the security team for an amendment to the production policy. Addition, removal, or movement of machines requires no policy amendments at all, as the sketch below illustrates.

      Why wouldn't we want this architecture? I've been waiting for it for _years_.
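
      A rough sketch of this zone-based model in Python (zone names, addresses, and services are purely illustrative): the auditable artefact is the zone-level policy; the enforcement point derives address-level permits from the current zone membership, so machines can be added, removed, or re-addressed without touching the policy.

      policy = [
          ("WebServers",         "ApplicationServers", ["tcp/80", "tcp/443"]),
          ("ApplicationServers", "DatabaseServers",    ["tcp/3306"]),
      ]

      # Zone membership is the only thing that changes when machines are
      # deployed, removed, or re-addressed.
      zone_members = {
          "WebServers":         ["10.1.1.10", "10.1.1.11"],
          "ApplicationServers": ["10.1.2.10"],
          "DatabaseServers":    ["10.1.3.10"],
      }

      def expand(policy, zone_members):
          """Resolve zone-level rules into concrete address-level permits."""
          for src_zone, dst_zone, services in policy:
              for src in zone_members[src_zone]:
                  for dst in zone_members[dst_zone]:
                      for svc in services:
                          yield f"permit {src} -> {dst} {svc}"

      for line in expand(policy, zone_members):
          print(line)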
  5. Hi Ivan,
    I need your clarification on a couple of queries I had:
    1. When you say each application should be a tenant, I believe you are referring to firewalls and security policies.
    2. Elsewhere on this site, you have kind of indicated that each tenant will have a separate routing table. If that's the case, how will the different tiers talk to each other? For example, if the web server is in one VRF and the database/application servers are in another VRF, how will the web server talk to the other servers? IP routing happens only within the context of a VRF, so we may need a device (switch/router) that is shared among all these tenants and can route between them. So the benefit is just the isolation of having a separate routing table for that tier, but eventually, somewhere in the network, that isolation has to go away for routing to happen.

    Please let me know your thoughts.

    Sincerely,
    Sudarsan.D
    Replies
    1. #1 - You might also need a separate routing domain for each tenant. Search my blog for related blog posts.

      #2 - Yes, you'll need something to either link the tenant networks to external networks, or provide access to common services. There are several ways to make that secure.