Cisco Virtual Security Gateway (VSG) 101
I've talked about Cisco's VSG in numerous webinars and presentations, but never managed to write a blog post about this interesting product. Let's fix that, starting with a short video from the Cloud Computing Networking webinar.
- VSG is a NIC-level firewall. It’s (logically) inserted between a VM NIC and a vSwitch.
- Being a NIC-level firewall, VSG has to be a transparent (layer-2) firewall. It has no routing or NAT functionality.
- VSG runs in a VM (or two of them for redundancy) that can be deployed on vSphere or on a Nexus 1010 appliance.
- The VSG VM doesn’t have to run on the same hypervisor host as the VMs it’s protecting (VMware’s vShield App and Juniper’s vGW require a per-hypervisor VM).
- VSG depends on the Nexus 1000V, which is (at the moment) the only hypervisor switch that can insert a remote service between a VM NIC and the port group to which the VM NIC is connected.
- The technology used to insert a service offered by a remote VM between a VM NIC and a port group is called vPath.
- vPath 1.0 uses layer-2 transport and thus requires a dedicated VLAN between the hypervisor switches and the service VM. vPath 2.0 supposedly runs over VXLAN as well.
- The initial packets of every session are always redirected to the service VM. After inspecting and approving the session, the service VM can install a 5-tuple shortcut in the hypervisor switch; subsequent packets of the same session no longer traverse the service VM.
- vPath service insertion is configured on a Nexus 1000V port-profile (equivalent to vSwitch port group) with the vn-service configuration command.
- vPath is Cisco’s proprietary technology. It seems Cisco has started making the technology available to third parties, but let’s wait and see whether the Imperva WAF uses vPath or not.
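To make the vn-service idea more concrete, here's a sketch of what the service insertion looks like on a Nexus 1000V port-profile. The IP address, VLAN number, org, and profile names below are made-up placeholders, not a configuration taken from a real deployment:

```
port-profile type vethernet web-servers
  vmware port-group
  switchport mode access
  switchport access vlan 10
  ! made-up tenant hierarchy and security profile names
  org root/Tenant-A
  vn-service ip-address 10.1.1.202 vlan 20 security-profile web-sec-profile
  no shutdown
  state enabled
```

Every VM NIC attached to this port group gets its traffic redirected to the VSG reachable at the specified IP address over the dedicated service VLAN, and the VSG evaluates it against the referenced security profile.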
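The flow-shortcut behavior described above can be illustrated with a minimal (and heavily simplified) Python sketch. This is not Cisco's implementation, just a toy model of the slow-path/fast-path split: the first packet of a flow is punted to the service VM, and once the flow is approved, a 5-tuple shortcut lets later packets bypass the service VM:

```python
# Toy model of vPath-style fast-path offload (illustrative only).
class VPathSwitch:
    def __init__(self, service_vm):
        self.service_vm = service_vm    # callable returning "permit"/"deny"
        self.shortcuts = set()          # approved 5-tuple shortcuts

    def forward(self, pkt):
        key = (pkt["src"], pkt["dst"], pkt["proto"],
               pkt["sport"], pkt["dport"])
        if key in self.shortcuts:
            return "switched-locally"   # fast path: service VM bypassed
        verdict = self.service_vm(pkt)  # slow path: redirect to service VM
        if verdict == "permit":
            self.shortcuts.add(key)     # install the 5-tuple shortcut
            return "switched-via-service-vm"
        return "dropped"

# Usage: a service VM policy that permits only web traffic
allow_web = lambda pkt: "permit" if pkt["dport"] == 80 else "deny"
sw = VPathSwitch(allow_web)
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6,
       "sport": 40000, "dport": 80}
print(sw.forward(pkt))  # first packet goes via the service VM
print(sw.forward(pkt))  # subsequent packets are switched locally
```

The interesting performance consequence is that only the first few packets of each session pay the redirection penalty; established flows run at hypervisor-switch speed.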
Check out my virtualization webinars (all of them are included in the yearly subscription) and the VSG blog posts on Cisco’s web site. Also, three Packet Pushers podcasts covered VSG or vPath: Show 49 (Nexus 1000V), Show 74 (Cisco ASA 1000V) and PQ Show 12 (vPath 2.0).
That'd probably be the "vShield App"?
> Initial packets of every session are always redirected to the service VM. After inspecting and approving the session
Wonder how the trade-offs play out here - use up resources on each host for a local service gateway copy, or save RAM/CPU but lug traffic around (which could be a fair bit, depending on the traffic profile)
Another thing that's interesting here is how well (if at all) is this integrated into vCloud Director interface...
The trade-offs depend primarily on what you're protecting. If you're protecting all VMs, and you have a high-end server with tens of VMs, it might make sense to have a per-server firewall VM. If you're protecting only a few VMs per server, less so.
Another question is the manageability of VM-per-server solution versus a somewhat more centralized firewall VM.
Finally, there's no integration with vCloud Director. vCD doesn't support any of the NIC-level firewalls (including vShield App).
In your sample webinar's dialog, you state that a vGW solution does not scale in multi-tenant environments, claiming a need for one firewall per tenant per host. That's not exactly true.
vGW can be deployed in a multi-tenant solution, with the following limitations:
*No built-in separation of logging for tenants
*No RBAC in the management interface
*Common IDS Policy
The first two limitations can be worked around with a front-end system that separates the logging and provides an RBAC'd interface to vGW through its decently robust API. Not ideal, but certainly an option that avoids the tenancy problem.
vGW 5.5, released a few weeks ago, claims you can "split" the interface to serve multi-tenant customers, but I haven't tested this yet.
Of course you can always add another layer of abstraction, in this case your own orchestration system (see also RFC 1925, truth 6a).
If you can use a centrally-configured policy for all VMs on the same hypervisor host, you need a single vGW VM per host.
See also the comment made by Jonathan.