Carlos Asensio was facing an “interesting” challenge: someone had sold a layer-2 extension into their public cloud to one of the customers. Being a good engineer, he wanted to limit the damage the customer could do to the cloud infrastructure, and thus immediately rejected the idea of connecting the customer straight into the layer-2 network core ... but what could he do?
Overlay virtual networks just might be a solution if you have to solve a similar problem:
- Build the cloud portion of the customer’s layer-2 network with an overlay virtual networking technology;
- Install an extra NIC in one (or more) physical hosts and run a VXLAN-to-VLAN gateway in a VM on that host – the customer’s VLAN is thus completely isolated from the data center network core;
- Connect the extra NIC to the WAN edge router or switch on which the customer’s link is terminated. Whatever stupidity the customer does in its part of the stretched layer-2 network won’t spill beyond the gateway VM and the overlay network (and you could easily limit the damage by reducing the CPU cycles available to the gateway VM).
The diversity of overlay virtual networking solutions available today gives you plenty of choices:
- You could use Cisco Nexus 1000V with VXLAN or the OVS/GRE/OpenStack combo at no additional cost (combining VLANs with GRE-encapsulated subnets might be an interesting challenge in the current OpenStack Quantum release);
- VMware’s version of VXLAN comes with vCNS (a product formerly known as vShield), so you’ll need a vCNS license;
- You could also use VMware NSX (aka Nicira NVP) with a layer-2 gateway (included in NSX).
Hyper-V Network Virtualization might have a problem dealing with dynamic MAC addresses coming from the customer’s data center – this is one of the rare use cases where dynamic MAC learning works better than a proper control plane.
The VXLAN-to-VLAN gateway linking the cloud portion of the customer’s network with the customer’s VLAN could be implemented with Cisco’s VXLAN gateway or with a simple Linux or Windows VM on which you bridge the overlay and VLAN interfaces (yet again, one of those rare cases where VM-based bridging makes sense). Arista’s 7150 or F5 BIG-IP would probably be overkill.
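If you go down the Linux VM path, the data-plane plumbing is nothing more than a VXLAN interface, an 802.1Q subinterface and a bridge. Here’s a minimal sketch of what that could look like, assuming a Linux gateway VM with iproute2, a multicast-based VXLAN segment and the customer VLAN arriving on the extra NIC – the VNI, VLAN ID, multicast group and interface names are all made up for illustration:

```python
#!/usr/bin/env python3
# Rough sketch of the gateway-VM plumbing described above (not a product
# configuration). Assumptions: overlay-facing NIC eth0, extra (customer-facing)
# NIC eth1, hypothetical VNI 5000 on multicast group 239.1.1.1, customer VLAN 200.

import subprocess

def run(cmd: str) -> None:
    """Run one iproute2 command and fail loudly if it breaks."""
    subprocess.run(cmd.split(), check=True)

# VXLAN interface terminating the overlay segment on the overlay-facing NIC
run("ip link add vxlan5000 type vxlan id 5000 group 239.1.1.1 dev eth0 dstport 4789")

# 802.1Q subinterface for the customer VLAN on the extra NIC
run("ip link add link eth1 name eth1.200 type vlan id 200")

# Plain Linux bridge gluing the two together -- the only place where the
# customer's layer-2 domain and the overlay segment ever meet
run("ip link add name br-cust type bridge")
run("ip link set vxlan5000 master br-cust")
run("ip link set eth1.200 master br-cust")

# Bring everything up
for ifname in ("vxlan5000", "eth1.200", "br-cust"):
    run(f"ip link set {ifname} up")
```

The bridge does plain dynamic MAC learning, which is exactly what you want here – whatever the customer plugs into its end of the stretched VLAN stays contained within br-cust and the overlay segment.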
And now for a bit of totally unrelated trivia: once we solved the interesting part of the problem, I asked about the details of the customer interconnect link – they planned to have a single 100 Mbps link, a single point of failure. I can only wish them luck and hope they won’t try to run stretched clusters over that link.