Building Fabric Infrastructure for an OpenStack Private Cloud

An attendee in my Building Next-Generation Data Center online course was asked to deploy numerous relatively small OpenStack cloud instances and wanted to select the optimum virtual networking technology. Not surprisingly, every $vendor had just the right answer, including Arista:

We’re considering moving from hypervisor-based overlays to ToR-based overlays using Arista’s CVX for approximately 2000 VLANs.

As I explained in Overlay Virtual Networking, Networking in Private and Public Clouds, and Designing Private Cloud Infrastructure (plus several presentations), you have three options to implement virtual networking in private clouds:

Hypervisor-based overlays. This solution is the most scalable one, but it requires decent software on the hypervisors (and speaking of OpenStack, OVS isn’t known as the fastest virtual switch on the market - please write a comment if you have recent performance data). It also decouples virtual networking from the physical infrastructure, reducing the number of interdependent moving parts and keeping the physical infrastructure totally stable.
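For reference, this is roughly what the hypervisor-based variant looks like from the OpenStack side: a minimal Neutron ML2 configuration sketch with VXLAN tenant networks terminated in OVS on the compute nodes. The VNI range and IP address are made-up values, and option names may differ slightly between releases.

    # /etc/neutron/plugins/ml2/ml2_conf.ini (controller nodes)
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,l2population

    [ml2_type_vxlan]
    # VNIs handed out to tenant networks (example range)
    vni_ranges = 10000:19999

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (compute nodes)
    [ovs]
    # underlay (fabric-facing) IP of this hypervisor - the fabric only
    # ever sees VXLAN-over-IP traffic between these addresses
    local_ip = 192.0.2.11

    [agent]
    tunnel_types = vxlan
    l2_population = true

The physical fabric only has to provide IP transport between the local_ip addresses, which is what keeps it decoupled and stable.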

ToR-based overlays, usually implemented with VXLAN and EVPN these days. This approach works well for environments with relatively few statically-configured VLANs, but might not scale well when faced with thousands of dynamic VLANs.

As the cloud orchestration system might deploy a workload belonging to any virtual network on any hypervisor, the typical design configures all VLANs on all ToR-to-server links, effectively turning the whole fabric into a single broadcast (and thus failure) domain.
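From the OpenStack side, the ToR-based design typically means Neutron hands out plain VLANs and leaves the VXLAN/EVPN encapsulation to the switches. A minimal sketch for the ~2000-VLAN scenario from the question above could look like this; physnet1, the bridge name, and the VLAN range are made-up values.

    # /etc/neutron/plugins/ml2/ml2_conf.ini (controller nodes)
    [ml2]
    type_drivers = flat,vlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch

    [ml2_type_vlan]
    # ~2000 dynamically-allocated tenant VLANs; every one of them has to be
    # configured (and mapped to a VNI) somewhere in the physical fabric
    network_vlan_ranges = physnet1:1000:2999

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (compute nodes)
    [ovs]
    # physnet1 maps to the bridge attached to the ToR-facing uplinks
    bridge_mappings = physnet1:br-provider

Unless something tells the ToR switches which of those VLANs are needed on which ports, the only safe static configuration is to trunk all of them everywhere - which is exactly the single-broadcast-domain design described above.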

ToR-based overlays coupled with the cloud orchestration system. Networking vendors try to “solve” the scalability challenges I just described by tightly coupling ToR switches with the orchestration system: every time a VM is deployed, the orchestration system tells the relevant ToR switches they need a new VLAN on particular server-facing ports.

Sometimes the ToR switches connect directly to the orchestration system (see the description of VM Tracker/Tracer in the Data Center Fabrics webinar), sometimes the networking vendor inserts another controller in the middle. The end result is always the same: a conundrum of too many tightly coupled moving parts. All you need is a single weak link (like a failing REST API service) and the whole house of cards collapses.
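The tight coupling usually shows up as one more mechanism driver in the same ML2 configuration - for example, the Arista ML2 driver (networking-arista) that provisions VLANs through CVX/EOS eAPI. Treat the sketch below as illustrative only; the endpoint, credentials, and exact option names are placeholders and vary between releases.

    # /etc/neutron/plugins/ml2/ml2_conf.ini (controller nodes)
    [ml2]
    type_drivers = flat,vlan
    tenant_network_types = vlan
    # vendor driver added next to the OVS driver
    mechanism_drivers = openvswitch,arista

    [ml2_arista]
    # CVX / EOS eAPI endpoint the driver calls during port binding -
    # the extra moving part (and potential weak link) described above
    eapi_host = 192.0.2.50
    eapi_username = neutron
    eapi_password = secret
    region_name = RegionOne

Every port binding now depends on that API call (and the controller behind it) working, which is precisely the kind of interdependency to keep an eye on.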

Moral of the story: As always, there is no right answer - you have to figure out what matters most: scalability, forwarding performance, or ease of deployment… but whatever you do, try to keep the number of moving parts and interdependent components to the bare minimum. Your network (and your business) will eventually be grateful.

1 comment:

  1. I agree with Ivan that having OpenStack reach into the switches is asking for trouble.

    AFAIK all commercial OpenStacks require multiple VLANs with L2 spanning the cloud so you have to use EVPN in the switches. Then you end up using Neutron VXLAN mode to enable self-service. It's double VXLAN but that's why we have 9216 MTU and at least both of the encapsulation steps are (supposed to be) offloaded.