Iwan Rahabok’s open-source VMware Operations Guide is now also available in Markdown-on-GitHub format. Networking engineers supporting vSphere/NSX infrastructure might be particularly interested in the Network Metrics chapter.
- NSX-T manager virtual machines
- NSX-T uplink profiles and IP pools
- Transport zones and transport nodes (NSX-T modules on ESXi hypervisors)
- Edge clusters including BGP, EVPN and BFD
Once the infrastructure is set up, his solution uses a Terraform configuration file to deploy multiple tenants: external VLANs, tier-0 gateways, BGP neighbors, tier-1 gateways, and application segments.
While the infrastructure part of his solution might be fully reusable, the tenant deployments definitely aren’t, but they provide a great starting point if you decide to build a fully automated provisioning system.
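If you haven’t worked with NSX-T Terraform configurations before, here’s a rough idea of what a tenant definition could look like. This is a hypothetical minimal sketch using the vmware/nsxt provider, not his actual code; all display names and prefixes are made up, and a real tenant would also include the external VLANs, tier-0 gateway, and BGP neighbors mentioned above.

```hcl
# Hypothetical minimal tenant: a tier-1 gateway plus one application segment.
# Display names, transport zone, and prefixes are illustrative only.

data "nsxt_policy_transport_zone" "overlay" {
  display_name = "overlay-tz"
}

data "nsxt_policy_tier0_gateway" "shared" {
  display_name = "shared-t0"
}

resource "nsxt_policy_tier1_gateway" "tenant" {
  display_name              = "tenant1-t1"
  tier0_path                = data.nsxt_policy_tier0_gateway.shared.path
  route_advertisement_types = ["TIER1_CONNECTED"]
}

resource "nsxt_policy_segment" "app" {
  display_name        = "tenant1-app"
  transport_zone_path = data.nsxt_policy_transport_zone.overlay.path
  connectivity_path   = nsxt_policy_tier1_gateway.tenant.path

  subnet {
    cidr = "172.16.1.1/24" # default gateway address and prefix for the segment
  }
}
```

Duplicating a handful of resources like these for every tenant is exactly the kind of repetitive work a Terraform module (invoked once per tenant) handles well.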
A friend of mine sent me a link to a lengthy, convoluted document describing the 17-step procedure (with the last step having 10 micro-steps) to follow if you want to run NSX manager on top of N-VDS, or as they call it: Deploy a Fully Collapsed vSphere Cluster NSX-T on Hosts Running N-VDS Switches.
You might not be familiar with vSphere networking and the way NSX-T uses it (in which case I can highly recommend the vSphere and NSX webinars), so here’s a CliffsNotes version: you want to put the management component of NSX-T on top of the virtual switch it’s managing, and make it accessible only through that virtual switch. What could possibly go wrong?
One of the ipSpace.net subscribers wanted to see real-life examples in the Overlay Virtual Networking webinar:
It would be nice to have real-world examples. The webinar lacks content about how to build a fully working L3 fabric overlay network, including gateways, VRFs, security zones, etc. I know there is no single “design for all”, but a few complete architectures from L2 to L7 would be appreciated over deep dives into specific protocols or technologies.
Most ipSpace.net webinars are bits of a larger puzzle. In this particular case:
The initial implementation of Noël Boulene’s automated provisioning of NSX-T distributed firewall rules changed the NSX-T firewall configuration based on Terraform configuration files. To make the deployment fully automated, he went a step further and added a full-blown CI/CD pipeline using GitHub Actions and Terraform Cloud.
Not everyone is as lucky as Noël – developers in his organization already use GitHub and Terraform Cloud, making his choices totally frictionless.
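For illustration, this is roughly all it takes to attach a Terraform configuration to a Terraform Cloud workspace (a generic sketch, not Noël’s actual setup; the organization and workspace names are placeholders):

```hcl
# Generic Terraform Cloud integration (requires Terraform 1.1 or later).
# Organization and workspace names below are placeholders.
terraform {
  cloud {
    organization = "example-org"

    workspaces {
      name = "nsxt-dfw-rules"
    }
  }
}
```

With the workspace linked to a version-controlled repository, a merged pull request can trigger a plan-and-apply run; a GitHub Actions workflow can add extra checks (linting, policy validation) before the change ever reaches Terraform Cloud.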
Noël Boulene decided to automate provisioning of NSX-T distributed firewall rules as part of his Building Network Automation Solutions hands-on work.
What makes his solution even more interesting is the choice of automation tool: instead of using the universal automation hammer (aka Ansible), he used Terraform, a much better choice if you want to automate service provisioning and happen to be using vendors that invested time into writing Terraform providers.
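To give you a feel for what that looks like, here’s a hypothetical distributed firewall snippet using the vmware/nsxt provider (not Noël’s actual code; group criteria, tags, and names are made up):

```hcl
# Hypothetical NSX-T distributed firewall policy (vmware/nsxt provider).
# Group membership criteria, display names, and services are illustrative.

resource "nsxt_policy_group" "web" {
  display_name = "web-servers"

  criteria {
    condition {
      member_type = "VirtualMachine"
      key         = "Tag"
      operator    = "EQUALS"
      value       = "web"
    }
  }
}

data "nsxt_policy_service" "https" {
  display_name = "HTTPS"
}

resource "nsxt_policy_security_policy" "web_inbound" {
  display_name = "web-inbound"
  category     = "Application"

  rule {
    display_name       = "allow-https"
    destination_groups = [nsxt_policy_group.web.path]
    services           = [data.nsxt_policy_service.https.path]
    action             = "ALLOW"
  }

  rule {
    display_name       = "default-drop"
    destination_groups = [nsxt_policy_group.web.path]
    action             = "DROP"
  }
}
```

Because Terraform tracks state, removing a rule from the configuration file removes it from NSX-T on the next apply, something that takes extra bookkeeping with most other automation tools.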
When VMware NSX-T 3.0 came out, I planned to do an update session of the VMware NSX Technical Deep Dive webinar along the lines of what I did for AWS Networking a few weeks ago. However, it turned out that most of the new features didn’t take more than a bullet or two on an existing slide, or at most a new slide.
Covering them in a live session and then slicing-and-dicing the resulting recording simply didn’t make sense, so I updated the videos in summer 2020 (the last batch was published in early August).
The mission of ipSpace.net is very simple: explain new networking technologies and products in a no-nonsense, marketing-free, and hopefully understandable way.
Sometimes we’re probably way off the mark, but every now and then we get it just right as evidenced by this feedback from one of our subscribers:
I was given short notice to present a board-level overview of VMware NSX-T for an urgent virtualization platform change from Microsoft. Tech execs needed to understand NSX-T’s position in the market, its place in the product lifecycle, its feature advantages, possible feature deficits, and the level of effort required for implementation.
There’s one thing no cloud vendor ever managed to change: virtual machines running on top of cloud infrastructure expect to have Ethernet interfaces.
It doesn’t matter if the virtual Ethernet Network Interface Cards (NICs) are implemented with software emulation of actual hardware (VMware emulated the ancient Novell NE1000 NIC) or with paravirtual drivers - the virtual machines expect to send and receive Ethernet frames. What happens beyond the Ethernet NIC depends on the cloud implementation details.
I published a blog post describing how complex the underlay supporting VMware NSX still has to be (because someone keeps pretending a network is just a thick yellow cable), and the tweet announcing it admittedly looked like clickbait.
[Blog] Do We Need Complex Data Center Switches for VMware NSX Underlay
Martin Casado quickly replied NO (probably before reading the whole article), starting a whole barrage of overlay-focused neteng-versus-devs fun.
I’m running two workshops in Zurich in the next 10 days:
- Comparing VMware NSX and Cisco ACI (and how EVPN and VXLAN fit into the big picture) on Thursday, November 28th;
- Explaining how you could use VXLAN with EVPN to build infrastructure for active-active data centers on Tuesday, December 3rd.
I published the slide deck for the NSX versus ACI workshop a few days ago (and you can already download it if you have a paid ipSpace.net subscription) and it’s full of new goodness like ACI vPod, multi-pod ACI, multi-site ACI, ACI-on-AWS, and multi-site NSX-V and NSX-T.
A Network Artist left a lengthy comment on my Brief History of VMware NSX blog post. He raised a number of interesting topics, so I decided to write my replies as a separate blog post.
Using Geneve is an interesting choice, and while the approach has its own pros and cons, I would stick to VXLAN if I were to make a recommendation, for a few good reasons.
The main reason I see for NSX-T using Geneve instead of VXLAN is the need for additional header fields to carry metadata around, and to implement Network Service Header (NSH) for east-west service insertion.
A while ago I had an interesting discussion with someone running VMware NSX on top of a VXLAN+EVPN fabric - a pretty common scenario considering:
- NSX’s insistence on having all VXLAN uplinks from the same server in the same subnet;
- Data center switching vendors being on a lemming-like run praising EVPN+VXLAN;
- Non-FANG environments being somewhat reluctant to connect a server to a single switch.
His fabric was running well… apart from the weird times when someone started tons of new VMs.
Last year when I was creating the first version of the VMware NSX Deep Dive content, NSX-V was mainstream and NSX-T was the new kid on the block. A year later, NSX-V is mostly sidelined, and all the development efforts are going into NSX-T. Time to adapt the webinar to the new reality… taking the usual staged approach:
- The new slide deck covering NSX-V and NSX-T is ready. It includes early information about NSX-T release 2.5; I’ll fill in the details once the documentation becomes public.
- I’ll use the slide deck in a day-long workshop in Zurich on September 10th.
- The live webinar sessions (including updated NSX-T 2.5 content) will start on November 14th.
I spent a lot of time this summer figuring out the details of NSX-T, resulting in significantly updated and expanded VMware NSX Technical Deep Dive material… but before going into those details, let’s take a brief walk down memory lane ;)
You might remember a startup called Nicira that was acquired by VMware in mid-2012… supposedly resulting in the ever-continuing spat between Cisco and VMware (and maybe even triggering the creation of Cisco ACI).