Category: virtualization
Implementing Layer-2 Networks in a Public Cloud
A few weeks ago I got an excited tweet from someone working at Oracle Cloud Infrastructure: they launched full-blown layer-2 virtual networks in their public cloud to support customers migrating their existing enterprise spaghetti mess into the cloud.
Let’s skip the usual “does everyone using the applications now have to pay for Oracle licenses” and “I wonder what the lock-in might be when I migrate my workloads into an Oracle cloud” jokes and focus on the technical aspects of what they claim they implemented. Here’s my immediate reaction (limited to the usual 280 characters, because that’s the absolute upper limit of consumable content these days):
Repost: VMware Fault Tolerance Woes
I always claimed that VMware Fault Tolerance makes no sense. After all, the only thing it does is protect a VM against a server hardware failure… in a world where software crashes are way more common, and fat fingers cause most of the outages.
But wait, it gets worse: the whole thing is incredibly complex. You might like the description Minh Ha left as a comment on my Fifty Shades of High Availability blog post.
Making LLDP Work with Linux Bridge
Last week I described how I configured PVLAN on a Linux bridge. After checking the desired partial connectivity with ios_ping I wanted to verify it with LLDP neighbors. The Ansible ios_facts module collects LLDP neighbor information, and it should be really easy to use those facts to check whether port isolation works as expected.
---
- name: Display LLDP neighbors on selected interface
  hosts: all
  gather_facts: true
  vars:
    target_interface: GigabitEthernet0/1
  tasks:
    - name: Display neighbors gathered with ios_facts
      debug:
        var: ansible_net_neighbors[target_interface]
Alas, none of the routers saw any neighbors on the target interface.
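A likely culprit (worth checking before blaming Ansible or the routers) is the Linux bridge itself: by default it filters link-local multicast frames, including LLDP frames sent to 01:80:C2:00:00:0E. Here’s a minimal Ansible sketch that tells the bridge to forward LLDP frames; the inventory group and bridge name are assumptions, adjust them to your environment:

---
- name: Forward LLDP frames through a Linux bridge
  hosts: kvm_host          # hypothetical inventory name of the Ubuntu host
  become: true
  vars:
    bridge_name: virbr1    # assumed bridge name; check with "brctl show"
  tasks:
    - name: Set group_fwd_mask bit 14 (0x4000) to stop filtering LLDP frames
      shell: echo 16384 > /sys/class/net/{{ bridge_name }}/bridge/group_fwd_mask

Once the mask is set, neighbors should appear after the routers send their next LLDP advertisements (every 30 seconds by default on Cisco IOS).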
Implement Private VLAN Functionality with Linux Bridge and Libvirt
I wanted to test routing protocol behavior (IS-IS in particular) on partially meshed multi-access layer-2 networks like private VLANs or Carrier Ethernet E-Tree service. I recently spent plenty of time creating a Vagrant/libvirt lab environment on my Intel NUC running Ubuntu 20.04, and I wanted to use that environment in my tests.
Challenge-of-the-day: How do you implement private VLAN functionality with Vagrant using libvirt plugin?
There might be interesting KVM/libvirt options I’ve missed, but so far I’ve found two ways of connecting Vagrant-controlled virtual machines in a libvirt environment:
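Whichever attachment method you end up using, the private-VLAN behavior itself can be enforced on the underlying Linux bridge with port isolation (available in recent kernels). A minimal sketch; the inventory name and the vnet* tap interface names are made up, use whatever libvirt created for your VMs:

---
- name: Turn bridge ports into isolated (PVLAN-like) ports
  hosts: kvm_host            # hypothetical inventory name of the Ubuntu host
  become: true
  tasks:
    - name: Isolate the tap interfaces of the secondary-VLAN VMs
      command: bridge link set dev {{ item }} isolated on
      loop: [ vnet0, vnet1 ]  # hypothetical tap interface names

Isolated ports cannot talk to each other but can still reach non-isolated ports, giving you the isolated-secondary-plus-promiscuous-port behavior of a private VLAN.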
Vendor Marketectures in Real Life
Remember my rants about VMware and firewall vendors promoting crazy solutions that work best in PowerPoint and cause more headaches than anything else (excluding increased vendor margins and sales team bonuses, of course)?
Here’s another we-don’t-need-all-that-complexity real-life story coming from one of my long-term subscribers:
Are Business Needs Just Excuses for Vendor Shenanigans?
Every now and then I call someone’s baby ugly (or maybe it was their third cousin’s baby and they nonetheless feel offended). In such cases, the usual fallback is to cite business or market needs to prove how ignorant and clueless I am. Here’s a sample LinkedIn comment pointing out my ignorance of the need for smart NICs:
The rise of custom silicon by Presando [sic], Mellanox, Amazon, Intel and others confirms there is a real market need.
Now let’s get something straight: while there are good reasons to use tons of different things that might look inappropriate, irrelevant, or plain stupid to an outsider, I don’t believe the “real market need” argument can be used to justify anything without supporting technical facts (tell me why you need that stuff, and prove that using it is the best way of solving the problem).
Disaster Recovery: a Vendor Marketing Tale
Several engineers formerly working for a large virtualization vendor were pretty upset with me when I claimed that the virtualization consultants promote “disaster recovery using stretched VLANs” designs instead of alternatives that would implement proper separation of failure domains.
Guess what… it’s even worse than I thought.
Here’s a sequence of comments I received after reposting one of my “disaster recovery doesn’t need stretched VLANs” blog posts on LinkedIn sometime in late 2019:
Do We Need Complex Data Center Switches for VMware NSX Underlay
Got this question from one of the ipSpace.net subscribers:
Do we really need those intelligent datacenter switches for underlay now that we have NSX in our datacenter? Now that we have taken a lot of the intelligence out of our underlying network, what must the underlying network really provide?
Reading the marketing white papers, you’d conclude the answer is IP connectivity… but keep in mind that building your infrastructure based on information from vendor white papers usually gives you the results your gullibility deserves.
The Cost of Disruptiveness and Guerrilla Marketing
A Docker networking rant coming from my good friend Marko Milivojević triggered a severe case of Deja-Moo, resulting in a flood of unpleasant memories caused by too-successful “disruptive” IT vendors.
Imagine you’re working for a startup creating a cool new product in the IT infrastructure space (if you have an oversized ego you would call yourself a “disruptive thought leader” on your LinkedIn profile) but nobody is taking you seriously. How about some guerrilla warfare: advertising your product to people who hate IT operations (today we’d call that Shadow IT).
Building Fabric Infrastructure for an OpenStack Private Cloud
An attendee in my Building Next-Generation Data Center online course was asked to deploy numerous relatively small OpenStack cloud instances and wanted to select the optimal virtual networking technology. Not surprisingly, every $vendor had just the right answer, including Arista:
We’re considering moving from hypervisor-based overlays to ToR-based overlays using Arista’s CVX for approximately 2000 VLANs.
As I explained in Overlay Virtual Networking, Networking in Private and Public Clouds, and Designing Private Cloud Infrastructure (plus several presentations), you have three options to implement virtual networking in private clouds:
Automating NSX-T
An attendee of our Building Network Automation Solutions online course decided to automate his NSX-T environment and sent me this question:
I will be working on NSX-T quite a lot these days and I was wondering how I could automate my workflow (lab + production) to produce a certain consistency in my work.
I’ve seen that VMware relies a lot on PowerShell and I haven’t invested a lot in that yet … and I would like to get more skills and become more proficient using Python right now.
Always select the most convenient tool for the job; regardless of personal preferences, PowerShell seems to be the one to use in this case.
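That said, anything you can do with PowerShell you can also do against the NSX-T REST API from whatever tool you prefer, including Ansible. A minimal sketch that lists configured segments; the manager hostname and credentials are made up, and you should verify the Policy API path against your NSX-T version:

---
- name: List NSX-T segments through the REST API
  hosts: localhost
  gather_facts: false
  tasks:
    - name: GET the segment list from the Policy API
      uri:
        url: https://nsx.example.com/policy/api/v1/infra/segments
        method: GET
        user: admin                      # assumed credentials
        password: "{{ nsx_password }}"
        force_basic_auth: true
        validate_certs: false            # lab only; use proper certificates in production
        return_content: true
      register: segments
    - name: Show segment names
      debug:
        msg: "{{ segments.json.results | map(attribute='display_name') | list }}"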
High-Speed IPsec on Snabb Switch on Software Gone Wild
In previous Software Gone Wild episodes we covered Snabb Switch and numerous applications running on it, from L2VPN to 4over6 gateway and integration with Juniper vMX code.
In Episode 98 we focused on another interesting application developed by Max Rottenkolber: high-speed VPN gateway using IPsec on top of Snabb Switch (details). Enjoy!
Last Week on ipSpace.net (2019W4)
The crazy pace of webinar sessions continued last week. Howard Marks continued his deep dive into Hyper-Converged Infrastructure, this time focusing on go-to-market strategies, failure resiliency with replicas and local RAID, and the eternal debate (if you happen to work for a certain $vendor) on whether it’s better to run your HCI code in a VM instead of in the hypervisor kernel like your competitor does. He concluded with a description of what the major players (VMware vSAN, Nutanix, and HPE SimpliVity) do.
On Thursday I started my Ansible 2.7 Updates saga, describing how the network_cli plugin works, how they implemented generic CLI modules, how to use SSH keys or usernames and passwords for authentication (and how to keep them secure), and how to execute commands on network devices (including an introduction to the gory details of parsing text, JSON, or XML outputs).
The last thing I managed to cover was the cli_command module and how you can use it to execute any command on a network device… and then I ran out of time. We’ll continue with sample playbooks and network device configurations on February 12th.
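In the meantime, here’s roughly what a minimal cli_command playbook looks like (a sketch, not one of the sample playbooks from the webinar):

---
- name: Execute a show command with cli_command
  hosts: routers              # requires ansible_network_os set in inventory
  gather_facts: false
  connection: network_cli
  tasks:
    - name: Run show version
      cli_command:
        command: show version
      register: result
    - name: Display the raw text output
      debug:
        var: result.stdout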
You can get access to both webinars with Standard ipSpace.net subscription.
VMware NSX: The Good, the Bad and the Ugly
After four live sessions we finished the VMware NSX Technical Deep Dive webinar yesterday. Still have to edit the materials, but right now the whole thing is already over 6 hours long, and there are two more guest speaker sessions to come.
Anyway, in the previous sessions we covered all the good parts of NSX and a few of the bad ones. All that was left for yesterday were the ugly parts.
VXLAN and EVPN on Hypervisor Hosts
One of my readers sent me a series of questions regarding a new cloud deployment where the cloud implementers want to run VXLAN and EVPN on the hypervisor hosts:
I am currently working on a leaf-and-spine VXLAN+EVPN PoC. At the same time, the systems team in my company is working on building a CloudStack platform, and is insisting on using VXLAN on the compute nodes, even to the point of using BGP for inter-VXLAN traffic on the nodes.
Using VXLAN (or GRE) encap/decap on the hypervisor hosts is nothing new. That’s how NSX and many OpenStack implementations work.
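For reference, the data-plane part really is simple on a Linux host. A minimal sketch creating a VXLAN interface and attaching it to a VM bridge; the VNI, IP address, and device names are made up for the example:

---
- name: Create a VXLAN interface on a hypervisor host
  hosts: compute_nodes        # hypothetical inventory group
  become: true
  tasks:
    - name: Add a VXLAN interface for VNI 100 (learning disabled, EVPN-style)
      command: >
        ip link add vxlan100 type vxlan id 100 dstport 4789
        local 192.0.2.11 nolearning
    - name: Attach it to the VM bridge and bring it up
      command: "{{ item }}"
      loop:
        - ip link set vxlan100 master br100
        - ip link set vxlan100 up

The hard part is not the encapsulation but the control plane, which is exactly where EVPN (typically implemented with FRRouting on Linux hosts) enters the picture.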