Please Wait While We're Preparing Your Interfaces
Once a virtual machine running a network operating system boots, you’d expect its data-plane interfaces to be operational, right? Some vendors disagree. It takes over a minute for some network operating systems to figure out they have this thing called interfaces.
I would love to figure out what takes them so long (a minute is an eternity on modern CPUs), but I guess we’ll never know.
Behind the Scenes
netlab uses two device provisioning mechanisms: it can start virtual machines with Vagrant or containers with containerlab. Some of those containers might use KVM/QEMU to run a hidden virtual machine (see also: RFC 1925 rule 6a).
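Here’s a minimal sketch of what that looks like in a netlab topology (device names and types are made up; netlab can even mix the two mechanisms in a single lab with a per-node provider setting):

provider: libvirt        # default provisioning mechanism (Vagrant with the libvirt plugin)
nodes:
  r1:
    device: iosv         # started as a virtual machine
  r2:
    device: eos
    provider: clab       # this node is started with containerlab instead
links: [ r1-r2 ]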
Arista cEOS Containers Run on Apple Silicon
A few days ago, someone mentioned Arista released a cEOS EFT image running on Arm. Of course, I had to test whether it would run on Apple Silicon.
TL&DR: YES 🎉 🎉
Here’s what you have to do to make the Arista cEOS container work with netlab running on an Ubuntu VM on Apple silicon:
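The full recipe is in the post; in a nutshell, the workflow looks something like this (the file name, image tag, and topology details below are my assumptions, not the exact steps):

# Import the Arm cEOS image into Docker (file name and tag are made up)
docker import cEOSarm-lab.tar.xz ceos:4.33.0F

# topology.yml: use containerlab and point netlab at the imported image
provider: clab
defaults.device: eos
defaults.devices.eos.clab.image: ceos:4.33.0F
nodes: [ s1, s2 ]
links: [ s1-s2 ]

# Start the lab
netlab up topology.yml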
Links in Virtual Labs
There are three major ways to connect network devices in the physical world:
- Point-to-point links between devices (usually using some variant of Ethernet)
- Multi-access layer-1 networks running some IEEE 802.x encapsulation on top of that (GPON, WiFi, Ethernet hubs)
- Multi-access switched layer-2 network (dumb switches, hopefully running some STP variant)
Implementing these connections in virtual labs is a bit harder than one might think, as all virtualization solutions assume you plan to run virtual servers connected to Ethernet segments.
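For example, netlab describes the first and the third option with different link formats; depending on the provider, a point-to-point link becomes a veth pair or a UDP tunnel, while a multi-access link is implemented with a Linux bridge. A minimal sketch:

nodes: [ r1, r2, r3 ]
links:
- r1-r2             # point-to-point link between two devices
- [ r1, r2, r3 ]    # multi-access segment (implemented with a Linux bridge)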
netlab 1.9.4: Bug fixes, VRRPv3 on Junos
During the last three weeks, we were busy squashing bugs (device configuration fixes, other bug fixes). Some were recent; others were ancient pests uncovered by better integration tests. The end result: netlab release 1.9.4.
netlab release 1.9.4 passed hundreds of integration tests and should be a better choice than the previous 1.9 releases. To upgrade, execute pip3 install --upgrade networklab.
Update: 2025-02-03
We still missed a few quirks :( Release 1.9.4-post1 addresses those (and, unfortunately, I’m pretty sure there will be more).
The Curious Case of the BGP Connect State
I got this question from Paul:
Have you ever seen a BGP peer in the “Connect” state? In 20 years, I have never been able to see or reproduce this state, nor any mention in a debug/log. I am starting to believe that all the documentation is BS, and this does not exist.
The BGP Finite State Machine (FSM) (at least the one defined in RFC 4271 and amended in RFC 9687) is “a bit” hard to grasp, but the basics haven’t changed since the ancient days of RFC 1771:
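For reference, here are the six FSM states. The reason you rarely catch a session in the Connect state: the TCP three-way handshake usually completes (or fails with an RST) within milliseconds, so a session lingers in Connect mostly when the SYN packets are silently dropped somewhere along the way.

Idle        - waiting for a start event; no connection resources allocated
Connect     - TCP connection attempt in progress (waiting for the three-way handshake)
Active      - TCP connection failed; listening for an inbound connection and retrying
OpenSent    - TCP session established, OPEN message sent, waiting for the peer's OPEN
OpenConfirm - OPEN received and accepted, waiting for the first KEEPALIVE
Established - session up; UPDATE messages can be exchanged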
Cisco Modeling Labs and Infrastructure-as-Code
Dalton Ortega, Cisco Modeling Labs Product Manager, sent me the following email as a response to my Configuring IP Addresses Won't Make You an Expert blog post:
First, your statement on Autonetkit is indeed correct. We had removed that from the product due to lack of popularity. That being said, in our roadmap we are looking at methods to reintroduce on-the-fly configuration as well as enhancing our sample labs library to make getting started with CML easier.
Secondly, CML can be run in full IaC mode because of the API-first build. In fact, many of our customers are using CML as an automated test/validation bed for their CI/CD pipelines. Tools like Ansible and Terraform are available to facilitate this inside CML too. For more details, read:
Worth Reading: Drunken Plagiarists
George V. Neville-Neil published a fantastic, must-read summary of the various code copilots’ usefulness on ACM Queue: The Drunken Plagiarists.
It pretty much mirrors my experience (plus, I got annoyed when the semi-relevant suggestions kept kicking me out of the flow) and reminds me of the early days of OpenFlow, when nobody wanted to listen to old grunts like myself telling the world it was all hype and little substance.
Cisco VRRPv3 IPv6 Configuration Sucks
I spent way too much time ironing out the VRRPv3 quirks on the dozen (or so) platforms supported by netlab. This is the second blog post describing some of the ridiculous stuff I had to deal with.
This is how you configure the basic VRRPv3 parameters for IPv4 on a Cisco IOS/XE device:
interface GigabitEthernet0/1
 vrrp 217 address-family ipv4
  address 172.16.33.42
You would expect something similar for IPv6, right? You’d be right if you were working with Arista EOS:
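(A sketch of the pleasantly symmetric EOS configuration; the interface name and addresses are made up:)

interface Ethernet1
   vrrp 217 ipv4 172.16.33.42
   vrrp 217 ipv6 fe80::217:1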
Use BGP Outbound Route Filters (ORF) for IP Prefixes
When a BGP router cannot fit the whole BGP table into its forwarding table (FIB), we often use inbound filters to limit the amount of information the device keeps in its BGP table. That’s usually a waste of resources:
- The BGP neighbor has to send information about all prefixes in its BGP table;
- The device with an inbound filter wastes additional CPU cycles to drop many incoming updates.
Wouldn’t it be better for the device with an inbound filter to push that filter to its BGP neighbors?
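That’s exactly what the ORF capability does: the device sends its inbound prefix list to the neighbor, which applies it on the outbound side. A minimal Cisco IOS sketch (AS numbers, neighbor address, and prefix-list name are made up; the neighbor must be configured to receive ORFs for this to work):

ip prefix-list ONLY-DEFAULT permit 0.0.0.0/0
!
router bgp 65000
 neighbor 10.0.0.1 remote-as 65100
 address-family ipv4
  neighbor 10.0.0.1 activate
  neighbor 10.0.0.1 capability orf prefix-list send
  neighbor 10.0.0.1 prefix-list ONLY-DEFAULT in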
Sturgeon's Law, VRRPv3 Edition
I just wasted several days trying to figure out how to make the dozen (or so) platforms for which we implemented VRRPv3 in netlab work together. This is the first in a series of blog posts describing the ridiculous stuff we discovered during that journey.
The idea was pretty simple:
- Create a lab with the tested device and a well-known probe connected to the same subnet.
- Disable VRRP (or interface) on the probe and check IPv4 and IPv6 connectivity through the tested device (verifying it takes over ownership of VRRP MAC and IP addresses).
- Re-enable VRRP on the probe and change its VRRP priority several times to check the state transitions through INIT → BACKUP (lower priority) → MASTER (after a priority change) → BACKUP (preempting after another priority change).
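A lab topology along those lines might look like this in netlab (a minimal sketch; the node names and the choice of Arista EOS are assumptions, and netlab’s gateway module takes care of the VRRP configuration):

defaults.device: eos     # any VRRP-capable netlab device would do
module: [ gateway ]
gateway.protocol: vrrp

nodes: [ dut, probe ]    # device under test and the well-known probe

links:
- dut:
  probe:
  gateway: True          # enable the VRRP virtual gateway on the shared subnet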
The Ethernet/802.1 Protocol Stack
The believers in the There Be Four Layers religion think everything below IP is just a blob of stuff dealing with physical things.
People steeped in a slightly more nuanced view of the world in which IP is not the centerpiece of the universe might tell you that the blob of stuff we need is two things: the data-link layer and the physical layer.
IBGP Is the Better EBGP
Whenever I was explaining how one could build EBGP-only data center fabrics, someone would inevitably ask, “But could you do that with IBGP?”
TL&DR: Of course, but that does not mean you should.
Anyway, leaving behind the land of sane designs, let’s trot down the rabbit trail of IBGP-only networks.
Concise Link Descriptions in netlab Topologies
One of the goals we’re always trying to achieve when developing netlab features is to make the lab topologies as concise as possible. Among other things, netlab supports numerous ways of describing links between lab devices, allowing you to be as succinct as possible.
A bit of a background first:
- In the end, netlab collects all links in the links list before starting the data transformation process.
- Every entry in the links list is a dictionary. That dictionary can contain link attributes and must contain a list of interfaces connected to the link.
- Every interface must have a node (specifying the lab device it belongs to) and could contain additional interface attributes.
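Here’s a quick illustration (the mtu attribute is just an example of a link attribute): all of the following entries describe a link between r1 and r2, going from the most verbose to the tersest format:

links:
# Fully spelled-out form: a dictionary with link attributes and an interfaces list
- interfaces:
  - node: r1
  - node: r2
  mtu: 1400
# Dictionary with node names as keys
- r1:
  r2:
  mtu: 1400
# List of nodes
- [ r1, r2 ]
# String shorthand
- r1-r2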
Public Videos: Leaf-and-Spine Fabric Design
The initial videos of the Leaf-and-Spine Fabric Architectures webinar are now public. You can watch the Leaf-and-Spine Fabric Basics, Physical Fabric Design, and Layer-3 Fabrics sections without an ipSpace.net account.
Lab: Level-1 and Level-2 IS-IS Routing
One of the recipes for easy IS-IS deployments claims that you should use only level-2 routing (although most vendors enable level-1 and level-2 routing by default).
What does that mean, and why does it matter? You’ll find the answers in the Optimize Simple IS-IS Deployments lab exercise.
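For context: an IS-IS router can participate in level-1 (intra-area) routing, level-2 (backbone) routing, or both. A sketch of limiting a router to level-2 routing on Cisco IOS (the NET is made up):

router isis
 net 49.0001.0000.0000.0001.00
 is-type level-2-only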
