Use BGP Outbound Route Filters (ORF) for IP Prefixes
When a BGP router cannot fit the whole BGP table into its forwarding table (FIB), we often use inbound filters to limit the amount of information the device keeps in its BGP table. That’s usually a waste of resources:
- The BGP neighbor has to send information about all prefixes in its BGP table
- The device with an inbound filter wastes additional CPU cycles to drop many incoming updates.
Wouldn’t it be better for the device with an inbound filter to push that filter to its BGP neighbors?
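That's precisely what the ORF capability (RFC 5291) does: the router pushes its inbound prefix list to the neighbor, which applies it on the outbound side. On Cisco IOS, the configuration might look along these lines (the neighbor address, AS numbers, and prefix-list name are made up for illustration):

```
router bgp 65000
 neighbor 10.1.1.1 remote-as 65100
 ! The inbound filter we want the neighbor to apply on its outbound side
 neighbor 10.1.1.1 prefix-list ONLY-DEFAULT in
 ! Negotiate the ORF capability and push the prefix list to the neighbor
 neighbor 10.1.1.1 capability orf prefix-list send
!
ip prefix-list ONLY-DEFAULT permit 0.0.0.0/0
```

The neighbor would configure `capability orf prefix-list receive` (or `both`) to accept the pushed filter.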
Sturgeon's Law, VRRPv3 Edition
I just wasted several days trying to figure out how to make the dozen (or so) platforms for which we implemented VRRPv3 in netlab work together. This is the first in a series of blog posts describing the ridiculous stuff we discovered during that journey.
The idea was pretty simple:
- Create a lab with the tested device and a well-known probe connected to the same subnet.
- Disable VRRP (or interface) on the probe and check IPv4 and IPv6 connectivity through the tested device (verifying it takes over ownership of VRRP MAC and IP addresses).
- Reenable VRRP on the probe and change its VRRP priority several times to check the state transitions through INIT / BACKUP (lower priority) / MASTER (change in priority) / BACKUP (preempting after a change in priority).
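The skeleton of such a lab can be described with a netlab topology along these lines (a rough sketch: node names are invented, and the attribute names come from my reading of the netlab gateway module, so verify them against the current documentation):

```yaml
# Device under test plus a well-known probe on a shared subnet
module: [ gateway ]
gateway.protocol: vrrp

nodes:
  dut:
    device: eos          # the platform being tested
  probe:
    device: frr          # well-known reference implementation
  host:
    device: linux        # checks connectivity through the VRRP address

links:
- dut:
  probe:
  host:
  gateway: True          # run VRRP (the gateway protocol) on this subnet
```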
The Ethernet/802.1 Protocol Stack
The believers in the There Be Four Layers religion think everything below IP is just a blob of stuff dealing with physical things:

People steeped in a slightly more nuanced view of the world, in which IP is not the centerpiece of the universe, might tell you that the blob of stuff actually contains two distinct things:
IBGP Is the Better EBGP
Whenever I explained how one could build EBGP-only data center fabrics, someone would inevitably ask, “But could you do that with IBGP?”
TL&DR: Of course, but that does not mean you should.
Anyway, leaving behind the land of sane designs, let’s trot down the rabbit trail of IBGP-only networks.
Concise Link Descriptions in netlab Topologies
One of the goals we’re always trying to achieve when developing netlab features is to make the lab topologies as concise as possible. Among other things, netlab supports numerous ways of describing links between lab devices, allowing you to be as succinct as possible.
A bit of a background first:
- In the end, netlab collects all links in the links list before starting the data transformation process.
- Every entry in the links list is a dictionary. That dictionary can contain link attributes and must contain a list of interfaces connected to the link.
- Every interface must have a node (specifying the lab device it belongs to) and could contain additional interface attributes.
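Putting those rules together, the same point-to-point link can be written at several levels of verbosity, roughly like this (a sketch; `r1` and `r2` are hypothetical node names):

```yaml
links:
# Terse: a string listing the connected nodes
- r1-r2
# A list of nodes attached to the link
- [ r1, r2 ]
# Full form: a dictionary with link attributes and an interfaces list
- mtu: 1500
  interfaces:
  - node: r1
  - node: r2
```

netlab expands the terse forms into the full dictionary format during the data transformation process.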
Public Videos: Leaf-and-Spine Fabric Design
The initial videos of the Leaf-and-Spine Fabric Architectures webinar are now public. You can watch the Leaf-and-Spine Fabric Basics, Physical Fabric Design, and Layer-3 Fabrics sections without an ipSpace.net account.
Lab: Level-1 and Level-2 IS-IS Routing
One of the recipes for easy IS-IS deployments claims that you should use only level-2 routing (although most vendors enable level-1 and level-2 routing by default).
What does that mean, and why does it matter? You’ll find the answers in the Optimize Simple IS-IS Deployments lab exercise.
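For reference, restricting a router to level-2 routing usually takes a single command; on Cisco IOS it would be something like:

```
router isis
 is-type level-2-only
```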

Comparing IGP and BGP Data Center Convergence
A Thought Leader recently published a LinkedIn article comparing IGP and BGP convergence in data center fabrics. In it, they claimed that:
iBGP designs would require route reflectors and additional processing, which could result in slightly slower convergence.
Let’s see whether that claim makes any sense.
TL&DR: No. If you’re building a simple leaf-and-spine fabric, the choice of the routing protocol does not matter (but you already knew that if you read this blog).
Weird Junos IS-IS Metrics
As part of the netlab development process, I run almost 200 integration tests on more than 20 platforms (over a dozen operating systems), and the amount of weirdness I discover is unbelievable.
Today’s special: Junos is failing the IS-IS metrics test.
The test is trivial:
- The device under test is connected to two IS-IS routers (X1 and X2)
- It has a low metric configured on the link with X1 and a high metric configured on the link with X2
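On Junos, the metric setup described above might look like this (interface names are placeholders):

```
# Low metric toward X1, high metric toward X2
set protocols isis interface ge-0/0/0.0 level 2 metric 10
set protocols isis interface ge-0/0/1.0 level 2 metric 1000
```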
The validation process is equally trivial:
netlab: Multi-Site VLANs
Imagine you want to create a simple multi-site network with netlab:
- The lab should have two sites (A and B).
- Each site has a layer-3 switch, a single VLAN (VLAN 100), and two hosts connected to that VLAN.
- As you don’t believe in the magic powers of stretched VLANs, you have a layer-3 (IPv4) link between sites.

Network diagram
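The design above could be described with a netlab topology roughly like this one (a hedged sketch: node names are invented, and the netlab VLAN module might not accept two VLANs sharing the same 802.1Q ID, so check the documentation):

```yaml
# Two sites, each with its own instance of VLAN 100
vlans:
  a_100:
    id: 100
  b_100:
    id: 100

nodes:
  sw_a: { device: eos, module: [ vlan ] }
  sw_b: { device: eos, module: [ vlan ] }
  h1: { device: linux }
  h2: { device: linux }
  h3: { device: linux }
  h4: { device: linux }

links:
# Site A access ports
- sw_a:
  h1:
  vlan.access: a_100
- sw_a:
  h2:
  vlan.access: a_100
# Site B access ports
- sw_b:
  h3:
  vlan.access: b_100
- sw_b:
  h4:
  vlan.access: b_100
# Routed (IPv4) inter-site link
- sw_a-sw_b
```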
New IPv6 Documentation Prefix
After three and a half years of haggling (the IETF draft that became the RFC was written in May 2021; the original discussions go back to 2013), Nick Buraglio & co managed to persuade the pontificators bikeshedding in the v6ops working group that we might need an IPv6 documentation prefix larger than the existing `2001:db8::/32`.
With the new documentation prefix (`3fff::/20`, defined in RFC 9637), there’s absolutely no excuse to use public IPv6 address space in examples anymore.
netlab 1.9.3: MLAG, Static Routes, Node Cloning
netlab release 1.9.3 brings these new features:
- Multi-chassis Link Aggregation (MLAG) on Arista EOS, Aruba CX, Cumulus NVUE, and Dell OS10
- VRF and VLAN groups
- Additional OSPF interface parameters (hello and dead timers, cleartext passwords, and DR priority) implemented on Arista EOS, Aruba CX, Cisco IOS/IOS-XE, Cisco Nexus OS, Cumulus Linux, Dell OS10, and FRRouting
- Static routes with direct or indirect next hops implemented on Arista EOS, Cisco IOS/IOS-XE, FRRouting, and Linux
- Node cloning plugin for users who want to build detailed digital twins of their networks.
- Consistent selection of default address pools based on the number of nodes attached to a link (this could change addressing in multi-provider topologies)
- Support for the vjunos-router device and the Cisco NSO tool.
Other new features include:
Configuring IP Addresses Won't Make You an Expert
A friend of mine recently wrote a nice post explaining how netlab helped him set up a large network topology in a reasonably short timeframe. As expected, his post attracted a wide variety of comments, from “netlab is a gamechanger” (thank you 😎) to “I prefer traditional labs.” Instead of writing a bunch of replies into a walled-garden ecosystem, I decided to address some of those concerns in a public place.
Let’s start with:
Increase the Stability of your Network
The introduction of real-time mission-critical applications into data networks has prompted many network designers to tune their routing protocols for faster convergence. While the resulting network can quickly detect failures and reroute around them, it usually becomes highly susceptible to repetitive failures (for example, a flapping interface), which can cause recurring instabilities in large parts of the network. A flapping interface can also cause significant data loss, as the data streams are constantly rerouted across the network following a routing protocol adjacency establishment and subsequent loss.
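One classic countermeasure is IP event dampening, which exponentially suppresses a flapping interface instead of propagating every transition into the routing protocols. On Cisco IOS it's a single interface command (interface name and timer values are illustrative):

```
interface GigabitEthernet0/1
 ! half-life 5s, reuse 1000, suppress 2000, max-suppress 20s
 dampening 5 1000 2000 20
```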
OSPFv3 on Bird Needs IPv6 LLA on the Loopback Interface
Wanted to share this “too weird to believe” SNAFU I found when running integration tests with the Bird routing daemon. It’s irrelevant unless you want Bird to advertise the IPv6 prefix configured on the main loopback interface (`lo`) with OSPFv3.
Late last year, I decided to run netlab integration tests with the Bird routing daemon. It passed most baseline netlab OSPFv3 integration tests but failed those that checked the loopback IPv6 prefix advertised by the tested device (test results).
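If the root cause is what the title suggests, the workaround is to give `lo` a link-local address, which Linux does not create for the loopback interface by default. Something along these lines should do it (the actual LLA value is arbitrary):

```
ip -6 addr add fe80::1/64 dev lo
```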