Arista EOS Hates a Routing Instance with No Interfaces
I always ask engineers reporting a netlab bug to provide a minimal lab topology that reproduces the error; that request sometimes has “interesting” side effects. For example, I was trying to debug a BGP-related Arista EOS issue using a netlab topology similar to this one:
defaults.device: eos
module: [ bgp ]
nodes:
  a: { bgp.as: 65000 }
  b: { bgp.as: 65001 }
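If you want to play along at home: assuming the topology is saved as topology.yml (netlab’s default topology file name), a single command creates the lab and deploys the device configurations:
$ netlab up
That configuration deployment step is where things blew up.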
Imagine my astonishment when the two switches failed to configure BGP. Here’s the error message I got when running the netlab Ansible playbook that deploys device configurations:
TASK [eos_config: deploying bgp from /netsim/ansible/templates/bgp/eos.j2] **********************************
fatal: [b]: FAILED! => changed=false
  data: |-
    bgp advertise-inactive
    % Invalid input
    b(config-s-ansible_17)#
  msg: |-
    bgp advertise-inactive
    % Invalid input
    b(config-s-ansible_17)#
fatal: [a]: FAILED! => changed=false
  data: |-
    bgp advertise-inactive
    % Invalid input
    a(config-s-ansible_17)#
  msg: |-
    bgp advertise-inactive
    % Invalid input
    a(config-s-ansible_17)#
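A side note: you don’t have to restart the whole lab to retry the failed step. The netlab initial command reruns the configuration deployment, so you can iterate on a fix:
$ netlab initial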
Fortunately, the error was easy to reproduce within an interactive session, but the results totally stumped me.
$ netlab connect a
Connecting to clab-X-a using SSH port 22
Last login: Thu Sep 18 05:28:50 2025 from 192.168.121.1
a#conf t
a(config)#router bgp 65000
% Unavailable command (not supported on this hardware platform)
a(config)#
Following the “when in doubt, reboot” approach familiar to anyone who contacts a vendor’s TAC often enough, I rebooted the server, thinking it might be a container error. Of course, that did not help.
Even worse, the device refused to enable IP routing in the global routing instance, producing the same confusing “not supported on this hardware platform” error message. The behavior was consistent across various EOS releases, running as containers or virtual machines.
$ netlab connect a
Connecting to clab-X-a using SSH port 22
Last login: Thu Sep 18 05:30:31 2025 from 192.168.121.1
a#conf t
a(config)#ip routing
% Unavailable command (not supported on this hardware platform)
a(config)#
After a long stretch of fruitless head-banging, I decided to start a lab topology I knew had worked in the past, and it came up flawlessly. Now I was onto something: I “only” had to identify the differences between the two topologies… and then it hit me like a ton of bricks. The “test” lab topology had no links, so the default VRF had no interfaces (apart from the loopback):
a#show vrf default
   VRF          Protocols       State            Interfaces
------------- --------------- ---------------- ------------
   default     IPv4            no routing       Lo0
   default     IPv6            no routing
Arista EOS obviously hates having no interfaces in the default VRF (I can’t blame it; it must feel really lonely), but its error message could be a bit more on-target.
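The workaround (or, as the comments below explain, the way to avoid the root cause) is to give the device at least one data-plane interface. Here’s a minimal sketch of a fixed topology, using netlab’s shorthand link syntax to add a single a-b link:
defaults.device: eos
module: [ bgp ]
nodes:
  a: { bgp.as: 65000 }
  b: { bgp.as: 65001 }
links: [ a-b ]
With one Ethernet interface per switch, the virtual ASIC starts, and the router bgp configuration should deploy cleanly.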
Ivan, does the container in netlab have any dataplane interfaces? I can reliably reproduce this with a cEOS-lab image running in GNS3 (as opposed to netlab) when I instantiate a container with 0 or 1 interfaces (e0 would be int ma0, with e1+ being dataplane ports).
I believe this issue isn’t a VRF with no interfaces; it’s that there are no interfaces at all, so the container image doesn’t spin up the virtual ASIC.
I do agree, though, that the error message could perhaps be refined a little.
The container and the VM always have a management interface (eth0 in a container) in a management VRF, but there are no other interfaces; the show vrf printout above was taken from an Arista EOS container exhibiting this issue (running in containerlab using a netlab-generated clab.yml file).
To reproduce this in netlab, start a topology with no links element.
Ah, OK, so yes: it’s not an issue with a VRF with no interfaces; it’s that there is no dataplane interface, so the container isn’t starting the virtual ASIC. The problem should go away if you add a single link to the device and restart it.