Mr. A. Anonymous left this comment on my BGP in the data centers blog post:
BGP is starting to penetrate into servers as well. What are your thoughts on having BGP running from the servers themselves?
Finally some people got it. Also, welcome back to the '90s (see also RFC 1925 section 2.11).
Running a routing protocol on servers (or IBM mainframes) is nothing new – we were doing that 30 years ago, using either RIP or OSPFv2 – and it’s one of the best ways to achieve path redundancy.
Later it became unfashionable to have any communication between the server silo and the network silo, resulting in the unhealthy mess we have today where everyone expects the other team to solve the problem. Unfortunately, the brown substance tends to flow down the stack.
However, even though mainstream best practices focused on link bonding, MLAG and similar kludges, I know people who had been running BGP on their servers (with good results) for years if not decades.
The old ideas resurfaced in mainstream networking as a means of connecting the virtual (overlay) world with the physical one – first with routing protocol support on VMware NSX Edge Services Router (ESR), later with BGP support in Hyper-V gateways. I was really glad VMware decided to implement BGP on ESR, because BGP establishes a clean separation between two administrative domains (virtual and physical).
Lately, I’ve seen very smart full-stack engineers (read: sysadmins who understand networking) use FRR to run BGP across unnumbered links between servers and ToR switches, greatly simplifying both the BGP configuration and the deployment procedures (not to mention turning the whole fabric into a pure L3 fabric with no VLANs on the ToR switches).
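To give you a feel for how simple that configuration gets, here’s a minimal sketch of an FRR configuration on a server with two uplinks. The ASN, router ID, loopback address and interface names are made up for illustration; the point is that the neighbor statements reference interfaces, not IP addresses – the BGP sessions run over auto-configured IPv6 link-local addresses, and IPv4 prefixes are advertised with IPv6 next hops (RFC 5549), so no IPv4 addressing is needed on the server-to-ToR links.

```
! /etc/frr/frr.conf on the server (hypothetical ASN, router ID, interfaces)
router bgp 65101
 bgp router-id 10.0.0.11
 ! BGP unnumbered: session over the interface's IPv6 link-local address;
 ! "remote-as external" accepts whatever ASN the ToR switch uses
 neighbor eth1 interface remote-as external
 neighbor eth2 interface remote-as external
 address-family ipv4 unicast
  ! advertise only the server's loopback address
  network 10.0.0.11/32
```

The same configuration can be copied to every server unchanged (apart from the router ID and loopback prefix), which is exactly why this approach simplifies deployment procedures so much.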
Want to know more? Dinesh Dutt described the idea in the Leaf-and-Spine Fabric Architectures webinar.