During the discussion of the On Applicability of MPLS Segment Routing (SR-MPLS) blog post on LinkedIn someone made an off-the-cuff remark that…
SRv6 as an host2host overlay - in some cases not a bad idea
It’s probably just my myopic view, but I fail to see the above idea as anything but another tiny chapter in the “Solution in Search of a Problem” SRv6 saga1.
There are two well-known reasons one might want to use a host-to-host overlay:
- Implement virtual networks
- Implement service insertion
On the virtual networks front, we’ve had GRE for decades. We got VXLAN almost a decade ago, and GENEVE a few years later. GRE and VXLAN each address a specific use case – GRE is primarily used for some-L3-over-IP transport2, while VXLAN excels when you have to transport Ethernet frames over IP.
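To illustrate how little machinery VXLAN needs, here’s a minimal Python sketch of the 8-byte VXLAN header defined in RFC 7348 (the function name is mine, not from any library):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Byte 0: flags (0x08 = "VNI is valid"), bytes 1-3: reserved,
    bytes 4-6: 24-bit VNI, byte 7: reserved.
    """
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    # Shift the VNI into the top three bytes of the last 32-bit word
    return struct.pack("!B3xI", 0x08, vni << 8)

hdr = vxlan_header(5001)
```

That’s the whole header: a flag byte and a virtual network identifier, which is precisely why VXLAN is so easy to implement in hardware.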
GENEVE extends VXLAN with multi-protocol capabilities and TLV-encoded metadata. It’s not Turing-complete, but it’s probably pretty close to an overlay kitchen sink3. SRv6 brings nothing to the table apart from the one protocol to rule them all Kool-Aid4 and larger headers.
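GENEVE’s extensibility boils down to a protocol-type field plus a variable-length options area. Here’s a rough Python sketch of the field packing per RFC 8926 – the option class/type values and function names are hypothetical, chosen just to show the mechanics:

```python
import struct

def geneve_option(opt_class: int, opt_type: int, data: bytes) -> bytes:
    """One GENEVE TLV option: class (16b), type (8b), length in 4-byte words."""
    assert len(data) % 4 == 0, "option data is padded to 4-byte multiples"
    return struct.pack("!HBB", opt_class, opt_type, len(data) // 4) + data

def geneve_header(vni: int, protocol: int, options: bytes = b"") -> bytes:
    """Fixed 8-byte GENEVE header (RFC 8926) followed by TLV options."""
    assert len(options) % 4 == 0
    ver_optlen = (0 << 6) | (len(options) // 4)   # version 0, opt len in words
    return (struct.pack("!BBH", ver_optlen, 0, protocol)
            + struct.pack("!I", vni << 8)         # 24-bit VNI + reserved byte
            + options)

# Hypothetical metadata option, carrying an Ethernet (0x6558) payload
opt = geneve_option(0x0102, 0x01, b"\x00\x00\x00\x2a")
hdr = geneve_header(5001, 0x6558, opt)
```

Swap the protocol type (0x6558 for Ethernet, 0x0800 for IPv4) and you get the multi-protocol transport; append more TLV options and you get the metadata channel – no new protocol required.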
Maybe we’re looking at the wrong problem. Watching various SRv6 (marketing) presentations, one gets the impression that SRv6 shines in the Service Insertion arena, so maybe that’s why we should use it instead of VXLAN or GENEVE. This is how the service insertion fairy tale is usually told:
- A controller figures out what needs to be done
- The controller programs, into the ingress node, a stack of entries listing all the services the packet must visit
- The ingress node adds that stack of entries to the incoming packet, ensuring the packet will traverse all the required services.
- Every service in the list receives the packet, removes itself from the list of services, processes the packet, and sends the packet to the next service.
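The steps above can be sketched as a toy Python model – not an actual SRv6 (or NSH) implementation, and the service names and functions are hypothetical:

```python
def service_chain_forward(packet: dict, chain: list, services: dict) -> dict:
    """Toy model of service insertion: the ingress node attaches the
    service list to the packet; every service pops itself off the list,
    processes the packet, and hands it to the next service."""
    packet = dict(packet, chain=list(chain))    # ingress node adds the stack
    while packet["chain"]:
        next_svc = packet["chain"].pop(0)       # service removes itself
        packet = services[next_svc](packet)     # ... then processes the packet
    return packet

# Hypothetical services -- note that every one of them has to understand
# the extra "chain" metadata riding along with the user packet
services = {
    "firewall": lambda p: dict(p, inspected=True),
    "lb":       lambda p: dict(p, backend="10.0.0.5"),
}
result = service_chain_forward({"payload": "GET /"}, ["firewall", "lb"], services)
```

The crucial (and usually glossed-over) detail is visible even in this toy: each service function receives the packet *with* the chain metadata attached, which is exactly the requirement real-world appliances struggle with.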
Ignoring for the moment the stupendous complexity of real-life service insertion (does anyone remember Cisco’s Virtual Security Gateway?), there’s a tiny detail usually glossed over: all the services have to be aware of the “service processing” header and handle that header together with the user packet. That’s why the Network Service Header (NSH) idea never took off. For more details, watch the Service Insertion part of the SDN Use Cases webinar.
Now ask yourself: how many commercial network appliances can do something along those lines? Let me help you: all those that are integrated with AWS Gateway Load Balancer, Azure Gateway Load Balancer, or VMware NSX east-west packet inspection. How many of those use SRv6? None.
It’s not surprising that VMware chose GENEVE as the east-west service insertion transport protocol5 – GENEVE is the default overlay protocol in VMware NSX-T. It’s more interesting that AWS Gateway Load Balancer uses GENEVE even though AWS uses VXLAN for Transit Gateway Connect. Finally, there’s Azure Gateway Load Balancer using two VXLAN tunnels between the load balancer and each appliance, proving the age-old wisdom that as long as service insertion means VLAN stitching, you can do it with VXLAN and EVPN6. Is there a shipping implementation of service insertion using SRv6? I’m not aware of one.
Back to the original SRv6 as host-to-host overlay idea:
- I see no good reason to use SRv6 instead of VXLAN or GENEVE to implement overlay virtual networks. It seems no commercial data center overlay virtual networking product is using it7.
- Large-scale commercial service insertion implementations use VXLAN or GENEVE.
As always, I might be missing something obvious, in which case I’d appreciate your comments.
- Service insertion challenges are described in the SDN Use Cases webinar.
- VMware NSX-T east/west and north/south service insertion is covered in the Firewalling and Security part of the VMware NSX Technical Deep Dive webinar.
- AWS Gateway Load Balancer and AWS Transit Gateway Connect are part of Amazon Web Services Networking webinar.
- Azure Gateway Load Balancer will get a brief mention8 in the autumn 2022 update of the Microsoft Azure Networking webinar.
All four webinars are part of Standard ipSpace.net Subscription.
Still not as bad as the “we could use LISP to implement global VM mobility” idea, followed by a demo of a single VM moved across Europe. ↩︎
Although we did use GRE for bridging decades ago, and one could always consider NVGRE just a variant of GRE. ↩︎
As in “you can throw anything into it without clogging it too much” ↩︎
… and the awesome opportunity to enhance your resume ↩︎
North-South service insertion in VMware NSX-T is simple VLAN stitching. ↩︎
I’m not implying that Azure uses EVPN, just that you can do VLAN stitching with EVPN control plane. ↩︎
The documentation is approximately two pages long and mostly says “we’re working with our integration partners to bring you the best possible experience.” ↩︎