VNFs and Containers: Heptagonal Pegs and Triangle Holes

One of my readers sent me this question:

It would be nice to have a blog post or a webinar describing how to implement container networking in cases where: (A) the application does not tolerate NAT (telco, e.g. due to SCTP), (B) no DNS/FQDN is used to find the peer element, and (C) bandwidth requirements may be tough.

The only thing I could point him to is the Advanced Docker Networking part of Docker Networking Fundamentals webinar (available with free subscription) where macvlan and ipvlan are described.

However, I couldn’t help adding…

I would be very skeptical about running something that cannot tolerate IP address changes or use DNS in a container. It’s like trying to fit a heptagonal peg with spikes into a triangle hole.

His response didn’t surprise me in the least:

In mobile networks, pushing a square peg through a round hole is a common and repetitive task. A while ago, special hardware was used to get performance. Now, we put everything into a cloud and use commodity HW. No wonder we lose performance, and as a result, we need special accelerators and other tricks… thus we end up with something that's neither scalable nor commodity anymore.

That’s why I opened a bag of popcorn when the whole “NFV/VNF will allow service providers to use free software on commodity hardware” hype started, and I’m still enjoying the show.



  1. I thought that containers were originally developed for specific functions (microservices and distributed systems come to mind), not server virtualization. So it's the "right tool for the right job" problem.

    1. When everyone is talking about shiny new hammers, all your problems start to resemble nails... ;)

    2. ... which you could all knock in at a single blow :D gorgeous!

  2. Well, to be honest, the (A) problem is not that difficult to solve. Just connect your containers to IPVLAN/MACVLAN interfaces and all of a sudden you've factored out all possible virtual NATs and bridges. The (B) problem is also easy. The current generation of VNFs are simply PNFs (physical network functions, i.e. physical servers) wrapped as VMs, so they will use the same service discovery mechanism they used before, which is usually statically assigned IPs with maybe something like DNS SRV on top; again, nothing new and definitely not container-native service discovery. The (C) problem is solvable with a combination of SR-IOV, MACVLAN and DPDK inside the container, or various smart NICs with hardware offloads.
    In my experience, the biggest issue with the whole VNF story is that they still use the same (legacy) assumptions and HA mechanisms they did in the bare-metal world. So things like VRRP between containers/VMs are a common sight. One of the worst things you can see, though, is a dataplane VNF requiring dynamic routing peering with the network underlay. Think OSPF+BFD with BGP on top that needs to peer with your ToR switch. In the end, after you've satisfied all their requirements, the VNF gets pinned to a single server, which effectively turns it back into a bare-metal NF, with a few intermediate container and VM layers in between. Let's call this 5G.
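    The macvlan approach to problem (A) can be sketched with the standard Docker CLI. This is a hypothetical example, assuming `eth0` is the host's uplink and `192.0.2.0/24` is a routable subnet on that segment (both placeholders); containers attached to the macvlan network get addresses directly on the physical LAN, with no NAT and no Linux bridge in the data path:

    ```shell
    # Create a macvlan network whose containers appear as first-class
    # hosts on the physical segment (no NAT, no docker0 bridge).
    # eth0 and 192.0.2.0/24 are placeholders for your uplink and subnet.
    docker network create -d macvlan \
      --subnet=192.0.2.0/24 \
      --gateway=192.0.2.1 \
      -o parent=eth0 telco-net

    # Attach a container with a statically assigned IP, matching the
    # static service-discovery model described under (B).
    # "my-vnf-image" is a hypothetical image name.
    docker run -d --network telco-net --ip 192.0.2.10 my-vnf-image
    ```

    One well-known caveat: by default the host cannot talk to containers on a macvlan network through the same parent interface, because macvlan does not hairpin traffic back to the host stack; if host-to-container connectivity is needed, a separate macvlan sub-interface on the host is the usual workaround.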

  3. Sure, it makes sense that doing things on commodity hardware is cheaper than doing things on bespoke hardware. But in practice, NFV purists in general (and telcos in particular) often (incorrectly) translate this into "everything should be done in software on general-purpose CPUs". There are many things that can be done orders of magnitude more efficiently and cheaply in hardware specifically designed for that task than in general-purpose hardware. Forwarding packets is an obvious example; machine learning is a more recent one. That said, the non-general-purpose hardware designed specifically for the task can still be commodity. "Commodity hardware" does not necessarily imply "general-purpose CPUs".

  4. Just use carrier-grade k8s with a /24 of public IPs on each host!

