Predicting the IPv6 BGP Table Size
One of my readers sent me an interesting question:
Are you aware of any studies looking at the effectiveness of IPv6 address allocation policies? I'm specifically interested in the effects of allocation policy on RIB/FIB sizes.
Well, we haven’t solved a single BGP-inflating problem with IPv6, so expect the IPv6 BGP table to be similar to the IPv4 BGP table once IPv6 is widely deployed.
All You Ever Wanted to Know About IPv6-over-IPv4 Tunnels
Sander Steffann, Iljitsch van Beijnum and Rick van Rein recently published an amazing IETF draft comparing IPv6-over-IPv4 tunneling mechanisms. If you’re even remotely interested in this topic, the draft is an absolute must-read (and if you want to know about other transitional mechanisms, check out this webinar).
Quality of Service in ProgrammableFlow Networks
OpenFlow is not exactly known for its quality-of-service features (hint: there are none), but as I described in the ProgrammableFlow Technical Deep Dive webinar, NEC implemented numerous OpenFlow extensions in their edge switches and the ProgrammableFlow controller to give you a robust set of QoS features.
Evolution of IP Model
I stumbled upon a fantastic RFC - Evolution of the IP Model (RFC 6250) - that should be made mandatory reading for everyone remotely involved with networking. It describes numerous "truths" (politely called misconceptions) that everyone from programmers to network designers still relies upon. Some of my favorites: reachability is symmetric and transitive, loss is rare, addresses are stable, each host has a single interface and a single IP address ... Enjoy!
Upcoming Events and Presentations
Finally found time to list all the events I’m scheduled to present at in the next few months. Hope you understand why I might occasionally miss one of the daily blog posts … but if you’re attending one of those events, please do send me an email or look for me there and we’ll have a nice cup of conversation (as one of my old friends used to say).
Example: Multi-Stage Clos Fabrics
Smaller Clos fabrics are built with two layers of switches: leaf and spine switches. The oversubscription ratio you want to achieve dictates the number of uplinks on the leaf switch, which in turn dictates the maximum number of spine switches and thus the fabric size.
You have to use a multi-stage Clos architecture if you want to build bigger fabrics; Brad Hedlund described a sample fabric with over 24,000 server-facing ports in the Clos Fabrics Explained webinar.
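The two-tier sizing arithmetic described above is easy to sketch in a few lines of Python. This is a back-of-the-envelope calculator, not anyone's official tool, and it assumes all ports run at the same speed and each leaf connects to every spine with exactly one uplink:

```python
def two_tier_fabric_size(leaf_ports, oversub, spine_ports):
    """Rough sizing of a two-tier (leaf/spine) Clos fabric.

    leaf_ports  -- total ports on each leaf switch
    oversub     -- desired oversubscription ratio (e.g. 3 for 3:1)
    spine_ports -- ports on each spine switch

    Assumes uniform port speeds and one uplink from every leaf
    to every spine.
    """
    # Split leaf ports so that downlinks : uplinks = oversub : 1
    uplinks = leaf_ports // (oversub + 1)
    downlinks = leaf_ports - uplinks

    # One uplink per spine limits the number of spine switches;
    # one spine port per leaf limits the number of leaf switches.
    max_spines = uplinks
    max_leaves = spine_ports

    server_ports = max_leaves * downlinks
    return downlinks, max_spines, server_ports


# Example: 64-port leaves at 3:1 oversubscription, 32-port spines
down, spines, servers = two_tier_fabric_size(64, 3, 32)
print(down, spines, servers)  # 48 downlinks, 16 spines, 1536 server ports
```

Playing with the numbers shows the point made above: the spine port count caps the number of leaves, so bigger fabrics need either denser spine switches or another Clos stage.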
Hot and Cold VM Mobility
Another day, another interesting Expert Express engagement, another stretched layer-2 design solving the usual requirement: “We need inter-DC VM mobility.”
The usual question: “And why would you want to vMotion a VM between data centers?” with a refreshing answer: “Oh, no, that would not work for us.”
Virtual Tenant Networks with NEC ProgrammableFlow
Virtual tenant networks are one of the best features of the NEC ProgrammableFlow solution – you can build virtual layer-2 subnets (based on VLANs, edge ports or port/VLAN combos), connect them with a virtual router, and implement packet filters and traffic steering ... while treating the whole data center fabric as a single device.
Even better, the ingress edge switch performs all the operations you configure (ACLs, L2 lookup, L3 lookup, source/destination MAC rewrite), resulting in optimal end-to-end forwarding.
Daylight – Internet Explorer or Linux of the SDN World?
You’ve probably heard that the networking hardware vendors decided to pool resources to create an open-source OpenFlow controller. Just in case you’re wondering whether they lost their mind (no, they didn’t), here’s my cynical take.
WAN Routing in Data Centers with Layer-2 DCI
A while ago I got an interesting question:
Let's say that due to circumstances outside of your control, you must have stretched data center subnets... What is the best method to get these subnets into OSPF? Should they share a common area at each data center or should each data center utilize a separate area for the same subnet?
Assuming someone hasn’t sprinkled the application willy-nilly across the two data centers, it’s best if the data center edge routers advertise subnets used by the applications as type-2 external routes, ensuring one data center is always the primary entry point for a specific subnet. Getting the same results with BGP routing in the Internet is a much tougher challenge.
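Here's a minimal IOS-style sketch of the idea. The prefix, process ID, and metrics are made up for illustration; the point is that type-2 (E2) external metrics are not incremented along the path, so the router advertising the lower metric stays the primary entry point regardless of internal OSPF costs:

```
! DC1 edge router – primary entry point for the stretched subnet
router ospf 1
 redistribute connected subnets route-map STRETCHED
!
route-map STRETCHED permit 10
 match ip address prefix-list STRETCHED-SUBNETS
 set metric 100
 set metric-type type-2
!
ip prefix-list STRETCHED-SUBNETS permit 10.1.1.0/24

! DC2 edge router – same configuration, but "set metric 200",
! making it the backup entry point for the same subnet
```

With type-1 (E1) externals the internal path cost would be added to the metric and traffic could oscillate between entry points depending on where it enters the OSPF domain; E2 metrics avoid that.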