Blog Posts in March 2014
A few years ago I lambasted the lack of STP support in Brocade’s VCS fabric. It took Brocade over two years to solve the problem, but they finally came up with an interesting end-to-end solution.
Last June Tore Anderson talked about his IPv6-only data center deployment (an idea that recently became very popular after Facebook’s presentation at the V6 World Congress) in one of my free webinars. In case you missed the videos explaining the technical details, watch them or view Tore’s slide deck.
Short summary: vSphere starts using an uplink as soon as its physical layer becomes operational, which might happen during the ToR switch startup phase, or before a ToR switch port enters the forwarding state.
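A common workaround (a sketch of standard practice, not taken from the original post; interface names are made up) is to configure the ESXi-facing ToR ports as spanning-tree edge/portfast ports, so they start forwarding as soon as the link comes up:

```
! Hypothetical ToR switch port facing an ESXi host (Cisco IOS syntax)
interface GigabitEthernet1/0/10
 description ESXi host uplink
 switchport mode trunk
 ! Skip the listening/learning phases on this trunk port; depending on the
 ! IOS release the keyword is "portfast trunk" or "portfast edge trunk"
 spanning-tree portfast edge trunk
```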
Whenever I mention the idea of IPv6-only data centers, I get the usual question: “Sounds great, but is anyone actually using it?” So far, my answer was: “Yeah, I know a great guy in Norway who runs this in production.” As of last week, the answer is way more persuasive: “Facebook is almost there.”
I spent the whole last week immersed in the security-spiced atmosphere of Troopers, a fantastic boutique security conference (like last year, they limited the number of attendees and sold out weeks before the conference).
I admit they totally spoiled me last year, but they managed to make the conference and all the accompanying events even better.
Talking about OpenFlow (and poking holes in it) is fun, but are there any real-life deployments (apart from Google’s highly publicized internal network)? I tried to describe a few of them in my SDN 101 webinar.
When Enno Rey mentioned RFC 6106 support (why does it matter?) on Cisco IOS during the opening presentation of the Troopers 2014 IPv6 security summit, I got interested but remained a bit skeptical. When Eric Vyncke (sitting in the audience) started nodding, I knew it must be there. Finding the feature in the IOS documentation turned out to be mission impossible.
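For the record, the relevant interface-level command on recent IOS releases is reportedly along these lines (treat it as a hedged sketch and verify on your own gear, since the documentation is anything but obvious):

```
! Hypothetical example: advertise a recursive DNS server in router
! advertisements (RFC 6106); exact syntax may vary across IOS releases
interface GigabitEthernet0/0
 ipv6 nd ra dns server 2001:DB8::53
```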
A reader left the following comment on my Does Multipath TCP Matter blog post: “Why would I use MP-TCP in a data center? Couldn’t you use packet spraying at each hop and take care of re-ordering at the destination?”
Short answer: You could, but you might not want to.
I was listening to the excellent opening presentation Enno Rey gave at the Troopers 2014 IPv6 security summit (he claimed he was ranting, but it sounded more like some of my polite blog posts), and when I saw this slide I could literally hear a blog post clicking together in my head.
In short: IPv6 has many shortcomings, but this might not be one of them.
Cristiano sent me an interesting question:
I saw that to configure BGP as the routing protocol running over DMVPN I have to configure BGP neighbors on the hub site router. Do I really have to configure all the neighbors on the hub site? How many neighbors could I configure? How can I scale that?
According to Cisco Live presentations, BGP-over-DMVPN scales to several thousand spoke sites (per hub router), so you shouldn’t be too worried about protocol scalability. Configuring all those neighbors is a different issue.
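The usual way around per-spoke neighbor statements is the dynamic BGP neighbors feature on the hub router; here’s a minimal sketch (the AS number and tunnel subnet are made up for illustration):

```
! Hub router: accept BGP sessions from any spoke in the DMVPN tunnel subnet
router bgp 65000
 ! All dynamic neighbors inherit the settings of the SPOKES peer group
 neighbor SPOKES peer-group
 neighbor SPOKES remote-as 65000
 ! Accept inbound BGP sessions sourced from the whole tunnel subnet
 bgp listen range 10.0.0.0/16 peer-group SPOKES
 ! Optional safety net: cap the number of dynamic neighbors
 bgp listen limit 2000
```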
The best description of the fundamental problems of the IT industry I’ve found recently comes from @cloud_borat. Evidently SDN is no different.
When OpenFlow was still fresh and exciting, someone made quite a name for himself by proposing a global load-balancing solution that would install per-session OpenFlow entries in every core switch around the world. Clearly a great idea, mimicking the best experiences we had with ATM SVCs.
Meanwhile some people started using OpenFlow in real-life networks for coarse-grained load balancing that improves the scalability of stateful network services. For more details, watch the video recorded during the Real Life OpenFlow-based SDN Use Cases webinar.
I heard the following pretty bold statement while listening to an episode of my favorite podcast: “Bringing MPLS into the data center is impractical because MPLS requires custom silicon.” Really? How about checking the Intel FM 6000 product brief first?
The Broadcom Trident chipset supposedly also supports MPLS. I couldn’t verify that because Broadcom considers the capabilities of their hardware highly confidential (but if you know more, do write a comment). Absolutely refreshing for a chipset that you get in almost every ToR switch you buy.
TL&DR summary: it depends.
DMVPN networks still confuse some engineers, particularly when it comes to the true differences between Phase 2 and Phase 3 DMVPN. Here’s the explanation that worked for an engineer who sent me a question along these lines.
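In a nutshell: Phase 2 spokes need per-spoke next hops in their routing tables to build spoke-to-spoke tunnels (so the hub cannot summarize), while Phase 3 lets the hub summarize the routes and trigger spoke-to-spoke shortcuts with NHRP redirects. The two commands that make the difference look roughly like this (a sketch; tunnel interface numbers are arbitrary):

```
! Hub tunnel interface (Phase 3): send NHRP redirects when traffic between
! two spokes flows through the hub
interface Tunnel0
 ip nhrp redirect
!
! Spoke tunnel interface (Phase 3): install shortcut routes learned via NHRP
interface Tunnel0
 ip nhrp shortcut
```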
If you plan to attend the Troopers 2014 conference in two weeks, don’t forget to include my full-day SDN workshop on Tuesday in your agenda (the Troopers conference is sold out, but you can still register for the workshop). The topics of the workshop will include:
- Why do we need SDN and what is it?
- OpenFlow, its advantages, drawbacks and scalability challenges;
- Typical OpenFlow and SDN deployment considerations;
- Real-life SDN use cases, both OpenFlow- and non-OpenFlow ones;
- Network function virtualization;
- Software-defined data centers.
What could a small ISP do to limit failure domains? Metro Ethernet and MPLS Virtual Private LAN Service (VPLS) are all the rage, offering customers the promise of connecting all their branch offices together and using the same set of VLANs, with free Layer 2 connectivity between their sites. The choice seems to be: extend the failure domains, or lose the sale, because the customer will simply buy the service from another ISP.
On the very same day that I published the CLI is Not the Problem post I stumbled upon an interesting discussion on the v6ops mailing list. It all started with a crazy idea to modify BGP to use a 128-bit router ID to help operators who think they can manually configure large IPv6-only networks without any centralized configuration/management authority that would assign 32-bit identifiers to their routers.
When Apple launched the new release of iOS last autumn, networking gurus realized the new iOS uses MP-TCP, a recent development that allows a single TCP socket (as presented to the higher layers of the application stack) to use multiple parallel TCP sessions. Does that mean we’re getting closer to fixing the TCP/IP stack?
TL&DR summary: Unfortunately not.