Blog Posts in September 2012
Now imagine none of those APIs were standardized (the various mutually incompatible dialects of Tcl used by Cisco IOS come to mind) – that’s the situation we’re facing in SDN land today.
The current explosion of SDN hype (further fueled by the recent VMworld announcement of Software-Defined Data Centers) has made some networking engineers understandably nervous. This is the question I got from one of them:
I have 8-plus years in Cisco, have recently passed the CCIE R&S written exam, and was looking forward to completing the lab exam when this SDN thing hit me hard. Do you suggest completing the CCIE lab given this new future of networking?
The RIPE65 meeting starts in Amsterdam today, and my OpenFlow/SDN presentation has been accepted as part of the Monday plenary session. You can view the presentation on my webinar demo web site and watch it in the RIPE65 archives.
Autumn must be a perfect time for data center product launches: last week Brocade launched its core VDX switch and yesterday Arista and Cisco launched their new low-latency switches (yeah, the simultaneous launch must have been pure coincidence).
I had the opportunity to listen to Cisco’s and Arista’s product briefings, continuously experiencing a weird feeling of déjà vu. The two switches look like twin brothers … but there are some significant differences between the two:
Arista is launching a new product line today shrouded in mists of SDN and cloud buzzwords: the 7150 series top-of-rack switches. As expected, the switches offer up to 64 10GE ports with wire speed L2 and L3 forwarding and 400 nanosecond(!) latency.
Also expected from Arista: unexpected creativity. Instead of providing a 40GE port on the switch that can be split into four 10GE ports with a breakout cable (like everyone else is doing), these switches group four physical 10GE SFP+ ports into a native 40GE (not 4x10GE LAG) interface.
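To see why a native 40GE interface beats a 4x10GE LAG for large flows, remember how LAGs distribute traffic: a hash of the packet headers pins every flow to one member link, so a single flow can never exceed 10 Gbps. Here’s a minimal Python sketch of that behavior (the hash inputs and member count are illustrative, not Arista’s actual algorithm):

    # Illustrative sketch of per-flow LAG hashing; not vendor code.
    import hashlib

    def lag_member(src_ip: str, dst_ip: str, members: int = 4) -> int:
        """Pick a LAG member link by hashing the flow's addresses;
        every packet of the same flow lands on the same link."""
        digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).hexdigest()
        return int(digest, 16) % members

    # A single large flow always hashes to the same 10GE member link,
    # capping it at ~10 Gbps ...
    print(lag_member("10.0.0.1", "10.0.0.2"))  # same member every time
    # ... while a native 40GE interface is a single pipe: no hashing,
    # so one flow can use the full 40 Gbps.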
But wait, there’s more...
Just in case you enjoyed truly magnificent Internet-free holidays and returned to an overflowing inbox and RSS feeds, here are the most popular posts from July, starting with the future of SDN:
Just in case you haven’t noticed: the RIPE region ran out of unallocated IPv4 addresses last Friday. RIPE NCC members (local Internet registries) can get a single /22 each, and enterprises that want to be IPv4-multihomed can no longer get provider-independent addresses. It just might be time to start considering IPv6 in your data center. Let’s see whether the vendors agree with me.
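For a sense of scale: a /22 is just 1024 IPv4 addresses. A quick check with Python’s ipaddress module (the prefix below is a documentation placeholder, not a real allocation):

    import ipaddress

    # A /22 contains 2**(32 - 22) = 1024 addresses.
    block = ipaddress.ip_network("198.51.100.0/22")  # placeholder prefix
    print(block.num_addresses)                 # 1024
    print(list(block.subnets(new_prefix=24)))  # splits into four /24s

Four /24s is not much room to grow, hence the nudge toward IPv6.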
The Data Center Fabric presentations from the EuroNOG 2011 and RIPE 64 meetings are available on my webinar demo web site. The video of the EuroNOG presentation is on YouTube, and RIPE has its own video archives.
A few days ago the title of this post would have been one of those “find the odd word out” puzzles: how can you build large L3 fabrics when you have to work with ToR switches with no L3 support and you can’t connect more than 24 of them in a fabric? All that changed with the announcement of the VDX 8770 – a monster chassis switch – and a new version of Brocade’s Network OS with layer-3 (IP) forwarding.
Another great question I got from David Le Goff:
So far, SDN has been relying on or stressing mainly L2-L3 network programmability (switches and routers). Why do most people not mention L4-L7 network services such as firewalls or ADCs? Why would those elements not have to be SDNed, with OpenFlow support for instance?
To understand the focus on L2/L3 switching, let’s go back a year and a half to the laws-of-physics-changing big bang event.
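One way to see where the focus comes from: the OpenFlow 1.0 flow match covers only header fields up to L4, so there is nothing in the protocol a firewall or ADC could hook application-level state onto. A rough sketch of the match 12-tuple (field names paraphrased from the OpenFlow 1.0 spec):

    # OpenFlow 1.0 match 12-tuple (paraphrased): header fields only,
    # no payload, no connection state, nothing above L4.
    OPENFLOW_10_MATCH = [
        "ingress_port",
        "eth_src", "eth_dst", "eth_type",
        "vlan_id", "vlan_priority",
        "ip_src", "ip_dst", "ip_proto", "ip_tos",
        "l4_src_port", "l4_dst_port",  # TCP/UDP ports
    ]
    # A stateful firewall or load balancer needs per-session state and
    # often payload inspection; neither is expressible as a flow match.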
The Nexus-focused Packet Pushers discussed a great question during the Cisco Nexus Deep Dive part 2 podcast: do we need LACP on top of UDLD?
Short answer: absolutely.
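Part of the reasoning: LACP includes its own keepalive mechanism. Each end expects to keep receiving LACPDUs (every second with the short timeout, every 30 seconds with the long one) and pulls the link out of the bundle after three missed intervals, so it catches unidirectional failures even without UDLD. A toy model of that receive-timeout logic (intervals per the 802.3ad defaults; the class itself is pure illustration):

    import time

    class LacpLink:
        """Toy model of the LACP receive timeout: a member link is
        pulled from the bundle when no LACPDU arrives for 3 intervals."""

        def __init__(self, interval_s: float = 1.0):  # 1 s = short timeout
            self.interval_s = interval_s
            self.last_pdu = time.monotonic()

        def pdu_received(self) -> None:
            self.last_pdu = time.monotonic()

        def usable(self) -> bool:
            # On a unidirectional link our PDUs may still go out, but
            # none come back, so the timer expires and the link is pulled.
            return time.monotonic() - self.last_pdu < 3 * self.interval_s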
A few days ago Kurt Bales and Cooper Lees gave me access to a test QFabric environment. I always wanted to know what was really going on behind the QFabric curtain and the moment Kurt mentioned he was able to see some of those details, I was totally hooked.
Short summary: QFabric works exactly as I’d predicted three months before the user-facing documentation became publicly available (the behind-the-scenes view described in this blog post is probably still hard to find).
A while ago I described the need for BPDU guard in hypervisor switches and, not surprisingly, got a number of “it’s there” tweets seconds after vSphere 5.1 (which includes BPDU filter) was launched. Rickard Nobel also did a magnificent job of replicating the problem my blog post described and verifying that vSphere 5.1 stops a BPDU denial-of-service attack.
Unfortunately, BPDU filter is not the same feature as BPDU guard. Here’s why.
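In a nutshell: BPDU filter silently drops BPDUs while the port stays up; BPDU guard shuts the port down the moment a BPDU arrives, cutting off the offender. A minimal sketch of the contrast (pure illustration, not any vendor’s implementation):

    def handle_bpdu(port: dict, mode: str) -> str:
        """Illustrative contrast between the two features."""
        if mode == "guard":
            port["state"] = "err-disabled"  # offending device is cut off
            return "BPDU guard: port shut down"
        if mode == "filter":
            return "BPDU filter: BPDU dropped, port stays up"
        return "no protection: BPDU processed by spanning tree"

    port = {"state": "up"}
    print(handle_bpdu(port, "filter"))  # attacker stays connected
    print(handle_bpdu(port, "guard"))   # attacker loses connectivity

With the filter, the offending VM keeps running (and keeps sending); with the guard, the port goes down and the problem becomes immediately visible.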