Category: bridging
Would You Use Avaya's SPBM Solution?
Got this comment on one of my L2-over-VXLAN blog posts:
I found the Avaya SPBM solution "right on the money" to build that L2+ fabric. Would you deploy Avaya SPBM?
Interestingly, I got that same question during one of the ExpertExpress engagements, and here’s what I told that customer:
Stretched ACI Fabric Is Sometimes the Least Horrible Solution
One of my readers sent me a lengthy email asking my opinion about his ideas for a new data center design (yep, I pointed out there’s a service for that while replying to his email ;). He started with:
I have to design a DR solution for a large enterprise. They have two data centers connected via FabricPath.
There’s a red flag right there…
Some People Don’t Get It: It Will Eventually Fail
Mark Baker left this comment on my Stretched Firewalls across Layer-3 DCI blog post:
Strange how inter-DC clustering failure is considered a certainty in this blog.
Call it experience or exposure to a larger dataset: anything you build will eventually fail. Just because you haven’t experienced a failure yet doesn’t mean the system will never fail; it only means you’ve been lucky so far.
Shortest Path Bridging (SPB) and Avaya Fabric on Software Gone Wild
A few months ago I met a number of great engineers from Avaya and they explained to me how they creatively use Shortest Path Bridging (SPB) to create layer-2, layer-3, L2VPN, L3VPN and even IP Multicast fabrics – it was clearly time for another deep dive into SPB.
It took me a while to meet again with Roger Lapuh, but finally we started exploring the intricacies of SPB, and even compared it to MPLS for engineers more familiar with MPLS/VPN. Interested? Listen to Episode 54 of Software Gone Wild.
Reader Comments: Spanning Tree Woes
My latest spanning tree protocol (STP) posts generated numerous comments, some of them relevant enough that I decided to summarize them in another blog post.
Weird Things Happen
The unidirectional link scenario mentioned by Antonio is pretty well known:
Spanning Tree Protocol (STP) and Bridging Loops
Continuing our bridging loops discussion, Christoph Jaggi sent me another question:
Theoretically STP should avoid bridging loops, and yet you claim they cause data center meltdowns. What am I missing?
In theory, STP avoids bridging loops. In practice, there are numerous reasons STP got a bad name.
VLANs and Failure Domains Revisited
My friend Christoph Jaggi, the author of the fantastic Metro Ethernet and Carrier Ethernet Encryptors documents, sent me this question when we were discussing the Data Center Fabrics Overview workshop I’ll run in Zurich in a few weeks:
When you are talking about large-scale VLAN-based fabrics I assume that you are pointing towards highly populated VLANs, such as VLANs containing 1000+ Ethernet addresses. Could you provide a tipping point between reasonably-sized VLANs and large-scale VLANs?
It's not the number of hosts in the VLAN that matters, but the span of the bridging domain (VLAN or otherwise).
Sometimes You Have to Decide How You Want to Fail
Another week, another ExpertExpress session focusing (as is so often the case) on two data centers with VLANs stretched across both of them. This one was particularly irksome, though: the customer was running a firewall cluster stretched across the two locations.
I gave the customer engineers my usual recommendations:
Another Spectacular Layer-2 Failure
Matjaž Straus started the SINOG 2 meeting I attended last week with a great story: during the RIPE70 meeting (just as I was flying home), Amsterdam Internet Exchange (AMS-IX) crashed.
Here’s how the AMS-IX failure impacted RIPE Atlas probes (the worldwide monitoring system run by RIPE) – no wonder, as RIPE uses AMS-IX for their connectivity.
Rearchitecting L3-Only Networks
One of the responses I got to my “What is Layer-2” post was:
Ivan, are you saying to use L3 switches everywhere with /31 on the switch ports and the servers/workstation?
While that solution would work (and I know a few people who are using it with reasonable success), it’s nothing more than creative use of existing routing paradigms; we need something better.
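For readers wondering what “/31 on every switch port” would actually look like, here’s a back-of-the-envelope sketch (using Python’s ipaddress module, a made-up documentation prefix, and purely illustrative names) of how a per-switch /24 could be carved into RFC 3021 point-to-point links, one per server-facing port:

```python
import ipaddress

# Hypothetical addressing plan (made-up documentation prefix): carve one /24
# per top-of-rack switch into /31 point-to-point links, one per server port.
rack_block = ipaddress.ip_network("192.0.2.0/24")

for port, link in enumerate(rack_block.subnets(new_prefix=31), start=1):
    switch_ip, host_ip = link[0], link[1]   # the two addresses of a /31 (RFC 3021)
    print(f"port {port:3}: switch {switch_ip}/31 <-> host {host_ip}/31")
    if port == 4:                           # show just the first few links
        break
```

Every server ends up with its own /31 and a corresponding connected route on its first-hop switch, which is exactly the “creative use of existing routing paradigms” mentioned above.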
More Layer-2 Misconceptions
My “What Is Layer-2 and Why Do You Need It?” blog post generated numerous replies, including this one:
Pretend you are a device receiving a stream of bits. After you receive some inter-frame spacing bits, whatever comes next is the 2nd layer; whether that is Ethernet, native IP, CLNS/CLNP, whatever.
Not exactly. IP (or CLNS or CLNP) is always a layer-3 protocol regardless of where in the frame it happens to be, and some layer-2 protocols have no header at all (apart from inter-frame spacing and a start-of-frame indicator).
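To make that concrete, here’s a minimal sketch (Python, with a made-up sample frame and illustrative helper names) of what a receiver does with an Ethernet frame: the layer-2 header’s EtherType field identifies the layer-3 payload, and that payload is IPv4 because of what it is, not because of where it sits in the bit stream:

```python
import struct

# A few well-known, IEEE-assigned EtherType values
ETHERTYPES = {
    0x0800: "IPv4",
    0x0806: "ARP",
    0x86DD: "IPv6",
    0x8100: "802.1Q VLAN tag",
}

def parse_ethernet(frame: bytes) -> None:
    """Walk the layer-2 header and report which layer-3 payload follows."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    offset = 14
    if ethertype == 0x8100:                  # an optional 802.1Q tag is still layer 2
        (ethertype,) = struct.unpack("!H", frame[offset + 2:offset + 4])
        offset += 4
    print("dst", dst.hex(":"), "src", src.hex(":"))
    print("layer-3 payload:", ETHERTYPES.get(ethertype, hex(ethertype)),
          "starting at byte", offset)

# Made-up frame: broadcast destination, fictional source MAC, EtherType 0x0800 (IPv4),
# followed by the first byte of an IPv4 header (version 4, IHL 5) and some padding.
sample = bytes.fromhex("ffffffffffff" "0200c0ffee01" "0800") + b"\x45" + b"\x00" * 19
parse_ethernet(sample)
```

The receiver only knows what follows because Ethernet has a header to parse; a layer-2 technology without one has to identify its payload some other way (typically by carrying a single, pre-agreed payload type).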
What Is Layer-2 and Why Do We Need It?
I’m constantly ranting against large layer-2 domains, recently going as far as saying “we don’t really need all that stuff.” Unfortunately, the IP+Ethernet mentality is so deeply ingrained in every networking engineer’s mind that we rarely stop to question its validity.
Let’s fix that and start with the fundamental question: What is Layer-2?
MPLS P-Router, Router or Layer-3 Switch?
One of my readers is struggling with the aftermath of marketing gimmicks:
We will implement a new network soon, and we're discussing P-routers versus regular routers versus switches. I'm looking for arguments to go one way or the other.
VXLAN and OTV: The Saga Continues
Randall Greer left a comment on my Revisited: Layer-2 DCI over VXLAN post saying:
Could you please elaborate on how VXLAN is a better option than OTV? As far as I can see, OTV doesn't suffer from the traffic tromboning you get from VXLAN. Sure you have to stretch your VLANs, but you're protected from bridging failures going over your DCI. OTV is also able to have multiple edge devices per site, so there's no single failure domain. It's even integrated with LISP to mitigate any sub-optimal traffic flows.
Before going through the individual points, let’s focus on the big picture: the failure domains.
Finally: a Virtual Switch Supports BPDU Guard
Nexus 1000V release 5.2(1)SV3(1.1) was published on August 22nd (I’m positive that has nothing to do with VMworld starting tomorrow), and I found this gem in the release notes:
Enabling BPDU guard causes the Cisco Nexus 1000V to detect these spurious BPDUs and shut down the virtual machine adapters (the originators of the BPDUs), thereby avoiding loops.
It took them almost three years, but we finally have BPDU guard on a layer-2 virtual switch (why does it matter). Nice!