Brocade VCS fabric has one of the most flexible multichassis link aggregation group (LAG) implementations: you can terminate member links of a single LAG on any four switches in the VCS fabric. Using that flexibility is not always a good idea.
I documented some pitfalls of too much flexibility in a blog post almost three years ago, and spent significant time describing the caveats of haphazardly designed vLAGs during the Data Center Fabrics update session last May.
The challenges described in the blog post and the video are not Brocade’s fault – they just give you plenty of rope to hang yourself with if you believe it’s a good idea to implement a network without spending time on a proper design.
Getting more precise
One of my readers sent me a very valid observation:
Now my question is why you assume 50% of traffic from Server A goes to one link to 1 and 50% of traffic goes to the other link to 2? With vLAG there is still LAG algorithm deciding which frame will go where on the NIC card of the server itself.
He's absolutely correct, so let's be more precise. The load on each link is not exactly 50% (or 25% or 75%); rather, half of the ECMP/LAG hash buckets point to one outbound link and half of them point to the other link. Until VCS Fabric distributes LAG/ECMP traffic based on end-to-end link utilization, the expected results aren’t far off from what I explained in the video, assuming a large number of mice flows. All bets are off if you're sending only a few elephant flows across the fabric.
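A quick back-of-the-envelope simulation illustrates the point. This is just a sketch – it uses CRC32 as a stand-in for whatever hash function the NIC or switch ASIC actually applies to the 5-tuple, and the flow names and rates are made up – but it shows why many mice flows spread almost evenly across hash buckets while a handful of elephant flows can land anywhere:

```python
import zlib

def lag_bucket(flow_key, n_links=2):
    # CRC32 stands in for the hardware hash; a real NIC or switch ASIC
    # hashes the L2-L4 5-tuple, but the bucket-selection principle is the same.
    return zlib.crc32(flow_key.encode()) % n_links

def link_loads(flows, n_links=2):
    # flows: list of (flow_key, rate_in_mbps) tuples
    loads = [0] * n_links
    for flow_key, rate in flows:
        loads[lag_bucket(flow_key, n_links)] += rate
    return loads

# 10,000 mice flows at 1 Mbps each: hash buckets split the load almost evenly
mice = [("flow-%d" % i, 1) for i in range(10000)]
print(link_loads(mice))        # roughly an even split

# 4 elephant flows at 2,500 Mbps each: anything from an even split to
# all 10 Gbps on one link, depending on how the four hashes happen to land
elephants = [("elephant-%d" % i, 2500) for i in range(4)]
print(link_loads(elephants))
```

With thousands of small flows the per-link load converges toward 50/50; with four large flows the outcome is a lottery with only a handful of tickets.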
Want to know more?
| If you need… | then… |
|---|---|
| An introduction to data center networking technologies | Watch the Data Center 3.0 webinar |
| An overview of data center fabric solutions from leading networking vendors | Watch the Data Center Fabrics webinar |
| Design guidelines for building a leaf-and-spine fabric (and even more design guidelines) | Read the Data Center Design Case Studies book |
| … all of the above? | Buy the ipSpace.net subscription |
| Architectural or design advice, design validation or a second opinion | Use my ExpertExpress service |