Multi-Chassis Link Aggregation (MLAG) Basics

If you ask any networking engineer building layer-2 fabrics the traditional way about their worst pains, I’m positive Spanning Tree Protocol (STP) will be very high on the shortlist. In a well-designed, fully redundant hierarchical bridged network where every device connects to at least two devices higher in the hierarchy, you lose half the bandwidth to the whims of STP loop prevention.

You REALLY SHOULD not build layer-2 data center fabrics with spanning tree anymore¹, but if you’re still stuck in STP land, you can try to dance around the problem:

  • Push routing as close to the network edge as you can. This works if your environment already uses overlay virtual networking (for example, VMware NSX or Docker/Kubernetes), otherwise you might get a violent kickback from the server admins when they realize they cannot move the VMs at will anymore;
  • Play with per-VLAN costs in PVST+ or MSTP, ensuring the need for constant supervision and magnificent job security.

Considering the alternatives, multi-chassis link aggregation just might be what you need.

Link aggregation is an ancient technology that allows you to bond multiple parallel links into a single virtual link (link aggregation group – LAG). With the parallel links replaced by a single virtual link, STP detects no loops, and all the physical links can be fully utilized.
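
In case you’re wondering how traffic gets spread across the member links without reordering frames within a flow, here’s a minimal Python sketch of the usual hash-based member selection. It’s purely illustrative (the interface names and hashed fields are made up); real switches hash a configurable mix of MAC addresses, IP addresses, and port numbers in hardware.

    # Illustrative sketch: map each flow onto exactly one active LAG member link.
    # A flow always hashes to the same member, so frames are never reordered,
    # but a single flow can never use more than one member link's bandwidth.
    import hashlib

    def pick_member(src_mac: str, dst_mac: str, active_members: list) -> str:
        if not active_members:
            raise RuntimeError("LAG is down: no active member links")
        digest = hashlib.sha256(f"{src_mac}-{dst_mac}".encode()).digest()
        return active_members[int.from_bytes(digest[:4], "big") % len(active_members)]

    members = ["eth1", "eth2", "eth3", "eth4"]
    print(pick_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", members))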

Vendors like to use snazzy marketing terms instead of link aggregation. You’ll hear about port channel, EtherChannel, link bonding, or multi-link trunking.

I was also told that link aggregation makes your bridged network more stable². Every link state change triggers a Topology Change Notification in spanning tree, potentially sending a large part of your network through the STP listening-learning-forwarding state diagram. A link failure of a LAG member does not change the state of the aggregation group (unless it was the last working link in the group), and since STP rides on top of aggregated interfaces, it does not react to the failure.
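
The same behavior is easy to express as a trivial sketch (again illustrative, not how any switch is actually coded): the aggregated interface that STP sits on stays up as long as at least one member link is up, so a single member failure never makes it into the spanning tree topology.

    # Illustrative sketch: the LAG state visible to STP is the logical OR of its
    # member-link states, so losing one member changes nothing from STP's view.
    def lag_oper_state(member_states: dict) -> str:
        return "up" if any(member_states.values()) else "down"

    before = {"eth1": True, "eth2": True}
    after_failure = {"eth1": True, "eth2": False}   # one member link fails
    print(lag_oper_state(before), "->", lag_oper_state(after_failure))  # up -> up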

Now imagine you could pretend that two physical boxes use a single control plane and coordinated switching fabrics. Links terminated on the two physical boxes would seem to terminate within the same control plane, and you could aggregate them using LACP. Welcome to the wonderful world of Multi-Chassis Link Aggregation (MLAG).
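
The trick hinges on how LACP decides what it may bundle: a device aggregates links into the same LAG only when the partner advertises an identical system ID and key on all of them. The rough Python sketch below illustrates that decision (the system IDs and keys are invented for the example); an MLAG pair simply advertises a shared virtual system ID toward the attached device instead of each member’s real one.

    # Illustrative sketch of the LACP bundling decision an MLAG pair exploits.
    from collections import namedtuple

    LacpPartner = namedtuple("LacpPartner", ["system_id", "key"])

    def can_bundle(partners):
        # All links may join one LAG only if every partner looks identical.
        return len(set(partners)) == 1

    # Links to two independent switches advertising their own system IDs: no single LAG.
    print(can_bundle([LacpPartner("aa:aa:aa:aa:aa:01", 10),
                      LacpPartner("bb:bb:bb:bb:bb:02", 10)]))   # False

    # Links to an MLAG pair advertising a shared virtual system ID: bundled.
    print(can_bundle([LacpPartner("02:00:00:00:00:01", 10),
                      LacpPartner("02:00:00:00:00:01", 10)]))   # True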

MLAG nicely solves two problems:

  • When used in the network core, multiple links appear as a single link. No bandwidth is wasted due to STP loop prevention while you still retain almost full redundancy – just make sure you always read the small print to understand what happens when one of the two switches in the MLAG pair fails.
  • When used between a server and a pair of top-of-rack switches, it allows the server to use the full aggregate bandwidth while still retaining resiliency against a link or switch failure.

Standardization? Not Really

MLAG could be a highly desirable tool in your design/deployment toolbox, but few vendors have taken the pains to start the standardization effort, the notable exception being Juniper with ICCP (RFC 7275). EVPN is also supposed to be able to solve most of the MLAG control-plane challenges, but it’s questionable whether the resulting complexity is worth the effort.

Architectural Approaches

If you want to make two (or more) devices act like a single device, you have to build some sort of a clustering solution. Doing it from scratch shouldn’t be too hard, but when you have to bolt this functionality on top of a decade-old heap of real-time kludges³ the challenges become interesting.

The architectural approaches used by individual vendors are widely different:

  • A common early approach was to turn all but one of the control planes of the MLAG cluster members into a half-comatose state. Cisco VSS could do that with two devices; Juniper (Virtual Chassis) and HP (IRF) reused their stackable-switch technologies.
  • Modern implementations use cooperative control planes (Cisco vPC, Arista or Cumulus MLAG).
  • A centralized control plane was also a popular solution in the heyday of orthodox SDN.

More information

You’ll get a high-level overview of network virtualization, LAN reference architectures, multi-chassis link aggregation, port extenders, and large-scale bridging in the Data Center 3.0 for Networking Engineers webinar.

Revision History

2022-05-08
Significant rewrite: removed references to old technologies and obsolete marketing gimmicks, and added a brief mention of ICCP and EVPN.

  1. You should also forget about the “wonderful” proprietary fabric technologies promised by the networking vendors during the early 2010s.

  2. Considering we’re discussing bridged fabrics, I should probably say less unstable.

  3. Called Network Operating System because marketing.

29 comments:

  1. I think Nortel was the first to provide this feature with what they called "Split Multi-Link Trunking" on the 8600.

    The concept is easy to sell to management and it works very well - until something goes wrong, that is - then all hell breaks loose. As you rightly said, "understand what happens if the switch hosting the control plane fails" - or even a downstream switch, for that matter. A number of years ago I rolled it out on a large campus and, a few catastrophic failures later, removed it all.
  2. You hit all the big points perfectly. Network Engineers like to push routing down to the TOR. Spanning tree is annoying. VSS is a hot mess. VPC isn't horrible, but it isn't a standard. There isn't an RFC or IEEE standard to read to make sure it is working correctly. The Juniper matrix is awesome, but expensive.

    Maybe I am a curmudgeon. I'd like to think that it's just healthy paranoia. :)
  3. still, NOBODY does it better or faster than Alcatel-Lucent when you run it over MPLS
  4. Please help me understand: how does MPLS fit into this picture?
  5. First off, you have to think of it as if you were the SP providing a redundant service (VPLS/PW) through MCLA (or MC-LAG in ALU language). ALU uses a proprietary protocol to sync the 2 nodes (acting as one node). This protocol, which runs over IP, should run over a protected (redundant and FRRed) network as the nodes must always be in sync (this is true for every MCLA solution), so MPLS fits nicely into the picture here as well. There are many more details; try looking for MC-LAG.
  6. While I by no means think that I have a large environment, Nortel's Split Multilink Trunk has been working just fine in my main campus. 17 data closets, a handful of physical servers, and a dozen VMware hosts are all link-aggregated to two 8600 switches with two to eight gig links in a Split Multilink Trunk.

    I feel that someone needs to go to bat for Nortel / Avaya, since they were doing SMLT way before Cisco's VSS came out to play.
  7. And it would seem now that 3Com (then H3C, now HP) did this for a long time - it's called IRF, I believe. If we take the Cisco blindfold off, it would be interesting to see if there is enough collective experience to compare different 'MCLA' solutions...
  8. Could you point me to a (hopefully deeply) technical paper explaining how it works? I would love to compare solutions from various vendors.
  9. As above, would you have a link to a technical document describing IRF? I got a whitepaper during the TechFieldDay, but based on its technical level, it was probably targeted @ Gartner&Co.
  10. Here's a start:
    http://www.trcnetworks.com/nortel/Data/Swiches/8600/pdfs/Split_multi_Link_Trunking.pdf

    ...and a highlight:
    "Spanning Tree Protocol is disabled on the SMLT ports"

    Not relying on STP for redundancy is one thing. Switching it off is a whole other thing. Deploy SMLT and hope that nobody ever loops two edge switches? No thanks.

    Nortel has an extra twist: R(outed)SMLT, which is kind of like VPC + HSRP. Except there's no virtual router IP. Give your routers x.x.x.1 and x.x.x.2. Configure .1 as the gateway on end systems. If .1 fails, .2 assumes the dead router's address.

    If there's a power outage, and only .2 boots back up? You're done. (though there's a write-status-to-nvram fix for this)

    Come to think of it, it's a lot like vPC in that regard!
  11. The best that I can do right now is user guides, but they have done a good job for me in explaining how the process works.


    Basic deployment scenarios:
    http://www116.nortel.com/docs/bvdoc/ene_tech_pubs/SMLT_and_RSMLT_Deployment_Guide_V1.1.pdf

    Campus design guide (outlines link aggregation and loop detection deployment):
    http://www142.nortelnetworks.com/mdfs_app/enterprise/TCGs/pdf/NN48500-575_2.0_Large_Campus_TSG.pdf

    Configuration Guide for SMLT (includes some of the better technical information):
    http://www142.nortelnetworks.com/mdfs_app/enterprise/ers8600/5.1/pdf/NN46205-518_02.01_Configuration-Link-Aggregation.pdf

    Configuration Guide for RSMLT (both chassis share layer 3 information like OSPF/BGP state)
    http://www142.nortelnetworks.com/mdfs_app/enterprise/ers8600/5.1/pdf/NN46205-523_02.02_Configuration_IP_Routing.pdf
  12. Question about VSS style connections, is there any limitation to the distance (latency) that the 2 chassis can be apart from each other? Say you had a GigE ring between 3 sites and wanted to not route, but instead have 1 site use MCLA to the other 2 sites (the other 2 sites would be a VSS pair), would that work?
  13. Don't even think about that. If the link between the VSS sites goes down, one of the boxes is dead. vPC would be somewhat usable for something like that (and OTV would be perfect), but definitely not VSS.
  14. Indeed. I have been hit by that very vPC design, umm, "choice". Power failure, only 1 Nexus5K came back up, no vPC. I've been told that NX-OS v5, coming in 2011, will resolve this.
  15. Hm, I'm not used to getting around the HP site; it sounds like they did not rebrand H3C yet, so a lot of the guides are still on H3C sites. Probably the best place to start (according to Google :-) is
    http://h3c.com/portal/Technical_Support___Documents/Technical_Documents/
    for all equipment, then move to each switch model if needed - it seems IRF is supported on many models (12k, 9500E, 7500E, 58xx). I could not see whether IRF between different models is possible.

    Fairly thin on
    http://h3c.com/portal/Products___Solutions/Technology/IRF/

    One of config guides can be found on
    http://h3c.com/portal/download.do?id=1038276

    There seems to be no restriction on STP; in fact it seems this supports even MPLS and many other features. I haven't had a chance to lay my hands on any of these products - the above is only from reading the documents :-)
  16. +1 for SMLT. Having said that, it has taken a very long time to make it work properly between all the different products in the Avaya née Nortel lineup. For a while, there were many bugs affecting one MLT flavour or another. Likewise, VSS has many things to improve upon and fix.

    There was a prolonged marketing-guy catfight @ networkworld between Cisco's VSS and Nortel's SMLT/RSMLT.
  17. You mention that STP loop prevention mechanisms can suck up half of the bandwidth. I almost spit out my coffee when I read this. Really? That seems like an extremely high number.
  18. Can I ask what the issue is with VMotion in top of rack? When you point ESX/i at a default gateway you can VMotion over different subnets just fine. Is there something I'm missing?

    Thanks!
  19. I said you lose half of your bandwidth, not that STP sucks it up 8-)

    STP loop prevention turns off (sets them to "blocking") half of the links in a dual-tree design displayed in the first diagram (blocked links are grayed out in the diagram). STP itself uses very little bandwidth.
  20. vMotion works fine across subnets, but the VM you move across a L3 boundary (from the perspective of VM NIC) has to acquire a different IP address (or you have to use routing tricks). See also the "Routing implications" part of my vMotion post:

    http://blog.ioshints.info/2010/09/vmotion-elephant-in-data-center-room.html
  21. Hrmm, I see your point. I mistook the issue to be vMotion itself. Assuming no need for layer 2 adjacency couldn’t VRFs or some other technology that solves overlapping IP address space work? Perhaps limiting to one cluster per rack kind of thing?
  22. Can't do a thing if you want to retain established sessions. Without that requirement, you don't need vMotion either.
  23. Nice article, and STP loops can be a nightmare and worrisome. That is why I love etherchannels aka portchannels. It's a great feature to take multiple physical links and make them logically one: no bandwidth is wasted and redundancy is achieved!!

    Cisco VSS is the way of the future as it will do away with STP. ;)
  24. VSS, vPC (or any other MCLA technology) can only solve the STP problems in dual-tree designs. If you have a less nicely-structured network (or uplinks to more than two boxes), you need TRILL or an equivalent to get rid of STP.

    BTW, VSS is just stacking-on-steroids; I prefer vPC.
  25. Downstream loops can be prevented with two features:

    1) Simple Loop Protection Protocol - SMLT switches send probes down their SMLT links to the closet switch. If you see the SLPP hello packet return on another interface you know that you have a loop condition

    2) Control Plane limiting for Broadcast / Multicast traffic can be configured on a per port basis. I configure it on my SMLT links to down the interface and/or VLAN that is sending excess broadcast or multicast traffic

    In both solutions, this is configured on both SMLT switches with the trigger thresholds set to different levels (5 SLPP hello probes vs 50) so that only one side of the SMLT should be disabled during a downstream loop.
  26. Hey, with regards to IRF, basically you can cluster any switch with a 10Gb/s interface on it, with the following general guidelines:
    Chassis (12500, 9500, 7500): Currently up to 2 devices can be clustered. Rumor is that will be increased to 4 in the future.
    5820: Up to 9 devices can be clustered
    5800, 5500, 5120: Up to 8 devices can be clustered

    Certain mixed devices can be clustered using IRF, specifically 5820's and 5800's.

    IRF Clustering is fully stateful, and supports basically all the regular switch featuresets. With regards to STP, an environment that uses all IRF on the Core or Aggregation devices can remove STP from the environment, and use LACP to provide path redundancy instead.

    I believe that the MSR routers support IRF as well, but I haven't configured it myself.
  27. I believe that Arista's MLAG works in a very similar fashion to Cisco's vPC, right down to aligning L3 gateway selection to avoid hairpinning routed traffic. It's supposed to work with any LACP-capable host or switch downstream, but I don't know whether the control-plane communication between the MLAG peers is proprietary or not. In practice, though, I can't see a big advantage to a standards-based approach there, as you're unlikely to ever have an MLAG/vPC/etc. landing on dissimilar switches from the same vendor, let alone different vendors.
  28. it should be called DCLAG as in Dual.
    Replies
    1. There are solutions (HP IRF, stackable switches, Juniper VCF, Brocade, maybe Avaya, not sure about EVPN implementations) that allow the LAG to be terminated on more than two switches. Not that it would make sense in most cases...