Is MLAG an Alternative to Stackable Switches?

Alex was trying to figure out how to use Catalyst 3850 switches and sent me this question:

Is MLAG an alternative to use rather than physically creating a switch stack?

Let’s start with some terminology.

Link Aggregation Group (LAG) is the ability to bond multiple Ethernet links into a single virtual link. A LAG (as defined in the 802.1AX standard) can be used between a pair of adjacent nodes. That's good enough if all you need is more bandwidth, but it doesn't help if you want to increase the redundancy of your solution by connecting your edge device to two switches while using all uplinks and avoiding the shortcomings of STP. Sounds a bit like trying to keep the cake while eating it.
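
In case you've never configured LACP, here's a minimal sketch of a traditional LAG on a Cisco IOS switch (interface and port-channel numbers are made up; other vendors use a different CLI, but the underlying LACP machinery is the same):

  ! Two physical uplinks toward the same adjacent switch, bundled into one LAG.
  ! "mode active" tells the switch to negotiate the bundle with LACP (802.1AX).
  interface range GigabitEthernet1/0/1 - 2
   channel-group 1 mode active
  !
  ! The bundle appears as a single logical interface.
  interface Port-channel1
   switchport mode trunk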

Enter the magical world of MLAG (Multichassis LAG) – the ability to present two or more physical boxes (members of an MLAG cluster) as a single logical device from the LACP perspective.

Most MLAG implementations (apart from those based on ICCP or EVPN) are totally proprietary, and vendors use numerous tricks to get the job done. Some use a central control plane with all other devices acting as stupid packet forwarders (aka SDN or stackable switches), in which case they're sort-of still running traditional LAG/LACP. Others use a traditional distributed control plane with additional control-plane protocols running between MLAG cluster members.

Long story short: stackable switches implement MLAG with a centralized control plane.
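
That's also why a Catalyst 3850 stack can bundle ports sitting on different stack members into a single LAG (cross-stack EtherChannel): there's one control plane, so LACP neither knows nor cares that the member links terminate in different physical boxes. A minimal sketch, assuming a two-member stack and made-up interface numbers:

  ! One member link on stack member 1, another on stack member 2.
  ! The first digit in the interface name is the stack member number.
  interface GigabitEthernet1/0/1
   channel-group 10 mode active
  !
  interface GigabitEthernet2/0/1
   channel-group 10 mode active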

To make matters even more confusing, vendors use different names for their MLAG implementations. Cisco vPC, Cisco VSS, HP IRF, Arista MLAG… they all do more-or-less the same thing from the perspective of an edge device.
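
It also means the edge device configuration doesn't change at all: whether its two uplinks terminate on a single switch, a switch stack, or a vPC/MLAG pair, it configures a single LACP bundle, because the MLAG cluster presents itself with a single LACP system ID. Here's a sketch of a hypothetical edge switch with one uplink to each MLAG cluster member (interface numbers made up):

  ! Uplink #1 goes to MLAG member A, uplink #2 to MLAG member B,
  ! yet the edge switch sees a single two-link LACP bundle.
  interface TenGigabitEthernet1/0/49
   description Uplink to MLAG member A
   channel-group 20 mode active
  !
  interface TenGigabitEthernet1/0/50
   description Uplink to MLAG member B
   channel-group 20 mode active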

To further confuse the innocent, some vendors call the centralized control plane approach stacking, while getting the same results with a distributed control plane is called MLAG. Go figure.

Want to know more?

MLAG is covered in numerous ipSpace.net webinars, all of them available with the standard ipSpace.net webinar subscription; the SDN101 webinar is also available with the free subscription.


11 comments:

  1. Another uninformative blog post, another advertisement. Your blog gets kinda boring. Why not deleting my inconvenient comment?
    Replies
    1. Disagreed, I learned something. You can't judge the value of content for other people.
    2. Hey my Anonymous friend. So nice to see you again ;)

      "Why not deleting my inconvenient comment?"

      I understand that everyone has their unique way of sharing their experience and accumulated wisdom, and giving back to the community that helped them grow. If this is the best way for you to create value, who am I to stop that?
  2. I draw a big distinction between MLAG types based on whether you can operate/upgrade individual members independently or not (i.e. whether they share a control plane).

    If you intend to connect hosts with MLAG as a redundancy measure, an MLAG built on a shared-control-plane device where you can't do maintenance on individual members (like a 3850 stack) is pointless and wasteful, as you'll need to reboot the whole stack to upgrade it. MLAG on a 3850 stack is only useful as a means to augment available bandwidth. VSS is slightly better, but not by much.

    vPC on NX-OS is better in that regard; the control planes of the switches stay distinct, and you can generally do maintenance (including upgrades) on one switch at a time while keeping one side of the MLAG up all the time.
    Replies
    1. Thank you! Absolutely agree... and as I was thinking about this, I stumbled upon a whole new can of worms. More in another blog post.
    2. If I remember correctly, using vPC requires the same code on both sides, and if you are using dual-homed FEX... good luck (using FEX is dumb, it's just laziness). You can try ISSU, but there are a lot of caveats (you can't go from any version to any version, and you need to meet a lot of requirements), and it doesn't work all the time.
    3. They don't require the same version.
      Also, we don't do dual-homed FEXs; dual-homed FEXs, as you state, lead to awful and useless complexity and brittleness. We dual-home (LACP as much as possible) all our servers/hosts to single-homed FEX pairs. That way we have a fairly plain topology that is resilient to both outages and maintenance and provides all-active links... and thanks to the stability (or lack thereof) of NX-OS code (built-in chaos monkey, as I call it) we get to test the loss of a switch regularly without impact to the service (only the server admins notice they temporarily lost a server uplink).
    4. Sneaking suspicion: the can of worms will contain L2 vs. L3. While most MLAG implementations (including Cisco's vPC) will by now have fixed some of the 'interesting' L3 issues that came with them, it's still best to view MLAG as a tool for L2 architectures.
      It is, however, perfectly feasible to design & build an L3-only fabric, including redundant ToR/leaf switches and end-host connectivity.
  3. I would compare MLAG with disk RAID systems: it's more about uptime and surviving single hardware failures than anything else. As with RAID, the server/switch needs rebooting once in a while to upgrade, but hopefully that can be done in a scheduled maintenance window.
  4. I know it's splitting hairs, but VSS is the acronym used to describe the cluster of switches, whereas MEC (Multichassis EtherChannel) is the acronym used to describe a VSS MLAG.

    You can obfuscate the answer even more if you get into talking about chassis devices (e.g. Cisco 4500/6500/6800/9300, etc.) that support multiple management cards (supervisor engines, route processors, what have you) and multiple interface line cards, all within one chassis.