Would You Use Avaya's SPBM Solution?

Got this comment to one of my L2-over-VXLAN blog posts:

I found the Avaya SPBM solution "right on the money" to build that L2+ fabric. Would you deploy Avaya SPBM?

Interestingly, I got that same question during one of the ExpertExpress engagements and here’s what I told that customer:

TL&DR: If you’re OK with being locked into a single vendor, Avaya’s SPBM would be a perfect fit. Are you OK with that?

Before going into the details: I love some of the things Avaya did, and most of them make perfect sense. Would I recommend Avaya’s fabric to a customer? I might, after carefully explaining the implications of the rest of this blog post.

Avoiding lock-in (or not)

To be fair, it’s really hard to avoid some sort of lock-in, and you’d be in exactly the same position when considering Cisco’s ACI or FabricPath, Juniper’s Junos Fusion or Virtual Chassis Fabric, or Arista’s CVX/CloudVision.

However, while you can’t avoid some sort of lock-in, you could try to minimize it (if that’s your goal), sometimes trading lock-in for complexity. In this particular example, you could use EVPN over MPLS or VXLAN to get similar results that you’d get with Avaya’s fabric (apart from ease of deploying IP multicast).

For more details on SPBM and Avaya’s extensions, listen to the podcast I did with Roger Lapuh and watch his part of the Data Center Fabrics or Leaf-and-Spine Fabric Architectures webinar.

EVPN used with MPLS or VXLAN is definitely more complex than running a single routing protocol (IS-IS within SPB) that carries core topology as well as L2VPN and L3VPN address families, but it’s a standard solution, scales beyond a single IS-IS area, and you can deploy it over any MPLS or IP core. Avaya’s fabric (like any of the solutions from other vendors I mentioned above) requires an end-to-end Avaya network.
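To make that complexity comparison a bit more concrete, here's a rough sketch of the BGP side of an EVPN+VXLAN leaf switch in FRR-like configuration syntax (the ASNs and addresses are made up for illustration; you'd still need the underlay routing, VTEP loopback, and VNI-to-VLAN mappings on top of this):

```
! Hypothetical leaf switch: BGP carrying the EVPN address family
! on top of a separately configured IP underlay and VXLAN data plane
router bgp 65001
 neighbor 10.0.0.1 remote-as 65000
 !
 address-family l2vpn evpn
  neighbor 10.0.0.1 activate
  advertise-all-vni
 exit-address-family
```

Contrast that with SPB, where a single IS-IS instance carries the core topology together with the L2VPN and L3VPN reachability information.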


  • SPBM is a standard technology, and you could build a multi-vendor SPBM fabric. However, all the interesting features (IP routing, L3VPN, distributed router functionality) are Avaya’s extensions to SPBM and are currently not implemented by any other vendor (that I’m aware of).
  • One might argue that it would be theoretically possible to build a multi-vendor SPBM core and deploy Avaya switches only at the edges. I don’t think that would work in all cases: if I understood their approach correctly, they sometimes need the core switches to set up multicast distribution trees based on Avaya-specific IS-IS information.

Using well-known technologies

Sometimes you have to look beyond technology and consider soft factors, for example readily-available skills. There are zillions of engineers familiar with IP and IP routing protocols, and thousands of engineers familiar with MPLS. Fewer people have had in-depth exposure to PBB (the SPBM data plane), and only a few have hands-on SPBM experience (not to mention experience with Avaya’s extensions).

Also, all other vendors are moving to L2-over-VXLAN-over-IP with an EVPN control plane. It remains to be seen whether that’s a wise technology decision or a lemming reflex, but regardless of implementation differences, the skills gained working with gear from Vendor A remain somewhat relevant even if you move to Vendor B. Working in a VXLAN+EVPN environment is thus better for your career prospects than working in an SPBM environment.

There might be a whole alternate universe out there that I’m not seeing that relies heavily on PBB and SPBM. If you happen to be living in that universe and reading my blog please write a comment.

More on VXLAN transport

You could build an Avaya fabric on top of an IP fabric using VXLAN as the transport mechanism, but you wouldn’t get line-rate performance (going from PBB to VXLAN encapsulation cannot be done in a single pass through Broadcom’s Trident-2 chipset), and you’d have an interesting tunneling challenge.

While most VXLAN-based solutions build automatic tunnels based on the egress VTEP IP address, Avaya’s SPBM-over-VXLAN solution uses what look like point-to-point VXLAN tunnels and runs IS-IS on top of them. It’s thus ideal when you want to link SPBM islands across an IP core, but not when you’d like to connect edge switches across an IP transport network.

To use or not to use?

Sometimes it makes sense to use a well-integrated proprietary product, particularly if you’re building smaller islands connected to a standards-based core. Sometimes it makes sense to build a network based on open standards that is easily extended with gear from multiple vendors. The choice is yours, and if you need a second opinion beyond the generic thoughts outlined in this blog post, there’s always ExpertExpress online consulting service.


  1. Something confuses me about the debate between VXLAN and other overlay technologies. VXLAN is typically initiated by hosts on the network, e.g., ESXi or Hyper-V. It is an *overlay* network. It creates a Layer-2 segment that sits on top of the underlay network. Great. That gives us a huge amount of flexibility. Millions of networks could be created/removed as services are created/removed. It can support micro-segmentation, ephemeral networks and multi-tenancy. Those services could be VMs, containers, whatever.

    That's fantastic and completely eliminates any limitations in the underlay network. Also, it's controlled by the end-host, so VMware and Microsoft are free to innovate as quickly as they like without waiting for the underlay architecture to catch up.

    But what technology do we build the underlay with?

    1. A routed IP network is a mature technology and works great except there are a number of limitations:
    1a. There is (possibly) a lot of manual configuration. Do we use a truckload of /30 networks all over the place?
    1b. There is no built-in support for multi-tenancy (for the underlay) unless we deploy VRF, MPLS, RFC2547, etc. Those features are not available unless you start buying much more expensive gear.

    2. You can use regular VLANs with Spanning Tree. It's 2016. STP stinks. We all want out of that dungeon.

    3. SDN is an option where I continue to have doubts. My old network with standard routing protocols was distributed; failures were localized. If I have a pair of controllers working to orchestrate everything, then I have a centralized system with a single point of failure.

    4. We have SPB and TRILL. They support millions of segments. They support multi-tenancy. They support ECMP. We have chipsets in inexpensive gear that can move these types of Ethernet frames at line rate.

    I agree that vendor lock-in should be a consideration. I agree there are not very many engineers that know SPB and TRILL. But I also know the IT field is staffed by capable people that can accommodate change in technology better than any other segment of the population!

    Can't I ask for a world where VXLAN sits on top of something sane?
    1. Underlay with unnumbered BGP (or any other routing protocol that can give you ECMP routing). EVPN as an additional AF only for the places where you need to break out into the physical world. Multi-tenancy can be implemented with microsegmentation (Calico does that).
      SDN means a centralised network view; it doesn't mean a centralised SPOF. Controllers can scale horizontally.
      SPB/TRILL limit the ability of vendors to innovate, i.e. peddle new gear to their clients. Most hardware vendors can do VXLAN, and we already have GENEVE on the horizon.
    2. I like the concept of using unnumbered interfaces. That takes a lot of admin overhead out of the system. Also, I'm comfortable with using BGP/MPLS/VRF. I really want our "legacy" technologies to work because they have been proven stable for 10+ years.

      But we still have a couple problems with a traditional IP core.

      1. The IP/LDP/MPLS/BGP/VRF/OSPF stack is a lot of moving pieces. I may be comfortable with each of these protocols, but I'm *not* comfortable with how *many* protocols I need to get the job done. Also, I've been doing this for fifteen years. A more junior person is going to have difficulty.
      2. I'm not going to get BGP/MPLS/VRF in anything but the top-end datacenter gear. That costs a lot of money. Ugh.

      I guess if you are building a datacenter, you might be in the price-range for the proper gear to do BGP/MPLS/VRF. But I build a lot of enterprise networks too. I have user-facing closet switches and enterprise core. For Cisco people, we're talking Catalyst 4500, 3850 and 3650. For HPE, we're talking the 5400R.

      I have many of the same needs as a datacenter.

      1. I want ECMP. I don't want to shut off all links but one (STP).
      2. I'd like multitenancy. Example: My guest wireless shouldn't interact with my other traffic.
      3. I might want a VLAN to span across multiple different areas of a campus. Example: I have a campus with five buildings, and each of them needs a VLAN (security partition) for the HVAC controls. Should that be one VLAN or five? Logically, it's only one application with one security profile. But I don't want STP to span across the entire campus.

      It seems an SPB or TRILL type of technology would solve my problems. If I could get one of these into the type of gear I use for enterprise builds I could get rid of STP forever!!
    3. @Dan (Q#1)

      "But what technology do we build the underlay with?"

      Simple routed network (like Internet). Works every time.

      "There is (possibly) a lot of manual configuration. Do we use a truck load of /30 networks all over the place?"

      One VLAN per ToR switch if you don't need redundancy. One VLAN per ToR switch pair if you need redundant server access. L3 toward the spine. Well covered in leaf-and-spine fabric designs webinar.

      "There is no built-in support for multi-tenancy (for the underlay) unless we deploy VRF, MPLS, RFC2547, etc."

      Why do you need multi-tenancy in underlay if you're running multi-tenant networks in the overlay? Separating storage and VXLAN? Use two VLAN-based VRFs.
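      To put a number on the "truckload of /30s" worry: even if you did address every fabric link individually, the point-to-point subnets for a mid-sized leaf-and-spine fabric fit into a single small block, especially with /31s (RFC 3021). An illustrative Python calculation (the fabric size and address block are made up for the example):

```python
import ipaddress

leaves, spines = 16, 4
links = leaves * spines          # every leaf has one uplink to every spine

# Carve /31 point-to-point subnets out of a single /24
fabric = ipaddress.ip_network("10.255.0.0/24")
p2p = list(fabric.subnets(new_prefix=31))

print(links, len(p2p))           # 64 links, 128 available /31s in one /24
```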
    4. @Dan (Q#2) - If you need physical multi-tenancy in a DC environment go with EVPN+VXLAN. Supported by most DC switching vendors these days.

      Campus is a different story. SPB might be interesting there because many existing chipsets support PBB (for SP applications). TRILL is a totally new encapsulation, which means a new chipset; I'd prefer VXLAN over TRILL.
    5. @Dan: You exactly expressed my opinion on the topic. All unnecessary complexity just makes the whole system hard to configure & maintain, and less reliable (more pieces => more bugs, more possibilities to fail).

      Thus we went for a much simpler and more straightforward approach and built a national academic network using TRILL. It's not a new encapsulation; BCM ASICs have supported it for several years already, the vendors just haven't enabled it in software. And of course, all programmable ASICs can support it as well.

      Experience? TRILL took all the good things from the routed IP world and brought them to layer 2. You configure it in a few minutes and can forget about it, because it just works.

      For an enterprise network, I wouldn't even start thinking about MPLS or VXLAN anymore. But obviously, solutions like TRILL don't get much attention from vendors, since complex technologies could be sold for more money and you'll probably also need to buy some expert services just to get them working. But we, end-users, need to push for solutions we need, instead of blindly buying complex solutions just because they are loudly marketed.
  2. I work at Avaya on SPB, and I feel they did not advertise/market it properly in the early years. They did not create an ecosystem. Now that they have started marketing it, the world has changed, so they are just playing catch-up. Even sales doesn't know much about the technology. With uncertainty hanging around the company, it will be even more difficult. Regarding the comparison with VXLAN: a VXLAN packet has more overhead than an SPB packet.
  3. Check out Apstra's AOS if you don't like vendor lock-in.
    1. And you'll never be locked into AOS, right? Wake up...
  4. What about the H3C S6830 (HP 5940): MPLS, VPLS, EVPN, BGP, VRF, SPB, TRILL... 48x SFP+ (10G) + 6x QSFP+ (100GbE) for ~$9,000. Does anyone really use those in data centers?
  5. I found a link to a video that I remember viewing a while back.
    The presentation is talking about their fancy new ASIC in the Catalyst 3850 and how the processing pipeline is programmable with new microcode. They are crowing about being able to handle wireless CAPWAP traffic on-chip. But check out the 00:29:30 mark. The presenter claims they have not committed to, but have considered microcode to do TRILL and SPB!

    That was nine months ago. I headed over to the newest release notes. Looks like they are really serious about new features. The 3850 microcode can now do MPLS framing and the software can do LDP.

    They have also introduced something called Campus Fabric which looks like it uses LISP.

    Anybody have any info on Campus Fabric?

    Anybody on this forum work for Cisco Skunkworks and can tell us when TRILL and SPB will be released on the 3850? :-)
    1. By the way, I've seen Barefoot mentioned a couple times at packetpushers.net
      Did Cisco beat Barefoot to market with a programmable pipeline and nobody noticed?

    2. Programmable pipelines have been around forever (IIRC even Intel's FM6000 has one).

      Barefoot ASIC might be faster and/or cheaper than the alternatives, but that remains to be seen.

      The rest is hype generated to attract funding (see also: OpenFlow).
    3. BRKARC-3467 has a slide that shows TRILL, SPB, LISP, and VXLAN as future possibilities for the Catalyst 3850 UADP ASIC.

      Cisco just added MPLS L3VPN and VXLAN is due next year. Something tells me SPB isn't on their radar anymore. :)

      Cisco also told me they had a proof-of-concept of P4 language running on the Catalyst 3850 UADP! It's a very cool and versatile chip.
  6. Alcatel-Lucent Enterprise and HP seem to be ramping up their SPB implementations. Put the routing outside the SPB if you like and use it as the efficient transport it is. The separation of the logical vs. physical network is the goal, and the efficient configuration the enabler. It's not that complicated to debug once you have read up on it; at least not harder than other proprietary/open-source "SDN" solutions giving the same benefits. I agree strongly with the use cases Dan Massameno presents above and am going to implement SPB as soon as I get to buy the gear. Cisco Campus Fabric is a waste of time in comparison. If you eventually want to exit SPB, just shrink the size of the fabric until it's gone. The switches run the other protocols as well...
    1. We had Juniper talking about their DC solutions a couple of weeks ago, and when I mentioned my interest in SPB they very vaguely said they had heard rumours about Juniper supporting it in the future. Fingers crossed.
  7. Having spent the last few months working in an SPB network, I can vouch that it's easy and solid. It's a lot like Cisco FabricPath.

    But like FabricPath, SPB does nothing to discourage poor network design. It happily accepts sloppy cabling, daisy-chained devices, inconsistent naming, weird link speeds, mismatched firmware, etc.

    Sure you can make a mess of VXLAN too, but it's a lot harder since you're forced to think separately about the underlay fabric and the overlay data networks. And if you plan to use the VXLAN/EVPN's distributed layer-3 routing, a clean design is a must.
  8. With a programmable pipeline, the flexible ability to handle framing at Layer-2 and the P4 language, maybe the industry can do some core research/development and innovation.

    For instance, I know pretty much everyone thinks Ethernet is so ubiquitous it will never go away. But there are serious design decisions in it that are now inappropriate for modern networks.

    For instance: there are twelve bytes of address information in the header. That's 2^96 addresses. But if we have a point-to-point routed link to the next switch, don’t we need only two addresses? How about no addresses? The frame will arrive at the other side, and we don't need addresses. Talk about bloat. Let the upper layer do the addressing (it already does!)
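    The back-of-the-envelope arithmetic behind that complaint, in Python (the 64-byte figure is the standard Ethernet minimum frame size, used purely for illustration):

```python
mac_bytes = 6                          # one MAC address is 48 bits
addr_overhead = 2 * mac_bytes          # source + destination = 12 bytes
addr_space = 2 ** (8 * addr_overhead)  # 2^96 possible (src, dst) combinations

# On a point-to-point link only two endpoints exist, yet every
# minimum-size (64-byte) frame spends 12 bytes on addressing:
min_frame = 64
overhead_pct = 100 * addr_overhead / min_frame

print(addr_overhead, addr_space == 2 ** 96, overhead_pct)  # 12 True 18.75
```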
  9. Sometimes you have to look beyond technology and consider soft factors, for example readily-available skills. There are zillions of engineers familiar with IP and IP routing protocols, and thousands of engineers familiar with MPLS. Fewer people had in-depth exposure to PBB (SPBM data plane) and only a few have hands-on SPBM experience (not to mention experience with Avaya’s extensions).

    LOL, that's a good one... the SPB fabric is a work of art that makes networking easy...
    An operator would need to be a complete halfwit to mess that up...

    1. And you never had to troubleshoot something that looked so easy it's impossible to mess up? Can I mention a few rules from RFC 1925?
    2. L2 tunnel (Avaya VSN):
      Switch A
      conf t
      i-sid 100 vlan 100

      Switch B
      conf t
      i-sid 100 vlan 100

      Even a big ape like me can manage that. And no, I don't work for Avaya; I'm just a dude that loves to keep it simple.

  10. I don't really think your arguments for not using Avaya SPBM are very valid at all. The biggest argument for not using Avaya might be that one is reluctant to learn the tech behind it.
    As far as being forced to use Avaya end-to-end, that's simply not true. You can certainly mix the Avaya fabric with a traditional network.
    You can use two cheap Avaya VSP 4000s to create an overlay network, connect Cisco switches at either end, do a "show cdp neighbor", and the Cisco switches will think they are on the same LAN. And that's just one use case.

    If you plan to do any multicast on your network, then there's no discussion to be had. Avaya is flat-out better.
    I make an argument for Avaya SPBM over Cisco here... I've had a CCIE for over 15 years.
    1. I don't think your arguments are very persuasive either ;), they go mostly into the "it's cheaper" direction.

      In any case, it seems you're selling Avaya boxes, and I'm just a consultant who has to point out all the pros and cons to the customer, so no surprise our perspectives are different.
    2. "It's cheaper" is pretty persuasive if you get the same functionality. That's why there are whole websites dedicated to comparing prices of things.

      I don't actually sell any boxes.

      What are the actual pros and cons you point out in your article, since the lock-in argument does not fly?
      Kindly break it down to a list of pros/cons for each of Cisco and Avaya. I'd be extremely happy to have this discussion.
  11. Just stumbled upon this post. I can understand Ivan is trying to be vendor-neutral.

    However, the typical networking equipment life cycle is about 5 years, and many medium-sized businesses prefer a single vendor during the lifetime of the network, while larger businesses likely have a two-vendor policy. Lock-in? Maybe. For the long term, no way.

    According to Avaya, SPB deployments have reached the 1200+ mark. I would say it is no longer an untested solution.

    In reality, VXLAN and EVPN were designed for data centers, not campus networks.

    If there is a solution that is easy to deploy, easy to operate, automated service creation, you can enjoy it for 5 years, great. If anything better available after 5 years, go for it.

    1. Gates
      I found SPBM very attractive for utility companies that have some big networks moving from legacy SONET/SDH to Ethernet.
      That switch is a perfect match for that application.

  12. This article is out of touch with reality. SPBM is an IEEE standard and not proprietary. Avaya, Ciena, Alcatel, HP, Huawei, and Fujitsu all support SPBM, so you are not trapped. Now that Brocade and Avaya have been bought by Extreme, you can add that to the list.

    1. As I wrote in the blog post (you did spend more than three seconds reading it, didn't you?), the layer-2 part is standard; everything else that makes Avaya's solution really interesting is Avaya's proprietary extension.