Would You Use Avaya's SPBM Solution?
Got this comment on one of my L2-over-VXLAN blog posts:
I found the Avaya SPBM solution "right on the money" to build that L2+ fabric. Would you deploy Avaya SPBM?
Interestingly, I got that same question during one of the ExpertExpress engagements and here’s what I told that customer:
TL&DR: If you’re OK with being locked into a single vendor, Avaya’s SPBM would be a perfect fit. Are you OK with that?
Before going into the details: I love some of the things Avaya did, and most of them make perfect sense. Would I recommend Avaya’s fabric to a customer? I might, after carefully explaining the implications of the rest of this blog post.
Avoiding lock-in (or not)
To be fair, it’s really hard to avoid some sort of lock-in, and you’d be in exactly the same position when considering Cisco’s ACI or FabricPath, Juniper’s Junos Fusion or Virtual Chassis Fabric, or Arista’s CVX/CloudVision.
However, while you can’t avoid some sort of lock-in, you could try to minimize it (if that’s your goal), sometimes trading lock-in for complexity. In this particular example, you could use EVPN over MPLS or VXLAN to get results similar to those you’d get with Avaya’s fabric (apart from the ease of deploying IP multicast).
For more details on SPBM and Avaya’s extensions, listen to the podcast I did with Roger Lapuh and watch his part of the Data Center Fabrics or Leaf-and-Spine Fabric Architectures webinar.
EVPN used with MPLS or VXLAN is definitely more complex than running a single routing protocol (IS-IS within SPB) that carries core topology as well as L2VPN and L3VPN address families, but it’s a standard solution, scales beyond a single IS-IS area, and you can deploy it over any MPLS or IP core. Avaya’s fabric (like any of the solutions from other vendors I mentioned above) requires an end-to-end Avaya network.
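To make the complexity gap concrete: in Avaya’s fabric, extending a layer-2 segment across the fabric is a one-line VLAN-to-I-SID mapping on the edge switches, while a VXLAN/EVPN deployment touches several protocol layers. Here’s a rough sketch (VOSS-style and NX-OS-style syntax respectively; all IDs and addresses are made up, and a real deployment obviously needs more than this):

```
! SPBM edge switch: the single IS-IS instance already carries everything,
! so extending VLAN 100 fabric-wide is one service mapping
vlan i-sid 100 10100

! VXLAN/EVPN leaf: underlay IGP (not shown) + BGP EVPN + VTEP configuration
nv overlay evpn
feature bgp
feature vn-segment-vlan-based
feature nv overlay
vlan 100
  vn-segment 10100
interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback0
  member vni 10100
    ingress-replication protocol bgp
router bgp 65001
  neighbor 10.0.0.1 remote-as 65001
    address-family l2vpn evpn
      send-community extended
evpn
  vni 10100 l2
    rd auto
    route-target import auto
    route-target export auto
```

The EVPN variant is clearly more moving parts per segment, which is exactly the complexity-for-openness trade-off described above.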
Notes:
- SPBM is a standard technology, and you could build a multi-vendor SPBM fabric. However, all the interesting features (IP routing, L3VPN, distributed router functionality) are Avaya’s extensions to SPBM and are currently not implemented by any other vendor (that I’m aware of).
- One might argue that it would be theoretically possible to build a multi-vendor SPBM core and deploy Avaya switches only at the edges. I don’t think that would work in all cases - if I understood their approach correctly, the core switches sometimes need to set up multicast distribution trees based on Avaya-specific IS-IS information.
Using well-known technologies
Sometimes you have to look beyond technology and consider soft factors, for example readily-available skills. There are zillions of engineers familiar with IP and IP routing protocols, and thousands of engineers familiar with MPLS. Fewer people have had in-depth exposure to PBB (the SPBM data plane), and only a few have hands-on SPBM experience (not to mention experience with Avaya’s extensions).
Also, all other vendors are moving to L2-over-VXLAN-over-IP with an EVPN control plane. It remains to be seen whether that’s a wise technology decision or a lemming reflex, but regardless of implementation differences the skills gained working with gear from Vendor A remain somewhat relevant even if you move to Vendor B. Working in a VXLAN+EVPN environment is thus better for your career prospects than working in an SPBM environment.
There might be a whole alternate universe out there that I’m not seeing that relies heavily on PBB and SPBM. If you happen to be living in that universe and reading my blog please write a comment.
More on VXLAN transport
You could build an Avaya fabric on top of an IP fabric using VXLAN as the transport mechanism, but you wouldn’t get line-rate performance (going from PBB to VXLAN encapsulation cannot be done in a single pass through Broadcom’s Trident-2 chipset), and you’d have an interesting tunneling challenge.
While most VXLAN-based solutions build automatic tunnels based on the egress VTEP IP address, Avaya’s SPBM-over-VXLAN solution uses what look like point-to-point VXLAN tunnels and runs IS-IS on top of them. It’s thus ideal when you want to link SPBM islands across an IP core, but not when you’d like to connect edge switches across an IP transport network.
To use or not to use?
Sometimes it makes sense to use a well-integrated proprietary product, particularly if you’re building smaller islands connected to a standards-based core. Sometimes it makes sense to build a network based on open standards that is easily extended with gear from multiple vendors. The choice is yours, and if you need a second opinion beyond the generic thoughts outlined in this blog post, there’s always ExpertExpress online consulting service.
That's fantastic and completely eliminates any limitations in the underlay network. Also, it's controlled by the end-host, so VMware and Microsoft are free to innovate as quickly as they like without waiting for the underlay architecture to catch up.
But what technology do we build the underlay with?
1. A routed IP network is a mature technology and works great except there are a number of limitations:
1a. There is (possibly) a lot of manual configuration. Do we use a truck load of /30 networks all over the place?
1b. There is no built-in support for multi-tenancy (for the underlay) unless we deploy VRF, MPLS, RFC2547, etc. Those features are not available unless you start buying much more expensive gear.
2. You can use regular VLANs with Spanning Tree. It's 2016. STP stinks. We all want out of that dungeon.
3. SDN is an option where I continue to have doubts. My old network with standard routing protocols was distributed; failures were localized. If I have a pair of controllers working to orchestrate everything, then I have a centralized system with a single point of failure.
4. We have SPB and TRILL. They support millions of segments. They support multi-tenancy. They support ECMP. We have chipsets in inexpensive gear that can move these types of Ethernet frames at line rate.
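On point 1a above: the truckload of /30s isn't the only option - /31 subnets (RFC 3021) or unnumbered links cut the underlay addressing burden considerably. A hypothetical IOS-style sketch (interface names and addresses are invented, and support for /31s and unnumbered Ethernet interfaces varies by platform):

```
! /31 point-to-point link: two usable addresses, nothing wasted
interface Ethernet1/49
 description to-spine1
 no switchport
 ip address 10.255.0.0 255.255.255.254
!
! or reuse the loopback address and skip per-link subnets entirely
interface Ethernet1/50
 description to-spine2
 no switchport
 ip unnumbered Loopback0
```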
I agree that vendor lock-in should be a consideration. I agree there are not very many engineers that know SPB and TRILL. But I also know the IT field is staffed by capable people that can accommodate change in technology better than any other segment of the population!
Can't I ask for a world where VXLAN sits on top of something sane?
SDN means centralised network view, it doesn't mean centralised SPOF, controllers can scale horizontally.
SPB/TRILL limit the ability of vendors to innovate, i.e. peddle new gear to their clients. Most hardware vendors can do VXLAN, but we already have GENEVE on the horizon.
But we still have a couple problems with a traditional IP core.
1. The IP/LDP/MPLS/BGP/VRF/OSPF stack is a lot of moving pieces. I may be comfortable with each of these protocols, but I'm *not* comfortable with how *many* protocols I need to get the job done. Also, I've been doing this for fifteen years. A more junior person is going to have difficulty.
2. I'm not going to get BGP/MPLS/VRF in anything but the top-end datacenter gear. That costs a lot of money. Ugh.
I guess if you are building a datacenter, you might be in the price-range for the proper gear to do BGP/MPLS/VRF. But I build a lot of enterprise networks too. I have user-facing closet switches and enterprise core. For Cisco people, we're talking Catalyst 4500, 3850 and 3650. For HPE, we're talking the 5400R.
I have many of the same needs as a datacenter.
1. I want ECMP. I don't want to shut off all links but one (STP).
2. I'd like multitenancy. Example: My guest wireless shouldn't interact with my other traffic.
3. I might want a VLAN to span across multiple different areas of a campus. Example: I have a campus with five buildings, and each of them needs a VLAN (security partition) for the HVAC controls. Should that be one VLAN or five? Logically, it's only one application with one security profile. But I don't want STP to span across the entire campus.
It seems an SPB or TRILL type of technology would solve my problems. If I could get one of these into the type of gear I use for enterprise builds I could get rid of STP forever!!
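For what it's worth, the HVAC example above maps naturally onto SPB service identifiers: the same I-SID is bound to a local VLAN in each of the five buildings, and IS-IS shortest-path trees (rather than campus-wide STP) carry the traffic between them. A hedged sketch in the Avaya VOSS-style syntax used elsewhere in this thread (VLAN and I-SID numbers are made up):

```
! repeat on one fabric edge switch per building:
! one campus-wide HVAC segment, no campus-wide spanning tree
vlan create 200 type port-mstprstp 0
vlan i-sid 200 20200
```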
"But what technology do we build the underlay with?"
Simple routed network (like Internet). Works every time.
"There is (possibly) a lot of manual configuration. Do we use a truck load of /30 networks all over the place?"
One VLAN per ToR switch if you don't need redundancy. One VLAN per ToR switch pair if you need redundant server access. L3 toward the spine. Well covered in leaf-and-spine fabric designs webinar.
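Per-leaf configuration in that design stays minimal: one server VLAN with its gateway SVI, plus routed uplinks advertised by the IGP. A hypothetical IOS-style sketch (names, numbers, and the choice of OSPF are mine, not from the original text):

```
vlan 10
 name servers
!
! default gateway for all servers attached to this ToR switch
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
!
! advertise the server subnet and the routed uplinks toward the spine
router ospf 1
 passive-interface Vlan10
 network 10.1.10.0 0.0.0.255 area 0
 network 10.255.0.0 0.0.255.255 area 0
```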
"There is no built-in support for multi-tenancy (for the underlay) unless we deploy VRF, MPLS, RFC2547, etc."
Why do you need multi-tenancy in underlay if you're running multi-tenant networks in the overlay? Separating storage and VXLAN? Use two VLAN-based VRFs.
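The two-VRF suggestion could look roughly like this (IOS-style VRF-lite sketch; VRF names, VLANs and addresses are invented):

```
vrf definition storage
 address-family ipv4
vrf definition vxlan-transport
 address-family ipv4
!
! each VLAN interface lands in its own routing table
interface Vlan20
 vrf forwarding storage
 ip address 10.2.20.1 255.255.255.0
!
interface Vlan30
 vrf forwarding vxlan-transport
 ip address 10.3.30.1 255.255.255.0
```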
Campus is a different story. SPB might be interesting there because many existing chipsets support PBB (for SP applications). TRILL is a totally new encapsulation, so it needs a new chipset; I'd prefer VXLAN over TRILL.
Thus we went for a much simpler and more straightforward approach and built a national academic network using TRILL. It's not a new encapsulation; all BCM ASICs have supported it for several years already - the vendors just haven't enabled it in software. And of course, all programmable ASICs can support it as well.
Experience? TRILL took all the good things from the routed IP world and brought them to layer 2. You configure it in a few minutes and can then forget about it - because it just works.
For an enterprise network, I wouldn't even start thinking about MPLS or VXLAN anymore. But obviously, solutions like TRILL don't get much attention from vendors, since complex technologies can be sold for more money, and you'll probably also need to buy some expert services just to get them working. But we, the end users, need to push for the solutions we need, instead of blindly buying complex solutions just because they are loudly marketed.
https://vimeo.com/155635184
The presentation is talking about their fancy new ASIC in the Catalyst 3850 and how the processing pipeline is programmable with new microcode. They are crowing about being able to handle wireless CAPWAP traffic on-chip. But check out the 00:29:30 mark. The presenter claims they have not committed to, but have considered microcode to do TRILL and SPB!
That was nine months ago. I headed over to the newest release notes. Looks like they are real serious about new features. The 3850 microcode can now do MPLS framing and the software can do LDP.
They have also introduced something called Campus Fabric which looks like it uses LISP.
https://goo.gl/l8ykcJ
Anybody have any info on Campus Fabric?
Anybody on this forum work for Cisco Skunkworks and can tell us when TRILL and SPB will be released on the 3850? :-)
https://goo.gl/6AM5HY
Did Cisco beat Barefoot to market with a programmable pipeline and nobody noticed?
Barefoot ASIC might be faster and/or cheaper than the alternatives, but that remains to be seen.
The rest is hype generated to attract funding (see also: OpenFlow).
Cisco just added MPLS L3VPN and VXLAN is due next year. Something tells me SPB isn't on their radar anymore. :)
Cisco also told me they had a proof-of-concept of P4 language running on the Catalyst 3850 UADP! It's a very cool and versatile chip.
But like FabricPath, SPB does nothing to discourage poor network design. It happily accepts sloppy cabling, daisy-chained devices, inconsistent naming, weird link speeds, mismatched firmware, etc.
Sure you can make a mess of VXLAN too, but it's a lot harder since you're forced to think separately about the underlay fabric and the overlay data networks. And if you plan to use the VXLAN/EVPN's distributed layer-3 routing, a clean design is a must.
For instance, I know pretty much everyone thinks Ethernet is so ubiquitous it will never go away. But some of its design decisions are now inappropriate for modern networks.
For instance: there are twelve bytes of address information in the header. That's 2^96 addresses. But if we have a point-to-point routed link to the next switch, don't we need only two addresses? How about no addresses; the frame will arrive at the other side, and we don't need addresses. Talk about bloat. Let the upper layer do the addressing (it already does!)
LOL, that's a good one... SPB fabric is a work of art that makes networking easy...
An operator would need to be a complete halfwit to mess that up...
Switch A:
conf t
i-sid 100 vlan 100

Switch B:
conf t
i-sid 100 vlan 100
even a big ape like me can manage that
And no, I don't work for Avaya -
just a dude that loves to keep it simple
As far as being forced to use Avaya end to end, that's simply not true. You can certainly mix the Avaya fabric with a traditional network.
You can use two cheap Avaya VSP 4000s to create an overlay network, connect Cisco switches at either end, do a "show cdp neighbor", and the Cisco switches would think they are on the same LAN. And that's just one use case.
If you plan to do any multicast on your network, then there's no discussion to be had. Avaya is flat out better.
I make an argument for Avaya SPBM over Cisco here... I've had a CCIE for over 15 years.
http://www.bluesodium.com/blog/8-reasons-to-choose-avaya-instead-of-cisco-for-your-data-network/
In any case, it seems you're selling Avaya boxes, and I'm just a consultant who has to point out all the pros and cons to the customer, so no surprise our perspectives are different.
I don't actually sell any boxes.
What are the actual pros and cons you point out in your article, since the lock-in argument does not fly?
Kindly break it down to a list of Pros / Cons for each of Cisco and Avaya. I'd be extremely happy to have this discussion.
However, the typical networking equipment life cycle is about five years, and many medium-sized businesses prefer a single vendor during the lifetime of the network; larger businesses likely have a two-vendor policy. Lock-in? Maybe. For a long time? No way.
According to Avaya, SPB deployments have passed the 1200 mark. I would say it is no longer an untested solution.
In reality, VXLAN and EVPN are designed for data centers, not campus networks.
If there is a solution that is easy to deploy, easy to operate, automated service creation, you can enjoy it for 5 years, great. If anything better available after 5 years, go for it.
I found SPBM very attractive to utility companies that have some big networks and are moving from legacy SONET/SDH to Ethernet.
This switch is a perfect match for that application:
https://www.al-enterprise.com/en/products/switches/omniswitch-6865