L2 or L3 switching in campus networks?

Michael sent me an interesting question:

I work in a rather large enterprise facing a campus network redesign. I am in favor of using routed access for the floor LANs, making the Ethernet segments rather small (L3 switching on access devices). My colleagues seem to like L2 switching to VSS (distribution layer for the floor LANs). OSPF is currently in use in the backbone as the sole routing protocol. So basically I need some additional pros and cons for VSS vs Routed Access. :-)

The follow-up questions confirmed he has L3-capable switches in the access layer connected with redundant links to a pair of Cat6500s:

What are the options?

There are two fundamental designs Michael could use:

Layer-3 switching (also known as routing) in the access layer. VLANs would be terminated at the access-layer switch (no user-to-switch redundancy, thus no need for HSRP), the links between the access and distribution layers would be P2P L3 links (routed interfaces), and every single switch would participate in OSPF routing.
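In IOS terms, a routed-access switch could be configured roughly along these lines (a minimal sketch; interface names, addresses, and area numbers are invented for illustration):

```
! Hypothetical routed-access switch: user VLAN terminated locally,
! routed P2P uplink, OSPF with passive interfaces by default
ip routing
!
interface Vlan10
 description User VLAN, terminated on the access switch
 ip address 10.1.10.1 255.255.255.0
!
interface GigabitEthernet1/0/49
 description P2P routed uplink to distribution
 no switchport
 ip address 10.0.0.1 255.255.255.252
 ip ospf network point-to-point
!
router ospf 1
 passive-interface default
 no passive-interface GigabitEthernet1/0/49
 network 10.0.0.0 0.255.255.255 area 1
```

With passive-interface default, user-facing SVIs are advertised into OSPF without forming adjacencies toward the end hosts.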

Layer-2 switching (also known as bridging) in the access layer. VLANs would be terminated at the distribution layer; the access-layer switches would run as pure bridges. Half of the uplinks would be blocked due to spanning tree, unless you aggregate them with multi-chassis link aggregation (MLAG), which requires VSS on the Cat6500s. Even with MLAG you would still run STP to prevent forwarding loops caused by configuration or wiring errors.

When you configure VSS on the Cat6500s, they appear as a single IP device, so yet again you don’t need HSRP.
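From the access switch’s perspective, the L2/MLAG option is simple: bundle both uplinks (one toward each VSS chassis) into a single EtherChannel. A hypothetical sketch (interface names invented):

```
! Hypothetical access-switch config: both uplinks bundled into one
! multi-chassis EtherChannel toward the VSS pair; STP stays on as a safety net
interface range GigabitEthernet1/0/49 - 50
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active
!
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
spanning-tree mode rapid-pvst
```

Because the VSS pair presents itself as one logical switch, LACP negotiates a single bundle across both chassis and no uplink is blocked.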

Which one is better?

Both designs have minor benefits and drawbacks: for example, the L3 design is more complex and has larger OSPF areas, while the L2 design requires VSS on the Cat6500s. The major showstopper is usually the requirement for multiple security zones (for example, users in different departments or guest VLANs).

You might be lucky enough to satisfy the security requirements by installing packet filters in every access VLAN, but more often than not you have to implement path separation throughout the network – for example, the guest VLAN traffic should stay separated from internal traffic.
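As a rough illustration of the packet-filter approach, a guest VLAN could get an inbound ACL like the following sketch (VLAN number and address ranges are made up):

```
! Hypothetical per-VLAN packet filter: guest users can reach the
! Internet but not internal (RFC 1918) address space
ip access-list extended GUEST-IN
 deny   ip any 10.0.0.0 0.255.255.255
 deny   ip any 172.16.0.0 0.15.255.255
 deny   ip any 192.168.0.0 0.0.255.255
 permit ip any any
!
interface Vlan20
 description Guest VLAN
 ip access-group GUEST-IN in
```

This works for simple cases, but the ACLs have to be maintained on every switch that terminates a guest VLAN, which is exactly why path separation usually wins in larger networks.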

The proper L3 solution to path separation is full-blown MPLS/VPN with label-based forwarding in the L3 part of the network ... but HP seems to be the only vendor with MPLS/VPN support on low-end A-series switches.

Without MPLS/VPN you’re left with the Multi-VRF kludge (assuming your access-layer switches support VRFs – not all do), where you have to create numerous P2P L3 interfaces (using VLANs) between the access and core switches. Do I have to mention that you have to run a separate copy of OSPF in each VRF instance?
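A Multi-VRF sketch makes the kludge obvious: every VRF needs its own P2P VLAN toward the distribution layer and its own routing process. A hypothetical single-VRF fragment (names and addresses invented; repeat all of it for every additional VRF):

```
! Hypothetical Multi-VRF access switch: one dedicated P2P VLAN/SVI
! and one OSPF process per VRF
ip vrf Guest
!
interface Vlan901
 description P2P link to distribution, Guest VRF
 ip vrf forwarding Guest
 ip address 10.255.1.1 255.255.255.252
!
router ospf 2 vrf Guest
 network 10.255.1.0 0.0.0.3 area 1
```

Multiply this by the number of VRFs and the number of access switches and the scaling problem becomes evident.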

Obviously the Multi-VRF-based path separation doesn’t scale, so it might be easier to go with the L2 design: terminate the VLANs on the Cat6500s, where you can use centralized packet filters, VRFs, and even MPLS/VPN if you need to retain path separation across the network core.

Have I missed something?

What are your thoughts? Would you prefer L2 or L3 switching in access network? Do you believe in “route where you must, bridge where you can” or in “route as much as possible”? Write a comment!

Any relevant webinars?

Sure. The Enterprise MPLS/VPN Deployment webinar (recording) describes the path separation challenges and the potential solutions – Multi-VRF and MPLS/VPN with label-based forwarding. You’ll also learn about VRF-aware NAT and DHCP (just in case you need them in your network). And if you’re interested in a wider range of topics, you might find the yearly subscription cost-effective.

And what are those crazy diagrams?

Greg Ferro has persuaded me that iPad-based drawing has a future. I bought a proper pen (doing it with your fingers gets you kindergarten-grade results) and the Penultimate software (nothing to do with Penultimate Hop Popping) and started experimenting. Who knows, I just might learn how to do good napkin drawings.

42 comments:

  1. I would go for "Layer-3 switching (also known as routing) in the access layer."

    Why? No bullshitting around with STP and its security knobs (root guard, BPDU guard, storm control, etc.).

    Just OSPF and use passive interfaces by default.

    Although the other option, "Layer-2 switching (also known as bridging) in the access layer", might be interesting if he has a large number of clients (laptops mostly) who roam across the building.

    My 2 cents.
  2. We actually just finished implementing the 'VRF Lite Kludge' in our environment to provide path separation between PCI and non-PCI systems.

    I can't say this is the best solution, and we're looking forward to transitioning to VSS later, but we hadn't considered using MPLS/VPN. Not sure what the advantage would be.
  3. We do exactly what your final diagram shows (well, without VSS – STP blocks the redundant links): layer-2 access, layer-3 core, MPLS VPNs to separate security zones, routing between security zones via firewalls. The core is 6500/Sup720.

    We rejected layer3 access for several reasons. The main one is that "edge" layer3 switches have very poor feature sets in comparison with bigger boxes - at the time, our concerns included "no multicast in VRF lite on 3750", "no netflow", "no ipv6 in hardware (later introduced on 3750)" as well as "a hell of a lot more routers to configure"

    We've actually made edge subnets a lot larger as time has gone by; this helps avoid IP wastage. For example - a not uncommon requirement at our place is 10 floors with maybe 160 regular hosts but requirement for "bursting" to 50 extra dynamic IPs (e.g. during infrequent meetings) in each location. This needs a /24 each i.e. the best part of a /19, or I can provision a single big /21. This is also less lines in the router config. Fault domain size was a concern so we grew slowly - but we're just not seeing problems, only benefits.

    The original iteration was VRF lite. It was a pain in the backside, and I'm glad we went with MPLS. Much less typing to bring up a new VRF, many fewer routing adjacencies.
  4. Advantages to MPLS over VRF-lite having moved from the latter - less typing per-VRF on each new router, fewer routing adjacencies, don't have to burn sub-ints or VLANs for per-VRF p2p interfaces and so on.

    There are apparently some Cisco IOS features coming which make VRF lite "a bit like MPLS" in terms of typing and config - IIRC it is basically auto-creation of the per-VRF p2p VLANs and maintenance of the routing adjacencies - but we're on MPLS and using it for other things (TE to load-balance unequal cost paths) now.
  5. Loving the drawings Ivan!
  6. Currently sitting in a huge all L2 campus, VLANs span everywhere, L3 routing at the core.

    Submitted designs for the latter option mentioned here. L3 at the distribution and p2p L3 links up the chain. Currently no need for pci compliance/security zones in the access layer. Planning to do identity based security into the DC(s).
  7. One more remark (also made by a guest): our campus VLANs for wifi are getting bigger and bigger. There can be issues with L3 roaming over wifi, as opposed to campus-wide L2 VLANs.
    Remark: as a campus network admin, I really like this kind of blog post, with (for me) real-life architecture issues... Keep up the good work!
  8. Route as much as possible. Proprietary protocols that do behind the scenes magic (VSS) bother me. I've seen crazy bugs where a packet enters one chassis and gets to the egress asic on the other chassis, yet completely fails to leave the box (Confirmed by Cisco).

    Wireless reminds me of a campus version of vMotion: a technology that may be easy to deploy in a small network, but as things get larger, it makes designing for scale more difficult. Back in the day (1994), Carnegie Mellon had a dedicated wired network for all the APs. It peered with the rest of the campus network. It let us optimize the wireless network backbone separately from the wired network backbone. This was in the stone age of wireless networks.

    http://www.cmu.edu/computing/about/history/wireless/index.html has some interesting history.

    When going for scale, I think it best that the network engineers, software developers, and system engineers all have the same goals. We can't dumb down one of the legs of this three-legged stool to make it easier for one of the other legs to function. Too much dependence on "fancy" / proprietary protocols makes one a slave to a vendor's software development teams and update cycles.

    IMHO
  9. Crazy idea here... fully L3 design, subnets based entirely on physical layout, no security beyond basic DoS/MitM protection, and provide all access to secured resources over VPNs to the data center. Security decisions are based on VPN credentials at the DC firewall. No need to separate guest users; they're only provided internet access without a VPN connection.

    Probably not appropriate for the typical corporate environment, but if your company culture has lots of people working remotely, then you have to have the VPN set up for all your workers anyway – so why duplicate the security decisions from the VPN setup in the access network?
  10. What about an L3 design with static routes per VRF in the access layer and BFD for failure detection?
  11. L2 + L3, but without VSS and similar protocols. They are all great until something goes wrong and you have to debug not-so-clear internal mechanisms and/or proprietary protocols, usually ending up reloading the boxes because your customers can't afford to wait for you to find what went wrong (assuming you would find it).

    If you go with L2+L3, distribute your VLANs among your access-distribution uplinks to reduce the wasted bandwidth, and rely on QoS to deal with congestion when all VLANs bounce to only one side. Btw, you should already have a QoS policy in place to deal with other things besides congestion due to failure (e.g. congestion due to problematic hosts that are doing something unusual).

    Also make sure 4096 VLANs are not a problem to you. You might be able to use Q-in-Q to overcome this limitation, but last time I played with it some vendors had issues supporting Q-in-Q and additional features due to the way the packets would flow from asic-to-asic in the dataplane.

    But I believe we can make something a little clever with L3 only. So here's a couple of ideas:

    1) Provisioning an L3-only solution requires more typing but shouldn't really be a problem. If it is, then it's time for you to put some scripting/automation into your provisioning process. Provisioning customers for a worldwide provider also requires a lot of typing... guess what they usually do.

    2) If your L3 access switch supports BGP, please don't use OSPF; avoid it like the plague unless we are talking about your backbone.

    3) BGP works pretty well for CE-PE in an MPLS-VPN context, so why shouldn't it work great between your L3 access and your L3 distribution (speaking of which... do you really need a distribution layer there?)? Maybe BGP is not as reactive as OSPF, but you can tweak a couple of things (timers and others) to make it react faster to changing events. Most probably you don't need sub-second convergence anyway (VoIP and interactive video can be a challenge).

    4) Unless you have roaming users with /32s floating around between different switches, you probably only need a dynamic routing protocol to "test" your access-distribution links and make sure you don't blackhole packets. See if there are other ways to validate these links; I believe some vendors implement alternative ways to signal routing protocols and/or interface status based on connectivity checks.

    5) If you have roaming users and you use per-user VLANs, maybe you have to use something like 802.1X with dynamic VLAN assignment. Maybe you don't need your users to maintain a static IP to track their privileges, in which case your address pools would be static, so that's one less reason to depend on a dynamic routing protocol.
  12. L2 campus access switching is (1) cheaper, and (2) easier.

    (1) no L3 licenses
    (2) simple configuration
  13. Of course there's nothing to stop you doing L3 and L2 at the same time: http://packetlife.net/blog/2011/feb/9/hybrid-access-layer-design-revisited/
  14. Once we determined the cost couldn't be beat, we went with the pure L3 design (on Juniper EX) and it has been great. Besides the occasional requirement to span a subnet across multiple closets (NO!) of course. The argument has always been "L3 costs more" but that's not always the case if you negotiate with the right vendors.

    When we looked at the VSS design, my Cisco SE gave us a rare solid piece of advice. He said with VSS if you want to upgrade one, you have to take the whole VSS down. This was 10 months ago and I'm not sure if it's true anymore but that was a headache I didn't want to deal with.

    For what it's worth, we're doing an OSPF totally stubby area into area 0 at the dist/core layer and letting EIGRP summary routes take care of campus reachability to/from the DCs. With 2x10GbE links per access closet, I have more BW in my campus than my DCs!
  15. Without additional information I'd say "route where you must, bridge where you can" and make sure to use DHCP Option 82.
    Dot1x is an additional helper in wired environments... but once you can assign the IP without end-user interaction, you can use 'dumber' access switches and route and secure as you see fit.
  16. If you are into network virtualization, the clean strategy as Ivan points out it to go MPLS VPN.

    We are moving into the deployment phase of a large school board MAN rework, where we are using HP Comware-based equipment (7500s) to virtualize their entire campus. This gear supports MPLS VPN and the various L2 MPLS technologies, and even includes VPLS. The one thing you find out when working with Comware is that most of the gear seems to have a great deal of functionality catering to service providers (likely has to do with China Telecom being a major customer). We've now built this really cool setup, and the customer hadn't even asked for MPLS originally - it just came with the gear, so we designed using it. The subinterface/multi-slash-30 alternative really isn't fun to deploy and add onto when a new VPN is required.

    HP also has the 5800, which supports all of the above. A $5K switch with VPLS support is not something you can usually find.

    As another poster highlighted, the issue with VSS (or HP IRF) is that ISSU really only is for minor, compatible upgrades. The manufacturers are usually quick to point out the advantages of stacking but hesitant to point out the caveats. Of course if you have maintenance windows allowing you to bring down your whole core, this really isn't that big a deal. Your availability vs performance requirements will dictate whether the pros and cons are really worth it.
  17. We have those decent 5800s here. I like them a lot and even decided to blog about them a bit :)
  18. With routed access you get quicker convergence (well-tuned OSPF/EIGRP compared to RSTP), but like Brad said, you have to put IP Services licenses on the access switches !!!
  19. What about the Easy Virtual Network feature? This should allow routing in the access layer without extra dot1q uplinks for each VRF... so you get the best of both worlds: the stability of routing and simple config like L2...

    http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6557/ps6604/whitepaper_c11-638769.html

    Dietmar
  20. EVN is just a configuration sugarcoat on top of the Multi-VRF kludge. You still have to run a separate VLAN and a separate routing protocol for each VRF.
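    For what it's worth, the EVN config does look tidier – roughly along these lines (a sketch with invented names; the vnet trunk replaces the per-VRF subinterfaces, but the per-VRF routing process is still there):

```
! Hypothetical EVN fragment: one vnet trunk carries all VRFs,
! yet each VRF still needs its own routing process
vrf definition Guest
 vnet tag 100
 address-family ipv4
 exit-address-family
!
interface GigabitEthernet1/1
 vnet trunk
 ip address 10.255.1.1 255.255.255.252
!
router ospf 2 vrf Guest
 network 10.255.1.0 0.0.0.3 area 1
```

Less typing per uplink, same number of routing protocol instances.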
  21. Static routes = manual config != scalable design
  22. Use OpenFlow for static route configuration 8-) Maybe in 3-5 years ...
  23. Cisco now includes a stub-only OSPF implementation in newer versions of IOS for their 3750s.

    Honestly, I don't see the configuration being simpler. You either have the complexity at layer 2 or layer 3. No matter what, you need to create loop-free topologies. I'd much rather look in the routing table than have to hop through multiple bridges to find out where a MAC address lives on the network.
  24. L2 and _NO_ VSS.

    I've yet to support a network where some VLAN didn't have to be randomly spanned across closets. You can design it without spanning a VLAN, but that doesn't mean it will never be a "business need". And without a hack or redesign, you're essentially screwed with a routed access layer. It's pretty, it's clean, I love it... but it's less realistic than L2.

    VSS just scares me. Shared control plane in core/distro... ack.
    Replies
    1. If you need to span a VLAN in an L3 network, use EoMPLS
    2. Exactly! But be careful – you don't want to end up with a bowl of spaghetti from too much EoMPLS
  25. Actually, you don't necessarily need to buy a license: OSPF for Routed Access is a recently released feature which allows up to 200 routes within a "free" OSPF implementation in IP Base.

    Or you can simply consider a vendor that does not promote vendor lock-in, and not settle for a gimped version of an important protocol like this. Cisco still manages not to have an LLDP implementation on anything other than switches. And let's be truthful, it's on the switches so they can win switching business from competing phone vendors.
  26. We use VSS in 3 different locations and never had a "problem" like the one your SE described with any upgrade. We started with 12.2(33)SXH and progressed through the SXI releases. The upgrade process (eFSU) did not disrupt the network. We use a topology similar to what Ivan described in his last example above (L3 with NSF and SSO on the Cat6k with VRFs, and L2 with MEC/MLAG towards the access layer – Cat3560 and Cat3750, all SMI). It all depends on the final arrangement one has around the VSS. We use it both as a collapsed aggregation & core and as aggregation only, connected at L3 to the upstream core.
    More details on eFSU here: http://www.cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/issu_efsu.html
  27. Why not BGP with BFD if you have it? The problem is that not many low-end switches like the 4500 support BFD.
  29. (3) You get some sort of mobility because you can plug your cable in and work in a different department; as an example, the operations manager wants to bring his notebook to the finance department to share some nice calculations etc... sad but true, those scenarios and needs are driving designs...
  30. Why does routing break this? Can the manager not get a DHCP lease on the network in the finance department? Hopefully one would be doing more to protect finance data than putting it in a VLAN with ACLs. IP-address-based security is weak at best.
  31. Darn 4900 (part of the 4500 family) doesn't even support fast LACP. grr
  32. You are basically right, but I dealt with larger companies where no DHCP leases are allowed (just fixed reservations). Our solution was a path-isolated guest network, with users then connecting via VPN.
  33. Two default routes per VRF is not a big deal.
  34. Someone said it earlier, but convergence could be better with L3 in the access for certain applications, namely multicast. I'm working on some designs now with the N7K for heavy multicast apps, and the app providers definitely prefer L3 to the access with a reliance on PIM. Too bad we didn't have the right license on the access switches :) Anyway, running vPC in the core limits certain mcast functionality on the 7K (see the config guide for details)... also I've heard vPC with certain FHRPs (namely GLBP) could be a nightmare... again, for mcast apps. Generally speaking though, the preference here is loop-free L2 to allow for a place for common policy mgmt/deployment... as well as an increased ability to "sniff" in a centralized location.
  35. Hi All,

    I like the Layer 3 campus setup because it avoids Layer 2 loops and broadcasts. I implemented VSS with a Layer 3 campus setup on a converged network. It has been stable for more than a year.
    One time we faced issues due to IOS bugs, but the setup has now been stable for more than a year, with 9,500 users in the campus network…
  37. "with VSS if you want to upgrade one, you have to take the whole VSS down"
    Nope. ISSU is supported, and if all downlinks are EtherChannels, there would be no traffic disruption upon upgrade (well, Cisco claims 200 ms, but I could live with that). Though each chassis would be reloaded, the control plane stays solid.
    But... Once I actually tried that with SXI4a, the whole VSS went down. TAC said it's a bug. So in theory VSS is brilliant, but in practice it's too risky.
  38. Layer 2 is simple. Complexity is the enemy of uptime.

    For the same reasons, complex VLAN setups should be avoided as much as possible, with security and segregation implemented on the hosts via managed firewall and IPsec policies. Software solutions are always more flexible.
  39. It very much depends on how large a site we are looking at.
    For a smaller site, I would use the layer2 with VSS solution.

    For a larger site I would run MPLS - but I would collapse the CE into the PE as running VRF lite CEs doesn't scale in a campus environment.

    When you have 20+ VRFs, having to set up BGP sessions for each VRF between the PE and CE becomes very messy.

    Therefore my recommendation for larger sites:
    Pair of 6500(VSS) as [PE/CE]
    Access switches connected via port channels to the collapsed PE/CE.

    I have had a look at the smaller HP MPLS switches, but they only support relatively small routing tables so they would probably be too small for most MPLS enabled sites.
  40. I don't understand what people have against VSS. It's absolutely rock solid. It's not cheap however. If it was cheaper, I would use it in many more situations.

    I have worked in environments where it's deployed at a major international airport, at a top law firm, and in multiple data centers. It's really versatile. It's also dead easy to manage – just barely different from managing one chassis. There are just some very minor differences to keep in mind with VSS deployments.