Large-Scale Bridging = Nuked Earth

If you’re not working for a data center fabric vendor, you’ll probably enjoy the excellent analogy Ethan Banks made after reading my TRILL-over-WAN post:

Think of a network topology like a road map. There's boulevards, major junction points, highways, dead ends, etc. Now imagine what that map looks like after it's been nuked from orbit: flat. Sure, we blew up the world, but you can go in a straight line anywhere you want.

... and don’t forget to be nice to the people asking for inter-DC VM mobility ;)


Nexus 1000V LACP offload and the dangers of in-band control

2021-03-01: Nexus 1000V turned into abandonware a long time ago and is now officially a zombie (oops, EOL). However, the challenges its developers faced with LACP offload are still worth pointing out to anyone advocating a centralized control plane (the stupidity formerly known as SDN).

A while ago someone sent me the following comment as part of a lengthy discussion focusing on Nexus 1000V: “My SE tells me that the latest 1000V release has rewritten the LACP code so that it operates entirely within the VEM. VSM will be out of the picture for LACP negotiations. I guess there have been problems.”

If you’re not convinced you should be running LACP between the ESX hosts and the physical switches, read this one (and this one). Ready? Let’s go.
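
To make the discussion a bit more tangible, here’s a minimal sketch of what an LACP-enabled uplink port profile looked like on the 1000V. The profile name is made up, and the lacp offload command and exact syntax varied across releases, so treat this as an illustration, not a reference:

    feature lacp
    lacp offload                        ! run LACP negotiation in the VEM, not the VSM
    !
    port-profile type ethernet UPLINK   ! hypothetical uplink profile name
      channel-group auto mode active    ! active LACP toward the physical switch
      no shutdown
      state enabled

With the offload in place, an ESX host can (re)negotiate its port channel even when the VSM is unreachable, which is precisely the point when your control plane rides in-band across the very links LACP is supposed to bring up.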


VXLAN, OTV and LISP

Immediately after VXLAN was announced @ VMworld, the twittersphere erupted in speculation and questions, many of them focusing on how VXLAN relates to OTV and LISP, and why we might need a new encapsulation method.

VXLAN, OTV and LISP are point solutions targeting different markets. VXLAN is an IaaS infrastructure solution, OTV is an enterprise L2 DCI solution and LISP is ... whatever you want it to be.


VXLAN: MAC-over-IP-based vCloud networking

In one of my vCloud Director Networking Infrastructure rants I wrote “if they had decided to use IP encapsulation, I would have applauded.” It’s time to applaud: Cisco has just demonstrated Nexus 1000V supporting MAC-over-IP encapsulation for vCloud Director isolated networks at VMworld, solving at least some of the scalability problems MAC-in-MAC encapsulation has.

Once the new release becomes available, the Nexus 1000V VEM will be able to encapsulate MAC frames generated by virtual machines residing in isolated segments into UDP packets exchanged between VEMs.
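
The configuration model demonstrated at VMworld maps an isolated segment into a bridge domain with a 24-bit segment ID and a multicast group used for flooding. A rough sketch with hypothetical names and addresses (the syntax is release-specific, so take it as an illustration):

    feature segmentation                  ! MAC-over-UDP (VXLAN-style) segments
    !
    bridge-domain Tenant-Red              ! hypothetical tenant segment
      segment id 5000                     ! 24-bit segment ID replaces the VLAN tag
      group 239.1.1.1                     ! multicast group carrying flooded (BUM) traffic
    !
    port-profile type vethernet Red-VMs
      switchport mode access
      switchport access bridge-domain Tenant-Red
      no shutdown
      state enabled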


FCoE networking elements classification

When I (somewhat jokingly) wrote about dense- and sparse-mode FCoE, I had no idea someone would try to extend the analogy to all possible FCoE topologies like Tony Bourke did. Anyhow, now that the cat is out of the bag, let’s state the obvious: enumerating all possible FCoE topologies is like trying to list all possible combinations of NAT, IP routing over at least two L2 technologies, and bridging. It can be done, but the best one can reasonably hope for is a list of supported topologies from various vendors.

However, it might make sense to give you a series of questions to ask the vendors offering FCoE gear to help you classify what their devices actually do.


BGP Next Hop Processing

Following my IBGP or EBGP in an enterprise network post, a few people have asked for a more graphical explanation of the IBGP/EBGP differences. Apart from the obvious ones (the AS path does not change inside an AS) and the more arcane ones (local preference is propagated only on IBGP sessions; the MED of an EBGP route is not propagated to other EBGP neighbors), the most important difference between IBGP and EBGP is BGP next-hop processing.
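
In a nutshell: an EBGP speaker rewrites the next hop to its own address when advertising a route, while an IBGP speaker propagates it unchanged, which is why edge routers commonly use next-hop-self on IBGP sessions. A minimal IOS-style illustration (addresses and AS numbers made up):

    router bgp 65000
     neighbor 192.0.2.1 remote-as 64500   ! EBGP: next hop is rewritten to the local
                                          ! interface address on advertisement
     neighbor 10.0.0.2 remote-as 65000    ! IBGP: next hop is propagated unchanged ...
     neighbor 10.0.0.2 next-hop-self      ! ... unless the edge router rewrites it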


Interesting links (2011-08-28)

Most persuasive argument of the week: “Are traffic charges needed to avert a coming capex catastrophe?” by Robert Kenny. This is how you rebut the claims of the greedy Service Providers (and their hired guns), not with the hysterical screaming and spitting perfected by some net neutrality zealots.

Most insightful talk: An Attempt to Motivate and Clarify Software-Defined Networking by Scott Shenker. While he’s handwaving across a lot of details, the framework does make sense.


DMVPN as a Backup for MPLS/VPN

SK left a long comment to my More OSPF-over-DMVPN Questions post describing a scenario I find quite often in enterprise networks:

  • Primary connectivity is provided by an MPLS/VPN service provider;
  • Backup connectivity should use DMVPN;
  • OSPF is used as the routing protocol;
  • MPLS/VPN provider advertises inter-site routes as external OSPF routes, making it hard to properly design the backup connectivity.

If you’re familiar with the way MPLS/VPN handles OSPF-in-VRF, you’re probably already asking the question, “How could the inter-site OSPF routes ever appear as E1/E2 routes?”
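
The root cause is worth spelling out: OSPF always prefers intra-area routes over external ones, regardless of cost, so if the DMVPN tunnel delivers intra-area routes while the MPLS/VPN provider delivers E1/E2 routes, the “backup” path always wins. Tweaking the tunnel cost, as in this made-up hub-side sketch, doesn’t help:

    interface Tunnel0
     ip address 172.16.0.1 255.255.255.0  ! hypothetical DMVPN subnet
     ip nhrp map multicast dynamic
     ip nhrp network-id 1
     ip ospf network point-to-multipoint
     ip ospf cost 10000                   ! irrelevant: intra-area routes beat
                                          ! E1/E2 routes no matter what the cost is
     tunnel source GigabitEthernet0/1
     tunnel mode gre multipoint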


IBGP or EBGP in an enterprise network?

I got the following question from one of my readers:

I recently started working at a very large enterprise and learnt that the network uses BGP internally. Running IBGP internally is not that unexpected, but after some further inquiry it seems that we are running EBGP internally. I must admit I'm a little surprised about the use of EBGP internally and I wanted to know your thoughts on it.

Although they are part of the same protocol, IBGP and EBGP solve two completely different problems; both of them can be used very successfully in a large enterprise network.
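
For illustration (made-up addresses and AS numbers, and obviously two alternative designs, not one box): EBGP between layers or pods gives you loop prevention through the AS path with no IGP or full mesh required, while IBGP within a single AS needs an IGP for next-hop reachability plus a full mesh or route reflectors:

    ! EBGP design: every layer (or pod) is its own autonomous system
    router bgp 65001
     neighbor 10.1.1.2 remote-as 65002    ! AS path provides loop prevention
    !
    ! IBGP design: one AS, route reflectors instead of a full mesh
    router bgp 65000
     neighbor 10.2.2.2 remote-as 65000
     neighbor 10.2.2.2 route-reflector-client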


BGP/IGP Network Design Principles

In the next few days, I’ll write about some of the interesting topics we discussed during last week’s fantastic on-site workshop with Ian Castleman and his team. To get us started, here’s a short video describing BGP/IGP network design principles. It’s taken straight from my Building IPv6 Service Provider Core webinar (recording), but the principles apply equally well to large enterprise networks.
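
The gist of those principles, in a made-up IOS-style sketch: keep the IGP lean (core links and loopbacks only) and let BGP carry everything else:

    router ospf 1
     network 10.255.0.0 0.0.255.255 area 0    ! IGP carries only infrastructure prefixes
    !
    router bgp 65000
     neighbor 10.255.1.2 remote-as 65000
     neighbor 10.255.1.2 update-source Loopback0
     neighbor 10.255.1.2 next-hop-self        ! customer/external routes stay in BGP
     network 198.51.100.0 mask 255.255.255.0  ! example customer prefix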


Soft Switching Might not Scale, but We Need It

Following a series of soft switching articles written by Nicira engineers (hint: they’re using an approach similar to the one taken by Juniper’s QFabric marketing team), Greg Ferro wrote a scathing Soft Switching Fails at Scale reply.

While I agree with many of his arguments, the sad truth is that with the current state of server infrastructure virtualization we need soft switching regardless of the hardware vendors’ claims about the benefits of 802.1Qbg (EVB/VEPA), 802.1Qbh (port extenders) or VM-FEX.


Quotes of the week

I’ve spent the last few days with a fantastic group of highly skilled networking engineers (can’t share the details, but you know who you are) discussing the topics I like most: BGP, MPLS, MPLS Traffic Engineering, and IPv6 in Service Provider environments.

One of the problems we were trying to solve was a clean split of a POP into two sites, retaining redundancy without adding too much extra equipment. The quest for maximum redundancy nudged me to propose the unimaginable: a layer-2 interconnect between four tightly controlled routers running BGP. Even that got shot down with a memorable quote from the senior network architect:


DMVPN Deployment Success Story

Warning: totally shameless plug ahead. You might want to stop reading right now.

Every now and then one of the engineers listening to my webinars shares a nice success story with me. One of them wrote:

I'm doing a DMVPN deployment, and Cisco design docs just don’t cover dual ISPs for the spokes, hence I thought I’d give your webinars/configs a try.

... and a bit later (after going through the configs that you get with the DMVPN webinar):
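
For readers wondering what “dual ISPs for the spokes” means in practice: the usual approach is two tunnel interfaces on the spoke, each sourced from a different uplink. Everything below (names, addresses, NHRP details) is a made-up sketch, not the webinar configs:

    interface Tunnel1
     ip address 172.16.1.11 255.255.255.0
     ip nhrp map 172.16.1.1 192.0.2.1      ! hub's public address reached via ISP A
     ip nhrp map multicast 192.0.2.1
     ip nhrp nhs 172.16.1.1
     ip nhrp network-id 1
     tunnel source GigabitEthernet0/1      ! ISP A uplink
     tunnel mode gre multipoint
    !
    interface Tunnel2
     ip address 172.16.2.11 255.255.255.0
     ip nhrp map 172.16.2.1 203.0.113.1    ! hub's public address reached via ISP B
     ip nhrp map multicast 203.0.113.1
     ip nhrp nhs 172.16.2.1
     ip nhrp network-id 2
     tunnel source GigabitEthernet0/2      ! ISP B uplink
     tunnel mode gre multipoint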


VM-FEX – not as convoluted as it looks

Update 2021-01-03: As far as I understand, VM-FEX died together with Cisco Nexus 1000v. I might be wrong and the zombie is still kicking...

Reading Cisco’s marketing materials, VM-FEX (the feature formerly known as VN-Link before someone went on a FEX-branding spree) seems like a fantastic idea: VMs running in an ESX host are connected directly to virtual physical NICs offered by the Palo adapter, and then through point-to-point virtual links to the upstream switch, where you can deploy all sorts of features the virtual switch embedded in the ESX host still cannot do. As you might imagine, the reality behind the scenes is more complex.
