BGP/IGP Network Design Principles

In the next few days, I’ll write about some of the interesting topics we’ve been discussing during last week’s fantastic on-site workshop with Ian Castleman and his team. To get us started, here’s a short video describing BGP/IGP network design principles. It’s taken straight from my Building IPv6 Service Provider Core webinar (recording), but the principles apply equally well to large enterprise networks.
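A principle frequently associated with this kind of design is to keep the IGP limited to infrastructure routes (core links and loopbacks) and to carry customer and Internet prefixes in BGP. Here’s a minimal IOS-style sketch of that idea; all addresses, AS numbers and interface names are invented for illustration.

! Core-facing IGP: advertise only core links and loopbacks
router ospf 1
 passive-interface default
 no passive-interface GigabitEthernet0/0
 network 10.0.0.0 0.0.255.255 area 0
!
! Customer and Internet prefixes go into BGP, not into the IGP
router bgp 65000
 neighbor 10.0.0.1 remote-as 65000
 neighbor 10.0.0.1 update-source Loopback0
 ! next-hop-self keeps external next hops (and external subnets) out of the IGP
 neighbor 10.0.0.1 next-hop-self
 network 192.0.2.0 mask 255.255.255.0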


Soft Switching Might not Scale, but We Need It

Following a series of soft switching articles written by Nicira engineers (hint: they’re using an approach similar to the one taken by Juniper’s QFabric marketing team), Greg Ferro wrote a scathing Soft Switching Fails at Scale reply.

While I agree with many of his arguments, the sad truth is that with the current state of server infrastructure virtualization we need soft switching regardless of the hardware vendors’ claims about the benefits of 802.1Qbg (EVB/VEPA), 802.1Qbh (port extenders) or VM-FEX.


Quotes of the week

I’ve spent the last few days with a fantastic group of highly skilled networking engineers (can’t share the details, but you know who you are) discussing the topics I like most: BGP, MPLS, MPLS Traffic Engineering and IPv6 in Service Provider environments.

One of the problems we were trying to solve was a clean split of a POP into two sites, retaining redundancy without adding too much extra equipment. The quest for maximum redundancy nudged me to propose the unimaginable: a layer-2 interconnect between four tightly controlled routers running BGP, but even that got shot down with a memorable quote from the senior network architect:


DMVPN Deployment Success Story

Warning: totally shameless plug ahead. You might want to stop reading right now.

Every now and then one of the engineers listening to my webinars shares a nice success story with me. One of them wrote:

I'm doing a DMVPN deployment and Cisco design docs just don’t cover dual ISPs for the spokes, hence I thought I’d give your webinars/configs a try.

... and a bit later (after going through the configs that you get with the DMVPN webinar):

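For context, a spoke with two ISP uplinks usually ends up with two DMVPN tunnel interfaces, one per transport network (one DMVPN cloud per ISP). Here’s a minimal IOS-style sketch of that idea; the hub NBMA addresses, tunnel subnets and interface names are invented, these are not the webinar configs, and IPsec protection is omitted.

! Spoke with two ISP uplinks: one DMVPN tunnel per ISP
interface Tunnel1
 ip address 10.0.1.11 255.255.255.0
 ip nhrp network-id 1
 ip nhrp map 10.0.1.1 192.0.2.1
 ip nhrp map multicast 192.0.2.1
 ip nhrp nhs 10.0.1.1
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 1
!
interface Tunnel2
 ip address 10.0.2.11 255.255.255.0
 ip nhrp network-id 2
 ip nhrp map 10.0.2.1 198.51.100.1
 ip nhrp map multicast 198.51.100.1
 ip nhrp nhs 10.0.2.1
 tunnel source GigabitEthernet0/2
 tunnel mode gre multipoint
 tunnel key 2

In real life you’d also have to make sure each tunnel’s encapsulated packets leave through the matching ISP (front-door VRFs or policy routing are the usual tools) and add the IPsec protection left out of the sketch.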

VM-FEX – not as convoluted as it looks

Update 2021-01-03: As far as I understand, VM-FEX died together with Cisco Nexus 1000v. I might be wrong and the zombie is still kicking...

Reading Cisco’s marketing materials, VM-FEX (the feature probably known as VN-Link before someone went on a FEX-branding spree) seems like a fantastic idea: VMs running in an ESX host are connected directly to virtual physical NICs offered by the Palo adapter, and then through point-to-point virtual links to the upstream switch, where you can deploy all sorts of features the virtual switch embedded in the ESX host still cannot offer. As you might imagine, the reality behind the scenes is more complex.


The Road to Complex Designs Is Paved with Great Recipes

A while ago someone asked me to help him troubleshoot his Internet connectivity. He was experiencing totally weird symptoms that turned out to be a mix of MTU problems, asymmetric routing (probably combined with RPF checks on the ISP side) and non-routable PE-CE subnets. While trying to figure out what might be wrong from the router configurations, I was surprised by the amount of complexity he’d managed to introduce into his DMZ design by following recipes and best practices we all dole out in blog posts, textbooks and training materials.


Interesting links (2011-08-14)

My Inbox is overflowing (yet again); here are some great links from last week:

Data centers and summer clouds

F5 is addressing an interesting problem with its latest software release: DNS DoS attacks. In other posts, Lori MacVittie describes cloud configuration management and troubleshooting problems.

Matthew Norwood describes an interesting product from HP – you could probably build a small data center with a single blade enclosure.


More OSPF-over-DMVPN Questions

After weeks of waiting, perfect summer weather finally arrived … and it’s awfully hard to write blog posts that make even marginal sense when you’re dead tired from day-long mountain biking, so I’ll just recap the conversation I had with Brian a few days ago. He asked:

How would I set up a (dual) hub running OSPF with phase 1 spokes and prevent all spoke routes from being seen at other spokes? Think service provider environment.

If you want to have a scalable DMVPN environment, you have to put all the spokes connected to the same hub in a single IP subnet (otherwise, you’ll end up with point-to-point tunnels), which also means they have to be in a single OSPF area and would thus see each other’s LSAs.
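To make the constraint more tangible, here’s a minimal IOS-style sketch of a Phase 1 setup matching that description (single hub, shared tunnel subnet, single area); all addresses and interface names are invented and IPsec protection is omitted.

! Hub: mGRE tunnel, all spokes share the 10.0.0.0/24 tunnel subnet
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip nhrp network-id 1
 ip nhrp map multicast dynamic
 ip ospf network point-to-multipoint
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
!
! Phase 1 spoke: point-to-point GRE toward the hub, same subnet and area
interface Tunnel0
 ip address 10.0.0.11 255.255.255.0
 ip nhrp network-id 1
 ip nhrp map 10.0.0.1 192.0.2.1
 ip nhrp nhs 10.0.0.1
 ! match the hub’s OSPF network type (and hello timers)
 ip ospf network point-to-multipoint
 tunnel source GigabitEthernet0/1
 tunnel destination 192.0.2.1
!
! On hub and spokes alike: the whole tunnel subnet sits in one area
router ospf 1
 network 10.0.0.0 0.0.0.255 area 1

Because every spoke sits in the same subnet and the same area, it receives the LSAs describing all the other spokes, which is exactly what Brian wanted to avoid.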


Stop reinventing the wheel and look around

Building large-scale VLANs to support IaaS services is every data center designer’s nightmare and the low number of VLANs supported by some data center gear is not helping anyone. However, as Anonymous Coward pointed out in a comment to my Building a Greenfield Data Center post, service providers have been building very large (and somewhat stable) layer-2 transport networks for years. It does seem like someone is trying to reinvent the wheel (and/or sell us more gear).


High Availability Fallacies

I’ve already written about the stupidities of risking the stability of two data centers to enable live migration of “mission critical” VMs between them. Now let’s take the discussion a step further – after hearing how critical the VM that the server or application team wants to migrate is, you might be tempted to ask “and how do you ensure its high availability the rest of the time?” The response will likely be along the lines of “We’re using VMware High Availability” or, even more proudly, “We’re using VMware Fault Tolerance to ensure even a hardware failure can’t bring it down.”


Imagine the Ruckus When the Hypervisor Vendors Wake Up

It seems that most networking vendors consider the Flat Earth architectures the new bonanza. Everyone is running to join the gold rush, from Cisco’s FabricPath and Brocade’s VCS to HP’s IRF and Juniper’s upcoming QFabric. As always, the standardization bodies are following the industry with a large buffet of standards to choose from: TRILL, 802.1aq (SPB), 802.1Qbg (EVB) and 802.1Qbh (port extenders).


Building a Greenfield Data Center

The following design challenge landed in my Inbox not too long ago:

My organization is in the process of building a completely new data center from the ground up (new hardware, software, protocols ...). We will currently start with one site but may move to two for DR purposes. What DC technologies should we be looking at implementing to build a stable infrastructure that will scale and support technologies you feel will play a big role in the future?

In an ideal world, my answer would begin with “Start with the applications.”
