Category: design

Feedback: Designing Active/Active and Disaster Recovery Data Centers

In the Designing Active/Active and Disaster Recovery Data Centers webinar, I tried to give networking engineers a high-level overview of the challenges one might face when designing a highly-available application stack, and used that information to show why common “solutions” like stretched VLANs make little sense if one cares about application availability (as opposed to an auditor’s report). Some (customer) engineers like that approach; here’s the feedback I received not long ago:

As ever, Ivan cuts to the quick and provides not just the logical basis for a given design, but a wealth of advice, pointers, gotchas stemming from his extensive real-world experience. What is most valuable to me are those “gotchas” and what NOT to do, again, logically explained. You won’t find better material IMHO.

Please note that I’m talking about generic multi-site scenarios. From the high-level connectivity and application architecture perspective, there’s not much difference between a multi-site on-premises (or colocation) deployment, a hybrid cloud, or a multicloud deployment.


Design Clinic: Small-Site IPv6 Multihoming

I decided to stop caring about IPv6 when the protocol became old enough to buy its own beer (now even in the US), but its second-system effects keep coming back to haunt us. Here’s a question I got for the February 2023 ipSpace.net Design Clinic:

How can we do IPv6 networking in a small/medium enterprise if we’re using multiple ISPs and don’t have our own Provider Independent IPv6 allocation? I’ve brainstormed this with people far more knowledgeable than me on IPv6, and listened to IPv6 Buzz episodes discussing it, but I still can’t figure it out.
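
To make the problem concrete, here’s a short illustration of why this is hard (the prefixes are made-up documentation addresses):

    ISP-A delegates 2001:db8:a::/48, ISP-B delegates 2001:db8:b::/48
    → hosts get addresses from both prefixes (one per upstream)
    → a host picks a source address from 2001:db8:a::/48, but the
      session happens to be routed out through ISP-B
    → ISP-B’s ingress filters (BCP 38) drop the packets as spoofed
    → without a PI allocation there’s no single prefix both ISPs will
      accept, so source address selection becomes a routing problem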


BGP in ipSpace.net Design Clinic

The ipSpace.net Design Clinic has been running for a bit over a year. We’ve covered tons of interesting technologies and design challenges, resulting in over 13 hours of content (so far), including several BGP-related discussions:

All the Design Clinic discussions are available with Standard or Expert ipSpace.net Subscription, and anyone can submit new design/discussion challenges.


EVPN/VXLAN or Bridged Data Center Fabric?

An attendee in the Building Next-Generation Data Center online course sent me an interesting dilemma:

Some customers don’t like EVPN because of its complexity (it requires knowledge of BGP, symmetric/asymmetric IRB, ARP suppression, VRFs, RT/RD, and so on). They agree that EVPN brings more stability and better broadcast traffic optimization, but it still won’t save the data center from broadcast storms, because the protection methods are the same for both solutions (minimize L2 segments, storm control).

We’ll deal with the unnecessary EVPN-induced complexity some other time; today, let’s start with a few intro-level details.
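
To illustrate the “protection methods are the same” point: storm control is an interface-level knob that looks identical regardless of whether the port belongs to a plain bridged fabric or an EVPN/VXLAN fabric. A minimal NX-OS-style sketch (interface name and thresholds are purely illustrative):

    interface Ethernet1/1
      description server-facing port
      switchport
      switchport access vlan 10
      storm-control broadcast level 1.00
      storm-control multicast level 1.00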


VXLAN-Focused Design Clinic in June 2022

ipSpace.net subscribers are probably already familiar with the Design Clinic: a monthly Zoom call in which we discuss real-life design and technology challenges. I started it in September 2021, and it quickly became reasonably successful; we’ve covered almost two dozen topics so far.

Most of the challenges contributed for the June 2022 session were focused on VXLAN use cases (quite fitting considering I just updated the VXLAN Technical Deep Dive webinar), including:

  • Can we implement Data Center Interconnect (DCI) with VXLAN? (Yes, but…)
  • Can we run VXLAN over SD-WAN (and does it make sense)? (Yes/No)
  • What happened to traditional MPLS/VPN Enterprise core and can we use VXLAN/EVPN instead? (Still there/Maybe)
  • Should we use routers or switches as data center WAN edge devices, and how do we integrate them with VXLAN/EVPN data center fabric? (Yes 😊)

For more details, join us on June 6th. There’s just a minor gotcha: you have to be an active ipSpace.net subscriber to do it.


Combining BGP and IGP in an Enterprise Network

Syed Khalid Ali left the following question on an old blog post describing the use of IBGP and EBGP in an enterprise network:

From an enterprise customer perspective, should I run IBGP, IBGP plus an IGP (OSPF/IS-IS/EIGRP), or just an IGP with mutual redistribution on the edge routers? I was hoping you could share some thoughtful insight on when to select one over the other.

We covered tons of relevant details in the January 2022 Design Clinic; here’s the CliffsNotes version. Keep in mind that the road to hell (and broken designs) is paved with great recipes and best practices, and that I’m presenting a black-and-white picture because I don’t feel like transcribing the discussion we had into an oversized blog post. People wrote books on this topic; I’m pretty sure you can search for Russ White and find a few of them.
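
For illustration, one classic combination we discussed (the IGP carries infrastructure routes like router loopbacks, while IBGP on top carries everything else) might look something like this in FRRouting; the AS number and addresses are made up:

    router ospf
     network 10.0.0.0/16 area 0
    !
    router bgp 65000
     neighbor 10.0.0.2 remote-as 65000
     neighbor 10.0.0.2 update-source lo
     address-family ipv4 unicast
      network 192.0.2.0/24
     exit-address-family

The IGP provides fast convergence for the small set of core prefixes, while BGP carries the bulk of the edge prefixes without burdening the IGP.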

Finally, there’s no good substitute for understanding how things work (which brings me to another webinar ;).


ICMP Redirects Considered Harmful

One of my readers sent me an intriguing challenge based on the following design:

  • He has a data center with two core switches (C1 and C2) and two Cisco Nexus edge switches (E1 and E2).
  • He’s using static default routing from core to edge switches with HSRP on the edge switches.
  • E1 is the active HSRP gateway connected to the primary WAN link.

The full blog post includes a simplified diagram of this setup.
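
The described design translates roughly into this configuration sketch (Cisco-style syntax, made-up addresses):

    ! C1/C2 (core): static default route toward the HSRP virtual IP
    ip route 0.0.0.0 0.0.0.0 192.0.2.1
    !
    ! E1 (active HSRP gateway on the primary WAN link)
    interface Vlan10
     ip address 192.0.2.2 255.255.255.0
     standby 1 ip 192.0.2.1
     standby 1 priority 110
     standby 1 preempt
     no ip redirects    ! disabling ICMP redirects is the knob the title hints at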


Running BGP between Virtual Machines and Data Center Fabric

Got this question from one of my readers:

When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere, KVM, or OpenStack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?

Short answer: you don’t.

Kubernetes was designed in a way that makes worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart. From a purely academic perspective, there’s no reason to migrate VMs running Kubernetes.
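
In practice, that means the standard maintenance workflow is to drain the worker node and restart it instead of live-migrating it. A typical sequence (the node name is hypothetical):

    kubectl cordon worker-3                       # stop scheduling new pods on the node
    kubectl drain worker-3 --ignore-daemonsets    # evict running pods; the cluster reschedules them elsewhere
    # ... perform host/VM maintenance, reboot, etc. ...
    kubectl uncordon worker-3                     # put the node back into service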


Optimizing the Time-to-First-Byte

I don’t think I’ve ever met someone saying “I wish my web application would run slower.” Everyone wants their stuff to run faster, but most environments are not willing to pay the cost (rearchitecting the application). Welcome to the wonderful world of PowerPoint “solutions”.

The obvious answer: The Cloud. Let’s move our web servers closer to the clients – deploy them in various cloud regions around the world. Mission accomplished.

Not really; the laws of physics (latency in particular) will kill your wonderful idea. I wrote about the underlying problems years ago and wrote another blog post focused on the misconceptions of cloudbursting, but I’m still getting questions along the same lines. Time for another blog post, this time with even more diagrams.
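
To put rough (purely illustrative) numbers on the problem: assume you move the web server close to the client, the database stays in the original region, and the page needs five sequential database queries.

    client ↔ web server RTT:     10 ms   (web server moved close to the client)
    web server ↔ database RTT:  100 ms   (database stayed home)

    TTFB ≈ 3 × 10 ms   (TCP handshake + TLS 1.3 + HTTP request)
         + 5 × 100 ms  (sequential database queries over the long-haul link)
         ≈ 530 ms

Compare that to roughly 3 × 100 ms ≈ 300 ms when everything stays in the original region (local database queries add almost nothing): moving just the web server made things worse.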


Circular Dependencies Considered Harmful

A while ago, my friend Nicola Modena sent me another intriguing curveball:

Imagine a CTO who has invested millions in a super-secure data center and wants to consolidate all compute workloads. If you were asked to run a BGP Route Reflector as a VM in that environment, and wanted to bring OSPF or IS-IS to that box to enable BGP Optimal Route Reflection (ORR), would you use a GRE tunnel to avoid a dedicated VLAN (or boring other hosts with routing protocol hello messages)?
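
For context, the setup from the question would look something like this on a fabric router (Cisco-style sketch, made-up addresses):

    interface Tunnel0
     ip address 10.255.0.1 255.255.255.252
     ip ospf 1 area 0                  ! extend the IGP to the RR VM without a dedicated VLAN
     tunnel source Loopback0
     tunnel destination 192.0.2.10     ! the route-reflector VM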

While there might be good reasons for doing that, my first knee-jerk reaction was:


Reusing Underlay Network for Infrastructure Services

Boris Lazarov sent me an excellent question:

Does it make sense, and are there any inherent problems from a design perspective, in using the underlay not only for transporting overlay packets but also for some services? For example: VMware cluster traffic, vMotion, VXLAN traffic, and some basic infrastructure services that are a prerequisite for the rest (like DNS).

Before answering it, let’s define some terminology, which will inevitably lead us to the it’s tunnels all the way down end state.


Building a Small Data Center Fabric with Four Switches

One of my subscribers has to build a small data center fabric that’s just a tad too big for a two-switch design.

For my data center, I would need two 48-port 10GBASE-T switches and two 48-port 10/25G fiber switches. So I was watching the Small Fabrics and Lower-Speed Interfaces part of Physical Fabric Design to make up my mind. There you talk about the possibility of doing a leaf-and-spine fabric with four switches and connecting servers to the spine switches.

A picture is worth a thousand words, so here’s the diagram of what I had in mind:


New: ipSpace.net Design Clinic

In early September, I started yet another project that’s been on the back burner for over a year: ipSpace.net Design Clinic (aka Ask Me Anything Reasonable in a more structured format). Instead of collecting questions and answering them in a podcast (example: Deep Questions podcast), I decided to make it more interactive with a live audience and real-time discussions. I also wanted to keep it valuable to anyone interested in watching the recordings, so we won’t discuss obscure failures of broken designs or dirty tricks that should have remained in CCIE lab exams.
