VPLS versus OTV for L2 Data Center Interconnect (DCI)

DJ Spry asked an interesting question in a comment to my MPLS/VPN in DCI designs post: “Why would one choose OTV over MPLS/VPN?” The answer is simple: it depends on what you need. MPLS/VPN provides path isolation between layer-3 domains (routed networks) across MPLS or IP infrastructure, whereas OTV provides layer-2 transport (and VLAN-based path isolation) across IP infrastructure. However, it does make sense to compare OTV with VPLS (which was DJ Spry’s next question). Apart from the obvious platform dependence (OTV runs on the Nexus 7000, VPLS runs on the Catalyst 6500/Cisco 7600 and a few other routers), which might disappear once ASR1K gets the rumored OTV support, there’s a huge gap in functionality and complexity between the two layer-2 transport technologies.
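To illustrate the complexity gap: extending a set of VLANs over a multicast-enabled IP core with OTV takes little more than the following hedged NX-OS sketch (the interface name, site VLAN, multicast groups and VLAN range are made-up placeholders):

```
feature otv

otv site-vlan 99                    ! VLAN used to discover other OTV edge devices in the same site

interface Overlay1
  otv join-interface Ethernet1/1    ! uplink into the IP transport core
  otv control-group 239.1.1.1       ! multicast group carrying the OTV (IS-IS) control plane
  otv data-group 232.1.1.0/28       ! SSM group range for multicast data traffic
  otv extend-vlan 100-150           ! VLANs stretched across the DCI
  no shutdown
```

A comparable VPLS deployment also needs a working MPLS control plane (LDP, and usually BGP) plus per-PE pseudowire or autodiscovery configuration before the first frame gets across.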


MPLS/VPN in Data Center Interconnect (DCI) Designs

Yesterday I was describing a dreamland in which hypervisor switches would use MPLS/VPN to implement seamless scalable VM mobility across IP+MPLS infrastructure. Today I’ll try to get back down to earth; there are exciting real-life designs using MPLS/VPN between data centers. You can implement them with Catalyst 6500/Cisco 7600 or ASR1K, and you’ll soon be able to do the same with Nexus 7000.

Most data centers have numerous security zones, from the external network, DMZ, web servers and application servers to database servers, IP-based storage and network management. When you design active/active data centers, you want to keep the security zones strictly separate; the “usual” solution proposed by the L2-crazed crowd is to bridge multiple VLANs across the DCI infrastructure (in the next microsecond they start describing the beauties of their favorite L2 DCI technology).
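If the zones are routed (not bridged) between the data centers, each one can live in its own VRF. A minimal Cisco IOS sketch of the idea (VRF names, route targets and addresses are invented for illustration):

```
ip vrf DMZ
 rd 65000:10
 route-target both 65000:10
!
ip vrf Database
 rd 65000:20
 route-target both 65000:20
!
interface GigabitEthernet0/1.10
 description DMZ security zone
 encapsulation dot1Q 10
 ip vrf forwarding DMZ
 ip address 192.0.2.1 255.255.255.0
```

Each VRF gets its own routing table end-to-end, so the zones stay separated without a single VLAN being stretched between the data centers.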


(v)Cloud Architects, ever heard of MPLS?

Duncan Epping, the author of the fantastic Yellow Bricks virtualization blog, tweeted a very valid question following my vCDNI scalability (or lack thereof) post: “What you feel would be suitable alternative to vCD-NI as clearly VLANs will not scale well?”

Let’s start with the very basics: modern data center switches support anywhere between 1K and 4K VLANs (the theoretical maximum imposed by the 12-bit VLAN ID in 802.1Q framing: 2^12 = 4096 values, two of which are reserved). If you need more than 1K VLANs, you’ve either totally messed up your network design or you’re a service provider offering multi-tenant services (recently relabeled by your marketing department as IaaS cloud). Service providers have had to cope with multi-tenant networks for decades ... only they never realized those were multi-tenant networks, because they called them VPNs. Maybe, just maybe, there’s a technology out there that’s been field-proven, known to scale, and works over switched Ethernet.


vCloud Director Network Isolation (vCDNI) scalability

When VMware launched vCloud Director Network Isolation (vCDNI), Greg Ferro (of Packet Pushers Podcast fame) and I were very skeptical about its scaling capabilities, all the more so because it uses MAC-in-MAC encapsulation, and bridging was never known for its scaling properties. However, Greg’s VMware contacts claimed that vCDNI scales to thousands of physical servers, and Greg wanted to do a podcast about it.

As always, we prepared a few questions in advance, including “How are broadcasts, multicasts and unknown unicasts handled in vCDNI-based private networks?” and “What happens when a single VM goes crazy?” For one reason or another, the podcast never happened. After analyzing Wireshark traces of vCDNI traffic, I probably know why.


OpenFlow: BIOS Does Not a Server Make

Last week Greg (@etherealmind) Ferro invited me to the OpenFlow Packet Pushers podcast with Matt Davey. I was pleasantly surprised by Matt’s realistic attitude (you should really listen to the whole podcast). It was nice to hear that they’re running a country-wide pilot with OpenFlow-enabled switches deployed at several universities, and some of the applications he mentioned (for example, the capability to download ACLs into the switch from your own customized application) definitely tickled my inner geek. However, I’m even more convinced that the brouhaha surrounding the Open Networking Foundation has little grounding in the realities of OpenFlow.


DMVPN: How to Get from Zero to Hero?

John (not a real name for obvious reasons) sent me the following e-mail:

I am a Sys Admin who has recently assumed duties as a Net Eng. I am currently expected to perform responsibilities utilizing DMVPN with Cisco routers though I have never worked with DMVPN and have very little router experience. I started with your DMVPN webinar and it has been extremely helpful, but there’s still a huge gap between what I know so far and what I need to know to work with DMVPN.

In a few days I will deploy to Afghanistan to start work for a customer and I was hoping you might be able to give me some advice on the matter, perhaps some how-to documents or good books to purchase that will assist in the huge learning curve.


Brocade VCS fabric has almost-perfect load balancing

Short summary for the differently-attentive: the proprietary load balancing Brocade uses over ISL trunks in its VCS fabric is almost perfect (and way better for high-throughput sessions than what you get with other link aggregation methods).

During the Data Center Fabrics Packet Pushers podcast we discussed load balancing across aggregated inter-switch links and Brocade’s claim that its “chip-based balancing” performs better than standard link aggregation group (LAG) load balancing. Ever skeptical, I pointed out that all LAG load balancing is chip-based (every vendor does high-speed switching in hardware). I also added that I would be mightily impressed if they had actually solved intra-flow packet scheduling.
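For context: standard LAG load balancing hashes on packet header fields, so every flow is pinned to a single member link and a single high-throughput session can never use more than one link’s worth of bandwidth. All you typically configure is the set of hash inputs, as in this IOS sketch (the keyword choice is illustrative):

```
! Select the header fields fed into the LAG hash; the hash result
! pins each flow to exactly one member link of the port channel.
port-channel load-balance src-dst-ip
```

Beating that behavior means spraying packets of the same flow across multiple links while somehow avoiding out-of-order delivery, which is precisely the intra-flow packet scheduling problem mentioned above.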


Interesting links (2011-04-03)

General networking

Protecting the router control plane (RFC 6192): among other goodies, this document contains a high-level description of high-speed routers (sometimes known as layer-3 switches).

Is the network administrator role going away? I’ve heard the “something is going away” prediction too often in the last 20 years. We just end up doing other (more complex) things.

8 hints for using DNS more effectively – another great post from The Lone Sysadmin.


Cisco and Brocade working together on interoperable TRILL products

Reading the blogosphere catfights erupting at the time Brocade announced their VCS fabric, one could never imagine networking vendors pushing toward interoperable implementations of their products. The recent announcements from Brocade and Cisco look way more promising: Brocade will implement standard TRILL with IS-IS and Cisco will include FSPF as an alternate FabricPath routing protocol in NX-OS to ensure interoperability with VCS fabric.

Even better, as a follow-up to the QFabric project, Cisco and Juniper are working together on a version of ICCP that will solve multi-chassis link aggregation problems in a standardized way. Long term, we can expect to see a legacy low-cost access switch connected with a single LAG bundle to a Nexus 5000 and a QFX3500, with both of them exchanging data over FSPF-enabled TRILL with a VDX switch from Brocade.


Open Networking Foundation – Fabric Craziness Reaches New Heights

Some of the biggest buyers of networking gear have decided to squeeze some extra discount out of the networking vendors and threatened them with an open-source alternative, hoping to repeat the Linux/Apache/MySQL/PHP saga that made it possible to build server farms out of low-cost commodity gear with almost zero licensing costs. They formed the Open Networking Foundation, found a convenient technology (OpenFlow), and launched another major entrant in the Buzzword Bingo – Software-Defined Networking (SDN).

Networking vendors, either trying to protect their margins by stalling the progress of this initiative or stampeding into another Wild West gold rush (hoping to unseat their bigger competitors with low-cost standards-based alternatives), have joined the foundation in hordes; the list of initial members (see the press release for details) reads like a Who’s Who in Networking.


NAT-PT is dead! Long live NAT-64!

I’m getting questions like this one all the time: “Where are we with NAT-PT? It was implemented in IOS quite a few years ago, but it has never made it into the ASA code.”

Bad news first: NAT-PT is dead. Repeat after me: NAT-PT is dead. Got it? OK.

More bad news: NAT-PT in Cisco IOS was seriously broken after they pulled fast switching code out of IOS. Whatever is left in Cisco IOS might be good enough for a proof-of-concept or early deployment trials, but not for a production-grade solution.
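The way forward is stateful NAT64 (usually combined with DNS64). For orientation only, here’s a hedged sketch of what IOS XE-style stateful NAT64 configuration looks like (prefixes, pool ranges and names are placeholders, not a tested config):

```
interface GigabitEthernet0/0/0
 description IPv6-only client side
 ipv6 address 2001:db8:1::1/64
 nat64 enable
!
interface GigabitEthernet0/0/1
 description IPv4 Internet side
 ip address 203.0.113.1 255.255.255.0
 nat64 enable
!
ipv6 access-list V6-CLIENTS
 permit ipv6 2001:db8:1::/64 any
!
nat64 prefix stateful 64:ff9b::/96                ! well-known NAT64 prefix (RFC 6052)
nat64 v4 pool V4-POOL 203.0.113.10 203.0.113.20
nat64 v6v4 list V6-CLIENTS pool V4-POOL overload  ! translate matching IPv6 clients to the pool
```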


MPLS/VPN-over-GRE-over-IPSec: Does It Really Work?

Short answer: yes, it does.

During the geeky chat we had just after we’d finished recording the Data Center Fabric Packet Pushers podcast, Kurt (@networkjanitor) Bales asked me whether the MPLS/VPN-over-DMVPN scenarios I’m describing in Enterprise MPLS/VPN Deployment webinar really work (they do seem a bit complex).

I always test the router configurations I use in my webinars and I usually share them with the attendees. The Enterprise MPLS/VPN Deployment webinar includes a complete set of router configurations covering 10 scenarios, including five different MPLS/VPN-over-DMVPN designs, so you can easily test them in your lab and verify that they do work. But what about a live deployment?
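For the curious, the crux of the MPLS/VPN-over-DMVPN designs is enabling MPLS on the mGRE tunnel interface; labeled VPN packets then ride inside GRE (and IPsec, if configured). A stripped-down hub-side sketch of one possible flavor (all names and addresses are invented, and plenty of NHRP/BGP glue is omitted):

```
interface Tunnel0
 description DMVPN hub
 ip address 172.16.0.1 255.255.255.0
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 mpls ip                            ! run label switching (LDP) over the mGRE tunnel
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN-PROF
```

With labels exchanged across the tunnel, the usual MP-BGP VPNv4 machinery between hub and spokes takes care of the per-VRF routing.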


TRILL/FabricPath – STP Integration

Every Data Center fabric technology has to integrate seamlessly with legacy equipment running the venerable Spanning Tree Protocol (STP) or one of its facelifted incarnations (for example, RSTP or MST). The alternative, called rip-and-replace when talking about other vendors’ boxes or synchronized upgrade when promoting your wares (no, I haven’t heard it yet, but I’m sure it’s coming), is simply indigestible to most data center architects.

TRILL and Cisco’s proprietary FabricPath take a very definitive stance: the new fabric is the backbone of the network, routing encapsulated layer-2 frames across bridged segments (TRILL) or a contiguous backbone (FabricPath). Both architectures segment the original STP domain into small chunks at the edges of the network.
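In FabricPath (to use the shipping example), that segmentation falls directly out of port configuration: core-facing ports switch FabricPath-encapsulated frames, while edge ports stay classic Ethernet and keep exchanging BPDUs with legacy switches. A rough NX-OS sketch (interface roles and VLAN numbers are placeholders):

```
feature-set fabricpath
!
vlan 100
 mode fabricpath                    ! VLAN transported across the fabric
!
interface Ethernet1/1
 description core-facing fabric link
 switchport mode fabricpath
!
interface Ethernet1/10
 description edge trunk toward a legacy STP switch
 switchport mode trunk
```

Each edge switch thus terminates STP on its classic Ethernet ports, keeping the legacy chunks small and independent of one another.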
