
FCoE between data centers? Forget it!

Was anyone trying to sell you the “wonderful” idea of running FCoE between Data Centers instead of FC-over-DWDM or FCIP? Sounds great ... until you figure out it won’t work. Ever ... or at least until switch vendors drastically increase interface buffers on the 10GE ports.

FCoE requires lossless Ethernet between its “routers” (Fibre Channel Forwarders – see Multihop FCoE 101 for more details), which can only be provided with Data Center Bridging (DCB) standards, specifically Priority Flow Control (PFC). However, if you want to have lossless Ethernet between two points, every layer-2 (or higher) device in the path has to support DCB, which probably rules out any existing layer-2+ solution (including Carrier Ethernet, pseudowires, VPLS or OTV). The only option is thus bridging over dark fiber or a DWDM wavelength.
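
To see why interface buffers are the showstopper, here’s a back-of-the-envelope Python sketch (my rounded numbers, not any vendor’s buffer-sizing formula) of how much no-drop headroom a PFC-enabled 10GE port needs as the distance grows:

```python
# Rough estimate of the no-drop headroom a 10GE port needs to stay lossless
# with Priority Flow Control over a long fiber link. All numbers are rounded
# assumptions (propagation delay ~5 microseconds per km of fiber).

LINK_BPS = 10e9                  # 10 Gigabit Ethernet
FIBER_DELAY_PER_KM = 5e-6        # seconds of one-way propagation per km
MAX_FRAME_BYTES = 2_240          # roughly a full-size FCoE frame

def pfc_headroom_bytes(distance_km: float) -> float:
    """Bytes that can still arrive after the receiver decides to send PAUSE:
    the sender keeps transmitting for a full round-trip time, plus a frame
    already being serialized in each direction."""
    rtt = 2 * distance_km * FIBER_DELAY_PER_KM
    return LINK_BPS * rtt / 8 + 2 * MAX_FRAME_BYTES

for km in (0.1, 10, 50, 100):
    print(f"{km:6.1f} km -> {pfc_headroom_bytes(km) / 1024:8.1f} KB per no-drop class")
```

At 100 km you’re already well past a megabyte of headroom per port for a single no-drop class, which is far more than typical 10GE switch ports offer today.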


FCoE, QCN and Frame Relay analogies

Just when I hoped we were finally getting somewhere with the FCoE/QCN discussion, Brocade managed to muddy the waters with its we-still-don’t-know-what-it-is announcement. Not surprisingly, networking consultants like my friend Greg Ferro of Etherealmind fame responded to the shenanigans with statements like “FCoE ... is a technology so mindbogglingly complicated that marketing people can argue over competing claims and all be correct.” Not true: the whole thing is exceedingly simple once you understand the architecture (and the marketing people have always had competing claims).

Pretend for a minute that FC ≈ IP and LAN bridging ≈ Frame Relay, teleport into this parallel universe and allow me to tell you the whole story once again in more familiar terms.


Nexus 1000V: another IPv6 #FAIL

Just stumbled across this unbelievable fact in the Nexus 1000V release notes:

IPV6 ACL rules are not supported.

My first reaction: “You must be kidding, right? Are we still in the 20th century?” ... and then it dawned on me: Nexus 1000V is using the NX-OS control plane and it’s still stuck in the 4.0 release, which did not support IPv6 ACLs (IPv6 support was added to NX-OS in release 4.1(2)).


Does FCoE need QCN (802.1Qau)?

One of the recurring religious FCoE-related debates of the last few months is undoubtedly “do you need QCN to run FCoE”, with Cisco adamantly claiming you don’t (hint: Nexus doesn’t support it) and HP claiming you do (hint: their switch software lacks an FC stack) ... and then there’s this recent announcement from Brocade (more about it in a future post). As is usually the case, Cisco and HP are both right ... depending on how you design your multi-hop FCoE network.


Data Center Bridging (DCB) Congestion Notification (802.1Qau)

The last (and the least popular) Data Center Bridging (DCB) standard tries to solve the problem of congestion in large bridged domains (PFC enables lossless transport and ETS standardizes DWRR queuing). To illustrate the need for congestion control, consider a simple example shown in the following diagram:

It came to my attention that a vendor might be using this blog post to justify the need for QCN in FCoE environments. Should that be the case, please make sure you also read about the difference between dense and sparse FCoE, the (lack of) need for QCN in FCoE and whether it makes sense to run FCoE over TRILL. Finally, consider how you’ll troubleshoot FCoE environments.
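
If you’re wondering what QCN actually does, here’s a minimal Python sketch of the idea: the congestion point samples its queue, computes a feedback value and tells the offending reaction point to slow down. The constants, scaling and quantization below are my illustrative assumptions, not values lifted from the 802.1Qau text:

```python
# Minimal sketch of the 802.1Qau (QCN) feedback loop. Constants, scaling and
# quantization are illustrative assumptions, not values from the standard.

Q_EQ = 26_000        # desired queue operating point in bytes (assumed)
W = 2.0              # weight of the queue-growth component
GD = 1 / 128         # rate-decrease gain at the reaction point

def congestion_feedback(q_len: int, q_old: int) -> float:
    """Congestion point (switch): negative feedback means 'slow down'."""
    q_off = q_len - Q_EQ           # how far above the operating point we are
    q_delta = q_len - q_old        # how fast the queue has grown since the last sample
    return -(q_off + W * q_delta)

def reaction_point(rate_bps: float, fb: float) -> float:
    """Reaction point (sending edge): multiplicative rate decrease when a
    Congestion Notification Message (CNM) with negative feedback arrives."""
    if fb >= 0:
        return rate_bps                       # no CNM is sent, keep the current rate
    fb_q = min(int(abs(fb) / 1_000), 63)      # crude 6-bit quantization (assumed scaling)
    return rate_bps * (1 - GD * fb_q)         # at most roughly a 50% cut per CNM

rate = 10e9
fb = congestion_feedback(q_len=60_000, q_old=40_000)
print(f"feedback {fb:.0f}, new sending rate {reaction_point(rate, fb) / 1e9:.2f} Gbps")
```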


vCloud Director: hand the network over to server admins

A few months ago VMware decided to kick away one of the more stubborn obstacles on their way to Data Center domination: the networking team. Their vCloud architecture implements VLANs, NAT, firewalls and a bit of IP routing within the VMware hypervisor and add-on modules ... and just to make sure the networking team has no chance of interfering, they implemented MAC-in-MAC encapsulation, making their cloudy dreamworld totally invisible to the lowly net admins.
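
If you’re wondering what MAC-in-MAC means for the physical network, here’s a generic Python sketch of the idea (VMware’s actual vCDNI frame format is proprietary; the EtherType below is a made-up placeholder):

```python
# Generic MAC-in-MAC style encapsulation: the original VM-generated Ethernet
# frame becomes the payload of an outer Ethernet frame, so the physical
# switches only ever see the outer (hypervisor-assigned) MAC addresses.
# The EtherType below is a made-up placeholder, NOT VMware's actual format.

def mac(address: str) -> bytes:
    return bytes(int(octet, 16) for octet in address.split(":"))

def encapsulate(outer_dst: str, outer_src: str, inner_frame: bytes) -> bytes:
    PLACEHOLDER_ETHERTYPE = b"\xff\xff"        # illustrative value only
    return mac(outer_dst) + mac(outer_src) + PLACEHOLDER_ETHERTYPE + inner_frame

# Inner frame: VM-to-VM traffic the networking team can no longer see or filter
inner = mac("00:50:56:aa:00:01") + mac("00:50:56:aa:00:02") + b"\x08\x00" + b"payload"
outer = encapsulate("02:00:00:00:00:0a", "02:00:00:00:00:0b", inner)
print(f"{len(inner)}-byte VM frame becomes {len(outer)} bytes on the physical wire")
```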


VPLS is a technology, not just a service provider offering

The Internet Exchange and Peering Points Packet Pushers Podcast is as good as the rest of them (listen to it first and then continue reading), but it’s also strangely relevant to data center engineers. When you look beyond the peering policies, route servers and BGP tidbits, an internet exchange is the high-performance large-scale layer-2 network some data center switching vendors are dreaming about ... the only difference being that internet exchanges have to perform extremely well using existing products and technologies, not the shortest-path-bridging futures promised by the vendors.


IPv6 Addressing: How Wrong Can You Get It?

Mike was wondering whether his ISP is giving him what he needs to start an IPv6 pilot within his enterprise network. He wrote:

So I got an IPv6 assignment with a /120 mask (basically our IPv4/24 network mapped to IPv6) and two smaller networks to use for links between our external router and the ISP.

Believe it or not, I’m not making this up. I was as amazed as you probably are.

Dear Mike’s ISP: where were you when the rest of the world was preparing to deploy IPv6? Did you read IPv6 Unicast Address Assignment Considerations (RFC 5375) or IPv6 Address Allocation and Assignment Policy from RIPE or your regional registry?
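
To put the /120 in perspective, here’s a quick calculation with Python’s ipaddress module (using documentation prefixes, not Mike’s actual assignment):

```python
# How many standard /64 subnets do various IPv6 assignments give you?
# Prefixes below use the documentation range (2001:db8::/32), not a real assignment.
import ipaddress

for prefix in ("2001:db8::/48", "2001:db8::/56", "2001:db8:0:1::/64", "2001:db8:0:1::/120"):
    net = ipaddress.ip_network(prefix)
    if net.prefixlen <= 64:
        print(f"{prefix:22} -> {2 ** (64 - net.prefixlen):>6} /64 subnet(s)")
    else:
        print(f"{prefix:22} -> not even one /64, just {net.num_addresses} addresses")
```

An end site should get at least a /56 (more typically a /48), with one /64 per subnet; a /120 cannot even number a single standard LAN.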


QoS over MPLS/VPN Networks

A while ago John McManus wrote a great DSCP QoS Over MPLS Thoughts article on the Etherealmind blog explaining how the 6-bit IP DSCP value gets mapped into the 3-bit MPLS EXP field (now renamed to the Traffic Class field). The most important lesson from his post should be “there is no direct DSCP-to-EXP mapping and you have to coordinate your ideas with the SP”. Let’s dig deeper into the SP architecture to truly understand the complexities of this topic.

We’ll start with a reference diagram: user traffic is flowing from Site-A to Site-B and the Service Provider is offering MPLS/VPN service between PE-A and PE-B. Traffic from multiple customer sites (including Site-A) is concentrated at SW-A and passed in individual VLANs to PE-A.
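
To have a meaningful discussion with the SP, it helps to know what the commonly used default looks like: the top three DSCP bits (the IP precedence) are copied into the three EXP/Traffic Class bits at label imposition, and the drop-precedence information is lost along the way. A quick Python illustration (a generic sketch, not any particular vendor’s implementation):

```python
# Default-style behavior: copy the top three DSCP bits (the IP precedence)
# into the 3-bit EXP/Traffic Class field on label imposition. Any per-class
# policy agreed upon with the SP would override this.

DSCP_NAMES = {0: "BE", 10: "AF11", 12: "AF12", 26: "AF31", 34: "AF41", 36: "AF42", 46: "EF"}

def dscp_to_exp(dscp: int) -> int:
    return (dscp >> 3) & 0x7          # keep the class-selector bits, drop the rest

for dscp, name in sorted(DSCP_NAMES.items()):
    print(f"DSCP {dscp:2} ({name:4}) -> EXP {dscp_to_exp(dscp)}")

# AF41 and AF42 differ only in drop precedence, which is lost in the 3-bit field
assert dscp_to_exp(34) == dscp_to_exp(36) == 4
```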


PFC/ETS and storage traffic: the real story

Data Center Ethernet (or DCB or CEE, depending on who you are) is a hot story these days and it’s no wonder misconceptions abound. However, when I hear several CCIEs I highly respect talk about “Priority Flow Control can be used to stop all the other traffic when storage needs more bandwidth”, I get worried. Exactly the opposite is true: you use PFC to stop the overzealous storage traffic (primarily FCoE, but also iSCSI) to make sure you don’t drop it.
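
Here’s a minimal Python sketch of what PFC actually does; the CoS value and thresholds are illustrative assumptions:

```python
# Per-priority pause in a nutshell: when the ingress buffer reserved for the
# no-drop priority (e.g. FCoE) fills up, the switch pauses ONLY that priority;
# every other traffic class keeps flowing. Thresholds and CoS values are
# illustrative assumptions.

NO_DROP_PRIORITY = 3          # CoS value commonly used for FCoE
PAUSE_THRESHOLD = 80_000      # bytes buffered before we ask the sender to pause
RESUME_THRESHOLD = 40_000

buffer_used = {prio: 0 for prio in range(8)}
paused = set()

def frame_received(priority: int, size: int) -> None:
    buffer_used[priority] += size
    if (priority == NO_DROP_PRIORITY
            and buffer_used[priority] > PAUSE_THRESHOLD
            and priority not in paused):
        paused.add(priority)
        print(f"PFC PAUSE sent for priority {priority} only; other classes keep flowing")

def buffer_drained(priority: int, size: int) -> None:
    buffer_used[priority] = max(0, buffer_used[priority] - size)
    if priority in paused and buffer_used[priority] < RESUME_THRESHOLD:
        paused.remove(priority)
        print(f"PFC un-pause (pause time 0) sent for priority {priority}")

for _ in range(40):
    frame_received(NO_DROP_PRIORITY, 2_200)   # bursty FCoE traffic fills the buffer
frame_received(0, 1_500)                      # best-effort traffic is not paused
buffer_drained(NO_DROP_PRIORITY, 50_000)      # buffer drains, storage traffic resumes
```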


PPPoE Testbed

During my last Building IPv6 Service Provider Core webinar I got a lot of questions about IPv6 over PPPoE (obviously we’re close to widespread IPv6 implementation; I never got PPPoE questions before). I wanted to test various scenarios in my IPv6 lab and thus enabled PPPoE on an Ethernet link between CE and PE routers.

This time I wanted to test multiple configurations in parallel ... no problem, thanks to the versatile PPPoE implementation in Cisco IOS.
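
As a quick refresher on what happens on the wire during the PPPoE discovery stage, here’s a hedged Python sketch of a PADI frame per RFC 2516 (the MAC address and empty Service-Name tag are arbitrary examples):

```python
# A PPPoE Active Discovery Initiation (PADI) frame as described in RFC 2516:
# broadcast destination, EtherType 0x8863 (PPPoE Discovery stage), VER=1,
# TYPE=1, CODE=0x09, SESSION_ID=0, followed by TLV-style tags. The source MAC
# and empty Service-Name tag below are arbitrary examples.
import struct

def pppoe_padi(src_mac: bytes, service_name: bytes = b"") -> bytes:
    dst = b"\xff" * 6                                  # discovery frames are broadcast
    ethertype = struct.pack("!H", 0x8863)              # PPPoE Discovery stage
    tag = struct.pack("!HH", 0x0101, len(service_name)) + service_name   # Service-Name tag
    header = struct.pack("!BBHH", 0x11, 0x09, 0x0000, len(tag))
    # 0x11 = version 1 / type 1, 0x09 = PADI, session ID must be zero during discovery
    return dst + src_mac + ethertype + header + tag

print(pppoe_padi(bytes.fromhex("020000000001")).hex())
```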


Introduction to 802.1Qaz (Enhanced Transmission Selection – ETS)

Enhanced Transmission Selection (ETS) is the second part of the Data Center Bridging puzzle (I’ve already described Priority Flow Control). It specifies two different technologies:

  • Queuing mechanisms in bridges
  • Data Center Bridging eXchange protocol: a Control/Negotiation protocol that allows bridges and hosts to negotiate QoS parameters in a bridged network.

Although some bridges from some vendors supported numerous QoS mechanisms in the past, 802.1Qaz is the first attempt to standardize a richer set of QoS behaviors than the strict priority queuing defined in 802.1p.
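
To illustrate the queuing part: ETS assigns bandwidth percentages to priority groups and the scheduler (typically deficit-weighted round robin) enforces them only under congestion. Here’s a simplified Python DWRR sketch; the group names, weights and frame sizes are my assumptions:

```python
# Simplified deficit-weighted round robin (DWRR) across ETS priority groups.
# Group names, ETS weights and frame sizes are assumed for illustration only.

QUANTUM = 1500           # base credit (bytes) granted per scheduling round
ROUNDS = 1000            # scheduling rounds while all groups stay backlogged

groups = {               # priority group -> (ETS weight %, frame size it keeps sending)
    "lan":  (50, 1500),
    "fcoe": (40, 2180),
    "mgmt": (10, 300),
}
deficit = {name: 0 for name in groups}
sent = {name: 0 for name in groups}

for _ in range(ROUNDS):
    for name, (weight, frame_size) in groups.items():
        deficit[name] += QUANTUM * weight // 100      # weight-proportional credit
        while frame_size <= deficit[name]:            # queue is always backlogged
            deficit[name] -= frame_size
            sent[name] += frame_size

total = sum(sent.values())
for name, byte_count in sent.items():
    print(f"{name:5} got {100 * byte_count / total:4.1f}% of the link under congestion")
```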


Long-distance vMotion and the traffic trombone

A few days ago I wrote about the impact of vMotion on a Data Center network and the traffic flow issues. Now let’s walk through what happens when you move a running virtual machine (VM) between two data centers (long-distance vMotion). Imagine we’re moving a web server that is:

  • Serving a few Internet clients (with firewall/NAT and/or load balancing somewhere in the path);
  • Getting most of its data from a database server sitting nearby;
  • Reading and writing to a local disk.

The traffic flows are shown in the following diagram:
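
A bit of back-of-the-envelope arithmetic already shows why these trombones hurt (the latencies and per-transaction numbers in the Python sketch below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope latency cost of moving the VM while its firewall/NAT,
# database server and storage stay in the original data center.
# All numbers are assumptions for illustration, not measurements.

DCI_ONE_WAY_MS = 2.5            # one-way latency of the data-center interconnect

flows_crossing_dci = {          # flow -> DCI crossings per request/response pair
    "client via firewall/NAT in DC1": 2,
    "SQL query to DB server in DC1":  2,
    "disk I/O to array in DC1":       2,
}

for flow, crossings in flows_crossing_dci.items():
    print(f"{flow:32} +{crossings * DCI_ONE_WAY_MS:4.1f} ms per round trip")

# A single web transaction that triggers, say, 20 SQL queries and 10 disk I/Os
# pays the DCI penalty on every one of them:
added_ms = 2 * DCI_ONE_WAY_MS * (1 + 20 + 10)
print(f"one transaction: roughly +{added_ms:.0f} ms, plus the DCI bandwidth it burns")
```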
