FCoMPLS – attack of the zombies

A while ago someone asked me whether I think FC-over-MPLS would be a good PhD thesis. My response: while it’s always a good move to combine two totally unrelated fields in your PhD thesis (that almost guarantees you will be able to generate several unique and thus publishable articles), FCoMPLS might be tough because you’d have to make MPLS lossless. However, where there’s a will, there’s a way ... straight from the haze of the “Just because you can doesn’t mean you should” cloud comes FC-BB_PW defined in FC-BB-5 and several IETF drafts.

My first brief encounter with FCoMPLS was a twitxchange with Miroslaw Burnejko, who responded to my “must be another lame joke” tweet with a link to a NANOG presentation briefly mentioning it and an IETF draft describing the FCoMPLS flow-control details. If you know me, you have probably realized by now that I simply had to dig deeper.


Why would FC/FCoE scale better than iSCSI?

During one of the iSCSI/FC/FCoE tweetstorms, @stu made an interesting claim: FC scales to thousands of nodes; iSCSI can’t do that.

You know I’m no storage expert, but I fail to see how FC would be inherently (architecturally) better than iSCSI. I would understand someone claiming that existing host or storage iSCSI adapters behave worse than FC/FCoE adapters, but I can’t grasp why a properly implemented iSCSI network could not scale.

Am I missing something? Please help me figure this one out. Thank you!


Load sharing in MPLS/VPN networks with route reflectors

Some of the e-mails and comments I received after writing the “Changing VPNv4 route attributes” post illustrated common MPLS/VPN misconceptions, so it’s worth addressing them in a series of posts. Let’s start with the simplest scenario: load sharing toward a multi-homed customer site. We’ll use a very simple MPLS/VPN network with three customer sites, four CE-routers, four PE-routers and a route reflector.
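
To give you a rough idea of what’s typically involved (a hedged sketch, not necessarily the exact configuration used in the post; the VRF name, AS number and RD values are made up): each PE serving the multi-homed site advertises the customer prefix with a unique route distinguisher (otherwise the route reflector reflects only one best path), and the ingress PE needs iBGP multipath within the VRF.

ip vrf Customer
 ! use a different RD (e.g. 65001:102) on the other PE serving the same site
 rd 65001:101
 route-target both 65001:1
!
router bgp 65001
 address-family ipv4 vrf Customer
  ! install both PE-advertised (iBGP) paths toward the customer prefix
  maximum-paths ibgp 2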


Local Area Mobility (LAM) – the true story

Every time I mention that Cisco IOS had Local Area Mobility (LAM) – a feature that would come in quite handy in today’s virtualized data centers – more than a decade ago, someone inevitably asks “why don’t we use it?” LAM looks like a forgotten stepchild, abandoned almost as soon as it was created (supposedly it never got VRF support). The reason is simple (and has nothing to do with the size of L3 forwarding tables): LAM was always meant to be a short-term kludge, and the L3 gurus never appreciated its potential.


Layer-3 gurus: asleep at the wheel

I just read a great article by Kurt (the Network Janitor) Bales eloquently describing how a series of stupid decisions led to the current situation where everyone (except the people who actually work with the networking infrastructure) thinks stretched layer-2 domains are the mandatory stepping stone toward the cloudy nirvana.

It’s easy to shift the blame to everyone else, including storage vendors (for their love of FC and FCoE) and VMware (for the broken vSwitch design), but let’s face the reality: the rigid mindset of layer-3 gurus probably has as much to do with the whole mess as anything else.


How Did We Ever Get Into This Switching Mess?

If you’re confused about the numerous meanings of a switch, you’re not the only one. If you wonder how the whole mess started, here’s the full story (from the biased perspective of a grumpy GONER):

In the early 1980s, there were no bridges or routers. Hosts communicated directly with each other or used intermediate nodes (usually hosts, sometimes dedicated devices called gateways) to pass traffic. Networking engineers’ lives would have remained simple were it not for a few overly bright engineers at DEC who decided their application (LAT) would run directly on layer 2 to make it faster.

Their company imploded (actually, it was sold in pieces) in the previous millennium, but their eagerness to cut corners still haunts every one of us.

Changing IP precedence values in router-generated pings

When I was testing QoS behavior in MPLS/VPN-over-DMVPN networks, I needed a traffic source that could generate packets with different DSCP/IP precedence values. If you have enough routers in your lab (and the MPLS/DMVPN lab that was used to generate the router configurations you get as part of the Enterprise MPLS/VPN Deployment and DMVPN: From Basics to Scalable Networks webinars has 8 routers), it’s usually easier to use a router as a traffic source than to connect an extra IP host to the lab network. Task-at-hand: generate traffic with different DSCP values from the router.
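
One readily available option (not necessarily the only one the post covers) is the IOS extended ping dialog, which lets you set the ToS byte directly and thus covers both IP precedence and DSCP markings. A sample run from a lab router follows – addresses and values are made up, ToS 160 = 0xA0 = IP precedence 5 = DSCP CS5, and the exact prompts vary slightly across IOS releases:

R1#ping
Protocol [ip]:
Target IP address: 192.168.1.1
Repeat count [5]: 100
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface:
Type of service [0]: 160
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]: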


The week of blunders

This week we finally got some great warm(er) dry weather after months of eternal late autumn interspersed with snowstorms and cold spells, leaving me way too focused on rock climbing while blogging and testing IOS behavior. The incredible result: two blunders in a single week.

First I “discovered” anomalies in ToS propagation between IP precedence values and MPLS EXP bits. It was like one of those unrepeatable cold fusion experiments: for whatever stupid reason it all made sense while I was doing the tests, but I was never able to recreate the behavior. The “End-to-end QoS marking in MPLS/VPN-over-DMVPN networks” post is fixed (and I’ve noticed a few additional QoS features while digging around).

The second stupidity could only be attributed to professional blindness. Whenever I read about pattern matching, regular expressions come to mind. That’s not always true: as some commenters on my “EEM QA: what were they (not) doing?” post pointed out, the action string match command expects Tcl (glob-style) patterns, not regular expressions.
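
In Tcl glob patterns, * and ? are the wildcards, and regexp constructs like .* or + are taken literally. A hypothetical applet fragment illustrating the point – the applet name and match string are made up, and the $_cli_result / $_string_result variables are quoted from memory, so double-check them against your IOS release:

event manager applet CHECK-RELOAD
 event none
 action 1.0 cli command "show reload"
 ! Tcl glob pattern: * is a wildcard; this is NOT a regular expression
 action 2.0 string match "*No reload is scheduled*" "$_cli_result"
 action 3.0 puts "match result: $_string_result"

Run it with event manager run CHECK-RELOAD and $_string_result should come back as 1 or 0 depending on whether the show reload output matched the glob pattern.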

At least the rock climbing parts of the week were great ;)


EEM QA: what were they (not) doing?

When I was writing the applet that was supposed to stop accidental scheduled router reloads, I wanted to use the action string match command to perform pattern matching on the output of the show reload command. Somehow the applet didn’t want to work as expected, so I checked the documentation on Cisco’s web site.

Reading the command description, I should have realized the whole thing must be broken. It looks like the documentation writer was fast asleep; even someone with a major in classical philosophy and zero exposure to networking should be able to spot the glaring logical inconsistencies.


End-to-End QoS marking in MPLS/VPN-over-DMVPN networks

I got a great question in one of my Enterprise MPLS/VPN Deployment webinars when I was describing how you could run MPLS/VPN across a DMVPN cloud:

That sounds great, but how does end-to-end QoS work when you run IP-over-MPLS-over-GRE-over-IPSec-over-IP?

My initial off-the-cuff answer was:

Well, when the IP packet arriving through a VRF interface gets its MPLS label, the IP precedence bits from the IP packet are copied into the MPLS EXP (now TC) bits. As for what happens when the MPLS packet gets encapsulated in a GRE packet and when the GRE packet is encrypted… I have no clue. I need to test it.
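
A side note while we’re at it: if you don’t want to rely on the default precedence-to-EXP copy at label imposition, you can set the EXP (TC) value explicitly with an MQC policy on the VRF (customer-facing) interface. A minimal sketch with made-up class, policy and interface names – it says nothing about what happens further down the GRE/IPsec stack, which is what the rest of the post explores:

class-map match-all VOICE
 match ip dscp ef
!
policy-map VRF-INGRESS
 class VOICE
  ! mark the MPLS EXP/TC bits when the label is imposed
  set mpls experimental imposition 5
!
interface GigabitEthernet0/1
 ! VRF-facing (CE-facing) interface; VRF membership and addressing omitted
 service-policy input VRF-INGRESS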


IPv6 Provider Independent Addresses

If you want your network to remain multihomed when the Internet migrates to IPv6, you need your own Provider Independent (PI) IPv6 prefix. That’s old news (I was writing about the multihoming elephant almost two years ago), but most of the IT industry managed to look the other way, pretending the problem does not exist. It was always very clear that the lack of other multihoming mechanisms would result in an explosion of global IPv6 routing tables (attendees of my Upcoming Internet Challenges webinar probably remember the topic very well, as it was one of my focal points), and yet nothing was done about it (apart from the LISP development efforts, which will still take a while before being globally deployed).

To make matters worse, some Service Providers behave like model citizens of the IPv6 world and filter prefixes longer than /32 when they come from the Provider Assigned (PA) address space, which means you cannot implement reliable multihoming at all unless you get a chunk of PI address space.
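
To illustrate what such a filtering policy looks like in practice, here’s a hedged sketch of an inbound eBGP filter that drops anything longer than /32. The prefix-list and route-map names, neighbor address and AS numbers are made up, and real SP filters are considerably more granular, typically permitting longer prefixes only from registered PI ranges:

! accept IPv6 unicast prefixes up to /32; anything more specific is denied
ipv6 prefix-list MAX-32 seq 10 permit 2000::/3 le 32
!
route-map EBGP-IN permit 10
 match ipv6 address prefix-list MAX-32
!
router bgp 65001
 neighbor 2001:db8::1 remote-as 65002
 address-family ipv6
  neighbor 2001:db8::1 activate
  neighbor 2001:db8::1 route-map EBGP-IN in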


Open FCoE – Software implementation of the camel jetpack

Intel announced its Open FCoE (a software implementation of the FCoE stack on top of Intel’s 10 Gigabit Ethernet adapters) using the full cloudy bullshit bingo, including Simplifying the Data Center, Free New Technology, Cloud Vision and Green Computing (OK, they used Environmental Impact), plus lots of positive supporting quotes. The only thing missing was an enthusiastic Gartner quote (or maybe those were too expensive?).


Interesting links (2010-01-30)

Links to interesting content have yet again started gathering dust in my Inbox. Time for a cleanup action. Technical content first:

Cisco Pushing More vNetwork into Hardware. A pretty good description of the impact of Nexus 1000V and VN-Link on virtualized network security.

Convergence Delays: SVI vs Routed Interface. Another great article by Stretch. I never realized carrier-delay could be that harmful. The moral of the story is also important: test and verify the device behavior, don’t trust PPT slides (one day I’ll share how I learned that lesson the hard way).

RFC 6092 - Recommended Simple Security Capabilities in Customer Premises Equipment (CPE) for Providing Residential IPv6 Internet Service. A fantastic document – now we can only hope that every magazine evaluating consumer IPv6-ready CPEs starts using it as a benchmark (and that the IPv6 Ready guys pick it up).
