Blog Posts in June 2011

Multisite Clusters Done Right... by None Other than Microsoft

I had to check Microsoft's clustering terminology a few days ago, so I used Google to find the most relevant pages for “Windows cluster” and landed on the Failover Clustering home page, where the Multisite Clustering link immediately caught my attention. Dreading the humongous amount of layer-2 DCI stupidities that could lurk behind such a concept, I barely dared to click on the link … which unveiled one of the most pleasant surprises I’ve gotten from an IT vendor in a very long time.

read more see 1 comment

Brocade ServerIron ADX – NAT64 done right

With the latest software release (12.3.01), the ServerIron ADX, Brocade’s load balancer product, supports real NAT64 (not 6-to-4 load balancing). Better yet, it supports all the features I would like to see in a NAT64 box, plus a few more:

  • True NAT64 support, mapping the whole IPv4 address space into an IPv6 prefix that can be reached by IPv6 clients. One would hope the implementation conforms to RFC 6146, but the RFC is not mentioned in the documentation and I had no means of checking the actual behavior. DNS64 is not included, but that’s not a major omission, as BIND 9.8.0 supports it.
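
To make the mapping tangible, here’s a short Python sketch (my illustration, assuming the RFC 6052 well-known prefix; a deployment could also use a network-specific /96) showing how every IPv4 address embeds into the NAT64 prefix:

  import ipaddress

  # RFC 6052 well-known NAT64 prefix (a network-specific /96 works the same way)
  NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

  def to_nat64(ipv4: str) -> ipaddress.IPv6Address:
      """Embed an IPv4 address into the /96 NAT64 prefix."""
      v4 = ipaddress.IPv4Address(ipv4)
      return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

  print(to_nat64("198.51.100.10"))   # prints 64:ff9b::c633:640a

An IPv6-only client sends its traffic to the embedded address and the NAT64 box translates it toward the real IPv4 destination.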

read more add comment

The beauties of dense-mode FCoE

J Michel Metz brought up an interesting aspect of the dense/sparse mode FCoE design dilemma in a comment to my FCoE over TRILL ... this time from Juniper post: FC-focused troubleshooting. I have to mention that he happens to work for a company that has the only dense-mode FCoE solution, but the comment does stand on its own.

Before reading this post you might want to read the definition of dense- and sparse-mode FCoE and a few more technical details.

read more see 15 comments

Soft (hypervisor) switching links

Martin Casado and his team have published a great series of blog articles describing hypervisor switching (for the VMware-focused details, check out my VMware Networking Deep Dive). It starts with an overview of Open vSwitch (the open-source alternative to VMware’s vSwitch, commonly used in Xen/KVM environments), describes the basics of hypervisor-based switching, and addresses some of the performance myths. There’s also an interesting response from Intel setting the SR-IOV facts straight.

read more see 4 comments

Inter-DC IP-based vMotion with LISP

In early autumn of 2010, a “DRAFT on Cisco Nexus 1000V LISP Configuration Guide” appeared on CCO. It’s gone now (and unfortunately I didn’t save a copy), but the possibilities made me really excited – with LISP in the Nexus 1000V, we could do close-to-perfect vMotion over any IP infrastructure (including inter-DC vMotion, which requires stretched VLANs and L2 DCI today). Here’s what I had to say on this topic during my Data Center Interconnect webinar (buy a recording).

see 7 comments

vCider: climbing the virtual networking mountain

You probably know the old saying – if the mountain doesn’t want to come to you, you have to go out there and climb it. vCider, a brand-new startup launching its product at the Gigaom Structure Launchpad, decided to do something similar in the server virtualization (Infrastructure-as-a-Service; IaaS) space: its software allows IaaS customers to build their own virtual layer-2 networks (let’s call them vSubnets) on top of the IaaS provider’s IP infrastructure. You can even build a vSubnet between VMs running within your enterprise network (a private cloud in the cloudy lingo) and those running within Amazon EC2 or Rackspace.
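
To picture the general idea, here’s a minimal sketch of generic MAC-over-IP tunneling (my illustration, not vCider’s actual implementation; the UDP port number is made up): every cluster member ships layer-2 frames to its peers inside UDP datagrams, so the VMs see one layer-2 segment while the underlying network (enterprise WAN, EC2, Rackspace) sees only IP traffic.

  import socket

  # Generic MAC-over-IP tunneling sketch -- NOT vCider's actual implementation.
  # TUNNEL_PORT is an arbitrary made-up value.
  TUNNEL_PORT = 40000

  def send_frame(frame: bytes, peer_ip: str) -> None:
      """Ship one layer-2 frame to a vSubnet peer inside a UDP datagram."""
      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
          s.sendto(frame, (peer_ip, TUNNEL_PORT))

  def receive_frame() -> bytes:
      """Receive one encapsulated frame; a real implementation would hand it
      to the local virtual switch based on the destination MAC address."""
      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
          s.bind(("0.0.0.0", TUNNEL_PORT))
          frame, _src = s.recvfrom(2048)
          return frame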

Full disclosure: Chris Marino from vCider got in touch with me in early June. I found the idea interesting; he helped me understand their product (he even offered a test run, but I chose to trust the technical information available on their web site and passed to me in e-mails and phone calls), and I decided to write about it. That’s it.

read more see 3 comments

Some More QoS Basics

I got a really interesting question from one of my readers (slightly paraphrased):

Is this a correct statement: QoS on a WAN router will always be on if there are packets on the wire as the line is either 100% utilized or otherwise nothing is being transmitted. Comments like “QoS will kick in when there is congestion, but there is always congestion if the link is 100% utilized on a per moment basis” are confusing.

Well, QoS is more than just queuing. First you have to classify the packets; then you can perform any combination of marking, policing, shaping, queuing and dropping.
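
To make the distinction more concrete, here’s a toy single-rate token-bucket policer (a generic sketch, not any vendor’s implementation): packets that find enough tokens in the bucket are transmitted, the excess is dropped ... whereas a shaper would delay the excess instead of dropping it.

  import time

  class TokenBucketPolicer:
      """Toy single-rate policer: conforming packets pass, excess is dropped.
      A shaper would queue the excess traffic instead of dropping it."""

      def __init__(self, rate_bps: float, burst_bytes: int):
          self.rate = rate_bps / 8.0        # token refill rate in bytes/second
          self.burst = float(burst_bytes)   # bucket depth
          self.tokens = float(burst_bytes)
          self.last = time.monotonic()

      def conforms(self, pkt_bytes: int) -> bool:
          now = time.monotonic()
          self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
          self.last = now
          if self.tokens >= pkt_bytes:
              self.tokens -= pkt_bytes
              return True                   # in contract: transmit
          return False                      # out of contract: drop or re-mark

Queuing (the part most people equate with QoS) only enters the picture when packets arrive faster than the interface can serialize them.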

read more see 14 comments

Automatic edge VLAN provisioning with VM Tracer from Arista

One of the implications of Virtual Machine (VM) mobility (as implemented by VMware’s vMotion or Microsoft’s Live Migration) is the need to have the same VLAN configured on the access ports connected to the source and target hypervisor hosts. EVB (802.1Qbg) provides a perfect solution, but it’s questionable when it will ever leave dreamland. In the meantime, most environments have to deploy stretched VLANs ... or you might be able to use the hypervisor-aware features of your edge switches, for example VM Tracer implemented in Arista EOS.

read more see 3 comments

Blast from the past: ATM and POS interfaces

I got a question along these lines from a friend working in an SP environment:

Customer wants to upgrade a 7200 with PA-A3-OC3SMI to ASR1001. Can they use ASR1001-2XOC3POS interfaces or are those different from “normal ATM interfaces”?

Both interfaces (PA-A3-OC3SMI for the 7200 and 2XOC3POS for the ASR1001) use SONET framing at layer 1, so you can connect them to the same SONET (layer-1) gear. The layer-2 story is different, though: ATM interfaces send cells, while POS interfaces use PPP/HDLC framing.

read more see 3 comments

FCoE over TRILL ... this time from Juniper

A tweet from J Michel Metz alerted me to “Why TRILL won't work for data center network architecture,” an article by Anjan Venkatramani, Juniper’s VP of Product Management. Most of the long article could be condensed into two short sentences my readers are very familiar with: bridging does not scale, and TRILL does not solve the traffic trombone issues (hidden implication: QFabric will solve all your problems) ... but the author couldn’t resist throwing the “FCoE over TRILL” bone into the mix.

read more see 19 comments

Stretched Clusters: Almost as Good as Heptagonal Wheels

Some people are replacing round wheels with heptagonal ones because they roll better. Other people are building stretched high-availability clusters – clusters of servers stretched across multiple data centers. Unfortunately, only one of these claims is false.

Similar to the stretched firewalls design, stretched tightly coupled HA clusters are vulnerable – lose the inter-DC link for long enough (depending on how the cluster heartbeat is configured, a few seconds could be enough) and you have a total disaster on your hands.

read more see 6 comments

Speculation: first OpenFlow product from Cisco

2021-01-03: I was totally wrong. I thought they would go for NX-OS as an OpenFlow controller, but they implemented a horrible OpenFlow agent on NX-OS – just enough to get a tick in the box in the RFP process.

Sometime around the Open Networking Foundation launch, Paul McNab, VP/CTO of the Data Center Switching and Services Group, supposedly said “[OpenFlow] would be built into the NX-OS operating system of high end Nexus switches.” A bit later, the story changed to “I prefer not to pre-announce.” As I wrote before, I don’t think Cisco’s first move will be to implement an OpenFlow API on the NX7K and allow third parties to replace NX-OS and/or mess up the NX7K TCAM. So what could it be?

read more add comment

Random MPLS/VPN Q&A

I got a long list of MPLS-related follow-up questions from one of the attendees of my Enterprise MPLS/VPN Deployment webinar and thought it might be a good idea to share them (and the answers) with you.

You said that the golden rule in simple VPN topologies is RD = export RT = import RT. Are there any other “generic rules”? How would you set up this RD&RT association for a hub-and-spoke VPN scenario?
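
Before getting to the answer, here’s a toy model of RT-based route import (plain Python with made-up RT values, not router code) illustrating the hub-and-spoke trick: spokes import only the hub’s RT, so all spoke-to-spoke traffic has to transit the hub.

  from dataclasses import dataclass, field

  @dataclass
  class Route:
      prefix: str
      rts: set                    # route targets attached on export

  @dataclass
  class Vrf:
      name: str
      import_rts: set
      table: list = field(default_factory=list)

      def offer(self, route: Route) -> None:
          # Import a VPNv4 route if any of its RTs matches an import RT
          if route.rts & self.import_rts:
              self.table.append(route)

  hub   = Vrf("hub",   import_rts={"65000:2"})    # imports spoke routes
  spoke = Vrf("spoke", import_rts={"65000:1"})    # imports hub routes only

  for r in (Route("10.1.1.0/24", {"65000:2"}),    # exported by another spoke
            Route("0.0.0.0/0",   {"65000:1"})):   # default route from the hub
      hub.offer(r)
      spoke.offer(r)

  print([r.prefix for r in spoke.table])   # only the hub's default route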

Common services VPN topologies could be implemented in two ways (on top of an existing simple VPN topology):

read more see 1 comment

VN-Tag/802.1Qbh basics

A few years ago, Cisco introduced an interesting concept to data center networking: fabric extenders, devices acting like remote linecards of a central switch (Juniper’s “revolutionary” QFabric looks very similar from a distance; the only major difference seems to be local switching in the QF/Nodes). Cisco’s proprietary technology, used in its FEX products, became the basis for 802.1Qbh, an IEEE draft that is supposed to standardize the port extender architecture.

If you’re not familiar with the FEX products, read my “Port or Fabric Extenders?” article before continuing ... and disregard most of what it says about 802.1Qbh.

read more see 2 comments

Getting ready for World IPv6 Day ... in six days

In a few minutes, Jan Žorž, a true IPv6 evangelist, will open the Fifth Slovenian IP Summit. The event is focused on World IPv6 Day, and I decided to use a hypothetical case study: imagine your CIO just came back from an off-site social networking event where everyone got all hyped up about World IPv6 Day.

Next thing you know, you’re in his office and he’s telling you the PR gurus have decided your organization simply has to participate in this revolutionary event. Assuming you haven’t invested in IPv6 yet, my presentation might serve as a short survival guide (hint: you have only 6 days left).
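
If you’re wondering where such an organization would even start, step zero is trivial: check whether your public web site resolves over IPv6 at all (the host name below is a placeholder):

  import socket

  # Does the public web site publish an AAAA record at all?
  try:
      records = socket.getaddrinfo("www.example.com", 80, socket.AF_INET6)
      for _family, _type, _proto, _canon, sockaddr in records:
          print("AAAA:", sockaddr[0])
  except socket.gaierror:
      print("No AAAA record -- not ready for World IPv6 Day")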

see 1 comment

Speculation: This is how I would build QFabric

2021-01-03: Even though QFabric was an interesting architecture (and reverse-engineering it was a fun intellectual exercise), it withered a few years ago. Looks like Juniper tried to bite off too much.

Three months after the QFabric launch, the details remain shrouded in mystical clouds, so let’s try to speculate what they could be hiding. We have two well-known facts:

  • QFabric has three components: QF/Node (edge device), QF/Interconnect (high-speed core device) and QF/Director (the brains).
  • Juniper is strong in the Service Provider technologies, including MPLS, MPLS/VPN, VPLS and BGP. It’s also touting its BGP MPLS-based MAC VPN technology (too long to write more than once, let’s call it BMMV).

I am positive Juniper would never try to build a monster single-brain fabric with a Borg or Big Brother architecture, as those simply don’t scale (as the OpenFlow crowd will learn in a few years).

read more see 19 comments