Category: data center

Is Fibre Channel Switching Bridging or Routing?

A comment left on my dense-mode FCoE post is a perfect example of the dangers of using a vague, marketing-driven and ill-defined word like “switching”. The author wrote: “FC-SW is by no means routing… Fibre Channel is switching.” As I explained in one of my previous posts, switching can mean anything from circuit-based activities to bridging, routing and even load balancing (I am positive some vendors claim their load balancers (oops, application delivery controllers) are L4-L7 switches), so let’s see whether Fibre Channel “switching” is closer to bridging or routing.


Do we need distributed switching on Nexus 2000?

Yandy sent me an interesting question:

Is it just me or do you also see the Nexus 2000 series not having any type of distributed forwarding as a major design flaw? Cisco keeps throwing in the “it's a line-card” line, but any dumb modular switch nowadays has distributed forwarding in all its line cards.

I’m at least as annoyed as Yandy is by the lack of distributed switching in the Nexus port (oops, fabric) extender product range, but let’s focus on a different question: does it matter?


Multisite Clusters Done Right... by None Other than Microsoft

I had to check the Microsoft clustering terminology a few days ago, so I used Google to find the most relevant pages for “Windows cluster” and landed on the Failover Clustering home page, where the Multisite Clustering link immediately caught my attention. Dreading the humongous amount of layer-2 DCI stupidity that could lurk behind such a concept, I barely dared to click on the link… which unveiled one of the most pleasant surprises I’ve received from an IT vendor in a very long time.


The beauties of dense-mode FCoE

J Michel Metz brought up an interesting aspect of the dense/sparse mode FCoE design dilemma in a comment on my FCoE over TRILL ... this time from Juniper post: FC-focused troubleshooting. I have to mention that he happens to work for the company with the only dense-mode FCoE solution, but the comment does stand on its own.

Before reading this post you might want to read the definition of dense- and sparse-mode FCoE and a few more technical details.


vCider: climbing the virtual networking mountain

You probably know the old saying – if the mountain doesn’t want to come to you, you have to go out there and climb it. vCider, a brand-new startup launching its product at Gigaom Structure Launchpad, decided to do something similar in the server virtualization (Infrastructure-as-a-Service; IaaS) space: its software allows IaaS customers to build their own virtual layer-2 networks (let’s call them vSubnets) on top of the IaaS provider’s IP infrastructure. You can even build a vSubnet between VMs running within your enterprise network (private cloud in the cloudy lingo) and those running within Amazon EC2 or Rackspace.

Full disclosure: Chris Marino from vCider got in touch with me in early June. I found the idea interesting, he helped me understand their product (even offered a test run, but I chose to trust the technical information available on their web site and passed to me in e-mails and phone calls), and I decided to write about it. That’s it.


Automatic edge VLAN provisioning with VM Tracer from Arista

One of the implications of Virtual Machine (VM) mobility (as implemented by VMware’s vMotion or Microsoft’s Live Migration) is the need to have the same VLAN configured on the access ports connected to the source and the target hypervisor hosts. EVB (802.1Qbg) provides a perfect solution, but it’s anyone’s guess when it will leave the dreamland domain. In the meantime, most environments have to deploy stretched VLANs ... or you might be able to use the hypervisor-aware features of your edge switches, for example VM Tracer implemented in Arista EOS.
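Here’s a minimal Python sketch of the general idea (made-up data structures and function names, definitely not Arista’s implementation or API): a hypervisor-aware switch learns which VMs sit behind each access port and adjusts the allowed VLAN list on that port accordingly.

```python
# Conceptual sketch of hypervisor-aware edge VLAN provisioning.
# All data structures and names are made up for illustration; a real switch
# would learn VM placement from the hypervisor management system.

# VM inventory as reported by the hypervisor manager:
# VM name -> (hypervisor host, VLAN used by its port group)
vm_inventory = {
    "web-01": ("esx-a", 110),
    "db-01":  ("esx-a", 120),
    "app-07": ("esx-b", 110),
}

# Physical topology: which access port connects to which hypervisor host
port_to_host = {
    "Ethernet1": "esx-a",
    "Ethernet2": "esx-b",
}

def required_vlans(host: str) -> set[int]:
    """VLANs needed by the VMs currently running on a given host."""
    return {vlan for (h, vlan) in vm_inventory.values() if h == host}

def provision_edge_ports() -> dict[str, set[int]]:
    """Compute the per-port allowed VLAN list from the VM inventory."""
    return {port: required_vlans(host) for port, host in port_to_host.items()}

if __name__ == "__main__":
    for port, vlans in provision_edge_ports().items():
        print(f"{port}: allow VLANs {sorted(vlans)}")
    # After a vMotion event, update vm_inventory and recompute -
    # the VLAN follows the VM to the target host's access port.
```

The point of the exercise: the VLAN list on each access port tracks VM mobility instead of every VLAN being stretched everywhere “just in case”.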


FCoE over TRILL ... this time from Juniper

A tweet from J Michel Metz alerted me to a “Why TRILL won't work for data center network architecture” article by Anjan Venkatramani, Juniper’s VP of Product Management. Most of the long article could be condensed into two short sentences my readers are very familiar with: bridging does not scale and TRILL does not solve the traffic trombone issues (hidden implication: QFabric will solve all your problems)... but the author couldn’t resist throwing the “FCoE over TRILL” bone into the mix.


Stretched Clusters: Almost as Good as Heptagonal Wheels

Some people are changing round wheels to heptagonal ones because they will roll better. Some other people are building stretched high-availability clusters – clusters of servers stretched over multiple data centers. Unfortunately, only one of these claims is false.

Like the stretched firewalls design, stretched tightly-coupled HA clusters are vulnerable: lose the inter-DC link for a long enough time (depending on how the cluster heartbeat is configured, a few seconds could be enough) and you have a total disaster on your hands.
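A toy illustration of why the heartbeat timeout matters (made-up numbers, two-node cluster with no witness/quorum node, grossly simplified compared to real clustering software):

```python
# Toy model of a two-node stretched cluster with a timeout-based heartbeat.
# Numbers and names are made up; the failure mode (both sides going active)
# is what matters when there is no independent witness.

HEARTBEAT_TIMEOUT = 5       # seconds without heartbeats before failover
INTER_DC_OUTAGE = 12        # duration of the DCI link outage in seconds

def node_decision(last_heartbeat_age: float, peer_reachable: bool) -> str:
    """What a node does when it stops hearing from its peer."""
    if peer_reachable or last_heartbeat_age < HEARTBEAT_TIMEOUT:
        return "keep current role"
    # Without an independent witness, the node cannot tell
    # "peer is dead" apart from "the inter-DC link is down".
    return "assume peer is dead -> take over services"

if __name__ == "__main__":
    # During the DCI outage, both nodes reach the same (wrong) conclusion:
    for node in ("node-DC1", "node-DC2"):
        action = node_decision(last_heartbeat_age=INTER_DC_OUTAGE,
                               peer_reachable=False)
        print(f"{node}: {action}")
    # Result: split brain - both data centers run the same services
    # and write to their own copy of the data.
```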


VN-Tag/802.1Qbh basics

A few years ago Cisco introduced an interesting concept to data center networking: fabric extenders, devices acting like remote linecards of a central switch (Juniper’s “revolutionary” QFabric looks very similar from a distance; the only major difference seems to be local switching in the QF/Nodes). Cisco’s proprietary technology used in its FEX products became the basis for 802.1Qbh, an IEEE draft that is supposed to standardize the port extender architecture.

If you’re not familiar with the FEX products, read my “Port or Fabric Extenders?” article before continuing ... and disregard most of what it says about 802.1Qbh.
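To make the “remote linecard” idea a bit more tangible, here’s a deliberately simplified Python sketch of the forwarding model (conceptual field names only, not the actual VN-Tag/802.1Qbh bit layout):

```python
# Conceptual model of port-extender forwarding: the fabric extender does no
# local switching; it only tags frames with the virtual interface (VIF) they
# arrived on and hands them to the parent switch, which makes every
# forwarding decision. Field names are simplified for illustration and do
# not reflect the actual VN-Tag / 802.1Qbh encoding.
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    source_vif: int          # which extender port the frame came from
    destination_vif: int     # set by the parent switch on the way back
    payload: bytes

# Parent switch state: MAC address -> VIF (i.e., extender port) mapping
mac_table = {
    "00:50:56:aa:00:01": 101,
    "00:50:56:aa:00:02": 102,
}

def parent_switch_forward(frame: TaggedFrame, dst_mac: str) -> TaggedFrame:
    """All forwarding happens here - even between two ports of the same
    extender, the frame travels to the parent switch and back."""
    frame.destination_vif = mac_table[dst_mac]
    return frame

if __name__ == "__main__":
    f = TaggedFrame(source_vif=101, destination_vif=0, payload=b"hello")
    out = parent_switch_forward(f, "00:50:56:aa:00:02")
    print(f"frame from VIF {out.source_vif} forwarded to VIF {out.destination_vif}")
```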


Speculation: This is how I would build QFabric

2021-01-03: Even though QFabric was an interesting architecture (and reverse-engineering it was a fun intellectual exercise), it withered a few years ago. Looks like Juniper tried to bite off too much.

Three months after the QFabric launch, the details remain shrouded in mystical clouds, so let’s try to speculate what they could be hiding. We have two well-known facts:

  • QFabric has three components: QF/Node (edge device), QF/Interconnect (high-speed core device) and QF/Director (the brains).
  • Juniper is strong in the Service Provider technologies, including MPLS, MPLS/VPN, VPLS and BGP. It’s also touting its BGP MPLS-based MAC VPN technology (too long to write more than once, let’s call it BMMV).

I am positive Juniper would never try to build a monster single-brain fabric with Borg or Big Brother architecture as they simply don’t scale (as the OpenFlow crowd will learn in a few years).


EVB (802.1Qbg) – the S component

Update 2021-01-03: IBM implemented EVB in Linux bridge, and Juniper added EVB support to Junos, but I haven't seen (or heard of) a single EVB deployment since I wrote this blog post almost 9 years ago.

The Edge Virtual Bridging (EVB; 802.1Qbg) standard solves two important layer-2-based virtualization issues:

  • Automatic provisioning of access switches based on hypervisor-signaled information (discussed in the EVB eases VLAN configuration pains article)
  • Multiplexing of multiple logical 802.1Q links over a single physical link.

Logical link multiplexing might seem like a solution in search of a problem until you discover that VMware-related design documents usually recommend using 6 to 10 NICs per server – an approach that either wastes switch ports or is hard to implement with blade servers’ mezzanine cards (due to the limited number of backplane connections).
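As far as the on-the-wire mechanism goes, an S-channel is identified by an outer service tag (S-tag), with the “normal” 802.1Q tag carried inside it. Here’s a small scapy sketch of that tag stacking – my illustration, not text from the standard; VLAN/channel numbers are arbitrary and you need a reasonably recent scapy:

```python
# Illustration of S-tag / C-tag stacking (the mechanism behind multiplexing
# several logical 802.1Q links over one physical link).
# VLAN and channel numbers are arbitrary examples.
from scapy.all import Ether, Dot1AD, Dot1Q, IP, ICMP

# Two logical links multiplexed over the same physical link:
# the outer S-tag selects the logical link (channel),
# the inner C-tag is the regular VLAN tag of that logical link.
frame_channel1 = Ether(src="00:50:56:00:00:01", dst="00:50:56:00:00:02") / \
    Dot1AD(vlan=101) / Dot1Q(vlan=10) / IP(dst="192.0.2.1") / ICMP()

frame_channel2 = Ether(src="00:50:56:00:00:01", dst="00:50:56:00:00:02") / \
    Dot1AD(vlan=102) / Dot1Q(vlan=20) / IP(dst="192.0.2.2") / ICMP()

if __name__ == "__main__":
    frame_channel1.show2()   # decode and display the stacked headers
    frame_channel2.show2()
```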


Data Center Fabric Architectures update#1

Two months ago I wrote the Data Center Fabric Architectures post jokingly defining Borg and Big Brother architectures. In the meantime, a number of vendors have launched (or announced) their fabric products and the post badly needed an update.

I decided to move the updated text to my main web site (where it will be easier to edit), wrote an introductory section, removed a few tongue-in-cheek comments (after all, it’s time to get serious if Cisco’s Data Center blog links to your article) and added numerous links to in-depth articles and examples of individual architectures.


Scaling IaaS network infrastructure

I got totally fed up with the currently popular “flat-earth with long-distance bridging” architecture paradigm while developing the Data Center Interconnects webinar. It all started with the layer-2 hypervisor switches and the lack of decent L3 network-side solutions; promoting non-scalable cloudy solutions doesn’t help either.

The network infrastructure would scale better if the hypervisors worked as MPLS/VPN PE-routers, but even MPLS would hit scalability limits when the number of servers grows into the tens of thousands. The only truly scalable solution is IP-over-IP or MAC-over-IP implemented in the hypervisor switches.
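Here’s a minimal scapy illustration of the MAC-over-IP idea; I’m using VXLAN purely as a convenient, well-known example of such an encapsulation (the article doesn’t prescribe one), and all addresses and the VNI are made up:

```python
# MAC-over-IP illustration: the original Ethernet frame between two VMs is
# wrapped into UDP/IP between the source and target hypervisors, so the
# physical network only routes hypervisor-to-hypervisor traffic and never
# has to learn VM MAC addresses. Addresses and VNI are made up.
from scapy.all import Ether, IP, UDP, ICMP, VXLAN

# Frame exchanged between two VMs on the same virtual subnet
inner = Ether(src="00:50:56:aa:00:01", dst="00:50:56:aa:00:02") / \
        IP(src="10.1.1.1", dst="10.1.1.2") / ICMP()

# The hypervisor switch encapsulates it into IP toward the target hypervisor
outer = Ether() / \
        IP(src="192.168.10.11", dst="192.168.20.22") / \
        UDP(sport=49152, dport=4789) / \
        VXLAN(vni=5001) / \
        inner

if __name__ == "__main__":
    outer.show2()   # display the full encapsulated packet
```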

I tried to organize all these thoughts in the “How to build a scalable IaaS cloud network infrastructure” article that was recently published by SearchTelecom ... and just a few days after the article was published, Brad Hedlund pointed me to the Infrastructure as a Service Builder’s Guide document, which says almost the same thing (and comes to flawed conclusions because they had to promote OpenFlow and NEC).


Ignoring STP? Be careful, be very careful

A while ago I described what it takes to integrate a TRILL backbone with legacy equipment running Spanning Tree Protocol (STP). Unfortunately, Brocade decided to use a non-standard approach to BPDU handling when implementing their TRILL-like VCS fabric. VDX switches running in fabric mode can either drop incoming BPDU frames or transport them transparently across the fabric to other edge ports. Although VDX switches support STP, RSTP and MSTP (as well as RootGuard and BPDUGuard) when configured as standalone switches, STP processing is disabled when you configure fabric mode; a VCS fabric looks like a huge shared LAN segment to the end hosts and core switches.

2013-03-31: Network OS 4.0 and above supports Distributed Spanning Tree (DiST), for more details read this blog post.
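If you want to check what your fabric edge actually does with BPDUs, you could craft one yourself and watch whether it appears on another edge port. A rough scapy sketch follows – interface names and bridge parameters are made up, and this is strictly a lab exercise run with root privileges:

```python
# Craft a standard IEEE 802.1D configuration BPDU, send it into one fabric
# edge port and sniff on another edge port to see whether the fabric drops
# BPDUs or tunnels them transparently.
# Interface names and bridge parameters are made up; lab use only, run as
# root with a reasonably recent scapy.
import time
from scapy.all import Dot3, LLC, STP, sendp, AsyncSniffer

SEND_IFACE = "eth1"      # connected to one fabric edge port
SNIFF_IFACE = "eth2"     # connected to another edge port of the same fabric

bpdu = Dot3(dst="01:80:c2:00:00:00", src="02:00:00:00:00:01") / \
       LLC(dsap=0x42, ssap=0x42, ctrl=3) / \
       STP(rootid=32768, rootmac="02:00:00:00:00:01",
           bridgeid=32768, bridgemac="02:00:00:00:00:01")

if __name__ == "__main__":
    sniffer = AsyncSniffer(iface=SNIFF_IFACE,
                           lfilter=lambda p: p.haslayer(STP))
    sniffer.start()
    sendp(bpdu, iface=SEND_IFACE, count=3, inter=1)
    time.sleep(2)                      # give the last BPDU time to arrive
    received = sniffer.stop()
    print(f"BPDUs seen on {SNIFF_IFACE}: {len(received)}")
    # 0 -> the fabric eats BPDUs; 3 -> they're tunneled across the fabric
```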


Complexity Belongs to the Network Edge

Whenever I write about vCloud Director Networking Infrastructure (vCDNI), be it a rant or a more technical post, I get comments along the lines of “What are the network guys going to do once the infrastructure has been provisioned? With vCDNI there is no need to keep network admins full time.”

Once we have a scalable solution that will be able to stand on its own in a large data center, most smart network admins will be more than happy to get away from provisioning VLANs and focus on other problems. After all, most companies have other networking problems beyond data center switching.
