FCoE over TRILL ... this time from Juniper
A tweet from J Michel Metz has alerted me to a “Why TRILL won't work for data center network architecture” article by Anjan Venkatramani, Juniper’s VP of Product Management. Most of the long article could be condensed into two short sentences my readers are very familiar with: bridging does not scale, and TRILL does not solve the traffic trombone issues (hidden implication: QFabric will solve all your problems) ... but the author couldn’t resist throwing the “FCoE over TRILL” bone into the mix.
QoS in Large-Scale DMVPN Networks
Got this question a few days ago:
I have a large DMVPN network (~ 1000 sites) using a variety of DSL, cable modem, and wireless connections. In all of these cases the bandwidth is extremely dissimilar and even varies over time. How can I handle this in a scalable way?
Hub-to-spoke QoS implementations in DMVPN networks usually use one of the following options:
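One commonly used option is per-tunnel QoS: each spoke advertises an NHRP group name, and the hub attaches a per-group shaping policy to every tunnel registered from that group. A minimal sketch, assuming an IOS release with DMVPN per-tunnel QoS support (group names and bandwidth values are made up):

```
! Hub: one shaping policy per access-technology group
policy-map SHAPE-DSL
 class class-default
  shape average 2000000
policy-map SHAPE-CABLE
 class class-default
  shape average 10000000
!
interface Tunnel0
 ip nhrp map group DSL service-policy output SHAPE-DSL
 ip nhrp map group CABLE service-policy output SHAPE-CABLE
!
! Spoke: advertise the group name in NHRP registrations
interface Tunnel0
 ip nhrp group DSL
```

The hub instantiates a separate shaper for every registered spoke, so you maintain a handful of group policies instead of a thousand per-site ones; it still doesn’t address the “bandwidth varies with time” part of the question.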
Stretched Clusters: Almost as Good as Heptagonal Wheels
Some people are changing round wheels to heptagonal ones because they supposedly roll better. Other people are building stretched high-availability clusters – clusters of servers stretched over multiple data centers. Unfortunately, only one of these claims is false.
Similar to the stretched firewalls design, stretched tightly coupled HA clusters are vulnerable – lose the inter-DC link for a long enough time (depending on how the cluster heartbeat is configured, a few seconds could be enough) and you have a total disaster on your hands.
Random MPLS/VPN Q&A
I got a long list of MPLS-related follow-up questions from one of the attendees of my Enterprise MPLS/VPN Deployment webinar and thought it might be a good idea to share them (and the answers) with you.
You said that the golden rule in simple VPN topologies is RD = export RT = import RT. Are there any other “generic rules”? How would you set up the RD and RT associations in a hub-and-spoke VPN scenario?
Common services VPN topologies could be implemented in two ways (on top of an existing simple VPN topology):
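Whichever implementation you pick, the RT usage in a hub-and-spoke topology is asymmetric – spokes export a spoke RT and import only the hub RT, while the hub VRF does the opposite – so the RD = export RT = import RT rule no longer applies. A minimal IOS sketch with made-up values (a production hub-and-spoke design typically needs two VRFs/interfaces on the hub PE so spoke-to-spoke traffic actually flows through the hub):

```
ip vrf Spoke
 rd 65000:11
 route-target export 65000:2
 route-target import 65000:1
!
ip vrf Hub
 rd 65000:10
 route-target export 65000:1
 route-target import 65000:2
```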
VN-Tag/802.1Qbh basics
A few years ago Cisco introduced an interesting concept to data center networking: fabric extenders, devices acting like remote linecards of a central switch (Juniper’s “revolutionary” QFabric looks very similar from a distance; the only major difference seems to be local switching in the QF/Nodes). Cisco’s proprietary technology used in its FEX products became the basis for 802.1Qbh, an IEEE draft that is supposed to standardize the port extender architecture.
If you’re not familiar with the FEX products, read my “Port or Fabric Extenders?” article before continuing ... and disregard most of what it says about 802.1Qbh.
Getting ready for World IPv6 Day ... in six days
In a few minutes Jan Žorž, a true IPv6 evangelist, will open the Fifth Slovenian IP Summit. The event is focused on World IPv6 Day, and I decided to use a hypothetical case study: imagine your CIO just came back from an off-site social networking event where everyone got all hyped up about World IPv6 Day.
Speculation: This is how I would build QFabric
Three months after the QFabric launch, the details remain shrouded in mystical clouds, so let’s try to speculate what they could be hiding. We have two well-known facts:
- QFabric has three components: QF/Node (edge device), QF/Interconnect (high-speed core device) and QF/Director (the brains).
- Juniper is strong in the Service Provider technologies, including MPLS, MPLS/VPN, VPLS and BGP. It’s also touting its BGP MPLS-based MAC VPN technology (too long to write more than once, let’s call it BMMV).
I am positive Juniper would never try to build a monster single-brain fabric with a Borg or Big Brother architecture, as those simply don’t scale (as the OpenFlow crowd will learn in a few years).
EVB (802.1Qbg) – the S component
The Edge Virtual Bridging (EVB; 802.1Qbg) standard solves two important layer-2-based virtualization issues:
- Automatic provisioning of access switches based on hypervisor-signaled information (discussed in the EVB eases VLAN configuration pains article)
- Multiplexing of multiple logical 802.1Q links over a single physical link.
Logical link multiplexing might seem like a solution in search of a problem until you discover that VMware-related design documents usually recommend using 6 to 10 NICs per server – an approach that either wastes switch ports or is hard to implement with blade servers’ mezzanine cards (due to the limited number of backplane connections).
Building CsC-enabled MPLS backbone
Just got this question from one of my Service Provider friends: “If I am building a new MPLS backbone from scratch, should I design it with Carrier’s Carrier in mind?” Of course you should ... after all, the CsC functionality has almost no impact on the MPLS backbone (apart from introducing an extra label in the label stack).
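The differences are limited to the PE-CE edge: the CsC PE has to exchange labels with the customer carrier’s CE router, either by running LDP on the VRF interface or with eBGP+label. A rough sketch of the LDP variant on the PE (names and addresses are made up):

```
ip vrf CsC-Customer
 rd 65000:100
 route-target both 65000:100
!
interface GigabitEthernet0/1
 ip vrf forwarding CsC-Customer
 ip address 192.0.2.1 255.255.255.252
 ! enables LDP label exchange with the CsC CE
 mpls ip
```

The P routers in your backbone are not affected either way.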
For the Record: I Am Not Against OpenFlow ...
… as some of its supporters seem to believe every now and then (I do get a severe allergic reaction when someone claims it will change the laws of physics, or when I’m faced with technical inaccuracies peddled by an Instant Expert, not to mention knee-jerking financial experts). Even more, assuming it can cross the adoption gap, it could fundamentally change the business models of networking vendors (maybe not in the way you’d like them to be changed).