Automatic edge VLAN provisioning with VM Tracer from Arista
One of the implications of Virtual Machine (VM) mobility (as implemented by VMware’s vMotion or Microsoft’s Live Migration) is the need to have the same VLAN configured on the access ports connected to the source and the target hypervisor hosts. EVB (802.1Qbg) provides a perfect solution, but it’s anyone’s guess when it will leave the realm of dreams. In the meantime, most environments have to deploy stretched VLANs ... or you might be able to use hypervisor-aware features of your edge switches, for example VM Tracer implemented in Arista EOS.
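To illustrate the idea, here’s a conceptual Python sketch (not Arista’s implementation or API – all names, hosts and VLAN mappings are made up): a hypervisor-aware switch learns which VMs run on the host behind each access port and derives the VLANs that port needs.

```python
# Conceptual sketch of hypervisor-aware VLAN provisioning (made-up inventory data;
# a real feature like VM Tracer would learn this information from vCenter).
from collections import defaultdict

vm_vlans  = {"web-01": 10, "web-02": 10, "db-01": 20}           # VM -> portgroup VLAN
host_vms  = {"esx-a": ["web-01", "db-01"], "esx-b": ["web-02"]}  # host -> VMs running on it
port_host = {"Ethernet1": "esx-a", "Ethernet2": "esx-b"}         # switch port -> attached host

def vlans_per_port():
    """Return the set of VLANs each access port should carry."""
    needed = defaultdict(set)
    for port, host in port_host.items():
        for vm in host_vms.get(host, []):
            needed[port].add(vm_vlans[vm])
    return dict(needed)

print(vlans_per_port())   # {'Ethernet1': {10, 20}, 'Ethernet2': {10}}
```

After a VM moves to another host, recomputing the mapping adds the VLAN to the new access port (and can prune it from the old one) – which is exactly the provisioning problem described above.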
Test your VMware networking skills
Two vSwitch portgroup-related questions:
- Can you configure the same VLAN on two portgroups in the same vSwitch? How about vDS?
- Can VMs attached to two different portgroups in the same ESX host talk to each other directly, or do they have to communicate through an external switch (or L3 device)?
Got your answers? Now click the Read more ... link.
Blast from the past: ATM and POS interfaces
I got a question along these lines from a friend working in an SP environment:
Customer wants to upgrade a 7200 with PA-A3-OC3SMI to ASR1001. Can they use ASR1001-2XOC3POS interfaces or are those different from “normal ATM interfaces”?
Both interfaces (PA-A3-OC3SMI for the 7200 and 2XOC3POS for the ASR1001) use SONET framing on layer 1, so you can connect them to the same SONET (layer-1) gear.
FCoE over TRILL ... this time from Juniper
A tweet from J Michel Metz alerted me to a “Why TRILL won't work for data center network architecture” article by Anjan Venkatramani, Juniper’s VP of Product Management. Most of the long article could be condensed into two short sentences my readers are very familiar with: bridging does not scale and TRILL does not solve the traffic trombone issues (hidden implication: QFabric will solve all your problems) ... but the author couldn’t resist throwing the “FCoE over TRILL” bone into the mix.
QoS in Large-Scale DMVPN Networks
Got this question a few days ago:
I have a large DMVPN network (~1000 sites) using a variety of DSL, cable modem, and wireless connections. In all of these cases the bandwidth is extremely dissimilar and even varies over time. How can I handle this in a scalable way?
Hub-to-spoke QoS implementations in DMVPN networks usually use one of the following options:
Stretched Clusters: Almost as Good as Heptagonal Wheels
Some people are replacing round wheels with heptagonal ones because they’re supposed to roll better. Other people are building stretched high-availability clusters – clusters of servers stretched across multiple data centers. Unfortunately, only one of these claims is false.
As with the stretched firewalls design, stretched tightly coupled HA clusters are vulnerable – lose the inter-DC link for long enough (depending on how the cluster heartbeat is configured, a few seconds could be enough) and you have a total disaster on your hands.
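To make the failure mode concrete, here’s a toy Python model (the timeout value is made up) of what happens once an inter-DC link outage exceeds the heartbeat dead interval:

```python
# Toy illustration of split-brain in a tightly coupled stretched cluster.
# Neither site can tell "link down" from "other site down", so once the heartbeat
# dead interval expires, both sites go active.
HEARTBEAT_DEAD_INTERVAL = 5          # seconds without heartbeats before a node acts (made up)

def cluster_roles(link_outage_seconds):
    if link_outage_seconds < HEARTBEAT_DEAD_INTERVAL:
        return {"dc1": "active", "dc2": "standby"}            # outage absorbed, nothing happens
    return {"dc1": "active", "dc2": "active (split-brain)"}   # both sites take over

print(cluster_roles(2))    # {'dc1': 'active', 'dc2': 'standby'}
print(cluster_roles(30))   # {'dc1': 'active', 'dc2': 'active (split-brain)'}
```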
Random MPLS/VPN Q&A
I got a long list of MPLS-related follow-up questions from one of the attendees of my Enterprise MPLS/VPN Deployment webinar and thought it might be a good idea to share them (and the answers) with you.
You said that the golden rule in simple VPN topologies is RD = export RT = import RT. Are there any other “generic rules”? How would you set up this RD & RT association for a hub-and-spoke VPN scenario?
Common services VPN topologies could be implemented in two ways (on top of existing simple VPN topology):
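As an illustration of the RT mechanics behind the golden rule and its hub-and-spoke variant, here’s a minimal Python sketch (all RD/RT values and prefixes are made up):

```python
# Illustrative model of route-target based route import: a VRF accepts routes
# exported by any VRF whose export RTs overlap its import RTs.

def visible_routes(importing_vrf, all_vrfs):
    imports = set(importing_vrf["import"])
    routes = []
    for vrf in all_vrfs:
        if set(vrf["export"]) & imports:
            routes.extend(vrf["routes"])
    return routes

# Simple any-to-any VPN: RD = export RT = import RT (the "golden rule")
site_a = {"rd": "65000:1", "export": ["65000:1"], "import": ["65000:1"], "routes": ["10.1.0.0/16"]}
site_b = {"rd": "65000:1", "export": ["65000:1"], "import": ["65000:1"], "routes": ["10.2.0.0/16"]}

# Hub-and-spoke: spokes export a "to-hub" RT and import only the "from-hub" RT,
# so spoke-to-spoke traffic is forced through the hub.
hub     = {"export": ["65000:100"], "import": ["65000:200"], "routes": ["0.0.0.0/0"]}
spoke_1 = {"export": ["65000:200"], "import": ["65000:100"], "routes": ["10.10.0.0/16"]}
spoke_2 = {"export": ["65000:200"], "import": ["65000:100"], "routes": ["10.20.0.0/16"]}

print(visible_routes(site_a, [site_a, site_b]))         # sees both sites' routes
print(visible_routes(spoke_1, [hub, spoke_1, spoke_2])) # sees only the hub's default route
```

The hub-and-spoke case works precisely because the spokes’ import and export RTs don’t overlap – a spoke never sees another spoke’s routes directly.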
VN-Tag/802.1Qbh basics
A few years ago Cisco introduced an interesting concept to data center networking: fabric extenders, devices acting like remote linecards of a central switch (Juniper’s “revolutionary” QFabric looks very similar from a distance; the only major difference seems to be local switching in the QF/Nodes). Cisco’s proprietary technology used in its FEX products became the basis for 802.1Qbh, an IEEE draft that is supposed to standardize the port extender architecture.
If you’re not familiar with the FEX products, read my “Port or Fabric Extenders?” article before continuing ... and disregard most of what it says about 802.1Qbh.
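As a purely conceptual model of the port-extender idea (this is not the actual VN-Tag/802.1Qbh frame format or any vendor API): the extender tags each frame with its ingress virtual interface and leaves every forwarding decision to the parent switch.

```python
# Conceptual model only: the fabric extender never switches locally; it marks each
# frame with the ingress virtual interface and the parent switch decides everything.
PARENT_MAC_TABLE = {                    # MAC address -> virtual interface on the parent switch
    "aa:aa:aa:aa:aa:01": "vif-101",
    "aa:aa:aa:aa:aa:02": "vif-102",
}

def fex_uplink(frame, ingress_vif):
    """The extender only adds a tag identifying where the frame came in."""
    return {"vif": ingress_vif, **frame}

def parent_forward(tagged_frame):
    """The parent switch owns the MAC table and picks the egress virtual interface."""
    return PARENT_MAC_TABLE.get(tagged_frame["dst"], "flood")

frame = {"src": "aa:aa:aa:aa:aa:01", "dst": "aa:aa:aa:aa:aa:02"}
print(parent_forward(fex_uplink(frame, "vif-101")))
# -> 'vif-102': even traffic between two servers on the same extender goes via the parent
```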
Getting ready for World IPv6 Day ... in six days
In a few minutes Jan Žorž, a true IPv6 evangelist, will open the Fifth Slovenian IP Summit. The event is focused on the World IPv6 Day and I decided to use a hypothetical case study: imagine your CIO just came back from an off-site social networking event where everyone got all hyped up about the World IPv6 Day.
Next thing you know, you’re in his office and he’s telling you the PR gurus have decided your organization simply has to participate in this revolutionary event. Assuming you haven’t invested in IPv6 yet, my presentation might serve as a short survival guide (hint: you have only 6 days left).
Speculation: This is how I would build QFabric
Three months after the QFabric launch, the details remain shrouded in mystical clouds, so let’s speculate about what they could be hiding. We have two well-known facts:
- QFabric has three components: QF/Node (edge device), QF/Interconnect (high-speed core device) and QF/Director (the brains).
- Juniper is strong in the Service Provider technologies, including MPLS, MPLS/VPN, VPLS and BGP. It’s also touting its BGP MPLS-based MAC VPN technology (too long to write more than once, let’s call it BMMV).
I am positive Juniper would never try to build a monster single-brain fabric with Borg or Big Brother architecture as they simply don’t scale (as the OpenFlow crowd will learn in a few years).
EVB (802.1Qbg) – the S component
The Edge Virtual Bridging (EVB; 802.1Qbg) standard solves two important layer-2-based virtualization issues:
- Automatic provisioning of access switches based on hypervisor-signaled information (discussed in the EVB eases VLAN configuration pains article)
- Multiplexing of multiple logical 802.1Q links over a single physical link.
Logical link multiplexing might seem like a solution in search of a problem until you discover that VMware-related design documents usually recommend using 6 to 10 NICs per server – an approach that either wastes switch ports or is hard to implement with blade servers’ mezzanine cards (due to the limited number of backplane connections).
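For a rough feel of what the multiplexing looks like on the wire, here’s a scapy sketch of a double-tagged frame (channel and VLAN numbers, as well as MAC and IP addresses, are made up): the outer S-tag selects the logical link, while the inner C-tag still carries the VM’s VLAN, much like 802.1ad Q-in-Q.

```python
# Rough illustration of S-channel multiplexing with stacked tags (values made up).
from scapy.all import Ether, Dot1Q, IP

S_TAG_ETHERTYPE = 0x88a8   # 802.1ad service tag

frame = (
    Ether(src="00:50:56:00:00:01", dst="00:1c:73:00:00:01", type=S_TAG_ETHERTYPE)
    / Dot1Q(vlan=5, type=0x8100)    # outer S-tag: selects the logical link (S-channel)
    / Dot1Q(vlan=100)               # inner C-tag: the VM's VLAN
    / IP(src="10.0.100.10", dst="10.0.100.20")
)

frame.show()   # display the stacked headers
```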
Building CsC-enabled MPLS backbone
Just got this question from one of my Service Provider friends: “If I am building a new MPLS backbone from scratch, should I design it with Carrier’s Carrier in mind?” Of course you should ... after all, the CsC functionality has almost no impact on the MPLS backbone (apart from introducing an extra label in the label stack).
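To visualize that “extra label”, here’s a toy scapy sketch (all label values are made up) comparing a plain MPLS/VPN label stack in the backbone with the stack you’d see for customer-carrier traffic:

```python
# Toy comparison of backbone label stacks: plain MPLS/VPN vs. Carrier's Carrier.
from scapy.all import Ether, IP, UDP
from scapy.contrib.mpls import MPLS

plain_vpn = (
    Ether()
    / MPLS(label=1001, s=0)   # backbone transport label (LDP/RSVP-TE)
    / MPLS(label=2001, s=1)   # VPN label assigned by the egress PE
    / IP(dst="192.0.2.1") / UDP()
)

csc = (
    Ether()
    / MPLS(label=1001, s=0)   # backbone transport label
    / MPLS(label=3001, s=0)   # label for the customer carrier's route -- the extra label CsC adds
    / MPLS(label=2001, s=1)   # customer carrier's own service (VPN) label
    / IP(dst="192.0.2.1") / UDP()
)

print(len(plain_vpn), len(csc))   # the CsC frame carries one extra 4-byte label
```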
For the Record: I Am Not Against OpenFlow ...
… as some of its supporters seem to believe every now and then (I do get a severe allergic reaction when someone claims it will change the laws of physics or when I’m faced with technical inaccuracies peddled by an Instant Expert, not to mention knee-jerking financial experts). Even more, assuming it can cross the adoption gap¹, it could fundamentally change the business models of networking vendors (maybe not in the way you’d like them to be changed).
On the more technological front, I still don’t expect to see miracles. Most OpenFlow-related ideas I’ve heard about have been tried (and failed) before. I fail to see why things would be different just because we use a different protocol to program the forwarding tables.
I wrote about my OpenFlow views in an article that was published on SearchNetworking.com in 2011. That article is long gone, so I’m including it in this blog post.
If you haven’t spent the last few weeks on a forgotten island with no satellite phone coverage, you’ve probably noticed the spiking levels of hype surrounding the newest internetworking technology: OpenFlow. The networking industry is obviously in dire need of the next big thing. The last time I saw something similar was in the early 2000s, when MPLS was supposed to solve every internetworking problem ever envisioned. In those days the levels of hype were so high that someone wrote an April 1st RFC describing the use of MPLS for electricity transport.
Like MPLS, OpenFlow won’t bring world peace, cure cancer or discover alien civilizations. It might, however, help change the internetworking environment in the same way Unix and Linux changed the operating system landscape by providing a standard way of configuring forwarding tables in a distributed switching architecture.
But that doesn’t account for the explosion of OpenFlow announcements at Interop. After all, OpenFlow was an unknown academic toy only a few months ago. In fact, the speed with which vendors were able to throw together proof-of-concept code indicates one of the drawbacks of OpenFlow: it’s a simple low-level API (some people are comparing it to BIOS). The hard part of the exercise will be writing the controller software everyone is already raving about – and that won’t be easy. Networking vendors have invested thousands of man-years into similar efforts, so those who expect revolutionary new controller applications to appear out of the blue probably also believe in the tooth fairy and unicorn tears.
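To show just how low-level that API is, here’s a minimal sketch using the open-source Ryu controller framework: it installs a single drop rule for telnet traffic whenever a switch connects. Everything smarter than this is up to the controller application.

```python
# Minimal Ryu app (sketch): when a switch connects, install one OpenFlow 1.3 flow
# entry that drops telnet (TCP/23) traffic. Run it with ryu-manager.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class DropTelnet(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=23)
        # An empty instruction list means "no actions" -> matching packets are dropped
        flow = parser.OFPFlowMod(datapath=dp, priority=100, match=match, instructions=[])
        dp.send_msg(flow)
```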
One of the most extreme analogies I’ve heard so far compared OpenFlow to a C compiler. Instead of using off-the-shelf applications, now we have the ability to develop our own. This might be true, but someone still has to develop these applications, test them and make sure they scale, which is one of the biggest hurdles OpenFlow has to cross. Meanwhile, vendors are already touting controller applications as the “magic” ingredient, but I wouldn’t expect miracles. As technical guru and professor Scott Shenker explained: “[OpenFlow] doesn’t let you do anything you couldn’t do on a network before.”
Moreover, even if OpenFlow were comparable to a C compiler, we haven’t seen an explosion of database packages or spreadsheet programs just because we have a C compiler. A few vendors own the majority of the market in each application segment, and the OpenFlow controller landscape might look very similar in a few years. There will likely be a few makers of commoditized hardware based on common merchant silicon and a few software vendors (probably including Cisco, Juniper and VMware) providing the vast majority of the controller nodes. And just in case you still believe OpenFlow will bring down prices and shrink the fat margins of some internetworking companies, take a brief look at Oracle’s financial reports.
Still Want to Know More about OpenFlow?
If you’re keen on figuring out how an obsolete protocol worked, you’ll find all the gory details in the OpenFlow Deep Dive webinar. If you’re more interested in real-life solutions, explore other SDN or network automation webinars.
Revision History
- 2022-07-06: Added the OpenFlow article to the blog post

¹ Hint: It did not. ↩︎
MPLS/VPN Transport Options
Jason sent me an interesting question a few days ago: “assuming a vSwitch *did* support MPLS/VPN PE router functionality, what type of protocol support would be needed on the access layer switches?”
While MPLS/VPN support in hypervisor switches remains in the realm of science fiction, it’s worth knowing that there are at least five different transport options you can use between PE-routers. Here they are, from the most decoupled to the most tightly coupled ones:
Data Center Fabric Architectures update#1
Two months ago I wrote the Data Center Fabric Architectures post jokingly defining Borg and Big Brother architectures. In the meantime, a number of vendors have launched (or announced) their fabric products and the post badly needed an update.
I decided to move the updated text to my main web site (where it will be easier to edit), wrote an introductory section, removed a few tongue-in-cheek comments (after all, it’s time to get serious if Cisco’s Data Center blog links to your article) and added numerous links to in-depth articles and examples of individual architectures.