Changing IP precedence values in router-generated pings
When I was testing QoS behavior in MPLS/VPN-over-DMVPN networks, I needed a traffic source that could generate packets with different DSCP/IP precedence values. If you have enough routers in your lab (the MPLS/DMVPN lab used to generate the router configurations you get with the Enterprise MPLS/VPN Deployment and DMVPN: From Basics to Scalable Networks webinars has eight routers), it’s usually easier to use a router as a traffic source than to connect an extra IP host to the lab network. The task at hand: generate traffic with different DSCP values from the router.
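One easy option is the extended ping dialog, where the Type of service field accepts the whole ToS byte (IP precedence × 32, or DSCP × 4). A sketch from a lab session – the target address is made up and the prompts may differ slightly between IOS releases:

```
R1#ping
Protocol [ip]:
Target IP address: 10.0.1.2
Repeat count [5]: 100
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface:
Type of service [0]: 184
```

184 (0xB8) is DSCP EF (46) shifted left by two bits; use 160 for IP precedence 5 with zero low-order ToS bits.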
The week of blunders
This week we finally got some great warm(er) dry weather after months of eternal late autumn interspersed with snowstorms and cold spells, which left me way too focused on rock climbing while blogging and testing IOS behavior on the side. The incredible result: two blunders in a single week.
First I “discovered” anomalies in ToS propagation between IP precedence values and MPLS EXP bits. It was like one of those unrepeatable cold fusion experiments: for whatever stupid reason it all made sense while I was doing the tests, but I was never able to recreate the behavior. The “End-to-end QoS marking in MPLS/VPN-over-DMVPN networks” post is fixed (and I’ve noticed a few additional QoS features while digging around).
The second stupidity could only be attributed to professional blindness. Whenever I read about pattern matching, regular expressions come to mind. That’s not always true – as some commenters on my “EEM QA: what were they (not) doing?” post pointed out, the action string match command expects Tcl patterns (not regular expressions).
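The difference is significant: Tcl patterns are glob-style – * matches any sequence, ? a single character, [chars] a character set – and regexp operators like .* or ^ have no special meaning. A hypothetical applet fragment illustrating the Tcl-style pattern (variable names reproduced from memory):

```
event manager applet ReloadCheck
 event none
 action 1.0 cli command "enable"
 action 2.0 cli command "show reload"
 ! Tcl pattern: use *, not the regexp .*
 action 3.0 string match "*Reload scheduled*" "$_cli_result"
 action 4.0 if $_string_result eq "1"
 action 5.0  syslog msg "Router reload is scheduled"
 action 6.0 end
```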
At least the rock climbing parts of the week were great ;)
EEM QA: what were they (not) doing?
When I was writing the applet that should stop accidental scheduled router reloads, I wanted to use the action string match command to perform pattern matching on the output of the show reload command. Somehow the applet didn’t want to work as expected, so I checked the documentation on Cisco’s web site.
Reading the command description, I should have realized the whole thing must be broken. It looks like the documentation writer was fast asleep; even someone with a major in classical philosophy and zero exposure to networking should be able to spot the glaring logical inconsistencies.
End-to-End QoS marking in MPLS/VPN-over-DMVPN networks
I got a great question in one of my Enterprise MPLS/VPN Deployment webinars when I was describing how you could run MPLS/VPN across DMVPN cloud:
That sounds great, but how does end-to-end QoS work when you run IP-over-MPLS-over-GRE-over-IPSec-over-IP?
My initial off-the-cuff answer was:
Well, when the IP packet arriving through a VRF interface gets its MPLS label, the IP precedence bits from the IP packet are copied into the MPLS EXP (now TC) bits. As for what happens when the MPLS packet gets encapsulated in a GRE packet and when the GRE packet is encrypted… I have no clue. I need to test it.
IPv6 Provider Independent Addresses
If you want your network to remain multihomed when the Internet migrates to IPv6, you need your own Provider Independent (PI) IPv6 prefix. That’s old news (I was writing about the multihoming elephant almost two years ago), but most of the IT industry managed to look the other way, pretending the problem does not exist. It was always very clear that the lack of other multihoming mechanisms would result in an explosion of the global IPv6 routing table (attendees of my Upcoming Internet Challenges webinar probably remember the topic very well, as it was one of my focal points), and yet nothing was done about it (apart from the LISP development efforts, which will still take a while to be globally deployed).
To make matters worse, some Service Providers behave like model citizens of the IPv6 world and filter prefixes longer than /32 when they belong to Provider Assigned (PA) address space, which means you cannot implement reliable multihoming at all unless you get a chunk of PI address space.
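A hedged illustration of what such a filter might look like on a Cisco IOS router (2001:db8::/32 is a documentation prefix standing in for a PA allocation; the AS numbers and peer address are made up):

```
! Drop more-specific advertisements (longer than /32) from a PA block
ipv6 prefix-list NO-PA-DEAGGREGATES deny 2001:db8::/32 ge 33
ipv6 prefix-list NO-PA-DEAGGREGATES permit ::/0 le 48
!
router bgp 65000
 neighbor 2001:db8:ffff::1 remote-as 65001
 address-family ipv6
  neighbor 2001:db8:ffff::1 activate
  neighbor 2001:db8:ffff::1 prefix-list NO-PA-DEAGGREGATES in
```

With a filter like this in place, a more-specific prefix a multihomed customer advertises out of its upstream’s PA block never makes it into the filtering provider’s table – exactly the multihoming problem described above.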
Open FCoE – Software implementation of the camel jetpack
Intel announced its Open FCoE (a software implementation of the FCoE stack on top of Intel’s 10GbE adapters) using the cloudy bullshit bingo, including Simplifying the Data Center, Free New Technology, Cloud Vision and Green Computing (ok, they used Environmental Impact), plus lots of positive supporting quotes. The only thing missing was an enthusiastic Gartner quote (or maybe they were too expensive?).
Interesting links (2010-01-30)
Links to interesting content have yet again started gathering dust in my Inbox. Time for a cleanup action. Technical content first:
Cisco Pushing More vNetwork into Hardware. Pretty good description of impact of Nexus 1000V and VN-Link on virtualized network security.
Convergence Delays: SVI vs Routed Interface. Another great article by Stretch. I never realized carrier-delay could be that harmful. The moral of the story is also important: test and verify the device behavior, don’t trust PPT slides (one day I’ll share how I learned that lesson the hard way).
RFC 6092 - Recommended Simple Security Capabilities in Customer Premises Equipment (CPE) for Providing Residential IPv6 Internet Service. A fantastic document – now we can only hope that every magazine evaluating consumer IPv6-ready CPEs starts using it as a benchmark (and that the IPv6 Ready guys pick it up).
Stop accidental scheduled router reloads
Alexandra Stanovska wrote an excellent comment to my Schedule reload before configuring the router post:
It may come in handy creating some form of script that would display some basic [info] upon logout - show debug, show reload etc.
The new capabilities of CLI event detector introduced in EEM 3.0 allow us to catch CLI commands in a particular parser mode. Writing an EEM applet that catches exec-mode exit or logout and performs a few checks is thus a trivial task.
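A sketch of such an applet (syntax reproduced from memory – the mode keyword requires EEM 3.0, and with sync yes the applet must set _exit_status to let the original command proceed):

```
event manager applet CatchLogout
 event cli pattern "^(exit|logout)" mode "exec" sync yes
 action 1.0 cli command "enable"
 action 2.0 cli command "show reload"
 action 3.0 syslog msg "Pending reload check: $_cli_result"
 ! Allow the exit/logout command to execute
 action 4.0 set _exit_status "1"
```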
Commented on July 10, 2022
VMware Cluster: Up and Running in Three Hours
A few days ago I wanted to test some of the new networking features VMware introduced with the vShield product family. I almost started hacking together a few old servers (knowing I would have wasted countless hours with utmost stupidities like trying to get the DVDs to boot), but then realized that we already have the exact equipment I need: a UCS system with two Fabric Interconnects and a chassis with five blade servers – the lab for our Data Center training classes (the same lab has a few Nexus switches, but that’s another story).
I managed to book lab access for a few days, which was all I needed. Next step: get a VMware cluster installed on it. As I never touched the UCS system before, I asked Dejan Strmljan (one of our UCS gurus) to help me.
vSwitch in Multi-chassis Link Aggregation (MLAG) environment
Yesterday I described how the lack of LACP support in VMware’s vSwitch and vDS can limit the load balancing options offered by the upstream switches. The situation gets totally out of hand when you connect an ESX server with two uplinks to two (or more) switches that are part of a Multi-chassis Link Aggregation (MLAG) cluster.
Let’s expand the small network described in the previous post a bit, adding a second ESX server and another switch. Both ESX servers are connected to both switches (resulting in a fully redundant design) and the switches have been configured as a MLAG cluster. Link aggregation is not used between the physical switches and ESX servers due to lack of LACP support in ESX.
VMware vSwitch does not support LACP
This is very old news to any seasoned system or network administrator dealing with VMware/vSphere: the vSwitch and vNetwork Distributed Switch (vDS) do not support Link Aggregation Control Protocol (LACP). Multiple uplinks from the same physical server cannot be bundled into a Link Aggregation Group (LAG, also known as port channel) unless you configure static port channel on the adjacent switch’s ports.
When you use the default (per-VM) load balancing mechanism offered by vSwitch, the drawbacks caused by lack of LACP support are usually negligible, so most engineers are not even aware of what’s (not) going on behind the scenes.
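If you do want a LAG toward an ESX host, the adjacent switch ports have to be bundled statically (and the vSwitch load balancing mechanism changed to IP hash). A hypothetical Cisco IOS configuration:

```
interface range GigabitEthernet1/0/1 - 2
 description Uplinks to ESX host
 switchport mode trunk
 ! mode on = static bundling, no LACP negotiation
 channel-group 10 mode on
```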
Intelligent Redundant Framework (IRF) – Stacking as Usual
When I was listening to the Intelligent Redundant Framework (IRF) presentation from HP during the Tech Field Day 2010 and read the HP/H3C IRF 2.0 whitepaper afterwards, IRF looked like a technology sent straight from Data Center heavens: you could build a single unified fabric with optimal L2 and L3 forwarding that spans the whole data center (I was somewhat skeptical about their multi-DC vision) and behaves like a single managed entity.
No wonder I started drawing the following highly optimistic diagram when creating materials for the Data Center 3.0 webinar, which includes information on Multi-Chassis Link Aggregation (MLAG) technologies from numerous vendors.
Interesting links (2011-01-23)
Interesting links keep appearing in my Inbox. Thank you, my Twitter friends and all the great bloggers out there! Here’s this week’s collection:
Cores and more Cores… We don’t need them! More is not always better.
Emulating WANs with WANem. I wanted to blog about WANem for at least three years; now I don’t have to – as always, Stretch did an excellent job.
New Year's Resolutions for Geeks Like Me. The ones I like most: Deploy a pure IPv6 subnet somewhere; Put something in public cloud. And there’s “find a mentor” ... more about that in a few months ;)
VPN Network Design: Selecting the Technology
After all the DMVPN-related posts I’ve published in the last few days, we’re ready for the OSPF-over-DMVPN design challenge, but let’s first take a few steps back and start where every design project should start: deriving the technical requirements and the WAN network design from the business needs.
Do I Need a VPN?
Whenever considering this question, you’re faced with a buy-or-build dilemma. You could buy MPLS/VPN (or VPLS) service from a Service Provider or get your sites hooked up to the Internet and build a VPN across it. In most cases, the decision is cost-driven, but don’t forget to consider the hidden costs: increased configuration and troubleshooting complexity, lack of QoS over the Internet and increased exposure of Internet-connected routers.
Configuring OSPF in a Phase 2 DMVPN network
Cliffs Notes version (details in the DMVPN webinar):
- Configure ip nhrp multicast-map for the hub on the spoke routers. Otherwise, the spokes will not send OSPF hellos to the hub.
- Use dynamic NHRP multicast maps on the hub router, or the spokes will not receive its OSPF hellos.
- Use broadcast network type on all routers.
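Put together, the relevant parts of the tunnel configurations might look like this sketch (all addresses are made-up documentation/lab values; 192.0.2.1 stands in for the hub’s NBMA address):

```
! Hub router
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip ospf network broadcast
 ip ospf priority 100
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
!
! Spoke router
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp map 10.0.0.1 192.0.2.1
 ip nhrp map multicast 192.0.2.1
 ip nhrp nhs 10.0.0.1
 ip nhrp network-id 1
 ip ospf network broadcast
 ip ospf priority 0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
```

Setting OSPF priority to zero on the spokes ensures a spoke can never become the DR – with the broadcast network type, only the hub can reach (and flood to) all other routers.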