Lies, damned lies and product marketing
Greg Ferro’s “Layer-3 routing” post hit a huge sore spot of mine: the numerous ways product marketing gurus abuse technical terminology.
Twenty years ago, before networking became a multi-billion dollar industry, things were clear, simple and consistent: layer-2 (data-link layer) frame forwarding was bridging and layer-3 (network layer) packet forwarding was routing. Everything stayed that way until some overly smart people tried to turn bridges into something they were not: WAN extension devices. A few large WAN networks were built with bridges … and failed spectacularly. Router vendors quickly seized the opportunity to push the “routing is good, bridging is bad” mantra.
Certifications and the hiring process
My good friend Stretch wrote an interesting article about the usability of certifications in the hiring process. I can’t agree more with everything he wrote about certifications; it nicely summarizes the various topics Greg Ferro and I have written about during the last year (please note: I’m not claiming Stretch was in any way influenced by our thoughts; anyone seriously considering the current certification processes has to come to the same conclusions).
Regrettably, I have to disagree with most of his alternative approach (although some of the ideas are great). It would work in an ideal world, but faces too many real-life obstacles in this one.
iSCSI and SAN: Two Decades of Rigid Mindsets
Greg Ferro asked a very valid question in his blog: “Why does iSCSI use TCP as the underlying transport protocol?” It’s possible to design a storage-focused protocol that uses connectionless lower layers (NFS comes to mind), but the storage industry has been too focused on its past to see beyond the artificial obstacles it has set up for itself.

IPv6 in the campus but not in the Data Center?
Cisco has recently published two excellent design guides: Deploying IPv6 in Campus Networks and Deploying IPv6 in Branch Networks. As expected from the engineers writing Cisco’s design documents, these guides contain tons of useful information and good recommendations; they’re highly recommended reading if you’ve started considering IPv6 deployment in your enterprise network. These design guides are part of the Design Zone for Branch and the Design Zone for Campus.
IPv6 deployment issues are just some of the topics covered in the Enterprise IPv6 Deployment workshop. You can attend an online version of the workshop or we can organize a dedicated event for your team.
HQF: intra-class WFQ ignores the IP precedence
Continuing my tests of the Hierarchical Queuing Framework, I checked whether fair queuing works similarly to previous IOS implementations, where high-precedence sessions got proportionally more bandwidth.
Summary: Fair queuing within HQF ignores IP precedence.
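A minimal sketch of the kind of per-class fair-queue policy that exhibits this behavior (class name, access list and interface are purely illustrative):
access-list 101 permit ip 10.0.1.0 0.0.0.255 any
!
class-map match-all PREC-TEST
 match access-group 101
!
policy-map PREC-WAN
 class PREC-TEST
  bandwidth percent 20
  fair-queue
!
interface Serial0/1/0
 service-policy output PREC-WAN
!
! Two flows matched by access-list 101, one marked with IP precedence 5 and
! one with precedence 0, get roughly equal shares of the class bandwidth;
! pre-HQF WFQ would have favored the precedence-5 flow.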
Dual-stack PPP requires two separate sessions?
A while ago a senior Service Provider network designer told me they had serious issues with IPv6 deployment, as IPv6 requires a separate PPPoE session from the CPE devices, which significantly increases their licensing costs. The statement really surprised me; PPP was designed to be a multi-protocol environment and it’s very easy to configure IPv4 and IPv6 over a single PPP session in Cisco IOS. I think I might have tracked down the source of this “information” to the 6deploy IPv6 and DSL presentation, which states on page 11 that “Separate PPP sessions are established between the Subscriber’s systems (or CPE) and the BBRAS for IPv6 and IPv4 traffic”.
Being too Cisco-centric, I cannot figure out whether this claim has any merit, as it clearly does not apply to Cisco IOS. Are there really boxes out there that are so stupid that they cannot run two protocols across a single PPPoE session?
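For reference, here’s a minimal sketch of a dual-stack PPPoE client configuration in Cisco IOS; the interface names are examples and the addressing and authentication details will obviously differ in a real deployment:
interface FastEthernet0/0
 description DSL-facing interface
 no ip address
 pppoe enable group global
 pppoe-client dial-pool-number 1
!
interface Dialer1
 mtu 1492
 encapsulation ppp
 dialer pool 1
 ! IPv4 and IPv6 share the same PPPoE/PPP session
 ip address negotiated
 ipv6 enable
 ipv6 address autoconfig
!
! IPCP and IPV6CP are negotiated within a single PPP session;
! there is no need for a second PPPoE session.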
“ip ospf mtu-ignore” Is a Dangerous Command
A while ago, I wrote about the problems caused by MTU mismatch between OSPF neighbors and warned that the ip ospf mtu-ignore interface configuration command that supposedly solves the problem could cause significant headaches.
These are the router configurations I’ve used to generate the problem described in this blog post.
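(The configurations below are a reconstructed sketch rather than the verbatim originals; hostnames, addresses and the MTU value are illustrative.)
hostname R1
!
interface GigabitEthernet0/1
 ip address 10.0.0.1 255.255.255.252
 ip mtu 1400
 ! hides the mismatch instead of fixing it
 ip ospf mtu-ignore
!
router ospf 1
 network 10.0.0.0 0.0.0.3 area 0

hostname R2
!
interface GigabitEthernet0/1
 ip address 10.0.0.2 255.255.255.252
 ip ospf mtu-ignore
!
router ospf 1
 network 10.0.0.0 0.0.0.3 area 0
!
! The adjacency comes up despite the mismatch, but large OSPF updates can
! still be dropped in transit, which is exactly the kind of headache the
! post describes.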
Who Needs IPv6?
One of the most common questions asked by my enterprise customers is “Who needs IPv6?” Since IPv6 does not add any significant new functionality (apart from a larger address space), you can’t gain much by deploying it in an enterprise network … unless you’re huge enough that the private IPv4 address space (RFC 1918) becomes too confining for you. A good case study is Halliburton; you’ll find the details in the Global IPv6 Strategies: From Business Analysis to Operational Planning book (my review).

Detect short bursts with EEM
Last week I described how you can use EEM to detect long-term interface congestion that could indicate a denial-of-service attack. The mechanism I used (the averaged interface load) is pretty slow; with the lowest possible load-interval value (30 seconds), it takes almost a minute to detect a DoS attack (see below).
If you want to detect outbound bursts, you can do better: you can monitor the increase in the number of output drops over a short period of time.
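A minimal sketch of such an applet (interface name, threshold and polling interval are illustrative; the exact event interface keywords vary slightly between EEM versions):
event manager applet DetectOutputDropBurst
 ! fire when output drops increase by 50 or more within a 10-second poll interval
 event interface name Serial0/1/0 parameter output_packets_dropped entry-op ge entry-val 50 entry-type increment poll-interval 10
 action 1.0 syslog msg "Output drop burst on $_interface_name: $_interface_parameter increased by $_interface_value"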
HQF: intra-class fair queuing
Continuing from my first excursion into the brave new world of HQF, I wanted to check how well the intra-class fair queuing works. I’ve started with the same testbed and router configurations as before and configured the following policy-map on the WAN interface:
policy-map WAN
 class P5001
  bandwidth percent 20
  fair-queue
 class P5003
  bandwidth percent 30
 class class-default
  fair-queue
The test used this background load:
| Class | Background load |
|---|---|
| P5001 | 10 parallel TCP sessions |
| P5003 | 1500 kbps UDP flood |
| class-default | 1500 kbps UDP flood |
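To check how the traffic was actually distributed between the classes (and among the flow queues within them), the per-class counters displayed by show policy-map interface are the obvious place to look; the interface name is just an example:
show policy-map interface Serial0/1/0
show policy-map interface Serial0/1/0 output class P5001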
Detect DoS Attacks with EEM
Someone sent me an interesting question a while ago: “Is it possible to detect DoS flooding with an EEM applet?” Of course it is (assuming the DoS attack results in a very high load on the Internet-facing interface), and the best option is the EEM interface event detector.

Detecting interface overload with EEM
The interface event detector is more user-friendly than the SNMP event detector: you can specify the interface name and parameter name directly, whereas with the SNMP event detector you have to specify an SNMP object identifier (OID). The interface event detector also stores the interface name, the measured parameter name and its value in three convenient environment variables that you can use to generate syslog messages or alert the operators via e-mail.
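Putting the pieces together, a minimal sketch of a load-monitoring applet could look like this (interface name and threshold are illustrative; txload and rxload are expressed in 255ths of the configured interface bandwidth and averaged over the configured load-interval):
event manager applet DetectHighLoad
 ! fire when the transmit load exceeds roughly 78% (200/255)
 event interface name Serial0/1/0 parameter txload entry-op ge entry-val 200 entry-type value poll-interval 30
 action 1.0 syslog msg "High load on $_interface_name: $_interface_parameter = $_interface_value"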
IPv6 in the Data Center: is Cisco ready?
With Cisco’s recent push into the Data Center environment and all the (not so very unreasonable) fuss around IPv4 address depletion and the imminent need for IPv6, I wanted to check whether an all-Cisco shop could take the first step: deploy IPv6 on Internet-facing production servers. If you follow the various design guidelines, your setup will have at least the following elements (and I bet someone from Cisco has already told you that you also need an XML firewall, Ironport and a WAAS appliance):
Now let’s see how well these boxes support IPv6.
I’m describing the Data Center IPv6 deployment issues in the Enterprise IPv6 Deployment workshop. The diagram above was taken straight from the workshop materials.
First HQF impressions: excellent job
Several readers told me that the Hierarchical Queuing Framework introduced in IOS releases 12.4(20)T and 15.0 (why do I always have the urge to write 12.5?) works much better than CB-WFQ. After spending several hours trying to break HQF, I have to concur with them: Cisco’s engineers did a splendid job. However, HQF behavior might be slightly counterintuitive to those who have become too familiar with CB-WFQ.
Solution: Bandwidth+Police actions in CB-WFQ
Most of the respondents to last week’s challenge got it almost right. The common minor error was the assumption that police rate percent 50 would result in a TCP session getting 50% of the bandwidth. Eyal got that right: the TCP throughput is always significantly lower than that due to frequent drops caused by the low burst sizes assumed by the police command and the resulting TCP restarts (the most I was able to push through was around 90 kbps; half of the bandwidth would be 128 kbps).
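For reference, here’s a sketch of the kind of policy the challenge revolved around: a single class with both bandwidth and police actions attached to a 256 kbps link (class name, matching criteria and interface are illustrative):
class-map match-all TCP
 match protocol tcp
!
policy-map Challenge
 class TCP
  bandwidth percent 50
  police rate percent 50
!
interface Serial0/1/0
 bandwidth 256
 service-policy output Challenge
!
! The policer rate works out to 128 kbps (50% of 256 kbps), but the small
! default burst sizes derived from that rate cause frequent drops, so a
! single TCP session tops out well below 128 kbps.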