HP Virtual Connect: every vendor has its own dinosaurs

I was listening to the HP Virtual Connect (VC) Packet Pushers podcast recently and got the impression that HP VC is a weirdly convoluted product. I started wondering what exactly they were thinking when they were designing it ... and had an epiphany when Ken Henault took a step back and explained the history leading to the current complexity (listen to the Packet Pushers podcast to get the whole story).


How much IPv6 address space should a residential customer get?

A while ago I wrote about the IPv6 addressing challenges some ISPs face and recommended what I thought was the agreed-upon practice of giving residential customers a /64 or a /56. Not long after, I received an e-mail from an IPv6 guru saying:

[Worse] is when people start claiming to have expertise in IPv6 and promulgate this idea of residential /56s and /64s as immutable fact. The reality is that it is becoming more and more apparent that /56s and especially /64s to residential customers are going to be harmful to future innovation in IPv6.
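
To put the numbers in perspective, here's the prefix arithmetic behind the debate (a quick sketch of mine, using the IPv6 documentation prefix, not part of the original e-mail): a /64 gives a residential customer exactly one subnet, a /56 gives 256 possible /64 subnets, and a /48 gives 65,536.

import ipaddress

# Number of /64 subnets a customer can carve out of each delegated prefix
# size (illustrative only; 2001:db8::/32 is the documentation prefix).
for plen in (48, 56, 60, 64):
    delegation = ipaddress.ip_network(f"2001:db8::/{plen}")
    print(f"/{plen} delegation -> {2 ** (64 - delegation.prefixlen)} /64 subnet(s)")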

Multi-Chassis Link Aggregation (MLAG) and Hot Potato Switching

There are two reasons to bundle parallel Ethernet links into a port channel (the official term is Link Aggregation Group):

  • Transforming parallel links into a single logical link bypasses Spanning Tree Protocol loop avoidance logic; all links belonging to the port channel can be active at the same time (see also: Multi-Chassis Link Aggregation basics).
  • Load sharing across parallel links in a port channel increases the total bandwidth available between adjacent L2 switches or between routers/hosts and switches.

Ethan Banks wrote an excellent explanation of traditional port channel caveats (proving that 1+1 sometimes does not equal 2). Things get way worse when you start using Multi-Chassis Link Aggregation because of the hot-potato switching (the switch tries to forward packets toward the destination MAC address as soon as possible) used by all MLAG implementations I’m familiar with.
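
To illustrate why 1+1 sometimes does not equal 2, here's a minimal sketch (mine, not from Ethan's article) of the per-flow hashing a switch typically performs when choosing a port-channel member link: every frame of a given MAC-address pair lands on the same physical link, so a single large flow can never use more than one member link's bandwidth, and with MLAG each chassis makes this decision independently, preferring its local (hot-potato) links.

from zlib import crc32

def select_member_link(src_mac, dst_mac, member_links):
    """Pick a port-channel member link with a per-flow hash.

    All frames between the same MAC-address pair hash onto the same link,
    which keeps them in order but also pins each flow to a single link.
    Real switches hash on more fields with vendor-specific functions; this
    is only an illustration.
    """
    flow_key = f"{src_mac}-{dst_mac}".encode()
    return member_links[crc32(flow_key) % len(member_links)]

links = ["Gi1/0/1", "Gi1/0/2"]
print(select_member_link("0000.1111.aaaa", "0000.2222.bbbb", links))
print(select_member_link("0000.1111.aaaa", "0000.3333.cccc", links))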


CLNP and the Multihoming Myths

When the IESG decided to adopt SIP, not TUBA (TCP/UDP over CLNP), as IPv6, a lot of people were mightily disappointed, and some of them still propagate the myth that CLNP, with its per-node addresses, would fare better than IPv6 with its per-interface addresses (you might find John Day’s writings on this topic interesting; Petr Lapukhov also advocates this view in his comments).

These views are correct for small-scale (intra-network) multihoming, but unfortunately wrong when it comes to Internet-scale multihoming, where CLNP with TCP on top of it would be just as bad as IPv4 or IPv6 (routing table explosion due to multihoming is also one of the topics of my Upcoming Internet Challenges webinar).


Can we go back to CLNP?

Paulie, a frustrated enterprise IPv6 early adopter, summarized his pains in a comment to my “Small-site multihoming in IPv6: mission impossible?” post, saying “[IPv6/IPv6 support] is a mess and depressing” and asking “Is it too late to go to CLNS?”

Quite a few old-timers (I’m definitely one of them) lament the glory days of VMS, DECnet Phase V and CLNP, but while CLNP was a viable alternative for the next-generation IP in 1993, it would fare worse than IPv6 today.


Another security product killed

We all knew MARS was becoming a dead end (Cisco first removed third-party support and then stopped developing the product); now it’s official: MARS is dead.

Just in case you haven’t noticed, this is the third security product (after the WAF and the XML Gateway) Cisco has killed this year. Are they implementing borderless networks, or trimming down to core competencies while preparing for the onslaught of market adjacencies?


Chinese BGP incident: was it a traffic hijack?

You’re probably familiar with the April fat-fingers incident in which Chinanet (AS 23724) originated ~37,000 prefixes for about 15 minutes. The incident made it into the annual report of the US Congress’ U.S.-China Economic and Security Review Commission (page 243 of this PDF), and the media was more than happy to pick it up (Andree Toonk has a whole list of links in his blog post). We might never know whether the misleading statements in the report were intentional or just the result of clueless technical advisors, but the facts are far from what the report claims:


Small Site Multihoming in IPv6: Mission Impossible?

Summary: I can’t figure out how to make small-site multihoming (without BGP or PI address space) work reliably and decently fast (failover in seconds, not hours) with IPv6. I’m probably not alone.

Problem: There are cases where a small site needs (or wants) to have Internet connectivity from two ISPs without going through the hassle of getting a BGP AS number and provider-independent address space, and running BGP with both upstream ISPs.


Requirements for IPv6 in ICT equipment

Greg Ferro reached an interesting conclusion after going through my Content over IPv6 presentation: we won’t see IPv6 for a few years, so why bother. Although I disagree with his approach, he may be right ... but if you decide to ignore IPv6, you might be forced to implement it in a hurry, at which point you’ll be stuck if your equipment doesn’t support IPv6. The very minimum you should do today is buy IPv6-ready gear (and yell at the vendors if they try to charge extra for IPv6 support).


FCoE between data centers? Forget it!

Was anyone trying to sell you the “wonderful” idea of running FCoE between Data Centers instead of FC-over-DWDM or FCIP? Sounds great ... until you figure out it won’t work. Ever ... or at least until switch vendors drastically increase interface buffers on the 10GE ports.

FCoE requires lossless Ethernet between its “routers” (Fibre Channel Forwarders – see Multihop FCoE 101 for more details), which can only be provided with the Data Center Bridging (DCB) standards, specifically Priority Flow Control (PFC). However, if you want lossless Ethernet between two points, every layer-2 (or higher) device in the path has to support DCB, which probably rules out any existing layer-2+ solution (including Carrier Ethernet, pseudowires, VPLS or OTV). The only remaining option is bridging over dark fiber or a DWDM wavelength.
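
A back-of-the-envelope calculation (my numbers, not from the original post) shows why interface buffers are the showstopper: after a PFC pause frame is sent, data keeps arriving for roughly one round-trip time, so a lossless 10GE port has to buffer the whole in-flight window, which grows linearly with fiber distance.

# Rough estimate of the buffer a lossless (PFC) 10GE port needs to absorb
# in-flight data over a given fiber distance. Assumptions (mine): light
# travels at ~2e8 m/s in fiber; pause-frame reaction time and maximum frame
# sizes are ignored, which only makes the real requirement larger.
LINE_RATE_BPS = 10e9      # 10 Gigabit Ethernet
FIBER_SPEED_MPS = 2e8     # roughly 2/3 of c in glass

def pfc_buffer_bytes(distance_km):
    one_way_delay_s = distance_km * 1000 / FIBER_SPEED_MPS
    # Data keeps arriving for ~1 RTT after the receiver sends the pause frame.
    return LINE_RATE_BPS * 2 * one_way_delay_s / 8

for km in (1, 10, 50, 100):
    print(f"{km:3} km -> {pfc_buffer_bytes(km) / 1e6:.2f} MB of buffer per port")

At metro distances that’s already far more buffer than most 10GE switch ports provide.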


VMware Virtual Switch: no need for STP

During the Data Center 3.0 webinar I always mention that you can connect a VMware ESX server (with its embedded virtual switch) to the network through multiple active uplinks without link aggregation. The response is very predictable: I get a few “how does that work” questions within seconds.

VMware did a great job with the virtual switch embedded in the VMware hypervisor (vNetwork Standard Switch – vSS – or vNetwork Distributed Switch – vDS): it uses special forwarding rules (I call them split-horizon switching; Cisco UCS documentation uses the term End Host Mode) that prevent forwarding loops without resorting to STP or port blocking.
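
Here’s a rough model of those split-horizon forwarding rules (my simplification, not VMware’s code): the vSwitch knows every vNIC MAC address from the VM configuration, never learns addresses from the uplinks, and never forwards a frame from one uplink to another, so no forwarding loop can form even with multiple active uplinks and no STP.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def forward(dst_mac, ingress_port, vnic_macs, uplinks):
    """Simplified split-horizon forwarding decision of a vSwitch (my model).

    vnic_macs maps each VM MAC address to its vNIC port; the table comes
    from the VM configuration and is never learned from the wire.
    """
    local_ports = set(vnic_macs.values())
    from_uplink = ingress_port in uplinks

    if dst_mac in vnic_macs:              # known VM MAC -> that vNIC only
        return [vnic_macs[dst_mac]]
    if dst_mac == BROADCAST:
        # Flood to local vNICs; add one uplink only if the frame came from
        # a VM. A frame is never forwarded from one uplink to another,
        # which is what makes STP unnecessary.
        flood = sorted(local_ports - {ingress_port})
        return flood if from_uplink else flood + [uplinks[0]]
    # Unknown unicast: from a VM it must be destined outside, so use one
    # uplink; from an uplink it cannot be local, so drop it.
    return [] if from_uplink else [uplinks[0]]

vnics = {"00:50:56:aa:aa:aa": "vnic1", "00:50:56:bb:bb:bb": "vnic2"}
print(forward(BROADCAST, "uplink1", vnics, ["uplink1", "uplink2"]))  # ['vnic1', 'vnic2']
print(forward(BROADCAST, "vnic1", vnics, ["uplink1", "uplink2"]))    # ['vnic2', 'uplink1']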


Cisco IOS Login Enhancements are not IPv6-aware

One of the comments to my “IPv6 in Data Center: after a year, Cisco is still not ready” post included the following facts:

Up through at least 15.0(1)M and 12.2(53)SE2 the IPv6 support for management protocols is spotty; syslog is there, SNMP traps and the RADIUS/TACACS control plane aren't.

Another bug along the same lines was discovered by Jónatan Jónasson: when the Cisco IOS Login Enhancements feature logs a successful or failed login attempt, it reports the top 32 bits of the remote IPv6 address in IPv4 address format. Here’s a sample printout taken from a router running IOS release 15.0(1)M.

P#
%SEC_LOGIN-5-LOGIN_SUCCESS: Login Success [user: test] [Source: 254.192.0.0] [localport: 23] at ...
P#who
    Line       User       Host(s)              Idle       Location
*   0 con 0               idle                 00:00:00
    2 vty 0    test       idle                 00:00:06   FEC0::CCCC:1
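
You can reproduce the mangled address outside IOS with a few lines of Python (my illustration of the bug, obviously not how IOS produces it): take the top 32 bits of the IPv6 address and print them as an IPv4 address.

import ipaddress

remote = ipaddress.IPv6Address("FEC0::CCCC:1")
top_32_bits = int(remote) >> 96             # keep only the first 32 of 128 bits
print(ipaddress.IPv4Address(top_32_bits))   # 254.192.0.0 - matches the log above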

It looks like the recommendation we made two years ago is still valid: use IPv4 for network management.
