HP Virtual Connect: every vendor has its own dinosaurs

I was listening to the HP Virtual Connect (VC) PPP podcast recently and got the impression that HP VC is a weirdly convoluted product. I started wondering what exactly they were thinking when they were designing it ... and had the epiphany when Ken Henault took a step back and explained the history leading to the current complexity (listen to the Packet Pushers podcast to get the whole story).

Obviously, HP VC started as a simple patch panel and evolved through a series of building-a-better-mousetrap enhancements while retaining backward compatibility. No wonder Cisco UCS networking seems revolutionary, when it's in fact just a great common-sense greenfield design using contemporary technology. Every vendor eventually falls into the vicious circle of maintaining dinosaur technologies and supporting legacy decisions (the prime example being IBM's mainframes and SNA) ... until the product finally crumbles under the weight of unsupportable layers of face-lifting plaster.

As the long history of failed startups shows, it’s not enough to have a disruptive product. What matters more is the ability to execute: delivering high-quality product when and where expected, delivering all the supporting documentation (from whitepapers and design guides to product manuals), offering excellent technical support, and winning the hearts and minds of the users. Cisco UCS is doing pretty well because Cisco has mastered this process.

Last but definitely not least, let’s tackle the obvious T.rex in the room: Cisco IOS. Monolithic architecture designed 20 years ago and stretched way past its intended usage parameters. It’s been attacked for years with moderate success (example: Juniper), but we haven’t seen truly disruptive challenges yet (HP’s 3Com/H3C acquisition is just a different mousetrap competing with Cisco on price and Gartner praise). I’ve seen some interesting alternatives during the Net Field Day in September, but they either remain limited to a niche market (Force10 has excellent high-speed layer-3 switches) or lack the ability to execute (Arista still keeps its product documentation a highly-guarded secret). Too bad; having a disruptive competitor would put some much-needed excitement back into IOS development.


  1. I know it's been mentioned before, but the open-book approach that Cisco has taken with regard to its documentation is one of the keys to their success. I've always appreciated that, while they have plenty of marketing whitepapers and similar fluff, I can drill down to config manuals, tech notes, and design guides quite easily.
  2. It was a good podcast. It seemed like the regulars on Packet Pushers had trouble getting their heads around VC (no offence) for some reason. VC is essentially a very simplified and "dumbed down" switch. I wrote a short blog post recently comparing the functionality of VC to that of a basic D-Link switch, where I argue the D-Link is a more capable device.
  3. I've seen and loved your post; excellent comparison. For everyone else - here's the link:

  4. I actually like the VC concept. There is a very good document called "HP Virtual Connect for the Cisco Network Administrator".

    All it says is basically that VC is very much like a vSwitch. I have lots of vSwitches in my network and they give me zero problems. Actually, now the NOC doesn't have to bother configuring server ports. I wish all access-layer switches would behave like a vSwitch.

    Anyway, I think that HP FlexFabric (VC with 10G, vNIC and FCoE) is the best blade switch for HP. Passthrough for 4 blade chassis in a rack (256 NICs, with 4 NICs per server) is not realistic for me. FlexFabric also saves tons of money on mezzanine cards: I don't need to buy HBAs and extra Ethernet ports.

    Regarding FCoE, it's doing only single-hop FCoE, terminating it at the VC itself, which is a very nice start for an FCoE implementation.

    Anyway, I am about to test it next week. I hope it works as advertised... :) Then I'll test IBM's FCoE/vNIC blade switch solutions.

  5. I believe that Virtual Connect was HP's approach to implementing a simple-to-understand (high-bandwidth) network solution for the HP C7000 blade enclosure. You only have to take one look at the GUI to understand almost immediately that this was built for server administrators, not network engineers. It doesn't have all the bells and whistles because it doesn't really need them (at least not yet), and to date the product appears to work well and deliver. I can't disagree with Josh; when I first looked at the GUI I was very confused, and it took me several days of exploring and reading before I could begin to understand the product, because I was coming from a network engineer's mindset. Server administrators just want X, Y, Z VLANs dropped on A, B, C NICs (or ports); they don't care to know or want to know anything more about the network. It was funny because my first question was something like "where's the FDB/MAC table?" Have you ever seen the CLI interface... it's by far the ugliest interface you'll ever see or have to work with.

    I haven't ventured into the whole FCoE thing just yet... I'll wait for others to blaze the trail.

  6. Why do you compare VC with a network switch? In my opinion it's a virtualization layer between the servers and the network. It makes the management of the blade system much easier, and since you can virtualize MAC and WWN addresses/IDs, you're not bound to a specific component. You can even move servers from one chassis to another, and if you use SAN boot, you can start the machine anywhere you want, just like a virtual machine. Lots of benefits, even if VC is only a "dumbed down" switch. But the primary focus of VC is not switch functionality, it's virtualization!