Blog Posts in January 2011

Open FCoE – Software implementation of the camel jetpack

Intel announced its Open FCoE (a software implementation of the FCoE stack on top of Intel’s 10GbE adapters) with the usual cloudy bullshit bingo: Simplifying the Data Center, Free New Technology, Cloud Vision and Green Computing (OK, they used Environmental Impact), plus lots of positive supporting quotes. The only thing missing was an enthusiastic Gartner quote (or maybe they were too expensive?).

read more see 1 comment

Interesting links (2011-01-30)

Links to interesting content have yet again started gathering dust in my Inbox. Time for a cleanup action. Technical content first:

Cisco Pushing More vNetwork into Hardware. A pretty good description of the impact of Nexus 1000V and VN-Link on virtualized network security.

Convergence Delays: SVI vs Routed Interface. Another great article by Stretch. I never realized carrier-delay could be that harmful. The moral of the story is also important: test and verify the device behavior, don’t trust PPT slides (one day I’ll share with you how I learned that lesson the hard way).

RFC 6092 - Recommended Simple Security Capabilities in Customer Premises Equipment (CPE) for Providing Residential IPv6 Internet Service. A fantastic document – now we can only hope that every magazine evaluating consumer IPv6-ready CPEs starts using it as a benchmark (and that the IPv6 Ready guys pick it up).

read more see 7 comments

Stop accidental scheduled router reloads

Alexandra Stanovska wrote an excellent comment on my Schedule reload before configuring the router post:

It may come in handy creating some form of script that would display some basic [information] upon logout – show debug, show reload etc.

The new capabilities of the CLI event detector introduced in EEM 3.0 allow us to catch CLI commands in a particular parser mode. Writing an EEM applet that catches exec-mode exit or logout and performs a few checks is thus a trivial task.
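
Something along these lines should do the trick (a minimal, untested sketch; the applet name, the regular expression and the show commands are mine, and the exact CLI event detector and action options depend on your EEM version):

  event manager applet ShowBeforeLogout
   event cli pattern "^(exit|logout)$" sync yes
   action 1.0 cli command "enable"
   action 2.0 cli command "show reload"
   action 3.0 puts "$_cli_result"
   action 4.0 cli command "show debugging"
   action 5.0 puts "$_cli_result"
   ! let the original exit/logout command proceed
   action 6.0 set _exit_status "1"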

read more see 1 comment

VMware Cluster: Up and Running in Three Hours

A few days ago I wanted to test some of the new networking features VMware introduced with the vShield product family. I almost started hacking together a few old servers (knowing I would have wasted countless hours with utmost stupidities like trying to get the DVDs to boot), but then realized that we already have the exact equipment I need: a UCS system with two Fabric Interconnects and a chassis with five blade servers – the lab for our Data Center training classes (the same lab has a few Nexus switches, but that’s another story).

I managed to book lab access for a few days, which was all I needed. Next step: get a VMware cluster installed on it. As I had never touched the UCS system before, I asked Dejan Strmljan (one of our UCS gurus) to help me.

read more see 2 comments

vSwitch in Multi-chassis Link Aggregation (MLAG) environment

Yesterday I described how the lack of LACP support in VMware’s vSwitch and vDS can limit the load balancing options offered by the upstream switches. The situation gets totally out of hand when you connect an ESX server with two uplinks to two (or more) switches that are part of a Multi-chassis Link Aggregation (MLAG) cluster.

Let’s expand the small network described in the previous post a bit, adding a second ESX server and another switch. Both ESX servers are connected to both switches (resulting in a fully redundant design) and the switches have been configured as an MLAG cluster. Link aggregation is not used between the physical switches and the ESX servers due to the lack of LACP support in ESX.

read more see 12 comments

VMware vSwitch does not support LACP

This is very old news to any seasoned system or network administrator dealing with VMware/vSphere: the vSwitch and vNetwork Distributed Switch (vDS) do not support Link Aggregation Control Protocol (LACP). Multiple uplinks from the same physical server cannot be bundled into a Link Aggregation Group (LAG, also known as port channel) unless you configure a static port channel on the adjacent switch’s ports.
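
For illustration, a static port channel toward an ESX host could look like this on a Cisco switch (a hedged sketch: the interface names and channel-group number are made up, and you also have to set the vSwitch load balancing policy to Route based on IP hash, or you’ll get MAC address flapping and dropped traffic):

  interface GigabitEthernet1/0/1
   description ESX host - vmnic0
   switchport mode trunk
   ! "mode on" = static bundle, no LACP or PAgP negotiation
   channel-group 10 mode on
  !
  interface GigabitEthernet1/0/2
   description ESX host - vmnic1
   switchport mode trunk
   channel-group 10 mode on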

When you use the default (per-VM) load balancing mechanism offered by vSwitch, the drawbacks caused by lack of LACP support are usually negligible, so most engineers are not even aware of what’s (not) going on behind the scenes.

read more see 22 comments

Intelligent Resilient Framework (IRF) – Stacking as Usual

When I listened to HP’s Intelligent Resilient Framework (IRF) presentation during Tech Field Day 2010 and read the HP/H3C IRF 2.0 whitepaper afterwards, IRF looked like a technology sent straight from Data Center heaven: you could build a single unified fabric with optimal L2 and L3 forwarding that spans the whole data center (I was somewhat skeptical about their multi-DC vision) and behaves like a single managed entity.

No wonder I started drawing the following highly optimistic diagram when creating materials for the Data Center 3.0 webinar, which includes information on Multi-Chassis Link Aggregation (MLAG) technologies from numerous vendors.

read more see 15 comments

Interesting links (2011-01-23)

Interesting links keep appearing in my Inbox. Thank you, my Twitter friends and all the great bloggers out there! Here’s this week’s collection:

Cores and more Cores… We don’t need them! More is not always better.

Emulating WANs with WANem. I had wanted to blog about WANem for at least three years; now I don’t have to – as always, Stretch did an excellent job.

New Year's Resolutions for Geeks Like Me. The ones I like most: Deploy a pure IPv6 subnet somewhere; Put something in public cloud. And there’s “find a mentor” ... more about that in a few months ;)

read more add comment

VPN Network Design: Selecting the Technology

After all the DMVPN-related posts I’ve published in the last few days, we’re ready for the OSPF-over-DMVPN design challenge, but let’s take a few more steps back and start where every design project should start: deriving the technical requirements and the WAN network design from the business needs.

Do I Need a VPN?

Whenever considering this question, you’re faced with a buy-or-build dilemma. You could buy MPLS/VPN (or VPLS) service from a Service Provider or get your sites hooked up to the Internet and build a VPN across it. In most cases, the decision is cost-driven, but don’t forget to consider the hidden costs: increased configuration and troubleshooting complexity, lack of QoS over the Internet and increased exposure of Internet-connected routers.

read more add comment

Configuring OSPF in a Phase 2 DMVPN network

Cliffs Notes version (details in the DMVPN webinar):

  • Configure ip nhrp map multicast with the hub’s NBMA (transport) address on the spoke routers. Otherwise, the spokes will not send OSPF hellos to the hub.
  • Use a dynamic NHRP multicast map (ip nhrp map multicast dynamic) on the hub router, or the spokes will not receive its OSPF hellos.
  • Use the broadcast network type on all routers.
You could use the non-broadcast network type and configure the neighbors manually, but that would destroy the scalability of the solution. If you use the point-to-multipoint network type, all the traffic will flow through the hub router.
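
A minimal sketch of the relevant hub and spoke tunnel configuration (addresses, interface names and the NHRP network ID are made up; IPsec protection and other tunnel parameters are omitted; setting the OSPF priority to zero on the spokes, so the hub is the only DR candidate, is standard practice rather than something stated in this post):

  Hub router:

  interface Tunnel0
   ip address 10.0.0.1 255.255.255.0
   ip nhrp map multicast dynamic
   ip nhrp network-id 1
   ip ospf network broadcast
   ip ospf priority 100
   tunnel source GigabitEthernet0/0
   tunnel mode gre multipoint

  Spoke router:

  interface Tunnel0
   ip address 10.0.0.2 255.255.255.0
   ip nhrp map 10.0.0.1 192.0.2.1
   ip nhrp map multicast 192.0.2.1
   ip nhrp nhs 10.0.0.1
   ip nhrp network-id 1
   ip ospf network broadcast
   ip ospf priority 0
   tunnel source GigabitEthernet0/1
   tunnel mode gre multipoint
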
read more add comment

DMVPN Phase 2 Fundamentals

Phase 2 DMVPN in a nutshell:

  • Multipoint GRE tunnels on all routers.
  • NHRP is used for dynamic spoke registrations (like with Phase 1 DMVPN), but also for on-demand resolution of spoke transport addresses.
  • Traffic between the spokes initially flows through the hub router until NHRP resolves the remote spoke’s transport IP address and IKE establishes the IPsec session with it.
  • The IP next-hop address for any prefix reachable over DMVPN must be the egress router (hub or spoke). From the routing perspective, the Phase 2 DMVPN subnet should behave like a LAN (see the example after this list).
  • Multicast packets (including routing protocol hello packets and routing updates) are exchanged only between the hub and the spoke routers.
  • Routing adjacencies are established only between the hub and the spoke routers unless you use statically configured neighbors.
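
To illustrate the next-hop rule with a protocol other than OSPF (an example of mine, not from the original post): if you run EIGRP over a Phase 2 DMVPN tunnel, the hub has to re-advertise spoke routes while keeping the originating spoke as the IP next hop, which boils down to two interface commands on the hub tunnel (the AS number and addressing are made up):

  router eigrp 1
   network 10.0.0.0 0.0.0.255
  !
  interface Tunnel0
   ! re-advertise spoke prefixes to the other spokes
   no ip split-horizon eigrp 1
   ! keep the originating spoke as the IP next hop
   no ip next-hop-self eigrp 1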

For more details, watch the DMVPN webinar.

add comment

OSPF Configuration in Phase 1 DMVPN Network

This is how you configure OSPF in a Phase 1 DMVPN network (read the introductory post and Phase 1 DMVPN fundamentals first).

Remember:

  • Use the point-to-multipoint network type on the hub router to ensure the hub router is always the IP next hop for the DMVPN routes.
  • Use the point-to-multipoint network type on the spoke routers to ensure their OSPF timers match those of the hub router.
  • The DMVPN part of your network should be a separate OSPF area; if at all possible, make it a stub or NSSA area.
  • If absolutely necessary, use an OSPF LSA flood filter on the hub router and static default routes on the spokes (see the configuration sketch after this list).
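
A minimal sketch of the OSPF-specific configuration, on top of the usual Phase 1 tunnel/NHRP setup (not shown here); the area number, addressing and the hub’s tunnel IP address are made up:

  Hub router:

  interface Tunnel0
   ip ospf network point-to-multipoint
   ! last resort: stop LSA flooding toward the spokes
   ! ip ospf database-filter all out
  !
  router ospf 1
   network 10.0.0.0 0.0.0.255 area 51
   area 51 stub

  Spoke routers:

  interface Tunnel0
   ip ospf network point-to-multipoint
  !
  router ospf 1
   network 10.0.0.0 0.0.0.255 area 51
   area 51 stub
  !
  ! needed only if you filter LSA flooding on the hub
  ip route 0.0.0.0 0.0.0.0 10.0.0.1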

For more information, watch the DMVPN Technology and Configuration webinar.

see 4 comments

DMVPN Phase 1 Fundamentals

Phase 1 DMVPN in a nutshell:

  • Point-to-point GRE tunnels on the spoke routers.
  • Multipoint GRE tunnel on the hub router (a minimal tunnel sketch follows the list).
  • All the DMVPN traffic (including the traffic between the spokes) flows through the hub router.
  • On the spoke routers, the hub router must be the IP next-hop for all destinations reachable through the DMVPN subnet (including other spokes).
  • Multicast packets (including routing protocol hello packets and routing updates) are exchanged only between the hub and the spoke routers.
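
A minimal Phase 1 tunnel sketch (addresses and interface names are made up; NHRP authentication, tunnel keys and IPsec protection are omitted):

  Hub router (multipoint GRE):

  interface Tunnel0
   ip address 10.0.0.1 255.255.255.0
   ip nhrp map multicast dynamic
   ip nhrp network-id 1
   tunnel source GigabitEthernet0/0
   tunnel mode gre multipoint

  Spoke router (point-to-point GRE, still registering with the hub through NHRP):

  interface Tunnel0
   ip address 10.0.0.2 255.255.255.0
   ip nhrp map 10.0.0.1 192.0.2.1
   ip nhrp nhs 10.0.0.1
   ip nhrp network-id 1
   tunnel source GigabitEthernet0/1
   tunnel destination 192.0.2.1
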
read more see 1 comment

Sometimes You Need to Step Back and Change Your Design

A few days ago I received the following e-mail from one of my readers:

I am trying presently to put in place a DMVPN solution running OSPF. I was wondering if you ever saw a solution with dual hub dual cloud design with OSPF working in practice because since I started I have issue with asymmetric routing because of the OSPF functionality.

Actually, I did… and exactly the same setup is included in the tested router configurations you get with the DMVPN: from Basics to Scalable Networks webinar. While there are many things that can go wrong with DMVPN, I’ve never heard about asymmetric routing problems, so I started to investigate what’s actually going on.

read more see 5 comments

MPLS/VPN over mGRE strikes again

More than five years after the MPLS/VPN-in-mGRE encapsulation was standardized (add a few more years for the work-in-progress and IETF draft stages), it finally debuted in a mainstream-wannabe IOS release running on ISR routers (15.1(2)T), making it usable for enterprise WAN designers, who are probably its best target audience.

I wrote about the two conflicting MPLS/VPN over mGRE implementations a while ago and got the impression that Service Providers aren’t too excited about this option. No wonder – most of them use full-blown MPLS backbones, so they have no need for GRE tunnels.

read more see 5 comments

Interesting links (2011-01-09)

Jedi Mind Tricks: HTTP Request Smuggling – an intriguing HTTP vulnerability and the countermeasure using ... what else ... F5.

Flailing IPv6 – up to 13% of IPv6 connections fail, mostly due to broken tunnels. Stop tunneling!

Cisco UCS criticism and FUD: Answered – another great article by @bradhedlund. Assuming he’s not making it up, some competitors must be really desperate.

Understanding Inter-Area Loop Prevention Caveats in OSPF Protocol – a masterpiece by @plapukhov. I thought I knew almost everything there is to know about OSPF. Boy was I wrong.

read more add comment

Campfire: the true story of MPLS

Just before 2010 disappeared, a tweet by my friend Greg @etherealmind Ferro triggered a minor twitstorm. He wrote:

If we had implemented IPv6 ten years ago, would we have MPLS today? I think not.

His tweet contains two major misconceptions:

  • MPLS was designed to implement layer-3 VPN services;
  • We wouldn’t need VPNs if everyone used global IPv6 addresses.

I’ll focus on the first one today; the inaccuracy of the second one is obvious to anyone who has been asked to implement MPLS VPNs in enterprise networks to ensure end-to-end path separation between departments or users with different security levels.

read more see 5 comments

Interesting links (2011-01-02)

New Year’s Resolution #1: I shall clean my Inbox on a weekly basis. Here are the links that started gathering dust during the last week:

add comment