Blog Posts in April 2011
OpenFlow 1.1 in hardware: I was wrong (again)
Earlier this month I wrote “we’ll probably have to wait at least a few years before we’ll see a full-blown hardware product implementing OpenFlow 1.1” (and probably repeated something along the same lines during the OpenFlow Packet Pushers podcast). I was wrong (and I won’t split hairs and claim that an academic proof-of-concept doesn’t count). Here it is: @nbk1 pointed me to a 100 Gbps switch implementing the latest-and-greatest OpenFlow 1.1.
DMVPN Spoke NHRP Behavior Changed in IOS Release 15.0M
In the good old days, we (thought we) knew how Phase 2 DMVPN works and what happens when a spoke-to-spoke session cannot be established. As I discovered while developing the lab configurations for the DMVPN: New Features in IOS Release 15 webinar, that behavior changed forever (and not for the better) somewhere in the 12.4T (or 15.0M) release train. I blame the introduction of NAT awareness in IOS release 12.4(15)T, but it could be a totally unrelated change.
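For reference, a minimal Phase 2 spoke tunnel configuration looks something along these lines (addresses, tunnel key and interface names are made up for illustration); the multipoint GRE tunnel mode on the spokes is what makes direct spoke-to-spoke traffic possible:

    interface Tunnel0
     ip address 10.0.0.11 255.255.255.0
     ! Static mapping of the hub's tunnel address to its public (NBMA) address
     ip nhrp map 10.0.0.1 192.0.2.1
     ip nhrp map multicast 192.0.2.1
     ip nhrp network-id 1
     ip nhrp nhs 10.0.0.1
     tunnel source GigabitEthernet0/0
     tunnel mode gre multipoint
     tunnel key 1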
New Data Center switches from Force10
Force10 has just launched a new series of data center switches. The ZettaScale switches are, as one would expect from Force10, down-to-earth high-performance low-footprint products – a good option for network engineers who like building high-density high-performance data centers with minimal feature overload.
All the information in this post is based on the briefing I received from Force10 last week, the draft materials they sent me, and the subsequent answers to my questions. I haven’t been able to touch the boxes or read the product documentation yet.
Virtual network appliances: benefits and drawbacks
A while ago I decided to figure out how well various vendors support virtualized networking (one of the answers: some of the solutions don’t scale) and what can be done with virtual network appliances (I was pleasantly surprised by F5’s BIG-IP LTM VE and Vyatta). You’ll find some of my other thoughts on this subject in the Virtual network appliances: Benefits and drawbacks article published by SearchNetworking.
Spoke-to-Spoke IP Multicast over DMVPN?
A long-time reader has sent me an intriguing question: “would IP multicast work between DMVPN spokes?” In theory, the answer is “we could make it work”, but we all know theory and practice are not the same thing.
To make IP multicast work between DMVPN spokes, you’d have to configure multicast propagation between them with the ip nhrp map multicast remote-spoke-NBMA command. In a small DMVPN network where you need IP multicast only between a handful of spokes, that might even work; obviously this trick does not scale, for a number of reasons.
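Purely as an illustration (the spoke NBMA addresses are hypothetical, and I’m assuming PIM sparse mode runs over the tunnel), a spoke would need a static multicast mapping for every other spoke it wants to exchange IP multicast with, on top of the usual mapping pointing to the hub:

    interface Tunnel0
     ! Usual multicast mapping toward the hub
     ip nhrp map multicast 192.0.2.1
     ! One extra mapping per remote spoke (hypothetical NBMA addresses)
     ip nhrp map multicast 192.0.2.11
     ip nhrp map multicast 192.0.2.12
     ip pim sparse-mode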
OpenFlow FAQ: Will the Hype Ever Stop?
Network World published another masterpiece last week: FAQ: What is OpenFlow and why is it needed? Following the physics-changing promises made during the Open Networking Foundation launch, one would hope to get some straight facts; obviously things don’t work that way. Let’s walk through some of the points. While most of them might not be too incorrect from an oversimplified perspective, they do over-hype a potentially useful technology way out of proportion.
NW: “OpenFlow is a programmable network protocol designed to manage and direct traffic among routers and switches from various vendors.” This one is just a tad misleading. OpenFlow is actually a protocol that allows a controller to download forwarding tables into one or more switches. Whether that manages or directs traffic depends on what the controller is programmed to do.
Distributed Firewalls: a Ticking Bomb
Are you ever asked to use a layer-2 Data Center Interconnect to implement distributed active-active firewalls, supposedly solving all the L3 issues and the asymmetrical-traffic-flow-over-stateful-firewalls problems? Don’t be surprised; I was stupid enough (or maybe just blinded by the L2 glitter) in 2010 to draw a diagram illustrating a sample use of VPLS services.
Interesting links (2011-04-17)
Data Center
RFC 6165 documents the layer-2 related IS-IS extensions. No more excuses along the “TRILL standards are not ready” lines. Are you listening, Brocade and HP?
Data Center Feng Shui: Architecting for Predictable Performance. A nice introductory explanation of the advantages of hardware-based forwarding.
When is a Fabric not a Fabric? Juniper continues the “who’s the smartest kid on the block” game. I thought we were all adults; stop the “bright future” promises and get the products out.
OSPF Route Selection Rules
The OSPF implementation in Cisco IOS deviates slightly from the OSPF/NSSA standards (RFC 2328 and RFC 3101). These are the OSPF route selection rules as implemented by Cisco IOS release 12.2(33)SRE1 (all recent releases probably behave identically).
VPLS versus OTV for L2 Data Center Interconnect (DCI)
DJ Spry asked an interesting question in a comment to my MPLS/VPN in DCI designs post: “Why would one choose OTV over MPLS/VPN?” The answer is simple: it depends on what you need. MPLS/VPN provides path isolation between layer-3 domains (routed networks) across MPLS or IP infrastructure, whereas OTV provides layer-2 transport (and VLAN-based path isolation) across IP infrastructure. However, it does make sense to compare OTV with VPLS (which was DJ Spry’s next question). Apart from the obvious platform dependence (OTV runs on the Nexus 7000, VPLS runs on the Catalyst 6500/Cisco 7600 and a few other routers), which might disappear once the ASR1K gets the rumored OTV support, there’s a huge gap in functionality and complexity between the two layer-2 transport technologies.
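To illustrate the complexity gap: a rough sketch of a multicast-based OTV configuration on a Nexus 7000 (interface names, multicast groups and VLAN ranges are made up) fits in a dozen lines, while a comparable VPLS deployment needs a full MPLS control plane and pseudowire signaling underneath it:

    feature otv
    !
    otv site-vlan 999
    !
    interface Overlay1
     otv join-interface Ethernet1/1
     otv control-group 239.1.1.1
     otv data-group 232.1.1.0/28
     otv extend-vlan 100-110
     no shutdown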
MPLS/VPN in Data Center Interconnect (DCI) Designs
Yesterday I was describing a dreamland in which hypervisor switches would use MPLS/VPN to implement seamless scalable VM mobility across IP+MPLS infrastructure. Today I’ll try to get back down to earth; there are exciting real-life designs using MPLS/VPN between data centers. You can implement them with the Catalyst 6500/Cisco 7600 or ASR1K, and you’ll soon be able to do the same with the Nexus 7000.
Most data centers have numerous security zones, from the external network, DMZ, web servers and application servers to database servers, IP-based storage and network management. When you design active/active data centers, you want to keep the security zones strictly separate, and the “usual” solution proposed by the L2-crazed crowd is to bridge multiple VLANs across the DCI infrastructure (in the next microsecond they start describing the beauties of their favorite L2 DCI technology).
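The MPLS/VPN alternative keeps every security zone in its own VRF on the DCI edge routers, giving you per-zone path isolation without any bridging. A minimal sketch (VRF names, route targets and addresses are made up):

    ip vrf DMZ
     rd 65000:10
     route-target export 65000:10
     route-target import 65000:10
    !
    interface GigabitEthernet0/1.10
     encapsulation dot1Q 10
     ip vrf forwarding DMZ
     ip address 10.10.0.1 255.255.255.0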
(v)Cloud Architects, ever heard of MPLS?
Duncan Epping, the author of the fantastic Yellow Bricks virtualization blog, tweeted a very valid question following my vCDNI scalability (or lack thereof) post: “What you feel would be suitable alternative to vCD-NI as clearly VLANs will not scale well?”
Let’s start with the very basics: modern data center switches support anywhere between 1K and 4K VLANs (the theoretical limit of 4096 comes from the 12-bit VLAN ID in the 802.1Q tag). If you need more than 1K VLANs, you’ve either totally messed up your network design or you’re a service provider offering multi-tenant services (recently relabeled by your marketing department as IaaS cloud). Service providers have had to cope with multi-tenant networks for decades ... only they didn’t realize those were multi-tenant networks and simply called them VPNs. Maybe, just maybe, there’s a technology out there that’s been field-proven, known to scale, and works over switched Ethernet.
vCloud Director Network Isolation (vCDNI) scalability
When VMware launched its vCloud Director Networking Infrastructure, Greg Ferro (of Packet Pushers Podcast fame) and I were very skeptical about its scaling capabilities, all the more so as it uses MAC-in-MAC encapsulation, and bridging was never known for its scaling properties. However, Greg’s VMware contacts claimed that vCDNI scales to thousands of physical servers, and Greg wanted to do a podcast about it.
As always, we prepared a few questions in advance, including “How are broadcasts, multicasts and unknown unicasts handled in vCDNI-based private networks?” and “what happens when a single VM goes crazy?” For one reason or another, the podcast never happened. After analyzing Wireshark traces of vCDNI traffic, I probably know why that’s the case.
OpenFlow: BIOS Does Not a Server Make
Last week Greg (@etherealmind) Ferro invited me to the OpenFlow Packet Pushers podcast with Matt Davey. I was pleasantly surprised by Matt’s realistic attitude (you should really listen to the whole podcast); it was nice to hear that they’re running a country-wide pilot with OpenFlow-enabled switches deployed at several universities, and some of the applications he mentioned (for example, the capability to download ACLs into the switch from your customized application) definitely tickled my inner geek. However, I’m even more convinced that the brouhaha surrounding the Open Networking Foundation has little basis in the realities of OpenFlow.
DMVPN: How to Get from Zero to Hero?
John (not a real name for obvious reasons) sent me the following e-mail:
I am a Sys Admin who has recently assumed duties as a Net Eng. I am currently expected to perform responsibilities utilizing DMVPN with Cisco routers though I have never worked with DMVPN and have very little router experience. I started with your DMVPN webinar and it has been extremely helpful, but there’s still a huge gap between what I know so far and what I need to know to work with DMVPN.
In a few days I will deploy to Afghanistan to start work for a customer and I was hoping you might be able to give me some advice on the matter, perhaps some how-to documents or good books to purchase that will assist in the huge learning curve.
What is OpenFlow?
A typical networking device (bridge, router, switch, LSR …) has a control plane and a data plane. The control plane runs all the control protocols (including port aggregation, STP, TRILL, MAC address learning and routing protocols) and downloads the forwarding instructions into the data plane structures, which can be simple lookup tables or specialized hardware (hash tables or TCAMs).
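You can see the same split on any IOS-based router: routing protocols (control plane) build the RIB, and the forwarding information is downloaded into the CEF structures used by the data plane. A quick way to inspect both for a (made-up) prefix:

    ! Control-plane view: the RIB entry built by a routing protocol
    show ip route 192.0.2.0
    ! Data-plane view: the same prefix as installed in the CEF forwarding table
    show ip cef 192.0.2.0 detail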
Brocade VCS fabric has almost-perfect load balancing
Short summary for the differently attentive: the proprietary load balancing Brocade uses over ISL trunks in the VCS fabric is almost perfect (and way better for high-throughput sessions than what you get with other link aggregation methods).
During the Data Center Fabrics Packet Pushers podcast we discussed load balancing across aggregated inter-switch links and Brocade’s claims that its “chip-based balancing” performs better than standard link aggregation group (LAG) load balancing. Ever skeptical, I said that all LAG load balancing is chip-based (every vendor does high-speed switching in hardware). I also added that I would be mightily impressed if they had actually solved intra-flow packet scheduling.
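For comparison, standard LAG load balancing hashes a few packet header fields into a member-link index, so a single high-throughput flow is always pinned to one physical link; on a Catalyst 6500 the only knob you get is the choice of hash input (a hedged sketch, interface numbers made up):

    ! Global hash-input selection used by all port channels
    port-channel load-balance src-dst-ip
    !
    interface range TenGigabitEthernet1/1 - 2
     switchport
     switchport mode trunk
     channel-group 1 mode active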
Interesting links (2011-04-03)
General networking
Protecting the router control plane (RFC 6192): among other goodies, this document has a high-level description of high-speed routers (sometimes known as layer-3 switches).
Is the network administrator role going away? I’ve heard the “something is going away” prediction too often in the last 20 years. We just end up doing other (more complex) things.
8 hints for using DNS more effectively – another great post from The Lone Sysadmin.
Cisco and Brocade working together on interoperable TRILL products
Reading the blogosphere catfights erupting at the time Brocade announced their VCS fabric, one could never imagine networking vendors pushing toward interoperable implementations of their products. The recent announcements from Brocade and Cisco look way more promising: Brocade will implement standard TRILL with IS-IS and Cisco will include FSPF as an alternate FabricPath routing protocol in NX-OS to ensure interoperability with VCS fabric.
Even better, as a follow-up to the QFabric project, Cisco and Juniper are working together on a version of ICCP that will solve multi-chassis link aggregation problems in a standardized way. Long-term, we can expect to have a legacy low-cost access switch connected with a single LAG bundle to a Nexus 5000 and a QFX3500, with both of them exchanging data over FSPF-enabled TRILL with a VDX switch from Brocade.