Virtual network appliances: benefits and drawbacks
A while ago I decided to figure out how well various vendors support virtualized networking (one of the answers: some of the solutions don’t scale) and what can be done with virtual network appliances (I was pleasantly surprised by F5’s BIG-IP LTM VE and Vyatta). You’ll find some of my other thoughts on this subject in the Virtual network appliances: Benefits and drawbacks article published by SearchNetworking.
Spoke-to-Spoke IP Multicast over DMVPN?
A long-time reader has sent me an intriguing question: “would IP multicast work between DMVPN spokes?” In theory, the answer is “we could make it work”, but we all know theory and practice are not the same thing.
To make IP multicast work between DMVPN spokes, you’d need to configure multicast propagation between them with the ip nhrp map multicast remote-spoke-NBMA command. In a small DMVPN network where you need IP multicast only between a handful of spokes, that might even work; the trick obviously doesn’t scale, though: every spoke needs a static multicast map for the NBMA address of every other spoke it talks to, and each spoke has to replicate every multicast packet once per mapped neighbor.
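Here’s a minimal sketch of the trick on one spoke, assuming made-up NBMA addresses (192.0.2.1 for the hub, 198.51.100.7 for the remote spoke); the remote spoke would need the matching map pointing back:

    interface Tunnel0
     ip address 10.0.0.2 255.255.255.0
     ip nhrp map 10.0.0.1 192.0.2.1
     ip nhrp map multicast 192.0.2.1
     ! the non-scalable part: a static multicast map for every remote spoke
     ip nhrp map multicast 198.51.100.7
     ip nhrp nhs 10.0.0.1
     ip nhrp network-id 1
     ip pim sparse-mode
     tunnel source GigabitEthernet0/0
     tunnel mode gre multipoint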
OpenFlow FAQ: Will the Hype Ever Stop?
Network World published another masterpiece last week: FAQ: What is OpenFlow and why is it needed? Following the physics-changing promises made during the Open Networking Foundation launch, one would hope to get some straight facts; obviously things don’t work that way. Let’s walk through some of the points. While most of them might not be too incorrect from an oversimplified perspective, they over-hype a potentially useful technology way out of proportion.
NW: “OpenFlow is a programmable network protocol designed to manage and direct traffic among routers and switches from various vendors.” This one is just a tad misleading. OpenFlow is actually a protocol that allows a controller to download forwarding tables into one or more switches. Whether that manages or directs traffic depends on what the controller is programmed to do.
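To make the distinction tangible, here’s what a downloaded forwarding entry looks like when you hand-craft one with Open vSwitch’s ovs-ofctl tool (bridge name, port numbers and MAC address are made up; a real controller would push the equivalent flow-mod message over its OpenFlow session):

    # Install one forwarding entry into switch br0: frames arriving on
    # port 1 and destined to the specified MAC address go out port 2.
    ovs-ofctl add-flow br0 "priority=100,in_port=1,dl_dst=00:11:22:33:44:55,actions=output:2"

The switch blindly executes entries like this one; any traffic “management” intelligence lives in whatever program generates them.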
Distributed Firewalls: a Ticking Bomb
Have you ever been asked to use a layer-2 Data Center Interconnect to implement distributed active-active firewalls, supposedly solving all the L3 issues and the asymmetrical-traffic-flow-over-stateful-firewalls problem? Don’t be surprised; I was stupid enough (or maybe just blinded by the L2 glitter) to draw a diagram illustrating a sample use of VPLS services back in 2010.
Interesting links (2011-04-17)
Data Center
RFC 6165 documents the layer-2 related IS-IS extensions. No more excuses along the “TRILL standards are not ready” lines. Are you listening, Brocade and HP?
Data Center Feng Shui: Architecting for Predictable Performance. A nice introductory explanation of advantages of hardware-based forwarding.
When is a Fabric not a Fabric? Juniper continues the “who’s the smartest kid on the block” game. I thought we were all adults; stop the “bright future” promises and get the products out.
OSPF Route Selection Rules
OSPF implementation in Cisco IOS deviates slightly from the OSPF/NSSA standards (RFC 2328 and RFC 3101). These are the OSPF route selection rules as implemented by Cisco IOS release 12.2(33)SRE1 (all recent releases probably behave identically):

1. Intra-area routes are preferred over inter-area routes;
2. Inter-area routes are preferred over external (E) and NSSA external (N) routes;
3. Type-1 external routes (E1/N1) are preferred over type-2 external routes (E2/N2);
4. Within the same route type, the lower-cost route wins; per RFC 3101, a type-5 (E) route is preferred over an otherwise-equal type-7 (N) route.
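A related knob: IOS can be reverted to the older RFC 1587 NSSA route selection rules if the RFC 3101 behavior breaks an existing design. A sketch, assuming OSPF process 1 (command availability depends on the release):

    router ospf 1
     compatible rfc1587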
VPLS versus OTV for L2 Data Center Interconnect (DCI)
DJ Spry asked an interesting question in a comment to my MPLS/VPN in DCI designs post: “Why would one choose OTV over MPLS/VPN?” The answer is simple: it depends on what you need. MPLS/VPN provides path isolation between layer-3 domains (routed networks) across MPLS or IP infrastructure, whereas OTV provides layer-2 transport (and VLAN-based path isolation) across IP infrastructure. However, it does make sense to compare OTV with VPLS (which was DJ Spry’s next question). Apart from the obvious platform dependence (OTV runs on Nexus 7000, VPLS runs on Catalyst 6500/Cisco 7600 and a few other routers), which might disappear once ASR1K gets the rumored OTV support, there’s a huge gap in functionality and complexity between the two layer-2 transport technologies.
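To illustrate the complexity gap: this is approximately all it takes to extend a few VLANs with OTV on a Nexus 7000 (interface name, multicast groups and VLAN ranges are made up, and the multicast configuration of the transport network is conveniently ignored):

    feature otv
    otv site-vlan 99

    interface Overlay1
     otv join-interface Ethernet1/1
     otv control-group 239.1.1.1
     otv data-group 232.1.1.0/28
     otv extend-vlan 100-110
     no shutdown

A comparable VPLS deployment needs an IGP and LDP in the MPLS core plus a full mesh of pseudowires (or BGP-based autodiscovery) before the first VLAN gets extended.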
MPLS/VPN in Data Center Interconnect (DCI) Designs
Yesterday I was describing a dreamland in which hypervisor switches would use MPLS/VPN to implement seamless scalable VM mobility across IP+MPLS infrastructure. Today I’ll try to get back down to earth; there are exciting real-life designs using MPLS/VPN between data centers. You can implement them with Catalyst 6500/Cisco 7600 or ASR1K, and will soon be able to do the same with Nexus 7000.
Most data centers have numerous security zones, from the external network, DMZ, web servers and application servers to database servers, IP-based storage and network management. When you design active/active data centers, you want to keep the security zones strictly separate, and the “usual” solution proposed by the L2-crazed crowd is to bridge multiple VLANs across the DCI infrastructure (in the next microsecond they start describing the beauties of their favorite L2 DCI technology).
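The MPLS/VPN alternative keeps each security zone in its own VRF and lets MP-BGP carry the per-zone routes between the data centers. A minimal sketch of the idea on an IOS-based DCI router (VRF names, RDs, route targets and addresses are all made up):

    ip vrf DMZ
     rd 65000:10
     route-target both 65000:10

    ip vrf Database
     rd 65000:20
     route-target both 65000:20

    interface GigabitEthernet0/1.10
     encapsulation dot1Q 10
     ip vrf forwarding DMZ
     ip address 192.0.2.1 255.255.255.0

Because each zone lives in its own VPN, inter-zone traffic can only flow through the paths (read: firewalls) you explicitly configure.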
(v)Cloud Architects, ever heard of MPLS?
Duncan Epping, the author of the fantastic Yellow Bricks virtualization blog, tweeted a very valid question following my vCDNI scalability (or lack thereof) post: “What you feel would be suitable alternative to vCD-NI as clearly VLANs will not scale well?”
Let’s start with the very basics: modern data center switches support anywhere between 1K and 4K VLANs (the 12-bit VLAN ID in the 802.1Q tag yields 4096 values, two of which are reserved, for a theoretical maximum of 4094 usable VLANs). If you need more than 1K VLANs, you’ve either totally messed up your network design or you’re a service provider offering multi-tenant services (recently relabeled by your marketing department as IaaS cloud). Service providers have been coping with multi-tenant networks for decades ... only nobody called them multi-tenant networks; they were called VPNs. Maybe, just maybe, there’s a technology out there that’s been field-proven, known to scale, and works over switched Ethernet.
vCloud Director Network Isolation (vCDNI) scalability
When VMware launched its vCloud Director Networking Infrastructure, Greg Ferro (of Packet Pushers Podcast fame) and I were very skeptical about its scaling capabilities, even more so because it uses MAC-in-MAC encapsulation, and bridging was never known for its scaling properties. However, Greg’s VMware contacts claimed that vCDNI scales to thousands of physical servers, and Greg wanted to do a podcast about it.
As always, we prepared a few questions in advance, including “How are broadcasts, multicasts and unknown unicasts handled in vCDNI-based private networks?” and “What happens when a single VM goes crazy?” For one reason or another, the podcast never happened. After analyzing Wireshark traces of vCDNI traffic, I probably know why.