Last batch of interesting links collected in 2011 ... starting with Pricing and Trading Networks: Down is Up, Left is Right: A fantastic post from Chris Marget explaining the behavior of trading floor networks. For me, it was an amazing view into a parallel universe with a completely different set of physical laws.
2011 was a fantastic year for a networking geek, and you were awesome – helping me figure out the intricacies of new technologies, fixing my errors, and asking so many great questions that prompted me to dive deeper into the rabbit holes. I owe you a huge Thank you!
I hope you’ll be able to shut down your smartphones and pagers in the next few days and spend a few relaxing moments with your families … and I wish you great networking in 2012!
To keep the geeky spirit: snow angel as seen by Hubble
Recent Cisco IOS releases have significant improvements in DHCPv6 functionality and other IPv6 access network features. These improvements, as well as additional access network methods (including 6rd), will be described in the Building IPv6 Access Networks webinar on January 25th (register).
After I published the Decouple virtual networking from the physical world article, @paulgear1 sent me a very valid tweet: “You seemed a little short on suggestions about the path forward. What should customers do right now?” Apart from the obvious “it depends”, these are the typical use cases (as I understand them today – please feel free to correct me).
15 years after NAT was invented, I’m still getting questions along the lines of “is NAT a security feature?” Short answer: NO!
Longer answer: NAT has side effects that resemble the security mechanisms commonly used at the network edge. That does NOT make it a security feature, all the more so because there are so many variants of NAT.
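To illustrate the side effect people mistake for security: a stateful NAPT device drops unsolicited inbound packets simply because it has no translation entry for them. The toy model below (all names invented for illustration, not any real implementation) shows that behavior without implying an actual security policy:

```python
# Hedged sketch: a toy NAPT translation table. Inbound packets are dropped
# unless an outbound flow created a mapping first -- a side effect of
# address translation, not a deliberate firewall rule.

class ToyNapt:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 20000
        self.table = {}      # public_port -> (inside_ip, inside_port)
        self.reverse = {}    # (inside_ip, inside_port) -> public_port

    def outbound(self, src_ip, src_port):
        """Translate an outbound packet, creating a mapping on first use."""
        key = (src_ip, src_port)
        if key not in self.reverse:
            self.reverse[key] = self.next_port
            self.table[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.reverse[key])

    def inbound(self, dst_port):
        """Unsolicited inbound traffic has no mapping and is dropped
        (returns None) -- which merely *looks* like a firewall."""
        return self.table.get(dst_port)
```

Note that the "protection" evaporates as soon as any mechanism (ALGs, UPnP, hairpinning quirks) creates mappings on the host's behalf, which is exactly why it's not a security feature.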
I’m planning a series of shorter (~ 1 hour) update-type webinars in 2012. Some of them will cover new features and technologies that have been introduced since the time I last updated some of the most popular webinars (Data Center, VMware networking), others will focus on emerging technologies.
I would appreciate it if you could help me plan them by taking a short survey, telling me which of the topics I identified are most important to you, and adding your favorite topics to the list. The survey won’t take more than a few minutes of your time.

The coolest tool of the week: mxtommy/Cisco-SSH-Client. Thomas St. Pierre did a fantastic job: he modified the SSH client to colorize the printouts generated by Cisco routers (similar to what VIM does with source code). Download his SSH client patches, recompile your SSH client, and enjoy!
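The underlying idea is simple enough to sketch: match interesting patterns in the router's output and wrap them in ANSI color codes. This is not Thomas's actual patch, just a hypothetical Python illustration of the technique (the patterns and colors are invented):

```python
import re

# Hedged sketch of output colorizing: each rule pairs a regex matching
# something interesting in a "show" printout with an ANSI color code.
RULES = [
    (re.compile(r'\b(up|active)\b'), '\033[32m'),              # green
    (re.compile(r'\b(down|err-disabled)\b'), '\033[31m'),      # red
    (re.compile(r'\b\d{1,3}(?:\.\d{1,3}){3}\b'), '\033[36m'),  # cyan: IPv4
]
RESET = '\033[0m'

def colorize(line):
    """Wrap every match of every rule in its color code."""
    for pattern, color in RULES:
        line = pattern.sub(lambda m: color + m.group(0) + RESET, line)
    return line
```

Pipe `show interface` output through something like this and the up/down states jump out immediately, which is the whole point of the tool.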
And here are the other links accumulated in my Inbox, this time in somewhat more structured format ... and a (hopefully interesting) surprise at the end.
After a week of oversized articles, I’ll try to keep this one short. This is a true story someone recently shared with me (for obvious reasons I can’t tell you where it happened ... and no, I’m not making it up). Enjoy!
A few days ago I had the privilege of being part of a VXLAN-related tweetfest with @bradhedlund, @scott_lowe, @cloudtoad, @JuanLage, @trumanboyes (and probably a few others) and decided to write a blog post explaining the problems VXLAN faces due to its lack of a control plane, how it uses IP multicast to work around that shortcoming, and how OpenFlow could be used in an alternate architecture to solve those same problems.
Anyone serious about high-availability connects servers to the network with more than one uplink, more so when using converged network adapters (CNA) with FCoE. Losing all server connectivity after a single link failure simply doesn’t make sense.
If at all possible, you should use dynamic link aggregation with LACP to bundle the parallel server-to-switch links into a single aggregated link (also called bonded interface in Linux). In theory, it should be simple to combine FCoE with LAG – after all, FCoE runs on top of lossless Ethernet MAC service. In practice, there’s a huge difference between theory and practice.
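One detail worth remembering about aggregated links: a single flow always stays on one member link, because the bond pins each flow to a link with a deterministic hash of its headers (that's also what keeps FCoE frames of one exchange in order). A hedged sketch of a layer-3+4-style transmit hash; the actual hash differs per implementation, and `select_member` is an invented name:

```python
import zlib

# Hedged sketch of a bond/LAG transmit hash: a deterministic function of
# the flow's headers picks the member link, so packets of one flow never
# get reordered across links. Real 802.3ad implementations use different
# (and configurable) hash inputs.

def select_member(src_ip, dst_ip, src_port, dst_port, num_links):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_links
```

The consequence: a LAG gives you per-flow (not per-packet) load sharing, so a single elephant flow can never use more than one member link's bandwidth.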
Every time I write about IPv6 multihoming issues and the need for NPT66, I get a comment or two saying “but I thought this is already part of the IPv6 stack – can’t you have two or more IPv6 addresses on the same interface?” The commentators are right: you can have multiple IPv6 addresses on the same interface; the problem is which one to choose for outgoing sessions.
The source address selection rules are specified in RFC 3484 (Greg translated that RFC into an easy-to-consume format a while ago), but they are not very helpful as they cannot be influenced by the CPE router. Let’s look at the details.
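The rule that usually decides the outcome, Rule 8 of RFC 3484 (use the longest matching prefix), is easy to sketch; keep in mind that real stacks evaluate seven other rules (address scope, deprecated addresses, the policy table, ...) before falling through to it. The function names below are invented for illustration:

```python
import ipaddress

# Hedged sketch of RFC 3484 Rule 8: among candidate source addresses,
# prefer the one sharing the longest common prefix with the destination.

def common_prefix_len(a, b):
    """Number of leading bits two IPv6 addresses have in common."""
    xor = int(a) ^ int(b)
    return 128 - xor.bit_length()

def pick_source(candidates, destination):
    dst = ipaddress.IPv6Address(destination)
    return max(
        (ipaddress.IPv6Address(c) for c in candidates),
        key=lambda src: common_prefix_len(src, dst),
    )
```

This is exactly why the CPE router is helpless: the host applies these rules on its own, with no idea which upstream provider is reachable at the moment.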
It’s getting harder and harder to decide whether to use physical devices for L4-7 processing (stateful and web application firewalling, load balancing, VPN termination, WAN optimization) in your virtualized data center, or to deploy VM versions of the same appliances.
Physical devices usually perform better. Virtual appliances are more flexible, but don’t scale well ... and Embrane just complicated your decision-making process: they launched a scale-out distributed virtual appliance architecture, with products that combine the best of both worlds.
Isn’t it amazing that we can build the Internet, run the same web-based application on thousands of servers, give millions of people access to cloud services … and stumble badly every time we design virtual networks? No surprise: by trying to keep vSwitches simple (and their R&D and support costs low), the virtualization vendors violate one of the basic scalability principles: complexity belongs to the network edge.
The DHCPv6 server in Cisco IOS has gained several highly useful enhancements since I first looked into its behavior. Most of them seem to be implemented only in the 15.xS trains (where, one would assume, they are most badly needed), but there’s hope those changes will eventually trickle down into mainstream IOS.
I thought the Nexus 1000V was like Aspirin compared to VMware’s vSwitch, providing tons of additional functionality (including LACP and BPDU filtering) and the familiar NX-OS CLI. It turns out I was right in more ways than I imagined: the Nexus 1000V solves a lot of headaches, but it can also cause heartburn due to the combination of its distributed architecture and its reliance on the vDS object model in vCenter.
My friend Tom Hollingsworth has written another NAT66-is-evil blog post. While I agree with him in principle, and almost everyone agrees that NAT as we know it from the IPv4 world is plain stupid in the IPv6 world (NAPT even more so than NAT), we just might need NPT66 (Network Prefix Translation, RFC 6296) to support small-site multihoming ... and yet again, it seems many leading IPv6 experts grudgingly agree with me.
In the VMware vSwitch – the baseline of simplicity post I described simple layer-2 switches offered by most hypervisor vendors and the scalability challenges you face when trying to build large-scale solutions with them. You can solve at least one of the scalability issues pretty easily: VM-aware networking solutions available from most data center networking vendors dynamically adjust the list of VLANs on server-to-switch links.
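Conceptually, such a VM-aware solution derives the minimal VLAN list for each server-facing switch port from the current VM placement, so a port carries only the VLANs its server actually needs. A minimal sketch with an invented data model (real products pull this information from vCenter or a similar source):

```python
# Hedged sketch of VM-aware VLAN pruning: compute, per server-facing port,
# the set of VLANs used by the VMs currently running behind that port.
# The dictionaries are an invented stand-in for hypervisor inventory data.

def vlans_per_port(vm_placement, vm_vlan):
    """vm_placement: {switch_port: [vm_name, ...]}
       vm_vlan:      {vm_name: vlan_id}
       Returns the minimal sorted VLAN list for each port."""
    return {
        port: sorted({vm_vlan[vm] for vm in vms})
        for port, vms in vm_placement.items()
    }
```

Rerun the computation after every VM move and push the result to the switches, and flooded traffic for a VLAN no longer hits servers that have no VMs in it.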
Let’s start with the people who are trying to fix real-life problems. Browser and OS vendors are working around the lack of a session layer: the happy-eyeballs approach solves the dual-stack problem either in the OS stack (Apple) or in the browser (Chrome, Firefox). The results are ... interesting ;) ... but it seems most implementations are on the right track.
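The happy-eyeballs idea itself is simple: give the IPv6 connection attempt a short head start, then race IPv4 against it and use whichever succeeds first. A hedged sketch; the function, the callables, and the timing are invented for illustration, and real implementations differ in detail (caching, per-destination state, error handling):

```python
import concurrent.futures
import time

# Hedged sketch of happy eyeballs: start the IPv6 attempt, wait a short
# head-start interval, then race IPv4 against it. connect_v6/connect_v4
# are caller-supplied callables so the sketch stays self-contained.

def happy_eyeballs(connect_v6, connect_v4, head_start=0.3):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    try:
        futures = {pool.submit(connect_v6): "v6"}
        done, _ = concurrent.futures.wait(futures, timeout=head_start)
        if not done:
            # IPv6 didn't answer within the head start: race IPv4 too.
            futures[pool.submit(connect_v4)] = "v4"
            done, _ = concurrent.futures.wait(
                futures,
                return_when=concurrent.futures.FIRST_COMPLETED)
        winner = next(iter(done))
        return futures[winner], winner.result()
    finally:
        pool.shutdown(wait=False)
```

The user-visible effect: a broken IPv6 path costs you a few hundred milliseconds instead of a multi-second connection timeout, which is why the approach caught on so quickly.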
When I started making my first wobbling steps into the Junos MPLS world, Dan (@Johansfo) Backman took time to explain the differences between Cisco IOS and Junos MPLS implementations (and some of the reasons they are so different). This is my feeble attempt at describing what I understood he told me.
If you’re looking for a simple virtual switch, look no further than VMware’s venerable vSwitch. It runs very few control protocols (just CDP or LLDP, no STP or LACP), has no dynamic MAC learning, and only a few knobs and moving parts – ideal for simple deployments. Of course you have to pay for all that ease-of-use: designing a scalable vSwitch-based solution is tough (but then it all depends on what kind of environment you’re building).