Blog Posts in April 2014
During a recent NetOps-focused discussion trying to figure out where Puppet/Chef/Ansible/… make sense in the brave new SDN-focused networking world I made this analogy: “A Puppet manifest is like Prolog; a router configuration is like Java or C++.” It’s a nice sound bite. It’s also totally wrong.
As expected, ARIN wasn’t that far behind APNIC and RIPE in IPv4 allocations and is now down to the last /8. Maybe it’s time for the last denialists to wake up and start considering IPv6 (or not – consultants love panicking customers)… and the new IPv6 resources page on ipSpace.net might help you get IPv6-fluent (hint: don’t miss the must-read documents section).
One of my readers sent me an interesting question:
How does one impose a security policy on servers connected via a Clos fabric? The traditional model of segregating servers into VLANs/zones and enforcing policy with a security device doesn’t fit here. Can VRF-lite be used on the mesh to accomplish segregation?
Good news: the security aspects of leaf-and-spine fabrics are no different from more traditional architectures.
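To illustrate the point, here’s a minimal sketch of VRF-lite segregation on a leaf switch, using Cisco IOS-like syntax. The VRF names, VLAN numbers, and addresses are all hypothetical; the idea is simply that each security zone lives in its own VRF, and any inter-zone traffic has to leave the fabric through a policy-enforcement device:

```
! Hypothetical leaf-switch configuration (Cisco IOS-like syntax).
! Each security zone gets its own VRF; the fabric carries both VRFs,
! and inter-VRF traffic must pass through an external firewall.
vrf definition WEB
 address-family ipv4
!
vrf definition DB
 address-family ipv4
!
interface Vlan10
 description Web servers (WEB zone)
 vrf forwarding WEB
 ip address 10.0.10.1 255.255.255.0
!
interface Vlan20
 description Database servers (DB zone)
 vrf forwarding DB
 ip address 10.0.20.1 255.255.255.0
```

The enforcement point (firewall or L3 device with ACLs) stays exactly where it would be in a traditional design; the leaf-and-spine fabric just provides transport.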
Network designers (and smart consulting and system integration companies) often use ExpertExpress to get a second opinion on a design someone put together using technologies they’re not thoroughly familiar with. Not surprisingly, some of those third-party designs aren’t exactly optimal.
A while ago I was asked to review a data center “design” proposed to my customer by a system integrator. It had a pair of Nexus 5500 switches connecting servers and storage to a single Nexus 7000, which was then connected to WAN edge routers.
Brook Reams sent me an interesting tidbit: Brocade is the first vendor that actually shipped a VXLAN VTEP controlled by a VMware NSX controller. It’s amazing to see how Brocade leapfrogged everyone else (they also added tons of other new functionality in NOS releases 4.0 and 4.1).
Distributed DoS mitigation is another one of the “we were doing SDN without knowing it” cases: remote-triggered black holes are used by most major ISPs, and BGP Flowspec was available for years. Not surprisingly, people started using OpenFlow to implement the same concept (there’s even a proposal to integrate OpenFlow support into Bro IDS).
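For readers who haven’t seen remote-triggered black holes in action, here’s a hedged sketch of a trigger-router configuration in Cisco IOS-like syntax (AS number, community, and addresses are made up for illustration):

```
! Hypothetical RTBH trigger-router configuration (Cisco IOS-like syntax).
! A static route tags the victim host; BGP redistributes it with a
! blackhole community and a well-known discard next hop (192.0.2.1).
ip route 203.0.113.66 255.255.255.255 Null0 tag 666
!
route-map RTBH permit 10
 match tag 666
 set ip next-hop 192.0.2.1
 set community 65000:666 no-export
!
router bgp 65000
 redistribute static route-map RTBH
```

Every edge router pre-installs `ip route 192.0.2.1 255.255.255.255 Null0`, so the moment the trigger route propagates, traffic toward the victim is dropped network-wide — the same centralized drop-this-flow decision an OpenFlow-based solution would make.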
During Interop 2014 I got involved in numerous interesting conversations revolving around SDN and new operations models (including the heretic idea of bundling appliances with application stacks and making developers responsible for network services).
During one of those discussions someone said “I think I get the ideas behind DevOps, but I don’t think we should configure our network devices with Puppet or Chef” to which I replied “Puppet or Chef are just tools, DevOps is a lifestyle.”
Paul Unbehagen made an interesting claim when presenting the Avaya network built for the Sochi Olympics during a recent Tech Field Day event: “we didn’t need MPLS or BGP to implement L2- and L3VPN. It was all done with SPB and IS-IS.”
Most ISPs rolling out large-scale residential IPv6 (bringing the US to ~7%, Switzerland to ~10% and Belgium above 16%) agree it’s a no-brainer, but the rest of the world still hesitates.
To help the dubious majority cross the (perceived) shaky bridge across the gaping chasm between IPv4 and IPv6, a team of great engineers with decades of IPv6 operational experience (including networking gurus from Time Warner, Comcast and Yahoo, and the never-tiring IPv6 evangelist Jan Žorž) wrote an IPv6 Troubleshooting for Helpdesks document.
When I first heard about NFV, I thought it was just another steaming pile of hype designed to push the appliance vendors to offer their solutions in VM format. After all, we’re past the hard technical challenges: most appliances deserve to have an Intel Inside sticker, performance problems have been addressed (see Intel DPDK, 6WIND, PF_ring and Snabb Switch), so what’s stopping us from deploying NFV apart from stubborn vendors who want to sell hardware, not licenses?
I always recommended EBGP-based designs for DMVPN networks due to the significant complexity of running IBGP without an underlying IGP. The neighbor next-hop-self all feature introduced in recent Cisco IOS releases has totally changed my perspective – it makes IBGP-over-DMVPN the best design option unless you want to use the DMVPN network as a backup for an MPLS/VPN network.
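A minimal sketch of what the hub configuration might look like (AS number, address range, and peer-group name are hypothetical):

```
! Hypothetical DMVPN hub configuration (Cisco IOS-like syntax).
! "next-hop-self all" rewrites the next hop even on reflected IBGP
! routes, so spoke-to-spoke reachability works without an IGP
! running over the DMVPN tunnel.
router bgp 65000
 bgp listen range 10.0.0.0/24 peer-group SPOKES
 neighbor SPOKES peer-group
 neighbor SPOKES remote-as 65000
 neighbor SPOKES route-reflector-client
 neighbor SPOKES next-hop-self all
```

Without the all keyword, next-hop-self applies only to EBGP-learned routes, which is exactly why IBGP-over-DMVPN used to require an underlying IGP.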
In the first half hour of the Infrastructure for Private Clouds workshop at last week’s Interop Las Vegas I focused on business aspects of private cloud design: defining the customers, the services, and the level of self-service you’ll offer to your customers.
Nick Martin published a great summary of these topics @ SearchServerVirtualization; I couldn’t have done it better myself (they want to get your email address, but this article is definitely worth it).
I had a nice chat with Doug Gourlay from Arista during Interop Las Vegas and he made an interesting remark along the lines of “in leaf-and-spine fabrics it doesn’t make sense to use redundant supervisors in switches – they cause more problems than they solve.”
As always, in the end it all depends on your environment and use case, but he definitely has a point; good engineering always works better than a heap of kludges.
During one of the ExpertExpress engagements I helped a company implement the BGP Everywhere concept, significantly simplifying their routing by replacing unstable route redistribution between BGP and IGP with a single BGP domain running across MPLS/VPN and DMVPN networks.
They had a pretty simple core site network, so we decided to establish an IBGP session between the DMVPN hub router and the MPLS/VPN CE router (managed by the SP).
Friday roundtables are one of the best parts of the Troopers conference – this year we were busy discussing (among other things) how safe the hypervisors are as compared to more traditional network isolation paradigms.
TL&DR summary: If someone manages to break into your virtualized infrastructure, he’ll probably find easier ways to hop around than hypervisor exploits.
I don’t think it would be too hard to guess the topic of my talk at the recent Troopers conference: SDN was the obvious choice, and the presentation simply had to include security aspects of SDN.
TL&DR summary: We know how to do it. We also know it's not simple.
An interesting startup is launching their SDN solution @ Interop Las Vegas today: Quantum Networks use the latest quantum computing technology to solve some of the hardest problems of controller-based networking.
One of the fundamental problems of hardware-based OpenFlow solutions is the flow update rate – most switches using merchant silicon can insert around 1000 new flows per second into their forwarding tables. Technologies based on quantum mechanics effects change all that – a quantum entanglement technology patented by Quantum Networks can install new flows instantaneously across the whole network.