If you plan to attend the Troopers 2014 conference in two weeks, don’t forget to include my full-day SDN workshop on Tuesday in your agenda (the Troopers conference is sold out, but you can still register for the workshop). The topics of the workshop will include:
- Why do we need SDN and what is it?
- OpenFlow, its advantages, drawbacks and scalability challenges;
- Typical OpenFlow and SDN deployment considerations;
- Real-life SDN use cases, both OpenFlow- and non-OpenFlow ones;
- Network function virtualization;
- Software-defined data centers.
What could a small ISP do to limit failure domains? Metro Ethernet and MPLS-based Virtual Private LAN Service (VPLS) are all the rage, offering customers the promise of connecting all their branch offices together and using the same set of VLANs with unrestricted Layer 2 connectivity between their sites. The ISP is left with a simple choice: extend the failure domains, or lose the sale, because the customer will simply buy the service from another ISP.
On the very same day I published the “CLI Is Not the Problem” post I stumbled upon an interesting discussion on the v6ops mailing list. It all started with a crazy idea to modify BGP to use a 128-bit router ID to help operators who think they can manually configure large IPv6-only networks without any centralized configuration/management authority that would assign 32-bit identifiers to their routers.
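Just to put the 32-bit identifier “problem” in perspective, here’s a minimal sketch (my own illustration, not anything proposed on the mailing list) of deriving a 32-bit router ID from a router’s IPv6 loopback address. It’s deterministic and needs no central authority, but it’s obviously not collision-free, so you’d still want some configuration management discipline behind it:

```python
# Hypothetical illustration: hash an IPv6 loopback address into a 32-bit
# BGP router ID. Collisions are possible, so a real deployment would still
# need a way to detect and resolve duplicate router IDs.
import ipaddress
import zlib

def router_id_from_ipv6(loopback: str) -> str:
    """Map an IPv6 address to a dotted-quad 32-bit router ID (not collision-free)."""
    addr = ipaddress.IPv6Address(loopback)
    rid = zlib.crc32(addr.packed) & 0xFFFFFFFF
    return str(ipaddress.IPv4Address(rid))

print(router_id_from_ipv6("2001:db8::1"))   # deterministic dotted-quad output
```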
When Apple launched the new release of iOS last autumn, networking gurus realized the new iOS uses Multipath TCP (MPTCP), a recent development that allows a single TCP socket (as presented to the higher layers of the application stack) to use multiple parallel TCP sessions. Does that mean we’re getting closer to fixing the TCP/IP stack?
TL&DR summary: Unfortunately not.
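If you want to see what MPTCP looks like from the application’s perspective, here’s a minimal client sketch. It assumes a Linux kernel with MPTCP support (5.6 or later) and a reachable test server; it has nothing to do with Apple’s implementation, it merely shows that the application still opens a single stream socket and lets the kernel worry about subflows:

```python
# Minimal MPTCP client sketch for Linux kernels built with MPTCP support.
# Falls back to the numeric protocol value (262) on Python versions that
# don't define socket.IPPROTO_MPTCP (the constant was added in Python 3.10).
import socket

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

# The application sees a single stream socket; the kernel decides whether
# to add further subflows over other available interfaces.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
s.connect(("192.0.2.10", 80))          # assumed test server
s.sendall(b"GET / HTTP/1.0\r\nHost: test\r\n\r\n")
print(s.recv(4096))
s.close()
```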
Another pretty down-to-earth OpenFlow use case: service insertion. “Slightly” easier than playing with VLANs or PBR (can you tell how tired I am based on the enormous length of this intro?).
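To make the comparison a bit more concrete, here’s what service insertion boils down to in OpenFlow terms: a couple of flow entries that steer the interesting traffic through the service appliance and let the returning traffic continue along the regular forwarding path. The subnet, priorities and port numbers below are invented, and the dictionary layout is a generic illustration rather than any particular controller’s API:

```python
# Conceptual OpenFlow service insertion: traffic from one customer subnet
# is steered through a firewall attached to switch port 3, everything else
# keeps using regular forwarding. All values are made up for illustration.
FIREWALL_PORT = 3

service_insertion_flows = [
    {   # steer the interesting traffic toward the firewall
        "priority": 200,
        "match": {"eth_type": 0x0800, "ipv4_src": "10.1.1.0/24"},
        "actions": [{"type": "OUTPUT", "port": FIREWALL_PORT}],
    },
    {   # traffic coming back from the firewall re-enters normal forwarding
        "priority": 300,
        "match": {"in_port": FIREWALL_PORT},
        "actions": [{"type": "NORMAL"}],
    },
]

for flow in service_insertion_flows:
    print(flow)   # in real life you'd push these through the controller
```

Compare that with juggling service VLANs or per-interface PBR route maps on every transit device and the appeal becomes obvious.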
The “Is CLI In My Way … or a Symptom of a Bigger Problem” post generated some interesting discussions on Twitter, including this one:
One would hope that we wouldn’t have to bring up this point in 2014 … but unfortunately some $vendors still don’t get it.
When describing Hyper-V Network Virtualization packet forwarding I briefly mentioned that the hypervisor switches create (an equivalent of) a host route for every VM they need to know about, prompting some readers to question the scalability of such an approach. As it turns out, layer-3 switches have been doing the same thing under the hood for years.
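If it helps, think of the hypervisor forwarding state as nothing more than a dictionary of host routes. Here’s a toy model (all names and addresses invented) mapping every known VM address to the physical host terminating the tunnel; a VM move is just an entry update pushed by the orchestration system:

```python
# Toy model of per-VM host routes in a hypervisor virtual switch: every VM
# (customer) address maps straight to the physical (provider) address of the
# host currently running the VM, the moral equivalent of a /32 host route.
from typing import Dict

host_routes: Dict[str, str] = {        # VM address -> physical host address
    "10.0.1.11": "192.168.100.1",
    "10.0.1.12": "192.168.100.2",
    "10.0.2.21": "192.168.100.2",
}

def next_hop(dst_vm_ip: str) -> str:
    """Return the physical host to tunnel the packet to."""
    try:
        return host_routes[dst_vm_ip]
    except KeyError:
        raise LookupError(f"no host route for {dst_vm_ip}: flood, punt or drop")

print(next_hop("10.0.1.12"))                 # -> 192.168.100.2

host_routes["10.0.1.12"] = "192.168.100.3"   # VM moved to another host
```

The scalability question thus boils down to how many such entries a hypervisor (or a hardware layer-3 switch) can hold and how quickly they can be updated.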
One of my regular readers sent me a long list of FCoE-related questions:
I wanted to get your thoughts around another topic – iSCSI vs. FCoE? Are there merits and business cases to moving to FCoE? Does FCoE deliver better performance in the end? Does FCoE make things easier or more complex?
He also made a very relevant remark: “Vendors that can support FCoE promote this over iSCSI; those that don’t have an FCoE solution say they aren’t seeing any growth in this area to warrant developing a solution”.
The first question I tried to answer (and probably failed to) in the SDN 101 webinar was: What exactly is SDN? Is it an architecture with a physically separate, centralized control plane, or is it more than that? Does a separate control plane make sense, or is it better to program distributed devices? Watch the video recorded during the live webinar session and tell me whether you agree with my answers.
A while ago Sander Steffann and Iljitsch van Beijnum wrote a fantastic document that compared most (somewhat) widely used IPv6-over-IPv4 tunneling mechanisms. The document got published as RFC 7059 in November and is a definite must-read for anyone having to deal with this particular can of worms.
Unfortunately the document doesn’t cover the recent IPv4 sunset developments – numerous mechanisms that transport IPv4 leftovers over IPv6-only access networks (MAP-E, DS-Lite, lw4over6, 464XLAT …). One can only hope Sander and Iljitsch plan to produce a complementary document soon ;)
Interested in IPv4-to-IPv6 transition mechanisms?
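If you’ve never looked inside one of these tunnels, the simplest of them (plain protocol-41 encapsulation, the basis of the 6in4-style mechanisms described in RFC 7059) is nothing more than an IPv6 packet wrapped in an IPv4 header. A quick Scapy sketch, using documentation prefixes for all addresses:

```python
# IPv6-over-IPv4 (protocol 41) encapsulation illustrated with Scapy.
# All addresses come from the documentation prefixes; actually sending
# the packet (commented out) would require root privileges.
from scapy.all import IP, IPv6, ICMPv6EchoRequest, send

outer = IP(src="192.0.2.1", dst="198.51.100.1", proto=41)   # IPv4 tunnel endpoints
inner = IPv6(src="2001:db8::1", dst="2001:db8::2") / ICMPv6EchoRequest()

packet = outer / inner
packet.show()
# send(packet)
```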
If you’re building a greenfield private cloud, you SHOULD consider using virtual network service appliances (firewalls, load balancers, IPS/IDS systems), removing the need for additional hard-to-scale hardware devices. But can we go a step further? Can we replace all networking hardware with x86 servers and virtual appliances?
My good friend Ethan recently published a blog post rightfully complaining how various vendor CLIs hamper our productivity. He’s absolutely correct from the productivity standpoint, and I agree with his conclusions (we need a layer of abstraction), but there’s more behind the scenes.
I hope it’s obvious to everyone by now that flow-based forwarding doesn’t work well in existing hardware. Switches designed for a large number of flow-like forwarding entries (NEC ProgrammableFlow switches, Enterasys data center switches, and a few others) might be an exception, but even they can’t cope with the tremendous flow update rate required by reactive flow setup ideas.
One would expect virtual switches to fare better. Unfortunately that doesn’t seem to be the case.
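A quick back-of-the-envelope calculation shows why. The numbers below are illustrative assumptions, not measurements, but they’re not far from what a busy data center rack might generate:

```python
# Back-of-the-envelope flow setup load under reactive flow setup.
# All input numbers are assumptions chosen for illustration.
servers_per_rack     = 40     # servers behind a single ToR switch
new_flows_per_server = 250    # new sessions per second per server (assumed)
flow_idle_timeout    = 10     # seconds a flow entry stays installed (assumed)

setup_rate = servers_per_rack * new_flows_per_server   # flow-mods per second
table_size = setup_rate * flow_idle_timeout            # concurrent entries needed

print(f"flow setups per second per ToR switch: {setup_rate:,}")   # 10,000
print(f"concurrent flow entries needed:        {table_size:,}")   # 100,000
```

Compare those numbers with hardware flow tables that typically hold a few thousand entries and sustain somewhere between a few hundred and a few thousand updates per second, and the problem is obvious; the fact that even virtual switches struggle makes it worse.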
Network tapping and tap aggregation are obviously the OpenFlow equivalent of the Hello World application – almost every OpenFlow controller vendor has a tap aggregation solution. Does that make sense? Sure – the tap aggregation network is outside the production data path and thus a great candidate for semi-production technology pilots.
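In case the Hello World comparison sounds dismissive, this is roughly all the application logic there is: replicate whatever arrives on the tap-facing ports toward the analysis tools, optionally filtering or steering specific traffic to specific tools. A conceptual sketch (ports and match criteria invented, and the flow-entry layout is a generic illustration, not any particular controller’s API):

```python
# Conceptual tap-aggregation flow table: copy everything arriving from the
# taps to a full-capture tool, and additionally send HTTP traffic to a
# dedicated analysis tool. All port numbers and matches are invented.
TAP_PORTS = [1, 2]    # ports connected to network taps / SPAN sessions
TOOL_ALL  = 10        # port connected to a full-capture appliance
TOOL_HTTP = 11        # port connected to an HTTP analysis tool

def tap_aggregation_flows():
    flows = []
    for port in TAP_PORTS:
        flows.append({   # default: replicate everything to the capture tool
            "priority": 100,
            "match": {"in_port": port},
            "actions": [{"type": "OUTPUT", "port": TOOL_ALL}],
        })
        flows.append({   # HTTP traffic also goes to the HTTP analysis tool
            "priority": 200,
            "match": {"in_port": port, "eth_type": 0x0800,
                      "ip_proto": 6, "tcp_dst": 80},
            "actions": [{"type": "OUTPUT", "port": TOOL_ALL},
                        {"type": "OUTPUT", "port": TOOL_HTTP}],
        })
    return flows

for flow in tap_aggregation_flows():
    print(flow)
```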