Category: data center
The Intricacies of Optimal Layer-3 Forwarding
I must have confused a few readers with my blog posts describing Arista’s VARP and Enterasys’ Fabric Routing – I got plenty of questions along the lines of “how does it really work behind the scenes?” Let’s shed some light on those dirty details.
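In a nutshell, both rely on an anycast first-hop gateway: every switch owns the same virtual IP and virtual MAC, answers ARP for it locally, and routes any frame sent to that MAC instead of punting it to a "primary" gateway. Here's a toy Python sketch of that behavior (the class and values are purely illustrative, not a vendor API):

```python
# Toy model of an anycast first-hop gateway (the mechanism behind
# Arista VARP and Enterasys Fabric Routing). All names and values
# below are illustrative, not vendor APIs.

VIRTUAL_IP = "10.1.1.1"
VIRTUAL_MAC = "00:00:5e:00:01:01"   # shared virtual MAC (example value)

class ToRSwitch:
    def __init__(self, name):
        self.name = name

    def arp_reply(self, target_ip):
        # Every switch answers ARP for the shared virtual IP with the same
        # virtual MAC, so hosts never know which physical box replied.
        return VIRTUAL_MAC if target_ip == VIRTUAL_IP else None

    def forward(self, dst_mac, packet):
        # A frame addressed to the virtual MAC is routed by whichever switch
        # receives it; it's never shipped off to a "primary" gateway.
        if dst_mac == VIRTUAL_MAC:
            return f"{self.name} routes {packet} locally"
        return f"{self.name} bridges {packet}"

# Both ToR switches give identical answers, so the first-hop routing
# decision is always made on the switch closest to the sending host.
for switch in (ToRSwitch("ToR-A"), ToRSwitch("ToR-B")):
    assert switch.arp_reply(VIRTUAL_IP) == VIRTUAL_MAC
    print(switch.forward(VIRTUAL_MAC, "host-A -> 192.168.1.10"))
```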
IPv6-Only Data Center: Q&A Time
Not surprisingly, the unorthodox ideas of Tore Anderson generated plenty of questions, so he spent ~20 minutes answering them.
Configure physical firewalls based on VM groups? Sure, use DSE from Plexxi
Plexxi has an interesting problem. They have a shiny new solution that requires unorthodox approaches to network forwarding and allows them to implement potentially cool concepts like affinities (traffic engineering in disguise). They also need to sell these new concepts to their customers, and the first question I would ask after recovering from a hefty dose of Kool-Aid is "and how do I configure these affinities?"
SIIT – The Magic Behind IPv6-only Data Center
Remember the IPv6-only data center design Tore Anderson described in last June’s webinar? Wondered how he got it done? The secret sauce he used is SIIT – the stateless IPv6-to-IPv4 translation technology. His trick: turning it around.
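The key property of SIIT is that it maps every IPv4 address into (and out of) an IPv6 translation prefix with no per-flow state. Here's a minimal Python sketch of that address mapping, using the RFC 6052 well-known prefix 64:ff9b::/96 purely as an example value – the actual prefix and deployment details are in the webinar:

```python
import ipaddress

# Stateless IPv4 <-> IPv6 address mapping as used by SIIT.
# The translation prefix below is the RFC 6052 well-known prefix,
# used here only as an example value.
XLAT_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def ipv4_to_ipv6(v4):
    """Embed an IPv4 address in the low-order 32 bits of the prefix."""
    v4 = ipaddress.IPv4Address(v4)
    return ipaddress.IPv6Address(int(XLAT_PREFIX.network_address) | int(v4))

def ipv6_to_ipv4(v6):
    """Recover the embedded IPv4 address -- no translation state needed."""
    v6 = ipaddress.IPv6Address(v6)
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

mapped = ipv4_to_ipv6("192.0.2.1")
print(mapped)                 # 64:ff9b::c000:201
print(ipv6_to_ipv4(mapped))   # 192.0.2.1
```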
Layer-2 Extension (OTV) Use Cases
I was listening to the fantastic OTV Deep Dive PQ Packet Pushers podcast while biking around the wonderful Slovenian forests. They started the podcast by discussing OTV use cases, with Ethan throwing in long-distance vMotion (the usual long-distance L2 extension selling point), but refreshingly, some of the engineers said “well, that’s not really the use case we see in real life.”
So what were the use cases they were mentioning?
Plexxi PSI: MAU at Gigabit Speed
Regardless of the advantages of photonic switching (David Husak claims it’s 20,000 times more effective than electronic switching), the programmable optical components remain ludicrously expensive, prompting Plexxi to launch a cost-optimized fixed-topology version of their data center products.
Test Virtual Appliance Throughput with Spirent Avalanche NEXT
During Networking Tech Field Day 6, Spirent showed us Avalanche NEXT – another great testing tool that generates up to 10 Gbps of perfectly valid application-level traffic that you can push through your network devices to test their performance and stability, or the impact of a feature mix on maximum throughput.
Not surprisingly, as soon as they told us that you could use Avalanche NEXT to replay captured traffic we started getting creative ideas.
Migrating a cold VM into a foreign subnet
Moving a running VM into a foreign subnet is Mission Impossible due to stale ARP entries (anyone telling you otherwise is handwaving over a detail or two – maybe their VM doesn't communicate with other VMs in the same subnet), but it's entirely feasible to migrate a cold VM into a foreign subnet if you can fix IP routing. Here's how you can do the trick with Enterasys switches.
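The “fix IP routing” part boils down to advertising a host route for the migrated VM from its new location, so the VM can keep its old address even though it no longer matches the local subnet. A back-of-the-napkin Python sketch (the function and the route syntax are illustrative, not an Enterasys API):

```python
import ipaddress

def routes_after_cold_move(vm_ip, new_subnet, new_gateway):
    """Return the routing fixup needed after moving a cold VM into a
    foreign subnet: a /32 host route pointing at the VM's new location."""
    vm = ipaddress.IPv4Address(vm_ip)
    subnet = ipaddress.IPv4Network(new_subnet)
    if vm in subnet:
        return []          # VM landed in its "home" subnet, nothing to do
    # The VM keeps its old address, so the rest of the network needs a
    # more-specific (host) route toward the switch in the new location.
    return [f"ip route {vm}/32 via {new_gateway}"]

# Example: VM 10.1.1.20 (from subnet 10.1.1.0/24) is powered off and
# restarted in a data center whose local subnet is 10.2.2.0/24.
print(routes_after_cold_move("10.1.1.20", "10.2.2.0/24", "10.2.2.1"))
```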
Overlay Networks and QoS FUD
One of the usual complaints I hear whenever I mention overlay virtual networks is “with overlay networks we lose all application visibility and QoS functionality” ... that worked so phenomenally in the physical networks, right?
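For the record, most overlay encapsulations copy (or can copy) the inner DSCP marking into the outer header, so the transport network still has something to base its QoS decisions on. A toy Python sketch of that encapsulation step, with packets modeled as plain dictionaries (no real VXLAN stack involved):

```python
def vxlan_encapsulate(inner_packet, vtep_src, vtep_dst, vni):
    """Wrap a tenant packet into an overlay header, copying the inner
    DSCP marking into the outer header so transit QoS still works."""
    return {
        "outer_src": vtep_src,
        "outer_dst": vtep_dst,
        "outer_dscp": inner_packet["dscp"],   # marking is preserved
        "vni": vni,
        "payload": inner_packet,
    }

voice = {"src": "10.1.1.10", "dst": "10.1.2.20", "dscp": 46}  # EF-marked
encap = vxlan_encapsulate(voice, "192.168.0.1", "192.168.0.2", vni=5000)
print(encap["outer_dscp"])   # 46 -- the physical network still sees the marking
```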
How Much Data Center Bandwidth Do You Really Need?
Networking vendors are quick to point out how the opaqueness (read: we don’t have the HW to look into it) of overlay networks presents visibility problems and how their favorite shiny gizmo (whatever it is) gives you better results (they usually forget to mention the lock-in that it creates).
Now let’s step back and ask a fundamental question: how much bandwidth do we need?
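A good starting point is a back-of-the-envelope estimate: installed server-facing capacity multiplied by a realistic utilization figure. All the numbers in this Python sketch are made-up placeholders – plug in your own:

```python
# Back-of-the-envelope data center bandwidth estimate.
# Every number here is a placeholder -- use figures from your environment.
servers            = 500          # physical servers
nics_per_server    = 2            # uplinks per server
nic_speed_gbps     = 10           # per-uplink speed
avg_utilization    = 0.10         # long-term average NIC utilization
east_west_fraction = 0.75         # share of traffic staying inside the DC

raw_edge_capacity = servers * nics_per_server * nic_speed_gbps
expected_load     = raw_edge_capacity * avg_utilization
fabric_load       = expected_load * east_west_fraction

print(f"Installed edge capacity : {raw_edge_capacity:,} Gbps")
print(f"Expected average load   : {expected_load:,.0f} Gbps")
print(f"Of that, east-west      : {fabric_load:,.0f} Gbps")
```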
Why Is Network Virtualization So Hard?
For the last few years we’ve been hearing how networking is the last bastion of rigidity in the wonderful unicorn-flavored virtual world. Let’s see why it’s so much harder to virtualize networks than compute or storage capacity (side note: it didn’t help that virtualization vendors had no clue about networking, but things are changing).
OpenFlow Fabric Controllers Are Light-years Away from Wireless Ones
When talking about OpenFlow and the whole idea of controller-based networking, people usually say “well, it’s nothing radically new, we’ve been using wireless controllers for years and they work well, so the OpenFlow ones will work as well.”
Unfortunately, the comparison is totally misleading.
Layer-2 DCI with Enterasys Switches
The second half of the Enterasys DCI Solutions webinar focused on real-life case studies. First the less interesting one: long-distance live VM migration (you know my feelings about the whole concept, but sometimes you just have to do it) and the role of fabric routing and host routing in the process.
Sooner or Later, Someone Will Pay for the Complexity of the Kludges You Use
I loved listening to the OTV/FabricPath/LISP Packet Pushers podcast. Ron Fuller and Russ White did a great job explaining the role of OTV, FabricPath and LISP in a stretched (inter-DC) subnet deployment scenario and how the three pieces fit together … but I couldn't stop wondering whether there is a better way to solve the underlying business need than throwing three pretty complex new technologies and the associated equipment (or VDC contexts or line cards) into the mix.
The Plexxi Challenge (or: Don’t Blame the Tools)
Plexxi has an incredibly creative data center fabric solution: they paired data center switching with CWDM optics, programmable ROADMs and controller-based traffic engineering to get something that looks almost like a distributed switched version of FDDI (or Token Ring for the FCoTR fans). Not surprisingly, the tools we use to build traditional networks don’t work well with their architecture.
In a recent blog post Marten Terpstra hinted at the shortcomings of the Shortest Path First (SPF) approach used by every modern link-state routing protocol. Let’s take a closer look at why Plexxi’s engineers couldn’t use SPF.
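As a refresher, here's what SPF actually computes – a compact Dijkstra sketch in Python. Note how every non-shortest path gets discarded along the way, which is precisely what makes plain SPF a poor fit for a fabric that wants to spread traffic across many unequal-cost paths:

```python
import heapq

def spf(graph, source):
    """Plain Dijkstra SPF: returns the cost of the single cheapest path
    to every node; all longer (but possibly useful) paths are discarded."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbor, link_cost in graph[node]:
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost  # keep only the cheapest path
                heapq.heappush(queue, (new_cost, neighbor))
    return dist

# Tiny ring topology A-B-C-D-A with unequal link costs.
topology = {
    "A": [("B", 1), ("D", 4)],
    "B": [("A", 1), ("C", 1)],
    "C": [("B", 1), ("D", 1)],
    "D": [("C", 1), ("A", 4)],
}
print(spf(topology, "A"))   # cheapest cost per node; the direct A-D link is never used
```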