Category: Data Center
Queuing Mechanisms in Modern Switches
A while ago Brad Hedlund (then at Dell Force10) started an interesting discussion comparing leaf-and-spine (Clos) fabrics built from fixed-configuration pizza-box switches with high-end chassis switches. The readers’ comments were all over the place (pricing, wiring, power consumption), but surprisingly nobody addressed the queuing issues.
This blog post focuses on queuing mechanisms available within a switch; the next one will address end-to-end queuing issues in leaf-and-spine fabrics.
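To make “queuing mechanisms” concrete, here’s a minimal Python sketch of the two schedulers you’ll find in almost every data center switch: a strict-priority queue combined with weighted round-robin (WRR). Queue counts and weights are illustrative assumptions, not any particular vendor’s defaults.

```python
from collections import deque

class EgressScheduler:
    """One strict-priority queue plus weighted round-robin (WRR) queues."""

    def __init__(self, weights):
        self.priority = deque()                   # queue 0: strict priority
        self.queues = [deque() for _ in weights]  # queues 1..n: WRR
        self.weights = list(weights)
        self.credits = list(weights)
        self.rr = 0

    def enqueue(self, packet, queue_id):
        if queue_id == 0:
            self.priority.append(packet)
        else:
            self.queues[queue_id - 1].append(packet)

    def dequeue(self):
        # strict-priority traffic always goes first (the usual "voice first" rule)
        if self.priority:
            return self.priority.popleft()
        # weighted round-robin over the remaining queues
        for _ in range(2 * len(self.queues)):
            q = self.rr % len(self.queues)
            self.rr += 1
            if self.queues[q] and self.credits[q] > 0:
                self.credits[q] -= 1
                return self.queues[q].popleft()
            if self.credits[q] <= 0:
                self.credits[q] = self.weights[q]  # replenish spent credits
        return None                                # all queues empty

sched = EgressScheduler(weights=[4, 2, 1])
sched.enqueue("bulk transfer", 3)
sched.enqueue("voice", 0)
print(sched.dequeue())   # voice
print(sched.dequeue())   # bulk transfer
```

Real switches implement the same logic in hardware, usually with per-queue buffer limits and shapers on top; the interesting differences between pizza boxes and chassis switches are in how many queues they have and where the buffers sit.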
Data Center Protocols in HP Switches
HP representatives made some pretty bold claims during Networking Tech Field Day 1, including “our switches will support EVB, FCoE, SPB and TRILL.” It took them three years to deliver on those promises (and the hardware they had at the time didn’t exactly support all the features they promised), but their current protocol coverage is impressive.
OpenFlow Support in Data Center Switches
Good news: in the last few months, almost all major data center Ethernet switching vendors (Arista, Cisco, Dell Force10, HP, and Juniper) released a documented GA version of OpenFlow on some of their data center switches.
Bad news: no two vendors have even remotely comparable functionality.
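To illustrate what even the most basic functionality requires, here’s a minimal sketch using the open-source Ryu controller framework and OpenFlow 1.3: install a single flow entry that forwards traffic between two ports. The port numbers and priority are made-up illustration values.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class SimpleForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # match everything arriving on port 1, send it out port 2
        match = parser.OFPMatch(in_port=1)
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```

The vendor differences show up in what the switch does with a request like this: how many flow tables and entries it supports, and which matches and actions it actually implements in hardware.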
Load Balancing Across IP Subnets
One of my readers sent me this question:
I have a data center with huge L2 domains. I would like to move routing down to the top of the rack; however, I’m stuck with a load-balancing question: how do load balancers work if you have a routed network and pool members that are multiple hops away? Is it still possible to use Direct Server Return?
There are multiple ways to make load balancers work across multiple subnets; one of them (tunnel-based Direct Server Return) is sketched below.
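Here’s a scapy sketch contrasting classic (MAC-rewrite) Direct Server Return with the tunneled variant (the approach the Linux Virtual Server project calls LVS-TUN). All addresses and MACs are made up.

```python
from scapy.all import Ether, IP, TCP

VIP = "192.0.2.10"           # virtual IP shared by the LB and the servers
SERVER_MAC = "00:11:22:33:44:55"
SERVER_IP = "198.51.100.20"  # server's real address, several hops away

# Classic (L2) DSR: the load balancer rewrites only the destination MAC
# and leaves the VIP as the destination IP. This works only when the
# server sits in the same L2 domain -- an intermediate router would have
# no idea where to send a packet addressed to the VIP.
l2_dsr = Ether(dst=SERVER_MAC) / IP(dst=VIP) / TCP(dport=80)

# Routed alternative: encapsulate the original packet in an IPIP tunnel
# toward the server's real (routable) address. The server decapsulates
# the packet, finds the VIP configured on its loopback interface, and
# answers the client directly, bypassing the load balancer.
tunneled = IP(dst=SERVER_IP) / IP(dst=VIP) / TCP(dport=80)

print(l2_dsr.summary())
print(tunneled.summary())
```

In the tunneled variant the return traffic never passes through the load balancer, which is the whole point of Direct Server Return; the price is tunnel configuration (and MTU headaches) on every pool member.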
Whitebox Switching and Fermi Estimates
Craig Matsumoto recently quoted some astonishing claims from Dell’Oro Group analyst Alan Weckel:
- Whitebox switches (combined) will be the second largest ToR vendor;
- Whitebox 10GE ports will cost around $100.
Let’s try to guesstimate how realistic these claims are.
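This is the kind of arithmetic such a guesstimate involves; every number below is an assumption made up for illustration, not Dell’Oro (or any vendor’s) data.

```python
# Back-of-the-envelope (Fermi) check of the "$100 per 10GE port" claim.
ports_per_switch = 48        # typical 10GE pizza-box ToR switch
price_per_port = 100         # the claimed whitebox price point

switch_price = ports_per_switch * price_per_port
print("Implied switch price: $%d" % switch_price)        # $4800

# For the claim to hold, the entire bill of materials (switching
# silicon, PHYs, CPU, RAM, power supplies, fans, chassis) plus the
# manufacturer's and reseller's margins must fit under that figure.
assumed_bom = 3500           # pure guess at component cost
margin = (switch_price - assumed_bom) / float(switch_price)
print("Implied gross margin: %.0f%%" % (margin * 100))   # ~27%
```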
Connecting Legacy Servers to Overlay Virtual Networks
I wrote (and spoke) at length about layer-2 and layer-3 gateways between VLANs and overlay virtual networks, but I still get questions along the lines of “how will you connect legacy servers to the new cloud infrastructure that uses VXLAN?”
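One possible answer (far from the only one) is a do-it-yourself layer-2 gateway on a Linux host: a bridge joining a VLAN sub-interface (the legacy servers) with a VXLAN interface (the overlay segment). Here’s a sketch using the pyroute2 library; interface names, VLAN/VNI numbers and the multicast group are illustrative assumptions.

```python
from pyroute2 import IPRoute

ipr = IPRoute()
uplink = ipr.link_lookup(ifname="eth0")[0]

# VLAN 100 carries the legacy servers' traffic
ipr.link("add", ifname="eth0.100", kind="vlan", link=uplink, vlan_id=100)

# VNI 5000 is the overlay segment (multicast-based flood-and-learn VXLAN)
ipr.link("add", ifname="vxlan5000", kind="vxlan", vxlan_id=5000,
         vxlan_link=uplink, vxlan_group="239.1.1.1", vxlan_port=4789)

# a bridge glues the two segments into a single L2 domain
ipr.link("add", ifname="br-gw", kind="bridge")
bridge = ipr.link_lookup(ifname="br-gw")[0]
for ifname in ("eth0.100", "vxlan5000"):
    idx = ipr.link_lookup(ifname=ifname)[0]
    ipr.link("set", index=idx, master=bridge)
    ipr.link("set", index=idx, state="up")
ipr.link("set", index=bridge, state="up")
```

A hardware VTEP in a ToR switch does essentially the same job in silicon, at much higher throughput.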
It Doesn’t Make Sense to Virtualize 80% of the Servers
A networking engineer was trying to persuade me of the importance of hardware VXLAN VTEPs. We quickly agreed that physical-to-virtual gateways are the primary use case, and he tried to illustrate his point by saying “Imagine you have 1000 servers in your data center and you manage to virtualize 80% of them. How will you connect them to the other 200?” to which I replied, “That doesn’t make any sense.” Here’s why.
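One way to see the mismatch in the framing is to put numbers on it (the consolidation ratio below is my assumption, picked purely for illustration): virtualized servers don’t stay on separate hardware, they collapse onto a handful of hypervisor hosts.

```python
total_servers = 1000
virtualized = int(total_servers * 0.8)       # 800 workloads become VMs
bare_metal = total_servers - virtualized     # 200 stay physical

vms_per_host = 20                            # assumed consolidation ratio
hypervisor_hosts = virtualized // vms_per_host

print("%d bare-metal servers vs %d hypervisor hosts"
      % (bare_metal, hypervisor_hosts))      # 200 vs 40
# The physical network sees ~240 attached devices, not 1000, and the
# ratio of physical to virtualized attachment points is 200:40, not 200:800.
```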
Security in Leaf-and-Spine Fabrics
One of my readers sent me an interesting question:
How does one impose a security policy on servers connected via a Clos fabric? The traditional model of segregating servers into VLANs/zones and enforcing policy with a security device doesn’t fit here. Can VRF-lite be used on the mesh to accomplish segregation?
Good news: the security aspects of leaf-and-spine fabrics are no different from those of more traditional architectures.
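As a rough illustration of the VRF-based segregation idea, here’s a pyroute2 sketch using Linux VRF devices (available in recent kernels) as a stand-in for VRF-lite on the leaf switches; VRF names, table IDs and interface names are assumptions.

```python
from pyroute2 import IPRoute

ipr = IPRoute()
for vrf_name, table, member in (("vrf-prod", 10, "eth1"),
                                ("vrf-dmz", 20, "eth2")):
    # each VRF gets its own routing table
    ipr.link("add", ifname=vrf_name, kind="vrf", vrf_table=table)
    vrf = ipr.link_lookup(ifname=vrf_name)[0]
    ipr.link("set", index=vrf, state="up")
    # interfaces enslaved to different VRFs cannot reach each other
    # unless routes are explicitly leaked between the tables
    member_idx = ipr.link_lookup(ifname=member)[0]
    ipr.link("set", index=member_idx, master=vrf)
```

Traffic between the VRFs can then be forced through a firewall, which is exactly the traditional zoning model with VRFs taking the place of VLANs.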
Why Exactly Would You Want a Nexus 7000 in There?
Network designers (and smart consulting and system integration companies) often use ExpertExpress to get a second opinion on a design someone put together using technologies they’re not thoroughly familiar with. Not surprisingly, some of those third-party designs aren’t exactly optimal.
A while ago I was asked to review a data center “design” proposed to my customer by a system integrator. It had a pair of Nexus 5500 switches connecting servers and storage to a single Nexus 7000, which was then connected to WAN edge routers.
Should We Use Redundant Supervisors?
I had a nice chat with Doug Gourlay from Arista at Interop Las Vegas, and he made an interesting remark along the lines of “in leaf-and-spine fabrics it doesn’t make sense to use redundant supervisors in switches – they cause more problems than they solve.”
As always, in the end it all depends on your environment and use case, but he definitely has a point; good engineering always works better than a heap of kludges.