Category: load balancing
Scalable Load Balancing with Avi Networks on Software Gone Wild
How many times have you received exact specifications of the traffic the e-commerce platform you're about to deploy will generate? And how do you buy a load balancer (application delivery controller in marketese) to support that (somewhat unknown) amount of traffic? In most cases you buy a box that's several times too big for the traffic the site receives most of the time, and it still crashes under peak load.
Do We Need QoS in the Data Center?
Whenever I get asked about QoS in the data center, my stock reply is “bandwidth is cheaper than QoS-induced complexity.” That's definitely true in most cases, and ideally the elephant-flow problems should be solved higher up in the application stack, not with network-layer kludges, but are there situations where you actually need data center QoS?
Per-Packet Load Balancing on WAN links
One of my readers got an interesting idea: he’s trying to make the most of his WAN links by doing per-packet load balancing between a 30 Mbps and a 50 Mbps link. Not exactly surprisingly, the results are not what he expected.
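To see why, here's a toy simulation (my own sketch with illustrative numbers, not taken from the post): spraying equal-size packets round-robin across the two links delivers them out of order, because each packet takes longer to serialize on the 30 Mbps link. Persistent reordering triggers duplicate ACKs and spurious fast retransmits, so a single TCP flow frequently ends up slower than it would be on the faster link alone.

```python
# Toy simulation (not from the original post): spray equal-size packets
# round-robin across a 30 Mbps and a 50 Mbps link and look at the order
# in which they arrive at the far end. All numbers are illustrative.

PACKET_BITS = 1500 * 8                    # 1500-byte packets
LINKS = {"30M": 30e6, "50M": 50e6}        # link speeds in bits per second

def arrivals(num_packets):
    """Return (arrival_time, sequence_number) pairs for round-robin spraying."""
    next_free = {name: 0.0 for name in LINKS}   # when each link can start sending
    out = []
    names = list(LINKS)
    for seq in range(num_packets):
        link = names[seq % len(names)]           # naive per-packet round robin
        start = next_free[link]
        finish = start + PACKET_BITS / LINKS[link]
        next_free[link] = finish
        out.append((finish, seq))
    return sorted(out)                           # order seen by the receiver

if __name__ == "__main__":
    received = [seq for _, seq in arrivals(20)]
    print("receive order:", received)
    reordered = sum(1 for a, b in zip(received, received[1:]) if b < a)
    print("out-of-order steps:", reordered)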
Case Study: Combine Physical and Virtual Appliances in a Private Cloud
Cloud builders often use my ExpertExpress service to validate their designs. Tenant onboarding into a multi-tenant (private or public) cloud infrastructure is a common problem, and tenants frequently want to retain their existing network services appliances (firewalls and load balancers).
The Combine Physical and Virtual Appliances in a Private Cloud case study describes a typical solution that combines per-tenant virtual appliances with frontend physical appliances.
Tech Talks: Load Sharing and Entropy Labels in MPLS Networks
Load sharing in MPLS networks is always an interesting topic, and we couldn’t possibly avoid it during our MPLS-focused Tech Talks – watch the video.
After discussing the load-sharing intricacies, we briefly touched on the concept of entropy labels.
Improving ECMP Load Balancing with Flowlets
Every time I write about unequal traffic distribution across a link aggregation group (LAG, aka EtherChannel or Port Channel) or ECMP fabric, someone asks a simple question: “is there no way to reshuffle the traffic to make it more balanced?”
TL&DR summary: there are ways to do it, and some vendors already implemented them.
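As a teaser, here's a minimal sketch of the flowlet idea (my own illustration with made-up numbers, not any particular vendor's implementation): keep a flow on its current link while its packets arrive back-to-back, and re-hash it only after an idle gap long enough to guarantee the previously sent packets have drained from the network, so no reordering can occur.

```python
# Minimal flowlet-switching sketch (my own illustration, not a vendor
# implementation). A flow sticks to its current link while packets arrive
# back-to-back; after an idle gap longer than FLOWLET_TIMEOUT the next burst
# (a new "flowlet") may safely be assigned to a different link.
import random
import time

FLOWLET_TIMEOUT = 0.0005      # 500 us; must exceed the path-delay difference
LINKS = ["uplink-1", "uplink-2", "uplink-3", "uplink-4"]
flowlet_table = {}            # flow 5-tuple -> (chosen link, last packet time)

def pick_link(flow, now):
    entry = flowlet_table.get(flow)
    if entry and now - entry[1] < FLOWLET_TIMEOUT:
        link = entry[0]               # same flowlet: keep the current link
    else:
        link = random.choice(LINKS)   # new flowlet: pick again (a real
                                      # implementation would pick the least
                                      # loaded link instead of a random one)
    flowlet_table[flow] = (link, now)
    return link

if __name__ == "__main__":
    flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 2049)
    print("burst 1:", {pick_link(flow, time.monotonic()) for _ in range(5)})
    time.sleep(0.002)                 # idle gap ends the flowlet
    print("burst 2:", {pick_link(flow, time.monotonic()) for _ in range(5)})
```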
Load Balancing Elephant Storage Flows
Olivier Hault sent me an interesting challenge:
I cannot find any simple network-layer solution that would allow me to use the total available bandwidth between a hypervisor with multiple uplinks and a Network Attached Storage (NAS) box.
TL&DR summary: you cannot find it because there’s none.
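The underlying reason (a quick sketch of my own, not from the post): every flow-based hashing scheme, be it LAG or ECMP, deliberately keeps all packets of a single flow on one link to avoid reordering, so a single NFS or iSCSI session between the hypervisor and the NAS box can never use more than one uplink; only multiple parallel sessions get spread across the links.

```python
# Why no network-layer trick can help a single storage flow (my own
# illustration): flow-based hashing pins every packet of one 5-tuple to the
# same uplink to avoid reordering, so one TCP session is limited to the
# bandwidth of one link no matter how many uplinks exist.
import hashlib

UPLINKS = ["uplink-1", "uplink-2"]

def uplink_for(flow):
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]

# one storage session = one 5-tuple = one uplink, for every single packet
nfs_flow = ("10.0.1.10", "10.0.1.20", 6, 50211, 2049)
print("single session:", uplink_for(nfs_flow))

# only multiple sessions (MPIO, pNFS, SMB Multichannel...) use both uplinks
for src_port in range(50211, 50215):
    flow = ("10.0.1.10", "10.0.1.20", 6, src_port, 2049)
    print(src_port, "->", uplink_for(flow))
```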
Load Balancing in Google Network
Todd Hoff (of HighScalability fame) sent me a link to an interesting video describing the load-balancing mechanisms used at Google and how they evolved over time.
If the rest of the blog post feels like Latin, you SHOULD watch the Load Balancing and Scale-Out Application Architecture webinar.
The beginning of the story resembles traditional enterprise solutions:
So You’re an Open Source Shop? Really?
I carried out an interesting quiz during one of my Interop workshops:
- How many use Linux-based servers? Almost everyone raised their hands.
- How many use Apache or Tomcat web servers? Yet again, almost everyone.
- How many run applications written in PHP, Python, Ruby…? Same crowd (probably even a bit more).
- How many use Nginx, Squid or HAProxy for load balancing? Very few.
Is there a rational explanation for this seemingly nonsensical result?
MPLS Load Sharing – Data Plane Considerations
In a previous blog post I explained how load sharing across an LDP-controlled MPLS core works. Now let's focus on another detail: how are packets assigned to individual paths across the core?
2014-08-14: Additional information was added to the blog post based on comments from Nischal Sheth, Frederic Cuiller and Tiziano Tofoni. Thank you!
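To give a rough idea of the mechanics (a much-simplified sketch of my own, not any vendor's exact algorithm): an LSR typically hashes on the label stack and, because the stack alone often carries too little entropy, speculatively peeks at the payload; if the first nibble is 4, it assumes an IPv4 packet and mixes the inner addresses into the hash.

```python
# Much-simplified sketch (mine) of ECMP path selection on an LSR: hash the
# label stack, and if the payload looks like IPv4 (first nibble = 4), add the
# inner source and destination addresses for extra entropy.
import hashlib

def path_for(label_stack, payload, num_paths):
    key = b"".join(label.to_bytes(3, "big") for label in label_stack)
    if payload and payload[0] >> 4 == 4 and len(payload) >= 20:
        key += payload[12:20]     # inner IPv4 source + destination addresses
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:2], "big") % num_paths

if __name__ == "__main__":
    # fake 20-byte IPv4 header: version/IHL 0x45 ... src 10.0.0.1, dst 10.0.0.2
    fake_ipv4 = bytes([0x45, 0, 0, 40]) + bytes(8) + \
                bytes([10, 0, 0, 1, 10, 0, 0, 2])
    print(path_for([16001, 24005], fake_ipv4, num_paths=4))
    # Note: an Ethernet-over-MPLS payload whose first byte happens to start
    # with nibble 4 would pass the same check, which is why this heuristic
    # can mis-hash pseudowire traffic.
```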
Load Sharing in MPLS Core
Here’s a question that bothered me for years till I finally gave up and labbed it: does ECMP load sharing work in an MPLS core? More specifically, will an LSP split into multiple LSPs?
It’s OK to Let Developers Go @ Amazon Web Services, but Not at Home? You Must Be Kidding!
Recently I was discussing the benefits and drawbacks of virtual appliances, software-defined data centers, and self-service approach to application deployment with a group of extremely smart networking engineers.
After the usual set of objections, someone said, “but if we don't become more flexible, the developers will simply go to Amazon. In fact, they already use Amazon Web Services.”
Load Balancing Across IP Subnets
One of my readers sent me this question:
I have a data center with huge L2 domains. I would like to move routing down to the top of the rack; however, I'm stuck with a load-balancing question: how do load balancers work if you have a routed network and pool members that are multiple hops away? How does that work with Direct Server Return?
There are multiple ways to make load balancers work across multiple subnets:
Scale-Out Load Balancing with OpenFlow
When OpenFlow was still fresh and exciting, someone made quite a name for himself by proposing a global load-balancing solution that would install per-session OpenFlow entries in every core switch around the world. Clearly a great idea, mimicking the best experiences we had with ATM SVCs.
Meanwhile some people started using OpenFlow in real-life networks for coarse-grained load balancing that improves the scalability of stateful network services. For more details, watch the video recorded during the Real Life OpenFlow-based SDN Use Cases webinar.
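To give a feel for what coarse-grained means here, a back-of-the-envelope sketch (mine, not taken from the webinar): instead of per-session flow entries, the controller pre-computes a handful of wildcard rules that map client source prefixes onto the scaled-out appliance instances, so the switch holds a few dozen entries regardless of how many sessions are active.

```python
# Back-of-the-envelope sketch (mine, not from the webinar) of coarse-grained
# load balancing: a few wildcard rules mapping client source prefixes to
# scaled-out stateful appliance instances, instead of per-session entries.
from ipaddress import ip_network

CLIENT_POOL = ip_network("10.0.0.0/22")          # illustrative client range
APPLIANCES = ["fw-1", "fw-2", "fw-3", "fw-4"]    # scaled-out instances

def build_rules():
    """One wildcard match per /24 source prefix: four entries, not thousands."""
    rules = []
    for i, prefix in enumerate(CLIENT_POOL.subnets(new_prefix=24)):
        rules.append({"match": {"ipv4_src": str(prefix)},
                      "action": {"output": APPLIANCES[i % len(APPLIANCES)]}})
    return rules

for rule in build_rules():
    print(rule)
```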
iOS uses Multipath TCP – Does It Matter?
When Apple launched the new release of iOS last autumn, networking gurus realized it uses MP-TCP (Multipath TCP), a recent development that allows a single TCP socket (as presented to the higher layers of the application stack) to use multiple parallel TCP sessions. Does that mean we're getting closer to fixing the TCP/IP stack?
TL&DR summary: Unfortunately not.
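For what it's worth, the application-facing API barely changes. As an illustration (on a recent Linux kernel with MPTCP support, 5.6 or later, not on iOS), opting a client into MP-TCP takes nothing more than a different protocol number; the kernel manages the subflows behind the single socket.

```python
# Illustration on Linux (kernel 5.6+ with MPTCP enabled), not iOS: from the
# application's point of view an MPTCP connection is still one socket; only
# the protocol number changes, and the kernel manages the subflows.
import socket

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)   # 262 on Linux

def open_connection(host, port):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # kernel without MPTCP support: fall back to plain TCP
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    return sock
```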