Optimize Data Center Infrastructure: Build an Optimized Fabric
I published the last part of my Optimize Data Center Infrastructure series: build an optimized data center fabric.
To learn more about data center fabric designs, check out the new online course or enroll in the Spring 2018 session of the Building Next-Generation Data Center course.
Why are we still stuck on layer-2 scaling and multipathing? Stretching layer 2 is not a viable solution.
Instead of layer-2 multipathing we use anycast DNS or regional IPs, which cover the need. An anycast gateway or a regional IP gateway works perfectly fine, and for server provisioning DHCP options are enough. There is no need for L2 scaling.
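To illustrate the point, here's a minimal client-side sketch (the hostname app.example.com is made up): with an anycast or regional service IP published in DNS, a client only needs layer-3 reachability, so there is nothing to stretch at layer 2.

```python
import socket

# Hypothetical service name; in an anycast or regional-IP design the DNS
# answer points at an address that routing delivers to the nearest instance.
addr_info = socket.getaddrinfo("app.example.com", 443, proto=socket.IPPROTO_TCP)
family, socktype, proto, _, sockaddr = addr_info[0]

with socket.socket(family, socktype, proto) as s:
    s.settimeout(5)
    s.connect(sockaddr)          # plain layer-3 connectivity, no stretched VLAN
    print("connected to", sockaddr)
```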
Stretching L2 is not viable for two reasons:
1. The only thing it buys you is preserving the layer-3 IP address and VLAN mobility.
2. Layer-2 tunnels such as OTV or VXLAN are extra features with licensing costs, and many vendors don't recommend a VXLAN-based solution.
Enterprise networks need to move to the cloud, which is easy to implement. Azure, for example, runs on layer 3, offers multiple Availability Zones, and maps clusters to AZs for optimal redundancy.
Everyone is looking at whitebox switching or vendor switching (OEM and ODM). Vendor devices in the ToR role are absolutely fine. In enterprises the topology shrinks to a two-layer model where the ToR connects to a collapsed aggregation layer, and layer 2 remains the enterprise flavour, which is an absolute mess.
Onboarding a service into the cloud is a much easier and safer approach. In networking terms, layer 3 is the way forward. An EBGP design with multipath and allowas-in solves the problem and is much more straightforward (a sketch follows).
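As a rough illustration of that design, here is a sketch that renders an FRR-style BGP stanza for a leaf switch; the ASNs, interface names, and ECMP width are assumptions, not a recommendation.

```python
# Illustrative only: generate an FRR-style leaf BGP config using EBGP with
# multipath and allowas-in (ASNs and interface names are made up).
LEAF_ASN = 65101
SPINE_ASN = 65000
UPLINKS = ["swp1", "swp2", "swp3", "swp4"]


def leaf_bgp_config(leaf_asn, spine_asn, uplinks):
    lines = [f"router bgp {leaf_asn}"]
    for intf in uplinks:
        # one unnumbered EBGP session per spine uplink
        lines.append(f" neighbor {intf} interface remote-as {spine_asn}")
    lines.append(" address-family ipv4 unicast")
    lines.append("  maximum-paths 64")                   # ECMP across all spines
    for intf in uplinks:
        lines.append(f"  neighbor {intf} allowas-in 1")  # tolerate our own AS once
    lines.append(" exit-address-family")
    return "\n".join(lines)


print(leaf_bgp_config(LEAF_ASN, SPINE_ASN, UPLINKS))
```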
We follow a Clos topology. The two-layer model doesn't scale, because we need more port density and the failure domains become very evident. The factors to consider are the oversubscription ratio, port density, prefix length (IPv4 and IPv6), breakout cabling, RIB and FIB scaling, device redundancy (forwarding, route processor, power, fans), and prefix aggregation.
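For what it's worth, the oversubscription and port-density side of that list is simple arithmetic; a back-of-the-envelope sketch with assumed port counts (not a design):

```python
# Rough fabric sizing; all numbers are illustrative assumptions.
leaf_downlinks = 48        # 25G server-facing ports per leaf
downlink_speed_g = 25
leaf_uplinks = 6           # 100G uplinks per leaf, one per spine
uplink_speed_g = 100

downlink_bw = leaf_downlinks * downlink_speed_g    # 1200 Gbps toward servers
uplink_bw = leaf_uplinks * uplink_speed_g          # 600 Gbps toward spines
print(f"oversubscription ratio: {downlink_bw / uplink_bw:.1f}:1")   # 2.0:1

spine_ports = 64           # 100G ports per spine switch
max_leaves = spine_ports   # one uplink from every leaf to every spine
print(f"max leaves with {leaf_uplinks} spines: {max_leaves}")
print(f"max server ports: {max_leaves * leaf_downlinks}")
```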
The best bet for an enterprise is to move into the cloud, as it is fast, scalable, reliable in terms of redundancy, and it scales across regions, data centers and edge locations. We also offer caching and edge last-mile connectivity.
What are your thoughts on this?
Nowadays routers also support VXLAN, but at what cost do we need an overlay? Why can't we use NAT instead of an overlay? An overlay means one header on top of another header, while NAT virtualizes the IP addressing much like an overlay does: backend IPs are NATted with SNAT (source NAT) in one direction.
Why do we need overlay technologies?
1. To wrap one header over another, giving dedicated routing without dedicated infrastructure.
2. To avoid exposing the inner IP header, i.e. the customer IP address, to the underlying network fabric.
3. To preserve TCAM (LEM and LPM tables) on the network fabric devices.
VXLAN on a router is equivalent to GRE or IPsec, as forwarding decisions are made on the destination IP of the inner header after the outer header is decapsulated. In that sense VXLAN is equivalent to GRE, IPsec or IP-in-IP.
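A quick way to see the "one header on top of another" point is to stack the headers with scapy (a sketch assuming scapy is installed; addresses and VNI are made up):

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: the tenant/customer packet.
inner = Ether(dst="00:00:00:aa:bb:cc") / IP(src="10.1.1.10", dst="10.1.1.20")

# Outer headers added by the VXLAN tunnel endpoint.
outer = (Ether() /
         IP(src="192.0.2.1", dst="192.0.2.2") /     # VTEP addresses
         UDP(sport=49152, dport=4789) /             # VXLAN uses UDP port 4789
         VXLAN(vni=5000, flags=0x08))

frame = outer / inner
frame.show()                                        # dump the stacked headers
print("encapsulation overhead:", len(frame) - len(inner), "bytes")   # ~50 bytes
```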
VXLAN on hosts is the current flavour, covering any server-to-server or server-to-host traffic, where the host may sit somewhere on the Internet.
The number of VXLAN sessions on a host then depends on the number of flows the server can handle; if a server has only four to six threads available for packet processing, it can process only that many parallel flows at the same time.
The problem with VXLAN on hosts is that the host's processor and network stack have to encapsulate and decapsulate at wire rate, which is not ideal: we already expect hosts to process data at wire rate with full disk speed, RAM bandwidth, and the default TCP/IP stack.
We don't want any additional processing overhead pushed onto the hosts (see the back-of-the-envelope numbers below).
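To put "wire rate" in perspective, a back-of-the-envelope calculation (link speed, frame size and overhead are assumptions, not measurements):

```python
# Rough packets-per-second a host must encapsulate/decapsulate to keep up
# with its uplink; numbers are illustrative only.
line_rate_bps = 10e9          # 10 Gbps server NIC
frame_bytes = 1500            # MTU-sized frames
vxlan_overhead_bytes = 50     # outer Ethernet + IP + UDP + VXLAN headers

pps = line_rate_bps / ((frame_bytes + vxlan_overhead_bytes) * 8)
print(f"~{pps:,.0f} packets per second")   # roughly 800K pps per 10G of traffic
```

Without NIC offloads, every one of those packets burns CPU cycles the application could otherwise use.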
That's why I don't endorse VXLAN- or NVGRE-based solutions; we have plenty of other options to solve the problem.
The problem VXLAN solves is distinguishing two different flows on a host and giving them different packet-level treatment, i.e. at the QoS level and the memory-bandwidth level on the host, which we could just as well solve on the ToR.
Hope this makes sense.
I hope the networking industry moves away from this solution.