Network Virtualization at ToR switches? Makes as much sense as IP-over-APPN
One of my blogger friends sent me an interesting observation:
After talking to networking vendors I'm inclined to think they are going to focus on a mesh of overlays from the ToR, with possible use of overlays between the vSwitch and the ToR too if desired – drawing analogies to MPLS, with the ToR as a PE and the vSwitch as a CE. Aside from selling more hardware, I'm not drawn to a solution like this because it doesn't help with full network virtualization or a network abstraction for VMs.
The whole situation reminds me of the good old SNA and APPN days with networking vendors playing the IBM part of the comedy.
I apologize to the younglings in the audience – the rest of the blog post will sound like total gibberish to you – but I do hope the grumpy old timers will get a laugh or two out of it.
Once upon a time, there were mainframes (and nobody called them clouds), and all you could do was connect your lowly terminal (80 × 24 fluorescent green characters) to one of them. Not surprisingly, networking engineers built hub-and-spoke networks with the mainframes (or rather their sidekicks, the Front End Processors) tightly controlling all the traffic. The whole thing was called Systems Network Architecture (SNA) and life was good (albeit a bit slow).
The original 3270 terminal (source: Wikipedia)
Years later, seeds of evil started appearing in the hub-and-spoke wonderland. There were rumors of coax cables being drilled and vampire taps being installed onto said cables. Workstations were able to communicate without the involvement of the central controller ... and there was a new protocol called Internet Protocol that powered all these evil ideas.
Vampire biting marks on an original yellow thick coax
Not surprisingly, IBM (the creator of SNA) tried a tweak-embrace-and-extend strategy. First they introduced independent logical units (clients and servers in IP terminology), later on they launched what seemed like a Crazy Ivan (not related to my opinions) to the orthodox hub-and-spoke believers: Advanced Peer-to-Peer Networking (APPN), still using the time-tested (and unbelievably slow) SNA protocols.
What is APPN (Source: Cisco)
At the same time IBM tried to persuade us that 4 Mbps Token Ring was faster than 10 Mbps switched Ethernet. Brocade recently pulled a similar stunt, telling us how Gen 5 Fibre Channel (also known as 16 Gbps FC) is better than anything else (including 40GE FCoE) – yet another proof that marketers never learn from past blunders.
Faced with the dismal adoption of APPN (I never saw a live network running APPN, although I was told some people were using it for AS/400 networking) and the inevitable rise of IP, IBM tried yet another approach: let's transport IP over APPN (or maybe that's just one of the recurring nightmares I keep having). Crazy as it sounds, I remember someone proposing to run a datagram service (IP) on top of a layer-7 (LU6.2) transport ... and there are people today running IP over SSH, proving yet again that every bad idea resurfaces after a while.
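The IP-over-SSH aside isn't hypothetical, by the way: OpenSSH ships a built-in layer-3 tunnel mode (the -w option, with PermitTunnel enabled on the server). A minimal sketch, assuming root on both ends and a placeholder hostname, mostly useful for illustrating why running datagrams over a reliable byte stream (TCP-over-TCP today, IP-over-LU6.2 back then) is questionable layering:

```shell
# IP over SSH with OpenSSH's layer-3 tunnel mode.
# Assumes: root on both ends, "PermitTunnel yes" in the remote sshd_config,
# and a placeholder hostname (remote.example.com) and addressing plan.

# Create tun0 on both ends and background the SSH session carrying it
# (this is the canonical invocation from the ssh(1) manual page):
ssh -f -w 0:0 root@remote.example.com true

# Address the local tunnel interface ...
ip addr add 10.0.0.1/30 dev tun0
ip link set tun0 up

# ... and the remote one:
ssh root@remote.example.com \
  'ip addr add 10.0.0.2/30 dev tun0 && ip link set tun0 up'

# IP datagrams now ride inside a TCP session. Under packet loss the two
# stacked retransmission loops fight each other (TCP-over-TCP meltdown),
# which is exactly why this remains a bad idea.
ping -c 3 10.0.0.2
```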
Update 2013-06-09: IP-over-APPN wasn't just a recurring nightmare. François Roy provided the necessary detail in his comment: IBM implemented it in 2217 Nways Multiprotocol Concentrator. Straight from the documentation: "TCP/IP data is routed over SNA using IBM's multiprotocol transport networking (MPTN) formats."
Regardless of IBM’s huge marketing budget, the real world took a different turn. First we started transporting SNA over IP (remember DLSw?), then deployed Telnet 3270 (TN3270) gateways to give PCs TCP/IP-based access to mainframe applications. Oh, and IBM eventually ended up shipping APPN over IP (Enterprise Extender).
A few years later, IBM was happily selling Fast Ethernet mainframe attachments and running a TCP/IP stack with TN3270 on the mainframes (you see, they never really cared about networking; their core businesses are services, software and mainframes) ... and, back in our world, one of the first overlay virtual networking implementations was VXLAN in the Nexus 1000V.
And so I finally managed to mention overlay virtual networking ... but don’t rush to conclusions; before drawing analogies keep in mind that most organizations couldn’t get rid of the mainframes: there were millions of lines of COBOL code written for an environment that could not be easily replicated anywhere else. Migrating those applications to any other platform was mission impossible.
On the other hand, all it takes in the server virtualization world is an upgrade to vSphere 5.1 (or Hyper-V 3.0) and a hardware refresh cycle (to flush the physical appliances out of the data center), and the networking vendors will be left wondering where all the VMs and VLANs went. And you did notice that HP finally delivered TRILL and EVB on their ToR switches, didn’t you?
Complexity shifts to either the virtual switch layer or the ToR layer, and which one a vendor recommends may well depend on what they're selling!
Please feel free to totally disagree with me and prove me wrong ... in which case you might decide to stand behind your ideas and stop being anonymous.
There are functions I too don't believe belong in the core network, such as conversation-based network services. But the core network fabric should continue to innovate in capacity, scalable address-based connectivity, and scalable multi-tenancy (QoS and network virtualization), even down to the ToR. Multi-tenancy support in the ToR enables possibilities much broader than just the DC.