Network Virtualization at ToR switches? Makes as much sense as IP-over-APPN

One of my blogger friends sent me an interesting observation:

After talking to networking vendors, I'm inclined to think they are going to focus on a mesh of overlays from the ToR, with possible use of overlays between the vSwitch and the ToR as well, drawing an analogy to MPLS with the ToR as a PE and the vSwitch as a CE. Aside from selling more hardware, I'm not drawn toward a solution like this because it doesn't help with full network virtualization and a network abstraction for VMs.

The whole situation reminds me of the good old SNA and APPN days, with networking vendors playing IBM's part in the comedy.

I apologize to the younglings in the audience – the rest of the blog post will sound like total gibberish to you – but I do hope the grumpy old timers will get a laugh or two out of it.

Once upon a time, there were mainframes (and nobody called them clouds), and all you could do was connect your lowly terminal (80 x 24 fluorescent green characters) to a mainframe. Not surprisingly, the networking engineers were building hub-and-spoke networks, with the mainframes (or rather their sidekicks, the Front End Processors) tightly controlling all the traffic. The whole thing was called Systems Network Architecture (SNA) and life was good (albeit a bit slow).


The original 3270 terminal (source: Wikipedia)

Years later, seeds of evil started appearing in the hub-and-spoke wonderland. There were rumors of coax cables being drilled and vampire taps being installed onto said cables. Workstations were able to communicate without the involvement of the central controller ... and there was a new protocol called Internet Protocol that powered all these evil ideas.


Vampire bite marks on an original yellow thick coax

Not surprisingly, IBM (the creator of SNA) tried a tweak-embrace-and-extend strategy. First they introduced independent logical units (clients and servers in IP terminology); later on they launched what seemed like a Crazy Ivan (not related to my opinions) to the orthodox hub-and-spoke believers: Advanced Peer-to-Peer Networking (APPN), still using the time-tested (and unbelievably slow) SNA protocols.


What is APPN (Source: Cisco)

At the same time, IBM tried to persuade us that 4 Mbps Token Ring works faster than 10 Mbps switched Ethernet. Brocade recently tried a similar stunt, trying to tell us how Gen 5 Fibre Channel (also known as 16 Gbps FC) is better than anything else (including 40GE FCoE) – yet more proof that marketers never learn from past blunders.

Faced with the dismal adoption of APPN (I haven't seen a live network running APPN, although I was told some people were using it for AS/400 networking) and the inevitable rise of IP, IBM tried yet another approach: let's transport IP over APPN (or maybe it's just one of the recurring nightmares I'm having). Crazy as it sounds, I remember someone proposing to run a datagram service (IP) on top of a layer-7 transport (LU6.2) ... and there are people today running IP over SSH, proving yet again that every bad idea resurfaces after a while.
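In case you're wondering what IP-over-SSH looks like in practice, here's a minimal sketch using OpenSSH's built-in layer-3 tunneling (the -w option together with the PermitTunnel server setting). The hostname and addresses are placeholders, and you need root (or equivalent tun-device privileges) on both ends:

    # On the SSH server, allow tun devices in sshd_config, then restart sshd:
    #   PermitTunnel yes

    # On the client: create tun0 on both ends, carried inside the SSH session
    # (remote.example.com and the 10.99.0.0/30 addresses are made up)
    sudo ssh -f -N -w 0:0 root@remote.example.com

    # Address both tunnel endpoints and bring them up
    sudo ip addr add 10.99.0.1/30 dev tun0
    sudo ip link set tun0 up
    ssh root@remote.example.com "ip addr add 10.99.0.2/30 dev tun0; ip link set tun0 up"

    # IP datagrams now ride inside a reliable, encrypted TCP session:
    # a datagram service on top of a connection-oriented transport.
    ping 10.99.0.2

It works, but it has the same fundamental problem as running a datagram service over any reliable, connection-oriented transport: retransmissions and flow control get stacked on top of each other, and when the underlying session stalls, everything riding on it stalls too.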

Update 2013-06-09: IP-over-APPN wasn't just a recurring nightmare. François Roy provided the necessary details in his comment: IBM implemented it in the 2217 Nways Multiprotocol Concentrator. Straight from the documentation: "TCP/IP data is routed over SNA using IBM's multiprotocol transport networking (MPTN) formats."

Despite IBM's huge marketing budget, the real world took a different turn. First we started transporting SNA over IP (remember DLSw?), then we deployed Telnet 3270 (TN3270) gateways to give PCs TCP/IP-based access to mainframe applications. Oh, and these days IBM seems to offer APPN over IP as well.

A few years later, IBM was happily selling Fast Ethernet mainframe attachments and running a TCP/IP stack with TN3270 on the mainframes (you see, they never really cared about networking – their core businesses are services, software and mainframes) ... and, in a similar twist, one of the first overlay virtual networking implementations was VXLAN in Cisco's Nexus 1000V.

And so I finally managed to mention overlay virtual networking ... but don't rush to conclusions; before drawing analogies, keep in mind that most organizations couldn't get rid of their mainframes: there were millions of lines of COBOL code written for an environment that could not easily be replicated anywhere else. Migrating those applications to any other platform was mission impossible.

On the other hand, all it takes in the server virtualization world is an upgrade to vSphere 5.1 (or Hyper-V 3.0) and a hardware refresh cycle (to flush the physical appliances out of the data center), and the networking vendors will be left wondering where all the VMs and VLANs disappeared to. And you did notice that HP finally delivered TRILL and EVB on their ToR switches, didn't you?

9 comments:

  1. Sadly, we still run SNA over IPX around here. And we can't get rid of it, much to my dismay (and the troubles that ensue).
  2. Pluribus Networks provides network virtualization at the ToR switch without using overlays or tunnels. Segmentation is done using VLANs (with EVB support) or VXLANs, with optimal L2/L3 forwarding at the ToR and multi-tenant support for dynamic routing protocols and NAT. It can also be managed as a single switch fabric.
  3. I thought the use case for VXLAN at the ToR promoted by vendors such as Arista was to connect bare-metal servers and other devices that don't support VXLAN natively. It could also be used as an ingress/egress point for a virtual segment in case you wanted to hook a firewall up to a virtual switch.
    Replies
    1. That's exactly right, but to *simplify* things for the server guys, maybe network vendors will promote VXLAN overlays as a full mesh between ToR devices. This could offer so-called visibility into parts of the physical network.

      Complexity shifts to the virtual switch layer or the ToR layer. Which one gets recommended may well depend on who is selling the solution!
    2. And who will provision VLANs on server-to-ToR links and VXLAN gateways on ToR switches ... all based on VM mobility? Why do you think it makes sense to create a Rube Goldberg machine when a simple one exists?
  4. OS X on this Apple laptop isn't running in a hypervisor, and that's just one example. Why must innovation in the data center core network cease while the world figures out how to put a trusted hypervisor in every device out there?
    Replies
    1. I never claimed that innovation should cease; there are plenty of opportunities for innovation in well-structured L3 data center networks: just look at what Arista is doing or how Petr Lapukhov built a DC network based on BGP. I'm just stating which potential solutions make no architectural sense to me, based on what I've seen work and fail in the past.

      Please feel free to totally disagree with me and prove me wrong ... in which case you might decide to stand behind your ideas and stop being A. Anonymous.
    2. Well, I started out anonymous, so I should probably stay that way for this one.

      There are functions that I, too, don't believe belong inside the core network, like conversation-based network services. But the core network fabric should continue to innovate in the areas of capacity, scalable address-based connectivity, and scalable multi-tenancy (QoS and network virtualization), even down to the ToR. Multi-tenancy support in the ToR enables possibilities much broader than just the data center.
  5. "let’s transport IP over APPN (or maybe it's just one of the recurring nightmares I'm having)" That brings up good (?) memories... In the 90's, I installed a WAN for 15 sites with 56kbps frame relay links, and the "routers" were IBM's 2217s, doing just that (encapsulating IP inside APPN). A few years later these were replaced by 2210s, true IP routers...