What is Nicira really up to?

Yesterday, The New York Times published an article covering Nicira, a semi-stealthy startup working on an open-source soft switch (Open vSwitch) and an associated OpenFlow-based controller, triggering immediate responses from GigaOm and Twilight in the Valley of the Nerds. While everyone got entangled in the buzzwords (or the lack of them), not a single article answered the question “what is Nicira really doing?” Let’s fix that.

Short summary: Nicira is developing a layer-3-aware soft switch for Xen/KVM that could route (not bridge) IPv4 and IPv6 in the hypervisor. Combined with the ability to establish dynamic GRE (or CAPWAP or VXLAN) tunnels between hypervisors, and an OpenFlow-based scalable controller, they could have the only solution I’m aware of that might be on par with Amazon EC2 VPC. If you want to build a huge, scalable IaaS cloud infrastructure, Nicira is definitely a company you should be talking to.

Disclaimer: My contacts within Nicira consistently and politely declined to answer my questions about their products. This blog post is pure speculation reverse-engineered from publicly available documentation.

There’s no doubt the main focus of Nicira is hypervisor-based soft switching and virtual networks built directly between hypervisors. They are behind Open vSwitch, which is now part of the official XenServer distribution. Open vSwitch can act like a simple layer-2 learning switch, or it can be programmed through OpenFlow extended with numerous Nicira-developed features (some of them submitted for inclusion in OpenFlow 1.2).
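The two modes of operation can be sketched with a few ovs-vsctl commands; the bridge and interface names and the controller address are placeholders for illustration:

```shell
# Create a bridge and attach a physical uplink; with no controller
# configured, Open vSwitch defaults to acting as a plain layer-2
# learning switch.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1

# Alternatively, hand the forwarding decisions to an external
# OpenFlow controller (192.0.2.10 is a placeholder address;
# 6633 was the standard OpenFlow TCP port at the time).
ovs-vsctl set-controller br0 tcp:192.0.2.10:6633
```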

It’s pretty evident to me that Nicira doesn’t want to get involved in core network programming. For example, Open vSwitch has embedded LACP and 802.1ag support (similar to Nexus 1000V LACP offload) instead of running control protocols through the OpenFlow controller, as a purist implementation would. They also rely on GRE tunnels to get data between hypervisors rather than trying to program hop-by-hop flows throughout the network. Obviously these people are smart and well aware of how hard it is to scale solutions with per-flow state.
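Setting up one of those inter-hypervisor GRE tunnels is a one-liner per endpoint; the remote IP address below is a made-up transport address:

```shell
# On hypervisor A: add a GRE tunnel port toward hypervisor B
# (192.0.2.2 stands in for B's transport-network address).
ovs-vsctl add-port br0 gre0 -- set interface gre0 \
    type=gre options:remote_ip=192.0.2.2

# Hypervisor B mirrors the configuration with A's address.
```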

The documentation for the Open vSwitch OpenFlow controller CLI is a treasure trove of information about Open vSwitch features (I don’t have the mental energy to browse through the source code, and I haven’t found a document explicitly describing Nicira’s OpenFlow extensions – if there’s one, please add the link in the comments). Assuming everything described there actually works, Open vSwitch can do (among other things):

  • Layer-2 forwarding (bridging),
  • 802.1Q VLAN tagging,
  • 802.1p-based QoS,
  • ARP forwarding to OpenFlow controller,
  • IPv4 and IPv6 matching (ACL) and forwarding,
  • Forwarding to and from GRE tunnels,
  • Modifying source and destination IP addresses and port numbers (NAT, server load balancing),
  • Load balancing across multiple links.
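Several of the capabilities above combine in a single ovs-ofctl flow entry. As a sketch (all addresses and port numbers in this example are made up), here is a destination-NAT flow that matches IPv4/TCP traffic to a virtual IP and rewrites the destination address and port:

```shell
# Match TCP traffic to a virtual IP (198.51.100.10:80) and rewrite
# the destination address and port before forwarding -- a simple
# destination-NAT / load-balancing flow.
ovs-ofctl add-flow br0 \
    "priority=100,tcp,nw_dst=198.51.100.10,tp_dst=80,\
actions=mod_nw_dst:10.0.0.5,mod_tp_dst:8080,output:2"
```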

Open vSwitch seems to support up to 255 independent forwarding tables, so it’s possible to implement numerous layer-3-aware VRFs within the same hypervisor.
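A VRF-like setup could be sketched with the resubmit action (a Nicira OpenFlow extension): table 0 classifies traffic into per-tenant tables, and each tenant table holds an independent L3 lookup. Port numbers, table numbers, and prefixes below are illustrative:

```shell
# Table 0 classifies packets by ingress port into per-tenant tables.
ovs-ofctl add-flow br0 "table=0,in_port=1,actions=resubmit(,10)"
ovs-ofctl add-flow br0 "table=0,in_port=2,actions=resubmit(,20)"

# Tables 10 and 20 hold independent destination-IP lookups, so the
# same prefix can exist in both -- overlapping address space per
# tenant, VRF-style.
ovs-ofctl add-flow br0 "table=10,ip,nw_dst=10.0.0.0/16,actions=output:3"
ovs-ofctl add-flow br0 "table=20,ip,nw_dst=10.0.0.0/16,actions=output:4"
```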

The OpenFlow actions listed in the above document do not include TTL handling, so it seems that although Open vSwitch is layer-3 aware, it’s still not a true router – it can forward packets between hypervisors (using GRE tunnels) based on destination IP addresses, but it cannot route between subnets.
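To illustrate the gap: a routed hop needs a MAC-header rewrite plus a TTL decrement, and only the first half is available. A flow like the sketch below (all MAC addresses and port numbers are placeholders) rewrites the Ethernet header toward the next hop, but there is no dec_ttl action to go with it:

```shell
# Router-style forwarding minus the TTL decrement: rewrite source
# and destination MAC addresses and forward -- everything a routed
# hop does except decrementing TTL, which the documented action set
# doesn't offer.
ovs-ofctl add-flow br0 \
    "ip,nw_dst=10.2.0.0/24,\
actions=mod_dl_src:aa:bb:cc:00:00:01,mod_dl_dst:aa:bb:cc:00:00:02,output:5"
```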

Still, the big question is: “how far did they get with their controller?” The NTT press release and the associated diagram hint at a VXLAN-like capability (with a proper OpenFlow-based control plane). I hope they didn’t stop at yet another L2-over-GRE solution; it’s perfectly possible to implement a solution equivalent to Amazon EC2 VPC with Open vSwitch (including IPv6 support, which is still missing from VPC), but we don’t know whether Nicira already has a truly L3-aware OpenFlow controller.

The virtual networking concepts and bigger picture

The concepts and challenges of virtualized networking are described in the Introduction to Virtualized Networking webinar (register or buy a recording); the cloud computing-related aspects of networking in the Cloud Computing Networking – Under the Hood one (register).

For more details, check out my Data Center 3.0 for Networking Engineers (recording) and VMware Networking Deep Dive (recording) webinars. Both of them are also available as part of the Data Center Trilogy and you get access to all above-mentioned webinars (and numerous others) as part of the yearly subscription.

2 comments:

  1. Jonathan Topping · 18 October, 2011 14:28

    Nicira has the same problem that VXLAN has...there is no native switch-router that talks this protocol where you can "escape the cloud" and bridge to a native VLAN/switch environment. Once our major network vendors provide an "interface vlan\n switchport vxlan(or openflow) <identifier>", that might be a "killer app". Until then, we dream of MPLS-speaking vSwitchRouters. :-)

  2. It just might be possible to get hardware termination of GRE tunnels working (I'd need to check the encapsulation OVS uses), but since GRE tunnels are mostly implemented as point-to-point interfaces, the configuration would be a major beast.

    However, if you're a cloud provider, you might not care – you give the customer the option of running yet another VM instance to do L3 routing or firewalling (like VMware does with vShield Edge). I still think virtual appliances are suboptimal, but that really doesn't matter as long as you charge by CPU cycles spent and the customers are willing to pay.


You don't have to log in to post a comment, but please do provide your real name/URL. Anonymous comments might get deleted.

Ivan Pepelnjak, CCIE#1354, is the chief technology advisor for NIL Data Communications. He has been designing and implementing large-scale data communications networks as well as teaching and writing books about advanced technologies since 1990. See his full profile, contact him or follow @ioshints on Twitter.