Q&A: Vendor OpenFlow Limitations
I rarely get OpenFlow questions these days; here’s one I got not so long ago:
I've just spent the last 2 days of my life consuming the ONF 1.3.3 white paper in addition to the $vendor SDN guide to try and reconcile what features it does or does not support and have come away disappointed...
You’re not the only one ;)
I was hoping I would have the ability to modify more L3 header fields. As it is, $vendor's OpenFlow 1.3 implementation only allows me to modify the source and destination MAC addresses and VLAN tags, as well as set the DSCP markings.
There are two reasons for that:
Hardware limitations – low-cost high-speed forwarding hardware doesn’t support all the packet mangling one would like to do. In particular, it’s hard to change IP addresses or port numbers to implement NAT or PAT. Changing MAC addresses is obviously easy, as that’s part of the regular inter-subnet forwarding pipeline, but even there some hardware implementations don’t allow you to change the source MAC address to any value other than the MAC address of the outgoing interface (or maybe it’s just a “suboptimal” software implementation). A sketch of the rewrites that typically do work follows below.
Software limitations – some OpenFlow implementations from major vendors are plain ridiculous. Juniper used to be the worst of them all, and a few others are not far behind. For details on what individual vendors support, watch the “OpenFlow Support” vendor-specific videos in my Data Center Fabrics webinar.
Keep in mind that I update those videos once or twice a year, and although vendor OpenFlow support is moving at glacial speeds, do check the release notes as well.
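To make the limits concrete, here’s a minimal sketch using the open-source Ryu controller and OpenFlow 1.3 (the class name, port numbers and addresses are made up for illustration). The MAC and DSCP rewrites are the ones most hardware implementations handle; the commented-out IP rewrite is the one that’s usually missing:

```python
# Minimal Ryu (OpenFlow 1.3) sketch: install one flow that performs the
# header rewrites most hardware pipelines support. Addresses and the
# output port are made-up illustration values.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class RewriteDemo(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def install_rewrite(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='192.0.2.10')
        actions = [
            parser.OFPActionSetField(eth_src='02:00:00:00:00:01'),  # usually works
            parser.OFPActionSetField(eth_dst='02:00:00:00:00:02'),  # usually works
            parser.OFPActionSetField(ip_dscp=46),                   # usually works
            # parser.OFPActionSetField(ipv4_dst='10.0.0.10'),  # often rejected by hardware
            parser.OFPActionOutput(2),
        ]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Whether the switch accepts such a flow depends on the vendor’s implementation: some return an OFPET_BAD_ACTION error, others silently install a flow that doesn’t do what you asked for.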
I was really hoping for the ability to also modify the source and destination IP addresses as well (want to use $vendor hardware as a high-speed NAT tool using OpenFlow). I have scoured the internet and can only find one vendor that makes a switch that allows me to at least modify the destination IP address. Is there literally no other switch that you're aware of that can do this?
If you need a NAT tool at gigabit speeds, you’d be far better off using an x86-based solution, for example something riding on Snabb Switch (Igalia made 4-over-6 tunneling work at 20 Gbps per core).
For terabit speeds you do need a hardware solution, but there are only a few chipsets on the market that can do that, and their NAT table is probably tied to TCAM and thus pretty small. There are a few exceptions that use NPUs or yet-to-be-unveiled hardware riding on unicorn dust, but you might not like the price tag.
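For illustration, this is roughly the flow entry a hardware NAT device would have to support. It’s a hypothetical Ryu OpenFlow 1.3 helper (function name, addresses and ports invented); most switching ASICs will reject the IP and port set-field actions, while software switches like Open vSwitch accept them:

```python
# Hypothetical destination-NAT flow (OpenFlow 1.3, Ryu parser objects).
# A real NAT would also need the reverse flow rewriting the source
# address/port, and OpenFlow still gives you no connection tracking.
def add_dnat_flow(dp, vip='198.51.100.10', real_ip='10.0.0.10', out_port=2):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    match = parser.OFPMatch(eth_type=0x0800, ip_proto=6,
                            ipv4_dst=vip, tcp_dst=80)
    actions = [
        parser.OFPActionSetField(ipv4_dst=real_ip),  # IP rewrite: rare in hardware
        parser.OFPActionSetField(tcp_dst=8080),      # port rewrite (PAT): rarer still
        parser.OFPActionOutput(out_port),
    ]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=200,
                                  match=match, instructions=inst))
```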
Finally, you might have to reformulate the problem (here’s an example of how you can do load balancing at scale). People doing load balancing with OpenFlow (or similar technologies) use anycast IP addresses on the servers with direct server return, like what Coho Data is doing for iSCSI/NFS traffic (for more details, watch the Network Services videos in my SDN Use Cases webinar).
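If you’re wondering what the reformulated approach looks like on the switch: with anycast VIPs and direct server return the switch never rewrites addresses, it just spreads flows across server-facing ports, which an OpenFlow 1.3 “select” group can express. A hypothetical sketch (again Ryu, with invented addresses and ports):

```python
# Hypothetical load-balancing setup (OpenFlow 1.3, Ryu parser objects):
# a "select" group hashes flows across two ports leading to servers that
# all share the anycast VIP; return traffic bypasses the rule (DSR).
def add_lb_group(dp, vip='203.0.113.10', ports=(1, 2)):
    ofp, parser = dp.ofproto, dp.ofproto_parser
    buckets = [parser.OFPBucket(weight=50, actions=[parser.OFPActionOutput(p)])
               for p in ports]
    dp.send_msg(parser.OFPGroupMod(datapath=dp, command=ofp.OFPGC_ADD,
                                   type_=ofp.OFPGT_SELECT, group_id=1,
                                   buckets=buckets))
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=vip)
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                         [parser.OFPActionGroup(group_id=1)])]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                  match=match, instructions=inst))
```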
As you've remarked in one of your webinars, NoviFlow's NoviSwitches are the exception to the rule in having implemented the full OpenFlow 1.3 spec (and most of 1.4 and even parts of 1.5), at high capacity (up to 512 Gbps), at line rate even for 100 Gbps ports. We have also implemented a series of OpenFlow Experimenter extensions providing many features not covered by the OpenFlow specifications: L2-L7 packet header AND payload matching and flow handling, BFD link monitoring, L2 and L3 VLAN support for GRE, VXLAN, MPLS and GTP, built-in hashing and symmetric hash-of-fields, and finally a whole slew of OAM-oriented features such as TACACS+ and RADIUS authentication, SNMP, a full CLI, and even gRPC automated provisioning for large installations. Anyone interested in high-performance fully programmable SDN forwarding planes is invited to visit our website and check out our product specs to verify the above.
How about starting with "I'm VP of Marketing @ Noviflow using a third-party blog to promote my products" ;) Also, I did mention "exceptions using NPU", which is what NoviFlow is doing.
NoviFlow is currently too small to have a real influence on the market. This may change in the future... :-)
Let's see... :-)
Only the Chinese vendors take ONF certifications somewhat seriously, but even that does not help much...
If the use case is more about flexibility than raw performance, the latest generation of ASICs in the HPE-Aruba Campus product line (for the data center, as Ivan said, you may need to wait for some new ASICs coming out soon) can offer reprogrammable OpenFlow pipelines and a wide range of matches and transformations, using dedicated TCAMs and hash tables. Your performance mileage may vary depending on your table and rule set.
The "HPE Switch Software OpenFlow v1.3 Administrator Guide" has a section about "OpenFlow Custom pipeline" that provides all the technical details and constraints, so you can jump straight into the details and skip the marketing.
Diego
Nice to hear from you after a long while. For everyone else: we covered one of the programmable pipelines (not sure whether it's the same one Diego mentioned) in this podcast:
http://blog.ipspace.net/2015/06/software-defined-hardware-forwarding.html