On SDN Controllers, Interconnectedness and Failure Domains
A long long time ago Colin Dixon wrote the following tweet in response to my Controller Cluster Is a Single Failure Domain blog post:
He’s obviously right, but I wasn’t talking about interconnected domains; I was talking about failure domains (yes, I know, you could argue they’re the same thing, but do read on).
BGP routers are clearly interconnected. They are also loosely coupled, and yet a bogus BGP update generated by one BGP router (or any other BGP speaker) can bring down other BGP routers.
Sometimes an unexpected update triggers a loss of BGP session (as was the case with long AS paths a while ago), sometimes a weird transitive attribute that is passed transparently by some implementations of BGP causes a crash in other implementations (and is thus able to trigger a crash in a BGP speaker several hops away). Hijacking attacks are also nothing new, so it might seem like BGP fares no better than the new centralized controller architectures.
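To make the “several hops away” part concrete, here’s a deliberately simplified toy model (not real BGP, and the attribute and vendor names are invented): routers that don’t recognize an optional transitive attribute keep it in the update and pass it along unchanged, so a malformed attribute crafted at the edge can reach, and crash, a vulnerable implementation far from its originator.

```python
class CrashOnAttr(Exception):
    """Simulates an implementation bug triggered by a specific attribute."""

def forward(update, path):
    """Pass an update through a chain of routers (labeled by implementation)."""
    for vendor in path:
        if vendor == "buggy" and "weird-transitive" in update["attrs"]:
            raise CrashOnAttr(f"crash at {vendor} router")
        # Routers that don't recognize an optional transitive attribute
        # keep it in the update and propagate it unmodified.
    return update

update = {"prefix": "192.0.2.0/24", "attrs": ["weird-transitive"]}
try:
    forward(update, ["tolerant", "tolerant", "buggy"])
except CrashOnAttr as e:
    print(e)  # the crash happens three hops from the originator
```

The tolerant routers never parse the attribute, which is exactly why it survives long enough to hit the one implementation that chokes on it.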
However, as I explain in more detail in the SDN Architectures and Deployment Considerations webinar, the crucial questions to consider are the following (Colin made approximately the same points in his follow-up tweets; read the whole thread for his view):
- What happens when a control plane (or controller) fails?
- What is the size of the failure domain?
- What can be done to protect the controller/control plane?
On all three counts BGP performs substantially better than the architectures with a centralized control plane heavily promoted by hard-core SDN aficionados.
What happens when a BGP router fails? Best case, a single BGP peering session is lost. Worst case, you lose a single router.
In theory, a BGP router might propagate a poisoned update before it falls over; based on anecdata it’s as likely as a round square (but of course do prove me wrong!).
Losing a major peering session is not exactly fun (and sometimes the ripples can be felt throughout the Internet), but it might be a bit better than losing a whole controller-managed network.
What is the size of a failure domain? When a BGP router receives an update with an attribute that causes it to hiccup (drop BGP session, crash, or do something else along these same lines), the update is not propagated beyond that router. The worst-case failure domain in a BGP network (or blast radius, as Jeremy Schulman would call it) is thus a single device.
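A minimal sketch of why the blast radius stays at one device (a hypothetical model, not real BGP code): each router fully receives and processes an update before re-generating it for its neighbors, so a router that chokes on a malformed update fails before it can propagate the poison.

```python
def process_and_propagate(update, routers):
    """Return how many routers saw the update before one failed on it."""
    seen = 0
    for router in routers:
        seen += 1
        if router["crashes_on"] in update["attrs"]:
            return seen  # session dropped; the update goes no further
        # Otherwise the router re-generates the update and sends it on.
    return seen

routers = [
    {"name": "edge",  "crashes_on": None},
    {"name": "core1", "crashes_on": "bad-attr"},  # the vulnerable box
    {"name": "core2", "crashes_on": "bad-attr"},  # never sees the update
]
print(process_and_propagate({"attrs": ["bad-attr"]}, routers))  # -> 2
```

Note how the second vulnerable router is never exposed: the receive-process-send cycle stops the malformed update at the first device it kills.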
On the other hand, if you manage to hit a bug in an OpenFlow controller that causes the controller to crash after receiving a crafted packet, you’ll easily bring down all the controllers in the cluster.
What can be done to protect the controller (or control plane)? It’s pretty easy to protect a BGP router – there are tons of security-related tools and knobs available in BGP (for more details, read the BGP Operations and Security RFC) – and the receive-process-send mechanism explained in the previous section easily protects the network core from potential exploits received by edge routers.
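For illustration, a few of those knobs in Cisco IOS-style syntax (the addresses, AS numbers, and limits are invented; adjust everything to your environment):

```
router bgp 64500
 bgp maxas-limit 50                            ! drop updates with absurdly long AS paths
 neighbor 192.0.2.1 remote-as 64501
 neighbor 192.0.2.1 password SomeSecret        ! TCP MD5 session protection
 neighbor 192.0.2.1 ttl-security hops 1        ! GTSM (RFC 5082)
 neighbor 192.0.2.1 maximum-prefix 100000 90   ! cap the damage of a route leak
```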
A controller-based network is like a single device. You need a single exploit and it’s game over.
Want to Know More?
You’ll find plenty of details in the SDN Architectures and Deployment Considerations webinar and other webinars in the Advanced SDN Training track.
But what happens at the intersection of BGP and SDN? For example, it seems to me that in Petr Lapukhov's BGP data center design, the BGP route reflector is a de facto SDN controller. Would that approach then behave as you describe SDN rather than as you describe BGP?
However, compared to "centralized control plane" architectures, the BGP-based SDN has a crucial advantage: if the controller fails (or you kill it when it goes crazy), the network settles down to "business-as-usual" behavior.
Disclaimer: I work for HP networking and we do sell SDN controllers, apps, and OpenFlow hardware.
It was great chatting with you at Interop last week. I think your criticism of centralized controllers is spot-on, assuming that you're talking about an architecture where all control plane functionality is performed by the controller.
HP has recognized this potential issue and has taken the approach of a hybrid control plane. Essentially, we put higher-priority flow rules into the devices to enable exception-based actions, with a last-priority rule that hands traffic off to the normal pipeline.
If traffic goes through the OpenFlow table and doesn't match on any other rules, the last rule says "do what you would normally do" and uses the forwarding behavior defined by the traditional network control plane. The end result is exactly what you described above: in the event that the controller goes away, the network settles down to "business-as-usual" behavior.
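The hybrid pipeline described above can be sketched in a few lines (an illustrative model only; the rule contents are invented, and "NORMAL" stands in for the OpenFlow action that hands the packet to the switch's traditional forwarding pipeline): entries are matched in priority order, and a catch-all priority-0 entry forwards everything else the old-fashioned way.

```python
# Hypothetical flow table: one high-priority exception rule installed by
# the controller, plus a priority-0 catch-all that falls back to the
# traditional control plane ("NORMAL" forwarding in OpenFlow terms).
flow_table = [
    {"priority": 100, "match": {"tcp_dst": 80}, "action": "REDIRECT"},
    {"priority": 0,   "match": {},              "action": "NORMAL"},
]

def lookup(packet):
    """Return the action of the highest-priority matching entry."""
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]

print(lookup({"tcp_dst": 80}))  # -> REDIRECT (exception handled by the SDN app)
print(lookup({"tcp_dst": 22}))  # -> NORMAL   (traditional forwarding)
```

If the controller dies, the exception rules eventually age out or get flushed, and the priority-0 rule keeps the network forwarding as if OpenFlow were never there.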
We're still early on, but I think it's important to recognize that there are ways for us to mitigate some of the more obvious limitations of centralized controller architectures while still reaping the benefits.
see you on the twitters!
It's no accident that Juniper and ALU went down that route; from experience, they understand what is required to keep networks alive better than most of the originators of OpenFlow-based or proprietary controllers.