Real-Life Software Defined Security @ Troopers 16
The organizers of the Troopers 16 conference published the video of my Real-Life Software Defined Security talk. The slides are available on my web site.
Hope you’ll enjoy the talk; for more SDN use cases watch the SDN Use Cases webinar.
It was once said that you can do whatever you want with your low-cost white-box switches, but you have to pay an arm and a leg to get all the licenses together.
Of course you can do everything yourself.
In both cases you need power, be it the power of money or of people. In the latter case you at least gain know-how, whereas in the first one you inherit dependency.
I have studied ACI and NSX in pretty good detail and have worked on PoC/lab implementations of both platforms. I used to think that NSX having VTEPs that could route locally on the host for VMs in different subnets was a pretty awesome feature. Later, as you also mentioned in this presentation, Cisco added the VTEP feature through AVS to the ACI platform, which enabled the same function.
Then, one of my very intelligent colleagues pointed out to me that being able to route between subnets on the same host does not achieve much at all. An example illustrates the idea well: assume an application uses six VMs, 3 DB, 2 app and 1 web server. Also assume DRS is used (as it is in most production environments I've seen) and there are 20, 30 or 50 ESXi hosts. Statistically, especially if this application's VMs are resource intensive, they will all end up on different hosts, and being able to route within a host becomes a moot point. The result is that for 90 or 95%+ of the east-west traffic the AVS/VTEP capability in ACI is never exercised, so there is no real architectural difference between ACI and NSX.
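To put a rough number on that argument, here's a minimal back-of-the-envelope sketch (my own illustration, not something from the talk) that assumes DRS placement behaves roughly like uniform random placement of VMs across hosts, which is obviously a simplification. It estimates what fraction of VM-to-VM pairs end up on the same host and could therefore be routed locally:

```python
# Rough simulation: place 6 VMs (3 DB, 2 app, 1 web) uniformly at random
# on N ESXi hosts (a crude stand-in for DRS) and measure what fraction of
# VM-to-VM pairs share a host, i.e. could benefit from host-local routing.
import random
from itertools import combinations

def colocated_pair_fraction(num_vms=6, num_hosts=30, trials=100_000):
    pairs = list(combinations(range(num_vms), 2))
    colocated = 0
    for _ in range(trials):
        placement = [random.randrange(num_hosts) for _ in range(num_vms)]
        colocated += sum(placement[a] == placement[b] for a, b in pairs)
    return colocated / (trials * len(pairs))

for hosts in (20, 30, 50):
    print(f"{hosts} hosts: ~{colocated_pair_fraction(num_hosts=hosts):.1%} "
          "of VM pairs share a host")
```

With uniform placement the chance that any two given VMs land on the same host is roughly 1/N, so at 20 to 50 hosts only about 2-5% of VM pairs are co-located, which lines up with the "90 or 95%+ of the east-west traffic" figure above.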
My immediate takeaway is that to avoid the complexity added by VTEPs on every ESXi host (and yes, it does add a little bit of complexity), I'd rather forgo the slight performance gain of ESXi hosts routing locally for a very small portion of the traffic and have all the routing handled by the physical fabric.
Even with NSX, in the same scenario, with 6 VMs spread across 20, 30 or 50 ESXi hosts, most of the east-west traffic will end up travelling encapsulated over the physical network anyway, and I'd rather have the flexibility, visibility and troubleshooting capability of the ACI physical fabric than of the logical NSX overlay. With ACI's normalization of traffic between VLANs and VXLAN, virtual and physical, I can troubleshoot virtual-to-virtual, virtual-to-physical and physical-to-physical traffic in the same manner. With NSX, troubleshooting has improved significantly since version 6.2, but there is probably still a long way to go to get the kind of visibility you get in ACI.
What is your take on this situation?
Thanks.