Ever Heard of Role-Based Access Control?

During my recent SDN workshops I encountered several networking engineers who use Nexus 1000V in their data center environments, and some of them claimed their organization had decided to do so to ensure the separation of responsibilities between the networking and virtualization teams.

There are many good reasons one would use Nexus 1000V, but the one above is definitely not one of them.

Ever heard of RBAC?

Like any other decent management/orchestration framework, vCenter supports user roles and role-based access control, making it easy to control who can configure uplinks, networks and port groups… and who can merely see them (or not even that).
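
For example, here's a minimal pyVmomi sketch of that idea – create a network-team role and grant it on the datacenter's network folder only. Every name in it (the vCenter host, the AD group, the privilege ID strings) is an illustrative assumption, and privilege IDs vary across vSphere versions, so treat this as a sketch, not a recipe:

  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  # Lab-only shortcut: skip certificate verification
  ctx = ssl._create_unverified_context()
  si = SmartConnect(host='vcenter.example.com',
                    user='administrator@vsphere.local',
                    pwd='secret', sslContext=ctx)
  try:
      content = si.RetrieveContent()
      authz = content.authorizationManager

      # Create a role carrying only distributed-switch privileges
      # (privilege ID strings are illustrative and version-dependent)
      role_id = authz.AddAuthorizationRole(
          name='network-team',
          privIds=['DVSwitch.Modify',
                   'DVPortgroup.Create',
                   'DVPortgroup.Modify'])

      # Grant the role to an AD group on the network folder only;
      # propagate=True pushes it down to every vDS and port group below
      perm = vim.AuthorizationManager.Permission(
          principal='EXAMPLE\\network-admins', group=True,
          roleId=role_id, propagate=True)
      datacenter = content.rootFolder.childEntity[0]  # assumes first object is a datacenter
      authz.SetEntityPermissions(entity=datacenter.networkFolder,
                                 permission=[perm])
  finally:
      Disconnect(si)

The networking team gets full control of the virtual switching objects and nothing else – which is exactly the separation of responsibilities people claim they need Nexus 1000V for.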

Using an extra networking product (preferably running the VSM on a Nexus 1010, because you couldn't possibly trust the company-wide virtualization platform to run your precious control or management plane) just to get functionality you could get from a product your company already uses is somewhat ridiculous… but then it might be easier than actually talking to the other team or reading the manual.

No Nexus 1000V, then?

The feature gap between vSphere Distributed Switch and Nexus 1000V keeps shrinking as VMware continues to improve vDS (while Nexus 1000V seems to be standing still – the last new feature worth mentioning was added a year ago), but Nexus 1000V still has a HUGE advantage over vDS: a text-based configuration file, which means that:

  • You can actually see what’s configured on the switch without traversing a hundred dialog forms;
  • You can diff two configurations, and use a source code management system to record the configuration changes (see the sketch after this list).
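
Here's a minimal sketch of that second point, assuming you've saved two snapshots of the running configuration to disk (the file names are made up) – Python's difflib produces the same unified diff Git would show you:

  import difflib

  # Two snapshots of 'show running-config' saved to disk (hypothetical names)
  with open('n1kv-before.cfg') as old, open('n1kv-after.cfg') as new:
      diff = difflib.unified_diff(old.readlines(), new.readlines(),
                                  fromfile='n1kv-before.cfg',
                                  tofile='n1kv-after.cfg')

  print(''.join(diff))

Commit the same files to Git and you get the full change history for free. Try doing that with a binary blob.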

For more details, read the Plea to Software Vendors from Sysadmins, particularly points 2 and 4 (and keep in mind that it was written by sysadmins, so other IT engineers aren't as different from you as you might think).

I don’t think VMware ever got that memo (they do have an export capability, but it creates a binary blob), or maybe I’m missing a cool feature – in which case, please write a comment.

4 comments:

  1. You can query a vDS on the command line of a host and write a script using the APIs to diff across hosts etc. Pretty trivial.
  2. Ansible has extensive support for vSphere, including vDS. Ansible facts go into your source control.

    http://docs.ansible.com/ansible/list_of_cloud_modules.html
  3. I can see the desire to own the network piece of ESX if you are the network admin. At the same time, coordinating 1000v upgrades/maintenance with hypervisor upgrades (and throw in a third-party hypervisor firewall in that stack) can be a nightmare. On the other hand, where I work we are not allowed to connect to the ESX console of network-owned virtual appliances (like the F5 virtual LTM) as we don't believe in RBAC :)
  4. We know of an early vBlock adopter who selected the 1000v for exactly this reason. I believe the intent was for the network team to be able to manage the virtual switching (move/add/change and break/fix) in a more familiar manner. It turns out that having the 1000v severely restricted the port groups that could be configured. The server team decided to remove it, and the network team simply collaborates with them now. Network and Security work to develop the architecture and policies, but don't get involved in much day-to-day work within the platform. Not smooth as silk, but it more or less works.