Traditional Leaf-and-Spine Fabric Versus Cisco ACI

One of my subscribers wondered whether it would make sense to build a traditional leaf-and-spine fabric or go for Cisco ACI. He started his email with:

One option is a "standalone" Spine/Leaf VXLAN-with-EVPN deployment based on Nexus equipment. This approach could probably be accompanied by some kind of automation, like Ansible, to ease operation and maintenance of the network.

This is what I would do these days if the customer feels comfortable investing at least a minimum amount of work into an automation solution. Having simpler technology + a well-understood automation solution is (in my biased opinion) better than having a complex black box.

To learn how to build and automate such a fabric, start with Leaf-and-Spine Fabric Architectures (to cover the design aspects), continue with EVPN Technical Deep Dive (to master EVPN), and explore how others automate their fabrics in the Network Automation Use Cases webinar (or enroll in the automation online course).
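
To give you a feel for what that “minimum amount of work” might look like, here’s a minimal Ansible sketch that pushes the VXLAN/EVPN building blocks to a Nexus leaf with the cisco.nxos collection. It’s purely illustrative - the inventory group, VLAN/VNI numbers and interface names are made up, and a real fabric would generate all of this from a fabric-wide data model and templates:

```yaml
---
# Illustrative only: VLAN/VNI numbers, interface names, and the "leafs"
# inventory group are made up; derive the real values from a data model.
- name: Configure VXLAN/EVPN building blocks on leaf switches
  hosts: leafs
  gather_facts: false
  tasks:
    - name: Enable the required NX-OS features
      cisco.nxos.nxos_config:
        lines:
          - feature bgp
          - feature interface-vlan
          - feature vn-segment-vlan-based
          - feature nv overlay
          - nv overlay evpn

    - name: Map VLAN 100 to VNI 10100
      cisco.nxos.nxos_config:
        parents:
          - vlan 100
        lines:
          - vn-segment 10100

    - name: Attach the VNI to the VTEP interface
      cisco.nxos.nxos_config:
        parents:
          - interface nve1
        lines:
          - source-interface loopback1
          - host-reachability protocol bgp
          - member vni 10100
```

Using nxos_config keeps the sketch short; you could just as easily render complete device configurations from Jinja2 templates and push them in one step.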

The other option is ACI, which is heavily promoted by Cisco. It certainly offers some good features and (if it works as they say) it will make our life easier in many respects. But then you have all the disadvantages of a proprietary solution that hides some of the complexity and adds another layer of complexity, as you know better than I do. What are others doing in similar cases?

I’ve seen people doing both. In the end, it comes down to “do you want to control your network because you think it’s mission-critical infrastructure, or do you see it as an unnecessary expense that’s best handled as a black box?” I covered this dilemma in a blog post and a presentation.

For more details, check out the NSX, ACI or EVPN webinar.

Finally, if you’d like to get a second opinion from a CCIE/VCIX-NV with several production ACI, NSX, and EVPN deployments under his belt, you could go for an ExpertExpress session with Mitja Robas.

9 comments:

  1. I think we're in month three of the leaf-spine EVPN-VXLAN Ansible adventure; the network isn't in production yet due to vendor bugs, and we haven't really started on monitoring. We've certainly learned all kinds of things, though.
    Replies
    1. Thanks for the feedback - would love to know more if you'd be willing to contact me offline. Thank you!
  2. Eventually you'll get to a fully automated solution in three years, but by that time you'll be out of business. Good luck with your efforts.
    Replies
    1. ... and how long do you think it will take to deploy ACI in production? It's so nice to see someone who has his priorities straightened out :)
  3. Unfortunately, in some environments where Ops is totally outsourced and the engineering "footprint" is just enough to handle vendors' solutions and participate in consultants' design and deployment ("0" cycles to start an in-house effort to build DC-grade automation, let alone the MNS side ever undertaking support of such within the standard minimal pricing for Ops), ACI or the like (black boxes) may be the only option. I still recall our in-house BSD-based firewalls on x86 running Snort behind them in the early 2000s, which - slowly but surely - got replaced with the likes of PAN + WildFire, or ASA + FireEye, or ...
    Replies
    1. Agreed. We're back to the "three paths of Enterprise IT" territory:

      https://blog.ipspace.net/2017/11/the-three-paths-of-enterprise-it.html

      However, assuming the situation is that grim, who will operate ACI? It's not exactly a miracle black box with a single red button.
    2. Funny you should ask, though you probably already know the answer ;-) : in very large outsourced environments it may not be about "who can fix the problem" but "whom to hang if something breaks" (the descent of politics into IT). In the scenario I presented it would be: MNS -> TAC -> contractual penalties for breaching SLAs -> suits reporting the business approach upstream -> everyone happy (while the engineering breed slowly goes extinct).
  4. Three years into an ACI deployment in one part of our business, we are embarking on another three-year cycle of ACI globally. ACI is a very good tool to allow people to migrate from legacy infrastructures and move into the SDN landscape. It abstracts away a lot of nerd knobs and exposes usable APIs to the organization so they can become more “cloud native” without handing everything over to Amazon et al.

    All our deployments are app-centric and driven via YAML/Ansible (a rough sketch of that approach follows below), so we have taken a nasty hierarchical model and made the workloads “well described”; when we hit renewal next year we will be better placed to assess the landscape and move in any direction we choose.

    ACI should be considered first and foremost for what it is - SDN lite with a support contract. You can make it complex or you can make it a simple black box; the point is that it moves you forward as a NetOps platform within your business without completely reinventing the wheel. You then have baby steps towards a fancy open fabric.
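
    For illustration, here’s a minimal sketch of what “app-centric and driven via YAML/Ansible” can look like with the cisco.aci Ansible collection - the tenant, application profile, EPG and bridge domain names (and the APIC connection details) are placeholders, not a real data model:

    ```yaml
    # Placeholder names and credentials throughout - the point is that tenants,
    # application profiles and EPGs become declarative YAML instead of GUI clicks.
    - name: Describe one application in ACI
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Ensure the tenant exists
          cisco.aci.aci_tenant:
            host: apic.example.com            # placeholder APIC
            username: admin
            password: "{{ apic_password }}"   # assumed to come from a vault
            validate_certs: false
            tenant: webshop
            state: present

        - name: Ensure the application profile exists
          cisco.aci.aci_ap:
            host: apic.example.com
            username: admin
            password: "{{ apic_password }}"
            validate_certs: false
            tenant: webshop
            ap: frontend
            state: present

        - name: Ensure the web EPG exists and is bound to a bridge domain
          cisco.aci.aci_epg:
            host: apic.example.com
            username: admin
            password: "{{ apic_password }}"
            validate_certs: false
            tenant: webshop
            ap: frontend
            epg: web
            bd: webshop_bd
            state: present
    ```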
  5. As always, it depends. It depends on a lot of parameters - especially on what has to be achieved with the new solution. We're actually building an additional data center and decided against ACI.

    Here are some reasons:
    * ACI manages the network fabric only
    * Automation tools like Ansible take time to master, but they can manage your infrastructure as a whole
    * Only the Nexus 9300-series switches are ACI-capable, so there is no extended mix-and-match (for instance, using low-cost deep-buffer Nexus 3000s)
    * ACI clashes with solutions like NSX or OpenStack, which use their own approach to network virtualization - so you need a good concept that targets not only the network gear but also the consumers of the network
    * ACI is a monster, and don't think it would be easy to absorb - you have a good chance of ending up serving coffee to the external consultant who does the initial setup

    But by far the most substantial argument against ACI was that our data center infrastructure is too small, and the tasks in this infrastructure are too few, to benefit from using it. Just think - you get an additional zoo of servers to maintain, providing controller services for ACI.

    I think that we, the network guys, should make friends with the idea that we will be much more DevOps engineers in the future.