Blog Posts in March 2017
During Cisco Live Europe (huge thanks to the Tech Field Day crew for bringing me there) I had a chat with Jeff McLaughlin about NETCONF support on Cisco IOS XE, in particular on the campus switches.
We started with the obvious question “why would someone want to have NETCONF on a campus switch”, continued with “why would you use NETCONF and not REST API”, and diverted into “who loves regular expressions”. Teasing aside, we discussed:
I’m running a hyperconverged infrastructure event with Mitja Robas on April 6th, and so my friend Christoph Jaggi sent me a list of interesting questions, starting with:
What are hyperconverged infrastructures?
The German version of the interview is published on inside-it.ch.
Imagine a Flatworld in which railways are the main means of transportation. They were using horses and pigeons in the past, and experimenting with underwater airplanes, but railways won because they were cheaper than anything else (for whatever reason, price always wins over quality or convenience in that world).
As always, there were multiple railroad track and train manufacturers, and every one of them used all sorts of interesting tricks to force customers to buy tracks and trains from the same vendor. Different track gauges and heptagonal wheels that worked best with grooved rails were the usual tricks.
Niki Vonderwell kindly invited me to Troopers 2017 and I decided to talk about security and reliability aspects of network automation.
The presentation is available on my web site, and I’ll post the link to the video when they upload it. An extended version of the presentation will eventually become part of Network Automation Use Cases webinar.
The last presentation during the Tech Field Day Extra @ Cisco Live Europe event was a Cisco-Apple Partnership presentation, and we expected an hour of corporate marketese.
Can’t tell you how pleasantly surprised we were when Jerome Henry started his very technical presentation explaining the wireless goodies you get when using iOS with IOS.
…we’ve found that VMware’s native virtual switch implementation has become the de facto standard for greater than 99% of vSphere customers today. … Moving forward, VMware will have a single virtual switch strategy that focuses on two sets of native virtual switch offerings – VMware vSphere® Standard Switch and vSphere Distributed Switch™ for VMware vSphere, and the Open virtual switch (OVS).
During the Leaf-and-Spine Fabric Designs webinar Roger Lapuh from Avaya explained how Avaya uses SPB technology to build an L2+L3 fabric.
Ansible network modules (at least as implemented in Ansible releases 2.1 and 2.2) were one of the more confusing aspects of my Building Network Automation Solutions online course (and judging by what I'm seeing on various chat sites, we weren't the only ones struggling with them).
I wrote an in-depth explanation of how you're supposed to use them a while ago, and have now updated it with user authentication information.
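For context, in those releases the network modules ran on the control machine and took device credentials through a per-task provider argument instead of the usual SSH connection settings, which is exactly what tripped many people up. A minimal sketch of that pattern (the inventory group and credential variable names are made up for illustration):

```yaml
---
- name: Collect version information from IOS devices
  hosts: ios_switches            # hypothetical inventory group
  connection: local              # 2.1/2.2 network modules run on the control node
  gather_facts: no
  tasks:
    - name: Run show version
      ios_command:
        commands: show version
        provider:
          host: "{{ inventory_hostname }}"
          username: "{{ ios_username }}"   # assumed to live in group_vars or a vault
          password: "{{ ios_password }}"
```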
One of the engineers watching my Data Center 3.0 webinar asked me why we need session stickiness in load balancing, what its impact is on load balancer performance, and whether we could get rid of it. Here’s the whole story from the networking perspective.
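To make the performance part of that question concrete, here's a toy sketch (server names and the round-robin fallback are illustrative, not how any particular load balancer works): a sticky load balancer has to remember which server each client was sent to, and that per-client state is exactly what consumes memory and complicates scale-out.

```python
from itertools import cycle

servers = ["web1", "web2", "web3"]
rr = cycle(servers)  # stateless round-robin for first-time clients

# Session-stickiness table: client -> server. The load balancer must
# keep one entry per active client, which costs memory and has to be
# synchronized (or avoided) when you scale out to multiple instances.
sticky_table = {}

def pick_server(client_ip):
    """Assign new clients round-robin, pin returning clients to their server."""
    if client_ip not in sticky_table:
        sticky_table[client_ip] = next(rr)
    return sticky_table[client_ip]
```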
Remember the All You Need Are Two Switches saga? Several readers told me they'd like to have it in text (article) format, so I found a transcription service, started editing what they produced, and began publishing the results. The first two installments are already online.
On a related topic: we'll discuss the viability of this approach at the April DIGS event in Zurich, Switzerland.
One of my readers watched my Leaf-and-Spine Fabric Architectures webinar and had a follow-up question:
You mentioned that the 3-tier architecture was dictated primarily by port count and throughput limits. I can understand that port density was a problem, but can you elaborate on why throughput is also a limitation? Do you mean that a core switch like the 6500 is also not suitable for building a 2-tier network in terms of throughput?
As always, the short answer is it depends, in this case on your access port count and bandwidth requirements.
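As a back-of-envelope illustration of how port counts cap a two-tier design (the numbers below are invented for the example, not the specs of any real switch):

```python
# Back-of-envelope sizing of a two-tier leaf-and-spine fabric.
# All numbers are illustrative assumptions, not real switch specs.

leaf_ports = 48      # access (server-facing) ports per leaf
leaf_uplinks = 6     # uplinks per leaf, one to each spine
spine_ports = 32     # ports per spine switch

# Every leaf needs one uplink per spine, so the number of leaves
# is capped by the spine port count.
max_leaves = spine_ports
max_access_ports = max_leaves * leaf_ports

# Oversubscription assuming 10GE access ports and 40GE uplinks.
oversubscription = (leaf_ports * 10) / (leaf_uplinks * 40)

print(max_access_ports)   # 1536
print(oversubscription)   # 2.0
```

Once your access port count or bandwidth requirements exceed what a two-tier design like this can deliver, you're forced into a third tier (or bigger boxes).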
In autumn 2016 I embarked on a quest to figure out how TCP really works and whether big buffers in data center switches make sense. One of the obvious stops on this journey was a chat with Thomas Graf, Linux Core Team member and a founding member of the Cilium project.
When Cisco ACI was launched it promised to do everything you need (plus much more, and in a multi-hypervisor environment). It quickly became obvious that you can't do all that on ToR switches alone; you need control of the virtual switch (the real network edge) to get the job done.
Yannis sent me an interesting challenge after reading my short “this is how I wasted my time” update:
We are very much committed to automation and use Ansible to create configurations and provision our SP and data center networks. One of our principles is that we rely solely on data available in external resources (databases and REST endpoints) and avoid fetching information/views from the network, because that would create a loop.
You can almost feel a however coming in just a few seconds, right?
The featured webinar in March 2017 is the SDN Use Cases webinar describing over a dozen real-life SDN use cases. The featured videos cover four of them: a data center fabric by Plexxi, microsegmentation (including VMware NSX), an SDN-based Internet edge router built by David Barroso, and Fibbing, an OSPF-based traffic engineering approach developed at the University of Louvain.
To view the videos, log into my.ipspace.net, select the webinar from the first page, and watch the videos marked with a star.
It’s uncommon to find an organization that succeeds in building a private OpenStack-based cloud. It’s extremely rare to find one that documented and published the whole process like Paddy Power Betfair did with their OpenStack Reference Architecture whitepaper.
One of the challenges of designing a controller-based solution is the transport network used to exchange information between controller and controlled devices. Can you do that in-band or is it better to have an out-of-band network (built with traditional components)? Terry Slattery explained some of the pros and cons in the Monitoring SDN Networks webinar.
During the Tech Field Day Extra event at Cisco Live Europe 2017 Fabrizio Maccioni, Technical Marketing Engineer at Cisco, described enhanced programmability available in Cisco IOS XE release 16.x. What really got my attention was the claim that they made NETCONF on Cisco IOS transactional (and Fabrizio mentioned the candidate config and commit).
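For background, this is what the candidate/commit workflow looks like on the wire in standard NETCONF (RFC 6241); the YANG payload below is an illustrative sketch of an IOS XE native-model snippet, not a verified device transcript:

```xml
<!-- Stage the change in the candidate datastore; nothing is applied yet -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><candidate/></target>
    <config>
      <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
        <hostname>sw-lab-01</hostname>
      </native>
    </config>
  </edit-config>
</rpc>

<!-- The change takes effect atomically only on commit -->
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <commit/>
</rpc>
```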
Here's my initial reaction:
I often get questions from engineers wondering whether my webinars or courses would be too tough for them. Here’s a question I got from an engineer who wanted to attend my Building Next-Generation Data Center course: “What specific prior experience do you expect for this workshop?”
Eyvonne Sharp wrote a great blog post describing Cisco’s love of complexity and how SD-WAN vendors proved things don’t have to be that complex.
The parts I like most (and they apply equally well to networking):
Last year Cisco launched a new series of Nexus 9000 switches with table sizes that didn’t match any of the known merchant silicon ASICs. It was obvious they had to be using their own silicon – the CloudScale ASIC. Lukas Krattiger was kind enough to describe some of the details last November, resulting in Episode 73 of Software Gone Wild.
For even more details, watch the Cisco Nexus 9000 Architecture Cisco Live presentation.
TL&DR: Don’t use OSPF areas if you can avoid them. Don’t use NSSA areas.