Brief Recap: Tech Field Day at Cisco Live Europe 2018

I don’t think I’ve ever been at a Tech Field Day event that’s been as intense as what we went through in the last few days at Cisco Live Europe – at least 17 different presentations in two days. It’s still all a blur and will take a long while to sort out.

First impressions:

Too much marketing, not enough time. As always, there were good and not-so-very-good sessions… but even the best ones left much to be desired – we simply didn’t have enough time to go down any interesting rabbit hole.

The notable exceptions that I can clearly remember (see also: it’s all a blur):

  • Tim Garner (Tetration) found enough time to go into how they collect flow statistics in CloudScale ASICs;
  • Fred Niehaus is a rock star presenter. Unfortunately I know nothing about wireless, so I couldn’t even start to ask sensible questions, but he did sort out a few basics for me. Thanks a million Fred, you made my week!

Products, products, products. Apart from a few exceptions, it was all about products. I completely understand that Cisco sells products, but it’s always good to understand the architectures and technologies to figure out how things work and how you could use them.

Fortunately I managed to squeeze in a few baseline Campus Fabric questions to figure out how the whole thing works (in particular the multi-fabric stuff). I still don’t believe in the LISP control plane, but overall things make sense.

Interesting ideas and reinvention of old stuff. The idea of automatically cleaning up ACLs (or firewall rules) using Network Assurance Engine is cool (and badly needed), but what really made me smile was the Zero Config Networking presentation.

Remember Novell SAP or AppleTalk Zones? When Apple moved from AppleTalk to IP they started using Bonjour as a zero-touch service discovery protocol, and of course it was inevitable we’d eventually get Bonjour service caches and Bonjour filters on layer-3 switches. They have arrived.
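
If you’ve never watched what Bonjour (mDNS/DNS-SD) service discovery actually does on the wire, here’s a minimal browsing sketch using the python-zeroconf library (my choice of tool, nothing Cisco-specific; the service type is just an example). This is the chatter those layer-3 switch caches and filters have to collect and prune:

```python
# Minimal Bonjour/DNS-SD browsing sketch using the python-zeroconf library
# (pip install zeroconf). The service type below is just an example.
from zeroconf import ServiceBrowser, Zeroconf

class PrintListener:
    """Callbacks invoked by ServiceBrowser as services appear and disappear."""
    def add_service(self, zc, service_type, name):
        info = zc.get_service_info(service_type, name)
        print(f"discovered {name}: {info}")

    def remove_service(self, zc, service_type, name):
        print(f"gone: {name}")

    def update_service(self, zc, service_type, name):
        pass  # required by newer zeroconf versions

zc = Zeroconf()
browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrintListener())  # browse for network printers
try:
    input("Browsing, press Enter to stop...\n")
finally:
    zc.close()
```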

I was also glad to see we’re learning from past mistakes. Multi-switch networks now use a controller as a central collection, filtering, and cache-update mechanism.

Lots of common sense and good progress. Regardless of all the marketing slapped on top of good ideas, they still remain good ideas. I liked what Cisco is doing with DNA Center or Tetration, or how TAC is trying to leverage their experience to reduce customer problems.

However:

  • One has to wonder why it took them 20 years to ask the question “what do network operators really need” and get CiscoWorks (or Prime or whatever) right;
  • Seeing sensible network management products wrapped in an Intent-Based halo is borderline ridiculous.

Did I say Intent-Based? It seems like every single presentation had to have a slide titled Driven by Intent, Powered by Context (or something similar). That got me to the point where I made a comment: “you keep using that word. I don’t think it means what you think it means.”

Anything that you can configure is now called intent-based, which proves my points that:

  • It’s all unicorn-based glazing, and
  • Any configuration file is an expression of intent.

For example, you can define if-then-else rules in DNA Center Policy to specify how the system maps users into virtual networks and security groups (a long overdue idea that seems to be well-executed… at least on the GUI side).

We called that configuration, and then it was called policy (because that sounds so much better), now it’s intent-based.
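
To illustrate the point with a completely made-up example (this is not DNA Center’s actual data model, just a sketch of the idea): here’s what “if-then-else rules mapping users into virtual networks and security groups” boils down to once you write it down. Whether you call the table below configuration, policy, or intent is purely a branding decision.

```python
# Hypothetical user-to-virtual-network/security-group mapping rules.
# NOT DNA Center's data model -- just an illustration that this kind of
# "intent" is plain old configuration, evaluated top-down, first match wins.
RULES = [
    (lambda u: u["department"] == "finance",    ("FINANCE_VN", "SGT_Finance")),
    (lambda u: u["device_type"] == "ip-camera", ("IOT_VN", "SGT_Cameras")),
    (lambda u: True,                            ("DEFAULT_VN", "SGT_Guest")),  # catch-all
]

def classify(user):
    """Return the (virtual network, security group) pair of the first matching rule."""
    for condition, result in RULES:
        if condition(user):
            return result

print(classify({"department": "finance", "device_type": "laptop"}))
# -> ('FINANCE_VN', 'SGT_Finance')
```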

Machine learning and AI. Every second presentation claimed they use ML/AI, whereas in reality many of those things aren’t anything more than common sense or a decision tree (OK, maybe weighted based on user feedback).

Just to give you an example: building a baseline of network behavior and identifying outliers is now called machine learning. Hooray.
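
For the record, here’s roughly what that kind of “machine learning” amounts to (a toy sketch with made-up numbers and an arbitrary threshold): collect samples to establish a baseline, then flag anything more than a few standard deviations away from it.

```python
import statistics

def outliers(baseline, new_samples, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from the baseline
    mean; plain descriptive statistics, no machine learning required."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [s for s in new_samples if abs(s - mean) > threshold * stdev]

# Baseline: made-up interface utilization samples (percent) from "last week"
baseline = [12, 14, 13, 15, 11, 14, 13, 12, 16, 14]
print(outliers(baseline, [13, 15, 78]))   # -> [78]
```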

Trivialities. Some things were so trivial that it was really hard to figure out why we were spending time on them. I understand they might be relevant when talking to CxOs or trying to woo industry press or scoring points with analysts, but everyone should have known they’d have a bunch of engineers in the room.

The best ones:

Honorable mention:

There’s a reason that demo was limited to the loopback interface: with Cisco IOS it’s nigh impossible to match interface names between the virtual gear you use in a continuous integration lab and the physical gear you deploy to.
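
One way around it (a hypothetical sketch, not something anyone showed us) is to keep the configuration templates platform-agnostic and map logical port roles to platform-specific interface names at render time; the platform and interface names below are purely illustrative.

```python
# Hypothetical workaround: platform-agnostic templates plus a per-platform
# mapping of logical port roles to real interface names (names are made up).
PORT_MAP = {
    "csr1000v": {"uplink_a": "GigabitEthernet2", "uplink_b": "GigabitEthernet3"},
    "cat9300":  {"uplink_a": "TenGigabitEthernet1/1/1", "uplink_b": "TenGigabitEthernet1/1/2"},
}

TEMPLATE = (
    "interface {uplink_a}\n"
    " description CORE-UPLINK-A\n"
    "interface {uplink_b}\n"
    " description CORE-UPLINK-B\n"
)

def render(platform):
    """Render the same template for the CI lab (virtual) or the real switch."""
    return TEMPLATE.format(**PORT_MAP[platform])

print(render("csr1000v"))   # what the virtual lab device gets
print(render("cat9300"))    # what the physical switch gets
```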

Some people never learn. Introducing HyperFlex stretched cluster and requiring layer-2 transport between nodes when (so they claim) all you need is IP connectivity is a clear winner in this category.

Finally, what we’ve heard nothing about. We had a session on campus switches and another one on access points and wireless innovation, but we heard nothing about routers, SD-WAN, data center switches, UCS servers (apart from cloud-based management), ACI (apart from add-on network management products), firewalls or any other security appliance…

This list might or might not reflect Cisco’s priorities – I have no idea how they selected the presentations.

7 comments:

  1. There was an interesting session on ACI multi-site (not multi-pod): BRKACI-2125
    Replies
    1. I'm sure there were plenty of ACI sessions, it's just that they didn't make it into the stuff they were showing us.
  2. Are these presentations available for the general public to take a look at?
    Replies
    1. The videos should appear on the pages I linked to. Not sure when they'll publish them though.
    2. The videos are out now on YouTube. I really enjoyed your question about the reason for choosing a router loopback interface for testing configs on switches :D !
  3. With regards to the CSR Transit VPC: To be fair, troubleshooting traffic flow in AWS is a PITA. There are VPC Flow Logs that help you figure out whether traffic gets there (they are essentially firewall logs, telling you if traffic was denied or accepted), but in my experience when something goes wrong, nobody has any idea where to look.

    We deployed a CSR-based Transit VPC (there are other options; Aviatrix and Riverbed are AWS partners for this, but you can accomplish the same thing with Juniper and Fortinet) and, despite the cost and bottleneck reservations you might have (I share those), having a device in the path where you can collect flows and do packet captures has been -very- useful.

    I generally agree with your point. But given the reality of troubleshooting a full mesh of VPCs with the limited tools AWS gives us, and with staff that is just learning the platform (in our case), the Transit VPC has been worth the $$.
    Replies
    1. Drop in routers at the DX DCs, run dark fibre to AWS and connect all your VPCs to it. Scalable, cheaper than Transit VPCs, much less complex, and faster re-convergence. You can get all your normal flow logs from the routers and safely control path selection. You can also automate all the VPC provisioning and just give your devs an API to call if they need a new one.