Opinion: Impact of AI on Networking Engineers

A friend of mine sent me a series of questions that might also be on your mind (unless you’re lucky enough to live under a rock or on a different planet):

I wanted to ask you how you think AI will affect networking jobs. What’s real and what’s hype?

Before going into the details, let’s make a few things clear:

  • AI is way more than just neural networks or Large Language Models (LLMs). People have been working on AI technologies and solutions for decades, and have developed all sorts of useful tools, most of which are unknown to the general public. See the Machine Learning for Network and Cloud Engineers book for a few examples.
  • Any AI technology (or product) is just a tool, and like any other tool, has to be used correctly. It also helps if you understand what the tool does, how it works, and its limitations1. The AI Networking Cookbook by Eric Chou might give you a few ideas.
  • More and more AI researchers2 agree that standalone Large Language Models are a dead end, which does not mean they are not useful as a front-end to a more powerful system (they absolutely are). Gary Marcus has a lot to say about that topic (for example, here, here, and here)3.

It seems to me like AI is the new bootcamps, meaning that there are a lot of grifters out there promising success and proudly predicting the demise of humans.

Hey, whenever there’s hype and anxiety, you get a mix of well-meaning people peddling quick recipes4, attention grabbers, “influencers”, grifters, and scammers. Figuring out who’s who is the fun part.

I use LLMs every day, for things like transforming data, spotting where I made typos or simple mistakes, asking about error messages, and so on, but that’s easier to do if you already have expertise in something, of course.

So do I. It would be a shame not to use a tool that could make your life easier (like Google Maps), help you get better results (like Grammarly), or help you get the job done faster (like Excel ;). However, you usually need time5 to use a tool optimally, and I think we’re still far away from that point with LLMs.

I fear for beginners who don’t have the experience or knowledge to tell when the LLM is hallucinating, who keep hearing “You’re right” all the time, and who could end up with massive gaps in their knowledge.

Hey, remember the “copy/paste from Stack Overflow” crowd? There’s no difference between that crowd and the “copy/paste from ChatGPT” crowd, apart from a psychological blind spot that can result in a perfect FUBAR storm:

  • Because LLMs produce intelligent-looking results, we believe they must be intelligent.
  • We also learned that computers are never wrong, so whatever an LLM produces must be right. Right? After all, Excel is never wrong6, it’s always your fault7. Oh, wait…

Maybe that’s just what every generation of grumpy old people thinks, though :)

Every (young) generation has a few people who want to understand how things work, work hard to get there, and usually reap some benefits of their hard work8. It also has plenty of people believing in miracles9, quick recipes for success10, and self-help books11. Surprisingly, some of the latter remain mediocre and often blame bad luck or external forces beyond their control.

Likewise, every generation of grumpy old people bemoans how kids these days don’t understand the fundamentals. Some of that is true for some of the kids (the copy-paste crowd), but it’s also true that some fundamentals are no longer relevant when the layers of abstraction stop leaking. For example, decades ago, I was able to:

  • Understand how transistors work, and build logic circuits out of discrete transistors or low-level components like NAND gates
  • Understand how integrated circuits work, and how you manufacture them
  • Know way too many details of modem and Ethernet encoding schemes
  • Design and build (wire-wrap) my own computer
  • Port a compiler to that computer
  • Build a network interface card
  • Write my own file server operating system in assembly language on an 8-bit CPU with 64K of RAM
  • Create my own interpreted language from scratch12 (think Python, but focused on simplifying forms-based UI)
  • Create simple AI systems

Is any of that relevant to whatever we have to do today? Of course not. (Almost) Nobody wants to know how a compiler or an operating system works these days.

Coming back to more relevant topics:

What part of a networking job could an LLM do? What would you trust it to do in production? Do you think we are going to see agents operating networks?

Using AI in networking is not too different from deploying network automation, and some of my Introduction to Network Automation slides are still relevant, for example:

  • Start with low-hanging fruit and read-only access13
  • Use protective measures on devices (role-based access control) to prevent inadvertent changes
  • Until you’re sure things work, keep an operator in the loop to confirm the actions an LLM suggests (see the sketch right after this list).
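
To make that last bullet a bit less hand-wavy, here’s a minimal Python sketch of the idea. Everything in it is an assumption made for illustration purposes (the command allowlist, the run_on_device placeholder, and wherever the suggested commands come from): read-only commands go through automatically, anything else waits for an operator, and device-side RBAC remains the real safety net.

```python
# A toy guardrail: LLM-suggested commands are checked against a read-only
# allowlist; everything else needs explicit operator approval. All names
# below are placeholders, not references to a real product or library.

READ_ONLY_PREFIXES = ("show ", "display ", "ping ", "traceroute ")

def is_read_only(command: str) -> bool:
    """Crude allowlist check; device-side RBAC is the real protection."""
    return command.strip().lower().startswith(READ_ONLY_PREFIXES)

def run_on_device(command: str) -> None:
    """Placeholder: plug in your own transport (SSH, NETCONF, REST API...)."""
    print(f"(pretending to execute) {command}")

def review_and_execute(suggested_commands: list[str]) -> None:
    """Run read-only suggestions automatically, ask the operator about the rest."""
    for cmd in suggested_commands:
        if is_read_only(cmd):
            print(f"[auto] {cmd}")
            run_on_device(cmd)
        elif input(f"LLM wants to run '{cmd}'. Approve? [y/N] ").lower() == "y":
            run_on_device(cmd)
        else:
            print(f"[skipped] {cmd}")
```

Calling review_and_execute(["show ip route 192.0.2.0/24", "clear ip bgp *"]) would run the first command on its own and stop to ask you about the second one.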

For example, analyzing past tickets to identify the most likely root causes and using that analysis to generate troubleshooting suggestions could be a pretty interesting use case. Identifying devices that could be involved in a (perceived14) network outage and collecting and analyzing initial troubleshooting information would be a huge boon. Even having a system that would write a detailed “this is why it’s not the network” explanation in a bogus ticket15 would be a major win 🤪
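
In case you’re wondering what the ticket-analysis idea could look like, here’s a toy Python sketch: it picks a few similar past tickets using naive keyword overlap and turns them into an LLM prompt. The ticket structure and the similarity metric are invented for illustration; a real system would use embeddings or a proper retrieval pipeline. The important bit: the LLM only sees curated read-only data and produces suggestions, not device changes.

```python
# Hypothetical sketch: build a troubleshooting prompt from past tickets.
# The ticket fields ("summary", "root_cause") and the similarity scoring
# are assumptions for illustration only.

def similar_tickets(new_summary: str, past_tickets: list[dict], top_n: int = 3) -> list[dict]:
    """Rank past tickets by naive keyword overlap with the new ticket summary."""
    new_words = set(new_summary.lower().split())
    return sorted(
        past_tickets,
        key=lambda t: len(new_words & set(t["summary"].lower().split())),
        reverse=True,
    )[:top_n]

def build_prompt(new_summary: str, past_tickets: list[dict]) -> str:
    """Assemble an LLM prompt from the new ticket and the most similar past ones."""
    history = "\n".join(
        f"- {t['summary']} (root cause: {t['root_cause']})"
        for t in similar_tickets(new_summary, past_tickets)
    )
    return (
        "New ticket: " + new_summary + "\n"
        "Similar past tickets and their root causes:\n" + history + "\n"
        "Suggest the three most likely root causes and the read-only show "
        "commands an operator should run first."
    )
```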

Finally:

I don’t know if you’ve seen some posts lately about agents being used as routers, speaking BGP and OSPF. It kind of seems like stupid router tricks to me, but maybe there is a use case.

That’s a pure publicity stunt, and light-years away from a sane, well-engineered solution. Collecting BGP or OSPF data in some controlled and secured way (one-way BGP session, BGP-LS, or BMP), exporting that into some usable format, and then having an AI agent work on that data would make perfect sense, and might even be useful. On the other hand, using an AI agent to run a control-plane protocol like BGP or OSPF is approximately as efficient as using a Rube Goldberg machine to make your morning coffee.
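
For what it’s worth, the “agent works on exported data” part could look something like this toy Python sketch. The JSON export format is invented for illustration; in real life the data would come from a BMP or BGP-LS collector. The agent reasons about an off-line summary and never gets anywhere near the control plane.

```python
# Hypothetical sketch: summarize a routing-data export for an AI agent.
# The export format (a JSON list of {"prefix", "peer", "as_path"} objects)
# is made up for illustration; BMP or BGP-LS collectors have their own formats.

import json
from collections import Counter

def summarize_bgp_export(path: str) -> str:
    """Turn a collector export into a short text summary an agent can consume."""
    with open(path) as f:
        routes = json.load(f)

    per_peer = Counter(r["peer"] for r in routes)
    origins = Counter(r["as_path"][-1] for r in routes if r["as_path"])

    lines = [f"Total prefixes: {len(routes)}"]
    lines += [f"Peer {peer}: {count} prefixes" for peer, count in per_peer.most_common()]
    lines += [f"Top origin AS {asn}: {count} prefixes" for asn, count in origins.most_common(5)]
    return "\n".join(lines)
```

The resulting summary (plus whatever else you export the same way) is what the agent gets to see; the routers keep talking BGP to each other, not to the agent.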


  1. Yeah, I know I’m expecting way too much from the world with the attention span of a squirrel. ↩︎

  2. As opposed to pundits, Kool-Aid consumers, evangelists, and product managers ↩︎

  3. His blog posts often remind me of my OpenFlow/SDN blog posts. Everyone with enough experience knew that was a stupid idea that would never work in real life, but nobody wanted to point out the (lack of) state of the Emperor’s clothes. ↩︎

  4. Paraphrasing H. L. Mencken: for every technology challenge, there’s a solution that is clear, easy, and wrong. ↩︎

  5. As in experiencing enough mistakes/blunders and learning from them ↩︎

  6. An Intel CPU might be, though ↩︎

  7. Resulting in errors in ~20% of gene-related peer-reviewed papers. ↩︎

  8. Not being a jerk also helps ;) ↩︎

  9. Winning a lottery is an evergreen sure way to get rich ↩︎

  10. More than half of the guys in my primary school class wanted to become mechanics. Today, they’d probably aim for influencers or some such. The belief in quick paths to success never stops. ↩︎

  11. The fact that there are so many different self-help books and that new ones keep being published should tell you something. ↩︎

  12. I still have the sources in Turbo Pascal. Let me know if you want to have them available on GitHub ;) ↩︎

  13. With guardrails and rate limits – you don’t want an LLM to blast your network into oblivion with an SNMP query DoS. ↩︎

  14. It’s always the network, most often BGP or DNS ↩︎

  15. Caused, for example, by a printer running out of paper ↩︎
