Repost: On the Advantages of XML
Continuing the discussion started by my Breaking APIs or Data Models Is a Cardinal Sin and Screen Scraping in 2025 blog posts, Dr. Tony Przygienda left another thoughtful comment worth reposting as a publicly visible blog post:
Having read your newest rant around my rant ;-} I can attest that you hit the nail on the very head in basically all you say:
- XML output big? yeah.
- JSON squishy syntax? yeah.
- SSH prioritization? You haven't lived it until you had a customer where a runaway Python script generated 800+ XML NETCONF sessions pumping data ;-)
So, everything you say is very correct; however, having lived all the dreams, I'm still very much for XML. Yes, XSLT and XML matching ain't for the faint of heart, but unless you properly account for the semantic structure of the data, reliable, maintainable network automation is looking for a free lunch AFAIS.
Which all boils down to the same underlying principle: you have complex problems, so you need to hire smarter and smarter (well-educated) folks to deal with them. AI can help some, and the more structured the data, the more it can help, but complexity calls for general intelligence, something only smart people bring to the table today IME.
Tony is right; AI can help some. ChatGPT generated a correct XPath expression to count the number of IS-IS adjacencies in the Junos show isis adjacency XML printout and flawless Python code to go with it.
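To make the XPath trick concrete, here is a minimal sketch of counting IS-IS adjacencies with lxml. The sample document only resembles `show isis adjacency | display xml` output: the element names and namespace URI are assumptions from typical Junos XML, not a verbatim capture, and the `local-name()` trick is used so the per-release Junos namespace doesn't matter.

```python
from lxml import etree

# Sample resembling "show isis adjacency | display xml" output; element
# names and the namespace URI are assumptions, not a verbatim capture.
SAMPLE = b"""<rpc-reply xmlns:junos="http://xml.juniper.net/junos/21.4R1/junos">
  <isis-adjacency-information>
    <isis-adjacency>
      <interface-name>ge-0/0/0.0</interface-name>
      <system-name>R2</system-name>
      <adjacency-state>Up</adjacency-state>
    </isis-adjacency>
    <isis-adjacency>
      <interface-name>ge-0/0/1.0</interface-name>
      <system-name>R3</system-name>
      <adjacency-state>Up</adjacency-state>
    </isis-adjacency>
  </isis-adjacency-information>
</rpc-reply>"""

def count_isis_adjacencies(xml_bytes: bytes) -> int:
    """Count <isis-adjacency> elements regardless of namespace."""
    tree = etree.fromstring(xml_bytes)
    # local-name() sidesteps the version-specific Junos namespace URI
    return int(tree.xpath("count(//*[local-name()='isis-adjacency'])"))

print(count_isis_adjacencies(SAMPLE))  # 2
```

The nice part about counting in XPath itself (rather than iterating in Python) is that the same expression works unchanged in XSLT or in an on-box Junos op script.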
And just when I thought we ran out of excuses for not using XML, ChatGPT asked me whether I would like to use ncclient for a full-blown solution, and then generated three versions of the code that failed miserably. Admittedly, it got it right on the fourth attempt. Well, make that a maybe; there were no syntax errors, and I got the expected result back, but who knows what gremlins are still hidden in the code.
Well, someone appreciates my (sometimes thoughtful) rants I see ;-)
ncclient? uff, wasn't aware of that. Looks a bit ambitious across all those platforms.
I'm using PyEZ, which we've maintained since about forever, especially when I muck together some complex Python machinery bringing up large topologies over kathara/cRPD and need to shake some routers to get interesting scale effects ;-) But hey, I'm about 99.99% Juniper-biased, and I love the culture installed here from day one: proper modelling, XML everywhere, NETCONF, and good engineering tooling.
Yes, the more complexity and the more AI, the more the "squishy" (you know what I mean) stuff in JSON will start to catch people (and the same goes for gRPC). There are tons of differences between NULL/undefined and 0, and a set is not a list either; the more complex the model, the more any-object-to-any-object mapping is necessary in proper models ;-) As you can deduce, I'm also a big proponent of Thrift, though of course gRPC is for the moment all the rage due to its "simplicity".
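Tony's point about NULL/undefined versus 0 and sets versus lists takes only a few lines of Python to demonstrate (a toy illustration, not tied to any particular network data model):

```python
import json

# JSON cannot distinguish a key that is explicitly null from one that is
# simply absent -- the consumer sees None either way:
explicit = json.loads('{"metric": 0, "tag": null}')
absent = json.loads('{"metric": 0}')
assert explicit.get("tag") is None and absent.get("tag") is None
# ...while 0 and null do remain distinct values:
assert explicit["metric"] == 0 and explicit["tag"] is None

# JSON has no set type: sets must be flattened to lists before encoding...
neighbors = {"R2", "R3"}
wire = json.dumps(sorted(neighbors))
print(wire)  # ["R2", "R3"]
# ...and nothing on the wire stops duplicates from sneaking back in:
decoded = json.loads('["R2", "R2", "R3"]')
assert len(decoded) == 3 and len(set(decoded)) == 2
```

A schema language (XSD on the XML side, or a strict YANG-to-JSON mapping) can catch some of this, but only if the model actually encodes the distinction instead of leaving it to convention.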
All boils down to what database folks learned painfully but quickly going from ISAM to SQL (with the resulting NoSQL movement, which moved the schema from the database into the "code" with some performance gains, often [but hey, ACID and transactions are old-fashioned, right ;-)] paid for by much harder maintenance). However, in databases, corrupting or losing one byte loses you the job, and hence people learn much quicker that proper algebra allows you to reason properly and hence ensure correctness. In networking, eeeehem, we just send things one more time and hope for the best ;-)
ncclient seems to be nothing more than an underdocumented wrapper around NETCONF, so doing it "across all those platforms" shouldn't be too hard (in theory).
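For the record, the ncclient version of the adjacency count doesn't have to be much code. This is a sketch, not a tested recipe: the host and credentials are placeholders, and the `<get-isis-adjacency-information/>` RPC tag is my assumption for what `show isis adjacency` maps to (verify on a real box with `| display xml rpc`).

```python
from lxml import etree

# XPath reused from the "count the adjacencies" exercise; local-name()
# sidesteps the version-specific Junos namespace URI.
ADJ_COUNT_XPATH = "count(//*[local-name()='isis-adjacency'])"

def fetch_adjacency_count(host: str, user: str, password: str) -> int:
    """Open a NETCONF session to a Junos box and count IS-IS adjacencies."""
    # ncclient imported lazily so the XPath half works without it installed
    from ncclient import manager
    from ncclient.xml_ import to_ele

    with manager.connect(host=host, port=830, username=user,
                         password=password, hostkey_verify=False,
                         device_params={"name": "junos"}) as conn:
        # Assumed RPC equivalent of "show isis adjacency"
        reply = conn.dispatch(to_ele("<get-isis-adjacency-information/>"))
        return int(etree.fromstring(reply.xml.encode()).xpath(ADJ_COUNT_XPATH))
```

Most of what ncclient adds over raw NETCONF is session handling and per-vendor quirks (`device_params`); the payloads going back and forth are still plain XML you can inspect and XPath your way through.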
And yes, I can totally relate to your wonderful experience with the noSQL crowd. I was struck by that particular strain of Kool-Aid years ago, implemented something simple in MongoDB and regretted it ever since.
> ChatGPT generated a correct XPath expression to count the number of IS-IS adjacencies in the Junos show isis adjacency XML printout and flawless Python code to go with it.
That's rather neat.
Enjoying these rants-to-blog posts. Keep them coming, Dr. P & Ivan.