Worth Reading: Looking Inside Large Language Models
Bruce Davie published an interesting overview article about Large Language Models. It would be worth reading just for the copious links to in-depth articles; I particularly like his conclusions:
We mistake performance (producing realistic text) for competence (understanding the world).
Having a model for language is different from having a model of the world.
And that's a perfect explanation of why it makes no sense to expect ChatGPT and friends to produce picture-perfect device configurations or always-working code.
ChatGPT is the ultimate cargo cult. It can do well at mimicking STATIC, symbol-based systems like natural languages and, by extension, programming languages. It can do well in other symbolic systems like maths as well. But its model falls short when it comes to forming a world-view, which is far more complex and DYNAMIC. That's why there's no AI system that can drive a car safely in our current transport networks, and there will likely never be one.
So basically, ChatGPT can be used to generate code: not perfect, but workable code, with errors of course. Since lots of programmers are just that, bad coders, a lot of what they're doing at the moment will be replaced by it. There'll still be people needed to fix the code ChatGPT generates, but far fewer of them, and that will put downward pressure on programmers' pay.
Of course, there's no replacing competent and innovative coders, but they are, by nature, a minority.
Computer networks aren't very static either, so ChatGPT can assist with device configuration, but it's unrealistic to expect it to replace all the human elements, especially in big networks.