Sometimes I have this weird feeling that I’m the only loony in town desperately preaching against the stupidities heaped upon infrastructure, so it’s really nice when I find a fellow lost soul. This is what another senior networking engineer sent me:
I belong to a small group of people who think that the source of the problem is the apps and the associated business/security rules: their nature, their complexity, their lifecycle...
Sounds familiar (I probably wrote a few blog posts on this topic in the past), and it only got better.
You could add maximum agility at every layer and the orchestrator of your dreams; it would not change the complexity of an app and its security (and modest networking) requirements.
The vast majority of traditional apps was (and still is) designed with a scale-up, infrastructure-redundancy approach:
- Networking issues cause application disruption: configuration freezes, infrequent software updates
- Security is managed by the infrastructure: firewalls and segmentation
Modern applications are designed with a scale-out approach:
- Born to be deployed in-house or moved into the cloud
- Layer-3 and DNS are theoretically sufficient
- Fail-Fast, rollback, incremental data model
- Security is managed by the applications: hardening, centralized filtering, patching (hot and cold), fast-restart, self-healing
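The fail-fast and self-healing items above can be sketched in a few lines of code. This is a minimal, hypothetical Python example (all names are made up for illustration, and a canned list of responses stands in for a real backend call): instead of limping along when a dependency misbehaves, the worker dies immediately, and a tiny supervisor loop restarts it a bounded number of times before surfacing the failure.

```python
class TransientBackendError(Exception):
    """Raised when a dependency is unreachable; the worker fails fast
    instead of limping along with degraded behavior."""

def call_backend(responses):
    """Hypothetical backend call: consumes the next canned response
    (a stand-in for a real RPC). 'error' simulates a transient failure."""
    result = responses.pop(0)
    if result == "error":
        raise TransientBackendError("backend unreachable")
    return result

def run_with_self_healing(responses, max_restarts=3):
    """Fail-fast worker wrapped in a toy self-healing supervisor:
    on a transient error the worker crashes immediately and is restarted,
    up to max_restarts times; after that the failure is surfaced to
    whatever orchestrates this process."""
    restarts = 0
    while True:
        try:
            return call_backend(responses)
        except TransientBackendError:
            restarts += 1
            if restarts > max_restarts:
                raise  # give up: let the outer orchestrator deal with it
            # a real supervisor would back off here, e.g. sleep(2 ** restarts)

print(run_with_self_healing(["error", "error", "ok"]))  # -> ok
```

The design point matches the list: recovery logic lives in the application (restart, back off, give up), so the infrastructure only needs to deliver packets, not mask application failures.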
IMHO, if new apps were deployed with the second approach (in the cloud or in-house), life would be much easier for infrastructure industrialization, automation, and the "intent-based" stuff.
Not surprisingly, some vendors vigorously disagree, particularly if they’re selling the Aspirin to reduce the headaches of “traditional” app brokenness. On a more optimistic front, even engineers working for said vendors sometimes see the light… and eventually move on.
Want to Know More?
Some network engineers found my Designing Active-Active and Disaster Recovery Data Centers webinar pretty useful when trying to persuade everyone else not to implement vendor marketectures.