Infrastructure as Code Actually Makes Sense
When I heard people talking about “networking infrastructure as code” I dismissed that as yet another Software-Defined-Everything one-controller-to-rule-it-all hype. Boy was I wrong.
Imagine an application development environment where programmers debug and change source code of a deployed application in real time. How reliable would that application be? Someone would immediately put a freeze on that stupidity and allow programmers to change code only every fifth Friday of the month at 5AM (and blame ISO 9001).
A smarter programmer would figure out they still need to get work done between the Magic Fridays, so they would create a local copy of the application code, test their ideas on that copy, and then cut-and-paste their changes into the production source code during the Magic Friday maintenance window.
Sounds crazy? That might have been how things were done in the days of punched cards, and yet that’s exactly how we configure our networking devices (replace the local copy of the application source code with a test lab or simulation).
How the Sausage Is Really Made
I know way too little about proper application development processes (please correct me in the comments), but things usually work along these lines:
- A team of developers uses a central source code repository.
- A developer working on a new feature or fixing a bug works on a local copy of the source code.
- When the development work is done, the developer runs unit tests, potentially also integration and validation tests, and submits the changed source code to the repository.
- In environments stuck in the 19th century, someone builds the application from the source code once every three months; well-run shops have a build process that automatically collects the source code and builds the application.
- Final tests are run on the new release of the application.
- The new version of the application is deployed.
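The pipeline described above can be sketched as a tiny stage runner. Everything here is illustrative: the stage names mirror the list, and the checks are stand-ins for real unit tests, builds, and deployments.

```python
# Minimal sketch of a commit-to-deploy pipeline: run each stage in order,
# record its result, and stop at the first failure. The stage names and
# check functions are hypothetical placeholders, not a real CI system.

from dataclasses import dataclass, field


@dataclass
class Pipeline:
    log: list = field(default_factory=list)  # (stage name, result) pairs

    def run(self, stages):
        for name, check in stages:
            ok = check()
            self.log.append((name, "passed" if ok else "failed"))
            if not ok:
                return False  # stop the pipeline on the first failure
        return True


# Placeholder checks standing in for real test/build/deploy steps
stages = [
    ("unit tests", lambda: True),
    ("integration tests", lambda: True),
    ("build", lambda: True),
    ("final tests", lambda: True),
    ("deploy", lambda: True),
]

pipeline = Pipeline()
pipeline.run(stages)
```

The point of the sketch is the ordering and the fail-fast behavior: a broken unit test means the build, final tests, and deployment never run.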
Infrastructure as Code
Contrary to what some SDN evangelists want you to think, we configure most application infrastructure with text files, CLI commands, or scripts. Applications are built using makefiles, servers are deployed using Ansible playbooks or Puppet or Chef recipes, and cloud-based application stacks are built with infrastructure-as-code systems like Terraform.
Hopefully you already store the configurations and recipes in a source code control system (many people are doing that with router, switch, and firewall configurations as well). Now imagine having a build system that automatically creates VM images from Puppet recipes, configures web and database servers using those same recipes, installs the application code into the VM virtual disk from a source code or build repository, and runs automated integration and system tests. All of a sudden you’re treating your infrastructure the same way you treat application source code, hence infrastructure as code.
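The core idea, stripped to its essentials: a configuration is data plus a template kept under version control, and the deployed artifact is rendered from them, never hand-edited. Here is a minimal sketch using Python’s standard `string.Template`; the template syntax and variable names are made up for illustration.

```python
# Sketch: render a device configuration from a version-controlled template
# plus per-environment variables, the way a Puppet/Ansible recipe would.
# The config format and variable names are invented for illustration.

from string import Template

config_template = Template("""\
hostname $hostname
interface $uplink
 ip address $ip $mask
""")

# The same template renders dev, test, and production variants, so every
# rendered config is reproducible from data kept in source control.
env = {"hostname": "web-01", "uplink": "eth0",
       "ip": "192.0.2.10", "mask": "255.255.255.0"}

print(config_template.substitute(env))
```

Because the rendered output is a pure function of template and data, the exact same configuration can be rebuilt in a lab, a QA environment, or production.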
Network Infrastructure as Code
Can we do the same thing with the networking infrastructure? Not with the traditional hardware approach – it’s hard to build a local copy of the networking infrastructure for every networking engineer, or an automated test environment.
On the other hand, if we manage to virtualize everything, including networks and network services (load balancers, firewalls…), we can deploy them on demand. Using cloud orchestration automation, it’s pretty easy to create new subnets and deploy firewalls and load balancers in VM format. Problem solved – you can recreate a whole application stack, either for an individual networking engineer working on a particularly interesting challenge, or to build QA or UAT environments.
Once the modified application stack passes all the tests, it’s easy to deploy the changes in production: shut down the old VMs and start new ones, or (if you made more drastic changes) tear down the old application stack and build a new one using the already-tested build recipes.
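What makes the on-demand copies safe is that each one is an independent clone of the same stack description: changes made in a test copy never leak into production. A minimal sketch, with an invented data model standing in for a real cloud orchestration API:

```python
# Sketch: clone a virtualized application stack description so each
# engineer (or QA/UAT run) gets an identical, independent copy. The data
# model is invented; a real system would call a cloud orchestration API.

import copy

production_stack = {
    "subnets": ["web", "app", "db"],
    "services": {"firewall": "fw-vm", "load_balancer": "lb-vm"},
}


def clone_stack(stack, env_name):
    """Return an independent copy of the stack, tagged for a test env."""
    clone = copy.deepcopy(stack)
    clone["environment"] = env_name
    return clone


uat = clone_stack(production_stack, "uat")
uat["subnets"].append("debug")  # this change stays local to the clone
assert "debug" not in production_stack["subnets"]
```

Promoting a tested change to production then means rebuilding production from the modified description, rather than editing the live environment, which is exactly the deploy step described above.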
How Can I Get Started?
Virtualize everything. You won’t be able to create new application environments on demand until you can create virtual networks and network services on demand. Overlay virtual networks and virtual network service appliances are an ideal solution.
Stop Changing the Hardware Configurations
Finally, it’s time to get rid of the cut-and-paste method of network configuration. Make sure you’re not doing anything that isn’t repeatable and can’t be fully cloned in a development or test environment.
Ideally, you’d totally decouple the virtual networks and services from the physical hardware, and change the physical hardware configuration only when you need to build a new data center fabric or extend an existing one.
Like the Vision?
Now go and evaluate how existing vendor offerings and architectures fit into this vision. I’ll stop here; draw your own conclusions.
Need More Information?
We talked about network infrastructure-as-code and continuous integration, delivery and deployment in the Network Automation Concepts webinar.
Once a routable network exists, network overlays can segment, inspect, route, firewall, and audit with the touch of a button. Why would you want to continue to manage hardware? Ahhh, riiiiight, "my turf". VM admins gave up server and storage administration as a primary duty long ago. "Bring us your resources, we will make them work" has been the motto for some time.
How does a VM/cloud admin create a new segment of the network? They launch a UI (or API); define common name, subnet, segmentation, and relationship; submit; validate; and release to production. The orchestration engine reaches out to each virtual network device, modifies the configuration, verifies the change, and reports status.
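The define/submit/validate/release flow described above boils down to the orchestration engine pushing one segment definition to every virtual device and collecting per-device status. A sketch under invented names (the device records and segment fields are hypothetical, not any vendor’s API):

```python
# Sketch of the workflow described above: push a new segment definition
# to each virtual network device, verify the change, and report status.
# Device records, segment fields, and names are all hypothetical.

def provision_segment(devices, segment):
    """Apply a segment definition to every device; report per-device status."""
    status = {}
    for device in devices:
        device["segments"].append(segment)    # modify the configuration
        ok = segment in device["segments"]    # verify the change took effect
        status[device["name"]] = "ok" if ok else "failed"
    return status


devices = [{"name": "vswitch-1", "segments": []},
           {"name": "vswitch-2", "segments": []}]
segment = {"name": "app-tier", "subnet": "192.0.2.0/24", "vlan": 210}

print(provision_segment(devices, segment))
```

A real orchestration engine would also roll back devices that fail verification, but the shape of the loop – modify, verify, report – is the same.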
I guess the alternative is to have the network team register their equipment within something like VMware Orchestrator so that we can programmatically control the hardware instead. That should fly, right, hello, anyone here...
The next generation networks are truly flat, any-to-any fabrics with software services riding on top. "What about physical systems?" They receive their own routed network which the rest of the datacenter can reach.
But do you really expect VMs to replace network hardware where we rely on a lot of hardware-enabled features, like hardware-optimized switching?
Virtualization brings additional latency and jitter penalties into your network. Do you think we are ready to replace fast, low-latency hardware with software-based VMs? Are VM solutions mature enough for complete network infrastructure virtualization?