
Infrastructure as Code Actually Makes Sense

When I heard people talking about “networking infrastructure as code,” I dismissed it as yet another Software-Defined-Everything, one-controller-to-rule-them-all hype. Boy, was I wrong.


Imagine an application development environment where programmers debug and change source code of a deployed application in real time. How reliable would that application be? Someone would immediately put a freeze on that stupidity and allow programmers to change code only every fifth Friday of the month at 5AM (and blame ISO 9001).

Smarter programmers would figure out that they still need to get work done between the Magic Fridays, so they would create a local copy of the application code, test their ideas on that copy, and then cut-and-paste their changes into the production source code during the Magic Friday maintenance window.

Sounds crazy? That might have been how things were done in the days of punched cards, and yet that’s exactly how we configure our networking devices (replace the local copy of the application source code with a test lab or simulation).

How the sausage is really made

I know way too little about proper application development processes (please correct me in the comments), but things usually work along these lines:

  • A team of developers uses a central source code repository.

You should use a source code control system like Git or SVN even if you’re a lone wolf. You wouldn’t believe how many times it saved my day when I could remove my blunders by reverting to a working copy of a module. Using GitHub gives you bonus points – you get a backup of your source code in the cloud.

  • A developer working on a new feature or fixing a bug works on a local copy of the source code.
  • When the development work is done, the developer runs unit tests, potentially also integration and validation tests, and submits the changed source code to the repository.
  • In environments stuck in the 19th century, someone builds the application from the source code once every three months; well-run shops have a build process that automatically collects the source code and builds the application.
  • Final tests are run on the new release of the application.
  • The new version of the application is deployed.

Continuous integration improves on this process by streamlining everything from application build to automatic deployment.
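As a concrete (and deliberately simplified) illustration, the whole loop can be sketched in a few lines of Python; the `git`, `pytest`, and `make` targets in the `PIPELINE` list are placeholders for whatever your shop actually uses:

```python
# Minimal sketch of the build-test-deploy flow described above.
# Every command and path here is a hypothetical placeholder.
import subprocess
import sys

PIPELINE = [
    ["git", "pull", "--ff-only"],          # get the latest source from the shared repo
    ["python", "-m", "pytest", "tests/"],  # unit / integration tests
    ["make", "build"],                     # automated build instead of a quarterly manual one
    ["make", "deploy"],                    # release the new version
]

def run_pipeline(steps):
    """Run each step in order; stop at the first failure."""
    for step in steps:
        result = subprocess.run(step)
        if result.returncode != 0:
            print(f"step failed: {' '.join(step)}", file=sys.stderr)
            return False
    return True
```

The point is not the script itself but the discipline it encodes: nothing reaches production without passing every gate in order.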

Infrastructure as code

Contrary to what some SDN evangelists want you to think, we configure most application infrastructure with text files, CLI commands, or scripts. Applications are built using makefiles, servers are deployed using Puppet or Chef recipes, and cloud-based application stacks are built with orchestration systems like Cloudify.

Hopefully you already store the configurations and recipes in a source code control system (many people are doing that with router, switch or firewall configurations as well). Now imagine having a build system that automatically creates VM images from Puppet recipes, configures web and database servers using those same recipes, installs the application code into the VM virtual disk from a source code or build repository, and runs automated integration and system tests. All of a sudden, you started treating your infrastructure the same way you treat application source code, hence infrastructure as code.
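A toy sketch of that idea: the parameters live in source control, a template renders the actual server configuration, and a validation step runs before anything is deployed. The template, directives, and validation rule below are invented for illustration, not taken from any real tool:

```python
# Hypothetical sketch: generate a server config from versioned parameters,
# the same way a build system would turn Puppet recipes into VM images.
from string import Template

SERVER_TEMPLATE = Template(
    "server_name $name;\n"
    "listen $port;\n"
    "root $doc_root;\n"
)

def render_config(params: dict) -> str:
    """Render a config file from parameters stored in source control."""
    return SERVER_TEMPLATE.substitute(params)

def validate_config(text: str) -> bool:
    """Toy validation step: every directive must end with a semicolon."""
    return all(line.endswith(";") for line in text.strip().splitlines())

cfg = render_config({"name": "web01", "port": "8080", "doc_root": "/var/www"})
assert validate_config(cfg)
```

Because the rendered file is a pure function of data kept in the repository, every environment built from the same commit is identical.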

Network infrastructure as code

Can we do the same thing with the networking infrastructure? Not if we use the traditional hardware approach – it’s hard to build a local copy of the networking infrastructure for every networking engineer, let alone an automated test environment.

On the other hand, if we manage to virtualize everything, including networks and network services (load balancers, firewalls…), we can deploy them on demand. With cloud orchestration system automation it’s pretty easy to create new subnets and deploy firewalls and load balancers in VM format. Problem solved – you can recreate a whole application stack, either for an individual networking engineer working on a particularly interesting challenge, or to build QA or UAT environments.

Finally, once the modified application stack passes all the tests, it’s easy to deploy the changes in production: shut down the old VMs and start new ones, or (if you made more drastic changes) tear down the old application stack and build a new one using the already-tested build recipes.
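The swap itself can be expressed against an orchestration API along these lines (the `Orchestrator` class below is a made-up stand-in for illustration, not any real product's API):

```python
# Sketch of the "tear down and rebuild" deployment described above,
# against a purely hypothetical orchestrator API.
class Orchestrator:
    """Stand-in for a cloud orchestration system (names are invented)."""
    def __init__(self):
        self.running = {}  # stack name -> recipe version

    def deploy_stack(self, name, recipe_version):
        self.running[name] = recipe_version

    def destroy_stack(self, name):
        self.running.pop(name, None)

def redeploy(orch, name, tested_version):
    """Replace a production stack with one built from already-tested recipes."""
    orch.destroy_stack(name)                  # shut down the old VMs
    orch.deploy_stack(name, tested_version)   # rebuild from the tested recipe

orch = Orchestrator()
orch.deploy_stack("app", "v1")
redeploy(orch, "app", "v2")
assert orch.running["app"] == "v2"
```

Because the production stack is rebuilt from the same recipes that were already tested, there is nothing in production that was never exercised in the test environment.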

How can I get started?

Virtualize everything. You won’t be able to create new application environments on demand until you can create virtual networks and network services on demand. Overlay virtual networks and virtual network service appliances are an ideal solution.

Stop changing the hardware configurations

Finally, it’s time to get rid of the cut-and-paste method of network configuration. Make sure you’re not doing anything that isn’t repeatable and can’t be fully cloned in a development or test environment.

Ideally, you’d totally decouple the virtual networks and services from the physical hardware, and change the physical hardware configuration only when you need to build a new data center fabric or extend an existing one.
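For example, generating device configuration from a single parameter file makes the process repeatable by construction: production and test environments are rendered from the same source, so they cannot drift apart. The VLAN syntax below is generic, not tied to any vendor:

```python
# Hedged sketch: one versioned parameter file drives every environment,
# so the test copy of the configuration is identical by construction.
VLANS = [{"id": 10, "name": "web"}, {"id": 20, "name": "db"}]

def vlan_config(vlans):
    """Emit the same VLAN configuration for any environment."""
    lines = []
    for v in vlans:
        lines.append(f"vlan {v['id']}")
        lines.append(f" name {v['name']}")
    return "\n".join(lines)

prod_cfg = vlan_config(VLANS)
test_cfg = vlan_config(VLANS)   # cloning the environment is a function call
assert prod_cfg == test_cfg
```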

Like the vision?

Now go and evaluate how existing vendor offerings and architectures fit into this vision. I’ll stop here; make your own conclusions.


  1. Welcome to VM admin land. We have been crying for this for years. The last holdouts: networking. We have had the ability for about three years, but were forbidden to implement it.

    Once a routable network exists, network overlays can segment, inspect, route, firewall, and audit with the touch of a button. Why would you want to continue to manage hardware? Ahhh, riiiiight, "my turf". VM admins gave up server and storage administration as a primary duty long ago. "Bring us your resources, we will make them do work" has been the motto for some time.

    How does a VM/cloud admin create a new segment of the network? They launch a UI (or API); define common name, subnet, segmentation, and relationship; submit; validate; and release to production. The orchestration engine reaches out to each virtual network device, modifies the configuration, verifies the change, and reports status.

    I guess the alternative is to have the network team register their equipment within something like VMware Orchestrator so that we can programmatically control the hardware instead. That should fly, right, hello, anyone here...

    The next generation networks are truly flat, any-to-any fabrics with software services riding on top. "What about physical systems?" They receive their own routed network which the rest of the datacenter can reach.

  2. Infrastructure as code can be implemented in a variety of ways – AWS has it with CloudFormation, PaaSes have it with containers and other deployment tools, and Ravello Systems has implemented the "infrastructure as code" concept with high-performance nested virtualization. VMware represented a physical server as a VM (essentially a file). Ravello extends that concept to the whole environment – VMs, networking, and storage. Essentially, your entire application environment is a file. The Ravello hypervisor also allows you to run existing VMware workloads (including networking) in AWS completely unmodified. It also allows you to snapshot the whole thing, version-control your infrastructure, re-create bugs, etc.

  3. Good approach, I would say, as it sounds consistent and may follow programming best practices.
    But do you really expect VMs to replace network hardware where we have a lot of hardware-enabled features, like hardware-optimized switching?

    Virtualization brings additional latency and jitter penalties into your network. Do you think we are ready to replace fast, low-latency hardware with software-based VMs? Do you think VM solutions are mature enough for complete network infrastructure virtualization?

    1. See:

