
Running Network Automation Tools in a Container

Setting up a network automation development environment is an interesting task:

  • You have to install a half-dozen tools, each one with tons of dependencies;
  • SSH libraries like paramiko have to be installed manually;
  • Ansible modules for individual network devices might need extra libraries;
  • Parsing tools invoked with Ansible Jinja2 filters have to be installed separately;
  • Add your pet peeve here ;)

Now imagine having to do that for a dozen networking engineers and software developers working on all sorts of semi-managed laptops. Containers seem to be one of the sane solutions1.

There are just two minor hurdles between where you are and a container-based nirvana2:

  • You have to build the container
  • You have to figure out how to use it.

Building an Automation Container

A long while ago I put together a build recipe for a network automation container that contains these tools and libraries3:

  • Git
  • Ansible
  • NAPALM
  • netsim-tools
  • PyATS
  • PyNetBox
  • paramiko, netmiko, textfsm, jmespath, ntc-templates, ttp
  • jq, yq, yamllint

I created an Ubuntu-based and a CentOS-based version. You can download the containers from Docker Hub; select ipspace/automation:ubuntu or ipspace/automation:centos. You can also build them yourself:

  • Download the desired directory from GitHub
  • Modify the requirements*.txt files to select the packages you want to install
  • Modify the requirements.yml file to select the Ansible collections you want to install
  • Run docker build
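The steps above can be sketched as a shell function. The repository URL and directory layout below are placeholders, not the actual ones – substitute the GitHub repository mentioned above:

```shell
# Illustrative build helper. The repository URL and directory names are
# assumptions; use the actual GitHub repository referenced in the text.
build_automation_image() {
    flavor="${1:-ubuntu}"           # "ubuntu" or "centos"
    git clone https://github.com/example/automation-container.git
    cd "automation-container/$flavor" || return 1
    # Edit requirements*.txt and requirements.yml at this point to trim the
    # Python package and Ansible collection lists, then build and tag:
    docker build -t "automation:$flavor" .
}
```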

Running a Network Automation Container

Containers are started with the docker run command; if you want an interactive container, add the -it flags. You probably don’t want a ton of exited containers lying around, so let’s also add the --rm flag (remove the container after it exits). For more details, watch the Starting, Stopping and Removing Containers part of Basic Docker Commands.

The rest of the command line is passed straight to the container – this is how you would run ansible-playbook:

docker run -it --rm ipspace/automation:ubuntu ansible-playbook

The “only” problem: the container cannot access your playbooks. You have to map the directories that should be accessible to your automation tools with the --volume parameter, and make sure the target directory is the current (working) directory when the desired command is executed. You could set the working directory with the -w flag, or map the current directory into the initial working directory specified in the Dockerfile (/ansible) – that’s what we’ll do (more details in Mapping Host Directories into Containers).
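If you’d rather use the -w alternative, a wrapper along these lines would map the current directory to an arbitrary container path and make that path the working directory (/work is an arbitrary choice, not something the container defines):

```shell
# Sketch of the -w alternative: mount the current directory at any container
# path (/work here) and make it the working directory with -w
run_in_workdir() {
    docker run -it --rm \
        --volume "$(pwd):/work" \
        -w /work \
        ipspace/automation:ubuntu "$@"
}
```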

You might also want to map your home directory into the container to get default settings like .bashrc. The simple command we started with thus becomes:

docker run -it --rm \
    --volume "$(pwd):/ansible" \
    --volume "/home/$USER:/home/$USER" \
    ipspace/automation:ubuntu ansible-playbook
As Daniel Justice pointed out in a comment, it’s a terrible idea to give a container access to your home directory unless you can absolutely trust it (for example, you built it yourself).

This approach works (try it out), but there’s one more thing to fix: programs inside the container run as the root user and create root-owned files in your directories. Cleaning them up is annoying; wouldn’t it be better if we could run the container as the current user? Sure, here’s the final command (based on this blog post):

docker run -it --rm \
    --user "$(id -u):$(id -g)" \
    --volume "$(pwd):/ansible" \
    --volume "/etc/group:/etc/group:ro" \
    --volume "/etc/passwd:/etc/passwd:ro" \
    --volume "/etc/shadow:/etc/shadow:ro" \
    --volume "/home/$USER:/home/$USER" \
    ipspace/automation:ubuntu "$@"

No-one in their right mind would want to use such a command on a daily basis, so I created a shortcut script (run-automation) in both directories. Put it somewhere in your PATH and you can do:

run-automation ansible-playbook...

Alternatively, you can create a shell function (for example in ~/.bash_profile4):

run-automation ()
{
    docker run -it --rm \
        --user "$(id -u):$(id -g)" \
        --volume "$(pwd):/ansible" \
        --volume "/etc/group:/etc/group:ro" \
        --volume "/etc/passwd:/etc/passwd:ro" \
        --volume "/etc/shadow:/etc/shadow:ro" \
        --volume "/home/$USER:/home/$USER" \
        ipspace/automation:ubuntu "$@"
}

To make the automation container even more convenient to use, create a series of aliases (yet again, ~/.bash_profile might be a good place to store them4):

alias ansible-playbook='run-automation ansible-playbook'
alias ansible-galaxy='run-automation ansible-galaxy'
alias git='run-automation git'

Final tip: if you’re going to execute a series of automation commands, use run-automation bash to start a new shell within the automation container.

For more details on building and running Docker containers, watch the Introduction to Docker webinar.

Revision History

2021-12-13
Added a warning about giving container access to your home directory.

  1. Containers seem to be an answer to every challenge one might have these days (unless it’s LISP or BGP). ↩︎

  2. Ignoring for the moment the time it will take to get Docker running on macOS or the Windows Subsystem for Linux. It usually works… until it doesn’t. ↩︎

  3. Another recipe published by Jaap de Vos builds a minimalistic container with Ansible, napalm, and nornir. ↩︎

  4. Or not. I have no idea how to set up my *nix environment properly; I know just enough to have opinions and do damage. ↩︎

2 comments:

  1. Another approach could be adding and running a script that clones a git repo where you store your data. Obviously this is not beneficial during development (every change has to be committed/pushed) but works well when the container is part of a CI/CD pipeline.

  2. "Containers seem to be an an answer to every challenge one might have these days". Yes, and unfortunately this post is case-in-point. You are throwing the kitchen sink into what should probably be a VM. Mounting your home directory in read-write mode into a container is a fantastic recipe for a really bad headache. How do I know your container isn't going to post my SSH keys to PasteBin after I start it up? I'm not making an accusation; simply stating that not everyone on DockerHub is a saint. Convenience often comes at the price of security, and that is a dangerous tradeoff these days.

    Replies
    1. "You are throwing the kitchen sink into what should probably be a VM" << Sure, I have another recipe somewhere to build an Ubuntu VM with Vagrant, or you could use almost the same recipe.

      Some people are OK with VMs. I use them because I had to set up Vagrant/Virtualbox on my Mac anyway, so why bother with Docker. Other people prefer containers. There is no right answer.

      "...simply stating that not everyone on DockerHub is a saint" << Agreed. You know how you can easily get around that? Build your own container after vetting the build recipe you got off the Internet. That's why my build recipes (Dockerfiles) are on GitHub.
