Lego Bricks and Network Operating Systems
One of the comments I got on my Lego Bricks & BFT blog post was “well, how small should those modular Lego bricks be?”
The only correct answer is “It should be Lego bricks all the way down” or (more formally) “Modularity is a concept that should be applied at every level of the architecture.”
Today let’s focus on how much easier life would be if we could take network operating systems apart instead of just watching them as glued-together Death Stars.
The architectural differences between modern server and network operating systems are minimal. Both commonly run on some variant of Unix and use a number of independent processes (daemons) to get the job done. The real difference between the two is the packaging.
Linux is usually distributed as a zillion packages (all of them hopefully supported by the vendor who sold you the distribution). Most network operating systems are available only as a single humongous all-or-nothing image (in the spirit of Matt Oswalt’s blog post, let’s call them BFIs).
The only major exception I’m aware of is Cumulus Linux; Arista might have something similar (if that’s the case, please write a comment), and Juniper started shipping non-core functionality as packages outside of Junos on their QFX10K switches.
Do you want to have Libre Office on your Linux web server? Probably not, and so nobody forces you to install it. Do you want to have VoIP support on your Internet edge router? Probably not, but you can’t get rid of that code, because it’s tightly coupled (or at least packaged together) with the code you need.
Before anyone tries to tell me how impossible it is to support hundreds of independent packages making up a network operating system, let me point out that the same model works pretty well for Red Hat and a few other companies. It’s not that it cannot be done; it’s simply that your networking vendor cannot do it.
Let’s move from initial deployment to troubleshooting. All software has bugs, and sometimes you have to restart a daemon that sprang a memory leak. No big deal on a server operating system; Mission Impossible on most network operating systems. How would you restart the OSPF or BGP daemon on Cisco IOS? How about the IPv6 RA daemon?
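For comparison, here’s what that restart looks like on a plain Linux box. The service names are illustrative assumptions (radvd for IPv6 RAs, frr for a routing suite); yours may differ by distribution:

```shell
# Restarting a leaky daemon on a Linux server is a one-liner.
# Service names are illustrative (radvd = IPv6 RA daemon, frr = FRRouting):
sudo systemctl restart radvd
systemctl is-active radvd        # verify the daemon came back up
sudo systemctl restart frr       # same idea for a whole routing suite
```

No reload, no maintenance window, no impact on any other daemon running on the box.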
Occasionally, you might have to replace the buggy code that’s been giving you headaches. Doing that on a typical Linux distribution is (relatively) easy: you download a new version of the package (and its dependencies), install it in a test environment, check whether it works, and roll out the change into the production environment.
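On a Debian-style system that workflow is a handful of commands. A hedged sketch (the package name and version number are invented for illustration):

```shell
# Hypothetical package-level fix on a Debian-based system
# (package name "bird" and version are invented for illustration):
apt-get update
apt-cache policy bird             # list the versions the repositories offer
apt-get install bird=1.6.3-1     # install the specific fixed version
apt-mark hold bird                # pin it until the next review cycle
```

You can run the exact same commands in the lab first, which is the whole point.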
In the networking world, the vendors expect us to download a whole new version of the operating system (including all the other wonderful bugs … oops, features … they introduced in the meantime) just to fix a simple bug in one process. Nobody in their right mind would do that just because vendor TAC told them to; in environments that take networking seriously, you’d have to go through a whole release-validation and bug-scrubbing process before deploying the new image.
It’s hard to replace the whole operating system with a new version without reloading the whole box, which (due to potentially significant disruption) triggers all sorts of SNAFU-avoidance procedures (aka maintenance windows). The usual fix for the problem: additional complexity, this time in the form of ISSU. Wouldn’t it be easier (and less error-prone) to give us the tools to patch the problematic software components without crashing the whole box?
Supposedly some vendors got the message and allow you to download bug fixes as small patches, but AFAIK Cumulus is the only one that fully embraced the Linux model and started packaging their operating system the way it should have been done: as independent Debian packages available for download from an online repository. Is anyone else doing the same thing? Please write a comment!
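To illustrate what that model buys you, here’s a sketch of pulling a single-daemon fix from an online package repository. The repository URL and package name are hypothetical, not actual Cumulus artifacts:

```shell
# Hypothetical NOS shipped as Debian packages: a bugfix is just
# another repository entry (URL and package name are invented):
echo 'deb http://repo.example.com/nos stable main' | \
  sudo tee /etc/apt/sources.list.d/nos.list
sudo apt-get update
sudo apt-get install --only-upgrade nos-bgpd   # upgrade one daemon only
```

Everything else on the box stays exactly as it was, and your validation effort shrinks to the one component you actually changed.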
You can also get monolithic Cumulus images instead of individual packages. You’d obviously use the monolithic images for initial installations, and might prefer them for major upgrades… but at least you have options.
Finally, keep in mind that what I just described has nothing to do with the “horrors of monolithic vertically integrated stack” that SDN evangelists like to ramble about; it’s a simple consequence of 20-year-old way of delivering software (going all the way back to shipping EPROMs to the customers) that never changed.
Unfortunately, as I see startups launching new products using the same BFI approach, it seems we’ll be stuck with this nightmare for a long time – it looks like almost everyone working for a networking vendor (regardless of how many vendors or startups they worked for in their career) considers this outdated methodology best current practice.
Most of the Linux packages you’re referring to are not very performance-sensitive, while on routers everything is performance-sensitive (either raw pps or control-plane scalability). Everything that’s raw-pps-related in Linux is hidden in the kernel. In other words, you’d have to upgrade the whole kernel to fix such a bug in Linux, and reboot the box. Not much different.
If you release your software as hundreds or thousands of tiny Lego bricks, you will run into another problem: interoperability. Not all Lego bricks will be compatible. You’ll need to keep track of what works with what, and the developers have to keep it in mind too, document it, and hope they tested all the combinations a customer could come up with.
I don't think it's as simple as you describe here.
Of course I do agree that having different components running as different processes, e.g. like in a Unix environment, is to be preferred over a monolithic one-memory-space, one-image-blob architecture. But it's easier said than done.
Most Linux distributions use different releases to separate maintenance and important bugfixes from new features. When an issue is discovered in some software and the upstream developer releases a new version (including bug fixes and new features), the Linux distro folks hunt down the actual bugfix and apply it to the software version that was released with the Linux distribution (e.g. Debian 7.x = Wheezy).
The newer upstream software releases are packaged and compiled for a future major release of the Linux distribution (e.g. Debian 8.x = Jessie). Once the new major release is out, the "old" stable release will be supported for some time (e.g. 18 months) with security fixes only. With "enterprise" distributions you'll typically get support for a few more years, but the story stays the same: the release of a new major version (with new bugs and features!) still requires you to upgrade your systems to it if you'd like to keep getting security fixes.
Technically, one can upgrade an entire system package by package. However, some packages may no longer be supported in the new major release, yet they're not uninstalled from your box either. So when you install a fresh system next to your upgraded one, it might look, act and feel different due to the slightly different set of installed packages. If you're asked to rebuild such a system, you simply can't take the latest major release: some software will be missing or will change the overall behaviour.
Hence many experienced Linux system administrators take the "major release" cycle as a chance to rebuild their systems from scratch. Some approaches to systems administration apply the same logic to every application release: every time a new software release is deployed, new VMs/containers are freshly installed according to some Puppet manifest, Chef recipe or Ansible playbook, the new systems are introduced into the running application cluster, and the older systems are removed.
Typical networking vendors can also apply the same logic for the BFI approach: a "stable" branch won't receive new features for a long time, but only bugfixes. New features are added to a "fresh" branch. Every other year, "fresh" becomes the next "stable". The "old stable" release continues to receive bugfixes for a year, but that's it: users are asked to upgrade to the "new stable".
Of course, the developers or BFI builders need to apply the same bugfix both to the current release and to the older release. For some time the actual difference is small and easy to manage, but after about 2-3 years the differences may become large. Luckily, that's just the point where a new major release gets created. Some vendors may not have noticed this point :)
So by installing a stable BFI of the same major version, following the scenario above, you'd only be installing the vendor-supplied bugfixes in a reproducible way. Individually composing your system out of thousands of different packages looks like a nightmare for traditional vendors: it might be very hard for them to reproduce and fix the bug you report.
Of course you still have to download an image with all the features inside, so it's not quite there yet...
With IOS-XR, Cisco moved to a [somewhat] modular approach: you can update individual features (read: processes) by installing Package Installation Envelopes (PIEs).
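For readers who haven't used it, the classic IOS-XR install workflow looks roughly like this (the image names, versions and server address are placeholders; check the release notes for the exact syntax on your platform):

```
RP/0/RSP0/CPU0:router# admin
RP/0/RSP0/CPU0:router(admin)# install add source tftp://192.0.2.1 asr9k-mpls-px.pie-4.3.2
RP/0/RSP0/CPU0:router(admin)# install activate disk0:asr9k-mpls-px-4.3.2
RP/0/RSP0/CPU0:router(admin)# install commit
```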
Cheers,
Igor