Scalable, Virtualized, Automated Data Center

Matt Stone sent me a great set of questions about the emerging Data Center technologies (the headline is also his, not mine) together with an interesting observation “it seems as though there is a lot of reinventing the wheel going on”. Sure is – read Doug Gourlay’s OpenFlow article for a skeptical insider view. Here's a lovely tidbit:

So every few years the networking industry invents some new term whose primary purpose is to galvanize the thinking of IT purchasers, give them a new rallying cry to generate budget, hopefully drive some refresh of the installed base so us vendor folks can make our quarter bookings targets.

But I’m digressing, let’s focus on Matt’s questions. Here are the first few.

It seems that when you boil it down people are looking for a way to automate network configuration. (VM-aware of course) So that an administrator from either the service provider or tenant can make a few clicks and the network does what it is supposed to do.

Absolutely. I’m just not sure any more that you should be doing those changes in the networking devices (I hope you aren’t reconfiguring your network every time you add a new application or a new IP phone). The final solution should be a total decoupling – network provides IP transport and the virtualized servers and appliances do whatever they like over it. The more I think about it, the more it looks like we need something like the kernel/user space split in operating systems, not an EVB-like ever-tighter coupling between the network core and the edge.
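To make the decoupling concrete: in an overlay model the virtual segment lives entirely inside an encapsulation header carried over plain IP, so the physical network never has to know about it. A minimal sketch following the VXLAN header layout (RFC 7348) — the function names and the example VNI are mine, and the outer IP/UDP transport headers are deliberately left out:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte, 24-bit VNI.

    The virtual segment (VNI) exists only in the encapsulation;
    the transport network sees nothing but outer IP/UDP packets.
    """
    flags = 0x08  # "valid VNI" flag per RFC 7348
    return struct.pack("!B3xI", flags, vni << 8)  # VNI sits in the top 3 bytes

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    # Outer IP/UDP headers omitted -- delivering them is the IP transport's job.
    return vxlan_header(vni) + inner_frame

hdr = vxlan_header(5001)
```

The point of the sketch: adding a new virtual segment means picking a new VNI in software, not touching a single physical switch.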

I don't have a problem with automation at all; in fact I think it would be brilliant, but why isn't anyone trying to attack that problem simply.

Until we make the virtual networks into applications running on top of IP, automation will remain a hard problem. You know that every data center’s design (or at least the way it’s wired together) is unique (which doesn’t necessarily mean it’s useful). Writing automation scripts that generate network device configurations for an unknown topology is a tough exercise.
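The flip side is why regular topologies matter: when every switch follows the same pattern, config generation degenerates into filling in a template from inventory data. A toy sketch (the inventory format and the CLI dialect are made up for illustration):

```python
# Hypothetical inventory: identical ToR switches, differing only in data.
INVENTORY = {
    "tor-1": {"uplinks": ["Ethernet49", "Ethernet50"], "access_vlan": 110},
    "tor-2": {"uplinks": ["Ethernet49", "Ethernet50"], "access_vlan": 120},
}

def render_config(switch: str) -> str:
    """Generate a switch config purely from inventory data --
    trivial here only because the topology is uniform."""
    data = INVENTORY[switch]
    lines = [f"hostname {switch}"]
    for port in data["uplinks"]:
        lines += [f"interface {port}", " switchport mode trunk"]
    lines.append(f"vlan {data['access_vlan']}")
    return "\n".join(lines)
```

In a unique snowflake topology, every `render_config` branch becomes a special case, and the script collapses under its own weight.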

I was involved with an IT-as-a-Service product that did exactly what Matt is asking for: you’d add a new user or company, and the provisioning script would configure the switches, firewalls, and Cisco’s Call Manager (if the user had an IP phone), create the user’s account in Windows AD, create a new VDI VM for the user (and the associated disk), install the software in the VDI VM, and a few other things.

It worked really well because the underlying infrastructure was tightly controlled – you couldn’t just slap that automation tool on top of a spaghetti hodgepodge of switches, firewalls, load balancers and servers from multiple vendors (because Gartner told you mix-and-match will automagically lower your TCO) and expect it to work. It’s not a problem to handle multi-vendor environments, but there has to be at least some order in the chaos.

Netconf would enable you to write the correct configuration to a switch given a set of changes from a front end. This doesn't seem like it would be too difficult to implement into a management suite.

NETCONF is just a transport mechanism (RPC over SSH, originally also SOAP) and a slightly more convenient encoding format (XML instead of free-form text). It simplifies the implementation details (you don’t have to write expect scripts), but not the hard part of the problem: generating the device configurations.
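You can see the split in a few lines of code. Here’s a sketch of building a NETCONF &lt;edit-config&gt; RPC with nothing but the standard library — the envelope is standardized and trivial; the config payload you have to put inside it is still vendor- and topology-specific (the interface snippet below is an arbitrary example, not any particular vendor’s schema):

```python
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def edit_config_rpc(config_xml: str, message_id: str = "101") -> str:
    """Wrap a device-specific config snippet in a NETCONF <edit-config> RPC.

    NETCONF standardizes this envelope and the transport; producing
    a correct config_xml for your devices is the part it can't help with.
    """
    rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": message_id})
    edit = ET.SubElement(rpc, f"{{{NS}}}edit-config")
    target = ET.SubElement(edit, f"{{{NS}}}target")
    ET.SubElement(target, f"{{{NS}}}running")
    config = ET.SubElement(edit, f"{{{NS}}}config")
    config.append(ET.fromstring(config_xml))
    return ET.tostring(rpc, encoding="unicode")

rpc = edit_config_rpc("<interface><name>ge-0/0/0</name></interface>")
```

Everything interesting happens in the string you pass in, which is exactly the config-generation problem we started with.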

If the management suite is aware of virtual machines, the ACL's you have assigned to a given interface/VM can follow the VM where it goes too.

Absolutely, but solving the packet filtering/firewalling problem in the ToR switch is an architecturally flawed approach. It’s a kludge that circumvents the lack of functionality of VMware vSwitch.

Please allow the late-2020 version of me to interrupt this blog post. Xen is mostly dead, OpenFlow is a fringe technology not a world-dominating miracle, vShield is dead (NSX-T is the current cool kid on the block), and I haven’t heard from NEC in a long long while. Just so you know how fast things change when you’re dealing with “emerging” ideas. In the meantime, VMware vSwitch remained a kludge. Some things never change.

You can apply packet filters directly to VM interfaces if you use Xen or KVM, and use OpenFlow to download the filters to the hypervisor hosts if your distribution includes Open vSwitch. The XenServer distribution includes the vSwitch Controller, an OpenFlow controller for Open vSwitch.
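For a feel of what such a per-VM filter looks like at the flow-table level, here’s a rough hand-written equivalent using ovs-ofctl (bridge name, port number and addresses are made up; in practice the controller installs flows like these for you):

```shell
# Hypothetical per-VM filter on OVS bridge br0, VM attached to port 5:
# permit web traffic to the VM, drop everything else destined for it.
ovs-ofctl add-flow br0 "priority=200,tcp,in_port=5,tp_src=80,actions=normal"
ovs-ofctl add-flow br0 "priority=100,tcp,nw_dst=10.0.0.5,tp_dst=80,actions=normal"
ovs-ofctl add-flow br0 "priority=50,ip,nw_dst=10.0.0.5,actions=drop"
```

Because the match can key on the VM’s own port and addresses rather than a ToR switch interface, the filter naturally moves with the VM.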

VMware is solving the same problem with vShield Zones/App, which has “slightly” lower performance than kernel-based packet filters because all VM traffic has to travel through the firewall VM’s user space.

Finally, it’s just a step from what NEC demonstrated at last Networking Tech Field Day to a fully-automated data center network setup that uses vCenter data to auto-provision VLANs needed by VMs. When will we see a tool like that? Who knows, it might not be sexy enough to get funding, even though it would be extremely useful to mid-range customers (but they are usually the ones getting ignored by the vendors).
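The core logic of such a tool is almost embarrassingly simple — join the vCenter inventory (VM → port group/VLAN, VM → host) with the physical wiring (host → switch port) and you know exactly which VLANs each ToR port needs. A sketch with entirely made-up inventory data:

```python
# Hypothetical inventory pulled from vCenter and a cabling database.
vm_to_vlan = {"web-01": 110, "web-02": 110, "db-01": 120}
vm_to_host = {"web-01": "esx-1", "web-02": "esx-2", "db-01": "esx-1"}
host_to_port = {"esx-1": ("tor-1", "Eth1"), "esx-2": ("tor-1", "Eth2")}

def vlans_per_port() -> dict:
    """Compute the set of VLANs that must be trunked on each switch port."""
    needed: dict = {}
    for vm, vlan in vm_to_vlan.items():
        port = host_to_port[vm_to_host[vm]]
        needed.setdefault(port, set()).add(vlan)
    return needed
```

Re-run it after every vMotion and prune/add VLANs on the trunks accordingly — that’s the whole auto-provisioning loop.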

More information

I talked about these topics in CloudCast Episode 34, and you can find more details in the ipSpace.net virtualization and data center webinars.

2 comments:

  1. VCD does all of that and doesn’t care about your switched network, assuming you have a VDS or 1KV.

    The automated data center = auto-provisioning VMs (building standard Linux/Windows servers or clients) from a pre-built template VM that gets copied, within a number of pre-assigned layer-2 clouds that live behind a virtual edge firewall, with IPv4 routes (large ones, /23 to /21) from your core already defined. Auto-provisioning also assigns IP addresses for these servers, as well as ‘pre-defined’ firewall rules on the edge device and on the servers’ NICs.

    This lets the Windows or Linux group quickly spin up a machine for a customer to use. The Windows guys can’t spend six days (for some reason) building a server anymore.

    I don’t feel this takes away much from the network group. Firewall rules still need to be customized for any deployment, the routes for the server or VDI address space still have to be defined, ToR switches will always be around and need to be installed, address space will need to be provided and installed, and troubleshooting will always end with the NE. Essentially, the storage and network engineers are still doing exactly what they were before, except for assigning IPs to individual servers (not sure I agree with this) and assigning LUNs to servers (not sure I agree with this either).

    Most of the work is taken away from the server ‘build’ group. Of course they are harping that they still need 5 days to patch, as the copied image can’t keep up with Linux and Microsoft’s monthly patching cycles.
  2. VCD will be a very interesting solution for server (not so much VDI) deployment once it matures. Have you taken a close look at vShield Edge functionality?