It’s OK to Let Developers Go @ Amazon Web Services, but Not at Home? You Must Be Kidding!

Recently I was discussing the benefits and drawbacks of virtual appliances, software-defined data centers, and a self-service approach to application deployment with a group of extremely smart networking engineers.

After the usual set of objections, someone said, “But if we don’t become more flexible, the developers will simply go to Amazon. In fact, they already use Amazon Web Services.”

Intermezzo: the usual objections

These are the objections I usually get from the networking and security teams:

  • The developers have no idea what they need;
  • The application teams will misconfigure the firewalls, perhaps adding a “permit any any” at the bottom of an access list when everything else fails (see the sketch after this list);
  • Who knows what load balancing algorithm they’ll choose… and then they’ll complain the performance isn’t what they expected;
  • Who will manage all those firewalls?
  • How will you audit thousands of application-specific firewalls?
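
To make the “permit any any” fear concrete, here is a minimal sketch of its cloud-era equivalent, written with boto3 (the AWS SDK for Python). The security group ID is a hypothetical placeholder; the point is how little it takes to open everything up:

    import boto3

    ec2 = boto3.client("ec2")

    # "Everything else failed, so open it all up": every protocol, every port,
    # reachable from the whole Internet. Exactly the rule the security team dreads.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
        IpPermissions=[{
            "IpProtocol": "-1",          # -1 means all protocols and all ports
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )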

Back to Amazon

While we’re seriously pondering the grave implications of allowing uncouth hands to touch the network services devices, and deliberating whether to use packet filters or stateful firewalls, Amazon Web Services solved the problem – you can configure security groups, elastic IP addresses, and elastic load balancing with reasonably simple GUI actions or API calls.
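
To give you a feel for how simple that self-service workflow is, here is a minimal sketch using boto3 (the AWS SDK for Python). The VPC and subnet identifiers are hypothetical placeholders, and a real deployment obviously needs more than this, but the application-facing network services fit in a handful of API calls:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    elb = boto3.client("elbv2", region_name="us-east-1")

    # 1. Create a security group for the application web tier
    sg = ec2.create_security_group(
        GroupName="app-web-tier",
        Description="Web tier of the sample application",
        VpcId="vpc-0123456789abcdef0",                   # hypothetical VPC ID
    )

    # 2. Permit inbound HTTPS to the web tier (and nothing else)
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

    # 3. Allocate an elastic IP address
    eip = ec2.allocate_address(Domain="vpc")
    print("Elastic IP:", eip["PublicIp"])

    # 4. Put an elastic load balancer in front of the web tier
    elb.create_load_balancer(
        Name="app-web-lb",
        Subnets=["subnet-11111111", "subnet-22222222"],  # hypothetical subnets
        SecurityGroups=[sg["GroupId"]],
        Scheme="internet-facing",
        Type="application",
    )

No tickets, no change windows, no waiting for the networking team: that is the bar any in-house self-service offering is being measured against.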

Did Amazon implement every single feature found in an F5 load balancer or Palo Alto firewall? Of course not, but what they offer is good enough to get millions of applications deployed on their infrastructure.

Even more interesting, numerous large enterprises already have live Amazon Web Services deployments (usually done without the involvement of networking or security teams)… and yet there are still questions whether

  • We can trust those same application developers to do the right thing when deploying their applications in the private cloud;
  • We need fancy hardware-based load balancers and firewalls to support those applications.

We’re clearly doing something wrong.

Be conservative, but not rigid

I would be the last one to tell you to use a happy-go-lucky approach to network services and security for mission-critical applications or legacy **** that’s lying around your data center.

On the other hand, don’t always over-engineer your solution for the worst-case scenario. There are many applications that need just-good-enough performance and security, and if the business owners think it’s OK to deploy them on AWS, it’s perfectly OK to use the same self-service approach when deploying these applications in your private cloud.

Related webinars

If you haven’t watched the cloud networking webinars, check them out. You can buy the whole bundle, or get access to all of them with the webinar subscription.

5 comments:

  1. And realize that networking is not unique to networkers. Many technologists understand load balancing and firewalling well enough to support their own needs.

    I still hold to the idea that the greatest risk to security in IT is the sheer complexity of many of today's systems. Amazon has focused on delivering "good enough" while removing all the extraneous functions. IT in general and networking in particular must trim its portfolio to deliver what is required and drop the extras.
  2. There is a paradigm shift that I don’t think most application developers understand. In the traditional enterprise model the network is built around the application requirements; now we’re saying the application has to be built around the network. So far I’ve seen a set of developers who can’t understand the dozen load balancer options we currently offer and can’t correctly communicate their security needs, yet they should be given the keys to our Intellectual Property Kingdom? When things go bad they claim ignorance and say the network and systems guys should have ensured they were doing it right, and in fairness they are right: why would they have this knowledge? In my experience developers don’t understand the difference between latency and bandwidth, and are amazed that their application works better on a workstation with a smaller CPU but with local web, DB and app services than it does when they split those services across the pond. I just don’t understand how we can bridge that gap anytime soon.

    I guess my point is that we should not hinder growth onto commodity or “cloud” services, but I don’t think the network and systems guys can give up this control and still expect a secure environment.
  3. This is one time I don't fully agree with Ivan.

    We encountered this exact situation in my company. Developers wanted self-provisioning, instant deployments, etc... and our IT was simply not mature enough to deliver. So the Devs got approvals to use AWS. They got what they wanted... for the most part.

    Then they needed dependencies enabled back to our data centers. No clue about how tunnels worked. They discovered that a lot of things don't work when Active Directory isn't fully visible to AWS VMs. They did not put a lot of thought into traffic categorization so it was hell for security to do compliance auditing. And yes, they literally did ask if the Security Group could just permit everything to our internal networks then watch over time to see what's flowing across the tunnels and then develop a policy later.

    In the end, most of it was torn down. Someone finally saw the bills and realized that the developers were so thrilled with AWS that they spun up insane amounts of VMs, VIPs and other services... but they almost never spun them down. Not their problem, they insisted. Elastic services aren't really required if there is no growing/shrinking.

    Now we are tackling some of this in-house with VMware-based solutions. We have no problem with them defining their security groups or managing their own LBs. They even manage east/west traffic with security groups. But there are limits to how much they can do via catalogs without engaging IT, and that's intentional. North/south still requires engaging the Security teams. We plan to reduce that dependency over time as we adopt more sophisticated orchestrators and capacity management tools.

    Just our experience, YMMV
  4. Same experience here. No monitoring, endless bickering resulting in fractured groups, and then of course the bills. AWS is great if you're a startup. It was interesting to see what happened when the Devs ran the show.
  5. I think the arguments in this post are correct, but the topic is far too large and complex to be discussed in a couple of lines. I like working with other groups (DC, Dev...) and I'm genuinely interested in hearing their problems and fixing them as much as possible.
    What I dislike the most is the "I don't care" attitude: if you cannot deliver, I don't care, I'll take my toys and go to xxxxx.
    Really? Is this what an enterprise environment means?

    I always try to draw a parallel with everyday life. So, you have a very powerful car and you want to drive it at 300 km/h right now. Does the entire landscape (streets, restrictions, nature...) change just because you want that? No.

    Of course the groups the network guys interact with see no problem in having their wishes fulfilled, and most of the time we try to make everybody happy, but sometimes it's just not possible. If you have a VPS and do something wrong, you shut it down and bring another one up.
    If I modify the entire network to make one group happy and the rest have a problem, I cannot just shut down the network and bring up a new one (and no, I don't mean SDN here...).

    I think the key is cooperation, but it has to come from both parties. We (network guys) cannot lock everything away in our world; we need to share our toys up to a certain level. The flip side is that the other groups have to tell us what they need in advance... not two days after.
    :)

    If there are doubts: imagine you go to an amusement park and use whatever you want, but do they let you _control_ their toys?