
Using CSR1000V in AWS Instead of Automation or Orchestration System

As anyone starting their journey into AWS quickly discovers, cloud is different (or, as I wrote in the description of my AWS workshop, you’ll feel like Alice in Wonderland). One of the gotchas: when you link multiple routing domains (Virtual Private Clouds – the other VPC), you have to create static routing table entries on both ends. Even worse, there’s no transit VPC – you have to build a full mesh of peering relationships.

The correct solution to this challenge is automation:

  • Define what prefixes exist in each VPC;
  • Define which VPC has to communicate with which other VPC (or just decide to build a full mesh);
  • Use an Ansible playbook (or a gazillion other tools) to adjust VPC peering sessions and static routes whenever there’s a change in addressing or connectivity requirements.
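The three steps above amount to computing a desired state and letting a tool reconcile it. Here’s a minimal Python sketch of the desired-state computation (the inventory and the `pcx-` naming are made up for illustration; in real life the output would feed an Ansible playbook or boto3 calls):

```python
from itertools import combinations

# Hypothetical inventory: VPC name -> prefixes used in that VPC
vpcs = {
    "prod":   ["10.1.0.0/16"],
    "dev":    ["10.2.0.0/16"],
    "shared": ["10.3.0.0/16", "10.4.0.0/24"],
}

def full_mesh_plan(vpcs):
    """Return the peering sessions and static routes a full mesh needs."""
    peerings = list(combinations(sorted(vpcs), 2))
    routes = []
    for a, b in peerings:
        # Each VPC needs a route for every prefix on the other side of
        # the peering session ("pcx-..." identifiers are invented here).
        pcx = f"pcx-{a}-{b}"
        routes += [{"vpc": a, "dest": p, "via": pcx} for p in vpcs[b]]
        routes += [{"vpc": b, "dest": p, "via": pcx} for p in vpcs[a]]
    return peerings, routes

peerings, routes = full_mesh_plan(vpcs)
print(f"{len(peerings)} peering sessions, {len(routes)} static routes")
# → 3 peering sessions, 8 static routes
```

Rerun it whenever a prefix or a VPC changes, diff against what’s deployed, and push the delta – that’s the whole “orchestration system.”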

Does this sound like gobbledygook? The Amazon Web Services Networking webinar might help you.

Not surprisingly, there are large environments out there that are incapable of getting such a simple idea off the ground for Layer 8-to-10 reasons… and whenever vendors identify a potential lack of competence, they’re quick to fill that niche with yet another product.

There are orchestration tools out there that do exactly what I described. Then there are networking vendors taking the “if all I have is a router, everything looks like a routing/forwarding problem” approach. Here’s how Cisco and a few competitors propose you “solve” this challenge:

  • Create transit VPC with public Internet access;
  • Deploy CSR1000V (or an equivalent product) in the transit VPC;
  • Hey, make that two CSR1000Vs – redundancy is important;
  • Create IPsec tunnels between workload VPCs and the transit VPC, and run BGP on them (see how they changed an orchestration problem into a routing problem?);
  • The CSR1000Vs in the transit VPC collect prefixes from workload VPCs and advertise them to all other workload VPCs. If you believe in the “whatever the question, BGP is the answer” approach, you can use BGP communities and route maps to control the inter-VPC connectivity.
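On the CSR1000V side, each workload-VPC tunnel ends up looking roughly like this (an illustrative fragment only – all addresses, AS numbers, key and profile names are invented):

```
! Hypothetical transit-VPC CSR1000V configuration fragment
crypto ikev2 keyring AWS-KEYS
 peer WORKLOAD-VPC-1
  address 203.0.113.10
  pre-shared-key MadeUpKey123
!
crypto ikev2 profile AWS-IKE
 match identity remote address 203.0.113.10 255.255.255.255
 authentication remote pre-share
 authentication local pre-share
 keyring local AWS-KEYS
!
crypto ipsec profile AWS-IPSEC
 set ikev2-profile AWS-IKE
!
interface Tunnel101
 description IPsec+BGP to workload VPC 1
 ip address 169.254.101.1 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel destination 203.0.113.10
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile AWS-IPSEC
!
router bgp 65000
 neighbor 169.254.101.2 remote-as 65101
 ! prefixes received from one workload VPC are advertised to all others
```

Multiply that by two routers and N workload VPCs, and every one of those tunnel endpoints, keys, and BGP sessions still has to be provisioned and monitored – the automation problem you started with, plus routers.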

Now let’s see what’s really going on:

  • You’re paying CSR1000V licensing fees to Cisco;
  • AWS charges you for two extra VMs continuously running in the transit VPC;
  • You’re paying for VPN traffic exiting the workload VPCs;
  • You’re paying for IPsec traffic exiting the transit VPC.

All that might be cheaper than buying an orchestration system, or building a solution that provisions VPC connectivity… or not. As always, it depends – in this case on your size and traffic volume.
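Back-of-the-envelope arithmetic makes that “it depends” concrete. All rates in this sketch are placeholders, not actual AWS or Cisco pricing – plug in your own numbers:

```python
# All rates below are made-up placeholders; substitute your actual pricing.
csr_hourly       = 1.00    # CSR1000V license + EC2 instance, per router-hour ($)
routers          = 2       # redundant pair in the transit VPC
egress_per_gb    = 0.09    # data transfer out of a VPC, per GB ($)
traffic_gb_month = 5_000   # inter-VPC traffic per month (GB)

hours_per_month = 730
fixed = routers * csr_hourly * hours_per_month
# Every inter-VPC gigabyte leaves the workload VPC once and the transit
# VPC once, so it's billed (at least) twice.
variable = 2 * egress_per_gb * traffic_gb_month

print(f"fixed: ${fixed:,.0f}/month, traffic: ${variable:,.0f}/month")
```

Compare the total against the price of an orchestration tool (or the engineering time for a playbook) at your traffic volume – the break-even point is what matters.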

We dug into these details during a Cisco Live Europe 2018 Networking Field Day session – you might enjoy that conversation (I’m not sure Cisco’s presenter did).


7 comments:

  1. Transit VPC is a challenge for anyone trying to design a hub-and-spoke topology in AWS. I wouldn't be surprised to see AWS release a new type of VPC specifically for this purpose.

    1. Think about how packet forwarding works in overlay virtual networks, and you'll soon figure out why transit VPC is such a Mission Impossible.

      AWS is really good at not giving in to features that would endanger their scalability and stability (layer-2 stuff and IP MC come to mind).

    2. I had a similar situation a few years ago for a large client moving their apps into AWS and Azure. It turned into an absolute pain to manage, as neither the MSP nor the client saw the value in automation. On the bright side, we were able to set up a transit VNet within Azure.

      They required IPsec to terminate within the VPCs/VNets, but also didn't allow public IP addressing into their MPLS VPN due to FUD (these were Direct Connect and ExpressRoute circuits), so native cloud IPsec (VPGs etc.) couldn't be used. The client was Cisco-based, so we deployed CSR1000Vs at the cloud providers into the DMVPN on the WAN.

      Then the client wanted to use more than one VPC/VNet and didn't want to set up N number of peering links between VPCs (again, didn't like automation) or separate CSRs with the security licence and big instances to do IPSec. So we ended up getting sign off on a transit VPC and running a second tier DMVPN network inside AWS without tunnel protection profiles.

      This is really just a story of bringing old culture into the new world. Want a VPC? Sorry, that'll be a new PO to the MSP to create the CSRs with an 8 week lead time, plus 2 weeks design review and 2 week change management lead time.

  2. So with your Ansible playbook, you actually invented something called a routing protocol. Why reinvent the wheel? We've had BGP for over 20 years.

    1. You don't need a knife to cut that cheese - we've had chainsaws for the last 100 years. Yep, makes perfect sense.

  3. Don't worry, I'm sure they're working on some kind of APIC to control VPCs (which will cost even more than 1000Vs but will be justified by providing higher throughput). :-/

