Using CSR1000V in AWS Instead of an Automation or Orchestration System
As anyone starting their journey into AWS quickly discovers, cloud is different (or, as I wrote in the description of my AWS workshop, you feel like Alice in Wonderland). One of the gotchas: when you link multiple routing domains (Virtual Private Clouds – the other VPC), you have to create static routing table entries on both ends. Even worse, there’s no transit VPC – you have to build a full mesh of peering relationships.
The correct solution to this challenge is automation:
- Define what prefixes exist in each VPC;
- Define which VPC has to communicate with which other VPC (or just decide to build a full mesh);
- Use an Ansible playbook (or a gazillion other tools) to adjust VPC peering sessions and static routes whenever there’s a change in addressing or connectivity requirements.
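The core of that playbook is pure bookkeeping. Here’s a minimal Python sketch of the orchestration logic – the VPC names and CIDRs are made up, and a real playbook (or a boto3 script) would then push this desired state into AWS as peering sessions and route-table entries:

```python
# Hypothetical sketch: given the prefixes in each VPC, compute the
# full mesh of peering sessions and the static routes every VPC's
# route table needs. VPC names and CIDRs are made up.
from itertools import combinations

vpcs = {
    "vpc-app":  "10.1.0.0/16",
    "vpc-db":   "10.2.0.0/16",
    "vpc-mgmt": "10.3.0.0/16",
}

def desired_state(vpcs):
    # One peering session per VPC pair...
    peerings = list(combinations(sorted(vpcs), 2))
    # ...and a static route on BOTH ends of every session --
    # the part AWS won't create for you.
    routes = [(a, vpcs[b], (a, b)) for a, b in peerings] + \
             [(b, vpcs[a], (a, b)) for a, b in peerings]
    # routes: (VPC that gets the route, destination CIDR, peering link)
    return peerings, routes

peerings, routes = desired_state(vpcs)
# With N VPCs, a full mesh needs N*(N-1)/2 peering sessions and
# N*(N-1) static routes -- which is why you want a tool, not a human,
# maintaining them.
```

Whenever addressing or connectivity requirements change, you rerun the tool and reconcile the computed state against what’s actually configured in AWS.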
Sounds like gobbledygook? The Amazon Web Services Networking webinar might help.
Not surprisingly, there are large environments out there that are incapable of getting such a simple idea off the ground for Layer 8-to-10 reasons… and whenever vendors identify a potential lack of competence, they’re quick to fill that niche with yet another product.
There are orchestration tools out there that do exactly what I described. Then there are networking vendors using the “if all I have is a router, everything looks like a routing/forwarding problem” approach. Here’s how Cisco and a few competitors propose you “solve” this challenge:
- Create transit VPC with public Internet access;
- Deploy CSR1000V (or an equivalent product) in the transit VPC;
- Hey, make that two CSR1000Vs – redundancy is important;
- Create IPsec tunnels between workload VPCs and the transit VPC, and run BGP on them (see how they changed an orchestration problem into a routing problem?);
- CSR1000Vs in the transit VPC collect prefixes from workload VPCs and advertise them to all other workload VPCs. If you believe in the “whatever the question, BGP is the answer” approach, you can use BGP communities and route maps to control inter-VPC connectivity.
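For a feel of what that buys you, here’s a hedged sketch of one transit-VPC tunnel on a CSR1000V – all addresses, ASNs, keys, and interface numbers are made up, and a real deployment would add IKEv2 profiles, a second CSR for redundancy, and route filtering:

```
! Hypothetical CSR1000V transit-VPC fragment -- addresses, ASNs,
! and keys are invented for illustration.
crypto isakmp policy 10
 encryption aes 256
 hash sha256
 authentication pre-share
 group 14
crypto isakmp key MadeUpKey address 203.0.113.10
!
crypto ipsec transform-set TVPC-TS esp-aes 256 esp-sha256-hmac
 mode tunnel
crypto ipsec profile TVPC-PROFILE
 set transform-set TVPC-TS
!
interface Tunnel1
 description IPsec tunnel to workload VPC 1
 ip address 169.254.10.1 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel destination 203.0.113.10
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile TVPC-PROFILE
!
router bgp 65000
 neighbor 169.254.10.2 remote-as 65001
 address-family ipv4
  neighbor 169.254.10.2 activate
  ! BGP communities and route-maps controlling inter-VPC
  ! connectivity would be applied on this neighbor
```

Multiply this by the number of workload VPCs (times two for redundancy) and you get a sense of the configuration you’re now maintaining instead of a playbook.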
Now let’s see what’s really going on:
- You’re paying CSR1000V licensing to Cisco;
- AWS charges you for two extra VMs running continuously in the transit VPC;
- You’re paying for VPN traffic exiting the workload VPCs;
- You’re paying for IPsec traffic exiting the transit VPC.
All that might be cheaper than buying an orchestration system, or building a solution that provisions VPC connectivity… or not. As always, it depends – in this case on your size and traffic volume.
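You can put rough numbers on that “it depends” with a back-of-envelope calculation. All the prices below are made-up placeholders (check current AWS and Cisco pricing before drawing conclusions); the point is the cost structure, not the figures:

```python
# Back-of-envelope transit-VPC cost sketch. ALL prices are invented
# placeholders, not real AWS or Cisco list prices.
HOURS_PER_MONTH = 730

csr_instance_per_hour = 0.50   # assumed EC2 instance price per CSR
csr_license_per_hour  = 0.80   # assumed CSR1000V license price
egress_per_gb         = 0.02   # assumed per-GB charge for traffic
                               # leaving a VPC over the VPN

def transit_vpc_monthly_cost(traffic_gb_per_month):
    # Two CSR1000Vs running 24x7 in the transit VPC...
    vms = 2 * (csr_instance_per_hour + csr_license_per_hour) * HOURS_PER_MONTH
    # ...and every flow is billed twice: once leaving the workload
    # VPC and again leaving the transit VPC.
    traffic = 2 * egress_per_gb * traffic_gb_per_month
    return vms + traffic

low  = transit_vpc_monthly_cost(100)      # small shop
high = transit_vpc_monthly_cost(100_000)  # traffic-heavy environment
```

The fixed VM-plus-license cost dominates at low volumes, while the doubled egress charge takes over as traffic grows – which is exactly why the answer depends on your size and traffic volume.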
We dug into these details during a Cisco Live Europe 2018 Networking Field Day session – you might enjoy that conversation (I’m not sure Cisco’s presenter did).
AWS is really good at not giving in to feature requests that would endanger their scalability and stability (layer-2 stuff and IP multicast come to mind).
They required IPsec to terminate within the VPCs/VNets, but also didn’t allow public IP addressing into their MPLS VPN due to FUD (the circuits were Direct Connect & ExpressRoute), so native cloud IPsec (virtual private gateways, etc.) couldn’t be used. The client was Cisco-based, so we deployed CSR1000Vs at the cloud providers into the DMVPN on the WAN.
Then the client wanted to use more than one VPC/VNet and didn’t want to set up N peering links between VPCs (again, they didn’t like automation) or separate CSRs with the security licence and big instances to do IPsec. So we ended up getting sign-off on a transit VPC and running a second-tier DMVPN network inside AWS without tunnel protection profiles.
This is really just a story of bringing old culture into the new world. Want a VPC? Sorry, that’ll be a new PO to the MSP to create the CSRs, with an 8-week lead time, plus 2 weeks of design review and a 2-week change-management lead time.