State of IPv6 in the Data Center Gear
Just in case you haven’t noticed: the RIPE region ran out of unallocated IPv4 addresses last Friday. RIPE members (local internet registries) can get a single final /22 each, and enterprises that want to be IPv4-multihomed cannot get provider-independent addresses any more. It just might be time to start considering IPv6 in your data center. Let’s see whether the vendors agree with me.
Data center switches
I have some great news for you: most vendors started supporting IPv6 in their data center switches.
Cisco added IPv6 support to the Nexus 5500 in the latest software release (all of Cisco’s other DC switches have supported IPv6 for quite a while), and Arista EOS documentation has a section on IPv6, although the data sheets still claim IPv6 will be supported in a future release.
All switches from Dell Force10 and HP, as well as EX-series switches from Juniper and MLX switches from Brocade, have had IPv6 support for a long time, as have Avaya and Alcatel-Lucent.
And the products stuck in IPv4-only land? Juniper’s QFabric has very limited IPv6 support in Junos release 12.2 (static routes on the network node group only), Brocade VDX switches will support layer-3 forwarding with the new Network OS 3.0, but only for IPv4, and NEC ProgrammableFlow switches are still IPv4-only devices.
Firewalls and load balancers
Most (physical) firewalls and load balancers supported IPv6 for a while, often with significantly reduced performance. There were some notable exceptions, but that’s gruesome history that we should quickly forget.
Virtual appliances

Obviously the virtual versions of physical appliances (example: BIG-IP Virtual Edition from F5) support IPv6, but one would hope that the vendors claiming to be focused on network virtualization wouldn’t forget that there’s life beyond virtual MACs and IPv4. Some of them must be living on a different planet.
Nexus 1000V, Virtual Security Gateway (VSG) and vShield Edge (oops, VMware vCloud Networking and Security) have no IPv6 support. Juniper has announced IPv6 in its vGW Virtual Gateway, but the supporting documentation hasn’t found its way to Juniper’s website yet.
The clear winner in this category: Cisco ASA 1000V Cloud Firewall does not support IPv6. Let me get this straight: you took ASA code that had IPv6 support since (at least) release 7.0(1) from June 2007 and you removed IPv6 from it? Wow. Just wow.
Do we care?
You might not. You might decide that deploying a 6-to-4 load balancer in front of your legacy data center is good enough for the next 25 years. However, there are a “few people” that do care – the service providers trying to provide common services to their data center clients. They cannot rely on coordinated usage of network 10.0.0.0/8 and thus have to provide the common services on public IPv4 addresses. Somewhat hard if you can’t get them, don’t you think?
For what it’s worth, I learned most of what I know regarding v6 from your videos. That makes you a Borg, Ivan :)
Well, one vendor was quite inexpensive with its products. Of course we asked them about the required IPv6 support, because IPv6 was not mentioned in their manuals or datasheets.
Of course they excused themselves (the responsible employee must have overlooked that topic), but they promised that IPv6 would be implemented soon.
Fortunately, that vendor was thrown out. (There were other issues, e.g. no Active Directory integration, ...) Looking at their datasheets, they are now advertising IPv6 for their wireless controllers. Looking at the manual, you find that the controller itself can now be given an IPv6 address for management, but there is no dual stack for its access points and no IPv6 traffic is transferred over the tunnel.
(Side note: if you let such a vendor win the public tender, they are no longer committed to fulfilling those requirements, whatever they admitted or promised beforehand. Strange? No, public tenders in my country...)
- check "IPv6 support"
- vendors will tell you a lot just to sell you their products
- never do a public tender
For two years support had been saying that IPv6 was coming in v5.8 or newer, but still nothing. Major fail.
Enjoy the site! You may have forgotten about 100.64.0.0/10, defined in RFC 6598; while not as large as a /8, this space is guaranteed not to overlap. It doesn't solve the lack of v6 support, but it makes transitional mechanisms... easier.
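To make the point concrete, a minimal sketch using Python's standard `ipaddress` module: the RFC 6598 shared address space cannot collide with any of the RFC 1918 private ranges a customer might already be using, although it is only a quarter the size of 10.0.0.0/8.

```python
import ipaddress

# RFC 6598 shared address space (intended for carrier-grade NAT)
shared = ipaddress.ip_network("100.64.0.0/10")

# RFC 1918 private ranges a customer might already use
rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

# Guaranteed not to overlap with any of them
assert not any(shared.overlaps(net) for net in rfc1918)

# ...but a /10 holds only a quarter of what a /8 does
print(shared.num_addresses)                               # 4194304
print(ipaddress.ip_network("10.0.0.0/8").num_addresses)   # 16777216
```

The size gap is exactly the trade-off the comment hints at: enough space for transition mechanisms, not enough to renumber a large legacy data center into.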
For security devices, I used a tool to craft custom IPv6 packets and beat up on devices to see how the protocol behaved; I used it to test ACL processing and logging, too.
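The tool isn't named, but the idea is easy to reproduce. Here's a minimal sketch that hand-crafts an IPv6 fixed header (field layout per RFC 8200) with a Type 0 Routing extension header chained behind it, the kind of packet you'd throw at a firewall's EH handling; actually putting it on the wire would additionally need a raw socket and root privileges, so this only builds the bytes.

```python
import struct

def ipv6_header(src: bytes, dst: bytes, payload_len: int,
                next_header: int, hop_limit: int = 64) -> bytes:
    """Build the 40-byte IPv6 fixed header (RFC 8200)."""
    # Version (6), Traffic Class (0), Flow Label (0) packed into 32 bits
    ver_tc_flow = 6 << 28
    return struct.pack("!IHBB", ver_tc_flow, payload_len,
                       next_header, hop_limit) + src + dst

# Minimal Type 0 routing header: next header 59 ("no next header"),
# hdr-ext-len 0 (8 bytes total), routing type 0, segments-left 0, 4 pad bytes
routing_eh = struct.pack("!BBBB4x", 59, 0, 0, 0)

src = bytes(15) + b"\x01"   # ::1 as a stand-in address
dst = bytes(15) + b"\x01"
# Next Header 43 = Routing extension header
packet = ipv6_header(src, dst, len(routing_eh), 43) + routing_eh

assert len(packet) == 40 + 8
assert packet[0] >> 4 == 6      # version nibble
```

Varying the extension-header chain (fragments, hop-by-hop, nested routing headers) from a template like this is essentially what such test tools automate.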
Some examples from last spring:
The Infoblox DNS appliance claimed IPv6 support but didn't support some important RFCs. Also, if you changed the IPv6 address, it bounced the IPv4 services too; not good for production. We were basically debugging their product in the lab, and they corrected some of the items in the 6.4 code.
McAfee 8.x firewall code: certain extension headers passed through when chained, and important alerts were not logged or were mislogged, giving false information. Discovered a fragment-reassembly FSM time-exceeded exploit, etc.
Clunky rules for FTP and DNS handling, plus host- and router-mode limitations.
Cisco ASR IOS XE 3.0.2: an IPv6 ACL and TCAM bug I included to share with the gang.
One issue is the ACL with IPv6, and the second is the side effect that caused the crash. The crash happened after adding the IPv6 ACL failed and the router tried to restore the old ACL, which it's not supposed to do. In attempting to restore the old ACL, it triggered a crash. Thus, I've filed a bug against the crash; the bug ID is CSCua03521. It's been assigned to a DE already. There is no ETA on the fix. You can view the bug with the bug tool at http://www.cisco.com/cgi-bin/Support/Bugtool/home.pl
As for the ACLs, there is a workaround: re-arrange the ACEs so that the failure isn't triggered. Would you please send me the current (new) running config that you want to run? The contents of the new ACEs you want to add to the IPv6 ACL are important.
From the show tech output of May 2nd, 2012, the ACE with sequence number 440 was causing the problem. If you remove this ACE, you won't see the error tracebacks or the crash. From the show run output, I'm not sure ACE sequence 440 is needed, since you already have the following ACE in the ACL: deny ipv6 any any log routing sequence 430.
The ASR1K processes ACLs in TCAM (hardware). The width of the largest TCAM entry is 320 bits. The IPv6 ACL source and destination addresses alone take 256 of those 320 bits, so there is not enough space in the entry for all the other possible ACL matching fields. To accommodate all ACL fields, we compress the IPv6 source and destination addresses into 64 bits and program these compressed values into TCAM. This compression algorithm has some known limitations: sometimes compression of an address fails, depending on the IPv6 address pattern, the address prefix length, the order in which the ACEs are configured, and so on. To avoid this problem for the most commonly used ACL fields, we program the TCAM without compressing the addresses. If you use only the following fields in your ACLs, you will never run into this problem:
- IPv6 src address
- IPv6 dst address
- protocol field (ipv6, icmp, igmp, tcp, udp, ipinip, ipv6inip, gre, nigrp, ospf, nos, pim, pcp, sctp)
- (tos, dscp) or TCP flags (not both in the same ACL)
- L4 src and dst ports and port ranges (TCP, UDP)
- IPv6 ICMP header types (packet too big, echo request, etc.)
- IPv6 routing header presence
- IPv6 destination header presence (in later code versions this was replaced with HBH)
The original ACL has fields only from the above list, so the ASR1K used uncompressed addresses in TCAM. During the edit you added an ACE with “routing-type”, which is not part of this list, so the change forced the ASR1K to use compressed IPv6 addresses in TCAM, and during address compression we ran into this problem.
Following are some options to avoid this problem:
1. Use only the ACL fields given in the above list. That means you have to remove the ACE with the routing-type field; as I mentioned before, you do not need it as long as you have the “deny ipv6 any any routing” ACE in the ACL.
2. Re-arrange the ACEs in the following way. This solves the problem, but if you add new ACEs to this ACL, they may run into the problem again.
IPv6 access list Internet-Inbound
3. Remove the ACE “deny ipv6 any any routing-type 0 log sequence 440” from this ACL, create a separate ACL with this ACE, and apply it in the outbound direction (if possible)
4. Remove the ACE “deny ipv6 any any routing-type 0 log sequence 440” and create a separate ACL. Create a QoS policy using this ACL and use the police action to drop all matching packets. Apply this policy to the interface in the ingress direction along with the ACL.
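The bit budget behind this whole compression story is worth spelling out; a tiny sketch using only the numbers quoted in the TAC explanation above:

```python
# TCAM bit budget on the ASR1K, per the explanation above
TCAM_ENTRY_BITS = 320
ADDR_BITS = 128                     # one IPv6 address

# Uncompressed: src + dst addresses eat most of the entry
uncompressed = 2 * ADDR_BITS        # 256 bits
left_uncompressed = TCAM_ENTRY_BITS - uncompressed
print(left_uncompressed)            # 64 bits for all other match fields

# Compressed: src + dst squeezed into 64 bits total
compressed = 64
left_compressed = TCAM_ENTRY_BITS - compressed
print(left_compressed)              # 256 bits: room for every field,
                                    # at the cost of occasional compression failures
```

So the supported-field list above is exactly the set that fits in the 64 leftover bits of an uncompressed entry; add anything else (like routing-type) and the router is forced onto the fragile compressed path.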
On the Cisco ASR IOS XE release above, reflexive ACLs were not supported for IPv6. They could have come in handy.