  1. I don't drink this Cisco Kool-Aid about interconnecting data centres using an IP backbone. I'd rather use FC directly over DWDM instead of FCIP over MPLS.
  2. In principle I agree with you ... transporting something natively is always better and cleaner than tunneling/encapsulation schemes. But native FC requires dark fiber and DWDM gear. If you have both, you'd be stupid to use FCIP; if you can't get one, or the other is too expensive, you have to consider the alternatives.
  3. One aspect of extending broadcast domains (VLANs) between data centres is operational control of items like STP and broadcast storms over this distance - I know of cases where what was meant to be a disaster recovery setup ended up in a melt-down of both sites without any natural disaster.
    New 'clustered' things like VMware ESX setups need not only sync/control/heartbeat VLANs extended, but user/production ones too. My feeling is that any reasonably sized environment will eventually get into the problem area, given some growth over time and constant changes.
    Also, think about keeping HSRP consistent with STP, coupled with which servers are active where - in the ESX case this could even move automatically.
    There are so-called geo-cluster technologies, e.g. from Sun or HP-UX, which can handle pure IP routing between DR sites, but this is neither cheap nor simple to set up. It is amazing how unwilling the vendor landscape (server, OS and app vendors, plus those like Cisco) is to work on solutions that remove the need for extending VLANs. Your post, Ivan, on the missing 'TCP/IP session layer' is very relevant here too...
    I can see why Cisco is happy to provide all the esoteric ways to 'bridge' the distance - it forces enterprises to use more expensive equipment. Working with other industry players to achieve a pure 'IP only' approach, on the other hand, is not that sexy, although it was the initial Cisco 'mantra'.
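    The operational controls mentioned above - pruning which VLANs cross the DCI link and containing broadcast storms - can be sketched roughly like this on a Cisco IOS switch; the interface name, VLAN IDs and thresholds are purely illustrative:

    ```
    ! DCI-facing trunk (interface name and VLAN list hypothetical)
    interface TenGigabitEthernet1/1
     description DCI trunk to remote data centre
     switchport mode trunk
     ! Carry only the VLANs that genuinely have to be stretched
     switchport trunk allowed vlan 100,200
     ! Rate-limit broadcast/multicast traffic so a storm at one
     ! site cannot melt down the other site as well
     storm-control broadcast level 1.00
     storm-control multicast level 2.00
     storm-control action trap
    ```

    Storm control only limits the damage, of course; it does not remove the shared STP failure domain that stretched VLANs create.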
  4. A lot of valid points. Thanks.

    As for "why are server vendors using weird methods": the server SW/HW vendors (or application vendors) just want to get their job done and couldn't care less how their implementation will work in real life; it's important that it works in their lab and in the demo room. Once the sale is closed and the equipment delivered, the problem is "transparently" passed over to the networking team.

    Considering the recent head-to-head clashes in server, virtualization, UC and networking space, it's no wonder that Cisco is not too keen on educating other companies how to develop optimal implementations.
  5. Another example of a clueless OS vendor:
  6. If we go on customizing the network based on the black boxes (servers or any other appliances) that are connected to network devices, it will one day become a nightmare to manage; we won't even be able to fall back, and we will be blamed for not providing a reliable network.

    Yes, we have to have adaptability, but it should also be a collaborative effort across all domains. Network teams have to deal with various operating systems running on various servers with multiple NICs and multiple high-availability implementations. In most cases we are reacting to incidents arising out of non-standard implementations.

    In my scenario, I have to connect two data centers within the same city using DCI (dark fiber / Cat 6500 VSS). As per the Cisco design doc, I have to set aside four Cat 6500 VSS switches just for DCI; this is hard to justify merely to extend VLANs.

    Kindly see the attached diagram for interconnecting the DCs using dark fiber and highlight any caveats.

    Thanks, VJ
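    A typical way to wire a DCI trunk between two VSS pairs over dark fiber is a multichassis EtherChannel (MEC) with a pruned VLAN list. The configuration below is only a rough sketch for one side - interface names, channel-group number and VLAN IDs are hypothetical:

    ```
    ! One uplink from each VSS chassis member, bundled into a
    ! single multichassis EtherChannel toward the remote VSS pair
    interface TenGigabitEthernet1/4/1
     channel-group 10 mode active
    !
    interface TenGigabitEthernet2/4/1
     channel-group 10 mode active
    !
    interface Port-channel10
     description DCI MEC over dark fiber
     switchport mode trunk
     ! Extend only the VLANs that really need to span both sites
     switchport trunk allowed vlan 100,200
    ```

    Because both physical links terminate on the same logical VSS switch at each end, the bundle appears as a single port-channel and STP does not block either dark-fiber link.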
  7. Hi! Nothing looks too bad at first glance. If you'd like a professional opinion, please contact our Professional Services team (see ).