FCoE between data centers? Forget it!
Was anyone trying to sell you the “wonderful” idea of running FCoE between Data Centers instead of FC-over-DWDM or FCIP? Sounds great ... until you figure out it won’t work. Ever ... or at least until switch vendors drastically increase interface buffers on the 10GE ports.
FCoE requires lossless Ethernet between its “routers” (Fibre Channel Forwarders – see Multihop FCoE 101 for more details), which can only be provided with Data Center Bridging (DCB) standards, specifically Priority Flow Control (PFC). However, if you want to have lossless Ethernet between two points, every layer-2 (or higher) device in the path has to support DCB, which probably rules out any existing layer-2+ solution (including Carrier Ethernet, pseudowires, VPLS or OTV). The only option is thus bridging over dark fiber or a DWDM wavelength.
If you've been working with long-distance Fibre Channel, you know that the end-to-end throughput depends on the distance and the number of buffer-to-buffer credits (BB_credits) the switches have, due to the Fibre Channel flow-control mechanism. BB_credits are a very safe mechanism: if the transmission latency is too large, the acknowledgments don't arrive in time and the transmitter simply stops. Worst case, you get dismal throughput.
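To put rough numbers on that (a back-of-the-envelope sketch; the link speed, frame size, credit count and fiber delay are assumed values, not taken from any particular switch), credit-based throughput is capped by how many frames can be in flight during one round-trip time:

```python
# BB_credit throughput sketch. All figures are illustrative assumptions:
# 8 Gbps FC link, ~2 KB maximum-size frames, 32 per-port credits, ~5 us of
# propagation delay per km of fiber.

LINK_SPEED_BPS = 8e9          # assumed FC link speed
FRAME_SIZE_BYTES = 2112       # assumed maximum-size FC frame
BB_CREDITS = 32               # assumed per-port buffer-to-buffer credits
DELAY_US_PER_KM = 5           # ~5 microseconds per km of fiber

def max_throughput_gbps(distance_km: float) -> float:
    """Throughput is capped by the number of frames (credits) that fit into
    one round-trip time; once the credits are exhausted, the transmitter
    stops and waits for acknowledgments (R_RDY)."""
    rtt = 2 * distance_km * DELAY_US_PER_KM * 1e-6
    serialization = FRAME_SIZE_BYTES * 8 / LINK_SPEED_BPS
    bits_per_window = BB_CREDITS * FRAME_SIZE_BYTES * 8
    achievable_bps = bits_per_window / max(rtt, BB_CREDITS * serialization)
    return min(achievable_bps, LINK_SPEED_BPS) / 1e9

for km in (1, 10, 50, 100):
    print(f"{km:>3} km: ~{max_throughput_gbps(km):.1f} Gbps")
```

With these assumptions the link stays saturated for the first few kilometers and then the throughput drops roughly in proportion to the distance, which is why long-distance FC designs need (lots of) extended BB_credits.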
The Priority Flow Control used by lossless Ethernet is the exact opposite of BB_credits: as the receiver starts running out of buffers, it sends a PAUSE frame to stop the sender. Obviously, the PAUSE frame has to be sent early enough that the packets transmitted before the sender receives the PAUSE frame don't overflow the input buffers.
Cisco published a detailed white paper describing the limitations of PFC. Short summary: if you want to run PFC across a 10 km link, you have to stop the transmitter while there's still 350 KB of free space in the input buffers (the total per-interface buffer space is 500 KB). The PFC implementation in the Nexus 5000 is therefore limited to 300 meters (and the distance is not configurable).
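For intuition only (a simplified sketch, not the formula from the white paper), the irreducible part of that headroom is the data already on the wire when the PAUSE frame is sent; the published figures are larger because they also cover PAUSE reaction time, frames in mid-transmission and internal switch latencies:

```python
# Simplified PFC headroom sketch -- only the propagation-delay component.
# After the receiver sends a PAUSE frame, data keeps arriving for roughly one
# round-trip time: the PAUSE travels to the sender while frames already in
# flight keep coming. Real requirements (e.g. the 350 KB at 10 km quoted
# above) are higher because of reaction time and internal delays.

LINK_SPEED_BPS = 10e9         # 10GE link
DELAY_US_PER_KM = 5           # ~5 microseconds per km of fiber
MAX_FRAME_BYTES = 2240        # assumed maximum FCoE frame size

def min_headroom_kb(distance_km: float) -> float:
    """Lower bound on the free buffer space needed when the PAUSE is sent."""
    rtt = 2 * distance_km * DELAY_US_PER_KM * 1e-6
    in_flight = rtt * LINK_SPEED_BPS / 8          # bytes still arriving
    return (in_flight + 2 * MAX_FRAME_BYTES) / 1000

for km in (0.3, 3, 10):
    print(f"{km:>4} km: at least ~{min_headroom_kb(km):.0f} KB of headroom")
```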
With FCoE out of the picture, we're left with the well-known IP-based options if you can't get a dedicated wavelength for your Fibre Channel SAN: FCIP, iSCSI or NFS. Told you iSCSI is better than FCoE, did I not?
More information
- Data Center 3.0 for Networking Engineers webinar (buy a recording or yearly subscription) describes numerous Data Center technologies, including DCB and FCoE.
- Read my Data Center Bridging posts to get more information on DCB and PFC.
- I also wrote numerous posts covering FCoE architecture and design caveats.
There's a fundamental problem with your argument. The answer is not that FCoE becomes the only protocol for all environments, but that FCoE is a path toward a converged Ethernet infrastructure. We're getting to the point where the line between FC and Ethernet is blurring on the switch (Cisco's Unified Ports and HP's Flex Ports) and on the adapters (CNAs); you'll be able to choose your Ethernet protocol*s* of choice, whether that's FCoE, iSCSI or other IP traffic. Replication usually runs over links separate from the rest of the traffic, so using a different protocol (such as iSCSI rather than FCoE) from the same port is not a limitation. Yes, FCoE is limited to layer 2 - the same companies that created FC extension solutions are looking at stretching FCoE. iSCSI is a good replication technology, and there are many FC environments today that use iSCSI solely for replication, so having the same mix of FCoE and iSCSI seems very reasonable.
Stu
Wikibon.org
The technical part of my argument has no problem (fundamental or otherwise): PFC is not a long-distance technology ... but I admit there's a bit of bait at the end of the post ;)
FCoE is a great technology when you want to integrate new servers with legacy FC infrastructure, but not more than that. As for whether FCoE could scale better than iSCSI, that deserves a separate post.
Ivan
Supported distance for FCoE is now 3 km, and it will be really difficult to extend it further without hardware modifications.
Still nowhere near what original FC can do.
K
Apparently, Nexus 5xxx to 5xxx can do 3 km (which is still too close, but at least it *can* be a different DC now! :) Not sure if this is a recent change though.
See here: https://supportforums.cisco.com/docs/DOC-15882
The F1 card (N7K) can do 20 km (2.3 MB of buffer per port) and will be able to do 40 km.
For the Nexus 5K it's 3 km. If you do the math, that's close to the limit (well, theoretically 5-6 km could work, but it'd be ugly); a rough sketch of the math follows below.
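A minimal sketch of that math, assuming the white paper's ~350 KB-at-10-km requirement and treating everything except propagation delay (PAUSE reaction time, internal latencies) as distance-independent:

```python
# Rough scaling of the required PFC headroom with distance. Assumptions:
# ~350 KB needed at 10 km (white-paper figure), of which ~125 KB is the
# round-trip data in flight on a 10GE link at ~5 us/km; the remaining
# ~225 KB (reaction time, internal latencies) is treated as fixed.

LINK_SPEED_BPS = 10e9
DELAY_US_PER_KM = 5
FIXED_OVERHEAD_KB = 350 - 125   # headroom not explained by propagation delay

def headroom_kb(distance_km: float) -> float:
    rtt = 2 * distance_km * DELAY_US_PER_KM * 1e-6
    return rtt * LINK_SPEED_BPS / 8 / 1000 + FIXED_OVERHEAD_KB

for km in (3, 5, 6, 10, 20, 40):
    print(f"{km:>2} km: ~{headroom_kb(km):.0f} KB of headroom needed")
```

Under these assumptions 5-6 km already needs around 290-300 KB out of roughly 500 KB of per-port buffer on a Nexus 5000 (and the rest still has to queue regular traffic), while 2.3 MB per port on the F1 card leaves plenty of room even at 40 km (~725 KB needed).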
The only mistake we made was to not grab as many pairs as we could get.
We have 4 major sites within a 100 km radius of the city that are dual fiber pair connected.
We also have the option to purchase "Ethernet" services from the Electric Utility as well as the Telco for areas that are unlikely to be long term sites.
We are using DWDM gear (passive mux, i.e. a glorified wire extension).
We are able to run:
1) Multiple isolated instances of varying-speed Ethernet using any mix of vendor Ethernet gear we like. We do not need to buy DWDM optics from Ethernet switch vendors. :)
2) Multiple instances of multi-fabric FC (4/8/16 Gb does not matter)
3) Multiple instances of analog security camera video and camera control signaling.
4) Many free lambdas left for other services. If a DWDM service card is available for a service, we can run it.
5) Ours is passive, so it does not care whose equipment you connect (right lambda in, right lambda out at the opposite end).
6) We can spin up test instances any time we like w/o impacting production.
So do we care about "Converged Ethernet"? A: No, but it is an entertaining movie.
Is having your own fiber cheaper? Do you really need an answer to that? Obviously this does not fit a multi-state, multi-country use case. It sure is entertaining to have the convergence sales dudes come knocking and try to sell us a spin.
Do we like FC over DWDM? Yes
Do we like Ethernet over DWDM? Yes
Do we like Analog Video over DWDM? Yes
Do we like Digital Video over DWDM? Yes
It was great to be in the DWDM business before the Telco was.
There are some very interesting DWDM service cards and devices available.