iSCSI or FCoE – Flogging the Obsolete Dead Horse?

One of my regular readers sent me a long list of FCoE-related questions:

I wanted to get your thoughts around another topic – iSCSI vs. FCoE? Are there merits and business cases to moving to FCoE? Does FCoE deliver better performance in the end? Does FCoE make things easier or more complex?

He also made a very relevant remark: “Vendors that can support FCoE promote this over iSCSI; those that don’t have an FCoE solution say they aren’t seeing any growth in this area to warrant developing a solution”.

I think “FCoE or iSCSI” is the wrong question to ask. A better question would be “FC or iSCSI?” If you believe that Fibre Channel management capabilities outweigh the complexity of yet another layer-3 protocol in your network, it makes sense to stick with the Fibre Channel protocol stack and introduce FCoE in the access network. If you’re a smaller shop with only one or two storage arrays, I don’t think the potential advantages of FC management justify introducing an additional protocol stack.

Regardless of what people think, FC is a full-blown protocol stack with its own routing protocol and hop-by-hop forwarding behavior. Running FC or FCoE in dense mode is equivalent to running IPX (or AppleTalk or DECnet or OSI) in parallel with IPv4 and IPv6 (you do run IPv6, don’t you?).
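Part of iSCSI’s appeal is that it’s just another TCP application riding on the IP transport you already run, so all the usual IP tooling applies to it. As a trivial illustration, here’s a minimal Python sketch (the portal address is a made-up placeholder) that checks whether an iSCSI target portal answers on the standard TCP port 3260 – the kind of quick check you can’t do against an FC fabric without FC-specific tools.

```python
import socket

# iSCSI target portals listen on TCP port 3260 (the IANA-assigned port).
# The address below is a documentation placeholder (TEST-NET-1); replace
# it with the IP address of your own target.
PORTAL = ("192.0.2.10", 3260)

def portal_reachable(addr, timeout=3.0):
    """Return True if the iSCSI target portal accepts a TCP connection."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("iSCSI portal reachable:", portal_reachable(PORTAL))
```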

Assuming there’s a reason to use the FC protocol stack, does FCoE make sense? Absolutely, at least in the access (server-to-ToR switch) layer – FCoE reduces the number of physical server interface cards, cables, and switch ports, while offering better performance than 8 Gbps FC (and I have yet to see a server that really needs two 8 Gbps FC uplinks).
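To put a rough number on that performance claim: 8 Gbps FC runs at 8.5 GBaud with 8b/10b encoding, leaving about 6.8 Gbps for data, while 10 Gigabit Ethernet uses 64b/66b encoding and delivers close to 10 Gbps even after the FCoE encapsulation overhead is subtracted. The back-of-the-envelope sketch below works out the numbers; the per-frame overhead figure is a round approximation, not a precise model of every header on the wire.

```python
# Back-of-the-envelope comparison: usable bandwidth of 8 Gbps FC vs FCoE
# over 10 GbE. Line rates and encodings come from the respective standards;
# the FCoE framing overhead is a rough approximation.

def usable_gbps(line_rate_gbaud, data_bits, total_bits):
    """Bandwidth left over after line encoding (8b/10b, 64b/66b, ...)."""
    return line_rate_gbaud * data_bits / total_bits

fc_8g   = usable_gbps(8.5,     8, 10)    # 8GFC: 8.5 GBaud, 8b/10b  -> 6.8 Gbps
eth_10g = usable_gbps(10.3125, 64, 66)   # 10GbE: 10.3125 GBaud, 64b/66b -> 10 Gbps

# FCoE wraps a full-size FC frame (~2148 bytes) in an Ethernet frame,
# adding roughly 38 bytes of Ethernet + FCoE encapsulation.
fcoe_efficiency = 2148 / (2148 + 38)

print(f"8 Gbps FC, usable:        {fc_8g:.2f} Gbps")
print(f"FCoE over 10 GbE, usable: {eth_10g * fcoe_efficiency:.2f} Gbps")
```

Even with generous overhead estimates, a single 10 GbE uplink carrying FCoE comfortably outperforms an 8 Gbps FC link.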

Beyond the access layer we’re dangerously close to an area polluted with religious wars, and I have no plans to start another one.

However, while FC or iSCSI might have been a crucial question a few years ago, we should start looking past LUN-based storage in heavily virtualized or cloud environments and ask other questions:

  • Should we use SCSI-based (FC, FCoE, iSCSI) or file-system-based (NFS, SMB) protocols for virtual disk access?
  • Should we use LUN-based storage arrays or storage solutions with a distributed file system (which makes redundant architectures and storage replication infinitely easier than with LUN-based solutions)?
  • Should we use dedicated storage devices (storage arrays) or servers with DAS disks and a distributed file system?
  • Should we run a DAS-based distributed file system on hypervisor hosts (examples: VMware VSAN, Nutanix) for a truly converged solution? (A toy sketch of the idea follows this list.)
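To make the last two questions a bit more concrete, here’s a deliberately simplified Python sketch – not modeled on how VSAN, Nutanix, or any other product actually works – of the core idea behind a DAS-based distributed file system: a virtual disk is chopped into chunks, and each chunk’s replicas are placed on different hypervisor hosts, so losing a server doesn’t lose data and there’s no central array to mirror.

```python
import hashlib

# Toy replica placement for a DAS-based distributed file system: each
# chunk of a virtual disk is stored on two different hosts. The host
# names, chunk numbering, and placement rule are made up for illustration.

HOSTS = ["host-a", "host-b", "host-c", "host-d"]
REPLICAS = 2

def place_chunk(vdisk, chunk_no, hosts=HOSTS, replicas=REPLICAS):
    """Pick `replicas` distinct hosts to hold one chunk of a virtual disk."""
    digest = hashlib.sha256(f"{vdisk}:{chunk_no}".encode()).hexdigest()
    start = int(digest, 16) % len(hosts)
    return [hosts[(start + i) % len(hosts)] for i in range(replicas)]

if __name__ == "__main__":
    for chunk in range(4):
        print(f"vm1.vmdk chunk {chunk} ->", place_chunk("vm1.vmdk", chunk))
```

The placement function is the whole trick: replication, rebalancing after a host failure, and scaling out become metadata operations across commodity servers instead of LUN mirroring between dedicated arrays.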

We’ll have a more in-depth discussion of these questions in the upcoming Designing Private Cloud Infrastructure webinar and during the Infrastructure for Private Clouds Interop workshop, and I know how major public cloud providers have answered them (hint: they don’t have storage arrays). What would you do if you were planning to build a new data center or private/public cloud?

4 comments:

  1. What I see is that public cloud providers don't use storage arrays because they use distributed file systems to handle capacity. They then use software load-balancing techniques to distribute workloads.
  2. Very nicely done, Ivan. Always a good reminder to make sure we're asking the right questions.
  3. This comment has been removed by the author.
  4. SANs are slowly becoming a dead horse. There once was a time when shared VMs could only be stored on LUNs. Nowadays Hyper-V supports SMB storage for VMs, and people even put databases on file shares.
    Unless you absolutely need native filesystem access and don't want to use a layered-on protocol (NFS, SMB, etc.), the entire point of SANs is diminishing as time goes on.