Is Fibre Channel Still a Thing?

Here’s another “do these things ever disappear?” question from Enrique Vallejo:

Regarding storage, is Fibre Channel still a thing in 2022, or do most people employ SATA over Ethernet and NVMe over fabrics?

TL&DR: Yes. So is COBOL.

To understand why some people still use Fibre Channel, we have to start with an observation made by Howard Marks: “Storage is different.” It’s OK to drop a packet in transit. It’s NOT OK to lose data at rest.

That (absolutely correct) mentality resulted in highly reliable black-box systems called storage arrays that had to be shared (due to their high costs) by many servers. Unfortunately, we had a gazillion server operating systems in the late 1980s, and the simplest way forward was to emulate the existing SCSI adapters and the 50-pin SCSI cable, resulting in Fibre Channel and its lossless network requirements.

Fibre Channel has been around for ~30 years, and storage has changed in the meantime – we got scale-out software-defined storage (SDS) and hyper-converged infrastructure (HCI). Using Fibre Channel in SDS or HCI would be like using COBOL to develop a snazzy new web app; these solutions use proprietary access methods (example: VMware vSAN) or look like an iSCSI/NFS target (example: Nutanix).

Regardless of AWS S3 durability marketing, some data is too precious to be put into a distributed storage system; you might still want to use a dedicated storage array for your production transactional database.

Do you have to use Fibre Channel if you decide you still need a storage array? Absolutely not. It’s cheaper¹ to build a dedicated Ethernet fabric² to run iSCSI or NFS than it is to build a new Fibre Channel network.
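If you go the Ethernet route, the target side really is just a few commands on any Linux box. Here’s a minimal sketch using targetcli (the LIO target); the device path and the IQNs are made up for the example:

```shell
# Turn a local block device into an iSCSI backstore (device path is a placeholder)
targetcli /backstores/block create name=db-lun0 dev=/dev/sdb

# Create an iSCSI target (IQN invented for this example)
targetcli /iscsi create iqn.2022-01.com.example:db-array

# Export the backstore as LUN 0 on the target's default portal group
targetcli /iscsi/iqn.2022-01.com.example:db-array/tpg1/luns create /backstores/block/db-lun0

# Allow a specific initiator to log in (again, a made-up IQN)
targetcli /iscsi/iqn.2022-01.com.example:db-array/tpg1/acls create iqn.2022-01.com.example:db-server
```

Compare that with standing up even a small FC fabric: HBAs, dedicated switches, and zoning before the first LUN is visible.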

Would Fibre Channel give you better performance? Probably not. Years ago, I was told that FC works better than iSCSI because the transport stack is simpler and more standardized, whereas every vendor uses a slightly different variant of the TCP stack that has to be tuned for maximum performance. I hope that real-life experience and Moore’s Law have brought us way beyond the “good enough” point.

Does that make Fibre Channel dead? Of course not. People who have been building and upgrading their FC-based SAN for the last 30 years will keep doing so. Would I use Fibre Channel in a new deployment? Absolutely not.

Finally, a word on ATA-over-Ethernet: it’s a simple protocol running directly on top of Ethernet. I considered that a bad idea in 2010 (the vendor using ATAoE strongly disagreed), and I haven’t changed my mind, even though someone had a great experience running ATAoE on Debian.
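For the curious, ATAoE really is that simple, which is both its appeal and its limitation: the protocol rides directly on Ethernet frames, so it cannot be routed. A hypothetical sketch using the vblade exporter and the Linux aoe initiator driver; device names and the interface are placeholders:

```shell
# On the exporting host (vblade package): publish /dev/sdb
# as AoE shelf 0, slot 1 on interface eth1
vbladed 0 1 eth1 /dev/sdb

# On the initiator:
modprobe aoe        # load the AoE initiator driver
aoe-discover        # probe the local segment for exported devices
ls /dev/etherd/     # the exported device appears as e0.1
```

No IQNs, no login, no TCP, which is exactly why it only ever made sense on a single trusted L2 segment.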

  1. I wanted to write better, but then remembered that better depends on the metric you choose ;)

  2. Or two, if you believe in strict SAN-A/SAN-B separation


  1. I strongly recommended going back to FC from iSCSI in my company, because no one wanted to pay for even one, let alone two, separate Ethernet fabrics. They’re happier to pay for old, refurbished FC gear now. And I can upgrade switches again without waiting endlessly for a storage maintenance window.

    1. I have nothing relevant to contribute, but just wanted to say thank you for this series of posts. It’s interesting how much valuable information can be triggered by a few questions about “old” technology these days.

      Oh, and the original question intended to ask about FC vs iSCSI and NVMe-oF, not ATA-over-Ethernet (that was another example of old technology I was thinking of, similar to but not as widespread as FC, but I messed up the acronyms, sorry).

  2. As with everything: it depends.

    1. Company size matters: most large corporations have a storage team and a networking team, and these two teams do not overlap. The network team handles communication between servers, and the storage team handles the data transport layer. They would never use iSCSI in that model.

    2. iSCSI, in my opinion, is for smaller groups where you have a network engineer who likely has no SAN switch experience. It is easy for them to add a VLAN access port for iSCSI, but it would be a big effort for them to do a SAN zone config.

    This is really what it comes down to. Most arrays can use either 25Gb iSCSI ports or 32Gb FC ports. If I were building out new hardware and had the expertise, I would NEVER use iSCSI for storage. I don’t recall ever seeing an iSCSI array outside an SMB configuration, as storage engineers in enterprise land have dedicated access to their own switches. Given that, why would a storage engineer want a network switch over an FC switch to run their storage on?

    The VMware folks like iSCSI because they understand it, but an FC SAN is simpler to use IF you have a SAN engineer who knows how to do switch zoning – which, to be honest, should only take a few hours to train a network engineer on, as it’s pretty simple to understand how it works.
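    For readers who have never touched a SAN switch, zoning really is just a handful of CLI commands. A Brocade-style sketch with made-up aliases and WWPNs:

    ```shell
    # Create aliases for the server HBA and the array port (WWPNs invented)
    alicreate "db_server_hba0", "10:00:00:05:33:aa:bb:cc"
    alicreate "array_ctrl0_p0", "50:06:01:60:11:22:33:44"

    # Single-initiator zone containing the two aliases
    zonecreate "z_db_server_array", "db_server_hba0; array_ctrl0_p0"

    # Add the zone to the existing config (cfgcreate if starting from scratch)
    # and activate it
    cfgadd "prod_cfg", "z_db_server_array"
    cfgenable "prod_cfg"
    ```

    That’s roughly the same amount of typing as a VLAN access port plus an initiator ACL on the iSCSI side.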

    In closing: I have tested 25Gb iSCSI vs 32Gb FC for VMware storage, and hands down the FC I/O at 32Gb is faster, with lower latency, than the iSCSI.

    vSAN I would never use outside a lab environment. Yes, it works, but if you have only 3-4 nodes in the cluster you waste a ton of space. If you lose one node it affects you; if you lose two nodes, you are down. With a real SAN you don’t have to worry about losing a node, as it has no effect on your storage, which sits where it belongs: on the storage array. You also have to burn a lot of CPU cycles that should be focused on computation on handling the I/O layer as well, and that, folks, translates into less performance.

    Doing things cheaper rarely equates to doing them better, and in the iSCSI vs FC debate that is very true indeed.

    1. I agree with you completely.

      As for vSAN, you need an expensive licence, more expensive servers (more CPU, more RAM, disks, RAID controllers), and more of them (you need at least 7 to have real FTT-2). So everything looks great on paper and in VMware trainings, but in practice you will most likely end up with a SAN.
