Why would FC/FCoE scale better than iSCSI?

During one of the iSCSI/FC/FCoE tweetstorms @stu made an interesting claim: FC scales to thousands of nodes; iSCSI can’t do that.

You know I’m no storage expert, but I fail to see how FC would be inherently (architecturally) better than iSCSI. I would understand someone claiming that existing host or storage iSCSI adapters behave worse than FC/FCoE adapters, but I can’t grasp why a properly implemented iSCSI network could not scale.

Am I missing something? Please help me figure this one out. Thank you!

11 comments:

  1. I see no reason it can't scale to thousands of nodes. Especially if you built a completely separate network just for your iSCSI storage traffic.
  2. I don't think you need a separate network. ETS (WDRR) + separate CoS class for iSCSI should be enough. Plus you might want to limit the size of your bridged domains ;)
  3. Ivan,
    Theoretically iSCSI can scale, but in practice the management tools have not been put in place. iSCSI is configured at the host level, while FC can be centrally managed. The ecosystem for iSCSI is centered around Microsoft hosts (iSCSI initiator) with storage from Dell (EqualLogic), HP (LeftHand), NetApp and EMC (CLARiiON, now VNX) - all of these are midrange products that are not targeted at larger configurations.

    In talking with many of these vendors, I found that the average iSCSI configuration tends to be around 20 hosts, and it was very rare to find a customer that had deployed 100 servers in a single network. Once again, there is no architectural limitation, but from an operations standpoint it is prohibitive, and from what I have seen, no company is putting together the tools to allow simple deployment and management of large-scale iSCSI (you would probably want to further develop around iSNS). The average FC switch is larger than the typical iSCSI environment, with edge switches going to 80 or more ports and directors to hundreds of ports.
    Cheers,
    @stu
  4. I agree, was just comparing apples with apples, since FC is a completely separate network :)
  5. I like to think of the services that FC provides as the Active Directory of storage. If you only have 1 storage array and 25 hosts, zoning and masking are not that big of a deal. Fundamentally, though, storage is growing at 1.6x compounded per annum. Even if you dedup, you are only buying yourself limited time until you have a serious disk problem.

    iSCSI scales the same way that creating user accounts on everyone’s desktop scales . . . n(n-1)/2*. You need a centralized AAA, TE and security mechanism for block disc access. FC provides that (the full-mesh math is sketched after the comments).

    I have customers who in 18 months have hit the wall on iSCSI deployments because the disc outgrew the network too fast. Like the early days of VoIP.


    * I haven't actually proven that math, but, could do so for a small donation of a bottle of Rye.
  6. AFAIK Amazon uses iSCSI in AWS; this is a very large deployment with tens of thousands of servers, so obviously iSCSI can scale.
  7. Milen - care to share a source of your data? Amazon and Google do not use traditional storage arrays. My understanding is that they use commodity hardware with the disks inside the servers (DAS). AWS EBS may support iSCSI, but that does not translate to Amazon being a single network of thousands of iSCSI nodes.
  8. The reason I set up iSCSI as a separate network is that it performs much better with flow control and/or jumbo frames (preferably both, or at least one of them if some piece along the way can't do both at once).
  9. I like the comments already posted about management.

    I think it's also about control. SAN and FC people like being separate, so that nothing messes up the SAN. That seems to me to be the source of most of the resistance to FCoE. What, share a cable?! How could one possibly troubleshoot in, gasp, someone else's box?! (And the network guys meanwhile worry about FCIP killing their shared WAN link?)

    I gather iSCSI is gatewayed. OTOH, in a sense the FCoE FCF forwarding to native FC is too, although the latter is arguably simpler (strip or add an L2 header).

    Some people argue that iSCSI is higher overhead. I'd think that depends on the NIC.

    Since iSCSI shares a wire with other network traffic, what do you do for QoS? It's the worst case: you need it there ASAP, but there's a LOT of it. And if the network goes lossy on you, or duplicates packets, FC and related SCSI protocols don't generally handle that at all well.
  10. As far as I understand iSCSI, you'd use a direct iSCSI session in a usual server-to-storage scenario (no gateway). iSCSI is an application on top of TCP, so duplicate/lost/corrupted packets never reach the SCSI level (you might have performance issues, but that's a different story). FC(oE) obviously can't cope with packet loss, so you have to make FCoE truly lossless.

    If you have DCB-enabled switches, it would be best to use a separate 802.1p priority value for iSCSI traffic, make it lossless (with PFC) and allocate it a fixed bandwidth percentage (with ETS/WDRR); there's a small sketch of these settings after the comments.
  11. A separate network has nothing to do with scale; in fact, it makes scaling worse, since dedicated resources are necessary.

    However, a dedicated network makes it deterministic, that is, performance can be determined absolutely in terms of storage traffic parameters. Note that you still cannot guarantee delivery unless you overprovision every element of the network... which is what FC does, at great expense to the customer.
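
A minimal sketch of the full-mesh math from comment 5, assuming the claim is simply the number of pairwise initiator/target relationships you end up managing when there is no central authority. The function names and node counts below are illustrative, not taken from any real tool:

    # Pairwise management relationships in a network of n nodes when every
    # initiator/target pair must be configured individually (no central
    # authority), versus one registration per node against a central service
    # (AAA, iSNS, FC zoning).

    def pairwise_relationships(n: int) -> int:
        """Full mesh: every pair of nodes is a separate trust/config relationship."""
        return n * (n - 1) // 2

    def centralized_relationships(n: int) -> int:
        """Central authority: each node is configured once, against the central service."""
        return n

    for n in (25, 100, 1000):
        print(f"{n:5d} nodes: {pairwise_relationships(n):7d} pairwise vs "
              f"{centralized_relationships(n):5d} centralized")

With 25 hosts that's 300 relationships to keep track of; with 1,000 hosts it's 499,500, which is the operational wall comments 3 and 5 describe.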
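
And a small sketch of the per-priority treatment suggested in comments 2 and 10, assuming DCB-capable switches. It only models the relevant parameters (802.1p priority, PFC no-drop, ETS bandwidth share) as data and sanity-checks them; the CoS value and the bandwidth split are arbitrary examples, and none of this is any vendor's configuration syntax:

    # Illustrative model of the DCB settings described in the comments: iSCSI
    # gets its own 802.1p priority, PFC makes that priority lossless, and
    # ETS/WDRR guarantees it a bandwidth share. A sanity-check sketch, not a
    # device configuration.

    from dataclasses import dataclass

    @dataclass
    class TrafficClass:
        name: str
        dot1p_priority: int      # 802.1p CoS value (0-7)
        pfc_no_drop: bool        # True = lossless via Priority Flow Control
        ets_bandwidth_pct: int   # guaranteed share under ETS/WDRR scheduling

    classes = [
        TrafficClass("iSCSI", dot1p_priority=4, pfc_no_drop=True, ets_bandwidth_pct=40),
        TrafficClass("LAN", dot1p_priority=0, pfc_no_drop=False, ets_bandwidth_pct=60),
    ]

    # ETS shares must cover the whole link, and each 802.1p value should map
    # to exactly one class.
    assert sum(c.ets_bandwidth_pct for c in classes) == 100
    assert len({c.dot1p_priority for c in classes}) == len(classes)

    for c in classes:
        drop = "no-drop (PFC)" if c.pfc_no_drop else "drop-eligible"
        print(f"CoS {c.dot1p_priority}: {c.name} gets {c.ets_bandwidth_pct}% guaranteed, {drop}")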