Storage networking is different

The storage industry has a very specific view of networking protocols: it expects the network to be extremely reliable, either by making the network lossless or by using a transport protocol (TCP, with iSCSI adding its own embedded checksums) that was only recently made decently fast.

Some of that behavior can easily be attributed to network blindness and to the need to support legacy protocols designed for a completely different environment 25 years ago, but we also have to admit that server-to-storage sessions are far more critical than user-to-server application sessions.

Storage session loss can result in large-scale data corruption. If an end-user’s application session fails, you’ll hear some foul language, but the data will remain in a consistent state (assuming, of course, your application uses a decent database server with rollback capabilities). If a storage session is lost, the disk data could be left in some indeterminate state and might be permanently corrupted. Databases are quite good at recovering data; for whatever weird reason file systems are sometimes less robust.

Loss of a disk device can crash the server. If your server loses its network connection, all the user sessions are gone and some data will be lost, but you'll probably end up with consistent data (transactions that have not been completed will be rolled back). The server will happily continue its (now largely non-existent) work.

If your server loses its disk, a panicky crash is almost unavoidable. Dual HBAs (storage adapters) and dual paths to the storage are thus a requirement in any decent data center.
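As a purely illustrative sketch of why the second path matters, here's the failover idea in Python (in real life this logic lives in the OS storage stack, for example in Linux dm-multipath or MPIO drivers, not in application code; every name below is made up):

class PathDown(Exception):
    """Raised when an I/O attempt over a given path fails."""

class StoragePath:
    """Made-up stand-in for one HBA-to-array path."""
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def read_block(self, lba: int) -> bytes:
        if not self.healthy:
            raise PathDown(f"{self.name} is down")
        return bytes(512)  # pretend we read one 512-byte block

def multipath_read(paths, lba):
    """Try each path in turn; give up only when every path has failed."""
    for path in paths:
        try:
            return path.read_block(lba)
        except PathDown:
            continue  # fail over to the next path
    raise PathDown("all paths down: this is where the server would panic")

# The primary HBA has failed; the secondary keeps the server running.
paths = [StoragePath("hba0", healthy=False), StoragePath("hba1")]
print(len(multipath_read(paths, lba=42)))  # 512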

Impact of lost storage is extremely high. Imagine a web server with thousands of concurrent users, tens of web server worker processes and a database server (or you could have the components distributed if you so wish). If you lose an end-user session, a single user will be impacted. If you lose the session with the database server, some web worker processes will be impacted (others might continue working if you have a redundant setup). If you lose connectivity to your disk, all bets are off.

Storage is, well, permanent. If there’s an undetected error in your application session, your program might crash; if the incorrect data is written to a disk, it stays there indefinitely.

However, every decent layer-2 protocol has a frame checksum (CRC) that should detect transmission errors, and IP and TCP also try to detect gross negligence in routers (although these attempts are pretty lame). One has to wonder why the storage people insist on yet another layer of checksums, be it in iSCSI or FCoE.

The answer is simple: the layer-2 CRC is checked and regenerated on every hop, so the only end-to-end error detection left in an IP network is the TCP checksum (the IP checksum covers only the header), and a 16-bit one's-complement sum is simply not good enough to catch the data corruption a misbehaving router can introduce.
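To see how weak that 16-bit sum is, here's a minimal Python sketch (the CRC32C routine uses the Castagnoli polynomial that iSCSI specifies for its digests; the sample data and the corruption pattern are made up for illustration). Because the Internet checksum is just a sum of 16-bit words, reordering two words of the payload leaves it unchanged, while even a simple CRC catches the damage immediately:

def crc32c(data: bytes) -> int:
    """Bitwise CRC32C (Castagnoli polynomial), as used by the iSCSI digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

def internet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement sum (the 16-bit TCP/UDP checksum)."""
    if len(data) % 2:
        data = data + b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

original = b"CRITICAL DISK BLOCK CONTENTS"
corrupted = original[2:4] + original[0:2] + original[4:]  # swap the first two 16-bit words

assert corrupted != original
print(internet_checksum(original) == internet_checksum(corrupted))  # True  -> corruption goes unnoticed
print(crc32c(original) == crc32c(corrupted))                        # False -> corruption detected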

Anything else? I’ve probably missed a hundred other reasons why we have to treat storage networks more carefully. Please feel free to add them in the comments.

And the usual addendum: storage networking is just one of the topics described in my Data Center 3.0 for Networking Engineers webinar (buy a recording or yearly subscription).

1 comment:

  1. You also need to consider what happens when peak loads coincide. For example, a burst of backup traffic combined with a virus scan running at the same time as a critical service can cause major networking problems: queues can be overrun, switch fabrics overloaded, and more.

    Classic FC copes by using an XON/XOFF-style flow-control mechanism combined with overprovisioning of the network, while iSCSI and NFS rely on IP queueing and buffering. Two methods of solving the same problem.