FTP: a trip down memory lane

A while ago I bitterly complained about the FTP protocol design. I have a decades-long grudge against FTP. If you’re old enough to remember configuring firewalls before stateful inspection or reflexive access lists became available, you probably know what I’m talking about; if not, here’s the story.

When enterprises started using the Internet 15+ years ago, most desktop FTP clients did not support passive mode (although it was part of the FTP standard). When configuring “firewalls” (one or two routers with long access lists), you had to allow all inbound TCP sessions to ports higher than 1024 just to support FTP data sessions. No problem ... unless you were using Sun workstations or NetBIOS over TCP (both of them use dynamic server ports above 1024), in which case those services were totally exposed to the Internet.

After a while, FTP passive mode became common in FTP clients (it’s the default today, as it works better with NAT). It did not solve the problem, just moved it to the other side: now you had to allow any inbound connection to your FTP server. Arguably this was a lesser risk, as you could run the FTP service on a hardened host with almost no other services, but it was still a less-than-ideal design.
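Both modes embed the data-session endpoint in the control channel using the same six-number encoding from the FTP specification (RFC 959): four decimal numbers for the IPv4 address, two for the high and low bytes of the port. A minimal Python decoder (the function name is mine, just for illustration) shows why the data port is unpredictable from the firewall's point of view:

```python
import re

# RFC 959 encodes an endpoint as h1,h2,h3,h4,p1,p2 --
# four address octets plus the port split into high/low bytes.
HOSTPORT = re.compile(r'(\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3})')

def decode_hostport(text):
    """Extract (ip, port) from a PORT command or a 227 PASV reply."""
    m = HOSTPORT.search(text)
    if not m:
        raise ValueError('no host-port tuple found')
    h1, h2, h3, h4, p_hi, p_lo = (int(g) for g in m.groups())
    return '%d.%d.%d.%d' % (h1, h2, h3, h4), p_hi * 256 + p_lo

# Active mode: the client tells the server where to connect back to
print(decode_hostport('PORT 192,168,1,10,8,5'))
# -> ('192.168.1.10', 2053)

# Passive mode: the server tells the client which port to connect to
print(decode_hostport('227 Entering Passive Mode (203,0,113,5,195,80)'))
# -> ('203.0.113.5', 50000)
```

Either way, one side announces a dynamic high port inside the application data stream, and a plain packet filter has no way of knowing about it.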

The FTP problem was solved only after firewalls (whether dedicated boxes or software functionality in routers) started supporting stateful filters and deep packet inspection.

Contrary to what the pundits would like you to believe, DPI started 10+ years ago when the first firewall looked inside the FTP command stream to discover the endpoints of the FTP data session. However, nobody howled at that time as they were only too happy to download whatever it was they were downloading.
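That early FTP inspection boils down to snooping the control channel and punching a pinhole for exactly the announced data endpoint, instead of a blanket "permit any TCP above 1023" rule. A toy sketch of the idea (all class and method names are hypothetical; a real firewall also tracks directions, ages entries out, and handles PORT as well as PASV):

```python
import re

# Match the six-number endpoint inside a 227 "Entering Passive Mode" reply
PASV_REPLY = re.compile(r'227 .*?(\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3}),(\d{1,3})')

class FtpInspector:
    """Toy FTP-aware stateful filter: permit only announced data sessions."""
    def __init__(self):
        self.pinholes = set()          # (ip, port) pairs temporarily allowed

    def inspect_control(self, line):
        """Watch the control channel; open a pinhole for each PASV reply."""
        m = PASV_REPLY.search(line)
        if m:
            h1, h2, h3, h4, p_hi, p_lo = (int(g) for g in m.groups())
            self.pinholes.add(('%d.%d.%d.%d' % (h1, h2, h3, h4),
                               p_hi * 256 + p_lo))

    def permit(self, dst_ip, dst_port):
        """Would an inbound connection to this endpoint be allowed?"""
        return (dst_ip, dst_port) in self.pinholes

fw = FtpInspector()
fw.inspect_control('227 Entering Passive Mode (203,0,113,5,195,80)')
print(fw.permit('203.0.113.5', 50000))   # the announced data session
print(fw.permit('203.0.113.5', 50001))   # everything else stays blocked
```

The per-session pinhole is exactly the state that makes these filters a resource (and DoS) concern, as discussed below.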

The stateful filters are obviously a performance nightmare as the firewall has to keep state of every single application session crossing it, making it a nice target for a DoS attack. However, there’s nothing we can do as long as we have to support contraptions that use embedded network addresses in application data stream (SIP being another prime example) ... apart from yammering and complaining about the shortsightedness of people who decided opening two TCP sessions is easier than developing a decent session layer (to be honest, I truly believe they did what they thought was the best option at the time ... but we still have to bear the consequences).


  1. You can always configure your FTP server to supply a specific range of TCP ports for the data connections, e.g. the passive ports directive in wu-ftpd's ftpaccess.
    This is also true for SIP RTP ports, and it might be safer to configure static ACLs than to deploy a SIP-aware firewall which might dynamically open something you don't want.
  2. Great idea. Thank you!
  3. Don't forget that FTP sends the username/password in the clear, so it's not the most secure of options! You could use FTPS instead of course, but then none of your firewalls will be able to snoop the ports and you're back at square one. The answer of course is just to use SSH - SCP or SFTP - and all of these problems are sorted.
  4. Speaking of memory lane, I thought this was great -

    Proverb referenced in the TED talk.

    Why FTP is the way it is:)

  5. Yes, horrible SIP; that is why I am stuck configuring Acme Packet session border controllers (VoIP / SIP firewalls). Why the SIP people can't figure out a better protocol instead of recreating SS7 on IP is beyond me
  6. Yeah, I believed that guy and wrote this post ...


    ... only to be thoroughly (and deservedly) spanked in the comments. After spending some time thinking about FTP, its intended usages and its design limitations, one has to conclude that they did a great engineering (and a lousy architectural) job.

    The story promoted in the TED talk could have some basis in reality, but it's mostly an oversimplified fairy tale. I was truly sad to discover that; previously I took TED talks seriously, now I watch them only for their entertainment value.