FTP: a trip down memory lane
A while ago I bitterly complained about the FTP protocol design. I have a decades-long grudge against FTP. If you’re old enough to remember configuring firewalls before stateful inspection or reflexive access lists became available, you probably know what I’m talking about; if not, here’s the story.
When enterprises started using the Internet 15+ years ago, most desktop FTP clients did not support passive mode (although it was part of the FTP standard). When configuring “firewalls” (one or two routers with long access lists), you had to allow all inbound TCP sessions to ports higher than 1024 just to support FTP data sessions. No problem ... unless you were using Sun workstations or NetBIOS over TCP (both of which use dynamic server ports above 1024), in which case those services were totally exposed to the Internet.
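To see why those inbound openings were unavoidable, consider what an active-mode client actually tells the server. It sends a PORT command advertising the (usually dynamic, above-1024) port it listens on, and the server connects back to it. A minimal sketch of decoding that command (the function name and sample address are mine, not from the original post):

```python
def parse_port_command(cmd: str):
    """Decode an FTP active-mode PORT command (RFC 959 format).

    The argument is six comma-separated decimal bytes:
    the first four are the client's IP address, the last
    two encode the TCP port as p1 * 256 + p2.
    """
    fields = cmd.split(None, 1)[1].split(",")
    host = ".".join(fields[:4])
    port = int(fields[4]) * 256 + int(fields[5])
    return host, port

# The server must now open a connection *to* this client port --
# exactly the inbound high-port traffic the access lists had to permit.
print(parse_port_command("PORT 192,168,0,10,7,139"))  # ('192.168.0.10', 1931)
```

Since the client picks a different port for every transfer, a static access list can't know it in advance; the only non-stateful option was to permit everything above 1024.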
After a while, FTP passive mode became common in FTP clients (it’s the default today, as it works better with NAT). It did not solve the problem, just moved it to the other side: now you had to allow any inbound connection to your FTP server. Arguably this was a lesser risk, as you could run the FTP service on a hardened host with almost no other services, but it was still a less-than-ideal design.
The FTP problem was solved only after firewalls (whether dedicated boxes or software functionality in routers) started supporting stateful filters and deep packet inspection.
Contrary to what the pundits would like you to believe, DPI started 10+ years ago, when the first firewall looked inside the FTP command stream to discover the endpoints of the FTP data session. However, nobody howled at the time; they were only too happy to download whatever it was they were downloading.
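What that early FTP inspection amounted to can be sketched in a few lines: watch the control channel for the server's 227 reply to a PASV command and extract the data-session endpoint, so the firewall can open a pinhole for exactly that connection. (This is an illustrative sketch, assuming the standard 227 reply format; the function name and addresses are mine.)

```python
import re

# "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" -- the server tells
# the client where to connect for the data session (port = p1*256 + p2).
PASV_REPLY = re.compile(r"^227 .*?\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)")

def data_endpoint(control_line: str):
    """Return (host, port) of the FTP data session advertised in a
    227 reply, or None if the line isn't a passive-mode reply."""
    m = PASV_REPLY.match(control_line)
    if not m:
        return None
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(data_endpoint("227 Entering Passive Mode (198,51,100,5,19,137)"))
# ('198.51.100.5', 5001)
```

An FTP-aware firewall doing this per control line can then permit only the advertised endpoint instead of every port above 1024 -- at the cost of tracking every FTP control session crossing it.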
Stateful filters are obviously a performance nightmare: the firewall has to keep state for every single application session crossing it, making it a nice target for a DoS attack. However, there’s nothing we can do as long as we have to support contraptions that embed network addresses in the application data stream (SIP being another prime example) ... apart from yammering and complaining about the shortsightedness of people who decided opening two TCP sessions was easier than developing a decent session layer (to be honest, I truly believe they did what they thought was the best option at the time ... but we still have to bear the consequences).
The same is true for SIP RTP ports; it might be safer to configure static ACLs than to deploy a SIP-aware firewall that might dynamically open something you don’t want.
Why FTP is the way it is :)
... only to be thoroughly (and deservedly) spanked in the comments. After spending some time thinking about FTP, its intended usages, and its design limitations, one has to conclude that its designers did a great engineering (and a lousy architectural) job.
The story promoted in the TED talk could have some basis in reality, but it’s mostly an oversimplified fairy tale. I was truly sad to discover that; previously I took TED talks seriously, but now I watch them only for their entertainment value.