DDoS Chow on Packet Pushers

Yesterday I finally found time to listen to the DDoS Chow podcast on Packet Pushers. While I know quite a bit about the technical solutions, their focus on the big picture and Service Provider offerings was truly refreshing (after all, if you’re under attack, it’s best if your upstream SP filters the junk).

They also mentioned a few interesting application-related issues that will definitely help me streamline my web sites (for example: once your load goes above a certain threshold, start serving cached data instead of retrieving it live from the database) and discussed an interesting case study where a networking engineer (Greg, if I’m not mistaken) managed to persuade the programmers to optimize the application, thus saving the company a lot of money in the long run.
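
To make that load-threshold idea concrete, here is a minimal sketch in Python. The threshold value, the snapshot lifetime, and the build_from_database callable are all assumptions for illustration, not anything the podcast prescribes:

```python
import os
import time

LOAD_THRESHOLD = 4.0  # 1-minute load average above which we stop hitting the DB (assumed)
SNAPSHOT_TTL = 300    # seconds a cached snapshot stays usable under load (assumed)

_snapshot = {"html": None, "built_at": 0.0}

def render_page(build_from_database):
    """Serve a cached snapshot when the box is overloaded.

    build_from_database is a hypothetical callable that does the
    expensive live rendering (database queries + HTML assembly).
    """
    load_1min = os.getloadavg()[0]  # POSIX-only system load average
    overloaded = load_1min > LOAD_THRESHOLD
    fresh = (time.time() - _snapshot["built_at"]) < SNAPSHOT_TTL

    if overloaded and _snapshot["html"] is not None and fresh:
        return _snapshot["html"]    # cheap path: no database work at all

    html = build_from_database()    # expensive path: live data
    _snapshot["html"] = html
    _snapshot["built_at"] = time.time()
    return html
```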

Even if DDoS protection is not relevant to your current job, and although much of the discussion revolved around SP offerings and application-level solutions, I would strongly recommend listening to the podcast. After all, it never hurts to glance beyond your own sandbox and consider other perspectives (and I definitely enjoyed the view).

4 comments:

  1. Yes, I did learn a lot from that podcast. Plus, it was fun to hear them get confused because a certain someone kept schooling them on MPLS :P
  2. Yeah, I realized too late that you need a whiteboard to properly explain MPLS topics :-[ They are not complex, just hard to describe without a picture; I would usually need 10 pictures x 1000 words = too much.
  3. "serving cached data instead of retrieving it live from the database" - It would seem that the text of your posts should be relatively long lived objects with age set via expires header or cache control header - if the object hasn't or wont change for a given time period then mark it as such. Have a look at http://www.mnot.net/cache_docs/ for more info.
  4. Thanks for the link; it's a great document. I'm somewhat aware of how caches work; after all, I wrote a few articles after implementing caching on my web sites:

    http://www.informit.com/articles/article.aspx?p=663080
    http://www.informit.com/articles/article.aspx?p=665932
    http://www.informit.com/articles/article.aspx?p=669598

    The idea mentioned in the podcast goes beyond web caching. Imagine you build the page content dynamically: it might be retrieved from a database, or you might have to combine static descriptions with comments, reviews and the like. Sometimes it takes a while to retrieve the data and build the HTML (or, in my case, XML), so if your site is overloaded or under attack, it's better to serve a cached snapshot to NEW (not RETURNING) visitors instead of spending time building the page from scratch (a sketch of that check follows these comments).

    BTW, MediaWiki (the Wikipedia engine) implements something very similar for high-performance retrievals.
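
A minimal sketch of the header-setting idea from comment 3, using Python's standard wsgiref server; the one-hour lifetime is an assumed value, not a recommendation:

```python
from datetime import datetime, timedelta, timezone
from wsgiref.simple_server import make_server

MAX_AGE = 3600  # assumed: article pages don't change for an hour

def app(environ, start_response):
    expires = datetime.now(timezone.utc) + timedelta(seconds=MAX_AGE)
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        # Cache-Control is the modern header; Expires covers older HTTP/1.0 caches.
        ("Cache-Control", f"public, max-age={MAX_AGE}"),
        ("Expires", expires.strftime("%a, %d %b %Y %H:%M:%S GMT")),
    ])
    return [b"<html><body>Long-lived article content</body></html>"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()  # serves on http://localhost:8000/
```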
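
And a sketch of the new-versus-returning distinction from comment 4; the "session" cookie name, like the other parameter names, is a hypothetical marker for visitors we have seen before:

```python
def page_for(cookies, overloaded, snapshot_html, build_live):
    """Under load, serve the cached snapshot only to first-time visitors.

    cookies is a dict of request cookies; the "session" cookie name,
    like the other parameters, is an assumption for illustration.
    """
    returning = "session" in cookies
    if overloaded and not returning and snapshot_html is not None:
        return snapshot_html   # new visitor under load: cheap cached copy
    return build_live()        # returning visitor or normal load: live page
```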