Does dedicated iSCSI infrastructure make sense?

Chris Marget recently asked a really interesting question:

I’ve encountered an environment where the iSCSI networks are built just like FC networks: Multipathing software in use on servers and storage, switches dedicated to “SAN A” and “SAN B” VLANs, and full isolation of paths (redundant paths) between server and storage. I understand creating a dedicated iSCSI VLAN, but why would you need two? Isn’t the whole thing running on top of TCP? Am I missing something?

Well, it actually makes sense in some mission-critical environments.


Happy Eyeballs – Happiness Defined by Your Perspective

It seems that most people who don’t have a vested interest in the status quo agree that the socket API is broken. After all, why should every single application ever written have to deal with the idiosyncrasies of two address families?

Not surprisingly, the browser vendors got sick and tired of waiting for a fixed API or a standardized session layer (nothing happened in the last two decades) and decided to implement happy eyeballs – a simple mechanism that creates two TCP sessions (one over IPv4, another one over IPv6) and uses whichever one works better.
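
To make the idea concrete, here’s a minimal Python sketch of the happy-eyeballs approach (my own illustration, not any browser’s actual code, and nowhere near a full RFC 6555/8305 implementation): start TCP connection attempts over both address families in parallel and keep whichever session comes up first.

```python
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

def happy_eyeballs_connect(host, port, timeout=5):
    """Race IPv6 and IPv4 TCP connection attempts; return the winning socket."""
    def attempt(family):
        # Try every address of one family, return the first working socket
        err = OSError("no usable address")
        for af, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, family, socket.SOCK_STREAM):
            s = socket.socket(af, socktype, proto)
            s.settimeout(timeout)
            try:
                s.connect(sockaddr)
                return s
            except OSError as e:
                s.close()
                err = e
        raise err

    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(attempt, fam)
                   for fam in (socket.AF_INET6, socket.AF_INET)]
        winner = None
        for fut in as_completed(futures):
            try:
                sock = fut.result()
            except OSError:
                continue
            if winner is None:
                winner = sock      # first established session wins
            else:
                sock.close()       # the slower session is torn down
    if winner is None:
        raise OSError("could not connect over IPv6 or IPv4")
    return winner
```

Real implementations (RFC 8305) don’t race the two address families blindly; they give IPv6 a small head start so a marginally slower IPv6 path doesn’t push all the traffic back to IPv4.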


Start Reading V6OPS Documents

You might not have to deploy IPv6 in your network tomorrow (if you’re an ISP I sincerely hope you do), but that’s no excuse for not getting prepared for the eventual inevitable deployment (Tom Hollingsworth has way more to say on this topic).

Don’t believe in the “inevitable” part? Maybe you should spend some time with people who were running SNA and IPX networks two decades ago and living in blissful IP denial.


Controller-Based Packet Forwarding in OpenFlow Networks

One of the attendees of the ProgrammableFlow webinar sent me an interesting observation:

Though there are a separate control plane and a separate data plane, it appears there is crossover from one to the other. Consider the scenario where the flow tables are not yet programmed, so the packets are punted by the ingress switch to the PFC. The PFC then forwards these packets to the egress switch so that the initial packets are not dropped. In some sense, we see packets traversing the boundary between the typical data plane and control plane, and vice versa.

He’s absolutely right, and if the above description reminds you of fast and process switching, you’re spot on. There really is nothing new under the sun.
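
To make the parallel concrete, here’s a minimal sketch of that controller-based forwarding path written for the open-source Ryu controller (my own illustration; it is not NEC’s PFC code): a packet that misses the flow tables is punted to the controller as a packet-in, and the controller pushes it back into the data plane with a packet-out so the first packets of a flow aren’t dropped.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class ControllerForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg                      # packet punted by the switch
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Send the packet back into the data plane (flooded here for
        # simplicity) while flow entries are being computed and installed.
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions, data=data)
        dp.send_msg(out)
```

The PFC obviously does something smarter than flooding (it knows the topology and forwards the packets toward the egress switch), but the crossover between control plane and data plane is exactly the same.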


NEC ProgrammableFlow Scalability Features

Once you get rid of spanning tree and associated kludges (not too hard in OpenFlow-based networks), BUM flooding becomes your biggest enemy. NEC’s engineers implemented some interesting features in the ProgrammableFlow switches and controllers: rate-limiting of unknown unicast frames, flooding control, and ARP snooping (if only they’d go for ARP proxy).
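
For illustration, here’s a rough sketch of the ARP proxy idea I keep wishing for, written as hypothetical code for the open-source Ryu controller (not NEC’s implementation): the controller snoops ARP traffic to learn IP-to-MAC bindings and answers subsequent ARP requests itself, so they never have to be flooded.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.lib.packet import packet, ethernet, arp, ether_types
from ryu.ofproto import ofproto_v1_3

class ArpProxy(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.arp_table = {}               # IP -> MAC, learned by snooping

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        req = packet.Packet(msg.data).get_protocol(arp.arp)
        if req is None:
            return                        # not ARP; ignored in this sketch
        self.arp_table[req.src_ip] = req.src_mac          # ARP snooping
        if req.opcode != arp.ARP_REQUEST or req.dst_ip not in self.arp_table:
            return                        # unknown target; would be flooded
        # Build an ARP reply on behalf of the target and send it straight
        # back out of the ingress port; no flooding required.
        target_mac = self.arp_table[req.dst_ip]
        reply = packet.Packet()
        reply.add_protocol(ethernet.ethernet(dst=req.src_mac, src=target_mac,
                                             ethertype=ether_types.ETH_TYPE_ARP))
        reply.add_protocol(arp.arp(opcode=arp.ARP_REPLY,
                                   src_mac=target_mac, src_ip=req.dst_ip,
                                   dst_mac=req.src_mac, dst_ip=req.src_ip))
        reply.serialize()
        out = parser.OFPPacketOut(datapath=dp, buffer_id=ofp.OFP_NO_BUFFER,
                                  in_port=ofp.OFPP_CONTROLLER,
                                  actions=[parser.OFPActionOutput(msg.match['in_port'])],
                                  data=reply.data)
        dp.send_msg(out)
```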


Predicting the IPv6 BGP Table Size

One of my readers sent me an interesting question:

Are you aware of any studies looking at the effectiveness of IPv6 address allocation policies? I’m specifically interested in the effects of allocation policy on RIB/FIB sizes.

Well, we haven’t solved a single BGP-inflating problem with IPv6, so expect the IPv6 BGP table to grow to roughly the same size as the IPv4 BGP table once IPv6 is widely deployed.


Evolution of IP Model

I stumbled upon a fantastic RFC – Evolution of the IP Model (RFC 6250) – that should be made mandatory reading for everyone remotely involved with networking. It describes numerous “truths” (politely called misconceptions) that everyone, from programmers to network designers, still relies on. Some of my favorites: reachability is symmetric and transitive, loss is rare, addresses are stable, each host has a single interface and a single IP address ... Enjoy!

Example: Multi-Stage Clos Fabrics

Smaller Clos fabrics are built with two layers of switches: leaf and spine switches. The oversubscription ratio you want to achieve dictates the number of uplinks on the leaf switch, which in turn dictates the maximum number of spine switches and thus the fabric size.
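
Here’s a quick back-of-the-envelope calculator (hypothetical Python, assuming all ports run at the same speed) that shows how the oversubscription ratio drives the leaf-port split and the maximum two-tier fabric size.

```python
def two_tier_clos(leaf_ports, spine_ports, oversubscription):
    """Rough sizing of a two-tier leaf-and-spine fabric.

    Assumes equal port speeds and exactly one uplink from every leaf
    switch to every spine switch.
    """
    # Split leaf ports so that server ports : uplinks = oversubscription : 1
    uplinks = leaf_ports // (oversubscription + 1)
    server_ports_per_leaf = leaf_ports - uplinks
    max_spines = uplinks              # one uplink per spine switch
    max_leaves = spine_ports          # one spine port per leaf switch
    return {
        "uplinks_per_leaf": uplinks,
        "max_spine_switches": max_spines,
        "max_leaf_switches": max_leaves,
        "max_server_ports": max_leaves * server_ports_per_leaf,
    }

# 64-port leaf and spine switches at 3:1 oversubscription:
# 16 uplinks per leaf, at most 16 spines, 64 leaves, 64 * 48 = 3072 ports
print(two_tier_clos(leaf_ports=64, spine_ports=64, oversubscription=3))
```

With 64-port leaf and spine switches and a 3:1 oversubscription ratio, that caps a two-tier fabric at roughly 3,000 server-facing ports.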

You have to use a multi-stage Clos architecture if you want to build bigger fabrics; Brad Hedlund described a sample fabric with over 24,000 server-facing ports in the Clos Fabrics Explained webinar.
