Next Chapter in Data Center Design Case Studies

When I published the Data Center Design Case Studies book almost exactly a month ago, three chapters were still missing – but that was the only way to stop procrastinating and make sure I’d write them (I’m trying to stick to published deadlines ;).

The first one of the missing chapters is already finished and available to subscribers and everyone who bought the book or the Designing Private Cloud Infrastructure webinar (you’ll also get an email on Sunday reminding you to download a fresh copy of the PDF).

The Amazon Kindle version will be updated in a few days.


Why is IPv6 layer-2 security so complex (and how to fix it)

After the excellent IPv6 security presentation Eric Vyncke gave at the 9th Slovenian IPv6 Summit, someone asked me: “Why is IPv6 first-hop security so complex? It looks like the developers of the IPv6 protocol stack tried to make users anonymous and made everyone’s life complicated while doing that.”

Well, he was totally surprised by my answer: “The real reason IPv6 first-hop security is so complex is the total mess we made of L2/L3 boundary.”
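To put the problem in perspective: on a typical L2 segment any host can claim to be a router, and that’s exactly what first-hop security mechanisms like RA Guard have to deal with. Here’s a minimal Python/Scapy sketch of a rogue Router Advertisement (the prefix, addresses, and interface name are made-up illustration values):

```python
# A rogue Router Advertisement: any host on the L2 segment can send
# this and become the default router for all its neighbors.
from scapy.all import (Ether, IPv6, ICMPv6ND_RA,
                       ICMPv6NDOptPrefixInfo, sendp)

rogue_ra = (
    Ether(dst="33:33:00:00:00:01")            # all-nodes multicast MAC
    / IPv6(src="fe80::bad:1", dst="ff02::1")  # all-nodes multicast address
    / ICMPv6ND_RA(routerlifetime=1800)        # "use me as default router"
    / ICMPv6NDOptPrefixInfo(prefix="2001:db8:bad::", prefixlen=64)
)
# sendp(rogue_ra, iface="eth0")   # lab segments only, obviously
```

RA Guard (and the rest of the first-hop security toolkit) has to intercept packets like this one in the switch – at a layer that was never supposed to look that deep into the frames.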


Trident 2 Chipset and Nexus 9500

Most recently launched data center switches use the Trident 2 chipset, and yet we know almost nothing about its capabilities and limitations. It might not work at line rate, it might have L3 lookup challenges when faced with L2 tunnels, and there might be other unpleasant surprises… but we don’t know what they are, because you cannot get Broadcom’s documentation unless you work for a vendor that signed an NDA.

Interestingly, the best source of Trident 2 technical information I found so far happens to be the Cisco Live Nexus 9000 Series Switch Architecture presentation (BRKARC-2222). Here are a few tidbits I got from that presentation and Broadcom’s so-called datasheet.


Can We Just Throw More Bandwidth at a Problem?

One of my readers sent me an interesting question:

I have been reading in many places about "throwing more bandwidth at the problem." How far is this statement valid? Should the applications (servers) work with the assumption that there is infinite bandwidth provided at the fabric level?

Moore’s law works in our favor. It’s already cheaper (in some environments) to add bandwidth than to deploy QoS.
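Here’s a trivial Python illustration of the underlying math (the burst size is a made-up number): the time a traffic burst spends sitting in a queue drops linearly with link speed, which is why a faster link often solves the problem QoS would otherwise have to manage.

```python
def burst_drain_ms(burst_bytes, link_bps):
    """Time (ms) to drain a queued burst through a link."""
    return burst_bytes * 8 / link_bps * 1000

burst = 1_000_000   # a 1 MB burst stuck in an output queue (assumption)
for gbps in (1, 10, 40, 100):
    print(f"{gbps:3d} Gbps link: burst drains in {burst_drain_ms(burst, gbps * 1e9):6.2f} ms")
```

On a 1 Gbps link the burst adds 8 ms of queuing delay; on a 40 Gbps link it’s 0.2 ms – below the noise floor for most applications.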


How Line-rate Is Line-rate?

During yesterday’s Data Center Fabrics Update presentation, one of the attendees sent me this question while I was talking about the Arista 7300 series switches:

Is the 7300 really non-blocking at all packet sizes? With only 2 x Trident-2 per line card it can't support non-blocking for small packets based on Trident-2 architecture.

It was an obvious example of vendor bickering, so I ignored the question during the presentation, but it still intrigued me, so I decided to do some more research.
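Part of the answer is simple arithmetic you can do yourself. The Python sketch below computes the packet-per-second rates needed to keep the raw port bandwidth of a single Trident 2 (32 × 40GE = 1.28 Tbps) busy at various frame sizes, using the standard 20 bytes of per-frame Ethernet overhead (preamble + inter-frame gap); compare the results with whatever forwarding rate the chipset actually has (a number Broadcom doesn’t publish) to find the frame size below which a switch stops being line-rate:

```python
ETH_OVERHEAD = 20   # preamble (8 bytes) + inter-frame gap (12 bytes)

def required_mpps(total_bps, frame_bytes):
    """Mpps needed to fill total_bps with back-to-back frames of frame_bytes."""
    wire_bits = (frame_bytes + ETH_OVERHEAD) * 8
    return total_bps / wire_bits / 1e6

total_bps = 32 * 40e9   # one Trident 2: 32 x 40GE ports

for frame in (64, 128, 256, 512, 1500):
    print(f"{frame:4d}-byte frames: {required_mpps(total_bps, frame):7.1f} Mpps")
```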


Queuing Mechanisms in Modern Switches

A long while ago, Brad Hedlund (then at Dell Force10) started an interesting discussion comparing leaf-and-spine (Clos) fabrics built from fixed-configuration pizza-box switches with high-end chassis switches. The readers’ comments were all over the place (pricing, wiring, power consumption), but surprisingly nobody addressed the queuing issues.

This blog post focuses on queuing mechanisms available within a switch; the next one will address end-to-end queuing issues in leaf-and-spine fabrics.
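As a concrete example of one such mechanism, here’s a minimal Python sketch of deficit (weighted) round robin, a scheduler commonly implemented in switching ASICs; the queue weights, quantum, and packet sizes are arbitrary illustration values:

```python
from collections import deque

class DWRR:
    """Deficit weighted round robin across output queues (illustrative)."""
    def __init__(self, weights, quantum=1500):
        self.queues  = [deque() for _ in weights]
        self.quanta  = [w * quantum for w in weights]  # byte credit per round
        self.deficit = [0] * len(weights)

    def enqueue(self, qid, pkt_bytes):
        self.queues[qid].append(pkt_bytes)

    def one_round(self):
        """Serve every queue once; return (queue-id, bytes) of packets sent."""
        sent = []
        for qid, q in enumerate(self.queues):
            if not q:
                self.deficit[qid] = 0          # empty queues keep no credit
                continue
            self.deficit[qid] += self.quanta[qid]
            while q and q[0] <= self.deficit[qid]:
                self.deficit[qid] -= q[0]
                sent.append((qid, q.popleft()))
        return sent

# Two congested queues with weights 3:1 get bandwidth in a 3:1 ratio:
sched = DWRR(weights=[3, 1])
for _ in range(20):
    sched.enqueue(0, 1500)
    sched.enqueue(1, 1500)
print(sched.one_round())   # [(0, 1500), (0, 1500), (0, 1500), (1, 1500)]
```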


Load Balancing Across IP Subnets

One of my readers sent me this question:

I have a data center with huge L2 domains. I would like to move routing down to the top of the rack; however, I’m stuck on a load-balancing question: how do load balancers work if you have a routed network and pool members that are multiple hops away? How can that work with Direct Return?

There are multiple ways to make load balancers work across multiple subnets:
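One of them (used, for example, by Linux Virtual Server in tunnel mode) is IP-in-IP tunneling: the load balancer encapsulates the client’s packet toward a pool member that can be any number of routed hops away; the member has the virtual IP configured on a loopback, decapsulates the packet, and replies straight to the client. Here’s a Python/Scapy sketch of the encapsulation step (all addresses are made-up documentation values):

```python
from scapy.all import IP, TCP

VIP    = "198.51.100.10"   # virtual IP the clients connect to
SERVER = "203.0.113.25"    # pool member, multiple routed hops away

# What the client sent to the load balancer:
client_pkt = IP(src="192.0.2.7", dst=VIP) / TCP(dport=80)

# What the load balancer forwards: the original packet wrapped in an
# outer IP header addressed to the pool member; the inner header (and
# thus the VIP) stays intact.
tunneled = IP(dst=SERVER) / client_pkt

# The pool member strips the outer header, sees dst=VIP (configured on
# its loopback), and sends the reply with src=VIP directly to the
# client – the return traffic never touches the load balancer.
```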
