Category: data center
Vertically Integrated Musings
The Packet Pushers podcast is a constant source of inspiration for my blog posts. Recently I stumbled upon Rob Sherwood’s explanation of how they package Big Cloud Fabric:
It’s a vertically integrated solution, from Switch Light OS to our SDN controller and Big Cloud Fabric application.
Really? What happened to openness and disaggregation?
Case Study: Scale-Out Cloud Infrastructure
I helped several customers design scale-out private or public cloud infrastructure. In every case, I tried to start with reasonably small pods (sized around what they’d consider an acceptable loss unit – another great term I inherited from Chris Young), connect them to a shared L3 backbone (either within a data center or across multiple data centers), and then address the inevitable desire for stretched layer-2 connectivity.
You’ll find a summary of these designs in my next ExpertExpress case study: Scale-Out Private Cloud Infrastructure, and if you need more details, I’m usually available for online consulting.
Don’t Be Overly Enthusiastic about Vendor Claims (This Time It’s Brocade)
I was running the first part of the Data Center Fabrics Update webinar last week and mentioned that Brocade VDX 6740 supports Flex ports (ports you can use as either Fibre Channel or 10GE ports). Someone immediately wrote a comment saying “so does VDX 6940”. I was almost sure Flex ports aren’t available on VDX 6940 yet, and, as always, turned to the vendor documentation to figure it out.
As expected, the data sheet is a bit vague, somewhat reflecting reality, but also veering into the realm of futures instead of features. Here’s what they say:
Replacing Central Router with a Next-Generation Firewall?
One of my readers sent me this question:
After reading this blog post and a lot of blog posts about the zero-trust model versus security zones, what do you think about replacing L3 data center core switches with high-speed next-generation firewalls?
Long story short: just because someone writes about an idea doesn’t mean it makes sense. Some things are better left in PowerPoint.
Rearchitecting L3-Only Networks
One of the responses I got on my “What is Layer-2” post was:
Ivan, are you saying to use L3 switches everywhere with /31 on the switch ports and the servers/workstations?
While that solution would work (and I know a few people who are using it with reasonable success), it’s nothing more than creative use of existing routing paradigms; we need something better.
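Here’s a minimal Python sketch (my own illustration, not something from the original post) of what that addressing scheme looks like: every server-facing port becomes a routed point-to-point link with its own /31 subnet. The pool prefix and port count are assumed example values.

```python
# Sketch of the /31-per-port addressing scheme from the reader's question.
# The per-switch pool (10.1.0.0/24) and port count (48) are made-up examples.
import ipaddress

uplink_pool = ipaddress.ip_network("10.1.0.0/24")   # hypothetical per-switch address pool
ports = 48                                           # server-facing ports on a typical ToR switch

for port, subnet in zip(range(1, ports + 1), uplink_pool.subnets(new_prefix=31)):
    switch_ip, server_ip = subnet[0], subnet[1]      # a /31 holds exactly two addresses (RFC 3021)
    print(f"Ethernet{port}: switch {switch_ip}/31, server {server_ip}/31")
```

There’s nothing novel in the sketch – it’s just regular routed point-to-point links, which is exactly the “creative use of existing routing paradigms” mentioned above.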
Design Challenge: Multiple Data Centers Connected with Slow Links
One of my readers sent me this question:
What is best practice to get a copy of the VM image from DC1 to DC2 for DR when you have subrate (155 Mbps in my case) Metro Ethernet services between DC1 and DC2?
The slow link between the data centers effectively rules out any ideas of live VM migration; to figure out what you should be doing, you have to focus on business needs.
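A quick back-of-the-envelope calculation shows what the link can realistically deliver (a minimal sketch; the 100 GB image size and the ~10% protocol overhead are my assumptions, not figures from the reader’s question):

```python
# Back-of-the-envelope: how long does one full VM image copy take across
# the 155 Mbps inter-DC link? Image size and overhead are assumed examples.
image_size_gb = 100        # assumed VM image size in gigabytes
link_mbps = 155            # subrate Metro Ethernet service from the question
efficiency = 0.9           # assume ~10% lost to protocol overhead

megabits_to_move = image_size_gb * 8 * 1000          # GB -> megabits (decimal units)
seconds = megabits_to_move / (link_mbps * efficiency)
print(f"~{seconds / 3600:.1f} hours per full copy")  # prints ~1.6 hours
```

Even a single full copy keeps the link busy for well over an hour, which is another reason to start with the business needs instead of the technology.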
New Webinar: vSphere 6 Networking Deep Dive
The VMware Networking Deep Dive webinar was getting pretty old and outdated, but I always managed to find an excuse to postpone its refresh – first it was the lack of new features in vSphere releases, then bad timing (it doesn’t make sense to do a refresh in June with a new release coming out in August), then lack of documentation (vSphere 6 was announced in August 2014; the documentation appeared in March 2015).
Arista EOS Available on Whitebox Switches
A few months ago Gigamon did the right thing: they figured out that their true value lies not in the hardware boxes, but in the software running on them, and decided to start offering their GigaVUE-OS on whitebox switches.
So far, Arista is the only other networking vendor that figured out it doesn’t make sense to resist the tide – Arista EOS is now available on Open Compute Networking whitebox switches.
Update 2015-04-02: If you followed the links in this blog post, you probably figured out that it’s an April Fools’ one. However, that’s not the end of the story…
Whitebox Switching: Follow the R&D Budget
A few weeks ago HP announced that they’d start selling branded whitebox (brite-box) switches, and, as expected, the industry press was immediately full of opinions. As always, it makes sense to follow the money (or, in this case, the R&D budget) to understand what’s going on behind the scenes.
Cisco ACI – a Stretched Fabric That Actually Works
In mid-February, a blog post on Cisco’s web site announced a stretched ACI fabric (bonus points for writing about a shipping product instead of resorting to the marketing future tense). Will it work better than other PowerPoint-based fabrics? You bet!
What’s the Big Deal?
Cisco’s ACI fabric uses a distributed (per-switch) control plane, with APIC controllers providing fabric configuration and management functionality. In that respect, the ACI fabric is no different from any other routed network, and we know those work well in distributed environments.
Response: Why Technology Still Matters
My good friend Tom Hollingsworth wrote a great blog post about hypermyopia in the networking industry. I agree with almost everything he wrote (I have to – I’m always telling people to focus on business needs and to change their mentality before relying on shiny new gizmos), but I still think it’s crucial to consider the technology used in the products we’re looking at.
Do We Need QoS in the Data Center?
Whenever I get asked about QoS in the data center, my stock reply is “bandwidth is cheaper than QoS-induced complexity.” This is definitely true in most cases, and ideally the elephant flow problems should be solved higher up in the application stack, not with network-layer kludges – but are there situations where you actually need data center QoS?
Let’s Get Rid of the Thick Yellow Cable
Whenever I write about the crazy things vendors are trying to sell us, and the kludges we have to live with, I keep wondering, “Is it just me, or is the whole industry really as ridiculous as it seems?” It’s so nice to see someone else coming to the same conclusions, like Mark Burgess (the author of CFEngine and Promise Theory) did in a lengthy essay on whether SDN makes sense.
Whiteboarding Cisco ACI on Software Gone Wild
Late last year David Gee and I wanted to test another interesting gizmo: an online virtual whiteboard. David was pondering some interesting aspects of Cisco ACI, and they seemed like a perfect topic for an impromptu discussion.
Big Cloud Fabric: Scaling OpenFlow Fabric
I’m still convinced that architectures with centralized control planes (and that includes solutions relying on OpenFlow controllers) cannot scale. On the other hand, Big Switch Networks is shipping Big Cloud Fabric, and they claim they solved the problem. Obviously I wanted to figure out what’s going on and Andy Shaw and Rob Sherwood were kind enough to explain the interesting details of their solution.
Long story short: Big Switch Networks significantly extended OpenFlow.