8 comments:

  1. Thanks, Ivan, for the post. Indeed there are many customers who couldn't cross the mental bridge to see that it's doable. It kinda reminds me of the 4-minute mile barrier :-)
    Have a great day!
    e1
  2. Ivan,

    I'm disappointed that I don't see any mention in that post of my primary concern for those types of racks: power and cooling. AnandTech suggests the Xeon E5-2699 v3 draws around 175 watts (TDP is 145 watts). That means that, just for CPUs, rack power is 19.6 kilowatts; once you throw in motherboards, RAM, drives, etc., 30 kilowatts for a rack doesn't seem out of the question, especially factoring in two switches. 30 kW is not un-doable in this day and age, but caveat emptor if you buy this and find out your datacenter only budgeted for 8 kW per rack.
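    The back-of-the-envelope math above can be sketched like this (the per-CPU draw and the CPU-only rack figure come from the comment; the CPU and server counts are inferred from them, assuming dual-socket boxes):

    ```python
    # Rack power budget sketch, based on the figures in the comment.
    WATTS_PER_CPU = 175        # measured Xeon E5-2699 v3 draw (TDP is 145 W)
    CPU_RACK_WATTS = 19_600    # "just for CPUs, rack power is 19.6 kilowatts"

    cpus_per_rack = CPU_RACK_WATTS // WATTS_PER_CPU   # -> 112 CPUs
    servers_per_rack = cpus_per_rack // 2             # -> 56 dual-socket servers

    print(cpus_per_rack, servers_per_rack)            # prints: 112 56
    ```

    Everything beyond the CPUs (motherboards, RAM, drives, switches) is what pushes the total toward the 30 kW mentioned above.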

    Replies
    1. Tom,

      Thanks for the comment. I know I'm always missing something, and you just pointed out yet another caveat. Let me explore this a bit further.

      Kind regards,
      Ivan
    2. Spot on! Thank you, Tom, for the correction. How did I miss that! I've updated my blog with an acknowledgement of your correction (thank you for that) and a link back to this post so readers can see your comment.

      Your comment echoes what I always tell fellow virtualisation engineers: the physical stuff matters.
    3. It becomes an even bigger concern if you are factoring co-lo into your international footprint. We were designing regional data hubs for a large enterprise that required a very dense VM footprint with a minimal network edge. It wasn't 1,000 VMs per rack, but it did require 30 kW. In the US, we found that most of the big shops would accommodate us. In EMEA and APAC... our selection was extremely small, and in some cases it was treated as a custom job and priced accordingly. We even used a co-lo broker to help us locate willing providers. (This was a few years ago; it might be better now.)

      As an early adopter of vBlock, we can concur with Ivan's point: you can build a very dense network with a surprisingly small footprint, especially if you're willing to leverage VM-based ADCs/firewalls/routers.
  3. To autarch01
    Please try a very flexible operator in the Baltics: www.datainn.lt/en. Great service and network: www.baltichighway.com
  4. 1000? you're thinking too small :-)
    I work for a cloud provider; we shove 250 VMs per HV and 40 HVs per rack. Try 10,000 VMs per rack :-)
    Replies
    1. A typical enterprise virtualization engineer would faint at what you're suggesting ;) But yeah, I agree with you - those numbers are definitely reachable if you're pushing the envelope (and let me guess: most of the VMs probably have 4GB of RAM or so, right?)