Building a Small Cloud with UCS Mini

While putting the final polish on my Designing Infrastructure for Private Clouds Interop New York session (also available in webinar format), I wondered whether the recently launched UCS Mini could be used to build my sample private cloud.

The UCS 6324 fabric interconnect provides more than enough bandwidth: each module has 80 Gbps of uplink connectivity (160 Gbps per chassis) and 20 Gbps toward each server (40 Gbps per server across both modules).

The server blades definitely look promising:

  • Up to 24 cores per blade, for a total of 192 cores per UCS Mini chassis – probably enough to run ~1000 VMs;
  • Up to 768 GB of RAM per blade, for a total of over 6 TB of RAM per chassis – yet again, more than enough for ~1000 VMs

… and then I stumbled upon the disk specs: two disks, the largest one being 1 TB (HDD) or 800 GB (SSD), for a total of 8 TB of redundant storage per chassis. Meh.

Stephen Foskett found an interesting way around this problem: add a C3160 rack server for a total of 360 TB of storage. Alternatively, use two C240 M3 rack servers, connect them to the fabric interconnect, and stuff them full of disk drives (up to 24 1.2 TB drives per server, for a total of 28.8 TB of fully redundant disk storage).
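The arithmetic behind these numbers is easy to double-check with a back-of-the-envelope script (the eight-blade chassis count and the mirroring assumptions are mine, inferred from the totals above):

```python
# Back-of-the-envelope capacity check for a UCS Mini chassis
BLADES = 8                             # half-width blades per UCS Mini chassis (assumption)

cores = 24 * BLADES                    # 24 cores per blade
ram_gb = 768 * BLADES                  # 768 GB of RAM per blade
local_disk_tb = BLADES * 2 * 1 / 2     # two 1 TB drives per blade, mirrored

# Storage workaround: two C240 M3 rack servers stuffed with disks
c240_raw_tb = 2 * 24 * 1.2             # 24 x 1.2 TB drives per server
c240_redundant_tb = c240_raw_tb / 2    # data mirrored across the two servers

print(cores)               # 192 cores per chassis
print(ram_gb)              # 6144 GB (over 6 TB) per chassis
print(local_disk_tb)       # 8 TB of redundant local storage
print(c240_redundant_tb)   # 28.8 TB of redundant external storage
```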

And now for the tough question: how do you make the storage servers accessible to the compute nodes? Here are a few ideas:

  • If you plan to use OpenStack, run Ceph or Gluster on the storage nodes and make them iSCSI or NFS targets. Problem solved;
  • If you’re a Hyper-V user, you don’t have a problem. Windows Server has all the components you need.
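For the OpenStack/Ceph option, the first steps might look something like this minimal sketch (the pool and image names are made up, and tgt is just one of several ways to turn an RBD device into an iSCSI target):

```shell
# Create a Ceph pool on the storage nodes and carve a block image out of it
ceph osd pool create vmstore 128                  # 128 placement groups
rbd create vmstore/cloud-vol0 --size 1048576      # ~1 TB image (size in MB)

# Map the image on a gateway node, then export it with tgt as an iSCSI LUN
rbd map vmstore/cloud-vol0                        # appears as /dev/rbd0
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2014-09.example.cloud:vmstore
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --backing-store /dev/rbd0
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
```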

vSphere is a tougher nut to crack (it’s evident who VMware’s parent corporation is): you could use one of the ideas mentioned above to build a distributed storage system that vSphere hosts could connect to through iSCSI or NFS, but that clearly reeks of home-brewed kludgeitis.
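If you do go down that road, mounting such an NFS export on a vSphere host is at least straightforward (the IP address, share path, and datastore name below are made-up examples):

```shell
# Mount an NFS export from the storage nodes as a vSphere datastore
esxcli storage nfs add --host 192.168.10.11 \
       --share /export/vmstore --volume-name vmstore-ds

# Verify the datastore is mounted and accessible
esxcli storage nfs list
```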

VSAN might be an alternative, but I’m not sure how well it would perform in an environment where the majority of the virtual disk traffic goes across the network. Comments highly appreciated!

Related webinars

Designing Private Cloud Infrastructure covers numerous design options, including scale-out storage solutions. Virtualization and Data Center webinars discuss individual technologies you might consider in your cloud infrastructure design (and you can always engage me if you need a design review, technology discussion, or a second opinion).

4 comments:

  1. Nice post and great suggested solution. Just want to clarify that the C3160 isn't supported with UCS Mini today. The C240 M3 is, up to seven, with 48TB raw storage.

    Disclosure - I'm the Cisco Sr. Marketing Manager working on UCS Mini.
  2. VSAN aggregates locally attached disks either with an HBA controller in pass-through (aka IT mode) or a RAID adapter in RAID-0 mode (aka JBOD mode).

    Anyway, a minimum of one SSD + one HDD is required.

    Therefore I doubt this UCS Mini will ever make it onto the VSAN HCL in its current state... Maybe in an all-flash VSAN version, but this is only speculation :)

  3. I think VSAN is not the right option in this case. For VSAN we need at least 3 (or better, 4) rack server nodes. And we'd also need to license all our blade servers in order to access the VSAN cluster.
    It would be a really expensive solution.

    IMHO the best option here is to buy any NFS/iSCSI storage. Thus we'll get a modified version of FlexPod/VSPEX architecture - FlexPod-mini/VSPEX-mini :).

    And of course we always have an option with OpenStack + Swift/Ceph.
  4. There are rumors of fitting UCS Invicta (formerly Whiptail) within the UCS chassis. 24 TB of raw SSD storage sounds nice that close to the servers.