Finally: Juniper Supports a Leaf-and-Spine Virtual Chassis

The recent Juniper product launch included numerous components, among them: a new series of data center switches (including a badly-needed spine switch), MetaFabric reference architecture (too meta for me at the moment – waiting to see the technical documentation beyond the whitepaper level), and (finally) a leaf-and-spine virtual chassis – Virtual Chassis Fabric.

Why is Virtual Chassis Fabric important?

The existing Junos Virtual Chassis architecture evolved from stackable switches connected with Virtual Chassis Ports (VCPs). Later software releases allowed you to use 10GE ports as VCPs, and the QFX-series Virtual Chassis has no dedicated VCP ports at all, but it seems the inter-switch connectivity requirements never made it past the stacking mentality.
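For illustration, this is roughly what converting a regular 10GE uplink into a VCP looks like from the Junos operational CLI (the PIC slot and port numbers are placeholders; check your platform's documentation for the actual uplink numbering):

```
user@switch> request virtual-chassis vc-port set pic-slot 1 port 0
user@switch> show virtual-chassis vc-port
```

The first command repurposes the port as a Virtual Chassis Port; the second verifies the VCP link status on each member.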

More than 18 months ago I published a leaf-and-spine architecture that could use virtual chassis to manage the whole fabric as a single device, first in the Server Guys Guide to Data Center Fabrics webinar, and later in the design section of the Clos Fabrics Explained webinar. The size of the fabric was limited by the Virtual Chassis limitations (10 devices per virtual chassis), but you still got several hundred server ports, which is more than enough for most environments.
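A quick back-of-the-envelope calculation shows where the "several hundred server ports" figure comes from. The sketch below assumes a hypothetical fabric of 48-port leaf switches with two spine switches, which is not a Juniper-specified design, just illustrative arithmetic within the 10-member limit:

```python
# Illustrative port count for a 10-member virtual-chassis leaf-and-spine fabric.
# All per-switch numbers below are assumptions, not Juniper specifications.
MAX_MEMBERS = 10          # Virtual Chassis member limit
SPINES = 2                # members acting as spine switches
LEAVES = MAX_MEMBERS - SPINES
UPLINKS_PER_LEAF = 2      # one uplink to each spine switch
LEAF_PORTS = 48           # assumed 48-port leaf switch

server_ports = LEAVES * (LEAF_PORTS - UPLINKS_PER_LEAF)
print(server_ports)       # 8 leaves x 46 usable ports = 368 server-facing ports
```

Even with conservative assumptions you end up well past 300 server-facing ports, which covers most enterprise data centers.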

Unfortunately, I never got a definitive “yes, this architecture is supported” answer from Juniper. Merging the spine switches in a virtual chassis and keeping the leaf switches independent is obviously a supported configuration, but then we don’t get the benefits of a single configuration/management entity that I wanted to have. It seems the switches in a virtual chassis are still supposed to be connected in a linear or ring topology (even the latest Junos 13.2 release notes hint at that), which definitely sucks in a data center environment.

The Virtual Chassis Fabric technology removes all the inter-switch connectivity limitations – you can connect the switches in any way you wish (according to a recent Juniper blog post) and even though monkey designs might look enticing, I would stick with the more traditional leaf-and-spine architecture.
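To give a flavor of the single-configuration-entity benefit: a preprovisioned virtual chassis pins member roles to serial numbers, so the spine switches become routing engines and the leaf switches become line cards of one logical device. The snippet below is a minimal sketch in that style (serial numbers are placeholders; consult Juniper's VCF documentation for the supported member counts and modes):

```
set virtual-chassis preprovisioned
set virtual-chassis member 0 role routing-engine serial-number SPINE-SN-1
set virtual-chassis member 1 role routing-engine serial-number SPINE-SN-2
set virtual-chassis member 2 role line-card serial-number LEAF-SN-1
set virtual-chassis member 3 role line-card serial-number LEAF-SN-2
```

Once the members join, you manage ports, VLANs and routing on all of them through a single configuration and a single management IP address.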

How relevant is Virtual Chassis Fabric?

The same blog post claims (and I’ve seen similar claims in other Juniper documents) you’ll be able to connect all QFX-series switches in a Virtual Chassis Fabric (the prerequisite software is not yet available), removing the major objection Juniper was facing with QFabric (including the scale-down version).

You can buy individual switches as you need them, merge them in a fabric whenever you feel the need to reduce the management burden, and there’s no need to buy dedicated fabric components when you start the journey.

More information

You might find these webinars valuable if you’re building a new data center fabric:

  • Data Center Fabrics webinar describes hardware and software features of data center switching products from all major vendors;
  • Clos Fabrics Explained webinar has in-depth design and deployment guidelines that will come in handy once you start building a leaf-and-spine fabric;
  • Data Center 3.0 for Networking Engineers webinar introduces you to a wide variety of data center-related topics, going all the way from application architectures to fabrics, storage and lossless transport.

Finally, if you need design help or a quick review of your design, check out my ExpertExpress consulting service.

4 comments:

  1. Ivan, thanks for picking up the Juniper blog post. You can also connect EX4300 switches for 1GE access in a Virtual Chassis Fabric besides all the QFX switches.

    Salman Zahid
    Replies
    1. You can do this, but the entire VCF then gets the MAC limit and various other limits of the 4300
    2. one of them being no vxlan ;(
  2. Interesting. So now there's no dedicated interconnect nodes, no external control plane and you can start a fabric with two switches back-to-back and grow as you need... Seems a lot like Brocade VCS, only 3 years later and limited to 20 nodes, with no modular chassis building block possible...

    Anyway, a few questions: what does it use internally to provide L2 ECMP? And how does it resolve MLAG for external devices? Can external devices terminate a LAG in more than two nodes within the VCF? How many? Any nodes?