The Cost of Networking Hardware (and Disaggregation)
Eyvonne Sharp wrote an interesting blog post describing the challenges Cisco might have integrating the Viptela acquisition, particularly the fact that Viptela has a software solution running on low-cost hardware.
Guess what… Cisco IOS also runs on low-cost hardware, it’s just that Cisco routers are sold as a software+hardware bundle masquerading as expensive hardware.
As I explained in the whitebox switching part of my SDN 101 webinar, networking companies prefer to sell software bundled with hardware while pretending they’re selling you awesomesauce-based hardware (as opposed to Oracle & co who are happy to sell expensive software that runs on whatever hardware you have the leftover budget for).
Some of the reasons are historical: in the early days of networking you had to optimize the hardware and tightly couple software to the underlying hardware (even if it was only a generic computer in disguise) to get the best possible performance. However, ever since the days of the early Cisco PIX, networking software has really been running on commodity hardware.
Don’t get me wrong. There’s a significant difference between an x86 server called a router and a cheap x86 server you can buy on eBay. After all, the box called a router has to survive in a dusty rack with no ventilation, let alone air conditioning, for a decade. Still, in many cases it’s just an x86 server.
The “real” reason networking vendors continue to use this charade is probably the habits and psychology of selling networking gear: customers believe they’re buying unicorn-based expensive hardware, whereas in fact they’re really buying the zillions of man-years invested in software development. The vendor sales (oops, account) teams are so used to that mentality that they have a real problem selling anything but boxes. Did you ever see them add some services for free just to get the deal? If you did, what does that tell you?
Finally, please note I’m not picking on Cisco. Everyone was using the same business model, and while everyone is moving away from it, most vendors do so at glacial speed. Oh, and F5 charges you even more for a VM-based product than for an equally fast hardware product.
To summarize: can we please stop talking about low-cost or expensive hardware and focus on what really matters: the total acquisition cost and total cost of ownership?
Want to know more about software/hardware disaggregation? Russ White and Shawn Zandi did a great job describing the details in Open Networking webinar, and you’ll find even more materials in the Building Next-Generation Data Center online course.
Also because some customers prefer to spend money on something they can kick.
Even sub-10-kEUR routers may have custom ASICs inside doing the forwarding. Bigger routers in particular are not just generic boxes; they have distributed processing in the line cards. The high-end interfaces aren’t generic hardware either.
High-accuracy time synchronization is increasingly required, and it also relies on special time-stamping hardware in the interface cards.
A generic x86 server is far from matching these capabilities. Of course, for most SMB companies an x86 server might be more than enough, but large enterprises and service providers need specialized hardware architectures that a generic x86 server cannot provide.
However, white-boxing for data plane devices is possible and could lead to a better cost structure...
All routers did CPU-based packet forwarding until the high-end AGS+, Cisco 7000, and ASRs (I'm not going into high-speed service provider gear). Most routers (including all the ISRs) still do.
On the Enterprise side, the story is very similar: the reuse of merchant silicon line card architectures can be seen from 1 RU to 10 RU platforms, and in many cases the same hardware is reused in SP platforms with different operating systems.
The simple fact is that the Open Compute Networking products have the ability to perform as well as most Vendor specific gear and it won't be long before some of those manufacturers will look to expand those platforms into chassis architectures.
This is what is forcing change in the large Vendors, and why dis-aggregation is important for them to change cultures both at the customer side and the internal sales/services side of the business. For some vendors that change is going to take a long time, and there will likely be casualties along the way.
From my perspective, for some Vendors in this space the change is identical to what IBM had to go through when moving from being a hardware vendor to a Software and Solutions company.
If you see vendors that don't have large services organisations, you might question how diversified they are in order to manage that transition.
In a dis-aggregated world, Services, Testing and Compliance become some of the most critical and expensive deliverables.
Do not forget that it is quite expensive to hire hard real-time software experts. Your average Linux or Windows programmer has no idea about hard real-time systems. How many programmers are available for operating systems like QNX or VxWorks? Very few...
But for other vendors, as Ivan stated with F5: why pay the premium if the hardware is just off-the-shelf and the virtual appliance is close enough in terms of performance? Same with Bluecoat etc.
But back to Cisco: that was the Cisco premium customers were willing to pay for – their software baked into their hardware for optimal performance, and one company to beat on for recourse if it didn't perform well.
Well, maybe a few of the bad vendors are like the proverbial car-lot salesman and you don't know what's under the hood. Bring your mechanic.
You may expect that over time, all the network vendors will need to start providing certified and compliance tested revisions against specific Open Compute Network chassis.
The "one throat to choke" offering will likely continue and be one of the main cornerstones of the value proposition of these vendors. Unlike Cloud and Application Services, there continues to be a requirement for on-premises network infrastructure, at both the SP and Enterprise level. That demand will continue to grow for most Tier 2 and below SPs. The infrastructure equipment is not their core business, and there is limited value in maintaining internal expertise in dis-aggregated software/hardware support lifecycles. The sheer lack of available resources will limit this to Tier 1 SP customers and a few selected Global Enterprises, mainly in the OTT and IT businesses, that need economies of scale.
It's also likely that those chassis may ship with Cisco/Juniper/Ericsson etc badges on them and be spared and distributed by the same supply channels.
This is in effect one of the core values of the multinational network equipment vendor, the ability to spare and support a hardware device almost anywhere in the world.
This is something that the white box equipment vendors don't appear to be that interested in, since they don't gain a lot from the process. The distribution of professional services and the hardware distribution chain have been heavily tied together. That may change over 2nd and 3rd generation cycles as those whitebox manufacturers look to grow and expand their offerings.
They incidentally made the worst router in history, the 6611.
www.fpgadeveloper.com/2014/03/comparison-of-7-series-fpga-boards-for-pcie.html
There is considerable cost to taking monolithic, single-thread/core software and making it scale across low-cost, multi-core platforms. Intel's DPDK is a great help, but there are still challenges in parallelizing network services like stateful connection tracking, in-order delivery, deep packet inspection, fragmentation and reassembly, shaping, encryption, and so on. Plus, the software must take advantage of any available hardware offloads (SR-IOV, TCP RSS, Crypto, VMXNET3, CRC/checksum).
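The flow-sharding idea behind RSS that makes stateful services parallelizable can be sketched in a few lines: hash the 5-tuple so every packet of a flow lands on the same worker, which keeps per-flow state lock-free and preserves per-flow packet order. This is an illustrative sketch only (the `flow_hash` and `worker_for` helpers are hypothetical, and CRC32 stands in for the Toeplitz hash real NICs use); it is not the DPDK API.

```python
# Illustrative sketch of RSS-style flow sharding (not the DPDK API).
# All packets of one flow hash to the same worker, so connection-tracking
# state needs no locks and per-flow ordering is preserved.
import zlib
from collections import namedtuple

Packet = namedtuple("Packet", "src_ip dst_ip src_port dst_port proto")

NUM_WORKERS = 4  # e.g. one worker per RX queue / CPU core

def flow_hash(pkt):
    # Real NICs use a Toeplitz hash over the 5-tuple; CRC32 is a stand-in.
    key = f"{pkt.src_ip}|{pkt.dst_ip}|{pkt.src_port}|{pkt.dst_port}|{pkt.proto}"
    return zlib.crc32(key.encode())

def worker_for(pkt):
    # Map the flow hash onto a worker (queue) index.
    return flow_hash(pkt) % NUM_WORKERS

# Two packets of the same TCP flow always land on the same worker:
p1 = Packet("10.0.0.1", "10.0.0.2", 12345, 443, "tcp")
p2 = Packet("10.0.0.1", "10.0.0.2", 12345, 443, "tcp")
print(worker_for(p1) == worker_for(p2))
```

Note that this simple scheme breaks down for exactly the services listed above: fragments lack the L4 ports, and reassembly or reordering across flows still needs cross-worker coordination.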
Of course, it's worth the effort. Personally, I find it amazing that the exact same Cisco IOS-XE source code runs so well on hardware ASICs (ASR 1000), Intel CPUs (ISR 4000), and cloud (CSR 1000v) platforms.