Douglas Gourlay speaks out



Fabrics are faster: Arista vs. Brocade vs. Juniper

Simply put, from a 'Fabrics are Faster' perspective, the claim doesn't hold up.

Sat, 1/21/12 - 2:55am    View comments

I'm Douglas Gourlay, Vice President of Marketing for Arista Networks, a network switch vendor focused on the data center, high-performance computing and high-frequency trading markets.

Recently I was fortunate enough to be in a debate with some industry peers around the topic of data center network design.

The customer requirement was for building a new set of data centers, implementing broad-reaching virtualization where the workloads supported it, and of course, creating a stable and efficient operating environment. Three of the four vendors who presented talked extensively about their approach to a 'data center fabric.' I didn't use the term 'fabric' to describe my recommended data center networking architecture; I instead used the apparently boring and unfashionable term - 'network'.

This got me thinking about what a data center 'fabric' actually is, and whether it really differs from a 'network,' as my peer from Brocade suggested. Here are the top points and/or claims I've heard, listed in no particular order.

  1. Fabrics are faster
  2. Fabrics enable large-flat layer 2 broadcast domains
  3. Fabrics deliver a single point of management
  4. Fabrics reduce network tiers and collapse them
  5. Fabrics are more reliable
  6. Fabrics have no packet drops under congestion
  7. Fabrics have linear cost and power scaling
  8. Fabrics deliver any-to-any non-blocking connectivity
  9. Fabrics are auto-provisioned
  10. Fabrics use all links actively
  11. Fabrics are designed for the east-west traffic in a modern data center

I may have missed a few of the more detailed items, but I think the list above captures the more salient points. So let's take each point, examine the claim and the reality, and see how everything holds up. I will take one topic per week, skipping weeks with major holidays when Brad is ice skating and such, since hopefully no one is reading blogs then.

Also, a quick disclaimer:

I work for Arista Networks, and while the views here are mine, I am certain some of the opinions I express will be colored by my exposure to technology developments at Arista. I will absolutely try to keep this as objective as possible, but please bear with me, as this is a rather contentious topic and there are a variety of well-formed opinions that can be expressed.

Thanks in advance for keeping the comments section constructive :)

Claim #1 - Fabrics are Faster than Networks

This is a pretty broad claim, and I think it stems primarily from the Juniper message around requiring only one lookup and the Brocade message on using all active uplinks and tier reduction.

As far as requiring one lookup goes, a QFabric is composed of QFX3500 edge nodes based on first-generation Trident chipsets from Broadcom and a core device based on the HiGig2 chips from Broadcom. It requires five header lookups: one full lookup at the ingress switch, three lookups in the Clos tree within the QF/Interconnect, and a final lookup at the egress switch. Only the first is a full L2/L3 lookup; the subsequent four are port-of-exit header lookups that tell the transit nodes in the QFabric which port to forward the frame to.

The Brocade statement that all uplinks are active in a fabric implies that they are not all active in a network. This was true in flat L2 networks of the past based on STP. But since the late 1990s vendors have delivered proprietary solutions that made all links active and since 2008 blended proprietary/open-standard solutions such as MLAG have become available. MLAG requires that two switches be the same make/model from a vendor, they share learning and state information via a proprietary protocol between the two switches, and then express standards-based LACP to all adjacencies. With MLAG the size of the proprietary system is two switches, with current fabric solutions it's the entire system of switches across the fabric.
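To make the MLAG description concrete, here is a minimal sketch of one MLAG peer in Arista EOS-style syntax (the VLAN, interface numbers, domain name, and addresses are invented for illustration; the second peer mirrors this with the two /30 addresses swapped):

```
vlan 4094
   trunk group mlag-peer

interface Port-Channel10
   description MLAG peer link
   switchport mode trunk
   switchport trunk group mlag-peer

interface Vlan4094
   ip address 10.0.0.1/30

mlag configuration
   domain-id pod1
   local-interface Vlan4094
   peer-address 10.0.0.2
   peer-link Port-Channel10

! A downstream-facing bundle: the 'mlag 20' id pairs this
! port-channel with the same id on the peer, so the two legs
! appear to the attached device as one standards-based LACP bundle.
interface Port-Channel20
   switchport mode trunk
   mlag 20
```

The proprietary part is confined to the peer link between the two switches; everything facing servers and other switches is plain LACP, which is exactly the "proprietary inside, standard outside" trade-off described above.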

The Brocade VCS Fabric has a scaling challenge though - there is nothing in the fabric beyond top-of-rack switches. Yes, you can build a flat L2 forwarding network with what is offered - but not a very large one, nor a very scalable one. The maximum number of switches supported is 24 with NOS 2.1, making this the most limited of the fabric offerings. It is also only capable of forwarding at Layer 2 and uses a completely proprietary control plane for topology construction.

Fact:   For those to whom latency directly correlates with application performance - financial trading, cluster computing/modeling and deep analytics - you can build a 2-tier network with latency as low as 1.5 microseconds using fixed-configuration rack switches, and a very scalable 2-tier network with latency as low as 5 microseconds using a mix of rack and modular switches. The fastest fabrics are a bit slower than that.
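As a back-of-the-envelope check on those numbers, a 2-tier leaf-spine path traverses three switches (ingress leaf, spine, egress leaf), so per-hop latency multiplies out directly. The per-hop figures below are illustrative assumptions, not vendor specifications:

```python
# Rough 2-tier (leaf -> spine -> leaf) switching-latency estimate.
# Per-hop numbers are illustrative assumptions, not measured values.

def path_latency_ns(per_hop_ns, hops=3):
    """Total switching latency for a path that crosses `hops` switches."""
    return per_hop_ns * hops

# Fixed-configuration low-latency rack switches (~500 ns cut-through/hop):
fast_network_ns = path_latency_ns(500)       # 1500 ns = 1.5 microseconds

# Mixed rack + modular design (~1667 ns average per hop):
scalable_network_ns = path_latency_ns(1667)  # ~5001 ns, roughly 5 microseconds

print(fast_network_ns, scalable_network_ns)
```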

Fact:   You can build a 2-tier L2 network based on Multi-Chassis Link Aggregation and have all uplinks active. The main constraint in MLAG scaling is the density and performance of the spine switches - the denser and higher-performance those systems are, the larger the L2 domain you can build - and currently these domains are as large as or larger than the biggest fabrics.
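To see how spine density bounds such a design, note that with a two-switch MLAG spine each leaf consumes one port on each spine, so the leaf count is capped by per-spine port density. The port counts below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical sizing of a 2-spine MLAG design.
# Each leaf uses one uplink to each of the two spine switches,
# so the maximum leaf count equals the port count of one spine.

def mlag_edge_ports(spine_ports_per_switch, leaf_downlinks):
    """Total host-facing edge ports in a 2-spine MLAG leaf-spine design."""
    max_leaves = spine_ports_per_switch      # one spine port per leaf
    return max_leaves * leaf_downlinks

# e.g. a pair of 384-port modular spines, leaves with 48 host-facing ports:
print(mlag_edge_ports(384, 48))  # 18432 edge ports in one flat L2 domain
```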

The other factor that is very important to note here is the need for effective congestion management and buffering in the spine tier. In a multi-stage network with east-west traffic patterns it is quite common for a large number of ports to need to reach resources on another port - the larger the buffers on the spine switches, the longer they can absorb that congestion before they have to drop traffic or issue a PAUSE frame to slow senders down.
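The buffering point can be quantified with simple incast arithmetic: when N senders converge on one egress port, the buffer fills at a net rate of (N - 1) times line rate. All figures below are hypothetical, for illustration only:

```python
# Time for an egress buffer to fill under N:1 incast congestion.
# Buffer sizes and rates are hypothetical illustrations.

def buffer_survival_s(buffer_bytes, senders, line_rate_bps):
    """Seconds before a shared egress buffer overflows when `senders`
    ports all burst at line rate toward a single egress port."""
    net_fill_bps = (senders - 1) * line_rate_bps  # inflow minus egress drain
    return buffer_bytes * 8 / net_fill_bps

# Ten 10 GbE senders into one 10 GbE egress port:
shallow = buffer_survival_s(12e6, 10, 10e9)  # ~12 MB shared ToR buffer: ~1.1 ms
deep = buffer_survival_s(2e9, 10, 10e9)      # ~2 GB deep-buffer spine: ~178 ms

print(shallow, deep)
```

The deeper the spine buffers, the longer a transient burst can persist before loss or PAUSE, which is why the modular large-buffer spine matters for sustained east-west congestion.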

It is conceivable that a vendor could build an MLAG system that makes the spine 'wider' than two switches by intelligently controlling learning and handling all broadcast/unknown/multicast forwarding on a subset of the devices - but no one has implemented this type of system yet, largely because it grossly exceeds 99% of the market's requirements for scale.

Summary:   Networks offer the flexibility to be designed at the absolute lowest latencies if required (trading off modular large-buffer systems for fixed-configuration low-latency systems in the spine is the most effective way of achieving the absolute lowest latency/fastest forwarding). Current network solutions scale as well as or better than current fabric solutions - at L2, and certainly as you introduce routing at L3.

Simply put, from a 'Fabrics are Faster' perspective, the claim doesn't hold up. The fastest fabric is a bit slower than an equally scalable network, and the fastest network has 2-3x lower latency than the fabric alternative.

Story series:

Part 1 - You're viewing it now.

Part 2 - Do Fabrics enable a large and flat Layer-2 broadcast domain?

Related stories:

The Fabric Hype Versus Cloud Networking Simplicity

Are Fabrics Faster or Better Than Networks? Are they the right choice for your next-generation Data Center?

Juniper QFabric - A Perspective on Scaling Up

The Evolving Network: SDN and Network Fabrics, are we done yet?

Fabric wars: Cisco vs. Brocade vs. Juniper

Comparing Data Centre Fabrics From Juniper, Brocade and Cisco

Related blogs:

Arista Blogs

Douglas Gourlay Blog

Default Gateway Blog




