Does Cloud-Scaling Really Demand a New Ethernet Speed Standard?

Steven Hill

Summary Bullets:

  • 10 Gigabit Ethernet has been around for over a decade, but sales of 10GbE ports are only now beginning to move the needle seriously. This is, in part, because many server vendors are finally offering on-board 10GbE LoM ports.
  • While this type of incremental Ethernet speed advancement might provide benefits at the mega-data center level, it is hard to imagine how the introduction of another set of “open” standards will be of value to the IT industry as a whole, especially given the length of time it takes for these standards to gain acceptance and wide-scale adoption.

On July 1, 2014, a quintet of technology companies – Arista, Broadcom, Google, Mellanox, and Microsoft – announced the formation of the 25G Ethernet Consortium to support a new 25GbE single-lane and 50GbE dual-lane standard targeted at server-to-top-of-rack switching. This interesting approach appears to circumvent the typical standards process of the IEEE, or even the IETF, by declaring that this new standard will be “open” – a word I’m REALLY starting to dislike – “while leveraging many of the same fundamental technologies and behaviors already defined by the IEEE 802.3 standard,” without even bothering to submit it for comment. Well, I suppose close counts.

I certainly understand how a lengthy certification process would hold these fellows back a bit when it comes to the freedom they need to deliver to you, the oppressed and bandwidth-starved customer, all the benefits of a faster new Ethernet standard that it’s your right to enjoy. After all, look how long it took everyone to agree on 10/40/100GbE and all its fancy-schmancy compatibility; surely, a vendor-driven consortium could accomplish that in a fraction of the time. Okay, fine. Standards and sarcasm aside, I still have to question the real-world demand for yet another PHY speed when so many companies are still in transition to 10GbE as I write this.

It is possible there are web-scale production environments today that are actually taxing the performance of a pair of 10GbE links, but I would honestly be interested in hearing how many regular, everyday customers are even coming close to saturating their 10GbE ports; that is, if they have them at all. And if they are, is it really more efficient to develop a completely new 25/50GbE network specification when 10/40/100GbE is mature and pretty much universally available from every networking vendor? Perhaps it might just be easier to add more 10GbE ports or even jump to 40GbE, rather than replacing large segments of the network to accommodate yet another performance specification. We have been doing that for years with 1GbE, packing servers with four, six, or even eight Gigabit ports, and no one even once mentioned that we should look at a 2.5G or even 5G standard as a stopgap until 10GbE was ready… because Ethernet didn’t roll that way.
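The port math behind that argument is worth making explicit. Here is a quick back-of-the-envelope sketch – plain arithmetic, not a benchmark, and real-world bonded throughput depends on flow hashing and protocol overhead – comparing the aggregate line rates of the multi-port layouts mentioned above:

```python
# Back-of-the-envelope aggregate line rate (Gb/s) for common NIC layouts.
# Pure arithmetic: actual bonded throughput depends on hashing, flow
# distribution, and protocol overhead.

def aggregate_gbps(ports: int, speed_gbps: int) -> int:
    """Nominal aggregate line rate for a group of identical bonded ports."""
    return ports * speed_gbps

# The 1GbE era: packing servers with Gigabit ports instead of waiting for 10GbE.
quad_gige = aggregate_gbps(4, 1)    # four Gigabit ports
octo_gige = aggregate_gbps(8, 1)    # eight Gigabit ports

# Today's equivalents: more 10GbE ports vs. the proposed 25/50GbE options.
dual_10g = aggregate_gbps(2, 10)    # a typical 10GbE server pair
quad_10g = aggregate_gbps(4, 10)    # matches a single 40GbE uplink
dual_25g = aggregate_gbps(2, 25)    # the consortium's dual-lane 50GbE target

print(quad_gige, octo_gige, dual_10g, quad_10g, dual_25g)
```

As the numbers suggest, a pair of bonded 10GbE ports already gets you most of the way to a single 25G lane, which is why "just add ports" has historically been the cheaper stopgap.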

I admit to being suspicious when they broke the base-10 chain with a 40GbE option. There was something vaguely comforting about the 10/100/1000/10000 progression that Ethernet had going on, but at the same time, I also felt that something inexpensive and faster was needed to aggregate 10GbE ports. 40GbE was an easy solution that only required binding four existing 10G PHYs into a common link – something any vendor could do without a major redesign of the underlying hardware. Clean, simple, and now pretty much available at any density from any vendor, 40G appears to be an easy option for applications where 10G to the server is not sufficient – that is, if those applications actually do exist in the wild.
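The lane-bundling pattern described above can be laid out in a small table. This is an illustrative sketch of the nominal lane arithmetic: 40GbE binds four of the existing 10G lanes, 100GbE binds four 25G lanes, and the consortium's proposal is essentially one or two of those same 25G lanes on their own:

```python
# Ethernet speeds as lane bundles: each standard binds N serial lanes at a
# given per-lane signaling rate. 40GbE reuses the mature 10G lane four-wide;
# the consortium's 25/50GbE proposal reuses the 25G-per-lane technology
# developed for 100GbE, one or two lanes at a time.

LANE_CONFIGS = {
    "10GbE":  (1, 10),   # single 10G lane
    "40GbE":  (4, 10),   # four 10G lanes bound into one link
    "100GbE": (4, 25),   # four 25G lanes
    "25GbE":  (1, 25),   # proposed: one 25G lane
    "50GbE":  (2, 25),   # proposed: two 25G lanes
}

for name, (lanes, per_lane) in LANE_CONFIGS.items():
    print(f"{name}: {lanes} x {per_lane}G = {lanes * per_lane}G")
```

Seen this way, 25/50GbE is less a brand-new speed than a re-slicing of the 100GbE lane technology, which is precisely why a vendor consortium believes it can move faster than the IEEE process did.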

I have to acknowledge that the economics are different in the mega-data centers, but they seem to do whatever they want without regard for what might be best for IT in general. They certainly do not buy off-the-rack hardware, and if they want a 25/50GbE standard, they will just go to their ODM partners and have them custom-make the switches and motherboards with on-board networking ASICs that best suit their unique workloads. Unfortunately, most of the IT world does not have the same performance requirements or purchasing power, so the idea of changing the entire networking landscape to accommodate the biggest 5% seems somewhat suspect to me. But that’s just me; what do you think?

About Steven Hill
Steven is responsible for covering the emerging technologies that are remapping the traditional data center landscape. These include software and hardware products that are needed to support public and private cloud infrastructures, as well as the underlying virtualization and orchestration technology required to enable process automation and workload management.
