Steven is responsible for covering the emerging technologies that are remapping the traditional data center landscape. These include software and hardware products that are needed to support public and private cloud infrastructures, as well as the underlying virtualization and orchestration technology required to enable process automation and workload management.
There’s no question that SDN can solve a number of problems facing the next generation of network administrators, but those problems may not be as earth-shaking as vendors would like you to believe.
SDN solutions, regardless of vendor, have been facing slow adoption, perhaps because the pain points they address may only be minor irritations to the typical enterprise today.
Those of us who have been involved in the IT industry for more than a decade or two have seen some pretty substantial changes to the fundamental way we get things done. Major tectonic shifts such as graphical user interfaces and server virtualization have reshaped the way we build our infrastructure by abstracting away all the underlying minutiae it actually takes to get something done. As a rule, however, those types of changes affect a large set of technology users on a much grander scale, delivering efficiencies that are easily measurable. SDN is different: the operation of the network itself is the responsibility of a very small group of individuals charged with keeping the system up and running to whatever number of ‘nines’ your business requires, so whatever benefits SDN may actually provide will directly affect only them… with the hope that the rest of the benefits will appear in the form of improved network performance that may or may not be apparent to the end user. Continue reading “Real World to SDN Vendors: Give Us a Break, Already”→
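As an aside, the ‘nines’ shorthand maps directly to a permitted-downtime budget, and the arithmetic is simple enough to sketch. The function name below is my own, purely for illustration:

```python
# Convert a number of "nines" of availability into the maximum
# downtime permitted per year. For example, three nines (99.9%)
# allows roughly 8.8 hours of downtime annually.
def max_downtime_minutes_per_year(nines: int) -> float:
    availability = 1 - 10 ** -nines          # e.g. 3 nines -> 0.999
    minutes_per_year = 365 * 24 * 60         # 525,600 minutes
    return (1 - availability) * minutes_per_year

for n in range(2, 6):
    print(f"{n} nines: {max_downtime_minutes_per_year(n):,.2f} minutes/year")
```

Each additional nine cuts the allowed outage window by a factor of ten, which is why five-nines targets drive so much of the operational paranoia in that small group of network admins.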
10 Gigabit Ethernet has been around for over a decade, but 10GbE port sales are only now beginning to move the needle seriously. This is, in part, because many server vendors are finally offering on-board 10GbE LoM ports.
While this type of incremental Ethernet speed advancement might provide benefits at the mega-data center level, it is hard to imagine how the introduction of another set of “open” standards will be of value to the IT industry as a whole, especially given the length of time it takes for these standards to gain acceptance and wide-scale adoption.
On July 1, 2014, a quintet of technology companies – Arista, Broadcom, Google, Mellanox, and Microsoft – announced the formation of the 25G Ethernet Consortium to support a new 25GbE single-lane and 50GbE dual-lane standard targeted at server to top-of-rack switching. This interesting approach appears to circumvent the typical standards process of the IEEE, or even the IETF, by saying that this new standard will be “open” – a word I’m REALLY starting to dislike – “while leveraging many of the same fundamental technologies and behaviors already defined by the IEEE 802.3 standard,” without even bothering to submit it for comment. Well, I suppose close counts. Continue reading “Does Cloud-Scaling Really Demand a New Ethernet Speed Standard?”→
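For context, the lane arithmetic behind these speed grades is straightforward: an aggregate link speed is just the per-lane signaling rate multiplied by the lane count, which is how 40GbE was built from four 10G lanes and how the consortium gets 50GbE from two 25G lanes. A minimal sketch (the function name is mine, for illustration):

```python
# Aggregate Ethernet link speed from per-lane rate and lane count.
def link_speed_gbps(lane_rate_gbps: float, lanes: int) -> float:
    return lane_rate_gbps * lanes

print(link_speed_gbps(25, 1))   # 25GbE single-lane server link
print(link_speed_gbps(25, 2))   # 50GbE dual-lane link
print(link_speed_gbps(10, 4))   # 40GbE, four 10G lanes (IEEE 802.3ba)
```

The consortium’s pitch is essentially that a single 25G lane is a more economical server-attach point than four 10G lanes, since it uses one SerDes instead of four.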
Interest in SDN is growing in the APAC market, but its impact is first being felt among the carrier community.
SDN offers the internal efficiency needed for APAC carriers to provide more dynamic and flexible network options to their customers, but it is their data center customers who reap the benefits.
I just returned from a long trip to Hong Kong and Singapore, where I chaired some sessions of the 2014 SDN & OpenFlow Asia-Pacific Congress and had the opportunity to speak at the ONF Workshop that ran the day before the Congress. Aside from the truly remarkable hospitality I enjoyed in both cities, the thing I found most interesting was the preponderance of carrier companies in attendance at these events. This makes a ton of sense, considering the somewhat different nature of the APAC market and how the network infrastructure is changing for them. Continue reading “Dynamic WAN Bandwidth Provisioning Powered by Pacnet’s SDN Impresses”→
Flash technology has proven itself in the role of the high-performance pinnacle of the storage stack, but a few companies are looking to redefine it to accommodate the growing number of large, in-memory applications.
Upcoming offerings from several vendors are looking to move flash even closer to the compute layer, and this has the potential to remap the traditional CPU/memory architecture in a server near you.
I’ve had the pleasure of attending a number of technology conferences recently, and aside from “hybrid cloud” the one message that crosses over all of them is the overwhelming interest in flash memory technology and how it can improve the performance and flexibility of your data center. We all know that flash isn’t really all that new; flash-based SSDs and memory cards have become the de facto storage platform for almost every portable device. But adopting flash for enterprise-level applications has taken a little longer, as vendors scrambled to ensure that it truly meets the reliability, performance, and cost structure expected for high-performance, high-duty-cycle applications. The obvious first use cases came from storage vendors, who found that SSD technology and its form factor were a perfect match for a caching layer at the very top of the storage stack in the conventional NAS/SAN model. There were also a few forward-looking vendors like Fusion-IO and OCZ Technologies that chose to decouple flash from the storage bus and attach it directly to the PCIe bus. Continue reading “Flash, Flash, Flash, Flash…Oh, and Did We Mention Flash?”→
• Assuming that you can simply combine two important job functions into a single entity isn’t necessarily the best or smartest way of managing IT resources.
• Your environment may need a lot of work before you can effectively cross that line.
As IT professionals we’re constantly challenged to do more with less, and no one can argue that all of the wonderful flexibility offered by virtualization hasn’t fundamentally changed the nature of the data center in a remarkably short period of time. But simplifying the physical concerns of standing up servers and applications doesn’t necessarily mean that you can simply merge developer and operations functions into a single entity with a unified purpose. This is an evolutionary process, and — because bean counters are always looking for things like this to thin head counts — smart IT managers might want to head this off until they’ve taken an honest look at their environment. Continue reading “In Search of the Rare and Elusive DevOps Beastie”→
Tech vendors have been sending mixed and confusing messages to consumers about the cloud at every level.
Surely, there must be a better way of marketing cloud offerings that separates consumer cloud services from IT cloud initiatives.
My position on the cloud from day one has had nothing to do with hyping the ‘wonder’ of the cloud, focusing instead on the broader philosophical changes that cloud technologies bring to IT in general: for example, the elegance and simplicity of choosing whatever resources you need from a tidy little menu and having them available on demand. Reducing spin-up times from weeks to minutes — now that’s magic. I come from a computing background of building everything from scratch, because I am an old nerd and that was pretty much the only way to get things done, so I cannot help but be a rabid fan of elegance and simplicity. Of course, that degree of simplicity always has a price, and we are paying it now in the form of competing standards and inconsistent terminology, making it difficult for IT managers and consumers alike to understand their options. Instead of making the conversation easier, this makes it more complex. Continue reading “The Cloud Is Out There… But Is It the Same to Everybody?”→