Mike is Service Director for the Current Analysis Business Technology and Software service. Mike and his analyst team monitor and evaluate activities in the markets for Application Platforms, Collaboration Platforms, Data Center Technology, Enterprise Mobility Technology, Enterprise Networking, Enterprise Security, and Unified Communications and Contact Centers. Additionally, Mike reports on major technological, strategic, and tactical developments of companies that provide networking solutions deployed on-premises to support enterprise business operations.
While software-defined networking (SDN) offers a means to normalize and simplify/automate a portion of the switch infrastructure configuration, work remains on policy deployment and security.
The deployment and integration of management software has been a severe pain point for enterprise infrastructure, with cloud services only compounding the issue.
Historically, enterprises either “built their own” management tools for script control and device configuration, used vendor-specific element managers and resigned themselves to running many different platforms (usually one per vendor), or attempted to integrate some of each of these into a framework from one of the major management platform vendors (HP OpenView, Computer Associates, BMC, IBM Tivoli, etc.). Cloud applications have only compounded the issue, demanding experience management (centrally managed QoS, security policies, and of course, bandwidth assurance) across a virtual infrastructure that brings its own challenges. Part of the answer lies with SDN, particularly the OpenFlow initiative, which promises a common management framework across a multi-vendor infrastructure, enabling consistent policy deployment (QoS, security, etc.) and configurations. However, vendor-agnostic orchestration and automated deployment of an end-to-end cloud experience, whether application-driven or organization-dictated, is still gelling and remains largely the purview of startup vendors. There has been significant venture interest in this space in the last few years (VMTurbo, Puppet Labs, and Joyent, to name just a few of the many). These start-ups will also be competing to some degree with the larger framework players mentioned above, as they, too, seek to address this growing need. Continue reading “Cloud Computing Fabric Capacity May Be Here (Debatable), But Is It Manageable?”→
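To illustrate the normalization problem such a common framework would address, here is a small hypothetical sketch: one abstract QoS policy rendered into two different vendor-flavored configurations. The “vendor” command syntaxes below are invented for illustration and do not correspond to any real CLI.

```python
# Sketch of the "common management framework" idea: a single abstract
# QoS policy translated into per-vendor configuration. The vendor
# syntaxes are invented placeholders, not real device commands.

POLICY = {"voice_dscp": 46, "voice_bandwidth_pct": 20}

def render(vendor, policy):
    """Translate one abstract policy into a vendor-flavored config list."""
    if vendor == "vendor_a":
        return [f"qos map dscp {policy['voice_dscp']} queue 1",
                f"qos queue 1 bandwidth {policy['voice_bandwidth_pct']}"]
    if vendor == "vendor_b":
        return [f"set class-of-service dscp {policy['voice_dscp']}",
                f"set scheduler voice rate {policy['voice_bandwidth_pct']}%"]
    raise ValueError(f"no driver for {vendor}")

# One policy, deployed consistently across a multi-vendor fleet.
for vendor in ("vendor_a", "vendor_b"):
    print("\n".join(render(vendor, POLICY)))
```

The point of the sketch is that the policy lives in one place; only the last-mile rendering differs per vendor, which is exactly the translation burden a standard such as OpenFlow aims to remove.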
The combination of the “bring your own device” (BYOD) phenomenon and higher-speed WLAN access will further exacerbate IT challenges.
Processing power on the next generation of tablets and phones will change the paradigm for how enterprise users interact with their applications.
If you have not yet heard, the IEEE’s 802.11ac wireless LAN standard is imminent, and while ratification may still be up to a year away, this will not stop product developers from taking advantage of early-release chips in the hyper-competitive consumer space. However, this also matters to the enterprise IT department, as some of these devices are incredibly powerful, such as the ASUS Transformer Prime, the first quad-core tablet (though it does not have an 802.11ac radio). There is also speculation that the iPad 3 will possess a comparably powerful chip. This processing power opens up new potential for malicious damage via rogue security software (or ‘rogueware’). Still, with the advent of this much faster WLAN specification (speeds up to 1.3 Gbps will be possible), we may also see radical changes in the ways users access applications and interact with these resources. Consider that these new tablets, phones, etc. will have processing power surpassing the desktops of just a couple of years ago, along with roughly a gigabit of mobile throughput. Together, this throughput and CPU performance should remove the performance limitations on VDI and alter the ways in which we interact with critical applications. Continue reading “One Gigabit WLAN Speeds Coming Sooner Than You Think; Is Your IT Shop Ready?”→
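As a rough illustration of what the jump to 1.3 Gbps could mean in practice, consider a quick transfer-time comparison. The PHY rates come from the respective standards, but the 50% efficiency factor and the file size are assumptions chosen purely for illustration.

```python
# Back-of-envelope transfer times at 802.11n vs. 802.11ac link rates.
# The 50% MAC efficiency factor and the file size are illustrative
# assumptions, not measured figures for any particular product.

EFFICIENCY = 0.5          # assumed fraction of PHY rate usable as goodput
FILE_SIZE_MB = 700        # e.g., a large video file or VDI image

def transfer_seconds(phy_rate_mbps, file_mb=FILE_SIZE_MB):
    """Seconds to move file_mb megabytes at the given PHY rate."""
    goodput_mbps = phy_rate_mbps * EFFICIENCY
    return file_mb * 8 / goodput_mbps

for name, rate in (("802.11n (3-stream)", 450), ("802.11ac (wave 1)", 1300)):
    print(f"{name}: ~{transfer_seconds(rate):.0f} s for {FILE_SIZE_MB} MB")
```

Even with conservative efficiency assumptions, the transfer time drops by roughly a factor of three, which is the kind of headroom that makes bandwidth-hungry use cases like VDI and mobile video plausible over the air.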
Mobility will continue to drive enterprise network spend aggressively in the campus.
Data center fabrics will further coalesce, and interoperability between carrier and private networks will improve (performance and simplicity).
Last year saw a number of new initiatives and exciting developments drive the network industry forward. WLAN growth was enormous, due to both the proliferation of wireless-enabled devices being brought into the enterprise and the sheer economics of not having to re-cable a plant or campus for increased productivity. This year will only see this investment increase, and the WLAN sector will benefit from it. Look for Aruba, Cisco, HP, Motorola, and others to articulate and differentiate based on their existing market share and technology breadth. As device proliferation accelerates and more enterprise applications become tablet-friendly, enterprises will begin to establish processes, and some will even go so far as to provide company-sponsored tablets they can control and administer directly. Who purchases the tablet matters less than the additional WLAN traffic these devices will generate. If your architecture is more than two or three years old, it is time to evaluate and plan for traffic contingencies to ensure that productivity applications will have sufficient bandwidth to perform well. Those applications will include VDI, mobile video conferencing, and new and creative use cases for bi-directional multimedia (multi-site classrooms, multi-site faculty participation, etc.). Continue reading “If You Thought 2011 Was Exciting for Networks, Wait Until You Get a Load of 2012”→
Cloud-based applications and controllers reduce complexity and move costs from CapEx to OpEx.
Cloud-based control and management are likely to provide one of the most secure job roles for the next decade.
Since the beginning of the “cloud era,” new use cases for applications have been created nearly overnight. The cloud has transformed the application hosting market and created new IT service juggernauts (Salesforce, Amazon, etc.). However, an area seeing increased attention from both established vendors and venture-backed start-ups is the hosting of infrastructure itself within the cloud. I am referring to wireless LAN controllers, security gateways, and other technologies that were traditionally appliance-based and located on-premises nearly 100% of the time. By moving these functions to the cloud, IT achieves a range of benefits: fewer assets committed in the data center (if they host offsite), a greatly simplified support model, fewer “truck rolls,” and less hardware required on-premises. Continue reading “How Much IT Can We Host in a Cloud?”→
You do not leave your lights on when you are not at home. Why leave your network on when it is not in use?
Standards and products make possible 30-80% reductions in power used by network switches when idle.
Every port, every device, every wire plugged in consumes some trace amount of power. While this has been understood for decades, only in the last five to ten years have vendors taken a close look at what can be done about it. Initially, there were proposals and early standards that made rough attempts at efficiency curves and requirements, somewhat like ‘Energy Star’ for home appliances. In the last year, however, a standard called Energy-Efficient Ethernet (EEE) has been finalized, and vendors have begun releasing products based on it. In essence, the standard enables a switch port to go dormant and consume very little power, listening only for a trigger signal that indicates data is about to arrive on the wire. Few offices are occupied 24×7, and it makes little sense to keep the switch side of a port fully powered while it simply waits for a transmission. Some quick napkin math shows that this can yield significant energy savings over the course of a year. Cisco, Juniper, HP, Brocade, Broadcom, and others support EEE on some platforms.
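That napkin math is simple enough to sketch. All power figures below are illustrative assumptions, not vendor specifications; plug in your own per-port numbers and electricity rates to see what EEE could mean for your plant.

```python
# Napkin math for Energy-Efficient Ethernet (EEE) savings on one switch.
# The per-port wattages are illustrative assumptions, not vendor specs:
# a 1GBASE-T port drawing ~0.5 W active and ~0.1 W in low-power idle.

ACTIVE_WATTS_PER_PORT = 0.5   # assumed draw with the PHY fully awake
IDLE_WATTS_PER_PORT = 0.1     # assumed draw in EEE low-power idle (LPI)
PORTS = 48                    # a typical access switch
IDLE_HOURS_PER_DAY = 16       # office occupied roughly 8 hours a day

def annual_savings_kwh(ports=PORTS, idle_hours=IDLE_HOURS_PER_DAY):
    """kWh saved per year by dropping idle ports into low-power idle."""
    watts_saved = ports * (ACTIVE_WATTS_PER_PORT - IDLE_WATTS_PER_PORT)
    return watts_saved * idle_hours * 365 / 1000.0

print(f"~{annual_savings_kwh():.0f} kWh saved per switch per year")
```

Roughly 112 kWh per switch per year under these assumptions; multiplied across a closet full of access switches, the savings become material, which is the point the vendors are making.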
Since the advent of networking, customers have always weighed the cost of throughput vs. the effort of traffic optimization, creating a pendulum effect.
The market has reached a point where both sophisticated traffic management and performance are required.
In the modern enterprise, the average IT manager has many goals, but a few in particular have been coming up frequently: alignment of the function with business needs (IT acting as a business partner); agile application and solutions deployment; and an infrastructure that will scale and grow with the customer over time.
OpenFlow will move from the academic to the commercial in the next 24 months.
Vendors’ perception of OpenFlow will determine whether they resist or embrace the technology.
From its beginnings as a research project at Stanford to becoming a technology movement with start-ups building businesses around it, OpenFlow has emerged as a topic of discussion in many networking circles today. OpenFlow is essentially a proposal to add a hook into an existing network device so that control and forwarding actions can be centrally managed off-device and then implemented identically across all devices in the network. Whether this control involves table replication, routing actions, security policies, or even access control list (ACL) population, all are possible with the OpenFlow architecture. Consider that a device could merely execute packet-handling directions rather than having to compute each forwarding decision itself. This, in turn, could radically reduce the processing requirements on the device, in addition to enabling consistent policy application across a very large number of devices (theoretically, tens to hundreds of thousands), which would vastly simplify management. OpenFlow also lets the operator integrate intelligence into the network, executing and applying QoS and security policies without relying on the network device’s operating system or application awareness. While some vendors offer devices with this ability today on a select portion of their portfolio, exceptionally few environments are 100% standardized on a single vendor’s latest generation of products. Continue reading “OpenFlow: Distributed Network Nirvana or Academic Science Project”→
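To make the match/action idea concrete, here is a minimal, self-contained Python sketch of a controller installing identical flow entries on multiple switches. The class names and prefix-matching logic are deliberate simplifications for illustration; they are not the actual OpenFlow protocol or wire format.

```python
# A minimal sketch of the OpenFlow model described above: a central
# controller computes match/action flow entries once and installs the
# identical table on every switch, which then merely executes lookups.
# Names and matching logic are illustrative, not the OpenFlow wire format.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Match:
    dst_prefix: str          # simplistic destination-prefix match

@dataclass
class FlowEntry:
    match: Match
    action: str              # e.g. "forward:port2", "drop" (ACL-style)

@dataclass
class Switch:
    name: str
    table: list = field(default_factory=list)

    def install(self, entries):
        """Controller pushes flow rules down to the device."""
        self.table = list(entries)

    def handle(self, dst_ip):
        """Device merely executes directions; no local decision-making."""
        for entry in self.table:
            if dst_ip.startswith(entry.match.dst_prefix):
                return entry.action
        return "send-to-controller"  # table miss: punt to the controller

# The controller computes policy once...
policy = [
    FlowEntry(Match("10.1."), "forward:port2"),
    FlowEntry(Match("192.168."), "drop"),   # an ACL pushed as a flow entry
]
# ...and deploys it identically across the fleet, vendor-agnostically.
fleet = [Switch("edge-1"), Switch("core-1")]
for sw in fleet:
    sw.install(policy)

print(fleet[0].handle("10.1.4.7"))      # forward:port2
print(fleet[1].handle("192.168.0.9"))   # drop
print(fleet[0].handle("172.16.0.1"))    # send-to-controller
```

Note how the switch's `handle` method does no reasoning of its own: the "table miss" case, where an unmatched packet is punted back to the controller, is precisely the mechanism that lets the central brain learn about new traffic and push new rules everywhere at once.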
Data center networking technologies are moving at a pace that few enterprises can keep up with.
The networking provider of choice will impact cloud deployment plans and virtualization scale – so choose wisely.
Few would dispute that data center networking has seen more change in the last 24 months than the enterprise campus has in the last ten years. This does not diminish the value of the campus, but rather highlights the explosion of standards and technology inside the data center. Topics of debate and battlegrounds for vendor differentiation range from port speed and scale (1G to 100G) to protocol support and network virtualization. Several standards in particular will have significant impact on an enterprise’s network architecture of choice: SPB and TRILL (competing standards to address spanning tree limitations), FCoE and DCB (storage over Ethernet, plus enhancements that benefit iSCSI and existing storage-over-Ethernet traffic), and of course visibility into and management of virtual switches. As several of these are not yet ratified, vendor support can only be gauged by stated intent (versus actual implementation). Continue reading “Data Center Fabrics: Enterprises Often Need a Networking Tailor”→
The bring-your-own-device (BYOD) phenomenon is challenging WLAN bandwidth.
Networks architected for 802.11a/b/g may be limiting worker productivity and therefore efficiency.
A satisfied, network-connected worker is a valuable resource in practically any industry. This has been the reality since wireless LAN (WLAN) technology, or indeed any network technology, was first brought to market. Over time, network service quality improved with technology advancements, client endpoint support grew, and ultimately worker productivity increased. However, this did not happen overnight. The IT department worked hard to deploy 802.11a, then 802.11b/g, and now 802.11n networks to provide this powerful productivity tool. Those in IT also know how painful it was behind the scenes, with early management tools, intermittent radio noise reducing performance, security concerns, and interoperability issues between client radios and access point radios.