- OpenFlow will move from the academic to the commercial in the next 24 months
- Vendors’ perception of OpenFlow will determine whether they resist or embrace the technology
From its beginnings as a research project at Stanford to becoming a technology movement with start-ups building businesses around it, OpenFlow has emerged as a topic of discussion in many networking circles today. OpenFlow is essentially a proposal to add a hook into an existing network device so that control and forwarding decisions can be managed centrally, off-device, and then applied identically across every device in the network. Whether that control involves table replication, routing actions, security policies, or even access control list (ACL) population, all are possible within the OpenFlow architecture. A device could then merely execute packet-handling directions rather than having to compute each decision itself. This, in turn, could radically reduce the processing requirements on the device, while enabling consistent policy application across a very large number of devices (theoretically, tens to hundreds of thousands), which would vastly simplify management. OpenFlow also gives the operator the ability to integrate intelligence into the network, and to execute and apply QoS and security policies, without relying on the network device’s operating system or on application awareness. While some vendors offer devices with this capability today on a select portion of their portfolios, exceptionally few environments are 100% standardized on a single vendor’s latest generation of products.
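The split described above can be illustrated with a minimal sketch: a central controller computes match/action rules once and installs identical copies in every switch, which then does nothing but table lookups. The class and field names here are illustrative simplifications, not the actual OpenFlow wire protocol or any vendor's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Match:
    """A tiny subset of the fields OpenFlow can match on; None is a wildcard."""
    in_port: object = None
    dst_ip: object = None

@dataclass
class FlowEntry:
    match: Match
    actions: list  # e.g. ["output:1"] or ["drop"]

class Switch:
    """A 'dumb' forwarding element: no local decision logic, only table lookups."""
    def __init__(self, name):
        self.name = name
        self.table = []

    def install(self, entry):
        self.table.append(entry)

    def handle(self, in_port, dst_ip):
        # Execute the first matching rule; on a table miss, punt to the controller.
        for e in self.table:
            m = e.match
            if m.in_port in (None, in_port) and m.dst_ip in (None, dst_ip):
                return e.actions
        return ["send_to_controller"]

class Controller:
    """The central brain: one policy, pushed identically to all devices."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, entries):
        for sw in self.switches:
            for e in entries:
                sw.install(e)

switches = [Switch(f"sw{i}") for i in range(3)]
ctrl = Controller(switches)
# One ACL-style drop rule plus a default forwarding rule, applied uniformly.
ctrl.push_policy([
    FlowEntry(Match(dst_ip="10.0.0.99"), ["drop"]),
    FlowEntry(Match(), ["output:1"]),
])

print(switches[2].handle(in_port=4, dst_ip="10.0.0.99"))  # ['drop']
print(switches[0].handle(in_port=4, dst_ip="10.0.0.5"))   # ['output:1']
```

The point of the sketch is the division of labor: the ACL lives in one place, and every switch, regardless of vendor, ends up with the same entry.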
One of the greatest fears about OpenFlow is that it will commoditize the network by eliminating the ‘secret sauce’ each vendor builds into its switch operating system. If OpenFlow were merely a way to offload switching and routing decisions, that fear would be justified. However, OpenFlow can provide much more than simple routing and switching decisions. Consider the complexities that virtualization has introduced, such as the need to tightly integrate switch element management with the virtualization platform’s management in order to maintain consistent QoS across the physical infrastructure during a virtual machine move. This is challenging enough within a single enterprise’s infrastructure. When multi-tenancy is introduced, and further complicated by a mix of networking devices from multiple vendors, tracking and applying a consistent QoS model becomes quite difficult. This is the reality service providers (cloud providers included) face today, and it will be the reality for enterprises soon, if not already.
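To make the VM-move scenario concrete, here is a hypothetical sketch (not any vendor's or hypervisor's real API) of how a central controller could keep a VM's QoS treatment consistent across a migration: the policy is defined once, and the controller simply re-programs whichever switch the VM lands on.

```python
class QoSController:
    """Central store of per-VM QoS policy; switches carry only installed copies."""
    def __init__(self):
        self.policies = {}      # vm name -> policy (e.g. DSCP marking, rate limit)
        self.location = {}      # vm name -> switch it currently attaches to
        self.switch_rules = {}  # switch name -> {vm name: policy}

    def define_policy(self, vm, policy):
        self.policies[vm] = policy

    def attach(self, vm, switch):
        """Install the VM's policy on the switch it hangs off."""
        self.location[vm] = switch
        self.switch_rules.setdefault(switch, {})[vm] = self.policies[vm]

    def migrate(self, vm, new_switch):
        """On a VM move, withdraw the rule from the old switch and
        reinstall the identical rule on the new one."""
        old = self.location[vm]
        self.switch_rules.get(old, {}).pop(vm, None)
        self.attach(vm, new_switch)

ctrl = QoSController()
ctrl.define_policy("web01", {"dscp": 46, "rate_mbps": 100})
ctrl.attach("web01", "rack1-tor")
ctrl.migrate("web01", "rack7-tor")  # vMotion-style move to another rack

print(ctrl.switch_rules["rack7-tor"])  # {'web01': {'dscp': 46, 'rate_mbps': 100}}
print(ctrl.switch_rules["rack1-tor"])  # {}
```

Because the switches never own the policy, it does not matter which vendor built the top-of-rack device at the destination; consistency comes from the controller, which is precisely the property multi-vendor, multi-tenant environments struggle to achieve today.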
Few large networking vendors have strongly stated their support for OpenFlow, yet many are involved, if only subtly. It is unclear which of them will incorporate specific elements of OpenFlow to solve tomorrow’s problems, or how their messages will play out with their installed bases and the marketplace at large. Still, I’d wager that within the next six to 12 months we will see position statements from nearly every major networking vendor regarding OpenFlow, including their planned support; at that point, the technology will be well on its way to commercialization, whether for mass-market applications or for more specific niches. Today, the commercial options are limited and come only from smaller players, which means the solutions are generally neither simple to deploy nor ready for primetime. In the coming months, however, the big network hardware vendors will weigh in, and the industry will see exactly how OpenFlow will influence and affect data center architectures as we move ahead.