Despite OpenFlow’s Promises, Switch Architecture Still Matters
September 26, 2012
- In the race to get OpenFlow and SDN onto new networking RFPs, enterprises must remember that controlling flow-based traffic patterns will address a couple of weaknesses of networks past; however, edge-to-edge switching latency, performance, and more remain crucial.
- For the first two to three years, as enterprises prove OpenFlow and early SDN technologies within their environments (and to themselves), the prevalent model will be a hybrid one, in which a vendor’s high-speed fabric and flow control run concurrently on the same device (Cisco, Brocade, Juniper, Arista, etc.).
I find it amusing that the OpenFlow discussion has polarized pockets of the IT industry so completely. It is a great innovation, absolutely, and it will address certain limitations and free up otherwise locked networking resources. However, you get the sense that any given author of one of these articles is slightly biased toward applications, servers, or networks. The application purist, who consumes all resources in service of application architecture, wants to remove inhibitive deployment times from the infrastructure and therefore does not focus on the minutiae of each domain’s critical factors. The server teams have long sought to give their own domain constituency high-speed interconnect between adjacent servers; in fact, several technologies from the biggest server vendors exist to provide just such an answer. The network team members, thrust into the infrastructure limelight by the efficiencies to be gained, struggle with this newfound stardom and with the education they must acquire in order to elevate all of the network attributes for which they are responsible.

Many enterprise IT buyers writing RFPs are in the process of adding (or have already added) some flavor of SDN language to the mix, which is good, but there is merit in having discipline expertise contribute to the RFP itself. Server administrators are best at understanding memory riser architectures and at dealing with firmware ‘fun’ on their platforms; network administrators are best suited to defining the wired architecture and its intricacies; and application specialists can best address acceleration needs and OSI layers 4–7. OpenFlow and SDN are amazing, but fundamental architecture needs remain.
Early SDN architectures, and OpenFlow protocol-based architectures in particular, are going to operate in what is being referred to as ‘hybrid’ mode: the switch runs the control set required for the OpenFlow protocol alongside portions of the native operating system still functioning at L2 and L3. Why does this matter? Vendor fabric solutions operate in the context of their own OS and almost exclusively at L2. OpenFlow can address link congestion and optimize traffic patterns, along with other attributes that were previously difficult to address; that leaves latency, L2 scale, and network resiliency to the fabric or the inherent networking architecture. The last one in particular is critical, as OpenFlow 1.0 has not yet addressed controller resiliency; the fabric must therefore offer operational failover should the need arise. These criteria fall within the minutiae of network detail and demand that deep discipline expertise be retained on the IT staff (whether on payroll or on retainer).

The role of a cloud architecture team is a diverse one in which many disciplines are required. This ‘IT Team of Tomorrow’ will consist of a cloud architect (alongside a cloud business counterpart) managing domain disciplines, but I certainly hope we do not swing the pendulum too far away from the domain expertise that is mandatory for each of these technologies to be optimally deployed for cloud enablement.
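The hybrid-mode failover behavior described above can be sketched in a few lines. This is a minimal, illustrative model only, not any vendor’s API: the switch forwards via its OpenFlow pipeline while the controller answers keepalives, and reverts to its native L2/L3 pipeline when the controller goes silent. The class and attribute names (`HybridSwitch`, `CONTROLLER_TIMEOUT`, the mode strings) are hypothetical.

```python
import time

# Hypothetical threshold: seconds of controller silence before failover.
CONTROLLER_TIMEOUT = 3.0

class HybridSwitch:
    """Toy model of a hybrid-mode switch: OpenFlow pipeline while the
    controller is reachable, native L2/L3 forwarding otherwise."""

    def __init__(self):
        self.mode = "openflow"
        self.last_echo = time.monotonic()

    def on_echo_reply(self):
        # The controller answered a keepalive (an echo request/reply
        # exchange in OpenFlow terms), so OpenFlow control is healthy.
        self.last_echo = time.monotonic()
        self.mode = "openflow"

    def tick(self, now=None):
        # Called periodically by the switch; falls back to the native
        # pipeline once the controller has been silent past the timeout,
        # so the fabric keeps forwarding without controller resiliency.
        now = time.monotonic() if now is None else now
        if now - self.last_echo > CONTROLLER_TIMEOUT:
            self.mode = "native"
        return self.mode
```

The design point this illustrates is the one in the paragraph above: with no controller redundancy in OpenFlow 1.0, the only safe behavior is for the device itself to own the failover decision and keep the fabric forwarding.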