OpenFlow demystified

28.10.2011

In an OpenFlow network, the various control plane functions of an L2 switch -- MAC address learning, etc. -- are determined by server software rather than switch firmware. The protocol's designers went even further, allowing an OpenFlow controller and switch to perform many other traditional control functions, such as routing, firewalling and load balancing.
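Conceptually, this split boils down to a controller installing match/action entries into a switch's flow table, while the switch itself just applies them. The sketch below illustrates that model in plain Python; the names and structures are illustrative, not an actual OpenFlow library API:

```python
# Sketch of the flow-table model OpenFlow exposes: the controller (server
# software) installs match/action entries; the switch applies them.
# Names here are hypothetical, not a real OpenFlow controller API.

def make_flow_entry(match, action):
    """A flow entry pairs a packet match with a forwarding action."""
    return {"match": match, "action": action}

# Controller-side decisions, e.g. a learned MAC address mapped to a port.
flow_table = [
    make_flow_entry({"dst_mac": "00:00:00:00:00:01"}, {"output_port": 1}),
    make_flow_entry({"dst_mac": "00:00:00:00:00:02"}, {"output_port": 2}),
]

def forward(packet, table):
    """Switch-side behavior: the first matching entry wins; a packet that
    matches nothing is punted to the controller for a decision."""
    for entry in table:
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return {"send_to": "controller"}
```

The same match/action machinery is what lets a controller implement routing, firewalling (drop actions) or load balancing (rewrites), rather than only L2 forwarding.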

Today, the OpenFlow protocol has moved out of academia and is driven by the Open Networking Foundation, a nonprofit industry organization whose members include many major networking equipment vendors and chip technology providers, and whose board includes some of the largest network operators in the world, such as Google, Yahoo, Facebook, Deutsche Telekom and Verizon.

Three years ago, OpenFlow was driven entirely by a small number of universities and (gracious) switch vendors who believed in supporting research. OpenFlow allowed those researchers to test brand-new protocol designs and ideas safely on slices of production networks and traffic.

Two years ago, the programmability of OpenFlow started to attract interest from hyper-scale networking teams looking for a way to support massive map-reduce/Hadoop clusters. These clusters have a very specific network requirement: Every server needs equal networking bandwidth to every other server, a requirement commonly known as "full cross-sectional bandwidth." (Note this is not exactly the norm in large data centers today -- over-subscription of 8x-32x is often seen to control costs.)
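The oversubscription figure is just the ratio of downstream server capacity to upstream uplink capacity. A quick back-of-the-envelope check, with illustrative numbers not taken from the article:

```python
# Oversubscription ratio of a top-of-rack switch: total server-facing
# bandwidth divided by total uplink bandwidth. A ratio of 1.0 is "full
# cross-sectional bandwidth"; 8x-32x is common in large data centers.
# The port counts and speeds below are hypothetical examples.

def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# 48 x 10 Gb/s server ports feeding 4 x 10 Gb/s uplinks:
ratio = oversubscription(48, 10, 4, 10)  # 480/40 = 12x oversubscribed
```

Hitting a 1.0 ratio for every server pair across a large cluster is what drives the non-trivial multipath topologies that make these map-reduce networks expensive to build with conventional switching.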