Hi, my name is Rob Sherwood and today I'm going to show you my tool
"FlowVisor". FlowVisor allows researchers to test out new experiments on
their production network at scale and with real users. In fact, all of
the other demos on this site run on top of FlowVisor on our production
network at Stanford. OpenFlow is a protocol for managing how packets
are controlled in your network. It works by standardizing the interface
between control and data planes [highlight control and data plane
as we say them], and then by moving the control plane off-box and to a
centralized service called the OpenFlow controller. In this video, we show
how our tool-- "FlowVisor"-- can logically slice an OpenFlow network, and
allow _multiple_ controllers to concurrently manage different subsets---or
"slices"--- of network resources. Critically, FlowVisor ensures strong
isolation between network slices so that actions in one slice do not
affect another. FlowVisor works as a transparent layer between switches
and multiple controllers. That is, it speaks the OpenFlow protocol both
down to the switches [animate "OpenFlow" arrow down to switches] and also
up to the controllers [fade first arrow, animate second "OpenFlow" arrow
up to the controllers, then fade]. In this way, much like a hypervisor
acts in standard machine virtualization, FlowVisor intercepts all control
messages to and from the datapath, sanity-checking and rewriting them to
ensure isolation. Let's see how this works in practice. In OpenFlow,
when a packet arrives at a switch that does not match any cached Flow
Entries, the switch generates a message to the controller asking what to
do with packets of this form. The FlowVisor then intercepts this message
and makes a policy check to determine which controller is responsible for
this packet. The message is then forwarded to the appropriate controller,
which makes some forwarding decision, e.g., send all packets that look
like this out port #5, and sends a corresponding new forwarding rule
back down to the switch. The FlowVisor again intercepts the rule and
does another policy check: this time to ensure that the new rule does
not infringe on the traffic for the other slices. Once the rule is
approved, it is forwarded on to the switch, cached, and the packet is
then forwarded on appropriately. New packets match the cached rule and
are then forwarded without going through this process. Thus, slicing
with the FlowVisor imposes no performance penalty on packet forwarding,
and packets are forwarded at full line rate. So, if we use FlowVisor
to slice every switch and router in our network, we can create logical
_copies_ of the same physical network. This allows potentially faulty
experiments to run alongside existing production network services
on the same physical network. In other words, FlowVisor will allow
researchers and network operators to try out new ideas on their real
networks without interfering with their normal, day-to-day operations.
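The interception flow described above can be sketched in a few lines of Python. This is a hypothetical simulation, not the FlowVisor code: the flowspace representation, class, and method names are illustrative assumptions.

```python
# Sketch of FlowVisor-style slicing (illustrative, not the real FlowVisor code).
# A "flowspace" is a predicate on packet header fields; each slice's
# controller may only see and control traffic inside its flowspace.

def matches(flowspace, pkt):
    """True if every field in the flowspace predicate matches the packet."""
    return all(pkt.get(field) == value for field, value in flowspace.items())

class FlowVisor:
    def __init__(self, slices):
        # slices: ordered list of (slice_name, flowspace) pairs
        self.slices = slices

    def on_packet_in(self, pkt):
        # Policy check: which slice's controller is responsible for this packet?
        for name, flowspace in self.slices:
            if matches(flowspace, pkt):
                return name
        return None  # no slice owns this traffic

    def on_flow_mod(self, slice_name, rule_match):
        # Isolation check: reject or narrow a new rule so it cannot
        # touch traffic outside the slice's flowspace.
        flowspace = dict(self.slices)[slice_name]
        for field, value in flowspace.items():
            if rule_match.get(field) not in (None, value):
                raise PermissionError("rule leaves the slice's flowspace")
            rule_match[field] = value  # narrow the rule to the flowspace
        return rule_match

# Example: the "web" slice controls traffic to TCP port 80; "prod" gets the rest.
fv = FlowVisor([("web", {"tp_dst": 80}), ("prod", {})])
print(fv.on_packet_in({"tp_dst": 80, "ip_src": "10.0.0.1"}))  # -> web
print(fv.on_flow_mod("web", {"ip_src": "10.0.0.1"}))
```

Note how the flow-mod check narrows rather than blindly forwards the rule: a controller that sends an overly broad rule gets it rewritten down to its own flowspace, which is what keeps one slice from capturing another slice's packets.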
What you're seeing here is a live graphical monitoring program of the
sliced Stanford OpenFlow network. The physical network---shown in
the top middle---is sliced into four experimental slices---shown in
each of the four corners---and one production slice, shown in red in
the bottom middle. Note that this is the same physical network that
we read our email and surf the web on, so if the FlowVisor does not
correctly maintain isolation between the slices, then we are the first
to know about it. Given FlowVisor's strong isolation capabilities,
we now have a novel technique to roll out new services. Specifically,
users can selectively delegate control of a subset of their traffic to
a new service; that is, they can "opt-in" to a new service. For example,
user Doug may decide that he wants his voice over IP traffic handled by
KK's wireless for low latency [highlight KK's slice], his web traffic
handled by Nikhil's load-balancing slice [highlight Nikhil's slice]
optimized for high throughput, and the rest of his traffic handled by a
default, production slice. This is not just a good idea on paper: we
actually deploy FlowVisor on real equipment on our own production network.
In addition to Stanford, FlowVisor is deployed on 7 college campuses
and in two backbone networks. If you would like to learn more about
FlowVisor, please check out our OSDI 2010 paper "Can the Production
Network Be the Testbed?", the FlowVisor website, or just download the
code from our code repository.