In this video, I want to talk about Meru's contention management schemes, both within
a single access point and also with multiple access points that happen to be on the same
channel. So what I want to do is first start with how we do contention adaptation to minimize
collisions with one AP, and then as a second phase, talk about how it extends to multiple
APs. So before we get to that, let's start with some core principles.
So as you are probably aware, 802.11 is based on CSMA/CA, Carrier Sense Multiple Access
with Collision Avoidance, which is a wireless variant of CSMA/CD, which is sort of the core
premise for Ethernet. The standard Ethernet graph, which most students of the art are
familiar with looks like this, where the x-axis is the number of contenders and the
y-axis is the channel utilization. Broadly, this equates to throughput.
So the idea is that for a few contenders, as the number of contenders increases, the
utilization actually goes up, but then it starts to drop off. This really is the peak,
and notice of course that the ideal is to flat line at the peak. So let's take a quick
digression into why this goes up and then comes down, and then let's look at the impact
on 802.11. So as you know, a standard CSMA/CD or CA mechanism is based on the following.
You sense the carrier, the carrier might be busy for a while, and then when the carrier
becomes free, depending on the protocol, the protocol might mandate that a transmitter
wait for some period of time to make sure the channel stays clear, and then, using a
contention-resolution mechanism, a multiple access protocol based on randomized backoffs,
the transmitter picks a certain value -- a backoff value -- which is a random value
from zero to some contention window. Notice this is a randomized value that is uniformly
distributed from zero to that contention window.
So let's assume that there is a device that picks four, because let's say it had to pick
from zero to seven, and it picked four. It counts down: three, two, one, zero. So it waits for four
slots. If the channel is still free, it starts to transmit. This is the basic idea that governs
all carrier-sense multiple access protocols. Now the point is the following. Notice that
this number, this contention window number, is strongly a function of the number of contenders.
If you have only one contender, you don't need to back off. You can basically pick zero.
At the other extreme, if you have an infinite number of contenders,
this value should be infinity.
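As an aside, the slotted backoff described here can be sketched in a few lines of Python. This is purely illustrative, not any vendor's implementation; the window size and the tie-breaking rule are simplified assumptions:

```python
import random

def pick_backoff(cw):
    """Pick a uniformly distributed backoff slot from 0 to cw - 1
    (e.g. eight choices, zero through seven, for cw = 8)."""
    return random.randrange(cw)

def contend(num_contenders, cw):
    """One contention round: every device draws a backoff and counts down.
    The device whose counter hits zero first transmits; if two or more
    drew the same lowest value, they transmit together and collide."""
    picks = [pick_backoff(cw) for _ in range(num_contenders)]
    lowest = min(picks)
    winners = [i for i, p in enumerate(picks) if p == lowest]
    if len(winners) == 1:
        return winners[0], lowest   # index of the winner, slots waited
    return None, lowest             # None signals a collision

winner, waited = contend(num_contenders=2, cw=8)
print(winner, waited)
```

With one contender, `contend(1, cw)` always returns a winner; as the number of contenders grows relative to the window, `None` (a collision) becomes the common outcome, which is exactly the tradeoff discussed next.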
As a specific example, if you have two contenders and they pick values between zero and seven,
notice each of them has eight choices and in fact there is a 1/8 probability of
collision, because each device can choose one of eight values, zero through seven, which
means there's a total of 64 combinations, eight of which will cause a collision.
Because if they happen to pick the same value there's a collision. So the probability of
collision in this case is 1/8. So the larger the contention window, the lower the probability
of collision; but if there aren't many transmitters, you wait longer than you need to. So
there is a tradeoff between waiting too long and colliding, and as the number of contenders
starts to increase, it is very difficult to predict this value correctly. So all of this
is really collision-based loss.
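The 1/8 figure can be checked by brute force. Here is a short sketch, plain combinatorics with no 802.11 specifics, that enumerates every combination of picks:

```python
from itertools import product

def collision_probability(num_contenders, cw):
    """Exact probability that at least two devices draw the same backoff,
    found by enumerating all cw ** num_contenders equally likely outcomes."""
    outcomes = list(product(range(cw), repeat=num_contenders))
    collisions = sum(1 for picks in outcomes if len(set(picks)) < num_contenders)
    return collisions / len(outcomes)

# Two contenders, eight slots: 8 colliding pairs out of 64 combinations.
print(collision_probability(2, 8))    # 0.125, i.e. 1/8
# Doubling the window halves the collision odds for two contenders...
print(collision_probability(2, 16))   # 0.0625
# ...but more contenders push the odds back up at the same window size.
print(collision_probability(4, 8))
```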
And again, what is a collision? When two devices happen to pick the same value,
that is a collision. Now let's look at 802.11. It turns out that the 802.11 graph is actually
much steeper, and the reason is unlike Ethernet there is no collision detection in 802.11.
If I'm a transmitter, I can't transmit and receive at the same time, and even if I could
it's irrelevant, because if I'm transmitting to you the collision at your end is what's
important, not the collision at my end. So the way 802.11 works is when a device starts
to transmit, it transmits a data frame, and then it expects an acknowledgement from the
receiver. So you transmit, you wait for some amount of time, you timeout, if the acknowledgement
didn't come in then you declare a collision and you move on. Essentially, you retry.
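That transmit, wait, timeout, retry cycle can be sketched as a simple loop. `send_frame` and `wait_for_ack` here are hypothetical stand-ins for the radio, not real driver calls:

```python
def transmit_with_retries(send_frame, wait_for_ack, max_retries=7):
    """802.11-style loss handling: there is no on-air collision detection,
    so a missing acknowledgement within the timeout is treated as a
    collision and the frame is simply retried."""
    for attempt in range(max_retries + 1):
        send_frame()
        if wait_for_ack():      # ACK arrived before the timeout expired
            return attempt      # number of retries the frame needed
    raise RuntimeError("frame dropped after max retries")

# Toy channel that loses the first two attempts, then delivers:
acks = iter([False, False, True])
print(transmit_with_retries(lambda: None, lambda: next(acks)))  # 2
```

Each failed pass through the loop costs a full frame transmission plus an ACK timeout, which is why a collision is so much more expensive here than on Ethernet, where it is detected mid-transmission.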
So notice that the amount of time it takes to resolve a collision is a heck of a lot
longer in 802.11 than it is in Ethernet where you detect a collision right here. So the
penalty for collisions is higher, and hence, as the number of collisions starts to increase,
the throughput starts to drop off. This tells us something really critical. It tells us
that picking this contention window correctly is crucial -- even more crucial in 802.11
than in Ethernet. So the WMM standard, which is an enhancement to the basic 802.11
standard, actually allows or provides the infrastructure for access points to advertise
these contention window parameters. In fact, there are four parameters that it allows an AP
to advertise. The first is an initial wait time: originally this was called the distributed
inter-frame spacing, and since then, with WMM, depending on the class of service you might
have a different amount of wait time. So for the purpose of this discussion, let's call it
an initial wait time, and this value is adjustable.
For those of you more technically inclined, go look up AIFS, the arbitration inter-frame
spacing, which maps to this initial wait time. So once you pick this initial wait time,
you pick a value -- again, a randomized value between zero and something called the contention
window minimum -- and for every subsequent retry you double this to the point where you reach
zero to CWmax. And once you succeed, with WMM you might not transmit just one frame, you
might transmit a sequence of frames for a length of time that is called TXOP. To summarize
then, WMM allows you to customize four parameters: the initial wait time, governed by AIFS;
CWmin and CWmax, which for the purpose of this discussion let's just think of as the
contention windows; and TXOP, the transmit opportunity, which is nothing but
the length of time that you transmit once you win a contention. The point here
is if you know how to adapt the contention window to the amount of over-the-air traffic,
you can really reduce the amount of loss.
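To pull the four knobs together, here is a hedged sketch. The structure and the numeric values are illustrative only (they are not the WMM defaults), but the doubling rule from CWmin toward CWmax follows the standard all-ones progression:

```python
from dataclasses import dataclass

@dataclass
class WmmParams:
    """The four per-class WMM knobs discussed above (illustrative names)."""
    aifs_slots: int   # initial wait time (AIFS) before the backoff countdown
    cw_min: int       # first contention window: draw from 0..cw_min
    cw_max: int       # cap the window grows to after repeated collisions
    txop_us: int      # how long the winner may keep the air, in microseconds

def contention_window(params, retry):
    """Double the window on every retry (2*cw + 1 keeps the all-ones
    values: 7 -> 15 -> 31 ...), clamped at cw_max."""
    cw = params.cw_min
    for _ in range(retry):
        cw = min(cw * 2 + 1, params.cw_max)
    return cw

voice = WmmParams(aifs_slots=2, cw_min=7, cw_max=31, txop_us=1504)
print([contention_window(voice, r) for r in range(5)])  # [7, 15, 31, 31, 31]
```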
So Meru, over a period of seven years with a lot of research and several patent-pending
algorithms, has figured out a way to effectively estimate, on the fly, the number of
contenders: the number of devices that at any given microsecond are effectively
contending for the air, and on that basis customize the CWmin and CWmax. By customizing these values and advertising
them over the air, we are able to nearly flat line. I would not say we have flat lined it,
that would be utopia for us -- we have nearly flat lined the aggregate utilization as the
number of devices goes up. This is something that's a dramatic differentiator for us because
pretty much everybody else in the industry sort of tops off at some point.
So, this is one very significant aspect where we are using contention window adaptation,
what we call adaptation of the WMM parameters, in order to minimize collisions. Step one.
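Meru's actual estimation algorithms are patent-pending and not public, so the following is only a toy illustration of the idea: once you have an estimate of the contender count, size the window up or down to track it.

```python
def adapt_cw(estimated_contenders, cw_floor=7, cw_ceiling=1023):
    """Toy rule, not Meru's algorithm: grow the window through the usual
    all-ones values until it offers roughly two slots per contender."""
    cw = cw_floor
    while cw + 1 < 2 * estimated_contenders and cw < cw_ceiling:
        cw = cw * 2 + 1
    return cw

# A busier channel gets a wider window; an idle one stays near the floor:
print([adapt_cw(n) for n in (1, 4, 16, 64)])  # [7, 7, 31, 127]
```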
Now let's get to step two. All of this works within the context of a single access
point. Now, we might have multiple devices that are contending: some might be contending
more or some might need to be suppressed more than others. So how do you do that? If you
go to the video that talks about Virtual Port, you will notice that each of these devices
in a Meru system is allocated its own virtualized access point, what we call a BSSID, and these
values are advertised on a per-BSSID, or virtualized access point, basis.
Effectively what it does is it allows us to take these parameters and customize them on
a per-device basis. So if you have four devices, you have the ability to say device one can
get a low contention value while device two will have a high one, if you want to give
preferential access to device one over device two. The ability to manipulate these values
on a per-device basis is unique to Meru and is one of the key ways that we are able to
replicate switch-like behavior.
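As a final sketch, with hypothetical names only, the per-device idea amounts to handing each virtualized access point its own parameter set, so a prioritized device sees a smaller window than its neighbors:

```python
def per_device_windows(devices, prioritized, low_cw=3, high_cw=31):
    """Give prioritized devices a small contention window and everyone else
    a large one; each value would be advertised on that device's own BSSID.
    (Illustrative only -- the real per-device adaptation is proprietary.)"""
    return {dev: (low_cw if dev in prioritized else high_cw) for dev in devices}

windows = per_device_windows(
    devices=["device1", "device2", "device3", "device4"],
    prioritized={"device1"},
)
print(windows["device1"], windows["device2"])  # 3 31
```

A lower window means the prioritized device tends to count down to zero first, which is the preferential-access behavior described above.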