- I’m going to start off by telling you the end of the story so that you have a framework
in which to understand the rest of the talk. - Databases are commonly thought of as repositories
of information, but if we allow them to *evaluate* rules based on the current environment, they
can actually enhance the whitespaces. - As you will see, there is a potential for
huge gains for both primaries and secondaries if we create regulations which take into account
aggregate interference and the local state for each whitespace device.
- When a whitespace device contacts a whitespace database, it provides its location in exchange
for a list of available whitespace channels. - In between the devices and the databases,
we have an API which defines the interface between the two. Having an API allows either
side to change while being transparent to the other.
- It is important to get the API right the first time because although the databases
can change and new devices will be introduced, the API will be more or less fixed for compatibility
reasons. - We argue that we must include a way to change
the maximum transmit power in the API for two reasons: (1) international harmonization
(Ofcom in the UK and the ECC are already thinking about this) and (2) for future compatibility.
We don’t need to have it do anything interesting yet but we do need to have it in there.
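As a concrete illustration of "have it in there without it doing anything interesting yet," here is a hypothetical database-to-device response carrying a per-channel maximum transmit power. The field names, channel numbers, and power values are all invented for this sketch; no actual whitespace-database API is being quoted.

```python
import json

# Hypothetical API response from a whitespace database (all field names and
# values are illustrative, not from any real specification).
response = json.loads("""
{
  "channels": [
    {"channel": 21, "max_eirp_dbm": 20.0},
    {"channel": 27, "max_eirp_dbm": 36.0}
  ]
}
""")

# A device that predates the field can simply ignore it, and a conservative
# default applies whenever the database omits it -- which is what makes it
# safe to reserve the field now and give it meaning later.
DEFAULT_EIRP_DBM = 16.0  # assumed conservative fallback
limits = {c["channel"]: c.get("max_eirp_dbm", DEFAULT_EIRP_DBM)
          for c in response["channels"]}

assert limits == {21: 20.0, 27: 36.0}
```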
- Suppose max. power is in the databases. What can we do with it?
It’s certainly not obvious at this point what the optimal power allocation would be.
As this map shows, the variation just in the number of available channels is quite high.
In cases like this, economists turn to trading in order to find efficient solutions. The
Coase theorem states that, in the absence of transaction costs, trading will lead to
an efficient outcome. We now turn to a thought experiment on just how such bargaining might
play out in the whitespaces.
- Let’s take a very specific example in which we have a one-dimensional world with
two whitespace channels, red and blue. - Each channel has a primary transmitter or
two, each with a protected region (shown as colored circles) and a no-talk region (shown
in black). If a primary is “protected,” it means that any TV receiver inside the protected
region will still receive and be able to decode TV signals from that tower.
- For now, we’ll consider a set of users which is evenly spaced along the one-dimensional
world. Each user is represented by a dot (or a pair of dots if he has two channels available).
Note that we will use the terms “user” and “location” interchangeably.
- Finally, we need to define a currency before we can begin trading.
- Since the only constraint is the primary’s protection, the natural currency is the amount
of interference caused to the primary’s receivers. Each primary receiver can tolerate
a certain amount of interference: any more and reception will be lost. This interference
“budget” - combined with the attenuation (for example,
we use the ITU propagation model) - leads to a natural relative pricing on transmit
power. In general, the cost of a unit of power is inversely proportional to the distances
to protected regions. For example, in the blue channel we see the cost per unit power
increasing as we go toward the right. In the red channel, the cost decreases and then increases
again as we travel between the two primary transmitters.
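The pricing just described can be sketched in a few lines, assuming a simple power-law path loss in place of the ITU propagation model; the exponent and the geometry below are illustrative only.

```python
# Toy sketch of the relative "price" of a unit of transmit power, assuming a
# power-law path loss g(d) = d**-ALPHA (the talk uses the ITU propagation
# model; the exponent and the 1-D geometry here are illustrative).
ALPHA = 3.5  # assumed path-loss exponent

def unit_power_price(x, protected_edges):
    """Interference caused at the nearest protected-region edge by one unit
    of transmit power from location x (1-D world, distances in km)."""
    d = min(abs(x - e) for e in protected_edges)
    return d ** -ALPHA  # closer to protection => higher price

# Blue channel: one primary whose protected region begins at x = 100 km.
# Red channel: primaries with protected-region edges at x = 0 and x = 100 km.
xs = [10.0 * i for i in range(1, 10)]  # locations at 10 .. 90 km
blue_prices = [unit_power_price(x, [100.0]) for x in xs]
red_prices = [unit_power_price(x, [0.0, 100.0]) for x in xs]

# Blue price rises monotonically toward the primary on the right; red price
# falls and then rises again between the two primaries.
assert all(a < b for a, b in zip(blue_prices, blue_prices[1:]))
assert min(red_prices) == red_prices[len(red_prices) // 2]
```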
- Here we see an example of one transaction that might take place between users A and
B. Note that the interference for each TV transmitter is shown next to it as a stacked
bar graph. - In the red channel, A decreases its transmit
power and B increases its transmit power. In exchange, B gives A some money which can
be used to buy power from other users. Notice that the total amount of interference to the
primaries remains the same but the proportions have shifted from A to B.
- In a second transaction -- which need not be with B but we’ll assume it is for simplicity
-- A pays B in exchange for power in the blue channel. Again, note that the interference
has shifted but not increased. Because each user is now using “less expensive” power,
the total transmit power (and therefore utility) has increased while staying within the interference
budget.
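One such trade can be written down directly: the interference weights below play the role of the per-location prices, and the numbers are illustrative rather than taken from the talk's ITU-based model.

```python
# Sketch of a single trade: A sells transmit power to B on one channel.
# The weights are the interference caused at the primary per unit of power
# (illustrative values; A sits closer to the protected region than B).
w_A, w_B = 0.08, 0.02
delta_A = 1.0  # A gives up one unit of transmit power

# B can absorb w_A / w_B times as much power within the same budget:
delta_B = w_A * delta_A / w_B

interference_freed = w_A * delta_A   # interference A's unit was causing
interference_added = w_B * delta_B   # interference B's extra power causes
assert abs(interference_freed - interference_added) < 1e-9

# Total transmit power rises while the aggregate interference at the
# primary is unchanged -- the proportions have merely shifted from A to B.
assert abs(delta_B - 4.0) < 1e-9
assert delta_B > delta_A
```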
- Let’s fast-forward a bit to get an idea of the end solution.
- First of all, we see that this particular user had no choice but to use the blue channel
since he was in the protected region on the red channel. Thus he is unable to shift his
power away from the blue channel. - Next, we see locations which are in the
opposite situation: for them, power is inexpensive in the blue channel and very expensive in
the red channel - so they opt to use only the blue channel.
- The locations in the middle are in a similar situation but the price differential is low
enough that their optimal strategy is to use both channels.
- Finally, we see that there are a few locations which are close to the protected region on
both channels, making them the most difficult to serve.
The Coase theorem seems to give the solution to our problem: let the whitespace devices
trade among themselves in order to arrive at an efficient allocation of the resources.
However, we haven’t actually met the conditions of the theorem: in the real world, there are
transaction costs and here they are significant. There are potentially many participants in
this trade, all of whom would need to be able to communicate with one another which imposes
an additional and unnecessary constraint on the whitespace devices. … so it’s time
to look at an alternative to trading.
- Although we can’t do actual trading -- at least not as the default -- whitespace databases
have near-global knowledge which would allow them to simulate the trades between whitespace
devices. - Unlike humans, wireless devices have very
simple desires: they want as much data rate as they can get since all of their value comes
through data rate. Because of this, we have a hope of giving a solution to this problem.
- We cannot possibly account for everything in our simulated trading so we should hope
to offer a good default which maximizes the greater good. The real question is: what is
the greater good?
- One natural goal is to maximize the total power used by whitespace devices while providing
sufficient protection for primaries. However, the solution to this maximization problem
is easy to predict given the relative prices of power:
- On each channel, the location or locations with the cheapest power will receive inordinate
amounts of power while all other locations receive none.
- This is clearly not a good solution.
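The degenerate outcome is easy to see in miniature: maximizing total power under one weighted interference budget is a linear program, and its optimum spends the entire budget at the cheapest location. The weights and budget below are illustrative.

```python
# Sketch: maximize total power subject to  sum_l w[l] * p[l] <= BUDGET.
# With a linear objective and a single linear constraint, the optimum dumps
# the whole budget on the location with the smallest weight (illustrative
# numbers, not from the talk's data).
w = [0.50, 0.10, 0.02]  # interference "price" per unit power at each location
BUDGET = 1.0            # primary's aggregate interference budget

cheapest = min(range(len(w)), key=lambda l: w[l])
power = [0.0] * len(w)
power[cheapest] = BUDGET / w[cheapest]

# One location gets an inordinate amount of power; everyone else gets none.
assert abs(power[cheapest] - 50.0) < 1e-9
assert sum(1 for p in power if p > 0) == 1
```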
- Another natural goal is to maximize the average data rate across locations. When we
do this, we see the following power and rate allocations. We’ve shown this for a very
simple one-dimensional world with a single primary at the left. We see that as the distance
to the protected region increases, so does the power and hence the data rate.
- This seems great until you notice that there’s an approximately 50 km stretch where the data
rate is ZERO, and the results are very similar regardless of the specific objective function. Although
these objective functions promote data rate equality, they result in a lot of wasted whitespace
and very unequal data rates. What is it about the whitespaces that renders this common objective
function undesirable?
- To see why maximizing the average data rate does not work well in the whitespaces, let’s
compare the whitespaces with homogeneous spectrum. - As we’ve already seen, the aggregate interference
constraint is a *weighted sum constraint* rather than a sum constraint as in homogeneous
spectrum. Since the cost depends on location, we can get more “bang for the buck” by
using power far away from the protected region. Note that we don’t see a point mass for
the power as with the previous objective function since the logarithmic nature of the capacity
means we see diminishing returns. - The other main difference between homogeneous
spectrum and whitespace spectrum is that devices operating in the whitespaces will experience
additional interference due to primary transmissions. This compounds the problem of power “price”
by making a unit of power worth *less* the closer it is to the primary transmitter.
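This second effect drops straight out of the Shannon formula: the primary's signal sits in the interference floor, so the same unit of power buys less rate near the primary. The numbers below are illustrative.

```python
import math

# Sketch: a unit of secondary power is worth *less* near the primary because
# the primary's transmission raises the interference floor (values are
# illustrative, in arbitrary linear units).
NOISE = 1.0

def rate(power, primary_interference):
    # Shannon capacity in bits/s/Hz with the primary in the noise floor
    return math.log2(1 + power / (NOISE + primary_interference))

far = rate(10.0, 0.0)   # far from the primary: clean floor
near = rate(10.0, 9.0)  # near the primary: strong primary signal

assert near < far  # identical power, strictly lower data rate
```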
- At this point, it seems as though it may be important to explicitly incorporate fairness
into the objective function if fairness is our goal. As a result, we will look at the
most fair objective function: the maxmin data rate. This objective function seeks to provide
all locations with the same maximal quality-of-service guarantee.
- The optimal solution will have the following properties:
- Locations which are inside of the protected region on all channels are not considered.
- However, any location which it outside of the protected region on at least one channel
will be given the quality-of-service guarantee, even if it is an “expensive” location.
- Finally, each location should seek to use its “cheapest” channel first and only
migrate to “expensive” channels if it has no other option. This will help maximize
the overall system utility and thus the maxmin rate.
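The "cheapest channel first" rule amounts to a greedy fill: rank the available channels by price and spill onto pricier ones only once cheaper ones are exhausted. The prices, per-channel caps, and demand below are invented for the sketch.

```python
# Sketch of "cheapest channel first" at one location: the per-channel caps
# stand in for how much power the interference budget makes available on
# each channel (all numbers are illustrative).
prices = {"blue": 0.02, "red": 0.50}  # cost per unit power on each channel
caps = {"blue": 30.0, "red": 30.0}    # power obtainable on each channel
demand = 40.0                         # total power the location wants

allocation = {}
remaining = demand
for ch in sorted(prices, key=prices.get):  # cheapest channel first
    take = min(caps[ch], remaining)
    allocation[ch] = take
    remaining -= take

# The cheap blue channel is filled to its cap before the expensive red
# channel is touched at all.
assert allocation == {"blue": 30.0, "red": 10.0}
```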
Rather than using a toy model as in our previous examples, we will test this solution using
real-world TV tower data and the ITU propagation model. For simplicity, we have restricted
ourselves to the line shown but we expect our results to hold in two dimensions as well.
Here we see in blue the protected regions of each TV tower along the line so you can
see there are a lot more towers near the east and west coasts than in the center.
Here we see the variation in this one-dimensional slice of the United States. On the left, in
Berkeley, there are very few whitespace channels. As we travel east through the less-populated
states, we see that there are more channels available. The number of channels decreases
again as we near the populous east coast.
- Now that we’ve seen the setup, let’s take a look at the objective function. The
goal is to maximize the minimum data rate across locations by varying the secondary
transmit powers. At each location, we consider the *total* data rate available rather than
the rate achievable on each channel. To estimate the data rate, we use the theoretical Shannon
capacity formula, which as you can see in the graph here is logarithmic with SNR.
- The only constraints in our problem are those of primary protection (also called the aggregate
interference constraints). These take the form of weighted sums of the powers at each
location, so we see that the constraints are linear in nature.
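A heavily simplified version of this optimization can be solved by bisecting on the common rate: a target rate is feasible if the power it requires at every location fits inside the linear interference budget. This toy assumes one channel per location and made-up weights; it does not reproduce the talk's actual algorithms or the ITU-based constraints.

```python
import math

# Toy maxmin-rate solver by bisection (illustrative only; one channel per
# location, assumed weights and noise levels, single interference budget).
w = [0.50, 0.10, 0.02]   # interference weight of each location's power
noise = [2.0, 1.0, 1.0]  # noise + primary interference at each location
BUDGET = 1.0             # primary's aggregate interference budget

def feasible(rate):
    """Can every location achieve `rate` (bits/s/Hz) within the budget?
    Shannon: rate = log2(1 + p/noise)  =>  p = noise * (2**rate - 1)."""
    needed = sum(wl * nl * (2 ** rate - 1) for wl, nl in zip(w, noise))
    return needed <= BUDGET

lo, hi = 0.0, 20.0
for _ in range(60):  # bisect on the common guaranteed rate
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
maxmin_rate = lo

# In this one-constraint toy the answer has a closed form:
# rate* = log2(1 + BUDGET / sum(w*noise)), which the bisection recovers.
expected = math.log2(1 + BUDGET / sum(wl * nl for wl, nl in zip(w, noise)))
assert abs(maxmin_rate - expected) < 1e-9
```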
- The results of the optimization problem are shown in this graph. For now, let’s
look at just the far-left points and I’ll explain the x axis in a minute. The y axis
is the maxmin data rate achievable with the chosen power allocation. I’m not going to
go into the details of the algorithms but essentially we created a set of algorithms
which solve the optimization problem in slightly different ways. There are two different types
of awareness that these algorithms can have: spatial and frequency awareness. Spatial awareness
simply means that you’re aware of what other secondaries “nearby” are doing in your
channel. In particular, the spatially-aware algorithms take aggregate interference into
account and therefore protect the primaries, unlike the others which do not guarantee primary
protection. An algorithm which is frequency-aware makes intelligent decisions with regards to
which channels to use based on the relative cost of power. Of the algorithms that are
spatially aware, we see that the frequency-aware algorithm does much better than the others.
This is because it recognizes the interplay between channels rather than treating each
as an all-or-nothing game. - Now let’s look at the x axis. In order
to rule out any pathological cases in our toy model, we allowed each algorithm to remove
the minimizing point in an iterative way. We see that the context-aware algorithm is
still far outperforming the other algorithms.
What we’ve seen is that with the right APIs, the databases can actually turn into an agile
spectrum-as-a-service architecture that not only creates more value but also paves the
way for real-time spectrum markets when the whitespaces become congested. For now, we
can try to give good defaults by creating context-aware regulations which will be evaluated
in real-time by the “databases.”