But the question might be: is it really that simple? Is it just that you can map out the network, figure out which nodes to target, and you're done? Well, when some very smart physicists started doing this on, for example, the physical topology of the Internet, computer scientists didn't really take it that seriously, because they said, well, how these ISPs, how these different nodes are connected, is dynamic. You have routing tables, and depending on where the packets are getting through and where they're not, the routing tables are continuously updated. So you could, in fact, inflict what you think is a lot of damage, and the network simply organically adapts to what is going on.
Similarly, ideally you would map out a terrorist network or a criminal network, for example, individuals involved in the drug trade. However, there was recently a report on a ten-year-long effort by the US to target exactly this kind of drug trade network, and they found that no matter how many of those nodes they removed by incarcerating them, sending them to prison, the network adapted: new people simply came in and filled that space. So it's not that the network is sitting there saying, oh, okay, I lost this node, I'm not going to do anything about it. These networks are very dynamic. So yes, perhaps during a short time period, if you can simultaneously take out a lot of the nodes and the network has no way of replacing them, then you can predict that you're going to be able to break up the network into small non-functioning parts.
But in general, you have to be careful about this particular assumption of how much power you have to bring the network down. So let's look at a particular example, namely the power grid. Here again, we're going to dismiss some assumptions that we had with other networks, and we're going to introduce some new simplifying assumptions.
What we're talking about here is that where the electricity is produced is not always where it is consumed, and you need intermediate transmission stations to bring the power from the generators to the end distribution stations, which then deliver the electricity locally to customers. And as you are probably aware, failures of the power grid can be quite dramatic. If a power plant or a station goes down at one location, you can get what's called a cascading failure, where entire swaths of the country will be without power, because this is a network, and what happens at one node actually influences what happens at the other nodes. So, in a way, the damage here is greater. It's not just that by removing one node you've lost the paths that go through that node and the edges involved with that node. Because electricity flows essentially simultaneously through all possible paths, the removal of one node actually affects all of the rest of the network to a greater or lesser extent. So let's see how that happens.
What happens is that each node has some load and a capacity, and if the load exceeds the capacity, the node will fail, and its load is redistributed to its neighbors. I'll follow this paper by Kinney et al., and this is Reka Albert from the Barabasi-Albert model who is on this paper as well.
So again, the nodes are generators, transmission substations, and distribution substations, and the edges are high-voltage transmission lines. There are basically thousands of these nodes and about 20,000 edges.
So, what happens? Well, first of all, you see this straight line? You might think, oh, that's a power law. Well, no, it's not. Actually, the power grid is an exponential network, and you can tell because this axis is linear and this one is logarithmic. That means that there aren't really these huge hubs, right? The highest-degree node has fewer than 25 connections, which kind of makes sense. Can you really imagine one of these transmission stations being wired to a thousand others? You just can't really imagine the topology of high-voltage wires doing that. So we have an exponential network.
And now we need to know: where is the electricity flowing? It's not just that it flows through all paths; we're going to see which paths it is really flowing through, depending on their capacity. So we're going to have a certain efficiency for each edge, and from that, the efficiency of a path that is composed of a bunch of these edges. The simplifying assumption, in the end, is that we're going to assume the electricity only goes along the most efficient path. In practice it actually goes through all of them, but primarily through the high-efficiency ones.
So we can compare three different paths. Path A has two edges, each with efficiency 0.5, which gives a path efficiency of one-fourth. Path B has three edges, each with efficiency 0.5, which gives a path efficiency of one-sixth. And then path C has two edges, one with efficiency zero and the other with efficiency one, and the efficiency of the whole path is zero, because if part of the path lets no electricity through, then that path lets no electricity through. And it's nice that, instead of averaging these efficiencies, we have that the shorter path is more efficient.
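The arithmetic behind these three examples can be sketched in a few lines of Python. The rule that reproduces the numbers above is a harmonic sum: the path efficiency is the reciprocal of the sum of the reciprocals of the edge efficiencies (the function name here is just for illustration).

```python
def path_efficiency(edge_efficiencies):
    """Path efficiency as the harmonic sum of edge efficiencies:
    1 / sum(1/e). Any zero-efficiency edge zeroes out the whole path."""
    if any(e == 0 for e in edge_efficiencies):
        return 0.0
    return 1.0 / sum(1.0 / e for e in edge_efficiencies)

print(path_efficiency([0.5, 0.5]))       # path A: 0.25
print(path_efficiency([0.5, 0.5, 0.5]))  # path B: ~0.1667
print(path_efficiency([0.0, 1.0]))       # path C: 0.0
```

Note how this rule automatically penalizes longer paths (path B loses to path A) without any averaging.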
So we can then look at the efficiency of the overall network. We're going to look at all the paths between all the generating stations and all the end distribution stations, and take the efficiency of the most efficient path between i and j. That's how we'll measure how well the network as a whole is doing at delivering the electricity.
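One convenient property of the harmonic-sum rule is that finding the most efficient path reduces to an ordinary shortest-path search if you use 1/efficiency as the edge weight. Here is a minimal sketch of that idea; the graph encoding and function names are my own for illustration, not from the paper.

```python
import heapq

def best_path_efficiency(graph, src, dst):
    """Efficiency of the most efficient path from src to dst.
    graph: {node: [(neighbor, edge_efficiency), ...]} (hypothetical encoding).
    With the harmonic-sum rule, weight = 1/efficiency turns this into a
    standard shortest-path (Dijkstra) problem."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return 1.0 / d if d > 0 else float("inf")
        if d > dist.get(u, float("inf")):
            continue
        for v, eff in graph.get(u, []):
            if eff == 0:
                continue  # a dead edge carries no electricity
            nd = d + 1.0 / eff
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return 0.0  # no usable path at all

def network_efficiency(graph, generators, distributors):
    """Average, over generator-distributor pairs, of the best path efficiency."""
    pairs = [(g, d) for g in generators for d in distributors if g != d]
    return sum(best_path_efficiency(graph, g, d) for g, d in pairs) / len(pairs)

# Tiny example: generator g -> substation a -> distributor d, each edge 0.5
grid = {"g": [("a", 0.5)], "a": [("g", 0.5), ("d", 0.5)], "d": [("a", 0.5)]}
print(network_efficiency(grid, ["g"], ["d"]))  # 0.25
```

Averaging the best-path efficiencies over all generator-to-distributor pairs gives a single number for how well the grid as a whole is delivering electricity, which is what the simulations below track as nodes are removed.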
What we're also going to say is that the capacity of each node is its initial load times alpha, and you want alpha to be greater than one, because you want the capacity to be greater than the initial load. Then, as the dynamics unfold, if a node's load is less than its capacity, its neighbors are unaffected, right? The node is functioning as usual, and the neighbors are just carrying whatever they would normally be carrying. However, if the load of this node exceeds the node's capacity, then that node shuts down temporarily. As long as its load exceeds its capacity, the node will be in a degraded state, and its neighbors are instead going to have to carry some of the load, which degrades their efficiency by a factor reflecting how much the original node was overloaded. And it's this redistribution which can then produce the cascading failure: one node exceeding its capacity puts that load on its neighbors, who in turn can fail, and those nodes in turn might distribute their load to their neighbors, etcetera.
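To make the chain reaction concrete, here is a deliberately simplified sketch of the redistribution dynamic. Note the assumptions: unlike the paper, which degrades the efficiencies of an overloaded node's edges, this toy version simply splits a failed node's load evenly among its surviving neighbors; all names are illustrative.

```python
def cascade(adj, load0, alpha, initial_failures):
    """Toy cascading-failure model (simplified from the edge-efficiency
    version in Kinney et al.): capacity = alpha * initial load; a failed
    node's load is split evenly among surviving neighbors, which may
    push them over their own capacity in turn. Returns the failed set."""
    capacity = {n: alpha * load0[n] for n in load0}
    load = dict(load0)
    failed = set()
    frontier = set(initial_failures)
    while frontier:
        for n in frontier:
            failed.add(n)
            alive = [m for m in adj[n] if m not in failed]
            if alive:
                share = load[n] / len(alive)
                for m in alive:
                    load[m] += share
        frontier = {n for n in load
                    if n not in failed and load[n] > capacity[n]}
    return failed

# A chain of four stations; knock out station 1 and watch the cascade.
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
loads = {n: 1.0 for n in chain}
print(cascade(chain, loads, alpha=1.3, initial_failures={1}))  # all four fail
print(cascade(chain, loads, alpha=2.0, initial_failures={1}))  # only node 1
```

In this toy chain, knocking out one node with alpha at 1.3 takes everything down, while enough spare capacity (alpha = 2.0) keeps the failure contained, which is exactly the overload-tolerance effect explored next.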
What they measured in this paper, by simulation but on the real topology, is what happens if you just remove nodes at random, assuming one or more of them have failed for some reason. And this is when you target: you get to pick which one fails. You can see that this is worse, because the network efficiency drops down further, and this is the random case.
And here is the overload tolerance. So, look at this figure closely and see what you can conclude about the network efficiency as a function of this overload tolerance that you've given to the network. Specifically, the quiz question is: how much higher would the average capacity of each node need to be, relative to the initial load, in order for the network to be basically unaffected by the removal of a node, whether a targeted removal or a random failure?
So hopefully what you saw was that increasing the average capacity just 30% or 40% above the typical load, what we call the initial load, would render the network relatively safe to the random and even targeted failure of a particular node.
Now, why do, or why did in the past, these cascading failures occur? Well, that was because power companies had little incentive to provide this additional capacity, right? They were charging for the load they were carrying, not for the additional capacity they could provide, which would kick in in situations where there were failures. But research such as this can show that the additional capacity need not be that large, and it would still render the network pretty resilient to failure. Now, to recap.
We saw that resilience really depends on topology, and not just the degree distribution, which rendered the network more resilient to random attack if the degree distribution was skewed, but less resilient to targeted attack in that case. It also depends on what happens when a node fails. It could be that that's it: a node fails, but all the other nodes are unaffected. Or it can have this cascading effect, which is what we see with the power grid. Or it can have the opposite effect, which is what we saw with, say, crime networks, where the node is quickly replaced. In fact, someone was probably eagerly waiting to occupy that spot, and so the damage isn't even as bad as simply having that node no longer there.