In this last segment, we'll cover how interactions over a network topology affect how the nodes innovate and solve problems.
As you might well imagine, networks play a crucial role in how agile organizations are, how adept they are at innovating, and how adept they are at coordinating. And first we'll look at innovation. And, of course, as with everything else, we're going to look at a very simple model that we hope may represent some aspect of the real world. And this is the basic conundrum.
When individuals are innovating, they can collaborate to varying degrees. At one extreme you have everyone in the same room brainstorming, and that can mean that you can really rapidly get new ideas and prototypes, etcetera. But you also have a danger of groupthink, right? Where everyone starts to think the same thing, and it may not be the very best solution that they could have achieved. On the other extreme, you could have a network where no one is connected to anyone else, and everyone is working individually. You might well imagine that their progress would be very slow, because they can't take advantage of what others have figured out. They basically have to figure out everything by themselves.
On the other hand, many people independently working on a problem means that you have many diverse approaches to the same problem, and perhaps one of those approaches will be optimal. So, in a network context, if we have most individuals in the room together communicating, the network might look something like this: everyone is connected to everyone else. On the other hand, and this isn't quite the same thing, here this individual is not connected to anyone else. But here, individuals will only talk locally to a few colleagues. They won't be broadcasting every solution throughout the company. So.
One way to model the problem space that these individuals are moving in is with Kauffman's NK model. You have an N-dimensional space, which just means you have a string of N bits, each zero or one. So, for example, if you have a car, the first bit can be: is the car black or white? The second bit could be: is it a sedan or a station wagon? The third bit could be: does it have a powerful engine or a modest engine, etcetera, right? So they're just trying to figure out what the best car could be, for example. And the key parameter, K, describes the smoothness of the fitness landscape. That is, if you flipped one of those bits, if you make it a sedan as opposed to a station wagon, is that completely going to change how well the car sells, or is it going to be pretty similar sales, based on the fact that the many other bits are the same, including the engine, the amount of room inside, automatic versus manual transmission, etcetera?
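The landscape just described can be sketched in a few lines. This is a minimal, illustrative version of Kauffman's NK idea, assuming the common convention that each bit's fitness contribution depends on itself and its K neighboring bits; the random contribution tables are my own stand-in, not the course's implementation:

```python
import random

def nk_fitness(bits, k, seed=0):
    """Kauffman-style NK fitness: each bit's contribution depends on itself
    plus its k successors (wrapping around the string). Fixed random
    contribution tables, determined by the seed, define one landscape."""
    n = len(bits)
    rng = random.Random(seed)
    # One contribution table per bit position, indexed by the (k+1)-bit pattern.
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]
    total = 0.0
    for i in range(n):
        pattern = 0
        for j in range(k + 1):
            pattern = (pattern << 1) | bits[(i + j) % n]
        total += tables[i][pattern]
    return total / n  # average contribution, so fitness lies in [0, 1]

# With k = 0, flipping one bit changes only one of the n contributions;
# with k = n - 1, a single flip changes every contribution.
car = [1, 0, 1]  # e.g. black?, sedan?, powerful engine?
print(nk_fitness(car, k=0))
```

Flipping one bit with K = 0 moves the fitness by at most 1/N, while with larger K a single flip touches many contribution terms at once.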
Just another way of looking at this: if K is small, at the extreme K equals zero, you have a very smooth landscape. And I can't really represent the N-dimensional space, so this is just a rough sketch so that you can imagine what's going on. It means that there is some combination that's optimal, and if you tweak it a little bit, you'll just get a little ways away from that optimum. The more you tweak it, the further away you'll get, but this also means that it's fairly easy to find the optimum, alright? So you flip the color from white to black and, boop, here you are, you've gotten to a better solution, etcetera. It's not that once you flip the color, the fitness might jump way over here, right? If you've just flipped one bit, you're only moving a little bit. On the other extreme, where K is large, so that each bit interacts with many of the others, precisely this unpredictability happens. You flip one bit and all of a sudden you have a completely different fitness. So making a station wagon instead of a sedan means that people hate it, for example. And a medium K means that there's some smoothness: if you're here and you flip a bit, most likely you'll end up having somewhat similar fitness. Now, the general problem is that here,
this solution is easy to find, because you're basically just doing hill climbing, right? You keep flipping bits until you get to the top. Over here, you just don't know: being up here doesn't tell you whether you're any closer to, say, the solution over here, because if you flip a bit, you might end up having vastly different fitness. So there's no way to really orient toward the ultimate goal. With this intermediate smoothness, you can make some progress and find a local optimum, this peak in fitness. But that may mean you're stuck, because if you just change one bit at a time, you try going down here and you think, oh no, that's not going to work, I'd better stay here, right? So it's not as easy as here, where if you continue climbing to the top, you're bound to find the global optimum. Here you can climb, climb, climb and find a pretty good solution, but miss that there was actually a better solution out here. So there are going to be some update rules
for our network such that we can simulate the process of either innovating or
imitating. Those are the two choices that each node has. Each node is going to start out with a random bit string.
And it's going to do one of two things. If it notices that a node that it's
connected to has a better solution, it's going to copy that solution.
Otherwise, if none of its neighbors are doing any better, then it's going to flip
a random bit and see how it does. So, either it's going to imitate or it's
going to innovate. I've set up a simple ring lattice.
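Before looking at the demo, the imitate-or-innovate rule can be written down directly. This is a hedged sketch, where the function name and structure are mine and `fitness` stands for whatever landscape score is being used, not the actual course model:

```python
import random

def update_node(node, neighbors, solutions, fitness, rng=random):
    """One update for `node`: copy the highest-scoring neighbor's bit string
    if it beats our own (imitate); otherwise flip one random bit of our own
    solution (innovate). Assumes `neighbors` is non-empty."""
    best = max(neighbors, key=lambda v: fitness(solutions[v]))
    if fitness(solutions[best]) > fitness(solutions[node]):
        solutions[node] = solutions[best][:]   # imitate the better neighbor
    else:
        i = rng.randrange(len(solutions[node]))
        solutions[node][i] ^= 1                # innovate: flip bit i
```

On a ring lattice, each node's `neighbors` would simply be the two nodes adjacent to it on the ring.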
And each of the colors represents a different bit string.
A different solution. And you can see that there is variation in
the fitness of these different bit strings.
The highest one is 0.662, and the average fitness is something like 0.5.
Now what we're going to do is we're going to allow the nodes to update according to
these rules. If one of their neighbors is doing better,
they're going to copy that solution. Otherwise, they're going to innovate on
their own. So we're going to let it run.
And eventually... okay, right: everyone has converged on the same solution. But this solution, you'll notice, is 0.764, which is higher than the original maximum. So there is some innovation taking place, and at the end all of the agents (agents, because this is agent-based modeling), all of the nodes, converge on the same solution. And you can see that everyone has the same fitness. Now, let's see if we go to somewhat of an
extreme here and add 1,000 additional edges.
So this is what it's going to look like. It's going to look very dense.
This time we have a best solution of 0.673, and we're going to let the nodes update
again. And you can see almost instantaneously
they all converged, but the value they converged on, let's see if you can answer
the quiz question. Relative to the regular lattice, the network with many additional random connections has, on average: slower convergence to a local optimum; smaller improvement in the best solution relative to the initial maximum; or more oscillation between solutions.
What you should have seen was that, when you have many ties between the nodes (let's go back), they actually converged more quickly. But what they converged on was not as good, because they didn't spend enough time innovating.
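The whole experiment, a ring lattice, optional random extra edges, and the imitate-or-innovate updates, can be condensed into a small stdlib-only simulation. Everything here, including the toy random fitness function and the parameter names, is an illustrative assumption rather than the assignment's actual model:

```python
import random

def simulate(n_nodes, extra_edges, n_bits=8, steps=200, seed=0):
    """Imitate-or-innovate dynamics on a ring lattice, optionally densified
    with random shortcut edges. Fitness is a toy stand-in: a fixed random
    score per bit pattern, not a true NK landscape."""
    rng = random.Random(seed)
    score = {}  # lazily assigned random fitness per bit-string pattern
    def fitness(bits):
        key = tuple(bits)
        if key not in score:
            score[key] = rng.random()
        return score[key]

    # Ring lattice: each node linked to its two immediate neighbors.
    nbrs = {i: {(i - 1) % n_nodes, (i + 1) % n_nodes} for i in range(n_nodes)}
    for _ in range(extra_edges):  # densify with random shortcuts
        u, v = rng.randrange(n_nodes), rng.randrange(n_nodes)
        if u != v:
            nbrs[u].add(v); nbrs[v].add(u)

    sols = {i: [rng.randint(0, 1) for _ in range(n_bits)] for i in range(n_nodes)}
    for _ in range(steps):
        i = rng.randrange(n_nodes)
        best = max(nbrs[i], key=lambda j: fitness(sols[j]))
        if fitness(sols[best]) > fitness(sols[i]):
            sols[i] = sols[best][:]   # imitate a better neighbor
        else:
            j = rng.randrange(n_bits)
            sols[i][j] ^= 1           # innovate: flip one random bit
    return max(fitness(s) for s in sols.values())
```

Comparing, say, `simulate(30, 0)` against `simulate(30, 200)` over many seeds is one way to probe the trade-off between slow independent innovation and fast but premature convergence.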
So, in the assignment, you'll be trying to find whether there's maybe a sweet spot for both innovation and quick convergence. The final problem that I'll address is
that of graph coloring. There's a very simple application of this
which is to coloring adjacent countries on a map.
Printers at some point had only a fixed number of colors that they could use, and they didn't want adjacent countries to have the same color, because then you might not be able to tell where Sweden ends and Norway begins if they were both the same peach color, right?
So you might imagine that the nodes are countries and the edges are drawn between
countries that share a border. So, this was studied by Michael Kearns, who is teaching Coursera's Networked Life course. So this is actually an ad for his class, but I feel safe doing that now that we're already in week six, and hopefully you're already committed to my class.
But they did this very neat experiment, with human subjects.
So they sat a bunch of human students in a lab, each one of them sitting in front of a computer. And they varied how much information the students could have about what colors the others had chosen. So each student was a node, placed on some topology, and they were either told what colors their neighbors had, or they could actually see the colors and the topology for the whole network. And the question is: how quickly could they converge on a solution?
And with human subjects, it turned out that more edges helped the graph coloring problem to be solved more quickly, but in scale-free networks, sometimes the presence of hubs actually made it more difficult.
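The local decision each subject faces can be sketched as a tiny update rule. This is my own minimal version for intuition, not the software used in Kearns's experiments:

```python
import random

def recolor(node, neighbors, colors, palette, rng=random):
    """If `node` conflicts with any neighbor, re-pick a color that no
    neighbor is using (falling back to a random color if none is free)."""
    taken = {colors[v] for v in neighbors}
    if colors[node] in taken:
        free = [c for c in palette if c not in taken]
        colors[node] = rng.choice(free if free else palette)

def conflicts(edges, colors):
    """Number of edges whose two endpoints share a color."""
    return sum(colors[u] == colors[v] for u, v in edges)
```

Repeatedly sweeping `recolor` over the nodes until `conflicts` reaches zero mimics the subjects settling into a proper coloring; the number of sweeps needed is the "time to solution" the experiment measures.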
So, I'll leave most of this for the assignment.
But here you have, you know, two adjacent nodes have the same color.
So, they need to fix that. And indeed in the final solution.
I'm not sure I'm pointing to the same one. Now,
Yes I am. So.
One of them is green and one of them is blue.
So, they switched. And this one that's also blue switched to
red. So here I have a small world topology. I can generate this a few times.
I can also have a very low rewiring probability.
And to start off with many neighboring nodes have the same color.
Now I'm just going to update this a couple of times until it settles into a solution.
And what you'll want to do, for example with the small-world topology, is to vary the amount of rewiring and see what happens with the average time to the solution. So are nodes better able to solve when
there are fewer or more of these rewired links, or is there a sweet spot in
between? So let's see if we can answer this
question. As the rewiring probabilities increase
from zero to one, the following happens, the solution time decreases, the solution
time increases, or the solution time initially.
Decreases, then increases again. If you manage to get the model to run,
what you should see is that when it's a regular lattice, the amount of time it
takes for, you know, the solution to propagate all the way around, is fairly
long. As you add.
More edges, the solution time should drop. And then, interestingly enough, it increases again once you have many additional edges.
Potentially that's because the problem itself becomes more difficult: a node that has more neighbors has a tougher time figuring out what color to be.
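For that part of the assignment, the rewired ring lattice can be generated along the following lines. This is a Watts-Strogatz-style sketch in which the function names and rewiring details are my assumptions, since the assignment's model may differ:

```python
import random

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest clockwise neighbors
    (so every node ends up with 2k links in total)."""
    return {(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)}

def rewire(edges, n, p, seed=0):
    """With probability p per edge, replace its far endpoint with a random
    node, avoiding self-loops and duplicate rewired edges."""
    rng = random.Random(seed)
    out = set()
    for u, v in sorted(edges):  # sorted for reproducibility
        if rng.random() < p:
            w = rng.randrange(n)
            while w == u or (u, w) in out:
                w = rng.randrange(n)
            out.add((u, w))
        else:
            out.add((u, v))
    return out
```

Sweeping `p` from zero to one and recording the average time to a proper coloring at each value is the experiment described above.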
So, just to recap: what we learned this week was that network topology can have a profound influence on the processes that occur on these networks. It can influence what state the nodes converge to (will the whole network become infected or not? will they adopt one opinion, or will many different opinions persist?), and it can also affect how quickly that state is reached, that is, the time to convergence. And we saw different behavior depending on
what the process was. Was it just simple diffusion, or was it complex contagion?
Was there an element of coordination or collective action?
Were the nodes doing something else, for example learning or innovating?
And how did what their neighbors were doing influence their own development? And ultimately, how well did the network
as a whole perform from the nodes making these individual yet interlinked actions?