Let's say I had a set of vectors.
Let's say one of the vectors is the vector 2, 3, and then
the other vector is the vector 4, 6.
And I just want to answer the question: what is the span of
these vectors?
And let's assume that these are position vectors.
What are all of the vectors that these
two vectors can represent?
Well, if you just look at it, and remember, the span is just
all of the vectors that can be represented by linear
combinations of these.
So it's the set of all the vectors I can get by taking some constant c1 times this vector plus some other constant c2 times that vector, all the possibilities I can represent when I plug in a bunch of different real numbers for c1 and c2.
Now, the first thing you might realize is that, look, this vector 4, 6 is just the same thing as 2 times this vector 2, 3.
So I could just rewrite it like this.
I could just rewrite it as c1 times the vector 2, 3 plus c2
times the vector-- and here, instead of writing the vector
4, 6, I'm going to write 2 times the vector 2, 3, because
this vector is just a multiple of that vector.
So I could write c2 times 2 times 2, 3.
I think you see that this is equivalent to the 4, 6.
2 times 2 is 4.
2 times 3 is 6.
Well, then we can simplify this a little bit.
We can rewrite this as just c1 plus 2c2, all of that, times our vector 2, 3.
And this is just some arbitrary constant.
It's some arbitrary constant plus 2 times some other
arbitrary constant.
So we can just call this c3 times my vector 2, 3.
So in this situation, even though we started with two
vectors, and I said, well, you know, the span of these two
vectors is equal to all of the vectors that can be
constructed with some linear combination of these, any
linear combination of these, if I just use this
substitution right here, can be reduced to just a scalar
multiple of my first vector.
And I could have gone the other way around.
I could have substituted this vector as being 1/2 times
this, and just made any combination of scalar multiple
of the second vector.
But the fact is, instead of talking about linear combinations of two vectors, I can reduce this to just scalar multiples of one vector.
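If you want to check this reduction numerically, here is a quick NumPy sketch; the constants c1 and c2 are just example values I made up:

```python
import numpy as np

v1 = np.array([2, 3])
v2 = np.array([4, 6])  # v2 is exactly 2 * v1

c1, c2 = 1.5, -0.5     # arbitrary example constants
combo = c1 * v1 + c2 * v2

# the same combination collapses to a single scalar multiple of v1
c3 = c1 + 2 * c2
print(np.allclose(combo, c3 * v1))  # True
```

Whatever constants you pick, the combination is always (c1 + 2c2) times the vector 2, 3.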
And we've seen in R2 what the scalar multiples of one vector look like, especially if they're position vectors.
For example, this vector 2, 3.
It's 2, 3.
It looks like this.
All the scalar multiples of that vector are just going to lie along this line.
So 2, 3, it's going to be right there.
They're all just going to lie along that line right there,
so along this line going in both directions forever.
And if I take negative values of 2, 3, I'm
going to go down here.
If I take positive values, I'm going to go here.
If I get really large positive values, it's
going to go up here.
But I can just represent the vectors, and when you put them
in standard form, their arrows essentially would
trace out this line.
So you could say that the span of my set of vectors-- let me
put it over here.
The span of the set of vectors 2, 3 and 4, 6 is
just this line here.
Even though we have two vectors,
they're essentially collinear.
They're multiples of each other.
I mean, if this is 2, 3, 4, 6 is just this right here.
It's just that longer one right there.
They're collinear.
These two things are collinear.
Now, in this case, when we have two collinear vectors in
R2, essentially their span just reduces to that line.
You can't represent some vector like--
let me do a new color.
You can't represent this vector right there with some
combination of those two vectors.
There's no way to kind of break out of this line.
So there's no way that you can represent everything in R2.
So the span is just that line there.
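One way to see this numerically is a least-squares check: if a target vector is not in the span, no combination of the two vectors can reach it. The target here is a hypothetical vector I chose because it is off the line:

```python
import numpy as np

A = np.column_stack(([2, 3], [4, 6]))  # columns are our two vectors
target = np.array([1.0, -1.0])         # a hypothetical vector off the line

# least-squares: find the combination of the columns closest to target
c, _, rank, _ = np.linalg.lstsq(A, target, rcond=None)
gap = np.linalg.norm(A @ c - target)

print(rank)  # 1: the columns are collinear, so the span is just a line
print(gap)   # nonzero: no combination reaches the target
```

The rank of 1 says the two columns only contribute one direction, and the nonzero gap says the target really is outside that line.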
Now, here's a related idea. Notice, you had two vectors, but they kind of reduced to one vector when you took their linear combinations.
The related idea here is that we call this set-- we call it
linearly dependent.
Let me write that down: linearly dependent.
This is a linearly dependent set.
And linearly dependent just means that one of the vectors
in the set can be represented by some combination of the
other vectors in the set.
A way to think about it is whichever vector you pick that
can be represented by the others, it's not adding any
new directionality or any new information, right?
In this case, we already had a vector that went in this
direction, and when you throw this 4, 6 on there, you're
going in the same direction, just scaled up.
So it's not giving us any new dimension, letting us break
out of this line, right?
And you can imagine in three-space, if you have one vector that looks like this and another vector that looks like this, two vectors that aren't collinear, they're going to define a kind of two-dimensional space, a plane.
Let's say that this is the plane defined
by those two vectors.
In order to define R3, a third vector in that set can't be
coplanar with those two, right?
If this third vector is coplanar with these, it's not
adding any more directionality.
So this set of three vectors will
also be linearly dependent.
And another way to think about it is that these two purple
vectors span this plane, span the plane that they define,
essentially, right?
When we say those two vectors span this plane, it means that any vector in the plane, going in any direction, can be represented by a linear combination of this vector and this vector. So if this third vector is on that plane, it can be represented as a linear combination of that vector and that vector.
So this green vector I added isn't going to add anything to
the span of our set of vectors and that's because this is a
linearly dependent set.
This one can be represented by a sum of that one and that one
because this one and this one span this plane.
In order for the span of these three vectors to kind of get
more dimensionality or start representing R3, the third
vector will have to break out of that plane.
It would have to break out of that plane.
And if a vector is breaking out of that plane, that means
it's a vector that can't be represented anywhere on that
plane, so it's outside of the span of those two vectors.
Where it's outside, it can't be represented by a linear
combination of this one and this one.
So if you had a set with just this one, this one, and this one, just those three, none of these other things that I drew, that would be linearly independent.
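You can test coplanarity with a rank computation. This is a hypothetical 3D example I constructed so that the third vector lies in the plane of the first two:

```python
import numpy as np

# hypothetical example: u and v define a plane, and w lies in it
u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 1.0])
w = u + 2 * v  # coplanar with u and v by construction

print(np.linalg.matrix_rank(np.column_stack((u, v, w))))  # 2: dependent

# a vector breaking out of the plane makes the set independent
w2 = np.cross(u, v)  # normal to the plane, so definitely not in it
print(np.linalg.matrix_rank(np.column_stack((u, v, w2))))  # 3
```

A rank of 2 with three vectors means one of them is redundant; swapping in a vector that breaks out of the plane brings the rank up to 3.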
Let me draw a couple more examples for you.
That one might have been a little too abstract.
So, for example, if I have the vectors 2, 3 and I have the
vector 7, 2, and I have the vector 9, 5, and I were to ask
you, are these linearly dependent or independent?
So at first you say, well, you know, it's not trivial.
Let's see, this isn't a scalar multiple of that.
That doesn't look like a scalar multiple of either of
the other two.
Maybe they're linearly independent.
But then, if you kind of inspect them, you see that if we call this vector v1 and this one v2, then v1 plus v2 is equal to v3.
So vector 3 is a linear combination of
these other two vectors.
So this is a linearly dependent set.
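Here's a small NumPy check of that observation: the sum works out exactly, and the rank of the matrix with these vectors as columns confirms the dependence:

```python
import numpy as np

v1 = np.array([2, 3])
v2 = np.array([7, 2])
v3 = np.array([9, 5])

print(np.array_equal(v1 + v2, v3))  # True: v3 = v1 + v2

# rank of the matrix with these as columns: 2, fewer than the
# 3 vectors we have, so the set is linearly dependent
M = np.column_stack((v1, v2, v3))
print(np.linalg.matrix_rank(M))  # 2
```

Any time you have more vectors than the rank, at least one of them is a combination of the others.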
And if we were to show it, draw it in kind of two space,
and it's just a general idea that-- well, let me see.
Let me draw it in R2.
There's a general idea that if you have three two-dimensional
vectors, one of them is going to be redundant.
Well, one of them definitely will be redundant.
For example, if we do 2, 3, if we do the vector 2, 3, that's
the first one right there.
I draw it in the standard position.
And I draw the vector 7, 2 right there, I could show you
that any point in R2 can be represented by some linear
combination of these two vectors.
We can even do a kind of a graphical representation.
I've done that in the previous video, so I could write that
the span of v1 and v2 is equal to R2.
That means that every vector, every position here can be
represented by some linear combination of these two guys.
Now, the vector 9, 5, it is in R2.
It is in R2, right?
Clearly.
I just graphed it on this plane.
It's in our two-dimensional real number space, or I guess we could say it's in our set R2.
It's there.
It's right there.
So we just said that anything in R2 can be represented by a
linear combination of those two guys.
So clearly, this is in R2, so it can be represented as a
linear combination.
So hopefully, you're starting to see the relationship
between span and linear independence or linear
dependence.
Let me do another example.
Let's say I have the vectors-- let me do a new color.
Let's say I have the vector-- and this one will be a little
bit obvious-- 7, 0, so that's my v1, and then I have my
second vector, which is 0, minus 1.
That's v2.
Now, is this set linearly independent?
Is it linearly independent?
Well, can I represent either of these as a
combination of the other?
And really when I say as a combination, you'd have to
scale up one to get the other, because there's only two
vectors here.
If I am trying to add up to this vector, the only thing I
have to deal with is this one, so all I can
do is scale it up.
Well, there's nothing I can do.
No matter what I multiply this vector by, you know, some
constant and add it to itself or scale it up, this term
right here is always going to be zero.
It's always going to be zero.
So nothing I can multiply this by is going to
get me to this vector.
Likewise, no matter what I multiply this vector by, the
top term is always going to be zero.
So there's no way I could get to this vector.
So both of these vectors, there's no way that you can
represent one as a combination of the other.
So these two are linearly independent.
And you can even see it if we graph it.
One is 7, 0, which is like that.
Let me do it in a non-yellow color.
7, 0.
And one is 0, minus 1.
And I think you can clearly see that if you take a linear
combination of any of these two, you can represent
anything in R2.
So the span of these, just to kind of get used to our notion
of span of v1 and v2, is equal to R2.
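To back up that claim numerically: the rank is 2, so any vector in R2 can be reached by solving for the coefficients. The target here is just a hypothetical example vector:

```python
import numpy as np

A = np.column_stack(([7, 0], [0, -1]))
print(np.linalg.matrix_rank(A))  # 2: the columns are linearly independent

# so any target in R2 is reachable; this target is a hypothetical example
target = np.array([3.0, 5.0])
c = np.linalg.solve(A, target)   # c1 * (7, 0) + c2 * (0, -1) = target
print(np.allclose(A @ c, target))  # True
```

Because the matrix is full rank, `np.linalg.solve` finds coefficients for any target you pick.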
Now, this is another interesting point to make.
I said the span of v1 and v2 is R2.
Now what is the span of v1, v2, and v3 in
this example up here?
I already told you.
I already showed you that this third vector can be
represented as a linear combination of these two.
It's actually just these two summed up.
I can even draw it right here.
It's just those two vectors summed up.
So it clearly can be represented as a linear
combination of those two.
So what's its span?
Well, the fact that this is redundant means that it
doesn't change its span.
It doesn't change all of the possible linear combinations.
So its span is also going to be R2.
It's just that these were more vectors than you needed to span R2.
R2 is a two-dimensional space, and you needed two vectors.
So this was kind of a more efficient way of providing a
basis, and I haven't defined basis formally, yet, but I
just want to use it a little conversationally, and then
it'll make sense to you when I define it formally.
This provides a better basis, or this provides a basis, kind
of a non-redundant set of vectors that can represent R2.
While this one, right here, is redundant.
So it's not a good basis for R2.
Let me give you one more example in three dimensions.
And then in the next video, I'm going to make a more
formal definition of linear dependence or independence.
So let's say that I had the vector 2, 0, 0.
Let me make a similar argument that I made up there: the
vector 2, 0, 0, the vector 0, 1, 0, and the vector 0, 0, 7.
We are now in R3, right?
Each of these are three-dimensional vectors.
Now, are these linearly dependent or linearly independent?
Well, there's no way with some combination of these two
vectors that I can end up with a non-zero term right here to
make this third vector, right?
Because no matter what I multiply this one by and this
one by, this last term is going to be zero.
So this is kind of adding a new direction
to our set of vectors.
Likewise, there's nothing I can do-- there's no
combination of this guy and this guy that I can get a
non-zero term here.
And finally, no combination of this guy and this guy that I
can get a non-zero term here.
So this set is linearly independent.
And if you were to graph these in three dimensions, you would
see that none of these-- these three do not
lie on the same plane.
Obviously, any two of them lie on the same plane, but if you were to actually graph it, you'd have 2, 0, 0 along, let's say, the x-axis.
Then you have this, 0, 1, 0.
Maybe that's the y-axis.
And then you have 0, 0, 7.
It would look something like this.
So along your three-dimensional axes, they almost look like the vectors i, j, k, just scaled up a little bit.
But you can always correct that by just scaling them
down, right?
Because we care about any linear combination of these.
So the span of these three vectors right here, because
they're all adding new directionality, is R3.
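The same rank check works here, and you can also verify the i, j, k observation by scaling the columns down:

```python
import numpy as np

M = np.column_stack(([2, 0, 0], [0, 1, 0], [0, 0, 7]))
print(np.linalg.matrix_rank(M))  # 3: independent, so the span is all of R3

# scaling each column down recovers the standard basis i, j, k
print(np.allclose(M / np.array([2, 1, 7]), np.eye(3)))  # True
```

Full rank in R3 means every three-dimensional vector is some combination of these three.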
Anyway, I thought I would leave you there in this video.
I realize I've been making longer and longer videos, and
I want to get back in the habit of making shorter ones.
In the next video, I'm going to make a more formal
definition of linear dependence, and we'll do a
bunch more examples.