So now we know not only that the first order of business is to make the system stable, meaning it shouldn't blow up, it should behave well, but we also know what this means: the eigenvalues of the system should have negative real part. And even though we saw some critically stable cases last time, including a rather awesome set of robots slamming into each other, we typically don't want critical stability. We want asymptotic stability, which means that we need strictly negative real parts for all of the eigenvalues. And today we're going to actually achieve that, or try to achieve it, by designing a controller. Because remember this picture: we have our system, x dot = Ax + Bu, where u is the input, and then y = Cx. So whatever we do to our control choice has to depend on the things we have access to, and we don't know the state of the system. But we certainly know the output. So, today
we're going to try something called 'output feedback', which means we're going to take the output of the system and use it to feed directly back in as a way of controlling it. And we're going to start by returning to our old friend, the world's simplest robot, which is just a point on a line whose acceleration we can control. As you hopefully recall, we can write this in state-space form as x dot = [0 1; 0 0] x + [0; 1] u, and the output is y = [1 0] x. This means that the output is the position of the point mass: x1, the first component of x, is the position; x2, the second component, is the velocity; and u directly gives us the acceleration. So now our job is to somehow connect y to u, meaning pick our input in such a way that this system behaves well. And in fact, here is an idea: we want to drive the system to zero, which means stabilize
it. So why don't we move towards the origin? Meaning: the position of the robot is what it is, and let's say that here is the origin, and our job is to drive the robot there. If the position is negative, meaning we're to the left of the origin, we should probably move to the right. If the robot is over there on the right, we should move it to the left, which is the negative direction. So that's a very, very simple idea. And in fact, if we turn it into math, we see that if y is negative, which again corresponds to us being on the left side of the origin, then u should be positive, meaning we move to the right. And similarly, if y is positive, then u should be negative. And
here's a suggestion: let's pick the world's simplest controller that achieves this, simply u = -y. Then y positive means u negative, and y negative means u positive. So let's try this and see what it actually does. What we need to do first is understand: how does this change the system dynamics?
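Before working through the algebra on the board, the setup so far can be written down concretely. This is a minimal sketch using numpy; the starting position and variable names are illustrative assumptions, not from the lecture:

```python
import numpy as np

# World's simplest robot: a point mass on a line, controlled in acceleration.
# State x = [position, velocity]; input u = acceleration; output y = position.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # x_dot = A x + B u
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])   # y = C x

x = np.array([[2.0],          # assume the robot starts 2 units right of the origin
              [0.0]])
y = C @ x                     # measured output: the position
u = -y                        # the proposed output feedback u = -y
print(y[0, 0], u[0, 0])       # position 2.0 gives control -2.0, pushing left
```

So with the robot to the right of the origin, this controller indeed pushes it back towards the origin.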
Because what we really did is set u = -Ky; in our example, K was just 1. But in general, K can have a richer structure: if u and y are multidimensional, then K needs to be a matrix. Now, we know that y = Cx, so we can write the control as u = -KCx. Let's plug this into the differential equation x dot = Ax + Bu: substituting for u, we get x dot = Ax - BKCx, or, putting everything together, x dot = (A - BKC)x. This we can write, if we want to, as A-hat times x. So A-hat is just a new system matrix, and of course our job is to pick K so that the real part of the eigenvalues of A-hat is strictly negative. In other words: pick, if possible, K such that Re(lambda) < 0 for every lambda that is an eigenvalue of the new system matrix A - BKC. So that's really
our job here. And in a way, we already picked K: we said K was equal to 1. Well, let's see what happens in that case. We have x dot = (A - BKC)x, where A = [0 1; 0 0] is the system matrix of our robot, B = [0; 1], K = 1, and C = [1 0]. One thing to note first: 1 times [1 0] is just [1 0]. And if I multiply [0; 1] by [1 0], I get the matrix [0 0; 1 0]. So this is what the whole expression BKC is equal to. Now I can take the A matrix and subtract this matrix to get the right answer. And if I do that, I get x dot = [0 1; -1 0] x.
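That subtraction can be checked mechanically. A minimal sketch with numpy, writing the scalar gain K = 1 from the example as a 1x1 matrix so the dimensions line up:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[1.0]])        # the scalar gain K = 1 as a 1x1 matrix

# BKC = [0; 1] * 1 * [1 0] = [[0, 0], [1, 0]]
A_hat = A - B @ K @ C        # closed-loop system matrix A - BKC
print(A_hat)                 # [[ 0.  1.]
                             #  [-1.  0.]]
```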
And this is for my particular choice of K. Let's check what the eigenvalues of this thing are. Well, you write eig in MATLAB and you get the answer. Or, as we will see in future lectures, you can actually compute them by hand and say something about a clever choice of K that way. But for now, we're just going to plug this directly into our favorite software system, and we find that the eigenvalues are plus and minus j, where j is the square root of negative 1. So is this system asymptotically stable? Well, we have two lambdas, and the real part of both is zero, because the eigenvalues are purely imaginary. And, as I said last time, if I have two imaginary eigenvalues and all the others are well behaved (in this case there are no others), then I have a critically stable system. And in fact, since the eigenvalues have imaginary components, we have already hinted that what we actually end up with are oscillations. So this is a critically stable system, and if I simulate it, what it's doing is going to look like this: the robot is just going back and forth, back and forth. So what's the problem? We clearly did not stabilize it; we know it's not asymptotically stable. In fact, it's just oscillating. Well, here is the problem: when the robot is, let's say, on its way away from the origin, we're pushing it, correctly so, towards the origin. But when it's on its way back, we're still pushing it equally hard, even though it's actually heading there almost by itself. So we're kind of not taking the direction in which the robot is going into account.
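Both claims, the purely imaginary eigenvalues and the resulting back-and-forth motion, are easy to verify numerically. A sketch, where the initial condition and the sample times are arbitrary choices of mine; it uses the fact that for this particular A-hat the matrix exponential is a rotation matrix:

```python
import numpy as np

A_hat = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])     # closed-loop matrix for K = 1

eigvals = np.linalg.eigvals(A_hat)
print(eigvals)                      # purely imaginary: +1j and -1j

def propagate(x0, t):
    # Exact solution of x_dot = A_hat x: exp(A_hat * t) is a rotation matrix,
    # so the state just spins around the origin at constant amplitude.
    R = np.array([[np.cos(t),  np.sin(t)],
                  [-np.sin(t), np.cos(t)]])
    return R @ x0

x0 = np.array([2.0, 0.0])           # start at position 2, velocity 0
for t in [0.0, np.pi / 2, np.pi, 3 * np.pi / 2, 2 * np.pi]:
    print(round(t, 2), np.round(propagate(x0, t), 2))
# The position swings 2 -> 0 -> -2 -> 0 -> 2: back and forth, never settling.
```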
What this actually means is that we are not looking at the velocity, because the velocity is what tells you which direction the robot is going in. So the problem is that we do not take velocity into account. Remember what the state is? The state is position and velocity. So the problem is that we need all of it to stabilize the system: we need the full state information, not just the output. So output feedback like this doesn't quite work; instead, we want to operate on x rather than y. But here, of course, is the problem: how do we do that? We don't even know x; we only know y. How in the world can we design controllers for things that we don't know? Well, as we will see in the next module, it's often possible to figure out x from y. If you think of y in this case as being the position, and x as being position and velocity, then the velocity we can get by measuring two positions after each other and dividing the difference by the time between the measurements. That gives us an estimate of the velocity. So it's clearly possible, in this case, to at least get an estimate of the state from the output. Like I said, this is the topic of the next module, but in the next lecture, which is the last lecture of this module, we'll pretend that we actually have x, revisit the world's simplest robot, and see how we can actually stabilize it if we have all of x and not just y.
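The finite-difference idea for recovering velocity from two position measurements can be sketched as follows; the sampled trajectory and the sampling time dt are illustrative assumptions of mine, not from the lecture:

```python
import math

# Suppose we can only measure the position y(t) of the oscillating robot,
# here y(t) = 2*cos(t), whose true velocity is -2*sin(t).
def y(t):
    return 2.0 * math.cos(t)

dt = 0.01                            # time between two consecutive measurements
t = 1.0
v_est = (y(t + dt) - y(t)) / dt      # forward difference: (y2 - y1) / dt
v_true = -2.0 * math.sin(t)
print(v_est, v_true)                 # the estimate is close to the true velocity
```

The smaller dt is, the better the estimate, at least until measurement noise starts to dominate, which is one reason the next module develops more careful state estimators.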