Okay. So far in the course, we have mainly chit-chatted about things. We've seen some models, and we now have a model of a cruise controller, or at least of how the controller input affects the velocity of a car. We see it here: x dot was c over m times u, where x was the velocity, u was the applied input, c was some unknown parameter, and m was the mass of the vehicle. We also talked a little bit about what we want the controller to do. So now, let's start designing controllers. Let's actually do it. No more excuses.
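As a side note, here is a minimal sketch of that model in code; the numerical values and the Euler time step are placeholders I am assuming for illustration, not values from the lecture.

```python
# Minimal sketch of the cruise-control model x_dot = (c/m) * u, one Euler step.
# The values of C, M, and DT are assumed placeholders, not lecture values.
C = 1.0      # unknown engine/transmission parameter (assumed)
M = 1000.0   # mass of the vehicle in kg (assumed)
DT = 0.01    # integration time step in seconds (assumed)

def step(x: float, u: float) -> float:
    """Advance the velocity x by one Euler step under control input u."""
    x_dot = (C / M) * u
    return x + DT * x_dot
```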
What we want, of course, is for x to approach r. And recall, again, that r was the reference velocity that we wanted the car to get to, and x is the actual velocity. Typically, in control design, you talk about asymptotic properties, which is fancy speak for what happens when t goes to infinity. So what we want is that, after a while, x should approach r: the velocity should approach the reference velocity. Another way of saying that is that the error, the mismatch or imbalance between the two velocities, should approach 0. That's what we want.
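In symbols, with the error written as e = r − x (so that a positive error means the reference is bigger than the state, as used below), the goal is:

```latex
% Asymptotic tracking: the error between reference and velocity vanishes.
e(t) = r - x(t), \qquad
\lim_{t \to \infty} e(t) = 0
\;\Longleftrightarrow\;
\lim_{t \to \infty} x(t) = r
```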
So, I am going to give you a controller here. This is attempt 1. I have picked some values for, you know, how hard I want to hit the gas pedal. And I'm going to say that if the error is positive, meaning the reference is bigger than the state, which means we're driving slower than we should, then let's hit the gas. If the error is negative, meaning the actual velocity of the car is greater than the reference velocity, so we're going too fast, let's brake. And if we're perfect, let's do nothing. Fine.
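As a sketch, attempt 1 in code might look like this; the value 100 for u max is mentioned later in the lecture, and the rest is a placeholder:

```python
# Bang-bang control law: full gas or full brake depending on the sign of
# the error e = r - x, and zero input only when the error is exactly zero.
U_MAX = 100.0  # maximum control effort (the lecture later uses 100)

def bang_bang(r: float, x: float) -> float:
    """Return the control input u for reference r and measured velocity x."""
    e = r - x
    if e > 0:        # driving slower than the reference: hit the gas
        return U_MAX
    elif e < 0:      # driving faster than the reference: brake
        return -U_MAX
    return 0.0       # error exactly zero: do nothing
```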
So, take a second to stare at this and see what you think. Is this going to work or not? Okay, the second is up; let's take a look. Yeah, it works beautifully. I set the reference velocity to 70, so it's up here; here is 70. This is the actual velocity of the car, and look at what the car is doing. It's starting down somewhere, increasing up to 70, and then remaining flat around 70. So that's awesome. It's doing what it should be doing. Now, I'm calling this bang-bang control, and that's actually a technical term for doing things like switching between u max and negative u max. You're switching between two extremes. So this seems to be easy peasy, and there's no need to take a course on controls and mobile robots. Now, let's see what the control signal is actually doing. Let's see what the control values were that generated this nice and pretty plot. Well, they look like this.
This, ladies and gentlemen, is miserable. Even though the car was doing the right thing in terms of the velocity, I had u max be 100, so negative u max is minus 100, and first of all, we are accelerating up for a while, until we hit the right velocity. And then we start switching wildly between plus and minus 100. Well, when the error was 0, u was supposed to be 0, but the error is never going to be exactly 0. That's just not going to happen, and this is bad. Because what's going on? Well, first of all, we get a really bumpy ride. We're going to be tossed around in the car, backwards, forwards, backwards, forwards, because of all the accelerations that are being induced by these extreme control signals. We're also burning out the actuators. We're asking the car to respond extremely aggressively, and for no good reason. I mean, we're basically doing a lot of work when we're very close to perfect. So this is actually not a particularly good control design, and the problem is exactly this overreaction to small errors. Even though the error is tiny, as long as it's positive, we're slamming the gas as hard as we can. So we somehow need to change this design.
So, how shall we do that? Well, the easiest thing to do is to say, you know what, when the error is small, let's have the control signal be small. In fact, here's my second attempt: u is k times e, for some positive k, where e is the error. Positive error means we're going too slow, so u should be positive. Negative error means we're going too fast, so u should be negative. So this is a much cleaner design. It's what's called a smooth feedback law. It's actually linear feedback in the error, and this seems to be much more reasonable, because small errors yield small control signals, which is what we wanted. Nice and smooth. We're not going to fluctuate wildly in our controller. And, in fact, this is called a P-regulator, where P stands for proportional, because the control signal, the input u, is directly proportional to the error through this control gain k.
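A minimal sketch of this second attempt; the particular gain value is an assumption, since the lecture does not specify one:

```python
# Proportional (P) regulator: the control signal is proportional to the error.
K = 5.0  # control gain k > 0 (assumed value for illustration)

def p_regulator(r: float, x: float) -> float:
    """Return u = k * e, where e = r - x is the tracking error."""
    return K * (r - x)
```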
So, here is a completely different and possibly better way of doing it. This is what the P-regulator in action looks like. It's nice and smooth, right? It even seems stable. Stable, again, we haven't really defined, but clearly we're not blowing up, of course. So, nice and smooth and stable. Now, here is a little problem. You see what it says up here? It says 60, and I had my reference be 70. So even though we're nice and smooth, we actually did not hit the target value. The reference signal was supposed to be 70, and we got to 58 or so. So even though we're stable and smooth, we're not achieving tracking. And here is the problem. I actually added a term to the model, a term that reflects wind resistance. Here is the acceleration of the car, and this is our new term. Well, if we're going really, really fast, we're going to encounter wind resistance, so I added a little bit of wind resistance. This says that if we're going fast in the positive direction, then we're getting a negative force, meaning we're being slowed down a little bit, and gamma is some coefficient that, again, we don't know. This was the model I used when I simulated the controller, and somehow the P-regulator wasn't strong enough to deal with it. In fact, let's see what happens.
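Here is a sketch of that simulation, assuming the wind resistance enters as a term −gamma·x (which matches the steady-state expression derived next); all numerical values are placeholders chosen only to make the steady-state offset visible:

```python
# P-control of the cruise model with wind resistance:
#   x_dot = (c/m) * u - gamma * x,   u = k * (r - x)
# All parameter values below are illustrative assumptions, not lecture values.
C, M, GAMMA = 1.0, 1000.0, 0.05
K = 100.0
R = 70.0            # reference velocity
DT, T_END = 0.01, 200.0

x = 0.0
t = 0.0
while t < T_END:
    u = K * (R - x)                      # proportional control law
    x += DT * ((C / M) * u - GAMMA * x)  # Euler step of the model with drag
    t += DT

# Compare the simulated steady state with the predicted ck/(ck + m*gamma) * r.
predicted = C * K / (C * K + M * GAMMA) * R
print(f"simulated x = {x:.2f}, predicted steady state = {predicted:.2f}, r = {R}")
```

With these placeholder numbers the velocity settles well below the reference of 70, which is exactly the offset computed below.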
At steady state, and steady state means when nothing changes anymore, and as you recall from the plot, after a while x stopped varying. At steady state, x is not varying. Another way of saying that is that the time derivative of x is 0. So at steady state, x is not varying, which means that this term here has to be equal to 0, and this is the model, right? Well, I know what u is: u is equal to k times the error, which is r minus x. So I'm plugging in u there, and I'm saying that this thing has to be equal to 0. Well, if I write down the corresponding equation that says this term here has to be equal to 0, then I get this. Well, I can do even better than that. What I get, let me get rid of the red stuff there, is that x is now going to be ck divided by ck plus m gamma, times r, and this, with all these coefficients being positive, is always strictly less than r. Which means that at steady state, the velocity is not going to make it up to the reference velocity. And we can see that if we make k really, really big, then these two terms are going to dominate and we're going to get closer and closer to having this complicated thing go to r. So as k becomes bigger, we're getting closer to r, which means we're using a stronger gain. But we're never, for any finite value of the gain, going to actually make it up to the reference.
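Written out, the steady-state computation goes like this (assuming the drag enters the model as ẋ = (c/m)u − γx, consistent with the result above):

```latex
% Steady state: \dot{x} = 0, with \dot{x} = \frac{c}{m}u - \gamma x
% and the P-regulator u = k(r - x).
0 = \frac{c}{m}\,k\,(r - x) - \gamma x
\;\Longrightarrow\; ck\,r = (ck + m\gamma)\,x
\;\Longrightarrow\; x = \frac{ck}{ck + m\gamma}\,r < r
\quad \text{for } c, k, m, \gamma > 0,
\qquad \frac{ck}{ck + m\gamma} \to 1 \text{ as } k \to \infty .
```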
So something is lacking, and next time, we're going to see what it is that is lacking to actually achieve tracking and stability.