On the Need for Models | Control of Mobile Robots





So now we have seen that control deals
with dynamical systems in generality, and
robotics is one facet of this. Now, what we
haven't done is actually try to make any
sense of what this means in any precise or
mathematical way. And one thing that we're
going to need in order to do this is to come
up with models. Models are going to be an
approximation, and an abstraction, of what
the actual system is doing. And the control
design is going to be made relative to that
model and then deployed on the real system.
But without models we can't really do much
in terms of control design. We would just be
stabbing in the dark without really knowing
what's going on. So, models are actually key
when it comes to designing controllers,
because, if you remember, the question in
control theory is really: how do we pick the
input signal u? So u, again, takes the
reference, compares it to the output, the
measurement, and comes up with a
corresponding course correction to what
the system is doing. And when you pick this
kind of control input, there are a number
of different kinds of objectives. The first one is
always stability. Stability, loosely
speaking, means that the system doesn't
blow up. So, if you design a controller
that makes the system go unstable, then no
other objectives matter, because you have
failed: your robots drive into walls, or
your aerial robots fall to the ground. So
stability is always control objective
number one. Now, once you have that, and
the system doesn't blow up, you may want it
to do something more than not blow up. And
something that we're going to deal with a
lot is tracking. Which means: here is a
reference value, say 14; how do we make our
system go to 14? Or: here is a path; how do
I make my robot follow this path, or how do
I make my autonomous self-driving car
follow a road?
So tracking reference signals is another
kind of objective. assert important type
of objective is robustness in the sense
that. ,, . Since we are dealing with
models when we're doing our design. And
models are never going to be perfect. We
can't overcommit to a particular model.
And we can't have our controller be too
highly dependent on what the particular
parameters in the model. R, model r. So,
what we need to do is to design
controllers that are somewhat immune to
variations across parameters in the model,
for instance. So this is very important.
I'm calling it robustness. a companion to
robustness, in fact one can ague that it's
an aspect of robustness. It's disturbance
rejection, because, at the end of the day.
we are going to be acting on measurements,
and sensors have measurement noise. Things
always happen: if you're flying in the air,
all of a sudden you get a wind gust. Now,
that's a disturbance. If you're driving a
robot, all of a sudden you go from a
linoleum floor to carpet, and now the
friction has changed. So all of a sudden
you have these disturbances that enter
into the system, and your controllers have
to be able to overcome them, at least
reasonable disturbances, for the
controllers to be effective. Now
once you have that, we can wrap other
questions around it, like optimality, which
is not only how do we do something but how
do we do it in the best possible way. And
best can mean many different things: it
could mean how do I drive from point A to
point B as quickly as possible, or using as
little fuel as possible, or while staying
as centered in the middle of the road as
possible. So optimality can mean different
things, and this is typically something we
can do on top of all these other
objectives. And in order to do any of this,
we really need a model to be effective.
So effective control strategies rely on
predictive models, because without them we
have no way of knowing what our control
actions are actually going to do. So, what
do these models look like? Let's start in
discrete time. This means that things
happen at distinct time instances.
In discrete time, what we typically have is
that the state of the system, remember that
x is the state, at time instance k + 1 is
some function of what the state was
yesterday, at time k, and what we did
yesterday: x(k+1) = f(x(k), u(k)). So the
position of the robot tomorrow is a
function of where the robot is today and
what I did today. And then f somehow tells
me how to go from today's state and control
to tomorrow's state. This is known as a
difference equation, because it relates
values of the state across different time
instances.
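As a quick sketch of what iterating a difference equation looks like in code (the dynamics f and the input sequence below are made-up illustration choices, not a system from the lecture):

```python
# Simulate a difference equation x(k+1) = f(x(k), u(k)) by iterating f.
# The dynamics and inputs below are made-up illustration values.

def simulate(f, x0, inputs):
    """Return the state trajectory [x(0), x(1), ..., x(N)]."""
    xs = [x0]
    for u in inputs:
        xs.append(f(xs[-1], u))
    return xs

# Example: the state keeps half its value and adds the input each step.
trajectory = simulate(lambda x, u: 0.5 * x + u, x0=0.0, inputs=[1.0, 1.0, 1.0])
print(trajectory)  # [0.0, 1.0, 1.5, 1.75]
```

The whole model lives in f: given today's state and today's input, it produces tomorrow's state.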
So, that's discrete time, and here's an
example. This is the world's simplest
discrete-time system: a clock, or a
calendar. This is the time now; I add one
second, and that gives the time one second
later: x(k+1) = x(k) + 1. This is clearly a
completely trivial discrete-time system,
but it is a difference equation. It's a
clock, so if you plot it, you see the
state, which is what time it is, as a
function of time. It's silly, but at 1
o'clock the state is one, at 2 o'clock the
state is two, and so forth, and you get a
line with slope one. So, this
would be the world's simplest model. There
are no control signals or anything in
there, but at least it is a dynamic
discrete-time model that describes how a
clock works. Now, the problem we have with
this, though, is that the laws of physics
are all in continuous time. And when we're
controlling robots, we are going to have to
deal with the laws of physics:
Newton is going to tell us that force is
equal to mass times acceleration. Or, if
we're doing circuits, Kirchhoff's laws are
going to relate various quantities to the
capacitances and resistances in the
circuit. So, we're going to have to deal
with things in continuous time, and in
continuous time there is no notion of
"next". But we have something almost
better, and that's the notion of a
derivative, which is not "next", but it
tells us the change with respect to time.
So in
continuous time, we don't have difference
equations. What we have are these things
called differential equations. And right
here you see that the derivative of the
state with respect to time, dx/dt, is some
function of x and u. So this is not telling
me what the next value is; it's telling me
what the change is, the instantaneous
change. And here it's the same thing, but
now I've written x-dot instead of dx/dt.
Time derivatives are very often written
with a dot, and I'm going to use that in
this class. This
actually traces back to the slight
controversy between Newton and Leibniz: in
1684, Leibniz published his idea of the
differential, and Newton, at around the
same time, had the same idea. Well, dx/dt
is Leibniz's notation and the dot is
Newton's notation, and we're going to use
the dot for time derivatives here. The
point is that these are both the same
equation, and they are differential
equations because they relate changes to
the values of the states. So, if you go
back to our clock,
what would the differential equation of a
clock look like? Well, it would be very,
very simple. It would say that the rate of
change of the time is one, which basically
means the clock advances a second every
second. That's what it means. So when I
drew the picture of the discrete-time clock
and drew that line diagonally across it,
what I was really doing was describing
this. So this is the continuous-time clock:
x-dot is equal to 1.
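As a quick sanity check, we can integrate the continuous-time clock numerically and confirm we recover the slope-one line x(t) = x(0) + t. This is just a sketch; the step size below is an arbitrary illustration choice:

```python
# Numerically integrate the continuous-time clock x_dot = 1 and check that
# we recover the slope-one line x(t) = x(0) + t.
# The step size dt is an arbitrary illustration choice.
dt = 0.001
x = 0.0
for _ in range(1000):   # integrate over one second
    x += dt * 1.0       # x_dot = 1: the clock ticks one second per second
print(abs(x - 1.0) < 1e-9)  # True: after one second, x has advanced by one
```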
We are almost always going to need
continuous-time models for our systems, and
in the next couple of lectures we're going
to start developing models of particular
systems. But before we do this, I want to
say a few words about how to go from
continuous time to discrete time. Because
our models are going to be continuous-time
differential equations, but then, when
we're putting this on a robot, we're going
to put it on a computer, and the computer
runs in discrete time. So somehow we need
to map continuous-time models onto
discrete-time models. So
now, if I say that x at sample k is x at
time kΔt, where Δt is some sample time,
then we have taken the k-th sample. What,
then, is x at sample k + 1, which is x at
the (k+1)-th sample time, x((k+1)Δt)? Well,
I need to relate this somehow to the
differential equation. So how do I do this?
It's not always easy to do exactly, but
what you can note is a first-order Taylor
approximation: x(kΔt + Δt), which is
exactly the next sample, is roughly what it
was last time plus the length of the time
interval times the derivative, x(kΔt) +
Δt · ẋ(kΔt). So this is an approximation,
but the cool thing here is that x(kΔt),
well, that's x(k), and ẋ(kΔt), well, that's
f(x(k), u(k)), and then I just have to
multiply by Δt. This gives x(k+1) ≈ x(k) +
Δt · f(x(k), u(k)), which is a way of
getting a discrete-time model from the
continuous-time model. And this is how
we're going to take the things we do in
continuous time and map them onto the
actual implementations on computers that
ultimately run in discrete time.
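The step above is the forward Euler discretization. As a sketch (the dynamics, input, and sample time below are made-up illustration values, not a system from the lecture), here is x(k+1) ≈ x(k) + Δt · f(x(k), u(k)) applied to a simple scalar system:

```python
# Forward Euler discretization: x(k+1) = x(k) + dt * f(x(k), u(k)).
# The dynamics x_dot = -x + u are an arbitrary illustrative choice,
# not a system from the lecture.

def f(x, u):
    return -x + u

def euler_step(x, u, dt):
    """One step of the discrete-time model obtained from x_dot = f(x, u)."""
    return x + dt * f(x, u)

# Drive the system with a constant input u = 1; the state approaches 1.
dt = 0.01
x = 0.0
for _ in range(1000):       # 10 seconds of simulated time
    x = euler_step(x, u=1.0, dt=dt)
print(round(x, 3))          # prints 1.0: the state has settled near u
```

Note that this is only an approximation of the continuous-time system; the smaller the sample time Δt, the closer the discrete-time trajectory tracks the differential equation.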