the world is fundamentally dynamic, changing, and unknown to the robot, so it does not make sense to over-plan and think very hard about how to act optimally given these assumptions about what the world looks like. That may make sense if you're designing controllers for an industrial robot at a manufacturing plant, where the robot is going to repeat the same motion over and over and over again. You're going to do spot welding, and you're going to produce the same motion 10,000 times in a single day. Then you'll over-plan. Then you'll make sure that you're optimal. But if a robot is out exploring an area where it doesn't know exactly what's going on, you don't want to spend all your computational money on finding the best possible way to move, because it's not actually going to be best, because the world is not what you thought it was. So
the key idea to overcome this, which is quite standard in robotics, is to simply develop a library of useful controllers. These are controllers that do different things, like going to landmarks or avoiding obstacles. We saw one that tried to make robots drive towards the center of gravity of their neighbors. Basically, we can have a library of these useful controllers, behaviors if you will, and then we switch among these behaviors in response to what the environment throws at us. If all of a sudden an obstacle appears, then we avoid it. If we see a power outlet and we're low on battery, then we go and recharge. So we're switching to different controllers in response to what is going on. So what I would like to do is to start designing some behaviors, just to see how what we learned in module one about control design can be used to build some behaviors. So let's assume we
have our differential-drive mobile robot.
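To make this concrete, here is a minimal sketch of that model in Python. The function name and the Euler-integration step size are my own choices for illustration; the kinematics themselves are the standard unicycle equations, with forward speed v and turn rate omega as the inputs:

```python
import math

def unicycle_step(x, y, phi, v, omega, dt):
    """One Euler-integration step of the unicycle model:
    x' = v cos(phi), y' = v sin(phi), phi' = omega."""
    x += v * math.cos(phi) * dt
    y += v * math.sin(phi) * dt
    phi += omega * dt
    return x, y, phi

# Drive straight along the x-axis for one second at v = 1 m/s:
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = unicycle_step(*pose, v=1.0, omega=0.0, dt=0.01)
# pose is now roughly (1.0, 0.0, 0.0)
```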
And to make matters a little easier up front, we're going to assume that the velocity, the speed, is constant: v naught. We're not going to change how quickly the robot is moving. So what we can change is how we're steering. You're basically sitting in a car on cruise control, where the velocity isn't changing, and you steer it. That's your job. And the question is, how should you actually steer the robot around? So this is the equation that governs how the input omega affects the state that we're interested in, in this case phi, which is the heading of the robot: phi dot is equal to omega. Okay,
so let's say that we have our yellow triangle robot; it's a unicycle or differential-drive robot. It's headed in direction phi, so this is the direction it's in. And, for some reason, we have figured out that we want to go in this direction, phi desired, or phi sub d. Maybe there is something interesting over here that we're interested in. So we want to drive in this direction. Well, how should we actually do this? Well, phi dot is equal to omega. So our job, clearly, is that of figuring out what omega is equal to, which is the control input. Alright, so, how do
we do that? Well, you know what? We have a reference, phi desired; in module one we called references r, right? We have an error that compares the reference, phi desired, to what the system is doing, in this case phi. So it's comparing the headings. So we have an error, we have a reference, and, you know what, we have dynamics: phi dot is equal to omega. So we have everything we had towards the end of module one, and we even know how to design controllers for that. How should we do that? Well, we saw PID, right? That's the only controller we've actually seen. So why don't we try a PID regulator? That seems like a perfectly useful way of building a controller. So, you know what,
omega is Kp times e, where Kp is the proportional gain. This responds to what the error is right now; you make Kp large and it responds quicker, but you may induce oscillations. Then you have the integral of the error: the integral of e(tau) d tau, times Ki, which is the integral gain. And this integral has the nice property that it's integrating up all these tiny little tracking errors that we may have, and after a while the integral becomes large enough that it pushes the system up to no tracking error. That's a very good feature of the integral term, even though, as we saw, we need to be aware of the fact that a big Ki can actually also induce oscillations. And then we could have a D term, Kd times e dot, where Kd is the gain for the derivative part. This makes the system very responsive, but it can become a little bit oversensitive to noise. So will this work? No, it won't. And
I will now tell you why. In this case we're dealing with angles, and angles are rather peculiar beasts. Let's say that phi desired is 0 radians, and my actual heading, phi, is 100 pi radians. Then the error is minus 100 pi radians, which means this is a really, really large error, so omega is going to be ginormous. But that doesn't seem right, because 100 pi radians is the same as zero radians, right? So the error should actually be zero, and we should not be naive when we're dealing with angles. In fact, this is something we should be aware of: angles are rather peculiar beasts, and we need to be careful when dealing with them. There are famous robot crashes that have been caused by this, where the robot starts spinning like crazy even though it shouldn't, because it thinks it's 200 pi off instead of zero radians off. So what do we do about it?
Well, the solution is to ensure that the error is always between minus pi and pi. So minus 100 pi, well, that's the same thing as zero. We need to ensure that whatever we're doing, we're staying within minus pi and pi. And there is a really easy way of doing that: we can use the arctangent-two function, atan2. Every language has a library with it, and it operates the same way; it's a way of producing angles between minus pi and pi. C++ has it, Java has it, MATLAB has it, Python has it. So you can always do this. And how do you do it? Well, you take the angle, which may now be a million pi, and you take sine of it, comma, cosine of it. This is the same as saying that we're really doing arctan, so I'm going to write this as tan inverse of sine e over cosine e. But with arctan, or tan inverse, it's not clear what that always returns, whereas with atan2, where you have a comma in it, you always get something that's within minus pi and pi.
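As a quick sketch of this trick in Python (the helper name wrap_angle is my own; the atan2 idiom is exactly the one just described):

```python
import math

def wrap_angle(e):
    """Wrap an angle in radians into [-pi, pi] using the atan2 trick:
    e' = atan2(sin(e), cos(e))."""
    return math.atan2(math.sin(e), math.cos(e))

# A heading error of 100*pi radians is really no error at all:
print(wrap_angle(100 * math.pi))    # approximately 0
print(wrap_angle(3 * math.pi / 2))  # approximately -pi/2, the short way around
```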
So here's what you need to do: whenever you're dealing with angles and acting on them, it's not a bad idea to wrap one of these atan2 lines around them, to ensure that you are indeed getting values that aren't crazy. So, with the little caveat that we're going to use e prime instead of e, the PID regulator will work like a charm. Okay, so here is an
example problem. We've already seen this picture; this is the problem of driving the robot, which is the little blue ball, to the goal, which is the sun, apparently. Let's see if we can use this PID control design on omega to design controllers that take us to the sun, or to the goal. And since we're dealing with obstacles, we're dealing with goal locations, and we're also talking about behaviors, at the minimum we really need two behaviors: go-to-goal and avoid-obstacles. So what we're going to do over the next couple of lectures is develop these behaviors, and then deploy them on a robot and see if they're any good or not.
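As a preview of where this is going, here is a minimal sketch of a go-to-goal steering behavior in Python. The gain value, the goal position, and the choice of a pure P-regulator (dropping the I and D terms) are illustrative assumptions on my part, not the course's final design; what it does use from the lecture is the wrapped heading error and the constant-speed unicycle:

```python
import math

def go_to_goal_omega(x, y, phi, gx, gy, Kp=2.0):
    """Steering input for a go-to-goal behavior: point the heading phi
    at the goal (gx, gy) using a P-regulator on the heading error,
    wrapped into [-pi, pi] with the atan2 trick."""
    phi_d = math.atan2(gy - y, gx - x)      # desired heading toward the goal
    e = math.atan2(math.sin(phi_d - phi),   # wrapped heading error e'
                   math.cos(phi_d - phi))
    return Kp * e                           # omega = Kp * e'

# Simulate the constant-speed unicycle driving toward a goal at (3, 2):
x, y, phi, v0, dt = 0.0, 0.0, 0.0, 1.0, 0.01
for _ in range(500):
    omega = go_to_goal_omega(x, y, phi, 3.0, 2.0)
    x += v0 * math.cos(phi) * dt
    y += v0 * math.sin(phi) * dt
    phi += omega * dt
# After 5 simulated seconds the robot should be loitering near the goal
# (it cannot stop, since the speed is held constant at v0).
```

Note the design constraint the constant-speed assumption imposes: the robot can steer at the goal but never park on it, which is one reason real deployments switch among behaviors rather than run one controller forever.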