<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hi Murray,<br>
</p>
<div class="moz-cite-prefix"><br>
See comments below.</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">On 8/31/21 6:19 AM, Murray Altheim via
DPRGlist wrote:<br>
&lt;snip&gt;<br>
> So if both LEFT and RIGHT were reading the same distance, the
effect<br>
> would be the same as Roam, since the LEFT and RIGHT would
balance<br>
> each other out. If LEFT were closer, it would steer away from
the<br>
> left at an arc inversely proportional to the distance. This
idea is<br>
> similar to the idea bandied about in the group of having a
PID (or<br>
> P or PD) controller for steering, except I'm using it for
swerving<br>
> away from an obstacle.<br>
<br>
Exactly. That's sort of the way the SensComp ultrasonic sensor
behaviors on my robots work. The further away the detection, the
milder the correction. (Though this can get a little squirrelly
for environments with lots of close-in and far detections.) <br>
</div>
<div class="moz-cite-prefix"><br>
</div>
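<div class="moz-cite-prefix">Something like that could look like
this minimal Python sketch (the function name, max range, and gain
constant are placeholders of my own, not code from my robots):<br>
<pre>
MAX_RANGE_CM = 50.0   # ignore detections beyond this (made-up value)
GAIN = 10.0           # tuning constant for correction strength (made-up)

def steering_correction(left_cm, right_cm):
    """Return a turn rate: positive turns right, negative turns left.
    Nearer detections produce stronger corrections away from that
    side; equal LEFT and RIGHT distances cancel out, as in Roam."""
    left = min(left_cm, MAX_RANGE_CM)
    right = min(right_cm, MAX_RANGE_CM)
    # Inverse-distance weighting: close obstacles dominate, far
    # detections produce only a mild correction.
    return GAIN * (1.0 / left - 1.0 / right)
</pre>
</div>
<div class="moz-cite-prefix"><br>
</div>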
<div class="moz-cite-prefix">Also remember that, if you are doing
differential steering thru a rotation and a velocity, the angle of
the turn "automatically" gets tighter as the velocity decreases,
for a constant rotation, even without further adjustments. <br>
<br>
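In (velocity, rotation) terms the turn radius is just v/omega, so
for the same rotation, half the velocity gives half the radius. A
quick Python sketch of the usual mixing (WHEEL_BASE_M is a made-up
constant for illustration):<br>
<pre>
WHEEL_BASE_M = 0.2  # distance between drive wheels (made-up value)

def wheel_speeds(v, omega):
    """v in m/s, omega in rad/s; returns (left, right) wheel speeds.
    Turn radius is v/omega, so holding omega constant while v
    decreases tightens the turn with no further adjustment."""
    left = v - omega * WHEEL_BASE_M / 2.0
    right = v + omega * WHEEL_BASE_M / 2.0
    return left, right
</pre>
<br>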
As I understand, you have set a trigger of &lt;50cm for your roam
behavior, for the center detector, which then scales velocity by
distance, stopping at 25cm. Is that also the trigger for
differencing the left and right detectors? Or is that steering
happening all along? So when the robot hits the &lt;50cm point,
it begins turning towards the longer of the two side sensors as
modulated by the distance to the detection. And that distance is
measured by the shorter of the two side sensors?<br>
<br>
Picture the robot in an empty room driving towards a shoebox.
With LEFT and RIGHT equal, it's not clear which way to turn. On
the other hand, it really doesn't make any difference. Once the
robot begins turning either direction, let's say right, the center
detector will drift off the shoebox, but by then the left side
detector will see it and continue the turn right. Same would
happen turning left. In this way the turn is "handed-off" from
detector to detector as the robot turns.<br>
<br>
So one could use a random number or some history function, etc., to
determine which way to turn in the absence of side detections. My
preferred method is to continue turning whatever way the robot is
currently turning, as determined from velocity of the wheels.
That has additional advantages when the obstacle is first seen by
a side looking detector and thereafter the center detector, which
happens often.<br>
<br>
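In code that decision is just a comparison of the two wheel
velocities. A sketch (the +1/-1 direction tokens are arbitrary):<br>
<pre>
def pick_turn_direction(left_wheel_vel, right_wheel_vel, default=1):
    """Continue whichever way the robot is already turning, as read
    from the wheel velocities; fall back to a default (random, or
    from history) when driving straight. +1 = right, -1 = left."""
    if left_wheel_vel > right_wheel_vel:
        return 1    # left wheel faster: already turning right
    if right_wheel_vel > left_wheel_vel:
        return -1   # right wheel faster: already turning left
    return default  # no current turn: use the fallback
</pre>
<br>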
As the robot continues to turn, eventually the side looking sensor
will also move off the shoebox. At that time the avoidance
behavior can release control of the robot to whatever lower
priority behavior(s) it is subsuming, until the next trigger.<br>
<br>
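The release test itself can be as simple as this (a sketch;
TRIGGER_CM is a placeholder matching your 50cm trigger):<br>
<pre>
TRIGGER_CM = 50.0   # made-up trigger/release distance

def avoid_released(left_cm, center_cm, right_cm):
    """True once all three detectors have moved off the obstacle;
    until then the avoidance behavior keeps subsuming the lower
    priority behaviors."""
    return min(left_cm, center_cm, right_cm) >= TRIGGER_CM
</pre>
<br>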
&lt;snip&gt;<br>
> <br>
> Your intuition about the interplay between the different
sensors is<br>
> what I'm trying to work out, e.g., what you've called "out of
band<br>
> signaling" is basically where I'm at right now.<br>
<br>
The interplay between the different sensors is what is handled by
the subsumption architecture. Out-of-band signaling should be
the exception, not the rule. It refers to things like behaviors
inquiring into each others states, and so forth. But the main
interplay twixt the behaviors is subsumption. <br>
<br>
In fact, subsumption purists like the late Randy Dumse would argue
that out-of-band signaling breaks the subsumption paradigm, which
is a slippery slope, and therefore should not be used at all.
Paul Bouchier has suggested the same thing. YMMV.<br>
<br>
Perhaps this might be a helpful hint. There is an implicit
assumption in, ah, subsumption. An implicit subsumption
assumption. That is: that high priority behaviors control the
robot LESS often than low priority behaviors. <br>
<br>
This goes counter to what we normally mean by the word "priority"
in the context of an operating system or hardware interrupt
controller, and so can be the source of some confusion.<br>
<br>
In the case of an operating system or interrupt controller, there
are multiple processes vying for computer resources, basically CPU
cycles. Processes with higher priority get more: more CPU cycles,
executed more often, allowed to run longer, given more immediate
access, pushed to the front of the queue, allowed to interrupt
other interrupts, etc. This is not the way
the word "priority" is used in a subsumption architecture. <br>
<br>
In a subsumption stack like the one I described at the June DPRG
meeting, the highest priority behavior is the escape or bumper
behavior. When it is controlling the robot, the outputs of all
lower priority behaviors are ignored. They are not saved for
execution at a later time. Their outputs are discarded. We say
these behaviors are "subsumed" until the escape behavior releases
control of the robot. Hence the term "subsumption." It's right
there in the name.<br>
<br>
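The arbiter that implements this is a short loop. A minimal
Python sketch (the behavior names in the comment are placeholders
for the stack I described):<br>
<pre>
def arbitrate(behaviors):
    """behaviors is ordered highest priority first. Each behavior
    returns None when it doesn't want control, or a (v, omega)
    command when it does. The first behavior that wants control
    wins; the outputs of everything below it are simply
    discarded, never queued for later."""
    for behavior in behaviors:
        command = behavior()
        if command is not None:
            return command
    return (0.0, 0.0)   # nothing wants control: stop

# e.g. arbitrate([bumper, ir_avoid, sonar_avoid, default_drive])
</pre>
<br>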
But when the next highest priority IR and SONAR behaviors are
enabled, the bumper behavior rarely controls the robot, because
the robot doesn't run into many things. On some long runs it never
bumps
into anything at all, and the bump behavior never controls the
robot. So here is the HIGHEST priority subsumption behavior using
the LEAST amount of "system resources." <br>
<br>
The IR avoidance behavior is the next highest, and the main reason
we're not hitting things. It is detecting them and turning the
robot away and so is controlling the robot more often than the
bumper, but still only in the presence of obstacles, so less often
than the next lower priority, the sonar. And so on.<br>
<br>
On the bottom end is the lowest priority behavior, the default
behavior. Be it driving in a straight line, wall following, or
waypoint navigation, it wants to control the robot all the time
that higher priority behaviors are not in control. That is why it
is the default behavior. It never releases control because there
are no lower priority behaviors. So it tends to control the
robot all the time that nothing else wants to, which is the <b>lion's
share</b> of the time.<br>
<br>
Thus the lowest priority subsumption behavior uses the MOST system
resources while the highest priority uses the LEAST. This
guarantees that all behaviors cooperate in controlling the robot.<br>
<br>
Hope this is useful,<br>
dpa<br>
<br>
<br>
</div>
</body>
</html>