[Dprglist] KR01 Infrared Behaviours

David P. Anderson davida at smu.edu
Wed Sep 1 10:10:40 PDT 2021


Hi Murray,


See comments below.

On 8/31/21 6:19 AM, Murray Altheim via DPRGlist wrote:
<snip>
 > So if both LEFT and RIGHT were reading the same distance, the effect
 > would be the same as Roam, since the LEFT and RIGHT would balance
 > each other out. If LEFT were closer, it would steer away from the
 > left at an arc inversely proportional to the distance. This idea is
 > similar to the idea bandied about in the group of having a PID (or
 > P or PD) controller for steering, except I'm using it for swerving
 > away from an obstacle.

Exactly.  That's sort of the way the SensComp ultrasonic sensor 
behaviors on my robots work.  The further away the detection, the milder 
the correction.  (Though this can get a little squirrelly for 
environments with lots of close-in and far detections.)

Also remember that, if you are doing differential steering through a 
rotation and a velocity, the turn "automatically" gets tighter as the 
velocity decreases, for a constant rotation, even without further 
adjustments.
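
To illustrate, here is a minimal Python sketch (my own, not code from 
either of our robots) of a rotate/translate mixer for a differential 
drive.  With the rotation term held constant, the turning radius is 
proportional to the velocity, so the turn tightens on its own as the 
robot slows:

# Hypothetical rotate/translate mixer for a differential-drive base.
def mix(velocity, rotation):
    """Return (left_wheel, right_wheel) speeds from a translate/rotate pair."""
    left = velocity - rotation
    right = velocity + rotation
    return left, right

# With rotation held constant, the effective turn radius scales with velocity:
for velocity in (100, 50, 25):        # arbitrary speed units
    rotation = 10                     # held constant
    left, right = mix(velocity, rotation)
    print(f"v={velocity:3d}  wheels=({left},{right})  relative radius={velocity / rotation:.1f}")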

As I understand it, you have set a trigger of <50cm on the center 
detector for your roam behavior, which then scales velocity by distance, 
stopping at 25cm.  Is that also the trigger for differencing the left and 
right detectors, or is that steering happening all along?  So when the 
robot hits the <50cm point, it begins turning towards the longer of the 
two side sensors, as modulated by the distance to the detection.  And 
that distance is measured by the shorter of the two side sensors?
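
To make sure I'm reading that right, here is a little Python sketch of 
what I'm imagining (the 50cm trigger and 25cm stop come from your 
description; the linear scaling and the inverse-distance steering term 
are only my guesses, so correct me where I'm wrong):

TRIGGER_CM = 50.0   # center detector distance at which roam begins scaling
STOP_CM    = 25.0   # distance at which forward motion stops

def roam_velocity(center_cm, cruise_speed):
    """Scale forward speed linearly between the trigger and stop distances."""
    if center_cm >= TRIGGER_CM:
        return cruise_speed
    if center_cm <= STOP_CM:
        return 0.0
    return cruise_speed * (center_cm - STOP_CM) / (TRIGGER_CM - STOP_CM)

def steer_rotation(left_cm, right_cm, gain=1.0):
    """Turn away from the nearer side; positive means turn right."""
    # Nearer obstacles push harder because of the inverse distances.
    return gain * (1.0 / left_cm - 1.0 / right_cm)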

Picture the robot in an empty room driving towards a shoebox.  With LEFT 
and RIGHT equal, it's not clear which way to turn.  On the other hand, 
it really doesn't make any difference.  Once the robot begins turning in 
either direction, let's say right, the center detector will drift off the 
shoebox, but by then the left side detector will see it and continue the 
turn to the right.  The same would happen turning left.  In this way the 
turn is "handed off" from detector to detector as the robot turns.

So one could use a random number or some history function, etc., to 
determine which way to turn in the absence of side detections.  My 
preferred method is to continue turning whatever way the robot is 
already turning, as determined from the velocities of the wheels.  That 
has additional advantages when the obstacle is first seen by a 
side-looking detector and only afterwards by the center detector, which 
happens often.
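
In illustrative Python (not my actual robot code, and the names are 
mine), that rule looks roughly like this:

def choose_turn_direction(left_cm, right_cm, left_wheel_vel, right_wheel_vel):
    """Return +1 to turn right, -1 to turn left."""
    if left_cm != right_cm:
        # Turn toward the longer (more open) of the two side readings.
        return +1 if right_cm > left_cm else -1
    # Side readings equal, as with the shoebox dead ahead: keep turning
    # whichever way the robot is already turning, read from the wheels.
    # (Left wheel faster than right means the robot is turning right.)
    return +1 if left_wheel_vel >= right_wheel_vel else -1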

As the robot continues to turn, eventually the side-looking sensor will 
also move off the shoebox.  At that point the avoidance behavior can 
release control of the robot to whatever lower priority behavior(s) it 
is subsuming, until the next trigger.

<snip>
 >
 > Your intuition about the interplay between the different sensors is
 > what I'm trying to work out, e.g., what you've called "out of band
 > signaling" is basically where I'm at right now.

The interplay between the different sensors is what is handled by the 
subsumption architecture.  Out-of-band signaling should be the 
exception, not the rule.  It refers to things like behaviors inquiring 
into each other's states, and so forth.  But the main interplay twixt 
the behaviors is subsumption.

In fact, subsumption purists like the late Randy Dumse would argue that 
out-of-band signaling breaks the subsumption paradigm, which is a 
slippery slope, and therefore should not be used at all. Paul Bouchier 
has suggested the same thing.  YMMV.

Perhaps this might be a helpful hint.  There is an implicit assumption 
in, ah, subsumption.  An implicit subsumption assumption.  That is: that 
high priority behaviors control the robot LESS often than low priority 
behaviors.

This goes counter to what we normally mean by the word "priority" in the 
context of an operating system or hardware interrupt controller, and so 
can be the source of some confusion.

In the case of an operating system or interrupt controller, there are 
multiple processes vying for computer resources, basically cpu cycles.  
Processes with higher priority get more.  More cpu cycles, or executed 
more often, or allowed to run longer, or allowed more immediate access, 
pushed to the front of the queue, allowed to interrupt other interrupts, 
etc.  This is not the way the word "priority" is used in a subsumption 
architecture.

In a subsumption stack like the one I described at the June DPRG 
meeting, the highest priority behavior is the escape or bumper 
behavior.  When it is controlling the robot, the outputs of all lower 
priority behaviors are ignored.  They are not saved for execution at a 
later time.  Their output is discarded.  We say these behaviors are 
"subsumed" until the escape behavior releases control of the robot.   
Hence the term "subsumption."  It's right there in the name.
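
In pseudo-Python, the arbitration amounts to something like this (an 
illustrative fixed-priority arbiter, not code from any particular robot; 
the behavior names are just placeholders):

def arbitrate(behaviors, sensors):
    """behaviors is ordered highest priority first; each returns a
    (velocity, rotation) request, or None when it doesn't want control."""
    for behavior in behaviors:
        request = behavior(sensors)
        if request is not None:
            return request        # all lower-priority outputs are discarded
    return (0.0, 0.0)             # nothing wants control: stop

# e.g. stack = [escape, ir_avoid, sonar_avoid, default_drive]
#      velocity, rotation = arbitrate(stack, read_sensors())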

But when the next highest priority IR and SONAR behaviors are enabled, 
the bumper behavior rarely controls the robot, because it doesn't run 
into many things.  On some long runs it never bumps into anything at 
all, and the bump behavior never controls the robot.  So here is the 
HIGHEST priority subsumption behavior using the LEAST amount of "system 
resources."

The IR avoidance behavior is the next highest, and the main reason we're 
not hitting things.  It is detecting them and turning the robot away and 
so is controlling the robot more often than the bumper, but still only 
in the presence of obstacles, so less often than the next lower 
priority, the sonar.  And so on.

On the bottom end is the lowest priority behavior, the default 
behavior.  Be it driving in a straight line, wall following, or waypoint 
navigation, it wants to control the robot all the time that higher 
priority behaviors are not in control.  That is why it is the default 
behavior.  It never releases control because there are no lower priority 
behaviors.  So it tends to control the robot whenever nothing else wants 
to, which is the *lion's share* of the time.

Thus the lowest priority subsumption behavior uses the MOST system 
resources while the highest priority uses the LEAST.  This guarantees 
that all behaviors cooperate in controlling the robot.

Hope this is useful,
dpa




More information about the DPRGlist mailing list