[Dprglist] ChatGPT as a robot SW designer
Carl Ott
carl.ott.jr at gmail.com
Fri Jan 19 08:58:25 PST 2024
Paul,
wrt this GPT4 interaction -
Comparing Behavior Tree and Subsumption Architectures
https://chat.openai.com/share/994e01a0-b757-4a33-b92c-12d145dde8ac
Thanks for the detailed analysis!
FWIW - I suggest looking at the last two implementations, not the first
stab / 'unrefined' ones...
Yes - I've viewed the framework code provided by GPT with skepticism - but I
haven't analyzed it to the level you did.
However - even with the issues and obvious errors you describe - I still have
the impression that the general gist of what it provided (the overall
combination of narrative text and code) is aimed in the right direction. Or
maybe that's wishful thinking ;-)
Specifically, I have a sense that
1. the general categorization and descriptions of each style are generally
right - the errors and issues largely lie in the detailed implementation and
execution.
2. that the general code structure is at least mostly grammatically &
syntactically correct - even if it has functional design errors.
I am accustomed to, and comfortable with, solving design errors (mine, those
in my reference material, etc.). Unfortunately, I only infrequently dip into
any given language, so I often struggle to produce elegant coding structures
from scratch. I suffer from generalist syndrome - jack of all trades /
master of nothing...
Hence - even with errors - this interaction gives me specific coding
language and structure *ideas to start with and hack on*.
3. most of the time - wrt the templates and examples I find online (blogs,
stackoverflow.com, etc.) - those are anyhow *always rife with unstated
assumptions and issues one must fix* or adapt to one's own situation.
I see ChatGPT as no different in that regard - except that its noisy output
starts with much better alignment to my specific query than generic
internet searches. And the rapid interactive nature provides benefits not
easy to achieve with traditional research methods.
4. I view this GPT as a provocative & interactive 'fencing partner' style of
learning tool. I bet I could get up to speed (to a level that interests me)
much more quickly and/or more thoroughly by starting with interactions like
this than with e.g. blogs / stack overflow / papers, etc...
About these points you made:
Paul
> E.g. if it's not yet time for the EnvironmentSensor to take another
> reading, or the next reading is not ready, it should return false so that
> the next lower priority behavior in the Selector can run.
>
Carl
Agreed - GPT left 'false' branches for the reader to sort out. That felt
obvious enough that I didn't press GPT to clarify.
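To make that concrete, here's a minimal sketch of how I read your point
(illustrative names only - Behavior, Selector, EnvironmentSensor - not code
lifted from the GPT transcript): a child that has nothing to do this tick
returns false, so the Selector falls through to the next lower priority
child.

    // Minimal behavior-tree sketch: a Selector runs its children in priority
    // order until one returns true; a child returns false to pass on the tick.
    #include <vector>

    struct Behavior {
        virtual bool run() = 0;               // true = consumed this tick
        virtual ~Behavior() {}
    };

    struct Selector : Behavior {
        std::vector<Behavior*> children;      // highest priority first
        bool run() override {
            for (Behavior* child : children)
                if (child->run()) return true;  // first success wins the tick
            return false;                       // nobody ran this tick
        }
    };

    struct EnvironmentSensor : Behavior {
        unsigned long nowMs = 0;              // would come from millis() on Arduino
        unsigned long nextReadMs = 0;
        bool run() override {
            if (nowMs < nextReadMs) return false;  // not yet time: let lower priorities run
            // ... take the reading here ...
            nextReadMs = nowMs + 100;              // e.g. sample at 10 Hz
            return true;
        }
    };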
Anyhow, about "not yet time" - I always follow a fundamental design pattern
that samples *all sensors* on a strict periodic basis, as fast as possible or
as fast as the fastest sensor requires, whether an individual sensor's data
is needed or not. I prefer to trigger readings with a hardware timer
interrupt in order to minimize timing variance. After sampling all sensors
at that fixed rate I run the business logic, and only then apply command
output to the motors.
Hence, with my robots, the timing of motor updates tends to have a wider
variance than the sensor measurements (the business-logic timing variance
adds onto the sensor readout variance), but motor updates still occur with
the same average period as the sensor readings - still typically much faster
than is really needed.
I see a similar structure or approach in both of the GPT frameworks. Hence
- at a high level - GPT checked that box for me.
However, I would certainly modify the GPT4 framework to rely on a timer
interrupt instead of the less controlled 'delay() at the end of loop()'
approach.
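As a rough sketch of the pattern I described above (assumed Arduino-style
C++; the function names are placeholders and the hardware timer setup is
board-specific, so it isn't shown), the ISR just raises a flag, and loop()
then does sample-all, business logic, and motor output in that order:

    // Sample / decide / actuate at a fixed rate paced by a hardware timer.
    volatile bool sampleTick = false;

    void onSampleTimer() {            // attach to a hardware timer ISR (board-specific)
        sampleTick = true;            // keep the ISR short: just raise a flag
    }

    void readAllSensors()   { /* read every sensor, needed this tick or not */ }
    void runBusinessLogic() { /* behaviors / arbitration */ }
    void updateMotors()     { /* apply command output last */ }

    void setup() {
        // configure the hardware timer here and attach onSampleTimer()
    }

    void loop() {
        if (!sampleTick) return;      // idle until the timer fires
        sampleTick = false;
        readAllSensors();
        runBusinessLogic();
        updateMotors();
        // no delay() here - the hardware timer sets the pace
    }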
Paul
> Also, the subsumption code looks backward to me.
>
Carl
It may well be.
It does seem that a kind of duality exists between the Behavior Tree and
Subsumption styles - albeit approached from 'opposite ends of the candle'. I
need to do another review - and maybe ground-truth the GPT narrative against
citable resources. I haven't completely reconciled the portrayals of "top
down" vs "bottom up" vs "subsume" vs "basic reactive" vs "highest level",
etc.
Sometimes I get the wrong impression from the language and naming conventions
in models like this, and interactive examples give me a chance to calibrate
and cross-check my understanding.
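For what it's worth, here's how I'd expect the arbitration to read if
avoidance really does subsume path following (an illustrative sketch with my
own names, not a correction of the exact GPT code): ask the reactive layer
first, and only let path following drive when avoidance has nothing to say.

    struct Layer {
        virtual bool wantsControl() = 0;   // does this layer need to act right now?
        virtual void act() = 0;
        virtual ~Layer() {}
    };

    struct ObstacleAvoidance : Layer {
        bool obstacleClose() { return false; /* placeholder sensor check */ }
        bool wantsControl() override { return obstacleClose(); }
        void act() override { /* steer away or stop */ }
    };

    struct PathFollowing : Layer {
        bool wantsControl() override { return true; }   // always has an opinion
        void act() override { /* steer toward the next waypoint */ }
    };

    void arbitrate(Layer& avoidance, Layer& pathFollowing) {
        if (avoidance.wantsControl())
            avoidance.act();               // subsumes path following
        else if (pathFollowing.wantsControl())
            pathFollowing.act();
    }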
Paul
> Overall, this looks like template code from a homework exercise for some
> robotics course.
>
Carl
For sure. But even if that's the case - is that bad?
I intended this chat as exploration and learning / not shrink-wrapped
executable code.
Paul
> it looks to me like it would lead you astray.
>
Carl
A persistent risk, for sure. I figure there are many paths to learning - and
any of them can lead one astray. You just pick your poison, and for whichever
path you choose - turn on a critical, Socratic-method mindset... ;-)
Anyhow - fun discussion!
Carl
On Wed, Jan 17, 2024 at 8:22 AM Paul Bouchier via DPRGlist <
dprglist at lists.dprg.org> wrote:
> DPRG folks - at last night's RBNV Carl shared an interaction with the paid
> version of ChatGPT, in which he asked it to design a behavior-tree
> implementation of driving a robot along a path, attending to obstacles,
> environmental sensors, etc. Then he asked it to design a subsumption
> implementation that did the same thing. It was a very interesting
> conversation (both at RBNV and in the chatGPT record). The chat record is
> here:
> <https://chat.openai.com/share/994e01a0-b757-4a33-b92c-12d145dde8ac>
>
> I must say I am very impressed by how well it speaks, and how
> convincingly. I am less impressed by the code it generated. There would
> be a need to understand what the code it provided does, how the algorithm
> it used works, and to fix it appropriately.
>
> Specifically, the first code sample - behavior tree - shows the task
> functions always returning true after running BIT logic or Sensor checking
> logic. In fact, the behavior would need to return true or false depending
> on whether it needed to consume that iteration, because Selector.run() runs
> its children until one returns true, then returns true to its caller, so if
> all the tasks return true always, it will only ever execute the first task.
>
> IOW there's a need to understand how the behavior tree algorithm should
> work in order to correctly write the task functions to return the correct
> value, true or false. E.g. if it's not yet time for the EnvironmentSensor
> to take another reading, or the next reading is not ready, it should return
> false so that the next lower priority behavior in the Selector can run.
>
> Also, the subsumption code looks backward to me. ComplexBehavior (layer 2)
> is shown as path following which should be lower priority than layer 1
> (obstacle avoidance), but ComplexBehavior.execute() is called first in
> loop() and it gets first shot at whether it wants to run in "if
> (/* condition to take control */)", and if it doesn't want to run it calls
> lowerlayer - the obstacle avoidance. Path following is always going to want
> to run, and it should be subsumed by the avoidance behavior, but in fact
> the opposite happens in chatGPT's implementation.
>
> Overall, this looks like template code from a homework exercise for some
> robotics course. I'm reminded of the author who's suing OpenAI because
> ChatGPT knows all about the characters in his book, and will recite the
> story in detail on demand. I'm unconvinced it knows anything and it looks
> to me like it would lead you astray.
>
> I only analyzed the first two implementations before concluding it was
> junk.
>
> Unimpressed in Little Elm :-0
>
>
>