[Dprglist] ChatGPT as a robot SW designer

Paul Bouchier bouchierpaul at yahoo.com
Wed Jan 17 06:22:09 PST 2024


DPRG folks - at last night's RBNV, Carl shared an interaction with the paid version of ChatGPT, in which he asked it to design a behavior-tree implementation of driving a robot along a path, attending to obstacles, environmental sensors, etc. Then he asked it to design a subsumption implementation that did the same thing. It was a very interesting conversation (both at RBNV and in the ChatGPT record). The chat record is here:


ChatGPT - A conversational AI system that listens, learns, and challenges
I must say I am very impressed by how well it speaks, and how convincingly. I am less impressed by the code it generated. You would need to understand what the code it provided does and how the algorithm it used works, and then fix it appropriately.
Specifically, the first code sample (the behavior tree) shows the task functions always returning true after running their BIT logic or sensor-checking logic. In fact, each task needs to return true or false depending on whether it needs to consume that iteration, because Selector.run() runs its children until one returns true, then returns true to its caller; if all the tasks always return true, it will only ever execute the first task.
In other words, you need to understand how the behavior-tree algorithm should work in order to write the task functions so they return the correct value, true or false. For example, if it's not yet time for the EnvironmentSensor to take another reading, or the next reading is not ready, it should return false so that the next lower-priority behavior in the Selector can run.
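Here's a minimal sketch of what I mean - this is not ChatGPT's code; the class and function names are mine, and it assumes an Arduino-style environment (millis(), loop()) - showing why a task has to return false when it doesn't need the iteration:

    // Base class for all behavior-tree nodes
    class Task {
    public:
      virtual ~Task() {}
      // Return true only if this task consumed the iteration.
      virtual bool run() = 0;
    };

    // Selector: runs children in priority order until one returns true
    class Selector : public Task {
      Task** children;
      int count;
    public:
      Selector(Task** kids, int n) : children(kids), count(n) {}
      bool run() override {
        for (int i = 0; i < count; i++) {
          if (children[i]->run())
            return true;          // first child to succeed wins this iteration
        }
        return false;             // no child wanted the iteration
      }
    };

    // Example leaf: only claims the iteration when a reading is actually due
    class EnvironmentSensorTask : public Task {
      unsigned long lastReadMs = 0;
      const unsigned long periodMs = 1000;
    public:
      bool run() override {
        unsigned long now = millis();
        if (now - lastReadMs < periodMs)
          return false;           // not time yet: let a lower-priority task run
        lastReadMs = now;
        // ...take the sensor reading here...
        return true;              // we consumed this iteration
      }
    };

The key point is that returning false is what lets the Selector fall through to the next child, so a task that always returns true starves everything below it.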
Also, the subsumption code looks backwards to me. ComplexBehavior (layer 2) is shown as path following, which should be lower priority than layer 1 (obstacle avoidance), yet ComplexBehavior.execute() is called first in loop(). It gets first shot at deciding whether it wants to run in "if (/* condition to take control */)", and only if it doesn't want to run does it call lowerLayer - the obstacle avoidance. Path following is always going to want to run; it should be subsumed by the avoidance behavior, but in ChatGPT's implementation the opposite happens.
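For comparison, here's a minimal sketch of the arbitration order I'd expect, with obstacle avoidance getting first shot and calling down to path following only when there's nothing to avoid. Again, the names (ObstacleAvoidance, PathFollowing, obstacleDetected()) are illustrative, not from ChatGPT's output:

    class Behavior {
    public:
      virtual ~Behavior() {}
      virtual void execute() = 0;
    };

    class PathFollowing : public Behavior {
    public:
      void execute() override {
        // Always wants to run: steer toward the next waypoint.
      }
    };

    class ObstacleAvoidance : public Behavior {
      Behavior* lowerLayer;          // the behavior this layer subsumes
      bool obstacleDetected() {
        // Placeholder for a range-sensor check.
        return false;
      }
    public:
      ObstacleAvoidance(Behavior* lower) : lowerLayer(lower) {}
      void execute() override {
        if (obstacleDetected()) {
          // Take control: steer around the obstacle, subsuming path following.
        } else {
          lowerLayer->execute();     // nothing to avoid: let path following drive
        }
      }
    };

    PathFollowing pathFollowing;
    ObstacleAvoidance obstacleAvoidance(&pathFollowing);

    void loop() {
      obstacleAvoidance.execute();   // the higher-priority layer is consulted first
    }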
Overall, this looks like template code from a homework exercise for some robotics course. I'm reminded of the author who's suing OpenAI because ChatGPT knows all about the characters in his book, and will recite the story in detail on demand. I'm unconvinced it knows anything and it looks to me like it would lead you astray.
I only analyzed the first two implementations before concluding it was junk.
Unimpressed in Little Elm :-0

