[Dprglist] PID-tuned Clock in Python?
David Anderson
davida at smu.edu
Thu Feb 18 21:16:51 PST 2021
Howdy
Karim, I can certainly match your downright nitpickery and am therefore
all in with crotchety oldfartdom.
The "timing loops" under consideration seem to be a fairly
undifferentiated amalgamation of things that actually require very
different timing constraints. So for example, the ENCODERS on my
robots are maintained by hardware, once set up. Those are in turn
sampled at a regular interval by an interrupt driven from a hardware
timer. So the latency is that of the ARM NVIC hardware, which is
probably as good as can be had. Same is true of the array of SONAR
sensors, for example, but with a different timer and much higher sample
rate (*).
So what is done with that data is that it's read by the various
subsumption behaviors that do their own filtering or whatever. Those
behaviors all run in a single loop with an RTOS wake_after(40) command
at the bottom of the loop. The loop rate on RCAT is more or less 25
Hz, but it can drift around a bit, depending on how busy the system is
with other RTOS tasks. The motor PIDs run in that loop. I can make it
more precise by using a PERIOD() call which subtracts off the loop
execution time. But it doesn't make much difference in the robot's
performance. At this upper level it really doesn't matter. The part of
the system that needs the exact hardware timing has already been done.
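For anyone wanting to try the same trick in Python rather than on an
RTOS, here is a minimal sketch of the two styles (the period and the
function names are illustrative only, not RCAT's actual code):

    import time

    PERIOD = 0.040  # 40 ms target, analogous to wake_after(40)

    def behaviors():
        pass  # stand-in for the subsumption behaviors / motor PIDs

    # Style 1: fixed sleep at the bottom of the loop.  The start-to-start
    # interval is PERIOD plus however long behaviors() took, so it drifts.
    def loop_fixed_sleep(iterations):
        for _ in range(iterations):
            behaviors()
            time.sleep(PERIOD)

    # Style 2: subtract the loop's execution time, like a PERIOD() call,
    # so the start-to-start interval stays close to PERIOD.
    def loop_period(iterations):
        next_wake = time.monotonic()
        for _ in range(iterations):
            behaviors()
            next_wake += PERIOD
            delay = next_wake - time.monotonic()
            if delay > 0:
                time.sleep(delay)

    loop_period(25)  # roughly one second of a 25 Hz loop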
My extended point is that there are some robotly tasks that need
close access to the hardware, and others that do not. They shouldn't be
all clumped together as the "loop timing error problem." They need to
be considered separately in terms of their implementation.
(*) My experience with the seismic signal processing we do at work,
which is much like what we do in robotics, is that sampling is best done
at very regular intervals, with Nyquist considerations and such. It
massively simplifies all subsequent signal processing. In the cases in
which that is not possible and the data are not evenly sampled, and
therefore require individual timestamps, you end up using some sort of
interpolation to remap the data onto a regular grid anyway, for
subsequent processing. But now you're just guessing :)
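For what it's worth, that remapping is usually just linear
interpolation onto a fixed interval; a rough numpy sketch with made-up
samples (not the seismic code in question):

    import numpy as np

    # Unevenly timestamped samples (time in seconds, value) -- made-up data.
    t = np.array([0.000, 0.047, 0.101, 0.152, 0.199, 0.256])
    x = np.array([1.0,   1.2,   0.9,   1.1,   1.3,   1.0])

    # Remap onto a regular 50 ms grid so ordinary DSP (filtering, FFTs,
    # Nyquist reasoning) applies.  The points interpolated between the
    # real samples are, of course, guesses.
    dt = 0.050
    t_grid = np.arange(t[0], t[-1], dt)
    x_grid = np.interp(t_grid, t, x)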
We still use lots of time stamps, but those are generally at the packet
level and higher, not the individual samples. So I guess I'm not sold
on that as the solution for the encoder and PID timing loops that Murray
is addressing.
cheers!
dpa
On 2/18/21 5:40 PM, Karim Virani via DPRGlist wrote:
> David, I see your pedantic and raise you downright nitpicky,
>
> David wrote: "Right tool for the job, as the old guys used to say."
>
> Agree. Much of the academic and startup robotics community has decided
> that Linux is very often the right tool. The question is what is the job?
>
> Murray wrote: "But that's not solving the problem, which is having a
> very accurate millisecond timing loop."
> and then: "assume an imprecise clock ... that would make our code more
> complicated"
>
> Disagree. My teams and I haven't seen a precise timing loop since
> 2014. In fact under Android our loop times are downright gnarly. Get
> into the habit of normalizing all time-sensitive calculations to
> actual elapsed time and it becomes second nature. This tends to work
> for anything that sits in the time domain of the kind of robot
> physical motions we'll experience in hobby / educational robotics.
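A minimal sketch of what that elapsed-time normalization looks like in
Python (the names and numbers below are hypothetical, not anyone's
actual robot code):

    import time

    turn_rate = 0.2   # rad/s commanded, purely for illustration
    heading = 0.0     # a quantity integrated over time
    last = time.monotonic()

    for _ in range(100):
        time.sleep(0.05)      # nominal 50 ms loop; the math doesn't care
        now = time.monotonic()
        dt = now - last       # actual elapsed time, however gnarly
        last = now
        # Scale every time-sensitive calculation by dt rather than by an
        # assumed nominal loop period.
        heading += turn_rate * dt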
>
> I'm not talking about low level bit banging, ISRs or TOF calcs - or
> anything like that. If you're trying to solve those kinds of things
> with the same general purpose processor you use for robot governance
> and behaviors, maybe the goal is the problem. This is where we look to
> dedicated (smart) sensors and comms chips. The trend in that direction
> is inexorable.
>
> Now, if you started out with microcontrollers, might as well use all
> their capacity. But they may be more suited to isolated lone-robot
> applications and maybe not as much to highly interconnected
> environments. There may be conflicting use cases at play here.
>
> But we don't always follow our own use cases. The real truth is that
> we start with what we are comfortable with (or what we have to use),
> and stick with it until it proves insufficient. Hard to argue with
> that approach. Getting started with anything goes a lot further than
> waiting for the perfect hardware combo to appear before ye.
>
> RTCs are a baby red herring. RTOS is the papa red herring. Unless,
> maybe, if you're landing on Mars...
>
> Otherwise, [wise-guy voice] don't worry about it.
>
> On Thu, Feb 18, 2021 at 4:08 PM Murray Altheim via DPRGlist
> <dprglist at lists.dprg.org> wrote:
>
> On 19/02/21 6:03 am, John Swindle via DPRGlist wrote:
> > Going back to a previous topic regarding jitter in timing intervals.
>
> Hi John,
>
> I was trying to find a paragraph of text that I saw a few days ago on
> some web page about how the combination of Linux and Python simply
> couldn't provide the kind of precise timing guarantees one can expect
> on a microcontroller, and I certainly don't contest that notion, i.e.,
> this mirrors my own experience.
>
> Just to reiterate the issue (and I hope I've got all this correctly),
> Linux is a time-sharing operating system that cannot guarantee precise
> timing at an application level, even when the hardware it is running
> on (e.g., a Raspberry Pi or Intel 5.3GHz i9) has an accurate system
> clock, and may even have access to an RTC -- noting that an RTC
> provides an extremely accurate long-term clock but typically provides
> resolution only down to a second, not milli- or nanosecond. I.e., an
> RTC is a bit of a red herring.
>
> So running on a time-sharing OS means that any given application can't
> be guaranteed that it will be able to create precise time intervals at
> a nanosecond or millisecond level, since the application's code is
> sharing system resources with the system itself as well as other
> applications.
>
> One of those applications is the base Python/CPython interpreter,
> which is an imperfect handler of timing, given its imperfect control
> of internal threading.
>
> So an application can ask the system clock for the number of
> milliseconds or nanoseconds since the Epoch (1 Jan 1970), and the
> number returned will be accurate, but if the application itself is
> running a timing loop it will be polling that clock only as often as
> its time share allows; it is not driven by the system clock. So
> effectively, the loop will falter, but it can know it's faltering by
> comparing its own timing with the system clock.
>
> > Why can't parameters used in calculations be scaled by the actual
> > sample interval? I understand 50ms is chosen because it gives
> > optimum control without undue overhead. When the actual interval is,
> > say, 47ms, why not scale the time-related parameters to 47/50 of
> > what they nominally are, just for that interval? If the next
> > interval is 74ms, scale the parameters to 74/50. Is this
> > impractical? Is the uncertainty of measuring the time interval too
> > large? That is, if Python says the time interval is 47ms, is the
> > error, say, +/- 10ms?
>
> So given our understanding of the limitations of timing loops within
> a Python application running on Linux (i.e., not the operating
> system's internal clock), we can assume that with two levels of
> "indirection" any clock loop will run imprecisely.
>
> So a 50.00ms loop in Python will return values like 50.01ms, 51.3ms,
> 49.1ms, etc., and during a surge where another application pulls a lot
> of system resources the values might run 54.33ms, 57.56ms, etc. In
> other words, imprecise, intermittently and unpredictably so. You
> suggested that over a longer period, say several minutes, the average
> may be close to 50.0ms, but that's not necessarily a safe assumption
> since the cumulative values are subject to cumulative error, and that
> error can be high. One can compare the value with the (accurate)
> system clock, or even with an RTC. But that's not solving the problem,
> which is having a very accurate millisecond timing loop.
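Measuring that drift is straightforward; a quick sketch that logs each
iteration's interval against the monotonic clock (the values in the
comment are only typical, not measured on any particular Pi):

    import time

    TARGET = 0.050   # nominal 50 ms loop
    last = time.monotonic()
    for i in range(20):
        time.sleep(TARGET)
        now = time.monotonic()
        actual = now - last
        last = now
        # Typically prints values like 50.3, 51.2, 49.8 ms; worse under load.
        print(f"iteration {i}: {actual * 1000:.2f} ms "
              f"(error {(actual - TARGET) * 1000:+.2f} ms)")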
>
> So the other alternative would be to write code to simply assume an
> imprecise clock and capture the per-loop error. The problem is that we
> haven't escaped the basic problem of "being in situ" with our code,
> and that would also make our code more complicated. A PID controller
> is itself already rather tricky to tune. Having an imprecise clock
> doesn't make that any easier.
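A rough sketch of what "capturing the per-loop error" could look like:
a hypothetical minimal PID fed the measured interval on each update,
not the KR01's actual code:

    import time

    class PID:
        # Minimal PID controller fed the measured interval each update.
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            self.integral += error * dt
            derivative = 0.0
            if self.prev_error is not None and dt > 0:
                derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return (self.kp * error + self.ki * self.integral
                    + self.kd * derivative)

    pid = PID(kp=1.0, ki=0.1, kd=0.05)   # placeholder gains
    last = time.monotonic()
    for _ in range(100):
        time.sleep(0.05)                 # nominal 50 ms loop
        now = time.monotonic()
        dt = now - last                  # the actual, imperfect interval
        last = now
        output = pid.update(error=0.0, dt=dt)  # error would come from encoders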
>
> I'm looking into another alternative, since my KR01 has an Itsy Bitsy
> M4 on board that is currently not being used. I'm thinking of using it
> as an external clock, triggering a GPIO pin as an interrupt on the Pi.
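The Pi side of that could be as simple as an edge-detect callback; a
sketch using RPi.GPIO, with a made-up pin number and an empty handler:

    import time
    import RPi.GPIO as GPIO

    CLOCK_PIN = 17   # whichever GPIO the ItsyBitsy's clock line is wired to

    def on_tick(channel):
        # Called on each rising edge from the external clock; the PID /
        # encoder update would be kicked off from here rather than from
        # a Python sleep loop.
        pass

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(CLOCK_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    GPIO.add_event_detect(CLOCK_PIN, GPIO.RISING, callback=on_tick)

    try:
        while True:
            time.sleep(1)   # main thread idles; callbacks fire on edges
    except KeyboardInterrupt:
        GPIO.cleanup()

Note that the callback still runs in a Python thread on the Pi, so this
moves the timing source off the Pi but not the jitter in servicing it.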
>
> Cheers,
>
> Murray
>
> ...........................................................................
> Murray Altheim <murray18 at altheim dot com>
> http://www.altheim.com/murray/
> In the evening
> The rice leaves in the garden
> Rustle in the autumn wind
> That blows through my reed hut.
> -- Minamoto no Tsunenobu
>