<div dir="ltr">David, I see your pedantic and raise by downright nitpicky,<br><br>David wrote: "Right tool for the job, as the old guys use to say."<br><br><div>Agree. Much of the academic and startup robotics community has decided that linux is very often the right tool. The question is what is the job?<br><br>Murray wrote: But that's not solving the problem, which is having a very accurate millisecond timing loop."<br>and then: "assume an imprecise clock ... that would make our code more complicated"<br><br>Disagree. My teams and I haven't seen a precise timing loop since 2014. In fact under Android our loop times are downright gnarly. Get into the habit of normalizing all time-sensitive calculations to actual elapsed time and it becomes second nature. This tends to work for anything that sits in the time domain of the kind of robot physical motions we'll experience in hobby / educational robotics. <br><br>I'm not talking about low level bit banging, ISRs or TOF calcs - or anything like that. If you're trying to solve those kinds of things with the same general purpose processor you use for robot governance and behaviors, maybe the goal is the problem. This is where we look to dedicated (smart) sensors and comms chips. The trend in that direction is inexorable. <br><br>Now, if you started out with microcontrollers, might as well use all their capacity. But they may be more suited to isolated lone-robot applications and maybe not as much to highly interconnected environments. There may be conflicting use cases at play here.<br><br>But we don't always follow our own use cases. The real truth is that we start with what we are comfortable with (or what we have to use), and stick with it until it proves insufficient. Hard to argue with that approach. Getting started with anything goes a lot further than waiting for the perfect hardware combo to appear before ye.<br><br>RTC's are a baby red herring. RTOS is the papa red herring. Unless, maybe, if you're landing on Mars...<br><br>Otherwise, [wise-guy voice] don't worry about it.<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 18, 2021 at 4:08 PM Murray Altheim via DPRGlist <<a href="mailto:dprglist@lists.dprg.org">dprglist@lists.dprg.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 19/02/21 6:03 am, John Swindle via DPRGlist wrote:<br>
> Going back to a previous topic regarding jitter in timing intervals.

Hi John,

I was trying to find a paragraph of text I saw a few days ago on some web
page about how the combination of Linux and Python simply can't provide
the kind of precise timing guarantees one can expect on a microcontroller.
I certainly don't contest that notion; it mirrors my own experience.

Just to reiterate the issue (and I hope I've got all this right): Linux is
a time-sharing operating system that cannot guarantee precise timing at
the application level, even when the hardware it runs on (e.g., a
Raspberry Pi or a 5.3GHz Intel i9) has an accurate system clock, and may
even have access to an RTC -- noting that an RTC provides an extremely
accurate long-term clock but typically offers resolution only down to a
second, not milli- or nanoseconds. I.e., an RTC is a bit of a red herring.

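As a quick check, Python will report what each of its clocks claims to
offer on a given platform. A minimal sketch, standard library only (the
output will vary by OS and hardware):

    import time

    # Ask Python what each clock provides on this platform. 'time' is the
    # wall clock; 'monotonic' and 'perf_counter' never jump backwards and
    # are the right choice for measuring intervals.
    for name in ('time', 'monotonic', 'perf_counter'):
        info = time.get_clock_info(name)
        print(f"{name:14s} resolution={info.resolution:.9f}s  "
              f"adjustable={info.adjustable}")

Note the reported resolution is what the clock claims, not what a timing
loop will actually achieve once scheduling gets involved.
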
So running on a time-sharing OS means that no application can be
guaranteed the ability to create precise time intervals at the millisecond
or nanosecond level, since the application's code is sharing system
resources with the system itself as well as with other applications.

One of those applications is the Python/CPython interpreter itself, which
is an imperfect handler of timing, given its imperfect control over its
own internal threading.

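That part is easy to see: park one CPU-bound thread in the background and
a nominal 50ms sleep in the main thread starts coming back late, because
the sleeper has to win the interpreter back on each wakeup. A small sketch
(the workload is arbitrary, just something to keep the interpreter busy):

    import threading
    import time

    def burn():
        # arbitrary CPU-bound work, competing for the interpreter's GIL
        while True:
            sum(range(10_000))

    threading.Thread(target=burn, daemon=True).start()

    # Time ten nominal 50ms sleeps; expect wakeups a few ms late under
    # contention (the exact amount depends on the system).
    for _ in range(10):
        t0 = time.monotonic_ns()
        time.sleep(0.05)
        print(f"{(time.monotonic_ns() - t0) / 1e6:.2f}ms")
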
So an application can ask the system clock for the number of milliseconds
or nanoseconds since the Epoch (1 Jan 1970), and the number returned will
be accurate; but if the application itself is running a timing loop, it
will be polling the system clock at a rate limited by its own time share,
not by the system clock. So effectively, the loop will falter, but it can
know it's faltering by comparing its own timing against the system clock.

> Why can't parameters used in calculations be scaled by the actual
> sample interval? I understand 50ms is chosen because it gives
> optimum control without undue overhead. When the actual interval is,
> say, 47ms, why not scale the time-related parameters to 47/50 of what
> they nominally are, just for that interval? If the next interval is
> 74ms, scale the parameters to 74/50. Is this impractical? Is the
> uncertainty of measuring the time interval too large? That is, if
> Python says the time interval is 47ms, is the error, say, +/- 10ms?

So given our understanding of the limitations of timing loops within a
Python application running on Linux (i.e., not the operating system's
internal clock), we can assume that with two levels of "indirection" any
clock loop will run imprecisely.

So a 50.00ms loop in Python will return values like 50.01ms, 51.3ms,
49.1ms, etc., and during a surge where another application pulls a lot of
system resources the values might run 54.33ms, 57.56ms, etc. In other
words: imprecise, intermittently and unpredictably so. You suggested that
over a longer period, say several minutes, the average may be close to
50.0ms, but that's not necessarily a safe assumption, since the cumulative
values are subject to cumulative error, and that error can be high. One
can compare the value with the (accurate) system clock, or even with an
RTC. But that's not solving the problem, which is having a very accurate
millisecond timing loop.

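Measuring this is straightforward. A minimal sketch that times a nominal
50ms loop and reports both the worst per-loop error and the cumulative
drift (the actual numbers will depend entirely on system load):

    import time

    N = 200                             # loop iterations to sample
    start = time.monotonic_ns()
    prev = start
    worst = 0.0
    for _ in range(N):
        time.sleep(0.050)               # ask for 50ms; get "about" 50ms
        now = time.monotonic_ns()
        actual_ms = (now - prev) / 1e6
        worst = max(worst, abs(actual_ms - 50.0))
        prev = now

    elapsed_ms = (time.monotonic_ns() - start) / 1e6
    drift_ms = elapsed_ms - N * 50.0    # cumulative error vs. N perfect loops
    print(f"worst per-loop error {worst:.3f}ms, "
          f"cumulative drift {drift_ms:.3f}ms")

Worth noting that sleep() only ever oversleeps, so the drift accumulates
in one direction rather than averaging out.
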
So the other alternative would be to write code that simply assumes an
imprecise clock and captures the per-loop error. The difficulty is that we
haven't escaped the basic problem of "being in situ" with our code, and it
would also make our code more complicated. A PID controller is already
rather tricky to tune; having an imprecise clock doesn't make that any
easier.

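That said, John's 47/50 scaling is essentially what a dt-aware PID does:
compute the integral and derivative terms from the measured interval
rather than the nominal one. A minimal sketch of that idea (the gains and
the two robot I/O functions are hypothetical, purely for illustration):

    import time

    class PID:
        """PID controller whose I and D terms use the measured
        interval rather than assuming a perfect 50ms."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self._integral = 0.0
            self._last_error = None
            self._last_time = time.monotonic()

        def update(self, setpoint, measurement):
            now = time.monotonic()
            dt = now - self._last_time      # actual elapsed time, e.g. 0.047s
            self._last_time = now
            error = setpoint - measurement
            self._integral += error * dt    # integral scaled by real dt
            if self._last_error is None or dt <= 0.0:
                derivative = 0.0
            else:
                derivative = (error - self._last_error) / dt
            self._last_error = error
            return (self.kp * error
                    + self.ki * self._integral
                    + self.kd * derivative)

    # hypothetical usage: read_velocity() and set_power() stand in for
    # whatever the robot actually provides
    # pid = PID(kp=1.0, ki=0.1, kd=0.05)
    # while True:
    #     set_power(pid.update(setpoint=100.0,
    #                          measurement=read_velocity()))
    #     time.sleep(0.05)                  # "about" 50ms; dt absorbs the error

As to the uncertainty John asks about: reading the monotonic clock should
be good to microseconds or better, so the measured 47ms is far more
trustworthy than the interval itself -- nothing like +/- 10ms.
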
I'm looking into another alternative, since my KR01 has an Itsy Bitsy M4
on board that is currently not being used. I'm thinking of using it as an
external clock, triggering a GPIO pin as an interrupt on the Pi.

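On the Pi side that might look something like the following sketch, using
the RPi.GPIO library (the pin number is an assumption, and the M4 is
presumed to be toggling its output pin at the desired rate):

    import time
    import RPi.GPIO as GPIO

    CLOCK_PIN = 17                  # hypothetical BCM pin wired to the M4

    last = time.monotonic_ns()

    def on_tick(channel):
        """Runs on each rising edge from the M4's clock output."""
        global last
        now = time.monotonic_ns()
        dt_ms = (now - last) / 1e6
        last = now
        # ... run the control step here, using dt_ms if desired ...

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(CLOCK_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    GPIO.add_event_detect(CLOCK_PIN, GPIO.RISING, callback=on_tick)

    while True:                     # keep the process alive for the callbacks
        time.sleep(1)

One caveat: the callback is still dispatched by Linux, so the external
clock removes drift but not the scheduling latency on each tick.
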
Cheers,

Murray

...........................................................................
Murray Altheim <murray18 at altheim dot com>                       = =  ===
http://www.altheim.com/murray/                                    ===  ===
                                                                   = =  ===
    In the evening
    The rice leaves in the garden
    Rustle in the autumn wind
    That blows through my reed hut.
                            -- Minamoto no Tsunenobu