[Dprglist] Visualizing data [was: PID]

Bob Cook bob at bobandeileen.com
Sat Oct 3 11:07:43 PDT 2020


Oh, and one more thing: ElasticSearch expects something like a UNIX timestamp in milliseconds for a ‘date’ field type. My robot records the millisecond offset for each data point, where the start of the robot run is offset zero. My conversion script from the binary stream (from the robot) to a JSONND file just adds that millisecond offset to the time when I ran the program -- in Python that is datetime.datetime.now().timestamp() * 1000, truncated to an integer. I’m sure there is a way to get more precise timestamps, but my application didn’t require it, so this is what I did.
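In code, the offset-to-timestamp conversion amounts to something like this (a rough sketch -- the variable names are made up for illustration, not from my actual script):

```python
import datetime

# Take the wall-clock time at conversion as "time zero" for the run,
# expressed as integer milliseconds since the UNIX epoch.
epoch_ms = int(datetime.datetime.now().timestamp() * 1000)

# Millisecond offsets as recorded by the robot, starting at zero.
offsets = [0, 50, 100, 150]

# ElasticSearch 'date' fields accept epoch milliseconds directly,
# so each record's timestamp is just the base time plus its offset.
timestamps = [epoch_ms + off for off in offsets]
```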

- Bob


> On Oct 3, 2020, at 11:00 AM, Bob Cook via DPRGlist <dprglist at lists.dprg.org> wrote:
> 
> Hi Murray,
> 
> Grafana connects to your ElasticSearch instance via HTTP, on port 9200 by default. ElasticSearch should have opened that port on the loopback address, so assuming you are running the two instances on the same system, you provide http://127.0.0.1:9200/ for Grafana's Data Source to connect to. Two more challenges: first, pick “7.0+” for the Version field, and second, pick the field that corresponds to the Time. This field has to be a ‘date’ field type in ElasticSearch.
> 
> I’m using JSONND (JSON newline delimited) as my filetype, because Kibana has a nifty import feature that directly accepts that filetype and shows you how it has been parsed. Handy to use when I was first getting started. It would be easy to convert CSV to JSONND if you so desired. Here is the script I’m using to upload my JSONND data files to my ES instance:
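Converting CSV to JSONND really is just a few lines, roughly like this sketch (the sample data and field names here are invented for illustration):

```python
import csv
import io
import json

# Sample CSV data; in practice you'd open your log file instead.
csv_text = "timestamp,p.cvel,s.cvel\n1000,0.5,0.48\n1050,0.6,0.59\n"

lines = []
for row in csv.DictReader(io.StringIO(csv_text)):
    # DictReader yields one dict per row (column name -> string value);
    # one json.dumps per row gives newline-delimited JSON.
    lines.append(json.dumps(row))

jsonnd = "\n".join(lines) + "\n"
```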
> 
> #!/usr/bin/python3
> 
> import argparse
> import datetime
> import elasticsearch
> import json
> import sys
> 
> parser = argparse.ArgumentParser()
> parser.add_argument( 'files', nargs='+' )
> args = parser.parse_args()
> 
> es = elasticsearch.Elasticsearch( 'localhost:9200' )
> 
> m = { 'mappings' : { 'properties' : { 'timestamp' : { 'type' : 'date' } } } }
> 
> result = es.indices.create( index='robot-data', ignore=400, body=m )
> 
> for f in args.files:
>     with open( f, 'r' ) as fd:
>         jl = fd.readline()
>         while jl:
>             record = json.loads( jl )
>             result = es.index( index='robot-data', body=record )
>             jl = fd.readline()
>         
> es.close()
> 
> 
> Note: this is not the most efficient way to do it; the Python elasticsearch module supports “bulk” methods that can batch things for better performance. Personally I didn’t bother, since I’m not looking at millions of data points from each robot trial. :)
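For the record, the bulk approach builds an iterable of action dicts, something like this sketch (the records are invented, and the commented-out call assumes the standard elasticsearch.helpers.bulk helper, which needs a live ES instance):

```python
# Records standing in for the parsed JSONND lines.
records = [
    {"timestamp": 1000, "p.cvel": 0.5},
    {"timestamp": 1050, "p.cvel": 0.6},
]

# The bulk helpers expect one action dict per document, naming the
# target index and carrying the document body under "_source".
actions = [
    {"_index": "robot-data", "_source": record}
    for record in records
]

# from elasticsearch import helpers
# helpers.bulk(es, actions)  # needs a running ElasticSearch instance
```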
> 
> The call to create the index is done each time I run the script, and I just ignore the result when it fails because the index already exists. The part about identifying my “timestamp” field as a date type helps when pulling the data into Grafana.
> 
> There is a handy CSV parser module in Python that would make short work of parsing your file into a dictionary to feed into the index function, rather than using the JSON parser as I did.
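That module is csv; with csv.DictReader the whole thing is roughly this sketch (sample rows and field names invented, and the es.index call commented out since it needs a running instance):

```python
import csv
import io

# Sample rows standing in for a PID log file.
csv_text = "timestamp,p.cvel,p.stpt\n1000,0.50,0.60\n1050,0.55,0.60\n"

docs = []
for row in csv.DictReader(io.StringIO(csv_text)):
    # DictReader yields strings; coerce numeric fields before indexing
    # so ElasticSearch maps them as numbers rather than text.
    doc = {k: float(v) for k, v in row.items()}
    doc["timestamp"] = int(doc["timestamp"])
    docs.append(doc)
    # es.index( index='robot-data', body=doc )  # as in the script above
```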
> 
> Hope that helps you out.
> 
> - Bob
> 
> 
> 
>> On Oct 3, 2020, at 2:40 AM, Murray Altheim via DPRGlist <dprglist at lists.dprg.org> wrote:
>> 
>> Hi Bob,
>> 
>> As you may imagine, time to do things always seems too short. I had a
>> go at installing both Grafana and Elasticsearch via Docker containers
>> onto a "spare" NVIDIA Xavier NX I'd bought and unfortunately haven't
>> been able to use for its intended purpose [I've tried 4-5 times to
>> install the CUDA nvGRAPH library, but the whole thing then fails to
>> boot and I've just given up on this $800 or so of investment -- a real
>> regret, but that's another story...].
>> 
>> I was able to create Grafana and Elasticsearch accounts, etc. but
>> haven't managed to figure out how to tie them together with the ES as
>> a data source yet. This involves creating a data source and pointing
>> Grafana at Elasticsearch, but I haven't got that far.
>> 
>> My Logger class extends the Python logging library, and I just made a
>> minor change so that it now writes just the PID data to a CSV-compatible
>> file. My new data output, as per David, Karim and others' suggestions
>> (thank you!), now runs on its own thread in an entirely separate 20Hz
>> loop (i.e., separate from the PID controllers' own loops) that polls
>> *all* of the data and writes it onto one line of the log.
>> 
>> I then attempted to use the Python-to-Elasticsearch library and wrote
>> a test class but for some reason the ping() feature failed to connect
>> to the server (via IP address), so I gave up and began working once
>> again on the log output from my PID controllers. At least that's done,
>> it even prints out a header and legend:
>> 
>>  name   : description
>>  kp     : proportional constant
>>  ki     : integral constant
>>  kd     : derivative constant
>>  p.cp   : port proportional value
>>  p.ci   : port integral value
>>  p.cd   : port derivative value
>>  p.lpw  : port last power
>>  p.cpw  : port current motor power
>>  p.spwr : port set power
>>  p.cvel : port current velocity
>>  p.stpt : port velocity setpoint
>>  s.cp   : starboard proportional value
>>  s.ci   : starboard integral value
>>  s.cd   : starboard derivative value
>>  s.lpw  : starboard last power
>>  s.cpw  : starboard current motor power
>>  s.spw  : starboard set power
>>  s.cvel : starboard current velocity
>>  s.stpt : starboard velocity setpoint
>>  p.stps : port encoder steps
>>  s.stps : starboard encoder steps
>> 
>>  https://service.robots.org.nz/wiki/attach/PIDController/ros-2020_10_02T20_31_53_278210.csv
>> 
>> David Anderson seems (as an outside observer) to have a very quick
>> process for getting data into GnuPlot, or maybe just a ton of
>> experience. I'd used GnuPlot on my v1 PID controller, but I would,
>> both as a hobbyist and a software professional, love to be able to
>> populate an Elasticsearch DB and automatically get that into Grafana.
>> I have no need for history, so overwriting the ID of the DB record
>> would be fine.
>> 
>> If you've got some relatively easy way to get CSV into Elasticsearch I'd be
>> quite happy to learn how.
>> 
>> Cheers,
>> 
>> Murray
>> 
>> On 3/10/20 1:46 pm, Bob Cook via DPRGlist wrote:
>>> HI Murray,
>>> Grafana does pretty slick graphs without much work at all. Here is an example image of a (not very good) PID implementation showing the target speed, actual speed, and error. As you can see this is going from speed=0 to speed=20 then speed=0 again after 30 seconds.
>>> Happy to share the detail of my Docker setup if that would help. I’m using Docker Compose to create three containers: ElasticSearch, Kibana, and Grafana. The three containers share a private network, and the data for each resides on the host rather than inside the containers themselves - makes upgrading the containers easy without losing the data. Sort of a beautiful thing made simple by Docker.
>>> - Bob
>>>> On Oct 1, 2020, at 1:24 AM, Murray Altheim via DPRGlist <dprglist at lists.dprg.org> wrote:
>>>> 
>>>> On 1/10/20 9:25 am, Bob Cook via DPRGlist wrote:
>>>>> Hi Murray,
>>>>> You may want to look at Grafana as a tool to visualize time series
>>>>> data. It is open source and easy to set up if you are familiar with
>>>>> Docker containers. It pulls data from a variety of sources, I’m using it with an Elasticsearch instance, also as a Docker container.
>>>>> I set them up on a Linux host with Docker, but apparently you can
>>>>> use Windows or macOS as your Docker host pretty easily too.
>>>> 
>>>> Hi Bob,
>>>> 
>>>> I've as part of various jobs designed and implemented both Solr and
>>>> ElasticSearch services but I've never tried standing up a Docker
>>>> instance nor have I used Grafana, though I've looked over the website
>>>> a few times. I'm a Linux user so yeah, it doesn't sound like it'd be
>>>> too difficult to attempt what you've done, maybe easier than trying
>>>> to do it the hard way, which is kinda what I'd done on my version 1
>>>> PID controller: write the data from the PID to a file on the robot,
>>>> then do the GnuPlot visualisation either on an HDMI-connected monitor
>>>> on the Raspberry Pi or on my workstation computer. That took a
>>>> fair bit of effort so if Grafana is substantially easier and more
>>>> configurable I might give it a try. Good learning experience both for
>>>> my robotics as well as job skills too.
>>>> 
>>>>> I’m using Python to push data into the ES instance. There is a Python
>>>>> lib for easy access to ES. My robot records a binary stream of stats
>>>>> that I convert and upload offline, after I’ve collected data from a test run.
>>>> 
>>>> Wow, thanks for that clue -- Python to ElasticSearch would be a pretty
>>>> huge simplification if the ElasticSearch to Grafana connection were
>>>> similarly easy. But again, a good learning experience in either case.
>>>> 
>>>> If I were doing a microservice architecture between powerful services
>>>> on a fast network I might want to send all the log messages directly
>>>> to ElasticSearch, but considering this is running on a Raspberry Pi
>>>> over WiFi and I don't need the data to be live-streamed, I'll likely
>>>> think about how to load a completed log (as CSV, as Chris suggested)
>>>> after the fact.
>>>> 
>>>> Thanks much,
>>>> 
>>>> Murray
>>>> 
>>>> ...........................................................................
>>>> Murray Altheim <murray18 at altheim dot com>                       = =  ===
>>>> http://www.altheim.com/murray/                                     ===  ===
>>>>                                                                   = =  ===
>>>>    In the evening
>>>>    The rice leaves in the garden
>>>>    Rustle in the autumn wind
>>>>    That blows through my reed hut.
>>>>           -- Minamoto no Tsunenobu
>>>> 
>>>> _______________________________________________
>>>> DPRGlist mailing list
>>>> DPRGlist at lists.dprg.org
>>>> http://lists.dprg.org/listinfo.cgi/dprglist-dprg.org
>> 
>> -- 
>> 
>> ...........................................................................
>> Murray Altheim <murray18 at altheim dot com>                       = =  ===
>> http://www.altheim.com/murray/ <http://www.altheim.com/murray/>                                     ===  ===
>>                                                                   = =  ===
>>    In the evening
>>    The rice leaves in the garden
>>    Rustle in the autumn wind
>>    That blows through my reed hut.
>>           -- Minamoto no Tsunenobu
>> 
