Hope you're well and thanks for reading.
I've been revisiting an old project that uses plotly to stream data out of MySQL, with Python in between the two. I've never had great luck with plot.ly (which I'm sure says more about my understanding than their platform); streams/iframes seem to stall over time and I'm not adept enough to troubleshoot it completely.
My current symptom is this: Plots arbitrarily stall - I'm pushing data, but the iframe isn't updating.
The current solution is: Refresh the browser every X minutes.
The solution works, but it's aggravating, because I don't understand why the visual is stalling in the first place (is it me, is it them, etc.).
As I was reviewing some of the documentation, specifically this link:
https://plot.ly/streaming/
I noticed they call out NOT continually opening and closing streams, and say that heartbeats should be sent every so often to keep things alive/fresh.
Here's what I'm currently calling every 10 minutes:
pullData(mysql)
format data
open(plotly.stream1)
write data to plotly.stream1
close(plotly.stream1)
open(plotly.stream2)
write data to plotly.stream2
close(plotly.stream2)
Based on what I am reading, it sounds like I should actually execute the script once on startup and keep the streams open, but heartbeat() them every 15 or so seconds between actual write() calls, like this:
open(plotly.stream1)
open(plotly.stream2)

every 10 minutes:
    pullData(mysql)
    format data
    write data to plotly.stream1
    write data to plotly.stream2

while not pulling and writing:
    every 15 seconds:
        heartbeat(plotly.stream1)
        heartbeat(plotly.stream2)

if error:
    close(plotly.stream1)
    close(plotly.stream2)
Please excuse the pseudo-mess, I'm just trying to convey an idea. Anyone have any advice? I started down my original path of opening, writing, and closing based on the streaming example, but that's a one-time write. The other example is a constant stream of data. I'm somewhere in between those two.
Furthermore, is this train of thought even related to the iframe not refreshing? Part of me believes the symptom is unrelated to my idea: the data is getting to plot.ly fine; it's my session that's expiring, or the iframe "connection" that's going stale. If the symptom is unrelated, at least I'll have made my source code a bit cleaner and more appropriate.
Any advice is greatly appreciated!
Thanks
-justin
Plotly will close a stream that is inactive for more than 60 seconds. You must send a newline down the streaming channel (a heartbeat) to keep it open. I recommend every 30 seconds.
Your first code example may not work as expected because the client-side websocket (which connects the plot to our system) may close when your source stream (the stream that connects your script to our system) exits. When you disconnect a source stream, a signal is sent to our system that lets it know your stream is now inactive. If a new source stream does not reconnect quickly, we close the connected client websockets.
Now, when your script gets more data and opens a new stream, it will successfully stream data to our system, but the client-side websocket, now closed, will not pass the data on to the plot. We cache a certain number of points for you behind the scenes so that when you refresh the page the websocket reconnects and you get the last n points (where n is set by max-points in the API call).
This is why sending the heartbeat is important. We keep the source stream open and that in turn ensures that all the connected Clients keep their websockets open.
This isn't necessarily the most robust behaviour for a streaming platform to have, and we will likely improve it in the future. For now, though, you will likely see better results by implementing the pattern in your second example.
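As a concrete sketch of that second pattern, using the legacy plotly streaming API's Stream object; pull_and_format() is a hypothetical stand-in for your MySQL pull and formatting step, and the stream tokens are placeholders:

import time
import plotly.plotly as py

stream1 = py.Stream('stream_token_1')   # placeholder tokens
stream2 = py.Stream('stream_token_2')
stream1.open()
stream2.open()

last_write = 0
try:
    while True:
        if time.time() - last_write >= 600:    # write every 10 minutes
            x, y1, y2 = pull_and_format()      # hypothetical: your MySQL pull + formatting
            stream1.write(dict(x=x, y=y1))
            stream2.write(dict(x=x, y=y2))
            last_write = time.time()
        else:
            stream1.heartbeat()                # a newline that keeps the stream open
            stream2.heartbeat()
        time.sleep(15)                         # well under the 60-second idle limit
finally:
    stream1.close()
    stream2.close()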
Hope that helped!
Related
I am trying to simulate a communication protocol where I follow a pattern, so I constantly loop, looking for the same set of characters, and reply with information. I'm using an RS-232 adapter, and the protocol I am simulating is asynchronous and half-duplex; the rx/tx lines are tied together by design, which causes a sort of echo when reading after writing.
That said, I need to be able to clear the input buffer after every write I send out, in order to avoid reading back what I just wrote. But whenever I use reset_input_buffer() it does not clear the last message I sent out. I have tried to fix this a couple of ways: using reset_output_buffer() together with reset_input_buffer(), calling reset_input_buffer() twice, and using flush(). None of these methods make any difference; the only other thing that works to clear the buffer is closing and immediately reopening the port, but this causes a delay that messes with the timing, which is critical at certain points.
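(For reference, a minimal sketch of one standard way around a half-duplex echo: read back exactly as many bytes as were just written, consuming the echo instead of trying to flush it. The port name and baud rate here are placeholders.)

import serial

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=0.1)   # placeholder port/baud

def write_and_discard_echo(msg):
    ser.write(msg)
    ser.flush()          # block until the bytes have actually left the UART
    ser.read(len(msg))   # consume the echoed copy of what was just sent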
I'm open to any suggestions, please help!
I have a similar question to this one, but the solution there did not apply to my problem. I can connect to and send commands to my Keysight B1500 mainframe via pyvisa/GPIB. The B1500 is connected via Keysight's IO tool, "Connection Expert":
import pyvisa

rman = pyvisa.ResourceManager()
keyS = rman.open_resource('GPIB0::18::INSTR')
keyS.timeout = 20000        # time in ms
keyS.chunk_size = 8204800   # read chunk size in bytes
keyS.write('*rst; status:preset; *cls')
print('variable keyS is being assigned to ', keyS.query('*IDN?'))
Using this pyvisa object I can query without issues (*IDN? above provides the expected output), and I have also run and extracted data from a different type of IV curve on the same tool.
However, when I try to run a pulsed voltage sweep (change voltage of pulses as function of time and measure current) I do not get the measured data out from the tool. I can hook the output lead from the B1500 to an oscilloscope and can see that my setup has worked and the tool is behaving as expected, right up until I try to extract sweep data.
Again, I can run a standard non-pulsed sweep on the tool and the data extraction works fine using [pyvisaobject].read_raw() - so something is different with the way I'm pulsing the voltage.
What I'm looking for is a way to interrogate the connection in cases where the data transfer is unsuccessful.
Here, in no particular order, are the ways I've tried to extract data. These methods are suggested in this link:
keyS.query_ascii_values('CURV?')
or
keyS.read_ascii_values()
or
keyS.query_binary_values('CURV?')
or
keyS.read_binary_values()
This link from the vendor does cover the extraction of data, but it also doesn't yield data in the read statement in the case of a pulsed voltage sweep:
myFieldFox.write("TRACE:DATA?")
ff_SA_Trace_Data = myFieldFox.read()
Also tried (based on tab autocompletion in IPython):
read_raw() # This is the one that works with non-pulsed sweep
and
read_bytes(nbytes)
The suggestion from @Paul-Cornelius is a good one; I had to include an *OPC? to get the previous data transfer to work as well. So right before I attempt the data transfer, I send these lines:
rep = keyS.query('NUB?')
keyS.query('*OPC?')
print(rep,'AAAAAAAAAAAAAAAAAAAAA') # this line prints!
mretholder = keyS.read_raw() # system hangs here!
In all the cases the end result is the same - I get a timeout error:
pyvisa.errors.VisaIOError: VI_ERROR_TMO (-1073807339): Timeout expired before operation completed.
The tracebacks for all of these show that they are all using the same basic framework from:
chunk, status = self.visalib.read(self.session, size)
Hoping someone has seen this before, or at least has some ideas on how to troubleshoot. Thanks in advance!
I don't have access to that instrument, but to read a waveform from a Tektronix oscilloscope I had to do the following (pyvisa module):
On obtaining the Resource, I do this:
resource.timeout = 10000
In some cases, I have to wait for a command to complete before proceeding like this:
resource.query("*opc?")
To transfer a waveform from the scope to the PC I do this:
ascii_waveform = resource.query("wavf?")
The "wavf?" query is specifically for this instrument, but the "*opc?" query is generic ("wait for operation complete"), and so is the method for setting the timeout parameter.
There is a lot of information in the user's guide (on the same site you have linked to) about reading data from various devices. I have used the visa library a few times on different devices, and it always requires some fiddling to get it to work.
It looks like you don't have any time delays in your program. In my experience they are almost always necessary, somehow, either by using the resource.timeout feature, the *opc? query, or both.
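Putting the timeout and *OPC? pieces together, an untested outline for the B1500 case might look like the following. The address matches the question; whether read_raw() is the right extraction call for a pulsed sweep is exactly what remains in doubt:

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource('GPIB0::18::INSTR')
inst.timeout = 20000            # ms; give slow sweeps time to finish

# ... configure and trigger the pulsed sweep here ...

inst.query('*OPC?')             # wait for the instrument to report completion
data = inst.read_raw()          # then attempt to pull the measurement buffer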
I am having an issue with a Python script that is running on a Raspberry Pi. Frankly, the script initially runs perfectly fine and then after a certain period of time (typically >1 hour) the computer either freezes or shuts down. I am not sure if this is a software or a hardware issue. The only clue I have so far is the following error message that appeared one time when the computer froze:
[9798.371860] Unable to handle kernel paging request at virtual address e50b405c
How should this message be interpreted? What could be a good way to keep debugging the code? Any help is relevant, since I am fairly new to programming and have run out of ideas on how to troubleshoot this issue.
Here is also some background on what the Python code intends to do (not sure if it makes a difference, though). In short, every other second it registers the temperature through a sensor, creates a JSON file and saves it, sends this JSON object through cURL (urllib) to a web API, receives a new JSON file back, changes switches based on the data in that file, sleeps for 2 seconds, and repeats the process.
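For reference, a rough, hedged reconstruction of a loop like the one described (the endpoint, sensor helper, and switch helper are all hypothetical stand-ins, not the actual code); pinning it down this way can help isolate whether the freeze happens in the network call, the sensor read, or the switch handling:

import json
import time
import urllib.request

API_URL = 'http://example.com/api'          # placeholder endpoint

while True:
    payload = json.dumps({'temp': read_temperature()}).encode()   # read_temperature() is hypothetical
    req = urllib.request.Request(API_URL, data=payload,
                                 headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req, timeout=10) as resp:         # a timeout avoids hanging forever
        reply = json.load(resp)
    apply_switches(reply)                    # hypothetical: set switches from the reply
    time.sleep(2)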
Thanks!
I recently acquired a GoPro Hero 3. It's working fine, but when I attempt to stream live video/audio it glitches every now and then.
Initially I just used VLC to open the m3u8 file; however, when that was glitchy I downloaded the Android app and attempted to stream over that. It was a little better on the app.
I used Wireshark, and I think the cause is that it's simply not transferring/buffering fast enough. I tried just grabbing everything with wget in a loop; it got through 3 loops before it either caught up (possible, but I don't think so... though I may double-check that) or fell behind and hence timed out/hung.
There is also a delay in the image, but I can live with that.
I have tried lowering the resolution/frame rate, but I'm not sure it's actually doing anything, as I can't tell any difference. I think those may just be the settings for recording on the GoPro. Either way, it didn't work.
Essentially, I am looking for any possible methods for removing this 'glitchiness'.
My current plan is to attempt writing something in Python to get the files over UDP (no TCP overhead).
I'll just add a few more details/symptoms:
The GoPro is using the Apple m3u8 streaming format.
At any one time there are 16 .ts files in the folder (26 kB each).
These get overwritten in a loop (circular buffer).
When I stream in VLC:
Approx. 1 s delay; it streams fine for ~0.5 s, stops for a little less than that, then repeats.
What I think is happening is that the file it's trying to transfer gets overwritten, which causes it to time out.
Over the Android app:
Less delay and shorter 'timeouts', but they're still there.
I want to write a Python script to try to get a continuous image; something like the sketch below. The files are small enough that they should fit in a single UDP packet (I think... 65 kB-ish, right?).
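For instance, a Python version of the wget-in-a-loop experiment that tails the playlist and appends only new segments to one file. The 10.5.5.9 address and amba.m3u8 path are the commonly reported Hero 3 endpoints, so treat them as assumptions and verify against your camera:

import time
import urllib.request

BASE = 'http://10.5.5.9:8080/live'   # assumed camera address/path; verify on your unit
last_seq = -1
out = open('capture.ts', 'wb')       # raw MPEG-TS segments can simply be concatenated

while True:
    playlist = urllib.request.urlopen(BASE + '/amba.m3u8', timeout=2).read().decode()
    lines = playlist.splitlines()
    seq = 0
    for l in lines:
        if l.startswith('#EXT-X-MEDIA-SEQUENCE:'):
            seq = int(l.split(':', 1)[1])       # sequence number of the first listed segment
    segments = [l for l in lines if l and not l.startswith('#')]
    for i, name in enumerate(segments):
        if seq + i > last_seq:                  # fetch only segments we haven't saved yet
            out.write(urllib.request.urlopen(BASE + '/' + name, timeout=2).read())
            last_seq = seq + i
    time.sleep(0.25)                            # poll faster than the 16-file buffer rotates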
Is there anything I could change in terms of wifi settings on my laptop to improve it too?
I.e., somehow dedicate it to the stream?
Thanks,
Stephen
I've been working on creating a GoPro API for Node.js recently and found the device very glitchy too. It's much more stable after installing the latest GoPro firmware (3.0.0).
As for streaming, I couldn't get around the wifi latency and went for a record-and-copy approach.
I have a long-running Python process running headless on a Raspberry Pi (controlling a garden), like so:
from time import sleep

def run_garden():
    while 1:
        # do work
        sleep(60)

if __name__ == "__main__":
    run_garden()
The 60-second sleep period is plenty of time for any changes happening in my garden (humidity, air temp, turning on a pump, turning off a fan, etc.), BUT what if I want to manually override these things?
Currently, in my do-work loop, I first call out to another server where I keep config variables, and I can update those config variables via a web console. But it lacks any sort of real-time feel, because it relies on the 60-second loop (e.g. you might update the web console and then wait 45 seconds for the desired effect to take place).
The Raspberry Pi running run_garden() is dedicated to the garden, and it is basically the only thing taking up resources, so I know I have room to do something; I just don't know what.
Once the loop picks up the fact that a config var has been updated, it could do exponential backoff to keep checking for interaction rather than waiting 60 seconds, but that just doesn't feel like a whole lot better.
Is there a better way to basically jump into this long running process?
Listen on a socket in your main loop. Use a timeout (e.g. 60 seconds, the time until the next garden update should be performed) on your socket read calls so that you get back to your normal functionality at least every minute when no commands are coming in.
If you need garden-tending updates to happen no more often than every minute, you'll also need to check the time since the last update, since the read calls will complete significantly faster when commands are coming in.
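Here's a rough sketch of that pattern; the port number and both helper functions are placeholders:

import socket
import time

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('', 9000))                  # placeholder port
srv.listen(1)
srv.settimeout(60)                    # accept() gives up after 60 s so garden work still runs

last_update = 0
while True:
    try:
        conn, _ = srv.accept()        # returns early whenever a command connects
        conn.settimeout(5)            # don't let a stalled client block the loop
        handle_command(conn.recv(1024))   # hypothetical command handler
        conn.close()
    except socket.timeout:
        pass
    if time.time() - last_update >= 60:   # don't tend the garden more than once a minute
        do_garden_work()                  # hypothetical: the existing loop body
        last_update = time.time()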
Python's select module sounds like it might be helpful.
If you've ever used the unix analog (for example in socket programming maybe?), then it'll be familiar.
If not, here is the select section of a C sockets reference I often recommend. And here is what looks like a nice writeup of the module.
Warning: the first reference is specifically about C, not Python, but the concept of the select system call is the same, so the discussion might be helpful.
Basically, it allows you to tell it what events you're interested in (for example, socket data arrival, keyboard event), and it'll block either forever, or until a timeout you specify elapses.
If you're using sockets, then adding the socket and stdin to the list of events you're interested in is easy. If you're just looking for a way to "conditionally sleep" for 60 seconds unless/until a keypress is detected, this would work just as well.
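As a minimal sketch of the "conditional sleep" version (Unix-only for stdin; a socket object would slot into the same list):

import select
import sys

while True:
    ready, _, _ = select.select([sys.stdin], [], [], 60)   # wake on input or after 60 s
    if ready:
        command = sys.stdin.readline().strip()             # an override arrived early
        print('override received:', command)
    run_garden_update()                                    # hypothetical: the normal 60-second work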
EDIT:
Another way to solve this would be to have your Raspberry Pi "register" with the server running the web console. This could involve a little extra work, but it would give you the real-time effect you're looking for.
Basically, the Raspberry Pi "registers" itself by alerting the server about itself, and the server stores the address of the device. If using TCP, you could keep a connection open (which might be important if you have firewalls to deal with). If using UDP, you could bind the port on the device before registering, allowing the server to respond to the source address of the "announcement".
Once announced, when config options change on the server, one of two things usually happens:
A) You send a tiny "ping" (in the general sense, not the ICMP host-detection protocol) to the device, alerting it that config options have changed. At this point the device would immediately request the full config set, acquiring the update with it. A sketch follows this list.
B) You send the updated config option (or maybe the entire config set) back to the device. This decreases the number of messages between the device and the server, but would probably take more work, as it seems like more of a deviation from your current setup.
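A hedged sketch of option A over UDP, with made-up addresses and message formats:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 9001))                                 # bind before registering
sock.sendto(b'register', ('console.example', 9000))   # announce ourselves to the web console
sock.settimeout(60)

while True:
    try:
        ping, _ = sock.recvfrom(64)    # server pokes us whenever config changes
        refresh_config()               # hypothetical: immediately re-pull the full config set
    except socket.timeout:
        pass
    do_garden_work()                   # hypothetical: the regular update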
Why not use an event-based loop instead of sleeping for a certain amount of time?
That way your loop will only run when a change is detected, and it will always run when a change is detected (which is the point of your question?).
You can do such a thing by using:
python event objects
Just wait for one or all of your event objects to be triggered and run the loop. You can also wait for X events to be done, etc., depending on whether you expect one variable to be updated a lot.
Or even a system like:
broadcasting events
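For instance, with Python's threading.Event the wait() call doubles as the 60-second sleep, but returns immediately when another thread (say, a small listener for the web console) sets the flag. A minimal sketch, with do_garden_work() as a hypothetical stand-in for the loop body:

import threading

config_changed = threading.Event()

def run_garden():
    while True:
        do_garden_work()                        # hypothetical: the existing loop body
        if config_changed.wait(timeout=60):     # sleeps up to 60 s, wakes early on set()
            config_changed.clear()              # reset so the next wait blocks again

# elsewhere, e.g. in a listener thread reacting to the web console:
# config_changed.set()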