For a flood warning system, my Raspberry Pi rings the bells in near-real-time, but it uses a NAS drive as a postbox: it writes data files out for a PC to do slower-time graphing and reporting, and receives various input data files back. Python on the Pi takes precisely 10 seconds to establish that the NAS drive right next to it is not currently available. I need that to happen in under a second per access attempt, otherwise the delays add up and the Pi fails to refresh the hardware watchdog in time. (The Pi performs tasks on a fixed cycle: every second, every 15 seconds (watchdog), every 75 seconds and every 10 minutes.) All disc access attempts are preceded by tests wrapped in try-except. But try-except doesn't help, because calls like os.path.exists() or open() both take 10 seconds to come back with a verdict, even when the NAS drive is powered down. It's as though there's a 10-second timeout way down in the comms protocol rather than up in the software.
Is there a way of telling try-except not to be so patient? If not, how can I get a more immediate indicator of whether the NAS drive is going to hold up the Pi at the next read/write, so that the Pi can give up and wait till the next cycle? I've done all the file queueing for that, but it's wasted if every check takes 10 seconds.
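One way to get a sub-second verdict is to push the blocking filesystem call into a worker thread and only wait a bounded time for its answer. This is a sketch, not code from the system above; the name nas_available and the half-second budget are my own choices:

```python
import os
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

# a single worker, so a stuck probe can't pile up threads
_executor = ThreadPoolExecutor(max_workers=1)

def nas_available(path, timeout=0.5):
    """Probe `path` in a worker thread; report False if no answer arrives
    within `timeout` seconds. The blocking call keeps running behind the
    scenes, but the time-critical caller is not held up by it."""
    future = _executor.submit(os.path.exists, path)
    try:
        return future.result(timeout=timeout)
    except FutureTimeout:
        return False
```

One caveat: while a 10-second probe is still running in the background, the single worker stays busy, so a follow-up call during those 10 seconds will also time out and return False, which is arguably the right answer anyway.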
Taking on MQTT at this stage would be a big change to this nearly-finished project. But your suggestion of decoupling the near-real-time Python from the NAS drive by using a second script is I think the way to go. If the Python disc interface commands wait 10 seconds for an answer, I can't help that. But I can stop it holding up the time-critical Python functions by keeping all time-critical file accesses local in Pi memory, and replicating whole files in both directions between the Pi and the NAS drive whenever they change. In fact I already have the opportunistic replicator code in Python - I just need to move it out of the main time-critical script into a separate script that will replicate the files. And the replicator Python script will do any waiting, rather than the time-critical Python script. The Pi scheduler will decouple the two scripts for me. Thanks for your help - I was beginning to despair!
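A minimal version of that replicator logic might look like this sketch (replicate_if_changed is a hypothetical name; the real replicator presumably handles both directions and whole directories):

```python
import os
import shutil

def replicate_if_changed(src, dst):
    """Copy src over dst when src is newer; return True if a copy happened.
    Any 10-second NAS stall is absorbed here, in the non-time-critical
    replicator script, never in the main time-critical loop."""
    try:
        if (not os.path.exists(dst)
                or os.path.getmtime(src) > os.path.getmtime(dst)):
            shutil.copy2(src, dst)  # copy2 preserves the modification time
            return True
    except OSError:
        pass  # NAS unavailable or mid-power-cycle; try again next cycle
    return False
```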
I'm new to Python and I've just created this script:
import os
import os.path
import time

while True:
    if os.path.isfile('myPathTo/shutdown.svg'):
        os.remove('myPathTo/shutdown.svg')
        time.sleep(1)
        os.system('cd C:\Windows\PSTools & psshutdown -d -t 0')
As you can see, this script is very short, and I think there is a way to make it less laggy. On my PC it is using about 30% of my processor:
(screenshot: Python process stats on my PC)
I don't really know why it is using so many resources; I need your help :)
A little explanation of the program :
I'm using IFTTT to put a file (shutdown.svg) on my Google Drive, which is synchronized to my PC, when I ask Google Home to shut down my PC.
When Python detects the file, it has to remove it and shut down the PC. I've added a delay between these actions to make sure the script does not check for the file too often, to reduce lag. Maybe 1 second is too short?
"I've added time between these actions to make sure the script does not check the file too many times to reduce lag"
This loop only sleeps for 1 second after the file is found, just before shutting down; until then it spins without ever sleeping. So move sleep(1) out of the if block.
Maybe 1 second is too short?
If you can, make this sleep time as long as possible.
If your only task is to shut down the PC, there are many other ways to watch for a trigger, such as a cron job that runs the script at regular intervals, or a lightweight server that listens for a request.
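Putting the advice above together, the corrected loop might look like this sketch (the 5-second interval is just one example of "as long as possible"):

```python
import os
import time

FLAG = 'myPathTo/shutdown.svg'  # path from the question

def poll_once(path):
    """Check for the flag file; remove it and report True when it exists."""
    if os.path.isfile(path):
        os.remove(path)
        return True
    return False

# sleep on every pass, not only when the file appears:
# while True:
#     if poll_once(FLAG):
#         os.system(r'cd C:\Windows\PSTools & psshutdown -d -t 0')
#     time.sleep(5)
```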
I'm trying to get a looping call to run every 2 seconds. Sometimes I get the desired functionality, but other times I have to wait up to ~30 seconds, which is unacceptable for my application's purposes.
I reviewed this SO post and found that LoopingCall might not be reliable for this by default. Is there a way to fix this?
My usage/reason for needing a consistent ~2 seconds:
The function I am calling scans an image (using CV2) for a dollar value and if it finds that amount it sends a websocket message to my point of sale client. I can't have customers waiting 30 seconds for the POS terminal to ask them to pay.
My source code is very long and not well commented as of yet, so here is a short example of what I'm doing:
from twisted.internet import reactor
from twisted.internet.task import LoopingCall

# scan the image for sales every 2 seconds
def scanForSale():
    print("Now Scanning for sale requests")

# retrieve a new image every 2 seconds
def getImagePreview():
    print("Loading Image From Capture Card")

lc = LoopingCall(scanForSale)
lc.start(2)
lc2 = LoopingCall(getImagePreview)
lc2.start(2)
reactor.run()
I'm using a Raspberry Pi 3 for this application, which is why I suspect it hangs for so long. Can I utilize multithreading to fix this issue?
Raspberry Pi is not a real time computing platform. Python is not a real time computing language. Twisted is not a real time computing library.
Any one of these by itself is enough to eliminate the possibility of a guarantee that you can run anything once every two seconds. You can probably get close but just how close depends on many things.
The program you included in your question doesn't actually do much. If this program can't reliably print each of the two messages once every two seconds then presumably you've overloaded your Raspberry Pi - a Linux-based system with multitasking capabilities. You need to scale back your usage of its resources until there are enough available to satisfy the needs of this (or whatever) program.
It's not clear whether multithreading will help - however, I doubt it. It's not clear because you've only included an over-simplified version of your program. I would have to make a lot of wild guesses about what your real program does in order to think about making any suggestions of how to improve it.
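If you do want to see where the time goes, one simple check (a sketch using only the standard library; `timed` is my own helper, not a Twisted API) is to wrap each LoopingCall target and log how long it blocks the reactor:

```python
import time

def timed(fn, interval=2.0):
    """Wrap a LoopingCall target so overruns become visible: any call
    that takes longer than `interval` seconds delays later iterations."""
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = fn(*args, **kwargs)
        elapsed = time.monotonic() - start
        if elapsed > interval:
            print(f"{fn.__name__} ran {elapsed:.1f}s, blocking the reactor")
        return result
    return wrapper

# usage: lc = LoopingCall(timed(scanForSale)); lc.start(2)
```

If the scan itself regularly takes longer than 2 seconds, no scheduler tweak will help; the work has to be made faster or moved off the reactor thread.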
I am having an issue with a Python script that is running on a Raspberry Pi. Frankly, the script initially runs perfectly fine and then after a certain period of time (typically >1 hour) the computer either freezes or shuts down. I am not sure if this is a software or a hardware issue. The only clue I have so far is the following error message that appeared one time when the computer froze:
[9798.371860] Unable to handle kernel paging request at virtual address e50b405c
How should this message be interpreted? What could be a good way to keep debugging the code? Any help is welcome, since I am fairly new to programming and have run out of ideas on how to troubleshoot this issue.
Here is also some background to what the Python code intends to do (not sure if it makes a difference though). In short, every other second it registers the temperature through a sensor, creates a JSON file and saves it, sends this JSON object through cURL (urllib) to a web API, receives a new JSON file, changes switches based on the data in this file, sleeps for 2 seconds and repeats this process.
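For what it's worth, the cycle described might be sketched like this (the endpoint and function names are placeholders, not the actual code); structuring each step this way, with an explicit timeout on the network call, at least rules out the script hanging forever on I/O:

```python
import json
import time
import urllib.request

API_URL = 'http://example.com/api/temperature'  # placeholder endpoint

def build_payload(reading):
    """Package one sensor reading as the JSON object sent to the web API."""
    return json.dumps({'temperature': reading, 'timestamp': time.time()})

def post_reading(payload):
    """POST the JSON and return the API's JSON reply (5-second timeout)."""
    req = urllib.request.Request(
        API_URL,
        data=payload.encode(),
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())

# main cycle (read_sensor / apply_switches are placeholders):
# while True:
#     reply = post_reading(build_payload(read_sensor()))
#     apply_switches(reply)
#     time.sleep(2)
```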
Thanks!
When I run any python script that doesn't even contain any code or imports that could access the internet in any way, I get 2 pythonw.exe processes pop up in my resource monitor under network activity. One of them is always sending more than receiving while the other has the same activity but the amount of sending vs receiving is reversed. The amount of overall activity is dependent on the file size, regardless of how many line are commented out. Even a blank .py document will create network activity of about 200 kb/s. The activity drops from its peak, which is as high as 15,000 kb/s for a file with 10,000 lines, to around zero after around 20 seconds, and then the processes quit on their own. The actual script has finished running long before the network processes stop.
Because the activity is dependent on file size I'm suspicious that every time I run a python script, the whole thing is being transmitted to a server somewhere else in the world.
Is this something that is built into Python, a virus that's infecting my computer, or just innocent activity that Python is supposed to produce?
If anyone doesn't have an answer but could check to see if this activity affects their own installation of python, that would be great. Thanks!
EDIT:
Peter Wood, to start the process just run any Python script from the editor; it runs on its own, at least for me. I'm on 2.7.8.
Robert B, I think you may be right, but why would the communication continue after the script has finished running?
I have a long-running Python process running headless on a Raspberry Pi (controlling a garden) like so:
from time import sleep

def run_garden():
    while 1:
        # do work
        sleep(60)

if __name__ == "__main__":
    run_garden()
The 60-second sleep period is plenty of time for any changes happening in my garden (humidity, air temp, turning on the pump, turning off the fan, etc.), but what if I want to manually override these things?
Currently, in my do-work loop, I first call out to another server where I keep config variables, and I can update those config variables via a web console. But it lacks any sort of real-time feel, because it relies on the 60-second loop (e.g. you might update the web console and then wait up to 45 seconds for the change to take effect).
The Raspberry Pi running run_garden() is dedicated to the garden, and it is basically the only thing taking up resources. So I know I have room to do something, I just don't know what.
Once the loop picks up the fact that a config var has been updated, it could do exponential backoff to keep checking for further interaction rather than waiting 60 seconds, but that doesn't feel like a whole lot better.
Is there a better way to basically jump into this long running process?
Listen on a socket in your main loop. Use a timeout (e.g. of 60 seconds, the time until the next garden update should be performed) on your socket read calls so you get back to your normal functionality at least every minute when there are no commands coming in.
If you need garden-tending updates to happen no faster than every minute you need to check the time since the last update, since read calls will complete significantly faster when there are commands coming in.
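As a sketch of that idea (make_server and wait_for_command are my own names, and the override protocol is assumed to be one line of text per connection):

```python
import socket
import time

def make_server(port=9000):
    """Open a TCP socket that the web console can send commands to."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('', port))
    srv.listen(1)
    return srv

def wait_for_command(srv, timeout):
    """Block up to `timeout` seconds for a manual-override command;
    return it as a string, or None when the timer expires."""
    srv.settimeout(max(timeout, 0.01))
    try:
        conn, _ = srv.accept()
        with conn:
            return conn.recv(1024).decode().strip() or None
    except socket.timeout:
        return None

# main loop sketch (handler names are placeholders):
# srv = make_server()
# next_update = time.monotonic()
# while True:
#     cmd = wait_for_command(srv, next_update - time.monotonic())
#     if cmd:
#         handle_override(cmd)
#     if time.monotonic() >= next_update:
#         do_garden_work()
#         next_update += 60
```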
Python's select module sounds like it might be helpful.
If you've ever used the unix analog (for example in socket programming maybe?), then it'll be familiar.
If not, here is the select section of a C sockets reference I often recommend. And here is what looks like a nice writeup of the module.
Warning: the first reference is specifically about C, not Python, but the concept of the select system call is the same, so the discussion might be helpful.
Basically, it allows you to tell it what events you're interested in (for example, socket data arrival, keyboard event), and it'll block either forever, or until a timeout you specify elapses.
If you're using sockets, then adding the socket and stdin to the list of events you're interested in is easy. If you're just looking for a way to "conditionally sleep" for 60 seconds unless/until a keypress is detected, this would work just as well.
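A minimal version of that "conditional sleep" (wait_readable is my own wrapper; note that on Windows, select only works on sockets, so the stdin variant is Unix-only):

```python
import select
import sys

def wait_readable(fileobj, timeout):
    """Block until `fileobj` has data to read or `timeout` seconds
    elapse; return True if something is waiting, False on timeout."""
    ready, _, _ = select.select([fileobj], [], [], timeout)
    return bool(ready)

# sleep up to 60 s, but wake as soon as a line is typed:
# if wait_readable(sys.stdin, 60):
#     command = sys.stdin.readline().strip()
```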
EDIT:
Another way to solve this would be to have your Raspberry Pi "register" with the server running the web console. This could involve a little extra work, but it would give you the real-time effect you're looking for.
Basically, the raspberry-pi "registers" itself, by alerting the server about itself, and the server stores the address of the device. If using TCP, you could keep a connection open (which might be important if you have firewalls to deal with). If using UDP you could bind the port on the device before registering, allowing the server to respond to the source address of the "announcement".
Once announced, when config options change on the server, one of two things usually happens:
A) You send a tiny "ping" (in the general sense, not the ICMP host detection protocol) to the device, alerting it that config options have changed. At this point the device would immediately request the full config set, acquiring the update with it.
B) You send the updated config option (or maybe the entire config set) back to the device. This decreases the number of messages between the device and server, but would probably take more work, as it seems like more of a deviation from your current setup.
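Option A might be sketched with two UDP sockets like this (the message contents and the register helper are my own invention):

```python
import socket

def register(server_addr, device_sock):
    """Announce the device; the server learns where to send pings from
    the packet's source address, so bind the device's port first."""
    device_sock.sendto(b'register', server_addr)

# server side, on receiving the announcement:
#     data, device_addr = server_sock.recvfrom(1024)   # b'register'
# later, when a config option changes:
#     server_sock.sendto(b'ping', device_addr)
# the device then requests the full config set over its normal channel
```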
Why not use an event-based loop instead of sleeping for a fixed amount of time?
That way your loop will only run when a change is detected, and it will always run when a change is detected (which is the point of your question).
You can do such a thing by using:
python event objects
Just wait for one or all of your event objects to be triggered and then run the loop. You can also wait for X events to be done, etc., depending on whether you expect one variable to be updated a lot.
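With a threading.Event, the 60-second sleep becomes an interruptible wait; something like this sketch (wait_for_change is my own wrapper, and the event would be set from whatever thread receives console updates):

```python
import threading

config_changed = threading.Event()  # set from the web-console handler

def wait_for_change(timeout=60):
    """Return True as soon as a change is signalled, else False after
    `timeout` seconds (the regular tending interval)."""
    if config_changed.wait(timeout):
        config_changed.clear()
        return True
    return False

# main loop sketch (handlers are placeholders):
# while True:
#     if wait_for_change(60):
#         reload_config()
#     tend_garden()
```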
Or even a system like:
broadcasting events