Python BackgroundScheduler program crashing when run from another module

I am trying to build an application that will run a bash script every 10 minutes. I am using apscheduler to accomplish this, and when I run my code from the terminal it works like clockwork. However, when I try to run the code from another module it crashes. I suspect that the calling module is waiting for the "schedule" module to finish and then crashes when that never happens.
Error output:
/bin/bash: line 1: 13613 Killed ( python ) < /tmp/vIZsEfp/26
shell returned 137
Function that calls the scheduler:
def shedual_toggled(self, widget):
    prosessSchedular.start_background_checker()
Schedule Program
counter = 0

def schedul_check():
    """Set up to call the process checker every 10 minutes."""
    global counter
    print "%s check ran" % (counter)
    counter += 1
    app = prosessCheckerv3.call_bash()  # calls the bash file
    if app == False:
        print "error with bash"
        return False
    else:
        prosessCheckerv3.build_snap_shot(app)

def start_background_checker():
    scheduler = BackgroundScheduler()
    scheduler.add_job(schedul_check, 'interval', minutes=10)
    scheduler.start()
    while True:
        time.sleep(2)

if __name__ == '__main__':
    start_background_checker()
This program simply calls another one every 10 minutes. As a side note, I have been trying to stay as far away from multithreading as possible, but if that is required, so be it.

Well, I managed to figure it out myself. The issue is that GTK+ is not thread safe, so the timed module needs to either be run in another thread, or else you can release/enter the GTK thread lock before/after calling the module.
I just did it like this.
def shedual_toggeld(self, widget):
    onOffSwitch = widget.get_active()
    # After main GTK has logically finished all GUI work, run a thread on the toggle button
    thread = threading.Thread(target=self.call_schedual, args=(onOffSwitch,))
    thread.daemon = True
    thread.start()

def call_schedual(self, onOffSwitch):
    if onOffSwitch == True:
        self.sch.start_background_checker()
    else:
        self.sch.stop_background_checker()
This article goes through it in more detail. Hopefully someone else will find this useful.
http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness
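For completeness, stop_background_checker() is called above but never shown. A minimal sketch of what it might look like, assuming the scheduler object is kept as an attribute (the ProcessScheduler class name here is made up for illustration); shutdown() is APScheduler's documented way to stop a BackgroundScheduler, and the keep-alive sleep loop becomes unnecessary because the GTK main loop keeps the process alive:

from apscheduler.schedulers.background import BackgroundScheduler

class ProcessScheduler(object):
    def __init__(self):
        self.scheduler = None

    def start_background_checker(self):
        self.scheduler = BackgroundScheduler()
        self.scheduler.add_job(schedul_check, 'interval', minutes=10)
        self.scheduler.start()
        # No while/sleep keep-alive needed here: BackgroundScheduler runs
        # its jobs in its own threads, and GTK's main loop keeps the
        # process alive.

    def stop_background_checker(self):
        if self.scheduler is not None:
            self.scheduler.shutdown(wait=False)  # stop without waiting for running jobs
            self.scheduler = None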

Related

Creating a Flag file

I'm relatively new to Python, so please forgive my early level of understanding!
I am working to create a kind of flag file. Its job is to monitor a Python executable: the flag file is constantly running and prints "Start" when the executable starts, "Running" while it runs, and "Stop" when it stops or crashes. If a crash occurs, I want it to be able to restart the script. So far I have this down for the restart:
from subprocess import run
from time import sleep

# Path and name to the script you are trying to start
file_path = "py"
restart_timer = 2

def start_script():
    try:
        # Make sure the 'python' command is available
        run("python " + file_path, check=True)
    except:
        # Script crashed, let's restart it!
        handle_crash()

def handle_crash():
    sleep(restart_timer)  # Restarts the script after 2 seconds
    start_script()

start_script()
How can I implement this along with a flag file?
Not sure what you mean by "flag", but this minimally achieves what you want.
Main file main.py:
import subprocess
import sys
from time import sleep

restart_timer = 2
file_path = 'sub.py'  # file name of the other process

def start():
    try:
        # sys.executable -> same python executable
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        sleep(restart_timer)
        return True
    else:
        return False

def main():
    print("starting...")
    monitor = True
    while monitor:
        monitor = start()

if __name__ == '__main__':
    main()
Then the process that gets spawned, called sub.py:
from time import sleep
sleep(1)
print("doing stuff...")
# comment out to see change
raise ValueError("sub.py is throwing error...")
Put those files into the same directory and run it with python main.py.
You can comment out the raising of the error in sub.py to see the main script terminate normally.
On a larger note, this example is not claiming to be a good way to achieve the robustness you need; it is just the minimal version.
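As for the "flag file" part of the question, one hypothetical reading is a status file that the monitor keeps updated. A minimal sketch of main.py extended that way (the status.flag file name and the write_flag helper are made up for illustration):

import subprocess
import sys
from time import sleep

restart_timer = 2
file_path = 'sub.py'
flag_path = 'status.flag'  # hypothetical location of the flag file

def write_flag(text):
    # Overwrite the flag file with the current status.
    with open(flag_path, 'w') as f:
        f.write(text)

def start():
    write_flag('Running')
    try:
        subprocess.run([sys.executable, file_path], check=True)
    except subprocess.CalledProcessError:
        write_flag('Stop')  # the child crashed; pause, then ask for a restart
        sleep(restart_timer)
        return True
    else:
        return False

def main():
    write_flag('Start')  # the monitored script is about to be launched
    monitor = True
    while monitor:
        monitor = start()
    write_flag('Stop')   # the child exited normally; monitoring ends

if __name__ == '__main__':
    main()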

Python multiprocessing.Process calls join by itself

I have this code:
class ExtendedProcess(multiprocessing.Process):
    def __init__(self):
        super(ExtendedProcess, self).__init__()
        self.stop_request = multiprocessing.Event()

    def join(self, timeout=None):
        logging.debug("stop request received")
        self.stop_request.set()
        super(ExtendedProcess, self).join(timeout)

    def run(self):
        logging.debug("process has started")
        while not self.stop_request.is_set():
            print "doing something"
        logging.debug("proc is stopping")
When I call start() on the process it should run forever, since self.stop_request is not set. But after a few milliseconds join() is called by itself, breaking run. What is going on!? Why is join being called by itself?
Moreover, when I start a debugger and step through line by line, it suddenly works fine... What am I missing?
OK, thanks to ely's answer, the reason hit me. There is a race condition:

1. A new process is created.
2. As the child is starting up, about to run logging.debug("process has started"), the main function reaches its end.
3. The main function exits, and at interpreter exit Python joins all remaining child processes with join().
4. Since the child never actually reached "while not self.stop_request.is_set()", the overridden join() runs self.stop_request.set(). Now stop_request is set and the code exits.
As mentioned in the updated question, this is because of a race condition. Below I put an initial example highlighting a simplistic race condition where the race is against the overall program exit, but this could also be caused by other types of scope exits or other general race conditions involving your process.
I copied your class definition and added some "main" code to run it, here's my full listing:
import logging
import multiprocessing
import time

class ExtendedProcess(multiprocessing.Process):
    def __init__(self):
        super(ExtendedProcess, self).__init__()
        self.stop_request = multiprocessing.Event()

    def join(self, timeout=None):
        logging.debug("stop request received")
        self.stop_request.set()
        super(ExtendedProcess, self).join(timeout)

    def run(self):
        logging.debug("process has started")
        while not self.stop_request.is_set():
            print("doing something")
            time.sleep(1)
        logging.debug("proc is stopping")

if __name__ == "__main__":
    p = ExtendedProcess()
    p.start()
    while True:
        pass
The above code listing runs as expected for me using both Python 2.7.11 and 3.6.4. It loops infinitely and the process never terminates:
ely@eschaton:~/programming$ python extended_process.py
doing something
doing something
doing something
doing something
doing something
... and so on
However, if I instead use this code in my main section, it exits right away (as expected):
if __name__ == "__main__":
    p = ExtendedProcess()
    p.start()
This exits because the interpreter reaches the end of the program, which in turn triggers the automatic destruction of the p object as it goes out of scope of the whole program.
Note this could also explain why it works for you in the debugger. That is an interactive programming session, so after you start p, the debugger environment allows you to wait around and inspect it ... it would not be automatically destroyed unless you somehow invoked it within some scope that is exited while stepping through the debugger.
Just to verify the join behavior too, I also tried with this main block:
if __name__ == "__main__":
    log = logging.getLogger()
    log.setLevel(logging.DEBUG)

    p = ExtendedProcess()
    p.start()

    st_time = time.time()
    while time.time() - st_time < 5:
        pass

    p.join()
    print("Finished!")
and it works as expected:
ely@eschaton:~/programming$ python extended_process.py
DEBUG:root:process has started
doing something
doing something
doing something
doing something
doing something
DEBUG:root:stop request received
DEBUG:root:proc is stopping
Finished!

Signal handling in loop with sleep python 2.7

This might be a simple situation that I expect many would have encountered.
I have a simple Python program that does something and then sleeps for some time, in an infinite loop. I want to use signals to make this program exit gracefully on a SIGHUP. Now, when a signal is sent to callee.py while it is sleeping, the program exits immediately, whereas I expect it to finish the sleep and then exit.
Is there any workaround to bypass this behavior? I am also open to any other methods through which I can achieve this.
Note: This works as expected with Python 3, but I cannot port my existing module, which is in Python 2.7, to 3 right now.
This is the code I have:
callee.py
import signal
import time

stop_val = False

def should_stop(signal, frame):
    print('received signal to exit')
    global stop_val
    stop_val = True

def main():
    while not stop_val:
        signal.signal(signal.SIGTERM, should_stop)
        # do something here
        print('Before sleep')
        time.sleep(300)
        print('after sleep')

caller.py
import os

pid = xxx  # pid of the running callee.py
os.system('kill -15 %s' % pid)
I ran into the same issue today. Here is a simple wrapper that mimics the Python 3 behaviour:
import time

def uninterruptable_sleep(seconds):
    end = time.time() + seconds
    while True:
        now = time.time()  # read the clock once per iteration to avoid race conditions
        if now >= end:
            break
        time.sleep(end - now)
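A minimal sketch of how callee.py could use this wrapper (assuming the wrapper above is pasted in or imported; note the handler is installed once, before the loop):

import signal
import time

stop_val = False

def should_stop(signum, frame):
    global stop_val
    print('received signal to exit')
    stop_val = True

def main():
    signal.signal(signal.SIGTERM, should_stop)  # install the handler once
    while not stop_val:
        print('Before sleep')
        uninterruptable_sleep(300)  # completes the full sleep even if SIGTERM arrives mid-way
        print('after sleep')

if __name__ == '__main__':
    main()

With this in place, a kill -15 during the sleep runs should_stop immediately, but the loop only observes stop_val after the remaining sleep time has elapsed, which is the behaviour the question asks for.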

Endless Application in Python

I have a Python script that I want to always run in the background. What the application does is go to an Oracle database, check if there is a message to be displayed to the user, and if there is, use pynotify to display a notification.
I tried using a Timer object, but it only invokes the method once, after the given time. I want it to be invoked repeatedly, once every interval.
if __name__ == '__main__':
    applicationName = "WEWE"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    t = threading.Timer(5.0, runWorks)
    t.start()
Will doing this work and is there a better way?
if __name__ == '__main__':
    applicationName = "WEEWRS"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    while True:
        t = threading.Timer(5.0, runWorks)
        t.start()
But that gave me another problem.
thread.error: can't start new thread
(r.py:12227): GLib-ERROR **: creating thread 'gdbus': Error creating thread: Resource temporarily unavailable
I solved the problem by reducing the creation of threads. The below error -
thread.error: can't start new thread
(r.py:12227): GLib-ERROR **: creating thread 'gdbus': Error creating thread: Resource temporarily unavailable
comes when there is a lack of resources. Below is the corrected code.
if __name__ == '__main__':
    applicationName = "DSS POS"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)

    flagContinous = True
    timeout = 5

    # This loop will continuously keep the application in the background
    while flagContinous:
        time.sleep(timeout)
        runWorks()
I also used a lock file so that the script won't run multiple times:
import os
import sys
import time
import pynotify

pid = str(os.getpid())
pidfile = "/tmp/mydaemon.pid"

# If we have a lock already, block the program
if os.path.isfile(pidfile):
    print "%s already exists, exiting" % pidfile
    sys.exit()
else:
    file(pidfile, 'w').write(pid)

# Do all the work
applicationName = "DSS POS"

# Initialization of the Notification library
if not pynotify.init(applicationName):
    sys.exit(1)

# Controls for the application
flagContinous = True
timeout = 5

# This loop will continuously keep the application in the background
while flagContinous:
    time.sleep(timeout)
    runWorks()

# Release the lock file
os.unlink(pidfile)
It is always better to use this simple daemon script:
http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
It does a very good job!
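As an aside, if you did want to stay with threading.Timer instead of a sleep loop, the usual pattern is to re-arm the timer from inside the callback, so there is only ever one pending timer thread; a minimal sketch (run_works_repeatedly is a made-up helper wrapping the runWorks from the question):

import threading
import time

timeout = 5  # seconds between runs

def run_works_repeatedly():
    runWorks()  # the periodic work from the question
    # Re-arm: schedule the next run only after this one finishes,
    # so at most one timer thread is ever pending.
    t = threading.Timer(timeout, run_works_repeatedly)
    t.daemon = True
    t.start()

run_works_repeatedly()  # first run happens immediately
while True:             # keep-alive; in the real app the GTK/GLib
    time.sleep(60)      # main loop plays this role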

How to prevent a process from terminating on a KeyboardInterrupt?

I've been messing around with a Django project.
What I want to achieve is the Django project starting up in another process while the parent process runs a load of arbitrary code I have written (the backend of my project). Obviously, the Django process and parent process communicate. I'd like a dictionary to be read and written to by both processes.
I have the following code, based upon examples from here:
#!/usr/bin/env python
from multiprocessing import Process, Manager
import os
import time
from dj import manage

def django(d, l):
    print "starting django"
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()
    manage.start()

def stop(d, l):
    print "stopping"
    print d
    print l

if (__name__ == '__main__'):
    os.system('clear')
    print "starting backend..."
    time.sleep(1)
    print "backend start complete."

    manager = Manager()
    d = manager.dict()
    l = manager.list(range(10))

    p = Process(target=django, args=(d, l))
    p.start()

    try:
        p.join()
    except KeyboardInterrupt:
        print "interrupt detected"
        stop(d, l)
When I hit CTRL+C to kill the Django process, I'm seeing the Django server shut down, and stop() being called. Then what I want to see is the dictionary, d, and list, l, being printed.
Output is:
starting backend...
backend start complete.
starting django
Validating models...
0 errors found
Django version 1.3, using settings 'dj.settings'
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
^Cinterrupt detected
stopping
<DictProxy object, typeid 'dict' at 0x141ae10; '__str__()' failed>
<ListProxy object, typeid 'list' at 0x1425090; '__str__()' failed>
It can't find the dictionary or list after the CTRL+C event. Has the Manager process been terminated when the SIGINT is issued? If it has, is there any way to stop it from terminating there, so that it terminates with the main process instead?
I hope this makes sense.
Any help gratefully received.
OK, as far as I can see there is no possibility to simply ignore the exception. When you raise one, you always go straight into an except block if there is one. What I'm proposing here is something that will restart your Django application on each ^C, but note that some back door for exiting should be added.
In theory, you could wrap each line in a try..except block, and that would act like a restart of each line, which would be less visible than a restart of the whole script. If anyone finds a really working solution, I will be the first one to upvote it.
You can move everything inside your if (__name__ == '__main__'): into a main function and leave something like this:
def main():
    pass  # all the code...

if (__name__ == '__main__'):
    while True:
        try:
            main()
        except KeyboardInterrupt:
            pass
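As for the back door mentioned above, one hypothetical variant is to exit only when two ^C presses arrive within a short window (the EXIT_WINDOW name is made up for illustration):

import time

EXIT_WINDOW = 2  # seconds; a second ^C within this window really exits

if (__name__ == '__main__'):
    last_interrupt = 0
    while True:
        try:
            main()
        except KeyboardInterrupt:
            now = time.time()
            if now - last_interrupt < EXIT_WINDOW:
                break  # two ^C in quick succession: exit for real
            last_interrupt = now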
