How to prevent a process from terminating on a KeyboardInterrupt? - python

I've been messing around with a Django project.
What I want to achieve is the Django project starting up in another process while the parent process initiates a load of arbitrary code I have written (the backend of my project). The Django process and the parent process need to communicate; I'd like a dictionary to be read and written to by both processes.
I have the following code, based upon examples from here:
#!/usr/bin/env python
from multiprocessing import Process, Manager
import os
import time
from dj import manage

def django(d, l):
    print "starting django"
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()
    manage.start()

def stop(d, l):
    print "stopping"
    print d
    print l

if (__name__ == '__main__'):
    os.system('clear')
    print "starting backend..."
    time.sleep(1)
    print "backend start complete."

    manager = Manager()
    d = manager.dict()
    l = manager.list(range(10))

    p = Process(target=django, args=(d, l))
    p.start()
    try:
        p.join()
    except KeyboardInterrupt:
        print "interrupt detected"
        stop(d, l)
When I hit CTRL+C to kill the Django process, I'm seeing the Django server shut down, and stop() being called. Then what I want to see is the dictionary, d, and list, l, being printed.
Output is:
starting backend...
backend start complete.
starting django
Validating models...
0 errors found
Django version 1.3, using settings 'dj.settings'
Development server is running at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
^Cinterrupt detected
stopping
<DictProxy object, typeid 'dict' at 0x141ae10; '__str__()' failed>
<ListProxy object, typeid 'list' at 0x1425090; '__str__()' failed>
It can't find the dictionary or list after the CTRL+C event. Has the Manager process been terminated when the SIGINT was issued? If it has, is there any way to stop it from terminating there, so that it terminates with the main process instead?
I hope this makes sense.
Any help gratefully received.

OK, as far as I can see there is no way to simply ignore the exception: when one is raised, you always go straight into the "except" block if there is one. What I'm proposing here is something that will restart your Django application on each ^C, but note that you should add some back door for quitting.
In theory, you could wrap every single line in its own try..except block, which would restart execution at the granularity of a line rather than of the whole script, but that is hardly practical. If anyone finds a really-working solution, I will be the first one to upvote it.
You can move everything inside your if (__name__ == '__main__'): into a main function and leave something like this:
def main():
    pass  # all the code...

if (__name__ == '__main__'):
    while True:
        try:
            main()
        except KeyboardInterrupt:
            pass
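Alternatively: the Manager dies here because CTRL+C delivers SIGINT to the whole foreground process group, which includes the Manager's helper process. One way around that is to start the manager yourself, via multiprocessing's SyncManager (which Manager() wraps), with an initializer that makes the helper process ignore SIGINT. A sketch, replacing the manager = Manager() line above:
import signal
from multiprocessing.managers import SyncManager

def ignore_sigint():
    # runs inside the manager's helper process, so a terminal CTRL+C
    # no longer kills it before stop() gets to read the proxies
    signal.signal(signal.SIGINT, signal.SIG_IGN)

manager = SyncManager()
manager.start(ignore_sigint)  # instead of manager = Manager()
d = manager.dict()
l = manager.list(range(10))
With the manager still alive after the interrupt, the DictProxy and ListProxy lookups in stop() should succeed instead of failing in '__str__()'.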

Related

Python multiprocessing.Process calls join by itself

I have this code:
import logging
import multiprocessing

class ExtendedProcess(multiprocessing.Process):
    def __init__(self):
        super(ExtendedProcess, self).__init__()
        self.stop_request = multiprocessing.Event()

    def join(self, timeout=None):
        logging.debug("stop request received")
        self.stop_request.set()
        super(ExtendedProcess, self).join(timeout)

    def run(self):
        logging.debug("process has started")
        while not self.stop_request.is_set():
            print "doing something"
        logging.debug("proc is stopping")
When I call start() on the process it should run forever, since self.stop_request is not set. But after some milliseconds join() is being called by itself and breaks run(). What is going on!? Why is join being called by itself?
Moreover, when I start a debugger and go line by line it's suddenly working fine.... What am I missing?
OK, thanks to ely's answer the reason hit me:
There is a race condition:
1. The new process is created.
2. As it is starting up, just about to run logging.debug("process has started"), the main function hits its end.
3. The main function triggers interpreter exit, and on exit Python asks all outstanding child processes to close with join().
4. Since the child never actually reached while not self.stop_request.is_set(), the overridden join() runs self.stop_request.set() first; the loop then sees stop_request set and the code closes.
As mentioned in the updated question, this is because of a race condition. Below is an initial example highlighting a simplistic race condition where the race is against the overall program exit, but this could also be caused by other kinds of scope exit or other general race conditions involving your process.
I copied your class definition and added some "main" code to run it, here's my full listing:
import logging
import multiprocessing
import time

class ExtendedProcess(multiprocessing.Process):
    def __init__(self):
        super(ExtendedProcess, self).__init__()
        self.stop_request = multiprocessing.Event()

    def join(self, timeout=None):
        logging.debug("stop request received")
        self.stop_request.set()
        super(ExtendedProcess, self).join(timeout)

    def run(self):
        logging.debug("process has started")
        while not self.stop_request.is_set():
            print("doing something")
            time.sleep(1)
        logging.debug("proc is stopping")

if __name__ == "__main__":
    p = ExtendedProcess()
    p.start()
    while True:
        pass
The above code listing runs as expected for me using both Python 2.7.11 and 3.6.4. It loops infinitely and the process never terminates:
ely#eschaton:~/programming$ python extended_process.py
doing something
doing something
doing something
doing something
doing something
... and so on
However, if I instead use this code in my main section, it exits right away (as expected):
if __name__ == "__main__":
    p = ExtendedProcess()
    p.start()
This exits right away because the interpreter reaches the end of the program, which in turn triggers the automatic destruction of the p object as it goes out of scope of the whole program.
Note this could also explain why it works for you in the debugger. That is an interactive programming session, so after you start p, the debugger environment allows you to wait around and inspect it ... it would not be automatically destroyed unless you somehow invoked it within some scope that is exited while stepping through the debugger.
Just to verify the join behavior too, I also tried with this main block:
if __name__ == "__main__":
log = logging.getLogger()
log.setLevel(logging.DEBUG)
p = ExtendedProcess()
p.start()
st_time = time.time()
while time.time() - st_time < 5:
pass
p.join()
print("Finished!")
and it works as expected:
ely#eschaton:~/programming$ python extended_process.py
DEBUG:root:process has started
doing something
doing something
doing something
doing something
doing something
DEBUG:root:stop request received
DEBUG:root:proc is stopping
Finished!
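If you want to watch the interpreter's shutdown machinery trigger that automatic join yourself, multiprocessing's built-in logger can make it visible. A small sketch (reusing the ExtendedProcess class from above):
import logging
import multiprocessing

# send multiprocessing's internal debug messages to stderr, so the
# automatic child cleanup at interpreter exit shows up in the output
logger = multiprocessing.log_to_stderr()
logger.setLevel(logging.DEBUG)

p = ExtendedProcess()
p.start()
# falling off the end of the program now logs the shutdown sequence,
# including the join of the outstanding child process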

How to handle console exit and object destruction

Given this code:
from time import sleep

class TemporaryFileCreator(object):
    def __init__(self):
        print 'create temporary file'
        # create_temp_file('temp.txt')

    def watch(self):
        try:
            print 'watching temporary file'
            while True:
                # add_a_line_in_temp_file('temp.txt', 'new line')
                sleep(4)
        except (KeyboardInterrupt, SystemExit), e:
            print 'deleting the temporary file..'
            # delete_temporary_file('temp.txt')
            sleep(3)
            print str(e)

t = TemporaryFileCreator()
t.watch()
During t.watch(), I want to be able to close this application from the console. I tried using CTRL+C and it works. However, if I click the console window's exit button, it doesn't work. I checked many related questions about this, but it seems that I cannot find the right answer.
What I want to do:
The console can be closed while the program is still running. To handle that, when the exit button is pressed, I want to clean up the objects (delete the created temporary files), roll back temporary changes, etc.
Question:
how can I handle console exit?
how can I integrate it into object destructors (__exit__())?
Is it even possible? (how about py2exe?)
Note: the code will be compiled with py2exe (I hope the effect is the same).
You may want to have a look at signals. When a *nix terminal is closed with a running process, that process receives a couple of signals. For instance, the code below waits for the SIGHUP hangup signal and writes a final message. It works under OS X and Linux. I know you are specifically asking about Windows, but you might want to give it a shot or investigate which signals a Windows command prompt emits during shutdown.
import signal
import sys
from time import sleep

def signal_handler(signal, frame):
    with open('./log.log', 'w') as f:
        f.write('event received!')

signal.signal(signal.SIGHUP, signal_handler)
print('Waiting for the final blow...')
# signal.pause()  # does not work under windows
sleep(10)  # so let us just wait here
Quote from the documentation:
On Windows, signal() can only be called with SIGABRT, SIGFPE, SIGILL, SIGINT, SIGSEGV, or SIGTERM. A ValueError will be raised in any other case.
Update:
Actually, the closest thing in Windows is win32api.setConsoleCtrlHandler (doc). This was already discussed here:
When using win32api.setConsoleCtrlHandler(), I'm able to receive shutdown/logoff/etc events from Windows, and cleanly shut down my app.
And if Daniel's code still works, this might be a nice way to use both (signals and CtrlHandler) for cross-platform purposes:
import os, sys

def set_exit_handler(func):
    if os.name == "nt":
        try:
            import win32api
            win32api.SetConsoleCtrlHandler(func, True)
        except ImportError:
            version = ".".join(map(str, sys.version_info[:2]))
            raise Exception("pywin32 not installed for Python " + version)
    else:
        import signal
        signal.signal(signal.SIGTERM, func)

if __name__ == "__main__":
    def on_exit(sig, func=None):
        print "exit handler triggered"
        import time
        time.sleep(5)

    set_exit_handler(on_exit)
    print "Press to quit"
    raw_input()
    print "quit!"
If you use tempfile to create your temporary file, it will be automatically deleted when the Python process is killed.
Try it with:
>>> import tempfile
>>> foo = tempfile.NamedTemporaryFile()
>>> foo.name
'c:\\users\\blah\\appdata\\local\\temp\\tmpxxxxxx'
Now check that the named file is there. You can write to and read from this file like any other.
Now kill the Python window and check that file is gone (it should be)
You can simply call foo.close() to delete it manually in your code.
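To tie this back to the __exit__() part of the question: a context manager is the natural home for that cleanup, with the caveat that it only runs for in-process exits (exceptions, CTRL+C, normal return), not when the console window is destroyed from outside; for that you still need the exit-handler approach above. A sketch combining it with tempfile (the class reuses the question's name; the body is illustrative):
import tempfile
from time import sleep

class TemporaryFileCreator(object):
    def __enter__(self):
        # NamedTemporaryFile deletes itself on close, which also covers
        # normal interpreter shutdown via __exit__ below
        self.temp = tempfile.NamedTemporaryFile(suffix='.txt')
        print 'created', self.temp.name
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print 'deleting the temporary file..'
        self.temp.close()  # closing a NamedTemporaryFile removes it
        return False       # don't swallow KeyboardInterrupt/SystemExit

    def watch(self):
        while True:
            self.temp.write('new line\n')
            self.temp.flush()
            sleep(4)

with TemporaryFileCreator() as t:
    t.watch()  # __exit__ runs on CTRL+C or any other exception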

Python BackgroundScheduler program crashing when run from another module

I am trying to build an application that will run a bash script every 10 minutes. I am using apscheduler to accomplish this, and when I run my code from a terminal it works like clockwork. However, when I try to run the code from another module it crashes. I suspect that the calling module is waiting for the "schedule" module to finish and then crashes when that never happens.
Error code
/bin/bash: line 1: 13613 Killed ( python ) < /tmp/vIZsEfp/26
shell returned 137
Function that calls schedule
def shedual_toggled(self, widget):
    prosessSchedular.start_background_checker()
Schedule Program
import time
from apscheduler.schedulers.background import BackgroundScheduler

counter = 0

def schedul_check():
    """set up to call prosess checker every 10 mins"""
    global counter
    print "%s check ran" % (counter)
    counter += 1
    app = prosessCheckerv3.call_bash()  # calls the bash file
    if app == False:
        print "error with bash"
        return False
    else:
        prosessCheckerv3.build_snap_shot(app)

def start_background_checker():
    scheduler = BackgroundScheduler()
    scheduler.add_job(schedul_check, 'interval', minutes=10)
    scheduler.start()
    while True:
        time.sleep(2)

if __name__ == '__main__':
    start_background_checker()
This program simply calls another one every 10 mins. As a side note, I have been trying to stay as far away from multi-threading as possible, but if that is required, so be it.
Well, I managed to figure it out myself. The issue is that GTK+ is not thread-safe, so the timed module needs to either be run in another thread, or else you can release/enter the GTK thread lock before/after calling the module.
I just did it like this.
def shedual_toggeld(self, widget):
    onOffSwitch = widget.get_active()
    # After main GTK has logically finished all GUI work, run the thread on the toggle button
    thread = threading.Thread(target=self.call_schedual, args=(onOffSwitch,))
    thread.daemon = True
    thread.start()

def call_schedual(self, onOffSwitch):
    if onOffSwitch == True:
        self.sch.start_background_checker()
    else:
        self.sch.stop_background_checker()
This article goes through it in more detail; hopefully someone else will find it useful.
http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness
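As an aside: when the scheduler is the whole program (run standalone from a terminal rather than inside a GTK app), apscheduler also ships a BlockingScheduler that replaces the while True / time.sleep(2) keep-alive loop. A minimal sketch, assuming apscheduler 3.x:
from apscheduler.schedulers.blocking import BlockingScheduler

def schedul_check():
    print "check ran"  # stand-in for the real 10-minute job

if __name__ == '__main__':
    scheduler = BlockingScheduler()
    scheduler.add_job(schedul_check, 'interval', minutes=10)
    scheduler.start()  # blocks here, so no sleep loop is needed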

The thread hangs using FTP LIST with Python

I'm using ftplib to connect to an FTP server and get a file list.
The problem I have is that the connection hangs from time to time and I don't know why. I'm running the Python script as a daemon, using threads.
See what I mean:
def main():
    signal.signal(signal.SIGINT, signal_handler)
    app.db = MySQLWrapper()
    try:
        app.opener = FTP_Opener()
        mainloop = MainLoop()
        while not app.terminate:
            # suspend main thread until the queue terminates;
            # this lets the queue restart automatically in case of unexpected shutdown
            mainloop.join(10)
            while (not app.terminate) and (not mainloop.isAlive()):
                time.sleep(script_timeout)
                print time.ctime(), "main: trying to restart the queue"
                try:
                    mainloop = MainLoop()
                except Exception:
                    time.sleep(60)
    finally:
        app.db.close()
        app.db = None
        app.opener = None
        mainloop = None
        try:
            os.unlink(PIDFILE)
        except:
            pass
        # give other threads time to terminate
        time.sleep(1)
        print time.ctime(), "main: main thread terminated"
MainLoop() has some functions to connect to the FTP server, download specific files, and disconnect.
Here's how I get the file's list:
file_list = app.opener.load_list()
And here's what the FTP_Opener.load_list() function looks like:
def load_list(self):
    attempts = 0
    while attempts <= ftp_config.load_max_attempts:
        attempts += 1
        filelist = []
        try:
            self._connect()
            self._chdir()
            # retrieve file list to 'filelist' var
            self.FTP.retrlines('LIST', lambda s: filelist.append(s))
            filelist = self._filter_filelist(self._parse_filelist(filelist))
            return filelist
        except Exception:
            print sys.exc_info()
            self._disconnect()
            sleep(0.1)
    print time.ctime(), "FTP Opener: can't load file list"
    return []
Why does the FTP connection sometimes hang, and how can I monitor this? If it happens, I would like to terminate the thread somehow and start a new one.
Thanks
If you are building for robustness, I would highly recommend that you look into using an event-driven method. One such framework with FTP support is Twisted (API).
The advantage is that you don't block the thread while waiting for I/O, and you can create simple timer functions to monitor your connections if you prefer. It also scales a lot better. Event-driven patterns are slightly more complicated to code with, so if this were just a simple script it might not be worth the extra effort, but since you write that you are writing a daemon, it might be worth looking into.
Here is an example of an FTP client: ftpclient.py
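Short of switching frameworks, it is also worth knowing that ftplib itself accepts a timeout (in seconds) on the connection, so a LIST that stalls raises socket.timeout instead of hanging the thread forever. A sketch, with host, credentials, and directory as placeholders:
import socket
from ftplib import FTP

def load_list_with_timeout():
    filelist = []
    try:
        # any socket operation that stalls for more than 30 seconds
        # now raises socket.timeout instead of blocking the thread
        ftp = FTP('ftp.example.com', 'user', 'password', timeout=30)
        ftp.cwd('/some/dir')
        ftp.retrlines('LIST', filelist.append)
        ftp.quit()
    except socket.timeout:
        print 'FTP operation timed out'
    return filelist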

Endless Application in Python

I have a Python script that I want to always run in the background. The application goes to an Oracle database, checks if there is a message to be displayed to the user and, if there is, uses pynotify to display a notification.
I tried using a Timer object, but it only invokes the method once, after the selected time. I want it to be invoked repeatedly, once per interval.
if __name__ == '__main__':
    applicationName = "WEWE"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    t = threading.Timer(5.0, runWorks)
    t.start()
Will doing this work and is there a better way?
if __name__ == '__main__':
    applicationName = "WEEWRS"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    while True:
        t = threading.Timer(5.0, runWorks)
        t.start()
But that gave me another problem.
thread.error: can't start new thread
(r.py:12227): GLib-ERROR **: creating thread 'gdbus': Error creating thread: Resource temporarily unavailable
I solved the problem by reducing the creation of threads. The error below -
thread.error: can't start new thread
(r.py:12227): GLib-ERROR **: creating thread 'gdbus': Error creating thread: Resource temporarily unavailable
comes when there is a lack of resources. Below is the corrected code.
if __name__ == '__main__':
    applicationName = "DSS POS"
    # Initialization of the Notification library
    if not pynotify.init(applicationName):
        sys.exit(1)
    flagContinous = True
    timeout = 5
    # This loop will continuously keep the application in the background
    while flagContinous:
        time.sleep(timeout)
        runWorks()
I also used a lock file so that the script won't run multiple times.
pid = str(os.getpid())
pidfile = "/tmp/mydaemon.pid"

# If we have a lock already, block the program
if os.path.isfile(pidfile):
    print "%s already exists, exiting" % pidfile
    sys.exit()
else:
    file(pidfile, 'w').write(pid)

# Do all the work
applicationName = "DSS POS"

# Initialization of the Notification library
if not pynotify.init(applicationName):
    sys.exit(1)

# Controls for the application
flagContinous = True
timeout = 5

# This loop will continuously keep the application in the background
while flagContinous:
    time.sleep(timeout)
    runWorks()

# Release the file
os.unlink(pidfile)
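One caveat about the lock file (an observation beyond the original answer): if runWorks() ever raises, the final os.unlink(pidfile) is skipped and the stale pidfile blocks every later start. A try/finally around the loop guarantees the release:
try:
    # same loop as above
    while flagContinous:
        time.sleep(timeout)
        runWorks()
finally:
    # runs even if runWorks() raises or the loop is interrupted,
    # so the next start is not blocked by a stale pidfile
    os.unlink(pidfile)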
It is always better to use this simple Daemon script:
http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
It does a very good job!
