I am using a custom Django runserver command that is supposed to run a bunch of cleanup functions upon termination. This works fine as long as I don't use the autoreloader: my server catches the KeyboardInterrupt exception properly and exits gracefully.
However, if I use Django's autoreloader, the reloader seems to simply kill the server thread without properly terminating it (as far as I can tell, it doesn't have any means to do this).
This seems inherently unsafe, so I can't really believe that there's not a better way of handling this.
Can I somehow use the autoreloader functionality without having my server thread be killed uncleanly?
Try using the atexit module to catch the termination. It should work for anything that terminates via SIGINT or SIGTERM; SIGKILL cannot be intercepted (but no auto-restart script should send SIGKILL without sending SIGTERM first).
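A minimal sketch of the idea, assuming the cleanup work can live in one function (the function body here is only an illustration). atexit handlers run on normal interpreter exit, so the SIGTERM handler turns the signal into sys.exit() to make them fire:

    import atexit
    import signal
    import sys

    def cleanup():
        # Illustrative placeholder: close sockets, flush files, stop workers, etc.
        print("running cleanup")

    # Runs on normal interpreter exit, including sys.exit() and an unhandled
    # KeyboardInterrupt propagating out of the main loop.
    atexit.register(cleanup)

    def handle_sigterm(signum, frame):
        # Convert SIGTERM into a normal exit so the atexit handlers run.
        sys.exit(0)

    signal.signal(signal.SIGTERM, handle_sigterm)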
Related
I have a Python script that is run in the background as a supervisor job.
If I were to give the command:
$ sudo supervisorctl stop myscript
What's going on behind the scenes?
If I had not handled the exception, would this be considered a graceful shutdown?
While your question is not fully complete: at its most basic, not exactly. If you mean how Python handles the exception, it really depends on how you are calling the command and handling any exit codes or errors from it. All Python exceptions inherit from BaseException, but you normally shouldn't catch that; catch Exception or some subclass of subprocess.SubprocessError (which inherits from Exception) instead.
However, if you mean how supervisorctl handles this internally, it probably has similar logic to what you are expecting: sending an exit signal for the process to handle. As you seem most familiar with Python, the signal library may be insightful for you.
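If it helps to see the receiving side, here is a hedged sketch of a script that handles the SIGTERM that supervisor sends for a stop (TERM is the default stopsignal; the cleanup body is illustrative):

    import signal
    import sys
    import time

    def shutdown(signum, frame):
        # supervisorctl stop sends SIGTERM and waits (stopwaitsecs) before
        # escalating to SIGKILL, so do the cleanup quickly and exit cleanly.
        # ... illustrative cleanup here ...
        sys.exit(0)

    signal.signal(signal.SIGTERM, shutdown)

    while True:
        # Placeholder for the script's real work loop.
        time.sleep(1)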
I'm working on a project that spins off several long-running workers as processes. Child workers catch SIGINT and clean up after themselves - based on my research, this is considered a best practice, and works as expected when terminating scripts.
I am actively developing this project, which means that I am regularly testing changes in the interpreter. When I'm working in an interpreter, I often hit CTRL+C to clear currently written text and get a fresh prompt. Unfortunately, if I do this while a subprocess is running, SIGINT is sent to that worker, causing it to terminate.
Is there a solution to this problem other than "never hit CTRL+C in your interpreter"?
One option is to set a flag (e.g. an environment variable or a command-line option) when debugging, and have the workers ignore SIGINT while that flag is set.
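A hedged sketch of that approach; the environment variable name WORKER_IGNORE_SIGINT is hypothetical:

    import os
    import signal
    import sys

    def graceful_shutdown(signum, frame):
        # Normal path: clean up and exit when SIGINT arrives.
        sys.exit(0)

    def worker_main():
        if os.environ.get("WORKER_IGNORE_SIGINT") == "1":
            # Ctrl+C in the interactive interpreter delivers SIGINT to the
            # whole foreground process group; ignoring it keeps the worker
            # alive while you debug (it can still be stopped with SIGTERM).
            signal.signal(signal.SIGINT, signal.SIG_IGN)
        else:
            signal.signal(signal.SIGINT, graceful_shutdown)
        # ... worker loop ...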
I have a multiprocess Python application that is run as an EXE on Windows. When the operating system is shut down, the application throws a number of exceptions as a result of its processes being shut down.
Is there a way to capture the shutdown request from Windows so I can handle closing the processes myself?
A nabble.com page suggests using win32api.SetConsoleCtrlHandler:
“I need to do something when windows shuts down, as when someone presses the power button. I believe this is a window message, WM_QUERYENDSESSION or WM_ENDSESSION. I can't find any way to trap this in python. atexit() does not work. Using the signal module to trap SIGBREAK or SIGTERM does not work either.”
You might be able to use win32api.SetConsoleCtrlHandler and catch the CTRL_SHUTDOWN_EVENT that's sent to the console.
Also see Python windows shutdown events, which says, “When using win32api.setConsoleCtrlHandler() I'm able to receive shutdown/logoff/etc events from Windows, and cleanly shut down my app” etc.
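A hedged sketch of what that looks like with pywin32; shutdown_workers is a hypothetical placeholder for your own cleanup:

    # Requires the pywin32 package (win32api, win32con).
    import win32api
    import win32con

    def shutdown_workers():
        # Hypothetical placeholder: terminate/join your multiprocessing
        # workers here.
        pass

    def on_console_event(ctrl_type):
        if ctrl_type in (win32con.CTRL_SHUTDOWN_EVENT, win32con.CTRL_LOGOFF_EVENT):
            shutdown_workers()
            return True   # tell Windows the event was handled
        return False      # let the default handler deal with other events

    # True adds the handler to the front of the handler chain.
    win32api.SetConsoleCtrlHandler(on_console_event, True)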
I have a Python application that uses the Twisted framework.
I make use of the value stored in the pidfile generated by twistd. A launcher script checks for its presence and will not spawn a daemon process if the pidfile already exists.
However, twistd does not remove the pidfile when it receives SIGKILL, which makes the launcher script think that the daemon is already running.
I realize the proper way to stop the daemon would be to send it SIGTERM, but the problem is that when the user who started the daemon logs out, the daemon never receives a SIGTERM; apparently it is killed with SIGKILL. That means that once a user logs out, he will never be able to start the daemon again, because the pidfile still exists.
Is there any way I could make that file disappear in such situations?
From the signal(2) man page:
The signals SIGKILL and SIGSTOP cannot be caught or ignored.
So there is no way the process can run any cleanup code in response to that signal. Usually you only use SIGKILL to terminate a process that doesn't exit in response to SIGTERM (which can be caught).
You could change your launcher (or wrap it in another launcher) to check whether the process named in the pid file is still alive, and remove a stale pid file before trying to restart twistd.
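A hedged sketch of that check, using a hypothetical pidfile path; os.kill(pid, 0) sends no signal and only tests whether the process exists:

    import errno
    import os

    PIDFILE = "/var/run/mydaemon.pid"  # hypothetical path

    def pidfile_is_stale(path):
        """Return True if the pidfile exists but no such process is running."""
        try:
            with open(path) as f:
                pid = int(f.read().strip())
        except (IOError, ValueError):
            return False  # missing or unreadable pidfile: nothing to clean up
        try:
            os.kill(pid, 0)  # signal 0 only checks for existence/permission
        except OSError as e:
            if e.errno == errno.ESRCH:
                return True  # no process with that pid: the file is stale
        return False

    if pidfile_is_stale(PIDFILE):
        os.remove(PIDFILE)
    # ... then start twistd as usual ...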
I have a Python script that runs on a server after hours and invokes many shell subprocesses. None of the programs that are called should prompt for input, but sometimes one does and the script hangs, waiting for input until the user (me) notices and gets angry. :)
Tried: Using p.communicate() with stdin=PIPE, as written in the python subprocess documentation.
Running: Ubuntu 10.10, Python 2.6
I don't want to respond to the prompts, I want the script to raise an error and continue. Any thoughts?
Thanks,
Alexander.
As a catch-all solution to any problems in subprocesses I'd recommend using timeouts for all shell calls. There's no built-in timeout support in the subprocess module on Python 2.6, so you need to use signals. See details here: Using module 'subprocess' with timeout
You need a time-out while waiting for your tasks to complete and then have your script kill or terminate the process (in addition to raising the error).
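A hedged sketch of that approach on Unix, using SIGALRM since Python 2.6's subprocess has no timeout argument (the helper name and timeout handling are illustrative):

    import signal
    import subprocess

    class CommandTimeout(Exception):
        pass

    def run_with_timeout(cmd, seconds):
        """Run cmd; kill it and raise CommandTimeout if it hangs on a prompt."""
        def on_alarm(signum, frame):
            raise CommandTimeout("timed out after %ds: %r" % (seconds, cmd))

        proc = None
        old_handler = signal.signal(signal.SIGALRM, on_alarm)
        signal.alarm(seconds)
        try:
            proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()  # closes stdin, so prompts see EOF
            return proc.returncode, out, err
        except CommandTimeout:
            if proc is not None:
                proc.kill()  # available since Python 2.6
            raise
        finally:
            signal.alarm(0)                           # cancel any pending alarm
            signal.signal(signal.SIGALRM, old_handler)

The caller can catch CommandTimeout, log it, and move on to the next task instead of hanging.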
Pexpect is a Python tool for dealing with subprocesses that may generate output (and may need input as a result). It helps you handle the various cases easily, including managing timeouts.
See: http://www.noah.org/wiki/pexpect
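A hedged sketch of what a timeout looks like with pexpect; the command and timeout value are illustrative:

    import pexpect

    child = pexpect.spawn("some_batch_command --flag", timeout=300)  # hypothetical command
    try:
        # Wait for the command to finish; if it stops at a prompt instead,
        # pexpect.TIMEOUT is raised rather than hanging forever.
        child.expect(pexpect.EOF)
    except pexpect.TIMEOUT:
        child.terminate(force=True)
        # Log the failure here and let the surrounding script continue.
    print(child.before)  # everything the command printed so far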