I have a python application that uses twisted framework.
I make use of the value stored in the pidfile generated by twistd. A launcher script checks for its presence and will not spawn a daemon process if the pidfile already exists.
However, twistd does not remove the pidfile when it receives a SIGKILL signal. That makes the launcher script think that the daemon is already running.
I realize the proper way to stop the daemon would be to use the SIGTERM signal, but the problem is that when the user who started the daemon logs out, the daemon never gets a SIGTERM signal, so apparently it's killed with SIGKILL. That means once a user logs out, they will never be able to start the daemon again, because the pidfile still exists.
Is there any way I could make that file disappear in such situations?
From the signal(2) man page:
The signals SIGKILL and SIGSTOP cannot be caught or ignored.
So there is no way the process can run any cleanup code in response to that signal. Usually you only use SIGKILL to terminate a process that doesn't exit in response to SIGTERM (which can be caught).
You could change your launcher (or wrap it up in another launcher) and remove the pid file before trying to restart twistd.
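A minimal sketch of such a wrapper, assuming a hypothetical pidfile path and treating the file as stale when the recorded PID no longer refers to a live process:

    import errno
    import os
    import sys

    PIDFILE = "/var/run/mydaemon/twistd.pid"  # hypothetical path

    def pidfile_is_stale(path):
        """True if the pidfile exists but its PID is no longer alive."""
        try:
            with open(path) as f:
                pid = int(f.read().strip())
        except (IOError, ValueError):
            return False  # missing or unreadable pidfile: nothing to clean up
        try:
            os.kill(pid, 0)  # signal 0 delivers nothing, it only checks existence
        except OSError as err:
            if err.errno == errno.ESRCH:
                return True  # no such process, so the file is stale
        return False

    if pidfile_is_stale(PIDFILE):
        os.remove(PIDFILE)
    if os.path.exists(PIDFILE):
        sys.exit("daemon already running")
    # ...otherwise spawn twistd as before

Keep in mind that PIDs get reused, so this check is a heuristic rather than an airtight guarantee.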
Related
I run ffmpeg from a Python script and need to shut down the recording on demand.
On Linux I just send SIGTERM. But on Windows, as I understand it, SIGTERM is replaced by SIGKILL, so the recordings need to be remuxed to play properly.
After googling, I found that I should use CTRL_BREAK_EVENT, but this signal terminates my parent script too.
What should I use?
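For what it's worth, the approach usually suggested for this combination is to start ffmpeg in its own process group so that CTRL_BREAK_EVENT is delivered only to that group and not to the launching script. The command line below is purely illustrative, and whether ffmpeg finalizes the output cleanly on Ctrl+Break is something to verify for your build:

    import signal
    import subprocess

    # Windows only: put ffmpeg in its own process group.
    proc = subprocess.Popen(
        ["ffmpeg", "-i", "input_source", "output.mp4"],  # illustrative arguments
        creationflags=subprocess.CREATE_NEW_PROCESS_GROUP,
    )

    # ...later, to stop the recording:
    proc.send_signal(signal.CTRL_BREAK_EVENT)  # goes to ffmpeg's group, not to this script
    proc.wait()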
When developing a Bottle webapp with Python 3.5, I regularly get a zombie process. This happens when using the auto-restart development mode.
The Windows console still updates with the access logs and the errors, but the program isn't running in the foreground anymore, so I can't use Ctrl+C on it.
The only way to kill this is to open the task manager and end the process manually.
If I don't kill it, it will still be listening on the port and take precedence over a newly started process.
I haven't found a rule for when this happens, nor have I found a way to reproduce it.
How can I avoid this multi-spawned zombie process?
I am using a custom Django runserver command that is supposed to run a bunch of cleanup functions upon termination. This works fine as long as I don't use the autoreloader: my server catches the KeyboardInterrupt exception properly and exits gracefully.
However, if I use Django's autoreloader, the reloader seems to simply kill the server thread without properly terminating it (as far as I can tell, it doesn't have any means to do this).
This seems inherently unsafe, so I can't really believe that there's not a better way of handling this.
Can I somehow use the autoreloader functionality without having my server thread be killed uncleanly?
Try using the atexit module to catch the termination. It should work for everything that acts like SIGINT or SIGTERM; SIGKILL cannot be caught (but it should not be sent by any auto-restart script without sending SIGTERM first).
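A minimal sketch of that idea; the cleanup body is a placeholder, and the SIGTERM handler simply turns that signal into a normal exit so the atexit hooks get a chance to run:

    import atexit
    import signal
    import sys

    def cleanup():
        # placeholder for the real cleanup work (closing sockets, removing temp files, ...)
        print("running cleanup")

    atexit.register(cleanup)

    # atexit hooks run on normal interpreter exit, including sys.exit() and an
    # uncaught KeyboardInterrupt; they never run if the process dies from SIGKILL.
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))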
I'm working on a project that spins off several long-running workers as processes. Child workers catch SIGINT and clean up after themselves - based on my research, this is considered a best practice, and works as expected when terminating scripts.
I am actively developing this project, which means that I am regularly testing changes in the interpreter. When I'm working in an interpreter, I often hit CTRL+C to clear the currently typed text and get a fresh prompt. Unfortunately, if I do this while a subprocess is running, SIGINT is sent to that worker, causing it to terminate.
Is there a solution to this problem other than "never hit CTRL+C in your interpreter"?
One option is to set a variable (e.g. an environment variable or command-line option) when debugging, and have the workers skip their normal SIGINT handling while it is set.
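A rough sketch of that idea, using a hypothetical IGNORE_SIGINT environment variable: workers install their usual cleanup handler unless the flag is set, in which case they ignore the Ctrl+C that leaks through from the interactive interpreter:

    import os
    import signal

    def install_worker_signal_handlers(cleanup):
        if os.environ.get("IGNORE_SIGINT"):
            # debugging session: let Ctrl+C at the prompt leave the workers alone
            signal.signal(signal.SIGINT, signal.SIG_IGN)
        else:
            def handler(signum, frame):
                cleanup()          # run the worker's normal cleanup
                raise SystemExit(0)
            signal.signal(signal.SIGINT, handler)

Start the interpreter session with IGNORE_SIGINT=1 and Ctrl+C will only interrupt the interpreter itself, not the workers.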
I am spawning some processes with Popen (Python 2.7, with shell=True) and then sending SIGINT to them. It appears that the process group leader is actually the Python process, so sending SIGINT to the PID returned by Popen, which is the PID of bash, doesn't do anything.
So, is there a way to make Popen create a new process group? I can see that there is a flag called subprocess.CREATE_NEW_PROCESS_GROUP, but it is only for Windows.
I'm actually upgrading some legacy scripts that were running with Python 2.6, and it seems that for Python 2.6 the default behavior is what I want (i.e. a new process group when I do Popen).
bash does not handle signals while waiting for your foreground child process to complete. This is why sending it SIGINT does not do anything. This behaviour has nothing to do with process groups.
There are a few options to let your child process receive your SIGINT; a short sketch of the first and last appears after the list:
When spawning a new process with shell=True, try prepending exec to your command line, so that bash gets replaced with your child process.
When spawning a new process with shell=True, append & wait %- to the command line. This will cause bash to react to signals while waiting for your child process to complete, but it won't forward the signal to your child process.
Use shell=False and specify full paths to your child executables.
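A minimal sketch of the first and last options, with sleep standing in for the real command:

    import signal
    import subprocess
    import time

    # Option 1: keep shell=True but prepend "exec", so bash replaces itself with
    # the real command and Popen's PID belongs to the child rather than to bash.
    p = subprocess.Popen("exec sleep 100", shell=True)
    time.sleep(1)  # give the shell a moment to exec
    p.send_signal(signal.SIGINT)
    p.wait()

    # Option 3: avoid the shell entirely and pass the argument list directly.
    p = subprocess.Popen(["/bin/sleep", "100"])
    time.sleep(1)
    p.send_signal(signal.SIGINT)
    p.wait()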