Kill Windows asynchronous Popen process in Python 2.4

I have a testing script that needs to open a process (a Pyro server), do some stuff that will call the opened process for information, and when it's all done will need to close the process back down. It's all part of an automated test on a staging server.
In Python 2.6 you can do this:
pyro_server = subprocess.Popen(['python', 'pyro_server.py'])
# Do stuff, making remote calls to the Pyro server on occasion
pyro_server.terminate()
Alas, I'm locked into Python 2.4 here at work, so I don't have access to that method. And if I just let the script end, the server of course lives on. What should I be doing to close/kill that process before the script exits?

Consider copying subprocess.py from a newer Python release into your Python 2.4 dist-packages directory. It should largely just work, as it is a fairly thin wrapper around the older popen machinery.
On POSIX systems, the Popen object's terminate() method does little more than the following (kill() is the same, but with SIGKILL):
import os, signal
os.kill(pid, signal.SIGTERM)
pid is the child process's process ID. signal.SIGKILL is the number 9, the standard forceful Unix kill signal, while SIGTERM gives the process a chance to exit cleanly. In Python 2.4 you can spawn a subprocess and get its pid with the popen2 module, or simply read the pid attribute of the subprocess.Popen object you already have.

@BrainCore: Note that os.kill is not available on Windows in Python 2.4; check the docs.
My solution for killing a subprocess.Popen object on Windows when using Python 2.4 is this:
import os, subprocess
p = subprocess.Popen(...)
res = os.system('taskkill /PID %d /F' % p.pid)
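If the same test script has to run on both Windows and POSIX machines, the two answers above can be combined into one small helper. This is only a sketch under the assumptions already made in this thread (Python 2.4, taskkill available on Windows); the name kill_popen is made up for illustration:
import os, signal, sys

def kill_popen(p):
    # p is a subprocess.Popen object; p.pid is the child's process ID.
    if sys.platform == 'win32':
        # Python 2.4 on Windows has no os.kill, so shell out to taskkill.
        os.system('taskkill /PID %d /F' % p.pid)
    else:
        # On POSIX, ask the child to exit; use SIGKILL instead to force it.
        os.kill(p.pid, signal.SIGTERM)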

Related

How can I start a subprocess in Python that runs independently and continues running if the main process is closed?

I have a small Flask API that receives requests from a remote server. Whenever a request is received, a subprocess is started. This subprocess simply executes a second Python file that is in the same folder. It can run for several hours, and several of these subprocesses can run simultaneously. I am redirecting stdout to write the output of the Python file into a text file.
All of this is working fine, but every couple of weeks the Flask API becomes unresponsive and needs to be restarted. As soon as I stop the Flask server, all running subprocesses stop. I would like to avoid this and run each subprocess independently of the Flask API.
This is a small example that illustrates what I am doing (this code is basically inside a method that can be called through the API):
import subprocess
f = open("log.txt", "wb")
subprocess.Popen(["python", "job.py"], cwd="./", stdout=f, stderr=f)
I would like the subprocess to keep running after I stop the Flask API, but this is currently not the case. Somewhere else I read that the reason is my use of the stdout and stderr parameters, but even after removing those the behavior stays the same.
Any help would be appreciated.
Your sub-processes stop because their parent process dies when you restart your Flask server. You need to completely separate your sub-processes from your Flask process by running your Python call in a new shell:
from subprocess import call
# On Linux:
command = 'gnome-terminal -x bash -l -c "python job.py"'
# On Windows:
# command = 'cmd /c "python job.py"'
call(command, shell=True)
This way your Python call of job.py will run in a separate terminal window, unaffected by your Flask server process.
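An alternative sketch that keeps the log redirection from the question is to start job.py in its own session, so it no longer belongs to the Flask server's process group. This is only a sketch for POSIX systems; preexec_fn=os.setsid is the Python 2 spelling, and start_new_session=True does the same on Python 3.2+:
import os
import subprocess

f = open("log.txt", "wb")
# Run job.py in a new session so it is not terminated along with the Flask process.
subprocess.Popen(["python", "job.py"], cwd="./", stdout=f, stderr=f,
                 preexec_fn=os.setsid)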
Use fork() to create a child process of the process in which you are calling this function. On success, fork() returns 0 in the child process and the child's PID in the parent.
Below is a basic example of fork, which you can easily incorporate in your code.
import os

pid = os.fork()
if pid == 0:  # we are in the new child process
    # nohup detaches job.py from the terminal; & returns control immediately
    os.system("nohup python ./job.py &")
    os._exit(0)  # exit the forked copy so two servers don't keep running
Hope this helps!

Cannot find daemon after daemonizing python script

I daemonized a python script using the daemonize python library, but now I cannot find the daemon that it spawned. I want to find the daemon and kill it to make some changes to the script.
I used the following to daemonize:
from daemonize import Daemonize

pidfile = '/tmp/filename.pid'
# main is the entry-point function of the script being daemonized
daemon = Daemonize(app='filename', pid=pidfile, action=main)
print("daemon started")
daemon.start()
Open a terminal window and try the following:
ps ax | grep <ScriptThatStartedTheDaemon>.py
It should return the PID and the name of the process. Once you have the PID, do:
kill <pid>
Depending on how many times you've run your script, you may have multiple daemons running, in which case you'd want to kill all of them.
To make sure the process was terminated, run the first line of code again. The process with the PID that you killed shouldn't show up if it was successfully terminated.
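Since the Daemonize call above was given a pidfile, you can also skip the ps/grep step and read the daemon's PID straight from /tmp/filename.pid. A small sketch, assuming the pidfile still exists and is up to date:
import os, signal

with open('/tmp/filename.pid') as fh:
    pid = int(fh.read().strip())
os.kill(pid, signal.SIGTERM)  # ask the daemon to shut down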

Python subprocess.Popen() - subprocess causes sockets to remain open

I have a Python 2.7 script that does some parallelism magic and finally enters the Flask gui_loop. At some point a thread creates a background process with subprocess.Popen. This works.
When my script exits while the subprocess is still running, I can't run my script again, because the Flask gui_loop fails with:
socket.error: [Errno 98] Address already in use
With netstat -peanut I can see that ownership of the socket transfers to the child process when the Python script exits. This is how it looks when both the Python script and the subprocess are running:
root@test:/tmp# netstat -peanut | grep 5000
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 1000 840210 21458/python
After the Python script terminates, the socket does not close; its ownership passes to the child process:
root@test:~/PycharmProjects/foo/gui# netstat -peanut | grep 5000
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 1000 763103 19559/my-subprocess
Is there any way around this? The subprocess (written in C) is not doing anything on that socket and doesn't need it. Can I somehow create a subprocess without passing the gui loop socket resource to it?
I can of course terminate the subprocess, but this is not ideal: the purpose of this is to build a simple GUI around some calculations without losing progress if the GUI script happens to exit. I would have a mechanism to reattach to the subprocess if I could just get the GUI script up and running again.
You should use close_fds=True when creating the subprocess, which will cause all file descriptors (and therefore open sockets) to be closed in the child process (except for stdin/stdout/stderr).
In newer versions (Python 3.2+) close_fds already defaults to True, since in most cases you don't want a child process to inherit all open file descriptors, but in Python 2.7 you still need to specify it explicitly.
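Applied to the setup in the question, that would look something like the following sketch (the command line is a placeholder):
import subprocess

# close_fds=True keeps the Flask listening socket out of the child,
# so the port is released as soon as the GUI script exits.
p = subprocess.Popen(['./my-subprocess'], close_fds=True)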
You could try using the with statement. Some documentation here:
http://preshing.com/20110920/the-python-with-statement-by-example/
https://www.python.org/dev/peps/pep-0343/
This does open/close cleanup for you.
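For completeness, a minimal illustration of what the with statement does; the file name is just an example:
# The file is closed automatically when the block exits,
# even if an exception is raised inside it.
with open("log.txt", "wb") as f:
    f.write(b"some output")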

How to avoid process termination when parent process terminates in python

I have a Python daemon that runs on Linux. I'm implementing auto-update functionality that works this way:
When a new version is detected, the app invokes the updater script using subprocess.call.
The child process (which is actually the updater script) stops the daemon.
Because the daemon is stopped, the updater script also terminates :/
So my question is: how can I launch the updater script in a way that doesn't depend on the parent process? In other words, I don't want the parent process's termination to cause the child process's termination.
Environment: Linux mint 16
Python 3.3
Thanks
You could do something along the lines of:
from subprocess import Popen
# Note: a '&' in the argument list has no effect (it is shell syntax, not an
# argument); Popen already returns immediately without waiting for the child.
updater = ['/usr/bin/python', '{PATH TO}/updater_script.py']
Popen(updater)
The updater won't be affected by the deamon closing.
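If the daemon stops by signalling its whole process group, a plain Popen may not be enough. A hedged sketch of a more robust variant for the stated environment (Python 3.3 on Linux), putting the updater in its own session so it is detached from the daemon:
from subprocess import Popen

# start_new_session=True runs setsid() in the child (Python 3.2+),
# so signals sent to the daemon's process group do not reach the updater.
Popen(['/usr/bin/python', '{PATH TO}/updater_script.py'],
      start_new_session=True)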

Popen new process group on linux

I am spawning some processes with Popen (Python 2.7, with shell=True) and then sending SIGINT to them. It appears that the process group leader is actually the Python process, so sending SIGINT to the PID returned by Popen, which is the PID of bash, doesn't do anything.
So, is there a way to make Popen create a new process group? I can see that there is a flag called subprocess.CREATE_NEW_PROCESS_GROUP, but it is only for Windows.
I'm actually upgrading some legacy scripts which were running with Python 2.6, and it seems that for Python 2.6 the default behavior is what I want (i.e. a new process group when I do Popen).
bash does not handle signals while waiting for your foreground child process to complete. This is why sending it SIGINT does not do anything. This behaviour has nothing to do with process groups.
There are a couple of options to let your child process receive your SIGINT:
When spawning a new process with shell=True, try prepending exec to the front of your command line, so that bash gets replaced with your child process (see the sketch after this list).
When spawning a new process with shell=True, append & wait %- to the command line. This will cause bash to react to signals while waiting for your child process to complete, but it won't forward the signal to your child process.
Use shell=False and specify full paths to your child executables.
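A sketch of the first option, using a placeholder command line: with exec, the PID returned by Popen belongs to the child itself, so send_signal reaches it directly.
import signal
import subprocess

# 'exec' makes bash replace itself with the child instead of forking it,
# so p.pid is the child's own PID and the signal goes where intended.
p = subprocess.Popen('exec ./my_long_running_tool --flag', shell=True)
# ... later ...
p.send_signal(signal.SIGINT)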
