I spent a lot of time searching for an answer to my question, but I could not find one.
I run xdg-open via subprocess to play a video (so that I do not have to know which player application is installed).
I wait for a while.
Then I want to kill the video.
import os
import subprocess
import psutil
from time import sleep
from signal import SIGTERM
file = "test.mkv"
out = subprocess.Popen(['xdg-open', file])
pid = out.pid
print('sleeping...')
sleep(20.0)
print('end of sleep...')
os.kill(pid, SIGTERM) #alternatively: out.terminate()
Unfortunately, the last line kills only the xdg-open process. The mplayer process (which xdg-open started) keeps running.
I tried to get the child processes of xdg-open with the following code:
main_process = psutil.Process(pid)
children_processes = main_process.children(recursive=True)
for child in children_processes:
    print("child process: ", child.pid, child.name())
but it did not help either. The list was empty.
Does anybody have an idea how to kill the player process?
Programs like xdg-open typically look for a suitable program to open a file with, start that program with the file as argument and then exit.
By the time you get around to checking for child processes, xdg-open has probably already exited.
What happens then is OS dependent. In what follows, I'll be talking about UNIX-like operating systems.
The processes launched by xdg-open will usually get PID 1 as their parent process id (PPID) after xdg-open exits, so it will be practically impossible to find out for certain who started them by looking at the PPID.
But there will probably be a relatively small number of processes running under your user ID with PPID 1, so if you list those before and after calling xdg-open and remove everything in the before-list from the after-list, the program you seek will be in the after-list. Unless your machine is very busy, chances are that the after-list will contain only one item: the process started by xdg-open.
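Here is a minimal sketch of that before-and-after comparison using psutil; test.mkv and the 20-second wait are taken from the question, and on some Linux systems the orphaned player may be re-parented to a session subreaper rather than PID 1:
import subprocess
import time

import psutil

def user_orphans():
    # PIDs owned by the current user whose parent is PID 1.
    me = psutil.Process().username()
    return {p.pid for p in psutil.process_iter(['ppid', 'username'])
            if p.info['ppid'] == 1 and p.info['username'] == me}

before = user_orphans()
subprocess.Popen(['xdg-open', 'test.mkv'])
time.sleep(2)                      # give xdg-open time to hand off and exit
candidates = user_orphans() - before

time.sleep(20)
for pid in candidates:             # usually exactly one PID: the player
    try:
        psutil.Process(pid).terminate()
    except psutil.NoSuchProcess:
        pass                       # the player may have exited on its own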
Edit 1:
You commented:
I want to make the app OS independent.
All operating systems that support xdg-open are basically UNIX-like operating systems. If you use the psutil Python module to get process information, you can run your "app" on all the systems that psutil supports:
Linux
macOS
FreeBSD, OpenBSD, NetBSD
Sun Solaris
AIX
(psutil even works on ms-windows, but I kind of doubt you will find xdg-open there...)
Related
I am trying to start, and later kill, a process that requires sudo from a Python script. Even when the Python script itself is run with sudo and kill() does not raise any permission errors, the process is not killed (and never receives SIGKILL).
Investigating this, I found that Popen() returns the process ID of the sudo process (I assume, at least) rather than of the process I actually want to control. So when I later kill that PID, the underlying process keeps running. (Although if I kill the Python program before killing the sudo process in Python code, the underlying process is also killed, so I guess there must be a way to do this manually, too.)
I know it might be an option to use pgrep or pidof to search for the correct process, but since the process name might not be unique this seems unnecessarily error prone (a process with the same name might also be started around the same time, so taking the latest one might not help).
Is there any way to reliably get the PID of the underlying process started with sudo in Python?
Using Python 3.
My code for conducting the tests, taken slightly modified from https://stackoverflow.com/a/43417395/1171541:
import subprocess, time

cmd = ["sudo", "testscript.sh"]

def myfunction(action, process=None):
    if action == "start":
        process = subprocess.Popen(cmd)
        return process
    if action == "stop":
        # kill() and send_signal(signal.SIGTERM) do not work either
        process.terminate()

process = myfunction("start")
time.sleep(5)
myfunction("stop", process)
Okay, I can answer my own question here (the solution comes from https://izziswift.com/how-to-terminate-a-python-subprocess-launched-with-shelltrue/). The trick was to open the process with:
subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
and then kill it:
os.killpg(os.getpgid(process.pid), signal.SIGTERM)
This time I use a shell to start the command and then have the OS kill every process in the process group.
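Put together, a minimal self-contained version of that approach might look like the following; testscript.sh is the placeholder command from the question, and note that if the script itself is not running as root, signalling the sudo-owned children may still be denied:
import os
import signal
import subprocess
import time

cmd = "sudo testscript.sh"      # with shell=True the command is a single string

# Start the command in its own session so that sudo and everything it
# spawns share one process-group ID.
process = subprocess.Popen(
    cmd,
    stdout=subprocess.PIPE,
    shell=True,
    preexec_fn=os.setsid,
)

time.sleep(5)

# Signal the whole group, not just the sudo wrapper.
os.killpg(os.getpgid(process.pid), signal.SIGTERM)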
OS: Windows 10
Python: 3.5.2
I am trying to open calc.exe, do some actions and then close it.
Here is my code sample:
import subprocess, os, time
p = subprocess.Popen('calc.exe')
#Some actions
time.sleep(2)
p.kill()
This is not working for calc.exe: it just opens the calculator but does not close it. The same code works fine for notepad.exe.
I am guessing there is a bug in the subprocess library's kill method: the notepad.exe process shows up in Task Manager as notepad.exe, but the calc.exe process shows up as Calculator.exe, so perhaps kill is trying to find the process by name and does not find it.
There's no bug in subprocess's kill. If you're really worried about that, just check the source, which is linked from the docs. The kill method just calls send_signal, which just calls os.kill unless the process is already done, and you can see the Windows implementation for that function. In short: Popen.kill doesn't care what name the process has in the kernel's process table (or the Task Manager); it remembers the PID (process ID) of the process it started, and kills it that way.
The most likely problem is that, like many Windows apps, calc.exe has some special "single instance" code: when you launch it, if there's already a copy of calc.exe running in your session, it just tells that copy to come to the foreground (and open a window, if it doesn't have one), and then exits. So, by the time you try to kill it 2 seconds later, the process has already exited.
And if the actual running process is calculator.exe, that means calc.exe is just a launcher for the real program, so it always tells calculator.exe to come to the foreground, launching it if necessary, and then exits.
So, how can you kill the new calculator you started? Well, you can't, because you didn't start a new one. You can kill all calc.exe and/or calculator.exe processes (the easiest way to do this is with a third-party library like psutil—see the examples on filtering and then kill the process once you've found it), but that will kill any existing calculator process you had open before running your program, not just the new one you started. Since calc.exe makes it impossible to tell if you've started a new process or not, there's really no way around that.
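As a sketch of that psutil-based approach (the process names here are assumptions based on the question, and it will hit every running calculator, not just the one you started):
import psutil

# Kill every running calculator process, regardless of who started it.
for proc in psutil.process_iter(['name']):
    if proc.info['name'] in ('calc.exe', 'Calculator.exe'):
        try:
            proc.kill()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass   # already gone, or owned by another user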
This is one way to kill it, but it will close every open calculator.
It calls a no window command prompt and gives the command to close the Calculator.exe process.
import subprocess, os, time
p = subprocess.Popen('calc.exe')
print(p)
#Some actions
time.sleep(2)
CREATE_NO_WINDOW = 0x08000000
subprocess.call('taskkill /F /IM Calculator.exe', creationflags=CREATE_NO_WINDOW)
Apparently I can't get a process's resource usage on Mac OS X with psutil after the process has been reaped, i.e. after p.wait() where p is a psutil.Popen() instance. For example, if I try ps.cpu_times().system where ps is a psutil.Process() instance, I get a NoSuchProcess error. What are the other options for measuring resource usage on a Mac (elapsed time, memory and CPU usage)?
If the process "dies" or gets reaped, how are you supposed to interact with it? You can't, because it's gone. If, on the other hand, the process is a zombie, you might be able to extract some info from it, like the parent PID, but not CPU or memory stats.
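If you only need the numbers, two options come to mind: sample psutil while the child is still alive, or reap the child yourself with os.wait4 and read the accumulated usage the kernel reports. A sketch (POSIX only; /bin/sleep stands in for the real workload, and ru_maxrss is in bytes on macOS but kilobytes on Linux):
import os
import time

import psutil

start = time.monotonic()
p = psutil.Popen(['/bin/sleep', '2'])

# Option 1: sample while the child is still running.
print('cpu so far:', p.cpu_times())
print('memory so far:', p.memory_info())

# Option 2: reap it ourselves and read the accumulated usage.
_, status, rusage = os.wait4(p.pid, 0)
print('elapsed:', time.monotonic() - start)
print('user CPU:', rusage.ru_utime, 'system CPU:', rusage.ru_stime)
print('max RSS:', rusage.ru_maxrss)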
Can my python script spawn a process that will run indefinitely?
I'm not too familiar with Python, nor with spawning daemons, so I came up with this:
si = subprocess.STARTUPINFO()
si.dwFlags = subprocess.CREATE_NEW_PROCESS_GROUP | subprocess.CREATE_NEW_CONSOLE
subprocess.Popen(executable, close_fds = True, startupinfo = si)
The process continues to run past python.exe, but is closed as soon as I close the cmd window.
Using the answer Janne Karila pointed out, this is how you can run a process that doesn't die when its parent dies; there is no need to use the win32process module.
DETACHED_PROCESS = 8
subprocess.Popen(executable, creationflags=DETACHED_PROCESS, close_fds=True)
DETACHED_PROCESS is a Process Creation Flag that is passed to the underlying CreateProcess function.
This question was asked 3 years ago, and though the fundamental details of the answer haven't changed, given its prevalence in "Windows Python daemon" searches, I thought it might be helpful to add some discussion for the benefit of future Google arrivals.
There are really two parts to the question:
Can a Python script spawn an independent process that will run indefinitely?
Can a Python script act like a Unix daemon on a Windows system?
The answer to the first is an unambiguous yes; as already pointed out, using subprocess.Popen with the creationflags=subprocess.CREATE_NEW_PROCESS_GROUP keyword will suffice:
import subprocess
independent_process = subprocess.Popen(
    'python /path/to/file.py',
    creationflags=subprocess.CREATE_NEW_PROCESS_GROUP
)
Note that, at least in my experience, CREATE_NEW_CONSOLE is not necessary here.
That being said, the behavior of this strategy isn't quite the same as what you'd expect from a Unix daemon. What constitutes a well-behaved Unix daemon is better explained elsewhere, but to summarize (a bare-bones sketch of these steps follows the list):
Close open file descriptors (typically all of them, but some applications may need to protect some descriptors from closure)
Change the working directory for the process to a suitable location to prevent "Directory Busy" errors
Change the file access creation mask (os.umask in the Python world)
Move the application into the background and make it dissociate itself from the initiating process
Completely divorce from the terminal, including redirecting STDIN, STDOUT, and STDERR to different streams (often DEVNULL), and prevent reacquisition of a controlling terminal
Handle signals, in particular, SIGTERM.
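For reference, a bare-bones illustration of those steps on a Unix-like system might look like this; a production daemon would also close all inherited descriptors, write a PID file, and do real cleanup in the signal handler:
import os
import signal
import sys

def daemonize():
    if os.fork() > 0:          # first fork: let the original parent return
        sys.exit(0)
    os.setsid()                # new session, drop the controlling terminal
    if os.fork() > 0:          # second fork: can never reacquire a terminal
        sys.exit(0)

    os.chdir('/')              # don't keep any directory busy
    os.umask(0o022)            # reset the file-creation mask

    # Redirect stdin, stdout and stderr to /dev/null.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)

    # Handle SIGTERM so the daemon exits cleanly.
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))

# Usage: call daemonize() early in the script, then run the long-lived work.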
The reality of the situation is that Windows, as an operating system, really doesn't support the notion of a daemon: applications that start from a terminal (or in any other interactive context, including launching from Explorer, etc) will continue to run with a visible window, unless the controlling application (in this example, Python) has included a windowless GUI. Furthermore, Windows signal handling is woefully inadequate, and attempts to send signals to an independent Python process (as opposed to a subprocess, which would not survive terminal closure) will almost always result in the immediate exit of that Python process without any cleanup (no finally:, no atexit, no __del__, etc).
Rolling your application into a Windows service, though a viable alternative in many cases, also doesn't quite fit. The same is true of using pythonw.exe (a windowless version of Python that ships with all recent Windows Python binaries). In particular, they fail to improve the situation for signal handling, and they cannot easily launch an application from a terminal and interact with it during startup (for example, to deliver dynamic startup arguments to your script, say, perhaps, a password, file path, etc), before "daemonizing". Additionally, Windows services require installation, which -- though perfectly possible to do quickly at runtime when you first call up your "daemon" -- modifies the user's system (registry, etc), which would be highly unexpected if you're coming from a Unix world.
In light of that, I would argue that launching a pythonw.exe subprocess using subprocess.CREATE_NEW_PROCESS_GROUP is probably the closest Windows equivalent for a Python process to emulate a traditional Unix daemon. However, that still leaves you with the added challenge of signal handling and startup communications (not to mention making your code platform-dependent, which is always frustrating).
That all being said, for anyone encountering this problem in the future, I've rolled a library called daemoniker that wraps both proper Unix daemonization and the above strategy. It also implements signal handling (for both Unix and Windows systems), and allows you to pass objects to the "daemon" process using pickle. Best of all, it has a cross-platform API:
from daemoniker import Daemonizer
with Daemonizer() as (is_setup, daemonizer):
    if is_setup:
        # This code is run before daemonization.
        do_things_here()

    # We need to explicitly pass resources to the daemon; other variables
    # may not be correct
    is_parent, my_arg1, my_arg2 = daemonizer(
        path_to_pid_file,
        my_arg1,
        my_arg2
    )

    if is_parent:
        # Run code in the parent after daemonization
        parent_only_code()

# We are now daemonized, and the parent just exited.
code_continues_here()
For that purpose you could daemonize your Python process, or, since you are using a Windows environment, run it as a Windows service.
You know I hate posting only web links, but for more information relevant to your requirement:
A simple way to implement a Windows service; read all the comments, they will resolve any doubts.
If you really want to learn more, first read up on what a daemon process is, or see creating-a-daemon-the-python-way.
Update: subprocess is not the right way to achieve this kind of thing.
I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these.
When the image-viewing process exits, I want to remove the temporary images.
I can't do this from Python, as the Python process may have exited before the subprocess completes. I.e I cannot do the following:
p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image2.jpg"])
p.communicate()
os.unlink("/example/image1.jpg")
os.unlink("/example/image2.jpg")
...as this blocks the main thread, nor can I check for the PID exiting in a thread, etc.
The only solution I can think of means I have to use shell=True, which I would rather avoid:
import pipes
import subprocess

# The temporary images that need deleting once the viewer exits.
cleanup = ["/example/image1.jpg", "/example/image2.jpg"]

cmd = ['imgviewer']
cmd.append("/example/image2.jpg")

for x in cleanup:
    cmd.extend(["&&", "rm", pipes.quote(x)])

cmdstr = " ".join(cmd)
subprocess.Popen(cmdstr, shell=True)
This works, but is hardly elegant..
Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
If you're on any variant of Unix, you could fork your Python program, and have the parent process go on with its life while the child process daemonizes, runs the viewer (it doesn't matter in the least if that blocks the child process, which has no other job in life anyway;-), and cleans up after it. The original Python process may or may not exist at this point, but the "waiting to clean up" child process of course will (some process or other has to do the clean-up, after all, right?-).
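A minimal sketch of that fork-and-clean-up idea (Unix only; imgviewer and the image paths are the placeholders from the question, and a fully daemonized child would double-fork so the parent never has to reap it):
import os
import subprocess

images = ["/example/image1.jpg", "/example/image2.jpg"]

pid = os.fork()
if pid == 0:
    # Child: detach, run the viewer (blocking is fine here), then clean up.
    os.setsid()
    subprocess.call(["imgviewer"] + images)
    for path in images:
        try:
            os.unlink(path)
        except OSError:
            pass
    os._exit(0)

# Parent: carries on (or exits) immediately; the child outlives it.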
If you're on Windows, or need cross-platform code, then have your Python program "spawn" (i.e., just start with subprocess, then go on with life) another (much smaller) one, which is the one tasked to run the viewer (blocking, who cares) and then do the clean-up. (If on Unix, even in this case you may want to daemonize, otherwise the child process might go away when the parent process does).
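Cross-platform, that smaller program could be nothing more than a second Python script started in the background; view_and_cleanup.py is a hypothetical name, not something from the question:
# view_and_cleanup.py: run the viewer, then delete the files it was shown.
import os
import subprocess
import sys

viewer, *images = sys.argv[1:]
subprocess.call([viewer] + images)   # block until the viewer exits
for path in images:
    try:
        os.unlink(path)
    except OSError:
        pass
The main program then just spawns it and moves on:
import subprocess, sys

subprocess.Popen([sys.executable, "view_and_cleanup.py",
                  "imgviewer", "/example/image1.jpg", "/example/image2.jpg"])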