I have a Python script which invokes multiple processes, and those processes can invoke more processes.
To kill all of them I've used the following:
os.setpgrp()
# Code which invokes multiple processes
#
# Almost all work got completed but some processes are still running which I don't need any more.
os.killpg(0, signal.SIGKILL)
When I run the above script, the output ends with Killed.
If I change signal.SIGKILL to signal.SIGTERM, the output changes from Killed to Terminated. I want to suppress this message so that it doesn't confuse the user, because it is not relevant to them. Is there any way to suppress it from stdout?
EDIT 1: As pointed out by @SiHa, there is a related question: Python - How to hide output after killed specified process
But my question is a little different, in the sense that os.killpg() is killing my own Python script, and therefore the answer to that question does not help me redirect the output.
When I tried the answer proposed in that related question, I found that trash.txt gets created, but Killed is still printed to standard output and trash.txt remains empty.
A likely reason is that my Python script itself is being killed, so no further code gets executed.
As I mentioned in my question, the following statement was killing my own Python script:
os.killpg(0, signal.SIGKILL)
To solve my problem I had to put all the children in their own process group and kill them using:
os.killpg(<pgid_of_children>, signal.SIGTERM)
Thanks @ceving for explaining the benefit of SIGTERM over SIGKILL.
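A minimal sketch of that fix, assuming the children are started with subprocess (start_new_session=True runs setsid() in the child, giving it its own process group; the sleep command stands in for a real worker):

```python
import os
import signal
import subprocess
import time

# Start the child in its own process group: start_new_session=True calls
# setsid() in the child, so killing that group leaves the parent alive.
# "sleep" is a stand-in for the real worker command.
child = subprocess.Popen(["sleep", "60"], start_new_session=True)
time.sleep(0.1)                    # give the child a moment to start

pgid = os.getpgid(child.pid)       # the children's process group id
os.killpg(pgid, signal.SIGTERM)    # terminates that group, not the parent
child.wait()                       # reap: returncode is -15 (-SIGTERM)
```

Since SIGTERM can be caught, the children get a chance to clean up; you only need to escalate to SIGKILL if they ignore it.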
I am creating a subprocess using this line of code:
p = subprocess.Popen(["doesItemExist.exe", id], shell=False)
and when I run the script with Task Manager open, I can see that it creates two processes, not one. The issue is that when I go to kill it, p.kill() kills one but not the other. I've tried looking online, but the only examples I find are about shell=True, and their solutions don't work for me. I've confirmed that this line only gets called once.
What can I do? Popen is only giving me back the one pid so I don't understand how to get the other so I can kill both.
I ended up dealing with this issue by creating a cleanup function which just runs the following:
subprocess.run(["taskkill", "/IM", "doesItemExist.exe", "/F"], shell=True)
This will kill any leftover tasks. If you use this, be careful that your exe has a unique name, so you don't kill anything you don't mean to. If you want to hide the output/errors, just set stdout and stderr to subprocess.PIPE.
Also note that if there is no process to kill, taskkill reports that as an error.
I am unable to post code for this, sorry, but I am trying to keep a Python script running at all times from another Python script which creates a system tray icon. This system tray icon will show whether the program is running correctly or not.
I have tried a few methods so far, the most promising method has been using something like:
p = subprocess.Popen([xxx, xxx], stdout=PIPE, stderr=PIPE)
Then I check if stderr has any output meaning there’s been an error.
However, this only works when I deliberately make an error occur (using the wrong file path) and nothing happens when I use the correct file path as the program never terminates.
So the main issue I'm having is that, because I want the program to be running at all times, it never terminates unless there's an error. I want to be able to check that it is running, so the user can check its status on the system tray.
In Unix we use certain system calls for process management. A process is started when its parent executes the fork() system call. The parent may then wait() for the child to terminate, either normally via the exit() system call or abnormally due to a fatal exception or signal (e.g., SIGTERM, SIGINT, SIGKILL). On termination, the exit status is returned to the OS and a SIGCHLD signal is sent to the parent process. The parent can then retrieve the exit status via the wait() system call to learn what actually happened. Cheers
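On the Python side, the non-blocking counterpart of wait() is Popen.poll(): it returns None while the child is still running and the exit status once it has terminated, which is exactly what a tray icon needs to display. A minimal sketch, with an inline script standing in for the real monitored program:

```python
import subprocess
import sys

# Launch a long-running child; the inline script is a stand-in for the
# real program being monitored.
child = subprocess.Popen([sys.executable, "-c",
                          "import time; time.sleep(1)"])

status = child.poll()              # non-blocking: None while alive
assert status is None              # -> tray would show "running"

child.wait()                       # blocks until the child exits
assert child.poll() == 0           # exit status available -> "stopped"
```

Calling poll() periodically (e.g. on a tray-icon timer) avoids ever blocking the tray script on a child that, by design, never terminates.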
Piping cmd.exe through a subprocess in order to embed a console works fine in most cases, when using a stdout.read(1) thread of course. However, this thread receives nothing for a few commands (I spotted this for Python itself as well as for Python programs).
I know there are lots of questions about output from children, but this is about losing a child's child's (and so on) output. The output of cmd.exe itself, as well as of most commands, is easily captured. I can also assume that the same occurs for input, as the interactive Python shell within cmd.exe does not close when exit() is entered.
This could be a buffering issue, but that would be strange, as buffering is disabled for Popen (and p.stdin.flush() is used, since Python won't start within p otherwise). It could also be caused by bad inheritance of the processes and their standard I/O streams, but I actually hope it's not.
I can see there could be good use for example code, but as I mentioned this is embedded, so if someone sees a theoretical problem I can skip the process of extracting that code :) However, I should add an example, even though this board gives lots of examples relating to cmd.exe and Popen. It would take less time than I have already spent googling for the solution.
The basic problem is
subprocess.Popen("cmd.exe", stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, bufsize=0)
that not all children of that subprocess seem to use the pipes. cmd.exe starts accepting further commands as soon as its child has been killed (which probably guarantees the loss of its output).
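For reference, the reader-thread pattern described above looks roughly like this; a Python child is used here in place of cmd.exe so the sketch stays portable, and -u disables the child's own output buffering:

```python
import subprocess
import sys
import threading

# Child that echoes one line of input back to stdout (-u: unbuffered),
# standing in for cmd.exe.
p = subprocess.Popen(
    [sys.executable, "-u", "-c", "print(input())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT, bufsize=0,
)

chunks = []

def reader():
    while True:
        b = p.stdout.read(1)       # one byte at a time, as in the question
        if not b:                  # EOF: the child closed its stdout
            break
        chunks.append(b)

t = threading.Thread(target=reader)
t.start()

p.stdin.write(b"hello\n")
p.stdin.flush()                    # harmless with bufsize=0
p.stdin.close()
p.wait()
t.join()

output = b"".join(chunks)          # contains b"hello"
```

This captures everything the direct child writes to its pipe; the question's problem is precisely that grandchildren which allocate their own console do not inherit these pipe handles.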
I'm using subprocess.Popen to launch an external program with arguments, but once it has opened, the script hangs, waiting for the program to finish, and if I close the script the program immediately quits.
I thought I was just using a similar process before without issue, so I'm unsure if I've actually done it wrong or I'm misremembering what Popen can do. This is how I'm calling my command:
subprocess.Popen(["rv", rvFile, '-nc'])
raw_input("= Opened file")
The raw_input part is only there so the user has a chance to see the message and know that the file should be open now. But what I end up getting is all the information the process itself is spitting back out, as if it were called on the command line directly. My understanding was that Popen made it an independent child process which would allow me to close the script and leave the other process open.
The linked duplicate question does have a useful answer for my purposes, though it's still not working quite as I want.
This is the answer. And this is how I changed my code:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen(["rv", rvFile, '-nc'], creationflags=DETACHED_PROCESS).pid
raw_input("= Opened file")
It works from IDLE, but not when I run the .py file through the command-prompt-style interface. It's still tied to that window, printing the output and quitting the program as soon as I've run the script.
The stackoverflow question Calling an external command in python has a lot of useful answers which are related.
Take a look at os.spawnl; it can take a number of mode flags, including P_NOWAIT and P_WAIT.
import os
# spawnl(mode, path, arg0, ...): the argument list must start with the
# program name itself, or spawnl raises an error
os.spawnl(os.P_NOWAIT, 'some_command', 'some_command')
With P_NOWAIT, os.spawnl returns the process ID of the spawned task.
Sorry for such a short answer, but I have not earned enough points to leave comments yet. Anyhow, put the raw_input("= Opened file") inside the file you are actually opening, rather than the program you are opening it from.
If the file you are opening is not a Python file, it will close upon finishing, regardless of what you declare from within Python. If that is the case, you could always try detaching it from its parent using:
from subprocess import Popen, CREATE_NEW_PROCESS_GROUP
# close_fds and creationflags are separate parameters; OR-ing the flag
# into close_fds would silently do the wrong thing
Popen(["rv", rvFile, '-nc'], close_fds=True, creationflags=CREATE_NEW_PROCESS_GROUP)
This is specifically for running the python script as a commandline process, but I eventually got this working by combining two answers that people suggested.
Using DETACHED_PROCESS as suggested in this answer worked for running it through IDLE, but not through the command-line interface. But using shell=True (as ajsp suggested) together with the DETACHED_PROCESS parameter allows me to close the Python script window and leave the other program still running.
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen(["rv", rvFile, '-nc'], creationflags=DETACHED_PROCESS, shell=True).pid
raw_input("= Opened file")
I am running multiple copies of the same Python script on an Amazon EC2 Ubuntu instance. Each copy in turn launches the same child Python script using the solution proposed here.
From time to time some of these child processes die: subprocess.check_output throws an exception and returns the error code -9. I ran the child process directly from the prompt, and after running for some time the process dies with the not-so-detailed message Killed.
Questions:
What does -9 mean?
How can I find out more about what went wrong? Specifically, my suspicion is that it might be caused by the machine getting overloaded by the several copies of the same script running at the same time. On the other hand, the specific child process that I ran directly appears to die every time it's launched, directly or not, and more or less at the same moment (i.e., after processing more or less the same amount of input data). Python is not producing any error messages.
Assuming I have no bugs in the Python code, what can I do to try to prevent the crashes?
check_output() accumulates the subprocess's output in memory. If the process generates enough output, it might be killed by the OOM killer due to the large RAM consumption.
If you don't need the output, you could use check_call() instead and discard the output:
import os
from subprocess import check_call, STDOUT

# send both stdout and stderr of the child to /dev/null
DEVNULL = open(os.devnull, "r+b")
check_call([command], stdout=DEVNULL, stderr=STDOUT)
-9 means the process was killed with signal 9 (SIGKILL), which cannot be caught or ignored: the process just quits immediately.
For example if you're trying to kill a process you could enter in your terminal:
ps aux | grep processname
or just this to get a list of all processes: ps aux
Once you have the pid of the process you want to terminate, you'd type kill -9 followed by the pid:
kill -9 1234
My memory is a little foggy when it comes to logs, but I'd look around in /var/log/ and see if you find anything, or check dmesg; the OOM killer logs its kills there.
As far as preventing crashes in your Python code, have you tried any exception handling?
Exceptions in Python
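As a sketch of that exception handling, here is check_output() catching exactly the -9 case; the inline child kills itself with SIGKILL to simulate what the OOM killer does:

```python
import signal
import subprocess
import sys

# A child killed by SIGKILL shows up in check_output() as a
# CalledProcessError whose returncode is -9, i.e. -signal.SIGKILL.
# The inline child kills itself to simulate the oom killer.
code = "import os, signal; os.kill(os.getpid(), signal.SIGKILL)"
try:
    subprocess.check_output([sys.executable, "-c", code])
    caught = None                  # not reached: the child always dies
except subprocess.CalledProcessError as e:
    caught = e.returncode          # -9
```

Catching the exception lets the parent log which input was being processed and, for example, relaunch the child, but it cannot prevent the kernel from killing the child in the first place; for that, reduce memory use or run fewer copies at once.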