Kill a batch file from the Python script it contains - python

I have a problem. I need to kill a batch file from a Python script that the batch file itself runs. The batch file runs the abc.py script first, followed by other scripts, so I need to kill the batch file from abc.py so that the other scripts don't get executed. Here is what I have tried:
import psutil

for proc in psutil.process_iter():
    if proc.name() == "python.exe" and len(proc.cmdline()) > 1 and "abc.py" in proc.cmdline()[1]:
        proc.terminate()
But this only kills the Python script, not the batch file. I also tried killing the PID directly, with the same effect:
os.system("taskkill /F /PID " + str(os.getpid()))
Edit 1
The script checks for the existence of another running script and then needs to terminate itself.

If you're just looking to kill whoever your parent is, that's easy: just use os.getppid() instead of os.getpid():
os.system("taskkill /F /PID " + str(os.getppid()))
Of course it's better to use subprocess instead of os.system for all the usual reasons, like getting a useful error if it fails:
subprocess.run(['taskkill', '/F', '/PID', str(os.getppid())])
Or, even better, don't use taskkill, just kill it directly. This also gives you the option of using a nicer Ctrl-C or Ctrl-Break kill instead of a hard kill, if preferred:
os.kill(os.getppid(), signal.CTRL_BREAK_EVENT)
If you're using Python 2.7, getppid doesn't work on Windows; that was only added in 3.2. (And I think the same is true for os.kill, and definitely for signal.CTRL_BREAK_EVENT.)
Since you're already apparently amenable to using psutil, you can use that.
There's no need to search through every process on the system to find yourself; just construct a default Process. You can go from any process to its parent with parent, and then you can use the kill or terminate methods:
proc = psutil.Process().parent()
proc.kill()
All of the above (except using CTRL_C_EVENT or CTRL_BREAK_EVENT instead of a standard signal) have the nice advantage of being cross-platform: you can run the same script on Linux or macOS or whatever and it'll kill the shell script that ran it.

Your batch file will need to check whether the last command succeeded and exit if it didn't.
See How do I make a batch file terminate upon encountering an error?
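For illustration, a minimal batch-file sketch of that idea; the script names here are placeholders, and it assumes abc.py exits with a non-zero code when the rest should be skipped:
REM abc.py exits non-zero when the remaining scripts must not run
python abc.py
if errorlevel 1 exit /b 1
python other_script.py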

Related

Terminate Python Process and halt its underlying os.system command line

I am running a script that launches a program via cmd and then, while the program is open, checks the program's log file for errors. If it finds any, it closes the program.
I cannot use the taskkill command since I don't know the PID of the process and the image name is the same as that of other processes that I don't want to kill.
Here is a code example:
import os, multiprocessing, time

def runprocess():
    os.system('"notepad.exe"')

if __name__ == '__main__':
    process = multiprocessing.Process(target=runprocess, args=[])
    process.start()
    time.sleep(5)
    # Continuously checking if errors in log file here...
    process_has_errors = True  # We suppose an error has been found for our case.
    if process_has_errors:
        process.terminate()
The problem is that I want the Notepad window to close. It seems like the terminate() method simply disconnects the process without closing all of its tasks.
What can I do to make sure to end all pending tasks in a process when terminating it, instead of simply disconnecting the process from those tasks?
You can use taskkill but you have to use the /T (and maybe /F) switch so all child processes of the cmd process are killed too. You get the process id of the cmd task via process.pid.
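A minimal sketch of that, building on the question's process object (the /T and /F switches are standard taskkill options):
import subprocess

# /T kills the whole process tree rooted at that PID (cmd and notepad.exe included),
# /F forces termination
subprocess.run(['taskkill', '/T', '/F', '/PID', str(process.pid)])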
You could use a system call if you know the name of the process:
import os
...
if process_has_errors:
    processName = "notepad.exe"
    process.terminate()
    os.system(f"TASKKILL /F /IM {processName}")

How to make a program close its currently running instance upon startup?

I have a command line program which I'd like to keep running until I open it again, so basically a program which first checks whether there is already a running instance of itself and kills it.
I tried os.system('TASKKILL /F /IM program.exe'), but that turned out to be a bad idea because it also kills itself.
The most reliable way to make sure there's only one instance of your application is to create a pid file in a known (fixed) location. This location will usually be in your application data folder or in the temporary directory. At startup, check whether the pid file exists and whether the pid it contains still refers to a running instance of your target process. If it does, send a kill signal to it, then overwrite the file with your current pid before starting the rest of the application.
For extra safety, you may want to wait until the previous process has completely terminated. This can be done either by waiting/polling to check whether the process with that pid still exists, or by polling for the killed process to delete its own pid file. The latter may be necessary if process shutdown is very lengthy and you want to allow the current process to start working while the old process is shutting down.
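A minimal sketch of the pid-file idea; the file name and location are arbitrary examples, and a fuller version would also verify that the stored pid really belongs to your program (e.g. with psutil) before killing it:
import os
import signal
import tempfile

# hypothetical fixed location for the pid file
PID_FILE = os.path.join(tempfile.gettempdir(), 'myprogram.pid')

def kill_previous_instance():
    if os.path.exists(PID_FILE):
        try:
            with open(PID_FILE) as f:
                old_pid = int(f.read().strip())
            # on Windows, any signal other than CTRL_C/CTRL_BREAK unconditionally terminates
            os.kill(old_pid, signal.SIGTERM)
        except (ValueError, OSError):
            pass  # stale, malformed, or already-gone pid
    with open(PID_FILE, 'w') as f:
        f.write(str(os.getpid()))

kill_previous_instance()
# ... rest of the application ...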
You can use the psutil library. The code below simply iterates through all running processes, filters for those with a specific filename, and, if they have a different PID than the current process, kills them. It will also run on any platform, provided you use the right process filename.
import os
import psutil

process_to_kill = "program.exe"

# get PID of the current process
my_pid = os.getpid()

# iterate through all running processes
for p in psutil.process_iter():
    # if it's the process we're looking for...
    if p.name() == process_to_kill:
        # ...and it has a different PID than the current process, kill it
        if p.pid != my_pid:
            p.terminate()
If the program filename alone is not unique enough, you can use the Process.exe() method instead, which returns the full path of the process image:
process_to_kill = r"c:\some\path\program.exe"

for p in psutil.process_iter():
    if p.exe() == process_to_kill:
        # ...
Because my workstations don't have access to the internet and installing packages is a mess, I ended up coming up with this solution:
import os

os.system('tasklist > location/tasks.txt')
with open('location/tasks.txt', 'r') as pslist:
    for line in pslist:
        if line.startswith('python.exe'):
            if line.split()[1] != str(os.getpid()):
                os.system(f'TASKKILL /F /PID {line.split()[1]}')
                break
os.remove('location/tasks.txt')
It prints the output of the tasklist command to a file and then checks the file to see if there's a running python process with a different PID from its own.
Edit: I figured out I can do it with popen, so it's shorter and there are no files involved:
import os

for line in os.popen('tasklist').readlines():
    if line.startswith('python.exe'):
        if line.split()[1] != str(os.getpid()):
            os.system(f'taskkill /F /PID {line.split()[1]}')
            break
You can use the process ID of the already running instance:
import os
os.system("taskkill /pid <ProcessID>")

Python subprocess.call not waiting for process to finish blender

I have a Python script in Blender that contains
subprocess.call(os.path.abspath('D:/Test/run-my-script.sh'),shell=True)
followed by a lot of other code that depends on this shell script finishing. The problem is that it doesn't wait for it to finish, and I don't know why. I even tried using Popen instead of call, as shown:
p1 = subprocess.Popen(os.path.abspath('D:/Test/run-my-script.sh'),shell=True)
p1.wait()
and I tried using communicate, but it still didn't work:
p1 = subprocess.Popen(os.path.abspath('D:/Test/run-my-script.sh'),shell=True).communicate()
This shell script works great on macOS (after changing paths) and waits when using subprocess.call(['sh', '/userA/Test/run-my-script.sh']).
On Windows, however, this is what happens: I run the Python script below in Blender, and once it gets to the subprocess line, Git Bash opens and runs the shell script, but Blender doesn't wait for it to finish; it just prints Hello in its console without waiting for Git Bash. Any help?
import bpy
import os
import subprocess

subprocess.call(os.path.abspath('D:/Test/run-my-script.sh'), shell=True)
print('Hello')
You can use subprocess.call to do exactly that.
subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False, timeout=None)
Run the command described by args. Wait for command to complete, then return the returncode attribute.
Edit: I think I have a hunch about what's going on. The command works on your Mac because Macs, I believe, support Bash out of the box (or at least something functionally equivalent), while on Windows it sees your attempt to run a ".sh" file and instead fires up Git Bash, which I presume performs a couple of forks when starting.
Because of this, Python thinks your script is done: the PID it was waiting on is gone.
If I were you I would do this:
Generate a unique, non-existing, absolute path in your "launching" script using the tempfile module.
When launching the script, pass the path you just made as an argument.
When the script starts, have it create a file at the path. When done, delete the file.
The launching script should watch for the creation and deletion of that file to indicate the status of the script.
Hopefully that makes sense.
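A rough sketch of that idea, assuming run-my-script.sh is modified to create the file whose path it receives as an argument when it starts and to delete it when it finishes; the sentinel path and polling intervals are placeholders:
import os
import subprocess
import tempfile
import time

# hypothetical agreed-upon sentinel path passed to the script
sentinel = os.path.join(tempfile.gettempdir(), 'run-my-script.busy')

subprocess.call('D:/Test/run-my-script.sh "{}"'.format(sentinel), shell=True)

# wait for the sentinel to appear (script started), then to disappear (script finished);
# a real version would add a timeout so a crashed script can't hang us forever
while not os.path.exists(sentinel):
    time.sleep(0.1)
while os.path.exists(sentinel):
    time.sleep(0.5)
print('Hello')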
You can use the Popen.communicate API.
p1 = subprocess.Popen(os.path.abspath('D:/Test/run-my-script.sh'),shell=True)
sStdout, sStdErr = p1.communicate()
From the documentation:
Popen.communicate(input=None, timeout=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for the process to terminate.
subprocess.run will by default wait for the process to finish.
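For example, a sketch using the same path as the question (check=True makes a non-zero exit raise CalledProcessError):
import subprocess

subprocess.run('D:/Test/run-my-script.sh', shell=True, check=True)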
Use subprocess.Popen and Popen.wait:
process = subprocess.Popen(['D:/Test/run-my-script.sh'],shell=True, executable="/bin/bash")
process.wait()
You could also use check_call() instead of Popen.
You can use os.system, like this:
import bpy
import os
os.system("sh "+os.path.abspath('D:/Test/run-my-script.sh'))
print('Hello')
There are apparently cases when the run command fails.
This is my workaround:
import os
import subprocess
import time

def check_has_finished(pfi, interval=1, timeout=100):
    if os.path.exists(pfi):
        if pfi.endswith('.nii.gz'):
            mustend = time.time() + timeout
            while time.time() < mustend:
                try:
                    # Command is an ad hoc one to check if the process has finished.
                    subprocess.check_output('command {}'.format(pfi), shell=True)
                except subprocess.CalledProcessError:
                    print("Caught CalledProcessError")
                else:
                    return True
                time.sleep(interval)
            msg = 'command {0} not working after {1} tests. \n'.format(pfi, timeout)
            raise IOError(msg)
        else:
            return True
    else:
        msg = '{} does not exist!'.format(pfi)
        raise IOError(msg)
A wild guess, but are you running the shell as Admin while running Blender as a regular user, or vice versa?
Long story short (very short), Windows UAC is a sort of isolated environment between the admin and regular user, so random quirks like this can happen. Unfortunately I can't remember the source of this; the closest I found is this.
My problem was the exact opposite of yours: the wait() got stuck in an infinite loop because my Python REPL was started from an admin shell and wasn't able to read the state of the regular-user subprocess. Reverting to a normal user shell fixed it. It's not the first time I've been bitten by this UAC snafu.

How to kill a process created by python os.system()

I have a Flask application using Python 3. Sometimes it creates a daemon process to run a script, and then I want to kill the daemon on timeout (using signal.SIGINT).
However, some processes created by os.system (for example, os.system('git clone xxx')) are still running after the daemon was killed.
So what should I do? Thanks all!
In order to be able to kill a process you need its process id (usually referred to as a pid). os.system doesn't give you that, simply returning the value of the subprocess's return code.
The newer subprocess module gives you much more control, at the expense of somewhat more complexity. In particular it allows you to wait for the process to finish, with a timeout if required, and gives you access to the subprocess's pid. While I am not an expert in its use, this seems to work. Note that this code needs Python 3.3 or later to use the timeout argument to the Popen.wait call.
import subprocess

process = subprocess.Popen(['git', 'clone', 'https://github.com/username/reponame'])
try:
    print('Running in process', process.pid)
    process.wait(timeout=10)
except subprocess.TimeoutExpired:
    print('Timed out - killing', process.pid)
    process.kill()
print("Done")
The following command on the command line will show you all the running instances of python.
$ ps aux | grep -i python
username 6488 0.0 0.0 2434840 712 s003 R+ 1:41PM 0:00.00 python
The first number, 6488, is the PID, process identifier. Look through the output of the command on your machine to find the PID of the process you want to kill.
You can run another command to kill the correct process.
$ kill 6488
You might need to use sudo with this command. Be careful though, you don't want to kill the wrong thing or bad stuff could happen!

getting ProcessId within Python code

I am on Windows. Suppose I have a main Python program that calls the Python interpreter on the command line to execute another Python script, say test.py.
So test.py is executed as a new process. How can I find the process ID for this process in Python?
Update:
To be more specific, we have os.getpid() in the os module. It returns the current process ID.
If I have a main program that runs the Python interpreter to run another script, how can I get the process ID of that executing script?
If you used subprocess to spawn the shell, you can find the process ID in the pid property:
sp = subprocess.Popen(['python', 'script.py'])
print('PID is ' + str(sp.pid))
If you used multiprocessing, use its pid property:
p = multiprocessing.Process()
p.start()
# Some time later ...
print('PID is ' + str(p.pid))
It all depends on how you're launching the second process.
If you're using os.system or similar, that call won't report back anything useful about the child process's pid. One option is to have your 2nd script communicate the result of os.getpid() back to the original process via stdin/stdout, or write it to a predetermined file location. Another alternative is to use the third-party psutil library to figure out which process it is.
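As an illustration of the psutil option, a rough sketch that looks for a python process whose command line mentions the script; the script name test.py is taken from the question and the matching rule is an assumption:
import psutil

for proc in psutil.process_iter(['name', 'cmdline']):
    cmdline = proc.info['cmdline'] or []
    if any('test.py' in part for part in cmdline):
        print('test.py is running as pid', proc.pid)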
On the other hand, if you're using the subprocess module to launch the script, the resulting "popen" object has an attribute popen.pid which will give you the process id.
You will receive the process ID of the newly created process when you create it. At least, you will if you used fork() (Unix), posix_spawn(), CreateProcess() (Win32), or probably any other reasonable mechanism to create it.
If you invoke the "python" binary, the Python PID will be the PID of the binary that you invoke. It's not going to create another subprocess for itself (unless your Python code does that).
Another option is for the process you execute to set a console window title for itself.
The searching process can then enumerate all windows, find the relevant window handle by name, and use that handle to find the PID. This works on Windows using ctypes.
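A rough Windows-only sketch of that approach with ctypes, using FindWindowW by title rather than full enumeration for brevity; the title string is a made-up example:
import ctypes

# in the launched script: give its console window a unique, known title
ctypes.windll.kernel32.SetConsoleTitleW('my-unique-console-title')

# in the searching process: find that window by title and ask which process owns it
hwnd = ctypes.windll.user32.FindWindowW(None, 'my-unique-console-title')
pid = ctypes.c_ulong()
ctypes.windll.user32.GetWindowThreadProcessId(hwnd, ctypes.byref(pid))
print('window owner pid:', pid.value)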
