I am on Windows, starting a new process with subprocess.Popen, and I want to terminate it at a certain point. However, the GUI that I launched is still visible. A minimal example would be starting the PNG viewer:
import subprocess
proc = subprocess.Popen(['start', 'test.png'], shell=True)
proc.kill()
After the kill() command the GUI is still running and I have to close it manually.
As far as I understand, this can be solved on Linux by passing preexec_fn=os.setsid to Popen (see How to terminate a python subprocess launched with shell=True). Since os.setsid is specific to Unix, I do not know how to achieve the same on Windows.
Another option would be to get rid of shell=True, but I don't know how to do that either, because I have to pass the file name.
Any help would be greatly appreciated...
If you want to get rid of shell=True, you have to give the path to the viewer executable itself rather than to start (which is a cmd.exe built-in):
import subprocess
proc = subprocess.Popen(['/full/path/to/viewer.exe', filename])
proc.kill()
start is an internal command: it requires cmd.exe (which you get via shell=True or by running cmd.exe directly). Popen() does not wait for the start command to finish, and start does not wait for the PNG viewer to exit -- by the time you call proc.kill(), start may already have finished.
You could try to run the PNG viewer directly instead (you don't need to provide the full path if the corresponding exe-file can be found in the standard locations).
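A minimal sketch of running the viewer directly, assuming it takes the file name as an argument (mspaint is only a stand-in here; use whatever viewer you actually want to control):
import subprocess

# launch the viewer itself: no 'start', no shell, so proc really is the viewer
proc = subprocess.Popen(['mspaint', 'test.png'])
# ... later ...
proc.kill()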
How to terminate a python subprocess launched with shell=True has a solution for Windows too (you could try it if the PNG viewer starts child processes).
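For reference, the Windows approach from that answer boils down to killing the whole process tree with taskkill; a sketch, assuming proc is the Popen object whose tree you want gone:
import subprocess

# /PID targets the process, /T includes its children, /F forces termination
subprocess.call(['taskkill', '/F', '/T', '/PID', str(proc.pid)])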
From Python, running on Windows, I need to run a Python file inside Git Bash.
That is, I have a configuration script written in Python that calls other Python scripts. Unfortunately, some of them use Unix commands, so on Windows they must be run through Git Bash.
Currently I'm using this:
cmd = f'{sys.executable} mydependency.py'
pipe = subprocess.Popen(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
# waiting for pipe is handled later...
However, this doesn't work, giving me a cannot execute binary file message. How can I get it to run?
PS: For slightly more context, mydependency.py is actually the amalgamate.py script from the simdjson (https://github.com/simdjson/simdjson) project.
EDIT:
I have also attempted the following:
Switch to run or call instead of subprocess.Popen
Use f'{git_bash_path} {sys.executable} mydependency.py'
Change the shell and executable parameters of Popen, run and call
I found a solution:
import subprocess
import sys

cmd = git_bash_path  # path to bash.exe, found with glob
pipe = subprocess.Popen(cmd, stderr=subprocess.PIPE, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
pipe.communicate(input=f'{sys.executable} mydependency.py'.encode())
I'm not entirely sure why this works; if anyone has an explanation I'd be glad to hear it.
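A likely explanation: when bash is started without arguments it reads commands from its standard input, so piping the command line into communicate() makes bash execute it. A sketch of an equivalent that skips the stdin round trip, assuming git_bash_path points at bash.exe as above, would pass the command with -c:
import subprocess
import sys

# run the command directly via bash -c instead of feeding it through stdin
pipe = subprocess.Popen(
    [git_bash_path, '-c', f'{sys.executable} mydependency.py'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
stdout, stderr = pipe.communicate()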
I'm currently running an OpenELEC (XBMC) installation on a Raspberry Pi and have installed a tool named "Hyperion" which drives the connected Ambilight. I'm a total noob when it comes to Python programming, so here's my question:
How can I run a script that checks if a process with a specific string in its name is running and:
kill the process when it's running
start the process when it's not running
The goal of this is to have one script that toggles the Ambilight. Any idea how to achieve this?
You may want to have a look at the subprocess module, which can run shell commands from Python; for instance, see this answer. You can then capture the shell command's stdout into a variable. I suspect you are going to need the pidof shell command.
The basic idea would be along the lines of:
import subprocess

try:
    # -s: return a single pid, -x: also match scripts
    output = subprocess.check_output(["pidof", "-s", "-x", "hyperiond"])
except subprocess.CalledProcessError:
    # not running: spawn the process with subprocess.Popen
    subprocess.Popen("hyperiond")
else:
    # running: kill the pid we just found
    subprocess.call(["kill", output.strip()])
I've tested this code in Ubuntu with bash as the process and it works as expected. In your comments you note that you are getting file not found errors. You can try putting the complete path to pidof in your check_output call. This can be found using which pidof from the terminal. The code for my system would then become
subprocess.check_output(["/bin/pidof", "-s", "-x", "hyperiond"])
Your path may differ. On Windows, adding shell=True to the check_output arguments fixes this issue, but I don't think that is relevant for Linux.
Thanks so much for your help @will-hart, I finally got it working. I needed to change some details because the script kept saying that "output" is not defined. Here's what it looks like now:
#!/usr/bin/env python
import subprocess

try:
    subprocess.check_output(["pidof", "hyperiond"])
except subprocess.CalledProcessError:
    subprocess.Popen(["/storage/hyperion/bin/hyperiond.sh", "/storage/.config/hyperion.config.json"])
else:
    subprocess.call(["killall", "hyperiond"])
I have a Python app that prints lots of output to the screen, which can be used for debugging. Out of all the logging techniques, the "script" command works well for me because I can see the output on the screen as well as log it. I want to invoke it at the beginning of my Python app so it runs automatically and logs everything. When I do, however, the Python program doesn't run; as soon as I type exit at the terminal (which stops the script logging) the app starts working. The command I'm using is:
command="script /tmp/appdebug/debug.txt"
os.system(command)
I have also tried script -q, but the same issue persists. I would appreciate any help.
Cheers
Well, I did find the answer for anyone who is interested:
https://stackoverflow.com/questions/15507602/logging-all-bash-in-and-out-with-script-command
and
Bash script: Using "script" command from a bash script for logging a session
I will keep this question as others might have the same issue and finding those answers wasn't exactly easy :)
Cheers
Try to use subprocess, like so:
from subprocess import Popen, PIPE
p = Popen(['script', '/tmp/appdebug/debug.txt'], stderr=PIPE, stdout=PIPE)
stdout, stderr = p.communicate()
script is a wrapper for an interactive session. Even if it appears to return quickly after being started in a shell, this is not so; instead it starts a new shell in which you interact, and everything is logged to a file.
What does this mean for you?
Your approach of using script cannot work. You start script using os.system, which waits for script to terminate before the next Python statement is executed. script's work therefore happens entirely before it terminates (i.e. during the uninteresting waiting period of your Python program).
I propose using script -c yourprog.py yourprog.log instead. This will execute and wrap yourprog.py, and the whole session will be stored in yourprog.log.
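A minimal launcher sketch along those lines, assuming the real program lives in yourprog.py and the log should go to yourprog.log (-q only suppresses script's start/done messages):
import subprocess
import sys

# wrap the real program in 'script' so the whole session is logged;
# this launcher simply blocks until the wrapped program exits
subprocess.call(['script', '-q', '-c', '%s yourprog.py' % sys.executable, 'yourprog.log'])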
I have successfully run several Python scripts, calling them from a base script using the subprocess module:
subprocess.Popen([sys.executable, 'script.py'], shell=True)
However, each of these scripts executes some simulations (.exe files from a C++ application) that generate some output to the shell. All these outputs are written to the base shell from which I've launched those scripts. I'd like to get a new shell for each script. I've tried to generate new shells using the shell=True argument when calling subprocess.call (and also with Popen), but it doesn't work.
How do I get a new shell for each process generated with subprocess.call?
I was reading the documentation about stdin and stdout as suggested by Spencer and found a flag that solved the problem: subprocess.CREATE_NEW_CONSOLE. Maybe redirecting the pipes does the job too, but this seems to be the simplest solution (at least for this specific problem). I've just tested it and it worked perfectly:
subprocess.Popen([sys.executable, 'script.py'], creationflags=subprocess.CREATE_NEW_CONSOLE)
To open in a different console, do (tested on Windows 7 / Python 3):
from sys import executable
from subprocess import Popen, CREATE_NEW_CONSOLE
Popen([executable, 'script.py'], creationflags=CREATE_NEW_CONSOLE)
input('Enter to exit from this launcher script...')
Popen already generates a subprocess to handle things. You just need to redirect the output pipes. Look at the subprocess documentation, specifically the section on Popen stdin, stdout and stderr redirection.
If you don't redirect these pipes, the child inherits them from the parent. Just be careful about deadlocking your processes (see the sketch below).
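A sketch of that redirection, assuming the call pattern from the question; communicate() drains both pipes for you, which avoids the deadlock you can get by reading them one at a time:
import subprocess
import sys

proc = subprocess.Popen(
    [sys.executable, 'script.py'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
out, err = proc.communicate()  # read everything, then wait for the child to exit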
You wanted additional windows for each subprocess. This is handled as well. Look at the startupinfo section of subprocess: it explains what options to set on Windows to spawn a new terminal for each subprocess. Note that it requires the use of the shell=True option.
This doesn't actually answer your question. But I've had my problems with subprocess too, and pexpect turned out to be really helpful.
I am using Python 2.5 on Windows XP.
In it I am using subprocess to run my shell; now, how do I run gdb in that shell using subprocess?
My code:
PID = subprocess.Popen('C:/STM/STxP70_Toolset_2010.2/bin/STxP70.bat', shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Now the shell opens. Next, if I try to run gdb via communicate with
PID.communicate("gdb"),
gdb does not run in the shell.
What should I do about this?
Your code:
Starts STxP70.bat
Writes string "gdb" (with no terminating newline) to it's standard input and closes the standard input.
Is reading it's output until end of file. PID.communicate won't let you to interact with the subprocess any further—it writes the provided string and than collects all output until the process terminates.
When STxP70.bat completes, the subprocess terminates.
Note, that if "shell will open" means a new window comes up with a shell prompt in it, you are screwed. It would mean the STxP70.bat stared it with 'start' command and you can't communicate with that, because it's not inheriting your stdin/stdout/stderr pipes. You would have to create your own modification of the batch that will not use 'start'.