Run tool via Python in cmd does not wait - python

I'm running a tool via Python in cmd. For each sample in a given directory I want that tool to do something. However, when I use process = subprocess.Popen(command) in the loop, the command does not wait until it's finished, resulting in 10 prompts at once. And when I use subprocess.Popen(command, stdout=subprocess.PIPE), the command window remains black and I can't see the progress, although it does wait until the command is finished.
Does anyone know a way to call an external tool via Python in cmd that waits until the command is finished and is able to show the tool's progress in the cmd window?
#main.py
import os
import time
import DNA_Bowtie2

for sample in os.listdir(os.getcwd()):
    if ".fastq" in sample and '_R1_' in sample and "Temp" not in sample:
        print time.strftime("%H:%M:%S")
        DNA_Bowtie2.DNA_Bowtie2(os.getcwd()+'\\'+sample+'\\'+sample)
#DNA_Bowtie2.py
# Run Bowtie2 command and wait for process to be finished.
process = subprocess.Popen(command, stdout=subprocess.PIPE)
process.wait()
process.stdout.read()
Edit: command = a perl or java command. With the above setup I cannot see the tool's output, since the prompt (perl window, or java window) remains black.

It seems like your subprocess forks; otherwise there is no way the wait() would return before the process has finished.

The order is important here: first read the output, then wait.
If you do it this way:
process.wait()
process.stdout.read()
you can experience a deadlock if the pipe buffer is completely full: the subprocess blocks writing to stdout and never reaches the end, while your program blocks on wait() and never reaches the read().
Do instead
process.stdout.read()
process.wait()
which will read until EOF.
This holds only if you want the stdout of the process at all.
If you don't want that, you should omit the stdout=PIPE stuff. Then the output is directed into that prompt window, and you can omit process.stdout.read() as well.
Normally, process.wait() should then prevent 10 instances from running at once. If that doesn't work, I don't know why not...
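Putting that together, here is a minimal sketch of DNA_Bowtie2.py; the bowtie2 arguments are placeholders I made up, not from the question. With stdout=PIPE omitted, the tool's progress prints straight to the console, and wait() still blocks until it finishes.
import subprocess

def DNA_Bowtie2(sample_path):
    # Placeholder for the actual perl/java/bowtie2 command from the question
    command = ['bowtie2', '-x', 'index_prefix', '-U', sample_path]
    # No stdout=PIPE: progress output goes straight to the console window
    process = subprocess.Popen(command)
    process.wait()  # block until this sample is done before the loop continues
With this, the loop in main.py processes one sample at a time and the progress stays visible.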

Related

Writing to and reading from a running subprocess

I would like to control adb (Android Debug Bridge) from a python script.
In order to do this I want to use the shell command from adb:
import subprocess as sp
adb = 'path-to-adb.exe'
print("Running shell command!")
p = sp.Popen([adb, 'shell'], stdout=sp.PIPE, stderr=sp.PIPE, stdin=sp.PIPE)
p.stdin.write('ls\r\n'.encode('utf-8'))
print(b''.join(p.stdout.readlines()).decode('utf-8'))
print('Do more stuff and eventually shut down.')
The idea is that I would write a command to the android shell, wait for the response then write another and so on... But whenever I call read() or readlines() on the running process it just does not return.
If however I call communicate() it works fine and returns the expected result. The problem with communicate() is that it ends the process.
I have looked at several questions here on Stackoverflow, but the answer always seems to be to wait for the process to terminate (by either using communicate() or subprocess.run()). Am I missing something here? Am I just not supposed to interact with a running process?
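No answer is shown here, but for what it's worth, one hedged sketch of interacting with a still-running process: flush after every write, and avoid read()/readlines(), which block until EOF; either read line by line or end the shell session before draining the pipe. The adb path is the placeholder from the question.
import subprocess as sp

adb = 'path-to-adb.exe'  # placeholder path from the question
p = sp.Popen([adb, 'shell'], stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.STDOUT)

p.stdin.write(b'ls\r\n')
p.stdin.flush()             # without a flush the command may never reach adb

p.stdin.write(b'exit\r\n')  # close the shell so stdout eventually reaches EOF
p.stdin.flush()

for line in p.stdout:       # line-by-line reads return as data arrives
    print(line.decode('utf-8', errors='replace'), end='')
p.wait()
For genuinely interleaved command/response exchanges you would need a reader thread or a tool like pexpect, because between commands there is no EOF for read() to stop at.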

Running a subprocess in a script and based on the output, continuing the execution of the script

I am writing a script which uses subprocess to launch a server and then continues on with the execution of the script.
This is my current code
cmd = "some command"
process = check_output(cmd, shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)
time.sleep(70)
print(process.returncode)
I am using time.sleep to delay the execution of the next lines in the script so that the server starts, but this is not efficient.
Once the server starts, I get this output in the console: INFO:bridge_serving!
Is there a way that I can check the output of the console and, once it says INFO:bridge_serving!, let the next lines of the script continue running?
I am guessing that you want the process to stay open/running while continually checking the output.
subprocess.check_output() won't return anything until the process is complete.
You probably want to keep the process open and use a pipe to read the output as it is created.
See: https://stackoverflow.com/a/4760517/3389859 for more info.
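A minimal sketch of that idea, assuming the INFO:bridge_serving! marker arrives on stdout (the command string is the placeholder from the question):
import subprocess

cmd = "some command"  # placeholder from the question
proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)

# Block only until the server announces readiness, then move on
# while it keeps running in the background.
for line in proc.stdout:
    print(line, end='')
    if 'INFO:bridge_serving!' in line:
        break

# ...the rest of the script runs here with the server still up...
If the server keeps writing after that point, drain proc.stdout in a background thread so a full pipe buffer cannot block it.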

subprocess.Popen makes terminal crash after KeyboardInterrupt

I wrote a simple python script ./vader-shell which uses subprocess.Popen to launch a spark-shell and I have to deal with KeyboardInterrupt, since otherwise the child process would not die
command = ['/opt/spark/current23/bin/spark-shell']
command.extend(params)
p = subprocess.Popen(command)
try:
    p.communicate()
except KeyboardInterrupt:
    p.terminate()
This is what I see with ps f
When I actually interrupt with ctrl-C, I see the processes dying (most of the time). However, the terminal starts acting weird: I don't see any cursor, and lines start to appear at random positions.
I am really lost as to the best way to run a subprocess with this library and how to handle killing the child processes. What I want to achieve is basic: whenever my python process is killed with ctrl-C, I want the whole family of processes to be killed. I googled several solutions (os.kill, p.wait() after termination, calling subprocess.Popen(['reset']) after termination) but none of them worked.
Do you know what is the best way to kill when KeyboardInterrupt happens? Or do you know any other more reliable library to use to spin-up processes?
There is nothing blatantly wrong with your code; the problem is that the command you are launching tries to do stuff with the current terminal and does not correctly restore the settings when shutting down. Replacing your command with a "sleep" like below will run just fine and stop on Ctrl+C without problems:
import subprocess
command = ['/bin/bash']
command.extend(['-c', 'sleep 600'])
p = subprocess.Popen(command)
try:
    p.communicate()
except KeyboardInterrupt:
    p.terminate()
I don't know what you're trying to do with spark-shell, but if you don't need its output you could try redirecting it to /dev/null so that it doesn't mess up the terminal display:
p = subprocess.Popen(command, stdout=subprocess.DEVNULL)
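If the goal really is to take the whole process family down on Ctrl-C, one common POSIX approach (my assumption, not part of the answer above) is to give the child its own process group and signal the group:
import os
import signal
import subprocess

command = ['/opt/spark/current23/bin/spark-shell']
# start_new_session=True detaches the child into its own process group,
# so a single signal reaches spark-shell and everything it spawned.
p = subprocess.Popen(command, start_new_session=True)
try:
    p.communicate()
except KeyboardInterrupt:
    os.killpg(os.getpgid(p.pid), signal.SIGTERM)
    p.wait()  # reap the child so it does not linger as a zombie
Note that a new session also gives up the controlling terminal, so this suits non-interactive runs better than an interactive spark-shell.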

python how to kill a popen process, shell false [why not working with standard methods]

I'm trying to kill a subprocess started with:
playing_long = Popen(["omxplayer", "/music.mp3"], stdout=subprocess.PIPE)
and after a while
pid = playing_long.pid
playing_long.terminate()
os.kill(pid,0)
playing_long.kill()
Which doesn't work.
Neither does the solution pointed out here:
How to terminate a python subprocess launched with shell=True
Note that I am using threads, and it is not recommended to use preexec_fn when you use threads (or at least that is what I read; it doesn't work either, anyway).
Why is it not working? There's no error message in the code, but I have to manually kill -9 the process to stop hearing the mp3 file.
Thanks
EDIT:
From here, I have added a wait() after the kill().
Surprisingly, before re-starting the process I check whether it is still alive, so that I don't start a chorus with the mp3 file.
Without the wait(), the system sees the process as alive.
With the wait(), the system understands that the process is dead and starts it again.
However, the sound is still playing. I definitely can't seem to get it killed.
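In other words, kill() only sends the signal; the process-table entry lingers as a zombie until wait() reaps it, which is why the liveness check disagrees with and without the wait(). A sketch of the distinction:
playing_long.kill()         # sends SIGKILL; delivery is asynchronous
playing_long.wait()         # reaps the zombie so the OS forgets the process
print(playing_long.poll())  # now reports the exit status (-9 on POSIX)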
EDIT2: The problem is that omxplayer starts a second process that I don't kill, and that second process is the one actually playing the music.
I've tried to use this code, found in several places on the internet; it seems to work for everyone but not for me:
playing_long.stdin.write('q')
playing_long.stdin.flush()
And it prints 'NoneType' object has no attribute 'write'. Even when using this code immediately after starting the Popen process, it fails with the same message:
playing_long = subprocess.Popen(["omxplayer", "/home/pi/Motion_sounds/music.mp3"], stdout=subprocess.PIPE)
time.sleep(5)
playing_long.stdin.write('q')
playing_long.stdin.flush()
EDIT3: The problem then was that I wasn't setting up stdin in the Popen call. Now it is:
playing_long = subprocess.Popen(["omxplayer", "/home/pi/Motion_sounds/music.mp3"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
time.sleep(5)
playing_long.stdin.write(b'q')
playing_long.stdin.flush()
*noting that what I write to stdin must be bytes
Final solution then: exactly the EDIT3 code above, opening the player with stdin=subprocess.PIPE and sending b'q' followed by a flush.
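If sending 'q' were ever not an option, a hedged fallback (not part of the original post) is to terminate omxplayer together with the helper process it spawns, for example by walking the process tree with the third-party psutil package:
import psutil  # third-party: pip install psutil

def kill_tree(pid):
    # Terminate a process and every descendant it spawned,
    # e.g. the second process omxplayer starts for playback.
    parent = psutil.Process(pid)
    for child in parent.children(recursive=True):
        child.terminate()
    parent.terminate()

kill_tree(playing_long.pid)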

How to execute a shell script in the background from a Python script

I am working on executing a shell script from Python and so far it is working fine. But I am stuck on one thing.
On my Unix machine I execute one command in the background by using &, like this. This command will start my app server:
david#machineA:/opt/kml$ /opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &
Now I need to execute the same thing from my Python script, but as soon as it executes my command it never goes to the else block and never prints out execute_steps::Successful; it just hangs there.
proc = subprocess.Popen("/opt/kml/bin/kml_http --config=/opt/kml/config/httpd.conf.dev &", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, executable='/bin/bash')
if proc.returncode != 0:
    logger.error("execute_steps::Errors while executing the shell script: %s" % stderr)
    sleep(0.05) # delay for 50 ms
else:
    logger.info("execute_steps::Successful: %s" % stdout)
Am I doing anything wrong here? I want to print out execute_steps::Successful after executing the shell script in the background.
All other commands work fine; only the command which I am trying to run in the background doesn't.
There are a couple of things going on here.
First, you're launching a shell in the background, and then telling that shell to run the program in the background. I don't know why you think you need both, but let's ignore that for now. In fact, by adding executable='/bin/bash' on top of shell=True, you're actually trying to run a shell to run a shell to run the program in the background, although that doesn't actually quite work.*
Second, you're using PIPE for the process's output and error, but then not reading them. This can cause the child to deadlock. If you don't want the output, use DEVNULL, not PIPE. If you want the output to process yourself, use proc.communicate(),** or use a higher-level function like check_output. If you just want it to intermingle with your own output, just leave those arguments off.
* If you're using the shell because kml_http is a non-executable script that has to be run by /bin/bash, then don't use shell=True for that, or executable; just make /bin/bash the first argument in the command line, and /opt/kml/bin/kml_http the second. But this doesn't seem likely; why would you install something non-executable into a bin directory?
** Or you can read it explicitly from proc.stdout and proc.stderr, but that gets more complicated.
At any rate, the whole point of executing something in the background is that it keeps running in the background, and your script keeps running in the foreground. So, you're checking its returncode before it's finished, and then moving on to whatever's next in your code, and never coming back again.
It seems like you want to wait for it to be finished. In that case, don't run it in the background—use proc.wait, or just use subprocess.call() instead of creating a Popen object. And don't use & either, of course. While we're at it, don't use the shell, either:
retcode = subprocess.call(["/opt/kml/bin/kml_http",
                           "--config=/opt/kml/config/httpd.conf.dev"],
                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
if retcode != 0:
    # etc.
Now, you won't get to that if statement until kml_http finishes running.
If you want to wait for it to be finished, but at the same time keep doing other stuff, then you're trying to do two things at once in your program, which means you need a thread to do the waiting:
def run_kml_http():
    retcode = subprocess.call(["/opt/kml/bin/kml_http",
                               "--config=/opt/kml/config/httpd.conf.dev"],
                              stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if retcode != 0:
        # etc.

t = threading.Thread(target=run_kml_http)
t.start()
# Now you can do other stuff in the main thread, and the background thread will
# wait around until kml_http is finished and execute the `if` statement whenever
# that happens
You're using stderr=PIPE, stdout=PIPE, which means that rather than letting the stdout and stderr of the child process be forwarded to the current process's standard output and error streams, they are being redirected to a pipe which you must read from in your python process (via proc.stdout and proc.stderr).
To "background" a process, simply omit the usage of PIPE:
#!/usr/bin/python
from subprocess import Popen
from time import sleep

proc = Popen(
    ['/bin/bash', '-c', 'for i in {0..10}; do echo "BASH: $i"; sleep 1; done'])

for x in range(10):
    print "PYTHON: {0}".format(x)
    sleep(1)

proc.wait()
which will show the process being "backgrounded".
