I've been trying to get something like this to work for a while. The code below doesn't seem to be sending the correct argument to the C program arg_count, which reports argc = 1 when I'm pretty sure it should be 2. Running ./arg_count -arg from the shell outputs 2...
I have tried with another arg (so it would output 3 in the shell) and it still outputs 1 when calling via subprocess.
import subprocess
pipe = subprocess.Popen(["./args/Release/arg_count", "-arg"], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = pipe.communicate()
result = out.decode()
print "Result : ",result
print "Error : ",err
Any idea where I'm falling over? I'm running Linux, by the way.
From the documentation:
The shell argument (which defaults to False) specifies whether to use
the shell as the program to execute. If shell is True, it is
recommended to pass args as a string rather than as a sequence.
Thus,
pipe = subprocess.Popen("./args/Release/arg_count -arg", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
should give you what you want.
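For reference, here is a minimal sketch of that string form with the output collected via communicate(); the ./args/Release/arg_count path is taken from the question, so adjust it to your own build:
import subprocess

# The whole command line goes to /bin/sh as one string because shell=True.
pipe = subprocess.Popen("./args/Release/arg_count -arg",
                        shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = pipe.communicate()
print("Result :", out.decode())
print("Error  :", err.decode())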
If shell=True then your call is equivalent to:
from subprocess import Popen, PIPE
proc = Popen(['/bin/sh', '-c', "./args/Release/arg_count", "-arg"],
             stdout=PIPE, stderr=PIPE)
i.e., -arg is passed to the shell itself and not your program. Drop shell=True to pass -arg to the program:
proc = Popen(["./args/Release/arg_count", "-arg"],
stdout=PIPE, stderr=PIPE)
If you don't need to capture stderr separately from stdout then you could use check_output():
from subprocess import check_output, STDOUT
output = check_output(["./args/Release/arg_count", "-arg"])  # or
output_and_errors = check_output(["./args/Release/arg_count", "-arg"],
                                 stderr=STDOUT)
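On Python 3.5+ you could also reach for subprocess.run, which wraps the same machinery; a minimal sketch, again assuming the arg_count binary from the question:
from subprocess import run, PIPE

# Without shell=True the list elements become argv[0], argv[1], ... directly.
proc = run(["./args/Release/arg_count", "-arg"], stdout=PIPE, stderr=PIPE)
print(proc.returncode)
print(proc.stdout.decode())  # the program should now see argc == 2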
I've been working on a Python script to interact with ffmpeg; however, I've noticed that even though everything runs fine, stdout is empty and stderr returns what I should expect from stdout. How do I fix it so that the output will be returned by stdout?
Here's the simplest example that reproduces the phenomenon:
from subprocess import Popen, PIPE
p = Popen(['python', '-V'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()
if out:
    print("Output. ", out.decode())
else:
    print("Error. ", err.decode())
Here's the output:
Error. Python 3.6.1 :: Anaconda 4.4.0 (64-bit)
I should note that I'm using Windows 10.
You can redirect the stderr of your process to its stdout like so:
from subprocess import Popen, PIPE, STDOUT
p = Popen(["python", "-V"], stdout=PIPE, stderr=STDOUT)
Then you can retrieve the output produced by the process like so:
out = p.stdout.read()
This will return the content of the stdout after your process has terminated.
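Putting that together, a minimal runnable sketch; it uses communicate() rather than a bare read() so the process is also waited on:
from subprocess import Popen, PIPE, STDOUT

# Merge stderr into stdout so everything comes back on a single stream.
p = Popen(["python", "-V"], stdout=PIPE, stderr=STDOUT)
out, _ = p.communicate()
print(out.decode())  # e.g. "Python 3.6.1 :: Anaconda 4.4.0 (64-bit)"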
I am trying to use Popen to kick off a subprocess that calls two commands (with multiple arguments), one after the other. The second command relies on the first command having run, so I was hoping to use a single subprocess to run both rather than spawning two processes and waiting on the first.
But I am running into issues because I am not sure how to give two command inputs or how to combine the commands into a single object.
Also, I am trying to avoid setting shell to True if possible.
This is essentially, what I am trying to do:
for test in resources:
    command = [
        'pgh',
        'resource',
        'create',
        '--name', test['name'],
        '--description', test['description'],
    ]
    command2 = [
        'pgh',
        'assignment',
        'create',
        '--name', test['name'],
        '--user', test['user'],
    ]
    p = Popen(command, stdout=PIPE, stderr=PIPE)
    stdout, stderr = p.communicate()
    print(stdout)
    print(stderr)
As per my understanding, the following should work for you.
To chain the two commands, piping the output of the first into the second, use:
p1 = subprocess.Popen(command, stdout=subprocess.PIPE)
p2 = subprocess.Popen(command2, stdin=p1.stdout, stdout=subprocess.PIPE)
print(p2.communicate())
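For reference, a slightly fuller sketch of that pipeline; closing p1.stdout in the parent lets the first process receive SIGPIPE if the second one exits early (command and command2 are the lists from the question):
import subprocess

p1 = subprocess.Popen(command, stdout=subprocess.PIPE)
p2 = subprocess.Popen(command2, stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()          # let p1 get SIGPIPE if p2 exits before reading everything
out, _ = p2.communicate()  # read p2's output and wait for it to finish
print(out.decode())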
You will have to launch one command and wait for it to complete before launching the next, and repeat this for each command.
This can be done as:
results = [Popen(c, stdout=PIPE, stderr=PIPE).communicate()
           for c in [command, command2]]
Note that this launches the next command irrespective of whether the first command succeeded or failed. If you want to launch the next command only if the previous command succeeds, then use:
def check_execute(commands):
    # Run each command in turn, yielding its (stdout, stderr);
    # stop as soon as one of them exits with a non-zero return code.
    return_code = 0
    for c in commands:
        p = Popen(c, stdout=PIPE, stderr=PIPE)
        result = p.communicate()
        yield result
        return_code = p.returncode
        if return_code != 0:
            break
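A usage sketch for that generator, feeding it the two command lists from the question:
# Runs the commands in order and stops after the first failure.
for out, err in check_execute([command, command2]):
    print(out.decode())
    print(err.decode())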
I'm trying to get the standard output of a bash command as a string in Python. Following Popen documentation, I've tried:
import subprocess
p = subprocess.Popen(["echo", "hello"])
stdoutdata, stderrdata = p.communicate()
print stdoutdata
Running this script yields the following output:
hello
None
[Finished in 0.0s]
So although the output is getting printed by Python, the stdoutdata variable is None, and not "hello" as I would like. How can I make it so?
You're not providing any stdout to the Popen constructor, so by default the child writes its output to the parent's stdout handle. Hence you're seeing it printed in your shell.
Quoting from Popen's docs:
With the default settings of None, no redirection will occur; the child’s file handles will be inherited from the parent.
To populate stdout in the resulting tuple, pass subprocess.PIPE as stdout.
Quoting from the docs:
To get anything other than None in the result tuple, you need to give
stdout=PIPE and/or stderr=PIPE too.
>>> import subprocess
>>> p = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
>>> p.communicate()
('hello\n', None)
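Note that the interactive session above is Python 2; on Python 3 the captured data comes back as bytes, so you usually want a decode step. A minimal sketch:
import subprocess

p = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out.decode())  # -> hello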
You need to pass the stdout and stderr flags to the Popen constructor.
By default they are set to None, so Popen does not capture them.
cmd = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = cmd.communicate()
# retCode = cmd.returncode
# retCode != 0 indicates an error occurred during execution.
print(out)
# b'hello\n'
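And the return-code check those comments allude to, as a short sketch:
from subprocess import Popen, PIPE

cmd = Popen(["echo", "hello"], stdout=PIPE, stderr=PIPE)
out, err = cmd.communicate()
if cmd.returncode != 0:  # non-zero means the command failed
    print("error:", err.decode())
else:
    print(out.decode())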
It seems like the subprocess.check_output method is what I need:
import subprocess
output = subprocess.check_output(["echo", "hello"])
The output is now 'hello\n' (including a newline character) as I would expect.
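One caveat worth noting: check_output() raises subprocess.CalledProcessError when the command exits with a non-zero status, so wrap it if you want to handle failures:
import subprocess

try:
    output = subprocess.check_output(["echo", "hello"])
except subprocess.CalledProcessError as exc:
    print("command failed with exit code", exc.returncode)
else:
    print(output)  # b'hello\n' on Python 3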
I want to check the DNS value from my system.
If the command goes wrong, the error should be stored in a different variable.
This is what I have so far:
proc = subprocess.Popen(['echo', '"to stdout"'], stdout=subprocess.PIPE,)
stdout_value = proc.communicate()
print '\tstdout:', repr(stdout_value)
subprocess.call('echo #user', shell=True)
#subprocess.check_call('echo #HOME', shell=True)
You should try this: it captures the exit code, stdout and stderr of the command you pass as an argument:
import shlex
from subprocess import Popen, PIPE
def get_exitcode_stdout_stderr(cmd):
    """
    Execute the external command and get its exitcode, stdout and stderr.
    """
    args = shlex.split(cmd)
    proc = Popen(args, stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    exitcode = proc.returncode
    return exitcode, out, err
cmd = "..." # arbitrary external command, e.g. "python mytest.py"
exitcode, out, err = get_exitcode_stdout_stderr(cmd)
For your need, I think you can use a Python module to get what you want instead of shelling out to the command line. For example, to get your fully qualified domain name you can use:
socket.getfqdn()
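A usage sketch tying both suggestions together; the nslookup command string is only an illustrative example of an external DNS query, not something from the question:
import socket

# get_exitcode_stdout_stderr() is the helper defined above.
exitcode, out, err = get_exitcode_stdout_stderr("nslookup example.com")
if exitcode != 0:
    print("lookup failed:", err.decode())
else:
    print(out.decode())

# Or skip the external tool entirely and ask the socket module directly:
print(socket.getfqdn())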
I am trying to spawn a process using Popen and send a particular string to its stdin.
I have:
pipe = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE)
pipe.communicate( my_stdin_str.encode(encoding='ascii') )
pipe.stdin.close()
However, the second line actually escapes the whitespace in my_stdin_str. For example, if I have:
my_stdin_str="This is a string"
The process will see:
This\ is\ a\ string
How can I prevent this behaviour?
I can't reproduce it on Ubuntu:
from subprocess import Popen, PIPE
shell_cmd = "perl -pE's/.\K/-/g'"
p = Popen(shell_cmd, shell=True, stdin=PIPE)
p.communicate("This $PATH is a string".encode('ascii'))
In this case shell=True is unnecessary:
from subprocess import Popen, PIPE
cmd = ["perl", "-pE" , "s/.\K/-/g"]
p = Popen(cmd, stdin=PIPE)
p.communicate("This $PATH is a string".encode('ascii'))
Both produce the same output:
T-h-i-s- -$-P-A-T-H- -i-s- -a- -s-t-r-i-n-g-
Unless you know you need it for some reason, don't use shell=True in general (which, without testing, sounds like what is going on here).
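If the command originally arrived as a single string and that is what pushed you toward shell=True, shlex.split() (already used in an answer above) can turn it into an argument list safely; a minimal sketch:
import shlex
from subprocess import Popen, PIPE

cmd = shlex.split(r"perl -pE 's/.\K/-/g'")  # splits into ['perl', '-pE', 's/.\K/-/g'] with the quotes removed
p = Popen(cmd, stdin=PIPE)
p.communicate(b"This $PATH is a string")    # passed verbatim: no shell expansion or re-splitting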