subprocess is not working while reading pipe - python

I am developing and using a QGIS plugin myself.
The plugin needs logic to check that the user has Java installed.
So I try to run java -version and capture the output it produces.
However, the Java version is not printed.
Here is my source:
try:
    check_process = subprocess.Popen(["java", "-version", "2>&1"], stderr=subprocess.PIPE)
    check_process = check_process.communicate()
    # this is the print function
    QgsMessageLog.logMessage(str(check_process), tag="Validating", level=QgsMessageLog.INFO)
except Exception as e:
    QgsMessageLog.logMessage(str(e), tag="Validating", level=QgsMessageLog.INFO)
    return
and the result is:
2018-09-21T09:36:21 0 (None, '')
If you have any idea, I would appreciate your advice. Thank you.

Question: subprocess is not working
You are using 2>&1; this is shell syntax and will not work unless you use shell=True.
You are right to redirect stderr to stdout, as java -version writes to stderr.
Do this, for example (note the differences from yours: a plain string instead of a list, and stdout=):
check_process = subprocess.Popen("java -version 2>&1", shell=True, stdout=subprocess.PIPE)
While this gives me the expected output, you get (None, '') using:
check_process = subprocess.Popen(["java", "-version", "2>&1"], stderr=subprocess.PIPE)
The first element of the tuple is the stdout output, which is None because you did not ask Popen to capture stdout.
The second element is the stderr output, which is an empty string.
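If you would rather avoid shell=True, a minimal sketch that captures stderr directly (assuming only that java is on the PATH; in the plugin you would log err instead of printing it):
import subprocess

check_process = subprocess.Popen(["java", "-version"],
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE,
                                 universal_newlines=True)
out, err = check_process.communicate()
# java -version writes its version banner to stderr, so look there
print(err)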
For testing purposes, try this inside QGIS:
result = subprocess.check_output(["echo", "Hello World!"])
print(result)

Related

Python subprocess.Popen() not running command

I'm trying to use subprocess.Popen() to run a command in my script. The code is:
output = Popen(["hrun DAR_MeasLogDump " + log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE, executable="/bin/csh", cwd=cwdir, encoding='utf-8')
When I print the output, it's printing out the created shell output and not the actual command that's in the list. I tried getting rid of executable='/bin/csh', but then Popen wouldn't even run.
I also tried using subprocess.communicate(), but it didn't work either. I would also get the shell output and not the actual command run.
I want to completely avoid using shell=True because of security issues.
EDIT: In my many different attempts, "hrun" is not being recognized. "hrun" is a Perl script that is being called, DAR_MeasLogDump is the action, and log_file_name is the file that the script will run its action on. Is there any sort of setup or configuration that needs to be done for "hrun" to be recognized?
I think the problem is that Popen requires a list containing every part of the command (command + options); the documentation for Popen inside subprocess has an example of that. So for that line in your script to work, you would need to write it like this:
output = Popen(["/bin/csh", "hrun", "DAR_MeasLogDump", log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE)
I've removed the executable argument, but I guess it could work that way as well.
Try:
output = Popen(["-c", "hrun DAR_MeasLogDump " +log_file_name], stdout=subprocess.PIPE, stderr = subprocess.PIPE, executable="/bin/csh", cwd=cwdir, encoding='utf-8')
csh is expecting -c "full command here". Without -c I think it just tries to open it as a file.
Specifying an odd shell and an explicit cwd seems completely out of place here (assuming cwdir is defined as the current directory).
If the first argument to subprocess is a list, no shell is involved.
result = subprocess.run(["hrun", "DAR_MeasLogDump", log_file_name],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True, check=True)
output = result.stdout
If you need this to be run under a legacy version of Python, maybe use check_output instead of run.
You generally want to avoid Popen unless you need to do something which the higher-level wrapper functions cannot do.
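For example, a sketch of that check_output variant, reusing hrun and log_file_name from the question (not verified against the actual hrun script):
output = subprocess.check_output(["hrun", "DAR_MeasLogDump", log_file_name],
                                 stderr=subprocess.STDOUT,
                                 universal_newlines=True)
# raises subprocess.CalledProcessError if hrun exits with a non-zero status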
You are creating an instance of subprocess.Popen but never reading its output.
You should try:
p = Popen(["hrun", "DAR_MeasLogDump", log_file_name], stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwdir, encoding='utf-8')
out, err = p.communicate()  # this will get you the output
Args should be passed as a sequence if you do not use shell=True, and then using executable should not be required.
Note that if you are not using advanced features from Popen, the doc recommends using subprocess.run:
from subprocess import run

p = run(["hrun", "DAR_MeasLogDump", log_file_name], capture_output=True, cwd=cwdir, encoding='utf-8')
out, err = p.stdout, p.stderr  # this will get you the output
This works with a cat example:
import subprocess

log_file_name = '-123.txt'
output = subprocess.Popen(['cat', 'DAR_MeasLogDump' + log_file_name],
                          stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT)
stdout, stderr = output.communicate()
print(stdout)
print(stderr)
I think you only need to change this to your 'hrun' command.
It seems like the same problem I had at the beginning of a project: it comes down to Windows environment variables. When you open CMD or PowerShell, it does not recognize perl, java, etc. unless you go to the folder where the .exe, .py, .java, etc. lives and open the cmd there.
In my ADB project, once I added the folder to my environment variables, I no longer needed to go to the folder where the .exe, .py, or adb binary was located.
Now you can open a CMD and it will execute any command, even your perl script, because the interpreter that PowerShell uses will find and recognize the command.
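A related, minimal sketch: you can check from Python whether a command is resolvable on PATH at all with shutil.which (Python 3.3+); hrun here is just the command from the question:
import shutil

hrun_path = shutil.which("hrun")  # full path if found on PATH, otherwise None
if hrun_path is None:
    print("hrun is not on PATH; add its folder to the PATH environment variable")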

subprocess library didn't work correctly for `py setup.py py2exe` command

I tried to write code that can execute Python scripts easily.
But when I used the subprocess library like this:
import subprocess
print(subprocess.Popen("py setup.py install", shell = True, stdout = subprocess.PIPE).stdout.read())
print(subprocess.Popen("py setup.py py2exe", shell = True, stdout = subprocess.PIPE).stdout.read())
I just saw this result:
b''
Please help me.
Most likely the commands you are trying to run are producing output on stderr, which your code does not display. It is possible to send the stderr messages to stdout if you don't want to handle them separately.
I'll use a different command in the subprocess that is relatively safe. And I will break it up a little instead of having one long line.
import subprocess

p = subprocess.Popen("python filedoesntexist",
                     shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
print(p.stdout.read())
Note that I added the parameter stderr=subprocess.STDOUT; this sends all the error messages to stdout. The subprocess tries to run "python filedoesntexist", and since filedoesntexist is a file that doesn't exist, it will print this message:
b"python: can't open file 'filedoesntexist': [Errno 2] No such file or directory\n"
But you might just want to get the string instead of bytes, and you can add the parameter universal_newlines=True like this:
p = subprocess.Popen("python filedoesntexist",
                     shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT,
                     universal_newlines=True)
print(p.stdout.read())
Now it prints just the string like this:
python: can't open file 'filedoesntexist': [Errno 2] No such file or directory
For additional information, visit the python documentation
Edit
The documentation recommends using run(), which can be done like this (updated after comments from J.F. Sebastian):
subprocess.run(["python", "filedoesntexist"])
If you need to handle stdout in some way, add parameters described earlier in the Popen examples.
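For instance, a small sketch that captures the combined output with run(), using the same placeholder command as above:
result = subprocess.run(["python", "filedoesntexist"],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        universal_newlines=True)
print(result.stdout)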

Can only get first line of stderr

I want to launch a process and retrieve its stdout and stderr.
I don't really care about getting this in real time.
I wanted to use subprocess.check_output(), but the process might fail.
After reading StackOverflow and the Python docs I added a try .. catch block:
def execute(cmd, timeinsec=60):
    print("execute ", cmd, " with time out ", timeinsec)
    try:
        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, timeout=timeinsec, universal_newlines=True)
    except subprocess.TimeoutExpired:
        print("timeout expired")
        return "", 2
    except subprocess.CalledProcessError:
        print("called process failed")
        return "", 1
    print('The command returned the following back to python:' + output)
    return output, 0
But when I print the output with output.decode('utf-8')
I just get the first line of the output.
Note: I'm running this in the MSys environment distributed with msysgit 1.8 on Windows.
Do you have any idea of what can be wrong?
Do you know any better way to do this?
You must be using Python 3.x. Please tag your question accordingly. Also, I am not sure why you are calling the read() method on output. The output is a byte string and does not have a read() method. The following code works for me:
#! /usr/bin/env python3
import subprocess

try:
    retcode = 0
    cmd = ["/bin/ls", "/usr/local"]
    output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    output = e.output
    retcode = e.returncode

print(output.decode('utf-8'))
print(retcode)
The output is:
bin
etc
games
include
lib
man
sbin
share
src
0
If I trigger an error by replacing /usr/local with /usr/localfoo (which does not exist), then the output is:
/bin/ls: cannot access /usr/localfoo: No such file or directory
2
Finally, you can add universal_newlines=True to the check_output() call and not have to worry about calling decode() on the output:
...
output = subprocess.check_output(cmd, stderr=subprocess.STDOUT,universal_newlines=True)
...
...
print(output)
Please take the above example and see if you can make it reproduce your problem. If you can reproduce the problem, please post your code, its output, and all error messages (copy, paste, and reformat for SO).
Solution
The problem was that the Windows application launched by the subprocess (built with Visual Studio) was allocating its own console, and its stdout had already been redirected elsewhere; only one print went to the original cout before this redirection.

Python - pipelining subprocess in Windows

I'm using Windows 7, and I've tried this under Python 2.6.6 and Python 3.2.
So I'm trying to call this command line from Python:
netstat -ano | find ":80"
under Windows cmd, this line works perfectly fine.
So,
1st attempt:
output = subprocess.Popen(
    [r'netstat -ano | find ":80"'],
    stdout=subprocess.PIPE,
    shell=True
).communicate()
An error is raised because 'find' did not receive a correct parameter (it effectively received 'find ":80" \'):
Access denied - \
2nd attempt:
# calling netstat
cmd_netstat = subprocess.Popen(
    ['netstat', '-ano'],
    stdout=subprocess.PIPE
)

# pipelining the netstat result into find
cmd_find = subprocess.Popen(
    ['find', '":80"'],
    stdin=cmd_netstat.stdout,
    stdout=subprocess.PIPE
)
Again, the same error is raised.
Access denied - \
What did I do wrong? :(
EDIT:
3rd attempt (As #Pavel Repin suggested):
cmd_netstat = subprocess.Popen(
    ['cmd.exe', '-c', 'netstat -ano | find ":80"'],
    stdout=subprocess.PIPE
).communicate()
Unfortunately, subprocess with ['cmd.exe', '-c'] results in something resembling a deadlock or a blank cmd window. I assume '-c' is ignored by cmd, so communicate() waits indefinitely for cmd to terminate. Since this is Windows, my best bet is that cmd only accepts parameters starting with a slash (/). So I substituted '-c' with '/c':
cmd_netstat = subprocess.Popen(
    ['cmd.exe', '/c', 'netstat -ano | find ":80"'],
    stdout=subprocess.PIPE
).communicate()
And...back to the same error:
Access denied - \
EDIT:
I gave up, I'll just process the string returned by 'netstat -ano' in Python. Might this be a bug?
What I suggest is that you do as much as possible inside the Python code. So, you can execute the following command:
# executing the command
import subprocess
output = subprocess.Popen(['netstat', '-ano'], stdout=subprocess.PIPE).communicate()
and then by parsing the output:
# filtering the output
valid_lines = [ line for line in output[0].split('\r\n') if ':80' in line ]
You will get a list of lines. On my computer, the output looks like this for port number 1900 (no HTTP connection active):
[' UDP 127.0.0.1:1900 *:* 1388', ' UDP 192.xxx.xxx.233:1900 *:* 1388']
In my opinion, this is easier to work with.
Note that:
the shell=True option is not mandatory, but a command-line window is briefly opened and closed. See what suits you best, but take care about command injection;
the list of Popen arguments should be a list of strings. Quoting the list parts is not necessary; subprocess takes care of that for you.
Hope this helps.
EDIT: oops, I missed the last line of the edit. Seems you've already got the idea on your own.
So I revisited this question and found two solutions (I switched to Python 2.7 some time ago, so I'm not sure about Python 2.6, but it should be the same):
Replace find with findstr, and remove the double quotes
output = subprocess.Popen(['netstat', '-ano', '|', 'findstr', ':80'],
                          stdout=subprocess.PIPE,
                          shell=True).communicate()
But this doesn't explain why "find" cannot be used, so:
Use a string parameter instead of a list
output = subprocess.Popen('netstat -ano | find ":80"',
                          stdout=subprocess.PIPE,
                          shell=True).communicate()
or
pipeout = subprocess.Popen(['netstat', '-ano'],
                           stdout=subprocess.PIPE)
output = subprocess.Popen('find ":80"',
                          stdin=pipeout.stdout,
                          stdout=subprocess.PIPE).communicate()
The problem arises from the fact that ['find', '":80"'] is actually translated into ['find', '\":80\"'].
Thus the following command is executed in Windows command shell:
>find \":80\"
Access denied - \
Proof:
Running:
output = subprocess.Popen(['echo', 'find', '":80"'],
                          stdout=subprocess.PIPE,
                          shell=True).communicate()
print output[0]
returns:
find \":80\"
Running:
output = subprocess.Popen('echo find ":80"',
                          stdout=subprocess.PIPE,
                          shell=True).communicate()
print output[0]
returns:
find ":80"
New answer, after reading this old question again: this may be due to the two following facts:
The pipe operator executes the following commands in a sub-shell (see for instance this interesting consequence).
Python itself uses the pipe as a way to get the results back:
Note that (...) to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.
Not sure if this 'conflict' is kind of a bug, or a design choice though.
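For completeness, the subprocess documentation ("Replacing shell pipeline") shows how to wire two Popen objects together without any shell; here is a sketch adapted to this example, keeping the string form for find so its quotes survive:
import subprocess

p_netstat = subprocess.Popen(['netstat', '-ano'], stdout=subprocess.PIPE)
p_find = subprocess.Popen('find ":80"',
                          stdin=p_netstat.stdout,
                          stdout=subprocess.PIPE)
p_netstat.stdout.close()  # recommended by the docs so the first process can exit if the second one does
output = p_find.communicate()[0]
print(output)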

What's a good equivalent to subprocess.check_call that returns the contents of stdout?

I'd like a good method that matches the interface of subprocess.check_call -- i.e., it throws CalledProcessError when it fails, is synchronous, etc. -- but instead of returning the return code of the command (if it even does that), returns the program's output, either only stdout, or a tuple of (stdout, stderr).
Does somebody have a method that does this?
Python 2.7+
from subprocess import check_output as qx
Python < 2.7
From subprocess.py:
import subprocess

def check_output(*popenargs, **kwargs):
    if 'stdout' in kwargs:
        raise ValueError('stdout argument not allowed, it will be overridden.')
    process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
    output, unused_err = process.communicate()
    retcode = process.poll()
    if retcode:
        cmd = kwargs.get("args")
        if cmd is None:
            cmd = popenargs[0]
        raise subprocess.CalledProcessError(retcode, cmd, output=output)
    return output

class CalledProcessError(Exception):
    def __init__(self, returncode, cmd, output=None):
        self.returncode = returncode
        self.cmd = cmd
        self.output = output

    def __str__(self):
        return "Command '%s' returned non-zero exit status %d" % (
            self.cmd, self.returncode)

# overwrite CalledProcessError because the `output` keyword might not be available
subprocess.CalledProcessError = CalledProcessError
See also Capturing system command output as a string for another example of possible check_output() implementation.
I cannot get formatting in a comment, so this is in response to J.F. Sebastian's answer.
I found this very helpful, so I figured I would add to it. I wanted to be able to work seamlessly in the code without checking the version. This is what I did:
I put the code above into a file called 'subprocess_compat.py'. Then in the code where I use subprocess I did:
import sys
import subprocess

if sys.version_info < (2, 7):
    import subprocess_compat
    subprocess.check_output = subprocess_compat.check_output
Now anywhere in the code I can just call 'subprocess.check_output' with the params I want and it will work regardless of which version of python I am using.
After I read this twice, I realized it's ten years old and most answers apply to the now deprecated python2.7 rather than python3.
Now that we are - or should be - on python3, it seems that the best option for python >= 3.7 is to use the following as is mentioned in multiple comments:
result = subprocess.run(..., check=True, capture_output=True)
To save you searching for more details, I recommend the wonderfully detailed answer by SethMMorton to "How to suppress or capture the output of subprocess.run()?" As described there, you can access stdout and stderr directly as:
print(result.stdout)
print(result.stderr)
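If you also want check_call-style failure behavior, here is a minimal sketch combining check=True with output capture (the ls command is just a placeholder):
import subprocess

try:
    result = subprocess.run(["ls", "-l", "/dev/null"],
                            check=True, capture_output=True, text=True)
    print(result.stdout)
except subprocess.CalledProcessError as e:
    # whatever the command printed before failing is on e.stdout / e.stderr
    print(e.returncode, e.stderr)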
If you need to support Python 3.6:
You can however easily "emulate" this by setting both stdout and stderr to PIPE:
from subprocess import PIPE
subprocess.run(["ls", "-l", "/dev/null"], stdout=PIPE, stderr=PIPE)
This info is from
Willem Van Onsem's answer to a related question.
I tend to go straight to https://docs.python.org/3/library/subprocess.html to refresh my memory on general subprocess things. (The SO examples are often easier for me to access quickly though.)
This function returns the terminal output as a string (use .splitlines() if you want a list of lines).
import subprocess

def output(cmd):
    cmd_l = cmd.split()
    output = subprocess.Popen(cmd_l, stdout=subprocess.PIPE).communicate()[0]
    output = output.decode("utf-8")
    return output
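A usage sketch for the helper above (the echo command is only an illustration and assumes a POSIX system); note that cmd.split() does naive whitespace splitting, so shlex.split() would be safer for commands with quoted arguments:
print(output("echo hello"))               # "hello\n"
print(output("echo hello").splitlines())  # ["hello"]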
