How to capture subprocess output in a variable [duplicate] - python

This question already has answers here:
Store output of subprocess.Popen call in a string [duplicate]
(15 answers)
Closed 1 year ago.
I'm trying to work with PowerShell from Python, specifically with the get-disk command.
I tried to print stdout, but its value is gone right after I use the communicate() function.
Here's my code:
import subprocess

proc = subprocess.Popen(r"C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe",
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)
stdout, stderr = proc.communicate('get-disk')
print(stdout)
Any suggestions?

You could try the following, which reads the standard output of your process after starting it with a piped stdout:
proc = Popen('some_command', stdout=subprocess.PIPE)
stdout = proc.stdout.read()

You can do this as shown below:
from subprocess import Popen, CREATE_NEW_CONSOLE, PIPE

command = "powershell.exe get-disk"
# This will open the command in a new console
proc = Popen(command, creationflags=CREATE_NEW_CONSOLE, stdout=PIPE)
read_stdout = []
for line in proc.stdout.readlines():
    read_stdout.append(line)
print(read_stdout)

I recommend proc.stdout.read() instead of proc.communicate().
What you want is something like this:
import subprocess
cmd = r"C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe get-disk"
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
print(proc.stdout.read())
proc.stdout.close()
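For completeness, on Python 3.5+ the usual way to capture a command's output in a variable is subprocess.run. A minimal sketch, using sys.executable in place of powershell.exe so it runs anywhere; on Windows you would pass the PowerShell path and get-disk instead:

```python
import subprocess
import sys

# Run a child process and capture its stdout as a string.
# sys.executable stands in for powershell.exe here so the
# sketch is portable; swap in the PowerShell path and
# "get-disk" on Windows.
result = subprocess.run(
    [sys.executable, "-c", "print('disk info')"],
    capture_output=True,  # collect stdout and stderr
    text=True,            # decode bytes to str
)
print(result.stdout)
```

capture_output=True with text=True avoids both the manual pipe wiring and the bytes/str decoding shown above.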


Subprocess.py issue when running "dir" in Windows [duplicate]

This question already has answers here:
Windows can't find the file on subprocess.call()
(7 answers)
Closed 4 months ago.
I'm playing around with subprocess.Popen for shell commands, as it seems to have more flexibility with regard to piping than subprocess.run.
I'm starting off with some simple examples, but I'm getting FileNotFoundError:
I was told that shell=True is not necessary if I pass the arguments as a proper list; however, that doesn't seem to work.
Here are my attempts:
import subprocess

p1 = subprocess.Popen(['dir'], stdout=subprocess.PIPE)
output = p1.communicate()[0]
p = subprocess.Popen(["dir", "c:\\Users"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
outputs = p.communicate()
Both are leading to FileNotFoundError
As dir is simply a command understood by cmd.exe (or powershell.exe), you could:
subprocess.Popen(["cmd", "/c", "dir", "c:\\Users"], stdout=subprocess.PIPE)
which corresponds to doing the following in a shell
C:\>cmd /c dir c:\Users
You may find you have to use the full path to cmd, i.e. c:\\Windows\\System32\\cmd.exe
Your problem is that dir is an internal Windows command, and Popen is looking for the name of an executable. You could try setting up a dir.bat file that runs the dir command to see if this works, or simply try any of the executables in \Windows\system32 instead.
Try this (on Windows):
import subprocess

file_name = "test.txt"
sp = subprocess.Popen(["cmd", "/c", "dir", "/s", file_name],
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output = sp.communicate()
output = output[0].decode()  # dir writes its listing to stdout
if file_name in output:
    print("yes")
else:
    print("no")
Note that the /s switch is kept but /p (pause per screen) is dropped: /p waits for a keypress, which hangs when output is piped.
On Linux, replace the subprocess call like this:
sp = subprocess.Popen(['find', '/', '-name', file_name], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
I think the key is in: stdout=subprocess.PIPE, stderr=subprocess.PIPE
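The common thread in these answers: a shell built-in has no executable on disk, so Popen must launch the shell itself, either explicitly ("cmd", "/c", ...) or via shell=True. A minimal sketch using echo, which is a built-in in both cmd.exe and sh:

```python
import subprocess

# shell=True hands the command string to cmd.exe on Windows or
# /bin/sh elsewhere, so shell built-ins like dir and echo resolve.
proc = subprocess.Popen("echo hello", shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(out.decode().strip())
```

Launching the shell explicitly with ["cmd", "/c", ...] (or ["sh", "-c", ...]) does the same thing while keeping the argument-list form.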

Python subprocess: multiple non-blocking communicates

I start a script with:
import subprocess

t = subprocess.Popen('rosrun ros_pkg ros_node', shell=True,
                     stdout=subprocess.PIPE,
                     stdin=subprocess.PIPE, universal_newlines=True)
I then want to communicate with that process like this:
stdout = t.communicate('new command')[0]
print(stdout)
if stdout == []:
    logic
stdout = t.communicate('new command')[0]
....
The problem is that after t.communicate() the subprocess closes.
There are solutions for similar problems, but nothing has worked for me yet. Please help.
Using t.communicate() will close the input pipe after sending the data, meaning it can only be called once in order to send something to the subprocess.
However, you can use t.stdin.write() to do sequential writes without closing the pipe, and then use t.stdout.readline() to get the output. These work the same way as the handles returned by open().
import subprocess

t = subprocess.Popen("cat", stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True, shell=True)
# These calls will have their results written to the output pipe
t.stdin.write('hi there ')
t.stdin.write('hi again\n')
# However, the input data is buffered first, so call flush() before reading
t.stdin.flush()
# Read one line from the pipe
a = t.stdout.readline()
print(a)
t.stdin.write('hi back at you\n')
t.stdin.flush()
a = t.stdout.readline()
print(a)
I now switched from using subprocess to using pexpect.
My syntax is now as follows:
child = pexpect.spawn('rosrun ros_pkg ros_node')
command = child.sendline('new command')
output = child.read_nonblocking(10000, timeout=1)
....
logic
....
command = child.sendline('new command')
output = child.read_nonblocking(10000, timeout=1)
Many thanks to novel_yet_trivial on reddit: https://www.reddit.com/r/learnpython/comments/2o2viz/subprocess_popen_multiple_times/

Why does subprocess hang when reading from stdout [duplicate]

This question already has answers here:
process.stdout.readline() hangs. How to use it properly?
(3 answers)
Closed 6 years ago.
I have a script called 'my_script.py' with the following contents:
my_input = ''
while my_input != 'quit':
    my_input = raw_input()
    print(my_input)
Then in the console, the following commands:
from subprocess import *
p1 = Popen(['python', 'my_script.py'], stdin=PIPE)
p1.stdin.write('some words\n')
prints "some words", but if instead I write
from subprocess import *
p2 = Popen(['python', 'my_script.py'], stdin=PIPE, stdout=PIPE)
p2.stdin.write('some words\n')
p2.stdout.readline()
the shell will hang and I have to terminate it manually. How can I get this to work if I want to be able to access the stdout of the script? I'm using Python 2.7.
Edit: To clarify my question, the above snippet runs properly for other executables that have an I/O loop (the particular one I'm working with is the Stockfish chess engine). Is there a way I can modify my_script.py so that the above snippet runs properly? Using
Popen(['python3', 'my_script.py'], ...)
will work, but is it not possible using Python 2.7?
It can happen due to a deadlock in readline().
You can use the communicate() method to read the output without deadlocking.
Example:
def run_shell_command(cmd, params):
    cmdline = [cmd] + params.split(' ')
    p = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    if p.returncode != 0:
        raise RuntimeError("%r failed, status code %s stdout %r stderr %r" % (
            cmd, p.returncode, stdout, stderr))
    return stdout.strip()  # This is the stdout from the shell command
To execute the command in the background you can use the following example:
def run_shell_remote_command_background(cmd, params):
    cmdline = [cmd] + params.split(' ')
    subprocess.Popen(cmdline)
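For the original question, the hang typically comes from buffering: when my_script.py's stdout is a pipe, Python 2's print does not flush per line, so the parent's readline() waits forever. Adding an explicit flush in the child fixes it. A sketch of the fixed round trip, with the child script inlined via -c for brevity:

```python
import subprocess
import sys

# The child echoes each line back and flushes explicitly, so the
# parent's readline() returns immediately instead of blocking on
# data stuck in the child's stdout buffer.
child_code = (
    "import sys\n"
    "for line in iter(sys.stdin.readline, ''):\n"
    "    sys.stdout.write(line)\n"
    "    sys.stdout.flush()\n"
)
p = subprocess.Popen([sys.executable, "-c", child_code],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True)
p.stdin.write("some words\n")
p.stdin.flush()
reply = p.stdout.readline()
print(reply)
p.stdin.close()  # EOF ends the child's loop
p.wait()
```

The same fix applied to my_script.py itself would be a sys.stdout.flush() after the print (or running the interpreter with -u).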

How to execute and save result of an OS command to a file [duplicate]

This question already has answers here:
How to redirect output with subprocess in Python?
(6 answers)
Closed 7 years ago.
In Python 2.7, I would like to execute an OS command (for example ls -l on UNIX) and save its output to a file. I don't want the execution results to show anywhere other than the file.
Is this achievable without using os.system?
Use subprocess.check_call, redirecting stdout to a file object:
from subprocess import check_call, STDOUT, CalledProcessError

with open("out.txt", "w") as f:
    try:
        check_call(['ls', '-l'], stdout=f, stderr=STDOUT)
    except CalledProcessError as e:
        print(e)
Whatever you want to do when the command returns a non-zero exit status should be handled in the except block. If you want one file for stdout and another for stderr, open two files:
from subprocess import check_call, CalledProcessError

with open("stdout.txt", "w") as f, open("stderr.txt", "w") as f2:
    try:
        check_call(['ls', '-l'], stdout=f, stderr=f2)
    except CalledProcessError as e:
        print(e)
Assuming you just want to run a command and have its output go into a file, you could use the subprocess module like:
subprocess.call("ls -l > /tmp/output", shell=True)
though that will not redirect stderr
You can open a file and pass it to subprocess.call as the stdout parameter and the output destined for stdout will go to the file instead.
import subprocess

with open("result.txt", "w") as f:
    subprocess.call(["ls", "-l"], stdout=f)
It won't catch any output to stderr, though; that would have to be redirected by passing a file to subprocess.call as the stderr parameter. I'm not certain if you can use the same file.
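On the last point: both streams can in fact share one file by passing stderr=subprocess.STDOUT, which merges stderr into whatever stdout points at. A minimal sketch:

```python
import subprocess

# stderr=subprocess.STDOUT merges the error stream into the
# stdout target, so both streams land in the same file.
with open("result.txt", "w") as f:
    subprocess.call(["ls", "-l"], stdout=f, stderr=subprocess.STDOUT)

with open("result.txt") as f:
    contents = f.read()
print(contents)
```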

How to check if subprocess terminated properly? [duplicate]

This question already has answers here:
Retrieving the output of subprocess.call() [duplicate]
(7 answers)
Closed 8 years ago.
I want to know if subprocess.call() has terminated correctly, without any error in the called command. For example, in the code below, if the path provided is not valid, the ls command gives an error such as:
ls: No such file or directory
I want that same output to be stored as a string.
import subprocess
path = raw_input("Enter the path")
subprocess.call(["ls","-l",path])
from subprocess import Popen, PIPE

p = Popen(["ls", "-l", path], stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, err = p.communicate()
status = p.returncode
if status:
    # something went wrong
    pass
else:
    # we are ok
    pass
Although consider using os.listdir
You cannot do that with call, because all it does is:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
So you can only determine the return code of a program, which is usually zero if no error occurred and non-zero otherwise.
Use the check_output function from the same module:
try:
    result = subprocess.check_output(["ls", "-l", path],
                                     stderr=subprocess.STDOUT)
    print(result)
except subprocess.CalledProcessError as e:
    print("Error:", e.output)
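On Python 3.5+, the same pattern is shorter with subprocess.run: check=True raises CalledProcessError on a non-zero exit, and stderr=subprocess.STDOUT captures the error text alongside normal output. A minimal sketch, using a deliberately bad path to exercise the error branch:

```python
import subprocess

# A path that does not exist triggers the error branch; ls's
# message is captured because stderr is merged into stdout.
try:
    result = subprocess.run(
        ["ls", "-l", "/no/such/path"],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        check=True, text=True,
    )
    output = result.stdout
except subprocess.CalledProcessError as e:
    output = e.output  # the error message, as a string
print("Error:", output.strip())
```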
