I have a script called 'my_script.py' with the following contents:
my_input = ''
while my_input != 'quit':
    my_input = raw_input()
    print(my_input)
Then in the console, the following commands:
from subprocess import *
p1 = Popen(['python', 'my_script.py'], stdin=PIPE)
p1.stdin.write('some words\n')
prints "some words", but if instead I write
from subprocess import *
p2 = Popen(['python', 'my_script.py'], stdin=PIPE, stdout=PIPE)
p2.stdin.write('some words\n')
p2.stdout.readline()
the shell will hang and I have to terminate it manually. How can I get this to work if I want to be able to access the stdout of the script? I'm using Python 2.7.
Edit: To clarify my question, the above snippet will run properly for other executables that have an I/O loop (the particular one I'm working with is the stockfish chess engine). Is there a way I can modify my_script.py such that the above snippet will run properly? Using
Popen(['python3', 'my_script.py'], ...)
will work, but is it not possible using Python 2.7?
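One common cause of this kind of hang (offered as an assumption, not a confirmed diagnosis) is that the Python 2 child block-buffers its stdout when it is connected to a pipe, so the parent's readline() waits for a line that is still sitting in the child's buffer. A minimal sketch of my_script.py with an explicit flush:
import sys

my_input = ''
while my_input != 'quit':
    my_input = raw_input()
    print(my_input)
    sys.stdout.flush()  # make the echoed line visible to the parent immediately
Alternatively, launching the child as Popen(['python', '-u', 'my_script.py'], ...) runs it with unbuffered streams.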
This can happen because of a deadlock when reading with readline().
You need to use the communicate() method, which reads the output without deadlocking.
Example:
import subprocess

def run_shell_command(cmd, params):
    cmdline = [cmd] + params.split(' ')
    p = subprocess.Popen(cmdline, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    if p.returncode != 0:
        raise RuntimeError("%r failed, status code %s stdout %r stderr %r" % (
            cmd, p.returncode, stdout, stderr))
    return stdout.strip()  # This is the stdout from the shell command
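For example (a sketch, assuming ls is available on the PATH):
print(run_shell_command('ls', '-l /tmp'))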
To run a command in the background (without waiting for it to finish), you can use the following:
def run_shell_remote_command_background(cmd, params):
    cmdline = [cmd] + params.split(' ')
    subprocess.Popen(cmdline)  # returns immediately; the child keeps running
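Usage would look like this (the command and its argument are just placeholders):
run_shell_remote_command_background('sleep', '30')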
I'm trying to work with PowerShell from Python, using the get-disk command.
I tried to capture stdout and print it, but its value is gone right after I use the communicate() function.
Here's my code:
proc = subprocess.Popen(r"C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe", stdin = subprocess.PIPE,
stdout = subprocess.PIPE)
stdout, stderr = proc.communicate('get-disk')
print stdout
Any suggestions?
You could try the following, which reads the standard output of your process when it is started with stdout=subprocess.PIPE:
proc = Popen('Some process', stdout=subprocess.PIPE)
stdout = proc.stdout.read()
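Applied to the PowerShell case above, that would look something like this (a sketch, assuming powershell.exe is on the PATH):
from subprocess import Popen, PIPE
proc = Popen(['powershell.exe', 'get-disk'], stdout=PIPE)
print proc.stdout.read()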
You can do it as shown below:
import subprocess
from subprocess import Popen, CREATE_NEW_CONSOLE

command = "powershell.exe get-disk"
# This will open the command in a new console
proc = Popen(command, creationflags=CREATE_NEW_CONSOLE, stdout=subprocess.PIPE)
read_stdout = []
for line in proc.stdout.readlines():
    read_stdout.append(line)
print read_stdout
I recommend proc.stdout.read() instead of proc.communicate().
What you want is something like this:
import subprocess
cmd = r"C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe get-disk"
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
print(proc.stdout.read())
proc.stdout.close()
I have a custom input method, and I have a Python module to communicate with it. I'm trying to control the shell with it, so that everything from local stdout is printed on the remote device and everything sent from the remote device goes into local stdin. That way the remote device controls the input given to the program: if the program asks for input, the remote device can answer it too (like in ssh).
I used the Python subprocess module to control stdin and stdout:
#! /usr/bin/python
from subprocess import Popen, PIPE
import thread
from mymodule import remote_read, remote_write
def talk2proc(dap):
    while True:
        try:
            remote_write(dap.stdout.read())
            incmd = remote_read()
            dap.stdin.write(incmd)
        except Exception as e:
            print(e)
            break

while True:
    cmd = remote_read()
    if cmd != 'quit':
        p = Popen(['bash', '-c', '"%s"' % cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)
        thread.start_new_thread(talk2proc, (p,))
        p.wait()
    else:
        break
But it doesn't work. What should I do?
P.S. Is there a difference on Windows?
I had this problem; I redirected a file into stdin:
from subprocess import call
call(['some_app', 'param'], stdin=open("a.txt", "rb"))
a.txt:
:q
I used this for a git wrapper; it feeds the data line by line whenever some_app pauses and expects user input.
There is a difference on Windows. This line won't work there:
p = Popen(['bash', '-c', '"%s"'%cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)
because the equivalent of 'bash' is 'cmd.exe'.
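A rough Windows equivalent (a sketch only; cmd.exe quoting rules differ from bash's, so the extra inner quotes are dropped here) would be:
p = Popen(['cmd.exe', '/c', cmd], stdout=PIPE, stdin=PIPE, stderr=PIPE)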
I'm working on a web interface for the Django migrate/makemigrations commands. I've been checking the code, and they use Python's input() to get the answers to the questions.
So far I have tried to use the Python subprocess library to answer from an external script (a Python script running another Python script).
I have generated a simple script that could be useful as a test:
sum.py
first_number = input("Welcome to the program that sum two numbers, \
please give me the first number: ")
second_number = input("Now, please give me the second number: ")
print("Congratulations, the result of the sum of %s and %s is: %s" %
(first_number, second_number, (first_number + second_number)))
And this is the only way I've found to make one script run the other:
from subprocess import Popen, PIPE
command = ["python", "sum.py"]
p = Popen(command, stdin=PIPE, stdout=PIPE)
p.stdin.write("1\n")
p.stdin.write("2\n")
print p.communicate()[0]
A few days ago I found some code on the internet that pings a host and prints its stdout almost in real time:
from subprocess import Popen, PIPE
command = ["ping", "192.168.1.137"]
p = Popen(command, stdout=PIPE, stdin=PIPE)
while p.poll() is None:
    print p.stdout.readline()
It prints lines like this:
64 bytes from 192.168.1.137: icmp_seq=7 ttl=64 time=1.655 ms
I've modified that code and tried to run my script with it:
from subprocess import Popen, PIPE
command = ["python", "sum.py"]
p = Popen(command, stdout=PIPE, stdin=PIPE)
response = "1\n"
while p.poll() is None:
    print p.stdout.readline()  # Obtain the message
    p.stdin.write(response)    # Answer the question
I've found some possible approaches with WebSockets and Node.js, such as tty.js or web-console, but they could be hard to maintain since I don't know those languages well.
What I'm looking for is a way to send the messages from stdout to an HTML page and get the responses back from the user.
I'm a bit lost, any suggestions will be appreciated.
Thanks.
I want to know whether subprocess.call() terminated correctly, without any error in the called command. For example, in the code below, if the path provided is not valid, the ls command gives an error like:
ERROR: No such file or directory.
I want that same output to be stored as a string.
import subprocess
path = raw_input("Enter the path")
subprocess.call(["ls","-l",path])
from subprocess import Popen, PIPE
p = Popen(["ls", "-l", path], stdin=PIPE, stdout=PIPE, stderr=PIPE)
output, err = p.communicate()
status = p.returncode
if status:
    # something went wrong
    pass
else:
    # we are ok
    pass
That said, consider using os.listdir instead.
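For example (a sketch; listing the directory directly raises an exception you can catch, instead of parsing ls output):
import os
try:
    print os.listdir(path)
except OSError as e:
    print "Error:", e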
You cannot do that with call, because what it does is only:
Run the command described by args. Wait for command to complete, then return the returncode attribute.
So you can only determine the return code of a program, which usually means zero if no error occurred, and non-zero otherwise.
Use the check_output method from the same module:
try:
    result = subprocess.check_output(["ls", "-l", path],
                                     stderr=subprocess.STDOUT)
    print result
except subprocess.CalledProcessError, e:
    print "Error:", e.output
Here is a working demo.
I have a Python (v3.3) script that runs other shell scripts. My Python script also prints messages like "About to run script X" and "Done running script X".
When I run my script I'm getting all the output of the shell scripts separate from my print statements. I see something like this:
All of script X's output
All of script Y's output
All of script Z's output
About to run script X
Done running script X
About to run script Y
Done running script Y
About to run script Z
Done running script Z
My code that runs the shell scripts looks like this:
print( "running command: " + cmnd )
ret_code = subprocess.call( cmnd, shell=True )
print( "done running command")
I wrote a basic test script and do *not* see this behaviour. This code does what I would expect:
print("calling")
ret_code = subprocess.call("/bin/ls -la", shell=True )
print("back")
Any idea on why the output is not interleaved?
Thanks. This works but has one limitation: you can't see any output until after the command completes. I found an answer to another question (here) that uses Popen but also lets me see the output in real time. Here's what I ended up with:
import subprocess
import sys
cmd = ['/media/sf_git/test-automation/src/SalesVision/mswm/shell_test.sh', '4', '2']
print('running command: "{0}"'.format(cmd)) # output the command.
# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, bufsize=1, universal_newlines=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(process.stdout.readline, ''):
    line = line.replace('\n', '')
    print(line)
    sys.stdout.flush()
process.wait() # Wait for the underlying process to complete.
errcode = process.returncode # Harvest its returncode, if needed.
print( 'Script ended with return code of: ' + str(errcode) )
This uses Popen and allows me to see the progress of the called script.
It has to do with STDOUT and STDERR buffering. You should be using subprocess.Popen to redirect STDOUT and STDERR from your child process into your application. Then, as needed, output them. Example:
import subprocess
cmd = ['ls', '-la']
print('running command: "{0}"'.format(cmd)) # output the command.
# Here, we join the STDERR of the application with the STDOUT of the application.
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = process.communicate() # Waits for the process to finish and captures its STDOUT/STDERR
errcode = process.returncode # Harvest its returncode, if needed.
print(out)
print('done running command')
Additionally, I wouldn't use shell=True unless it's really required, because it forces subprocess to start a whole shell just to run the command. It's usually better to pass the command as an argument list and, if you need to adjust environment variables, to set them through the env parameter of Popen.
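For example, a minimal sketch of running a command without shell=True while passing an extra environment variable (the MY_FLAG name is made up for illustration):
import os
import subprocess

env = dict(os.environ, MY_FLAG='1')  # MY_FLAG is hypothetical, just to show env handling
subprocess.call(['ls', '-la'], env=env)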