I have a Flutter project called zed. My goal is to monitor the output of flutter run: each time I press r, more output appears.
To automate this workflow, my implementation is:
import subprocess
bash_commands = f'''
cd ../zed
flutter run --device-id web-server --web-hostname 192.168.191.6 --web-port 8352
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
output, err = process.communicate(bash_commands.encode('utf-8'))
print(output, err)
output, _ = process.communicate('r'.encode('utf-8'))
print(output)
It's not working as I expected: nothing is printed on the screen.
Use process.stdin.write() instead of process.communicate():
process.stdin.write(bash_commands.encode('utf-8'))
process.stdin.flush()
But why, you ask?
Popen.communicate(input=None, timeout=None)
Interact with process:
Send data to stdin. Read data from stdout and stderr, until
end-of-file is reached
https://docs.python.org/3/library/subprocess.html#subprocess.Popen.communicate
communicate(...) doesn't return until the pipe is closed, which typically happens when the subprocess exits. That's fine for ls -l, but not so good for long-running subprocesses.
Related
I have a python process running, having a logger object configured to print logs in a log file.
Now I am trying to call a Scala script through this Python process, using Python's subprocess module.
subprocess.Popen(scala_run_command, stdout=subprocess.PIPE, shell=True)
The issue is that whenever the Python process exits, it hangs the shell, which comes back to life only after explicitly running the stty sane command. My guess is that this is caused by the Scala script writing to the shell; something in its stdout causes the shell to lose its sanity.
For the same reason, I wanted the output of the Scala script to be captured in my default log file, which I have not managed to do in several attempts.
So the query boils down to: how do I get the stdout of a shell command run through the subprocess module into a log file? Even if there is a better way to achieve this than subprocess.run, I would love to hear the ideas.
The current state of code looks like this.
__echo_command = 'echo ":load %s"'
__spark_console_command = 'spark;'

def run_scala_script(self, script):
    echo_command = self.__echo_command % script
    spark_console_command = self.__spark_console_command
    echo_result = subprocess.run(echo_command, stdout=subprocess.PIPE, shell=True)
    result = subprocess.run(spark_console_command, stdout=subprocess.PIPE, shell=True, input=echo_result.stdout)
    logger.info('Scala script %s completed successfully' % script)
    logger.info(result.stdout)
Use
p = subprocess.Popen(...)
followed by
stdout, stderr = p.communicate()
and then stdout and stderr will contain the output bytes from the subprocess' output streams. You can then log the stdout value.
I am using the subprocess module from the Python 2.7 standard library to run a big Python file (execute.py) from another Python file (sample.py).
If I run sample.py (which contains the subprocess calls) from the Windows command line, it runs properly and streams the live output well.
But in the Python GUI console, when I run the GUI Python file (with the same subprocess calls), the GUI window is not responding for some minutes, and after a while the output is printed as a whole (not streamed).
Here is the snippet:
cmdlist = ["python", "execute.py","name","xyz"]
proc = subprocess.Popen(cmdlist, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(proc.stdout.readline, ""):
    self.output.write(line)
self.output.write("\n Finished process\n")
Hitting my head for a week and could not find any solution so far.
When you run the script from the command line, the output is sent directly to the terminal.
The Python GUI console (IDLE) is based on the tkinter toolkit, so when you run the script there,
the output is sent to a buffer. The GUI takes some time to draw the buffer to the screen, so showing the output takes longer than on the command line. If your script prints too much, the buffer overflows and the "Not responding" state occurs.
Do not set stdout or stderr if you want them to go to an outer (controlling) process.
cmdlist = ["python", "execute.py","name","xyz"]
proc = subprocess.Popen(cmdlist)
# do other stuff here if you like
self.output.write(
    "\n Finished process (%s)\n" % proc.wait()  # wait until the process has finished; it streams to stdout/stderr while we wait
)
If you set stdout or stderr to subprocess.PIPE your current (python) process creates a new pipe by which it communicates with the subprocess. Basically, your own process will buffer the subprocess' output.
Even if you don't explicitly buffer anything, python itself will do some buffering. This especially applies when you redirect stderr and write it to your own stdout - stdout is buffered by default, while stderr would not have been. Many programs also write to the same line and reset the cursor, thus doing proc.stdout.readline() will buffer, waiting for the line to finish.
Now, if you don't set stdout and stderr, the subprocess will inherit your stdout and stderr. This means it can write directly to the outer, controlling process.
I work in Unix, and I have a "general tool" that loads another process (GUI utility) on the background, and exits.
I call my "general tool" from a Python script, using Popen and proc.communicate() method.
My "general tool" runs for ~1 second, loads the GUI process on the background and exits immediately.
The problem is that proc.communicate() keeps waiting for the process, even though it has already terminated. I have to manually close the GUI (the subprocess running in the background) before proc.communicate() returns.
How can this be solved?
I need proc.communicate() to return once the main process terminates, not to wait for the subprocesses running in the background...
Thanks!!!
EDIT:
Adding some code snippets:
My "General Tool" last Main lines (Written in Perl):
if ($args->{"gui"}) {
    my $script_abs_path = abs_path($0);
    my $script_dir = dirname($script_abs_path);
    my $gui_util_path = $script_dir . "/bgutil";
    system("$gui_util_path $args->{'work_area'} &");
}
return 0;
My Python script that runs the "General Tool":
cmd = PATH_TO_MY_GENERAL_TOOL
proc = subprocess.Popen(cmd, shell = True, stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
stdout, dummy = proc.communicate()
exit_code = proc.returncode
if exit_code != 0:
    print 'The tool has failed with status: {0}. Error message is:\n{1}'.format(exit_code, stdout)
    sys.exit(1)
print 'This line is printed only when the GUI process is terminated...'
Don't use communicate. communicate() is explicitly designed to wait until the stdout of the process has closed. Presumably Perl is not closing stdout, as it leaves it open for its own subprocess to write to.
You also don't really need Popen, since you aren't really using its features: you create pipes, and then just reprint to stdout with your own message. And it doesn't look like you need a shell at all.
Try using subprocess.call or even subprocess.check_call.
eg.
subprocess.check_call(cmd)
No need to check the return value: check_call throws an exception (which contains the exit code) if the process returns a non-zero exit code. The output of the process is written directly to the controlling terminal, so there's no need to redirect it.
Finally, if cmd combines a path to an executable with its arguments, use shlex.split.
eg.
cmd = "echo whoop" # or cmd = "ls 'does not exist'"
subprocess.check_call(shlex.split(cmd))
Sample code to test with:
mypython.py
import subprocess, shlex
subprocess.check_call(shlex.split("perl myperl.pl"))
print("finishing top level process")
myperl.pl
print "starting perl subprocess\n";
my $cmd = 'python -c "
import time
print(\'starting python subprocess\')
time.sleep(3);
print(\'finishing python subprocess\')
" &';
system($cmd);
print "finishing perl subprocess\n";
Output is:
$ python mypython.py
starting perl subprocess
finishing perl subprocess
finishing top level process
$ starting python subprocess
finishing python subprocess
I'm trying to learn about the subprocess module and am therefore making an hlds server administrator.
My goal is to be able to start server instances and send all commands through dispatcher.py to administrate multiple servers, e.g. send commands to the subprocesses' stdin.
what I've got so far for some initial testing, but got stuck already :]
#dispatcher.py
import subprocess
RUN = '/home/daniel/hlds/hlds_run -game cstrike -map de_dust2 -maxplayers 11'
#RUN = "ls -l"
hlds = subprocess.Popen(RUN.split(), stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
print hlds.communicate()[0]
print hlds.communicate()[1]
hlds.communicate('quit')
I am not getting any stdout from the hlds server, but it works fine if I don't set stdout to PIPE. The hlds.communicate('quit') does not seem to reach the hlds process's stdin either. The ls -l command returns stdout correctly, but hlds does not.
All help appreciated! :)
See the Popen.communicate docs (emphasis mine):
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.
So you can only call communicate once per run of a process, since it waits for the process to terminate. That's why ls -l seems to work -- it terminates immediately, while hlds doesn't.
You'd need to do:
out, error = hlds.communicate('quit')
if you want to send in quit and get all output until it terminates.
If you need more interactivity, you'll need to use hlds.stdout, hlds.stdin, and hlds.stderr directly.
I'm trying to start a program (HandBrakeCLI) as a subprocess or thread from within Python 2.7. I have gotten as far as starting it, but I can't figure out how to monitor its stderr and stdout.
The program outputs its status (% done) and info about the encode to stderr and stdout, respectively. I'd like to be able to periodically retrieve the % done from the appropriate stream.
I've tried calling subprocess.Popen with stderr and stdout set to PIPE and using subprocess.communicate, but it sits and waits until the process is killed or complete, and only then retrieves the output. That doesn't do me much good.
I've got it up and running as a thread, but as far as I can tell I still have to eventually call subprocess.Popen to execute the program, and I run into the same wall.
Am I going about this the right way? What other options do I have, or how do I get this to work as described?
I have accomplished the same with ffmpeg. This is a stripped down version of the relevant portions. bufsize=1 means line buffering and may not be needed.
def Run(command):
    proc = subprocess.Popen(command, bufsize=1,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                            universal_newlines=True)
    return proc

def Trace(proc):
    while proc.poll() is None:
        line = proc.stdout.readline()
        if line:
            # Process output here
            print 'Read line', line
proc = Run([ handbrakePath ] + allOptions)
Trace(proc)
Edit 1: I noticed that the subprocess (HandBrake in this case) needs to flush after each line for this to work (ffmpeg does).
Edit 2: Some quick tests reveal that bufsize=1 may not be actually needed.