Get print() realtime output with subprocess - python

I want to execute a Python file from another Python file and show all print() outputs and error outputs without waiting (realtime).
The simplified version of my code is as follows and I would like to show "start" and an error message without waiting for "end" (the end of the script).
def main():
    # Function that takes a long time (in my actual code)
    x += 1  # this raises an error

if __name__ == "main":
    print("start")
    main()
    print("end")
I also have run.py:
import subprocess

def run():
    subprocess.run(["python", "main.py"])

if __name__ == '__main__':
    run()
I tried this blog post and several other similar answers on Stack Overflow, but none of them worked, so I decided to post my original code here, above.

The following seems to work for me (on Windows). It uses subprocess.Popen() to execute the other script because that gives more control over what goes on. It turns buffering off to eliminate any delays buffering might cause, and it redirects stderr to stdout so all output can be retrieved from a single source. Also note it includes the correction @Ketan Mukadam mentions in his answer regarding the value of __name__ in your first script.
main_script.py:
def main():
    # Function that takes a long time (in my actual code)
    x += 1  # this raises an error

if __name__ == '__main__':
    print("start")
    main()
    print("end")
run.py:
import subprocess
import sys

def run():
    kwargs = dict(bufsize=0,  # No buffering.
                  stdout=subprocess.PIPE,
                  stderr=subprocess.STDOUT,  # Redirect stderr to stdout.
                  universal_newlines=True)
    args = [sys.executable, 'main_script.py']
    with subprocess.Popen(args, **kwargs).stdout as output:
        for line in output:
            print(line, end='')  # Process the output...

if __name__ == '__main__':
    run()
Output from executing run.py:
start
Traceback (most recent call last):
  File "main_script.py", line 10, in <module>
    main()
  File "main_script.py", line 6, in main
    x += 1 # this raises an error
UnboundLocalError: local variable 'x' referenced before assignment

Is this line a mistake?
if __name__ == "main":
The interpreter sets __name__ to '__main__', not 'main'. It is possible that because of this typo no code is running from the main script at all. Try executing the main script directly from the command shell first.
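For illustration, here is a minimal check (demo_name.py is just a hypothetical file name) showing the value the interpreter actually assigns:

# demo_name.py -- hypothetical demo of the __name__ value
print(repr(__name__))  # prints '__main__' when run as `python demo_name.py`,
                       # and 'demo_name' when imported via `import demo_name`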

Related

Multiprocessing error when getting key input with inputs

I am using the inputs module to get key input in Python. When I run this code below
import inputs

events = inputs.get_key()

if __name__ == '__main__':
    freeze_support()
I get this error:
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
Exception ignored in: <bound method InputDevice.__del__ of inputs.Keyboard("/dev/input/by-id/usb-A_Nice_Keyboard-event-kbd")>
Traceback (most recent call last):
  File "C:\Users\26099\AppData\Local\Programs\Python\Python36\lib\site-packages\inputs.py", line 2541, in __del__
  File "C:\Users\26099\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 113, in terminate
AttributeError: 'NoneType' object has no attribute 'terminate'
This only happens when I run it in a Python file. If I run it in the Python shell, I don't get this error.
freeze_support() has to be properly imported and be the first thing run. That would look like this:
from multiprocessing import freeze_support
import inputs

freeze_support()

if __name__ == '__main__':
    events = inputs.get_key()
The answer given by Nathaniel Taulbut was what I needed, but it didn't run in a loop, which I also needed, so I changed the code a bit to run in a loop.
from multiprocessing import freeze_support
import inputs

freeze_support()

def Keys():
    if __name__ == '__main__':
        while True:
            events = inputs.get_key()
            for event in events:
                print(event.code)

Keys()
As far as I have tested, it works, but is there a way I could get this code working without if __name__ == '__main__'?

Get output of subprocess after Keyboard Interrupt

I want to store the last output of a subprocess in a variable after the user performs a keyboard interrupt. My problem is mainly with a subprocess that never ends, i.e. tail in my example below. Here is my code:
class Testclass:
    def Testdef(self):
        try:
            global out
            print "Tail running"
            tail_cmd = 'tail -f log.Reconnaissance'
            proc = subprocess.Popen([tail_cmd], stdout=subprocess.PIPE, shell=True)
            (out, err) = proc.communicate()
        except KeyboardInterrupt:
            print("KeyboardInterrupt received, stopping…")
        finally:
            print "program output:", out

if __name__ == "__main__":
    app = Testclass()
    app.Testdef()
Below is its output, which I don't understand at this moment.
Tail running
program output:
Traceback (most recent call last):
  File "./2Test.py", line 19, in <module>
    app.Testdef()
  File "./2Test.py", line 15, in Testdef
    print "program output:", out
NameError: global name 'out' is not defined
out not being defined indicates that proc.communicate() did not return any values; otherwise it would have populated your tuple (out, err). The next step is to find out whether the communicate() method was ever going to return or whether, more likely, your keyboard interrupt simply killed it, thus preventing out from being defined.
I assume you imported the subprocess module, but make sure you do that first. I rewrote your program without using global out or the try statements.
import subprocess

class Testclass:
    def __init__(self, out):  # allows you to pass in the value of out
        self.out = out        # makes out a member of this class
    def Testdef(self):
        print("Tail running")
        tail_cmd = 'tail -f log.Reconnaissance'
        proc = subprocess.Popen([tail_cmd], stdout=subprocess.PIPE, shell=True)
        # Perhaps this is where you want to implement the try:
        (self.out, err) = proc.communicate()
        # and here the except:
        # and here the finally:

if __name__ == "__main__":
    app = Testclass(1)  # pass 1 (or anything for testing) to the out variable
    app.Testdef()
    print('%r' % app.out)  # print the contents of the out variable
    # I get an empty string, ''
So, as-is, this program runs once and there is nothing in out. I believe that to create a meaningful example of the user performing a keyboard interrupt, we need the program to be doing something that can be interrupted. Maybe I can provide an example in the future...
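In the meantime, here is a minimal sketch of one interruptible version (Python 3 shown, while the question uses Python 2; it assumes the same log.Reconnaissance file): read tail -f line by line and remember the most recent line, so a Ctrl-C still leaves the last output in a variable.

import subprocess

def tail_until_interrupt(path):
    'Stream tail -f output, remembering the last line seen.'
    last_line = None
    proc = subprocess.Popen(['tail', '-f', path],
                            stdout=subprocess.PIPE,
                            universal_newlines=True)
    try:
        for line in proc.stdout:  # blocks per line, not per process
            last_line = line.rstrip('\n')
    except KeyboardInterrupt:
        proc.terminate()  # stop tail once the user hits Ctrl-C
    return last_line

if __name__ == '__main__':
    print('last output:', tail_until_interrupt('log.Reconnaissance'))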

Call a module's multiprocessing function from a script

My module, which is also a script, calls some internally defined functions that use multiprocessing.
Running the module as a script works just fine on Windows and Linux. Calling its main function from another Python script works fine on Linux but not on Windows.
The core, multi-processed function (the function passed to the multiprocessing.Process constructor as the target) never gets executed when my module calls the Process's start() function.
The module must be doing something too demanding for this usage (multiprocessing on Windows when called from a script), but how can I get to the source of this problem?
Here's some example code to demonstrate the behavior. First the module:
# -*- coding: utf-8 -*-
'my_mp_module.py'
import argparse
import itertools
import Queue
import multiprocessing

def meaty_function(**kwargs):
    'Do a meaty calculation using multiprocessing'
    task_values = kwargs['task_values']
    # Set up a queue of tasks to perform, one for each element in the task_values array
    in_queue = multiprocessing.Queue()
    out_queue = multiprocessing.Queue()
    reduce(lambda a, b: a or b,
           itertools.imap(in_queue.put, enumerate(task_values)))
    core_procargs = (
        in_queue,
        out_queue,
    )
    core_processes = [multiprocessing.Process(target=_core_function,
                                              args=core_procargs) for ii in xrange(len(task_values))]
    for p in core_processes:
        p.daemon = True  # I've tried both ways, setting this to True and False
        p.start()
    sum_of_results = 0
    for result_count in xrange(len(task_values)):
        a_result = out_queue.get(block=True)
        sum_of_results += a_result
    for p in core_processes:
        p.join()
    return sum_of_results

def _core_function(inp_queue, out_queue):
    'Perform the core calculation for each task in the input queue, placing the results in the output queue'
    while 1:
        try:
            task_idx, task_value = inp_queue.get(block=False)
            # Perform a calculation with this task value.
            task_result = task_idx + task_value  # The real calculation is more complicated than this
            out_queue.put(task_result)
        except Queue.Empty:
            break

def get_command_line_arguments(command_line=None):
    'parse the given command_line (list of strings) or from sys.argv, return the corresponding argparse.Namespace object'
    aparse = argparse.ArgumentParser(description=__doc__)
    aparse.add_argument('--task_values', '-t',
                        action='append',
                        type=int,
                        help='''The value for each task to perform.''')
    return aparse.parse_args(args=command_line)

def main(command_line=None):
    'perform a meaty calculation with the input from the command line, and print the results'
    # collect input from the command line
    args = get_command_line_arguments(command_line)
    keywords = vars(args)
    # perform a meaty calculation with the input
    meaty_results = meaty_function(**keywords)
    # display the results
    print(meaty_results)

if __name__ == '__main__':
    multiprocessing.freeze_support()
    main(command_line=None)
Now the script that calls the module:
# -*- coding: utf-8 -*-
'my_mp_script.py'
import my_mp_module
import multiprocessing

multiprocessing.freeze_support()
my_mp_module.main(command_line=None)
Running the module as a script gives the expected results:
C:\Users\greg>python -m my_mp_module -t 0 -t 1 -t 2
6
But running another script that simply calls the module's main() function gives an error message under Windows (here I stripped out the error message duplicated from each of the multiple processes):
C:\Users\greg>python my_mp_script.py -t 0 -t 1 -t 2
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 380, in main
    prepare(preparation_data)
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 510, in prepare
    '__parents_main__', file, path_name, etc
  File "C:\Users\greg\Documents\PythonCode\Scripts\my_mp_script.py", line 7, in <module>
    my_mp_module.main(command_line=None)
  File "C:\Users\greg\Documents\PythonCode\Lib\my_mp_module.py", line 72, in main
    meaty_results = meaty_function(**keywords)
  File "C:\Users\greg\Documents\PythonCode\Lib\my_mp_module.py", line 28, in meaty_function
    p.start()
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 258, in __init__
    cmd = get_command_line() + [rhandle]
  File "C:\Users\greg\AppData\Local\Continuum\anaconda2-64\lib\multiprocessing\forking.py", line 358, in get_command_line
    is not going to be frozen to produce a Windows executable.''')
RuntimeError:
    Attempt to start a new process before the current process
    has finished its bootstrapping phase.

    This probably means that you are on Windows and you have
    forgotten to use the proper idiom in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce a Windows executable.
Linux and Windows work a little differently in the way they create additional processes. Linux forks the running process, but Windows creates a new Python interpreter to run the spawned process. The effect here is that all your code gets re-loaded just as if it were being imported for the first time. There is a similar question that might be informative to look at: How to stop multiprocessing in python running for the full script.
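To see the re-import behaviour directly, here is a hypothetical sketch (spawn_demo.py is an invented name): under Windows's spawn start method, 'module loaded' prints once for the parent and once for the child, because the child re-imports the main module before running work().

# spawn_demo.py -- module-level code runs again in every spawned child
import multiprocessing

print('module loaded')  # printed once per process under spawn

def work():
    print('child working')

if __name__ == '__main__':  # without this guard, the child would try
    p = multiprocessing.Process(target=work)  # to spawn again on import
    p.start()
    p.join()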
The solution here is to modify the my_mp_script.py script so the call to my_mp_module.main() is guarded like so..
import my_mp_module
import multiprocessing

if __name__ == '__main__':
    my_mp_module.main(command_line=None)
Note that I've also removed the freeze_support() calls for now; however, it may be fine to put them back in if needed.

Get realtime output from python subprocess

I'm trying to invoke a command line utility from Python. The code is as follows
import subprocess
import sys

class Executor:
    def executeEXE(self, executable):
        CREATE_NO_WINDOW = 0x08000000
        process = subprocess.Popen(executable, stdout=subprocess.PIPE,
                                   creationflags=CREATE_NO_WINDOW)
        while True:
            line = process.stdout.readline()
            if line == '' and process.poll() is not None:
                break
            print line
The problem with the above code is that I want the real-time output of the above process, which I'm not getting. What am I doing wrong here?
There are two problems in your code.
First of all, readline() will block until a new line is printed out and flushed.
That means you should execute the code
while True:
    ...
in a new Thread and call a callback function when the output is ready.
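A minimal sketch of that threading approach (some_tool is a placeholder for your real executable, and the callback here just echoes each line):

import subprocess
import sys
import threading

def stream_output(proc, callback):
    # Read lines on a worker thread and hand each one to the callback
    # as soon as it arrives.
    for line in proc.stdout:
        callback(line)

proc = subprocess.Popen(['some_tool'], stdout=subprocess.PIPE,
                        universal_newlines=True)
t = threading.Thread(target=stream_output, args=(proc, sys.stdout.write))
t.daemon = True  # don't let the reader thread keep the interpreter alive
t.start()
# ... the main thread is free to do other work here ...
t.join()  # or join later, once the tool is done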
Since readline() is waiting for a new line, you must use
print 'Hello World'
sys.stdout.flush()
every time in your executable.
You can see some code and an example in my Git repo:
pyCommunicator
Instead, if your external tool is buffered, the only thing you can try is to use stderr as PIPE:
https://stackoverflow.com/a/11902799/2054758
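As a rough sketch of that stderr idea (some_tool is again a placeholder): many programs leave stderr unbuffered even when their stdout is block-buffered, so reading stderr can be closer to realtime.

import subprocess

# stderr is often unbuffered in the child even when stdout is not
proc = subprocess.Popen(['some_tool'],
                        stderr=subprocess.PIPE,
                        universal_newlines=True)
for line in proc.stderr:
    print(line, end='')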

Python talking with other application(s) using subprocess

This is the idea: I'll have a 'main' Python script that will start (using subprocess) app1 and app2. The 'main' script will send input to app1 and output the result to app2 and vice versa (and the main script will need to remember what was sent, so I can't just pipe app1 to app2).
This is main script.
import subprocess
import time

def main():
    prvi = subprocess.Popen(['python', 'random1.py'], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while 1:
        prvi.stdin.write('131231\n')
        time.sleep(1)  # maybe it needs to wait
        print "procitano", prvi.stdout.read()

if __name__ == '__main__':
    main()
And this is 'random1.py' file.
import random

def main():
    while 1:
        inp = raw_input()
        print inp, random.random()

if __name__ == '__main__':
    main()
First I tried with only one subprocess, just to see if it's working, and it's not. It only outputs 'procitano' and waits there.
How can I read the output from 'prvi' without communicate()? (When I use it, it exits my app, and that's something I don't want.)
Add prvi.stdin.flush() after prvi.stdin.write(...).
Explanation: To optimize communication between processes, the OS will buffer 4KB of data before it sends that whole buffer to the other process. If you send less data, you need to tell the OS "That's it. Send it now" -> flush()
[EDIT] The next problem is that prvi.stdout.read() will never return since the child doesn't exit.
You will need to develop a protocol between the processes, so each knows how many bytes of data to read when it gets something. A simple solution is to use a line-based protocol (each "message" is terminated by a newline). To do that, replace read() with readline(), and don't forget to append \n to everything you send, plus a flush().
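Putting those two fixes together, a sketch of the parent side (Python 3 shown; it assumes random1.py also calls sys.stdout.flush() after each print):

import subprocess
import sys

prvi = subprocess.Popen([sys.executable, 'random1.py'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        universal_newlines=True)
for number in ('131231', '42'):
    prvi.stdin.write(number + '\n')  # one '\n'-terminated message...
    prvi.stdin.flush()               # ...pushed past the OS buffer now
    print('procitano', prvi.stdout.readline().rstrip())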
main.py
import subprocess
import time

def main():
    prvi = subprocess.Popen(['python', 'random1.py'], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    prvi.stdin.write('131231\n')
    time.sleep(1)  # maybe it needs to wait
    print "procitano", prvi.stdout.read()

if __name__ == '__main__':
    main()
random1.py
import random

def main():
    inp = raw_input()
    print inp, random.random()
    inp = raw_input()

if __name__ == '__main__':
    main()
I tested with the above code and got the same problem as with your code. I think the problem is timing. Here is my guess: when main.py tries the code below
prvi.stdout.read()  # I think this call may use the random1.py process
the code below grabs the random1.py process
inp = raw_input()
To solve this problem, I think, as Aaron Digulla says, you need to develop a protocol between the processes.
Use the -u flag to make random1.py's output unbuffered.
Use p.stdout.readline() instead of .read().
time.sleep() is unnecessary because .read() blocks.
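Combined into a short sketch (Python 3 shown; the question's code is Python 2):

import subprocess
import sys

# -u turns off the child's output buffering, so each print arrives immediately
prvi = subprocess.Popen([sys.executable, '-u', 'random1.py'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        universal_newlines=True)
prvi.stdin.write('131231\n')
prvi.stdin.flush()
print('procitano', prvi.stdout.readline().rstrip())  # blocks for one line only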
