So I'm using some libraries that (unfortunately and much to my chagrin) print to stdout for certain debug info. Okay, no problem, I just disabled it with:
import sys,os
sys.stdout = open(os.devnull,'wb')
I've recently added code for getting user input and printing output in the terminal, which of course needs stdout. But again, no problem, instead of print I can just do:
sys.__stdout__.write('text\n')
Finally, since I'm doing raw_input in a separate thread and the program's output can interrupt the user typing, I wanted to re-echo the text as described in the first part of this answer to make it more or less seamless using readline.get_line_buffer(). For example, if the user entered foobar:
> foobar
And then another thread says Interrupting text! it should look like:
Interrupting text!
> foobar
But instead I observe:
Interrupting text!
>
It turns out that readline.get_line_buffer() is always blank, and the code fails to re-echo what the user had typed. If I remove the sys.stdout override, everything works as expected (except now the debug output from libraries isn't blocked).
I'm using Python 2.7 on Linux if that matters. Perhaps what I want to do is impossible, but I'd really like to know why this is happening. In my mind, doing things with stdout shouldn't affect the stdin line buffer, but I'm admittedly not familiar with the underlying libraries or implementation.
A minimal working example is below. Just type something and wait for the interrupting text. The prompt "> " will be re-echoed, but the text you entered won't be. Simply comment out the second line to see it work correctly. Each time the thread prints, it also logs the current line buffer to stdin.log.
import time,readline,thread,sys,os
sys.stdout = open(os.devnull,'wb') # comment this line!
def noisy_thread():
    while True:
        time.sleep(5)
        line = readline.get_line_buffer()
        with open('stdin.log','a') as f:
            f.write(time.asctime()+' | "'+line+'"\n')
        sys.__stdout__.write('\r'+' '*(len(line)+2)+'\r')
        sys.__stdout__.write('Interrupting text!\n')
        sys.__stdout__.write('> ' + line)
        sys.__stdout__.flush()

thread.start_new_thread(noisy_thread, ())

while True:
    sys.__stdout__.write('> ')
    s = raw_input()
I've also tried sys.stdout = sys.__stdout__ right before the get_line_buffer() call, but that doesn't work either. Any help or explanation is greatly appreciated.
It's the raw_input() that stops working when you redirect stdout.
If you try sys.stdout = sys.__stdout__ just before the raw_input(), it works again.
Both stdin and stdout need to be connected to the terminal in order for readline (and therefore raw_input's line editing and line buffer) to work.
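For example, here's a minimal, untested sketch of that workaround (prompted_input is a hypothetical helper, not from the question). It keeps stdout pointed at os.devnull to swallow the library chatter, but swaps the real stdout back in for the duration of each raw_input() call:
import sys, os

sys.stdout = open(os.devnull, 'w')  # swallow library debug prints

def prompted_input(prompt):
    suppressed = sys.stdout
    sys.stdout = sys.__stdout__  # readline needs the real stdout here
    try:
        return raw_input(prompt)
    finally:
        sys.stdout = suppressed  # back to swallowing debug prints

s = prompted_input('> ')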
Related
I am trying to use subprocess to call an swipl.exe (prolog interpreter) and simulate terminal interaction via pipes.
import subprocess
import sys
import threading

# SWI_PROLOG_PATH, PROLOG_FILE_PATH and global_sms are defined elsewhere
# in the asker's program.
USE_UNIVERSAL_ENDLINES = True
proc = subprocess.Popen([SWI_PROLOG_PATH, PROLOG_FILE_PATH], bufsize=0,
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE,
                        universal_newlines=USE_UNIVERSAL_ENDLINES)

def output_reader(proc, sms):
    while True:
        c = proc.stdout.read(1)
        print('got data in pipe')
        sms.add_data(c)

def error_reader(proc, sms):
    while True:
        c = proc.stderr.read(1)
        # sms.add_data(c)

def input_sender(proc):
    while True:
        input_data = input()
        proc.stdin.write((input_data + '\n') if USE_UNIVERSAL_ENDLINES
                         else (input_data + '\n').encode(sys.stdout.encoding))
        proc.stdin.flush()
        print('send: ' + input_data)

to = threading.Thread(target=output_reader, args=(proc, global_sms))
to.start()
te = threading.Thread(target=error_reader, args=(proc, global_sms))
te.start()
ti = threading.Thread(target=input_sender, args=(proc,))
ti.start()
This setup works with a simple echo program that reads input and prints it back to output.
Note also that I set bufsize to 0, which should mean no buffering.
In response I get this message via the piped stderr:
recieved : Welcome to SWI-Prolog (threaded, 64 bits, version 8.2.1)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.
Please run ?- license. for legal details.
For online help and background, visit https://www.swi-prolog.org
For built-in help, use ?- help(Topic). or ?- apropos(Word).
but not a single character from the piped stdout, and I should see at least '1 ?-'.
When I use the terminal to redirect the output instead:
c:\Program Files\swipl\bin>swipl c:/agh/s9/SE/project/pp.pl > c:/agh/s9/SE/project/prolog_out.txt
and in the file I get:
1 ?-
Meaning (a) SWI-Prolog does send something down its stdout, and (b) for reasons unknown to me, I do not get anything in the pipe, even though I try to read a single character at a time.
Question:
How do I read what SWI-Prolog writes to stdout? I am aware of the existence of pyswip, but I do not wish to use it, because redoing the Prolog file would probably take more time than fixing this.
In case anyone stumbles on a similar problem and is unwilling to use another API: SWI-Prolog seems to go into buffered output mode if both stdin and stdout are redirected (only when both, which was quite hard to track down). Adding the following call in its own procedure (or as the first goal of the procedure you want to call):
set_stream(current_output, tty(true))
will force stdout to work in terminal mode and not buffer, allowing message communication via pipes.
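If you'd rather not touch the Prolog file at all, an untested alternative sketch is to send the same goal as the first line of input after starting the interpreter, reusing the proc object from the question (whether the top level accepts it this way depends on how you drive the interpreter):
proc.stdin.write('set_stream(current_output, tty(true)).\n')
proc.stdin.flush()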
I'm using Python 3 on Komodo, and I want there to be a time delay between the execution of commands. However, using the code below, all of the print output appears at the same time, although the printed timestamps show that the time after the commands ran is two seconds later than the time before. Is there a way for the first line to be printed, wait a second, the second line to be printed, wait a second, and then have the third and fourth lines printed?
import time

t = time.asctime(time.localtime(time.time()))
print(t)
time.sleep(1)
print('Good Night')
time.sleep(1)
print("I'm back")
t = time.asctime(time.localtime(time.time()))
print(t)
By default, print prints to sys.stdout, which is line-buffered when writing to an interactive terminal,[1] but block-buffered when writing to a file.
So, when you run your code with python myscript.py from your Terminal or Command Prompt, you will see each line appear as it's printed, as desired.
But if you run it with, say, python myscript.py >outfile, nothing will get written until the buffer fills up (or until the script exits, if the buffer never fills). Normally, that's fine. But apparently, however you're running your script in Komodo, it looks like a regular file, not an interactive terminal, to Python.
It's possible that you can fix that just by using or configuring Komodo differently.
I don't know much about Komodo, but I do see that there's an addon for embedding a terminal; maybe if you use that instead of sending output to the builtin JavaScript (?) console, things will work better, but I really have no idea.
Alternatively, you can make sure that the output buffer is flushed after each line by doing it manually, e.g., by passing the flush argument to print:
print(t, flush=True)
If you really want to, you can even replace print in your module with a function that always does this:
import builtins
import functools
print = functools.partial(builtins.print, flush=True)
… but you probably don't want to do that.
Alternatively, you can replace sys.stdout with a line-buffered file object over the raw stdout, just by calling open on its underlying raw file or file descriptor:
sys.stdout = open(sys.stdout.fileno(), mode='w', buffering=1, closefd=False)
If you search around Stack Overflow or the web, you'll find a lot of suggestions to disable buffering. And you can force Python to use unbuffered output with the -u flag or the PYTHONUNBUFFERED environment variable. But that may not do any good in Python 3.[2]
[1] As the sys.stdout documentation explains, it's just a regular text file, like those returned by open. As explained in the open documentation, this distinction is made by calling isatty.
[2] Python 2's stdout is just a thin wrapper around the C stdio object, so if you open it unbuffered, there's no buffering. Python 3's stdout is a hefty wrapper around the raw file descriptor that does its own buffering and decoding (see the io docs for details), so -u will make sys.stdout.buffer.raw unbuffered, but sys.stdout itself will still be buffered, as explained in the -u docs.
This is a weird one that's so general I can't properly narrow the search terms to find an answer.
My python script has a raw_input to prompt the user for values. But, when I try to run the script and funnel it into a file, it crashes.
Something like "script.py > save.txt"
won't work. It doesn't even properly prompt me at the command line for my input. There doesn't seem to be anything indicating why this doesn't work as intuitively as it should.
raw_input prints its prompt to stdout, which you are redirecting to a file. So your prompt ends up in the file, and the program does not appear to show a prompt. One solution is to print your prompt to stderr.
import sys

sys.stderr.write('prompt> ')
value = raw_input()
print 'value was:', value
You could also avoid using both pipes and interactive input with the same script. Either take input from command line flags using argparse and use pipes, or create an interactive program that saves output to a file itself.
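For example, a minimal sketch of the argparse route (the --value flag name is just illustrative):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--value', required=True,
                    help='value that would otherwise be read interactively')
args = parser.parse_args()
print 'value was:', args.value
Run it as script.py --value foo > save.txt and the redirection no longer interferes with input.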
Depending on your program's logic, you can also check whether stdout is connected to a live console or not:
is_tty = os.isatty(sys.stdout.fileno())
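For example, an untested sketch that prompts only when attached to a terminal, and otherwise reads piped input:
import os, sys

if os.isatty(sys.stdout.fileno()):
    value = raw_input('prompt> ')  # interactive session
else:
    value = sys.stdin.readline().rstrip('\n')  # input is piped in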
Dolda2000 also has a good point about writing to /dev/tty, which will write to the controlling terminal of the script being run even if both stdin and stderr are redirected. The deal there, though, is that you can't use it if you're not running in a terminal.
import errno

try:
    with open('/dev/tty', 'w') as tty:
        tty.write('prompt> ')
except IOError as exc:
    if exc.errno == errno.ENXIO:
        pass  # no /dev/tty available
    else:
        pass  # something else went wrong
I have a long-running Python script that I run from the command-line. The script writes progress messages and results to the standard output. I want to capture everything the script write to the standard output in a file, but also see it on the command line. Alternatively, I want the output to go to the file immediately, so I can use tail to view the progress. I have tried this:
python MyLongRunngingScript.py | tee log.txt
But it does not produce any output (just running the script produces output as expected). Can anyone propose a simple solution? I am using Mac OS X 10.6.4.
Edit: I am using print for output in my script.
You are on the right path, but the problem is Python buffering the output.
Fortunately there is a way to tell it not to buffer output:
python -u MyLongRunngingScript.py | tee log.txt
The fact that you don't see anything is probably related to buffering: you only get output every 4 KB of text or so.
Instead, try something like this:
class OutputSplitter(object):
    def __init__(self, real_output, *open_files):
        self.__stdout = real_output
        self.__fds = open_files
        self.encoding = real_output.encoding

    def write(self, string):
        self.__stdout.write(string)  # don't catch exceptions on this one
        self.__stdout.flush()
        for fd in self.__fds:
            try:
                fd.write(string)
                fd.flush()
            except IOError:
                pass  # do what you want here

    def flush(self):
        pass  # already flushed in write()
Then wrap sys.stdout in that class with code like this:
stdout_saved = sys.stdout
logfile = open("log.txt","a") # check exception on that one.
sys.stdout = OutputSplitter(stdout_saved, logfile)
That way, every output (print included) is flushed to the standard output and to the specified file. It might require tweaking, because I haven't tested this implementation.
Of course, expect a (usually small) performance penalty when printing messages.
Another simple solution could also be
python script.py > output.log
You could try doing sys.stdout.flush() occasionally in your script, and running with tee again. When stdout is redirected through to tee, it might get buffered for longer than if it's going straight to a terminal.
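For example, a minimal sketch of that approach (progress is a hypothetical helper):
import sys

def progress(msg):
    print msg
    sys.stdout.flush()  # push the line through the pipe to tee immediately

progress('step 1 of 3 done')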
Is it possible to capture Python interpreter's output from a Python script?
Is it possible to capture Windows CMD's output from a Python script?
If so, which librar(y|ies) should I look into?
If you are talking about the Python interpreter or cmd.exe that is the 'parent' of your script, then no, it isn't possible. In every POSIX-like system (you're running Windows, it seems, and that might have some quirk I don't know about, YMMV) each process has three streams: standard input, standard output, and standard error. By default (when running in a console) these are directed to the console, but redirection is possible using the pipe notation:
python script_a.py | python script_b.py
This ties the standard output stream of script A to the standard input stream of script B. Standard error still goes to the console in this example. See the article on standard streams on Wikipedia.
If you're talking about a child process, you can launch it from Python like so (stdin is also an option if you want two-way communication):
import subprocess
# Of course you can open things other than python here :)
process = subprocess.Popen(["python", "main.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
x = process.stderr.readline()
y = process.stdout.readline()
process.wait()
See the Python subprocess module for information on managing the process. For communication, the process.stdin and process.stdout pipes are considered standard file objects.
To use this with pipes, reading from standard input as lassevk suggested, you'd do something like this:
import sys

line = sys.stdin.readline()  # reads what the upstream process wrote to its stdout
sys.stdin and sys.stdout are standard file objects as noted above, defined in the sys module. You might also want to take a look at the pipes module.
Reading data with readline() as in my example is a pretty naïve way of getting data, though. If the output is not line-oriented or is nondeterministic, you probably want to look into polling, which unfortunately does not work on pipes on Windows, but I'm sure there's some alternative out there.
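For example, here's a POSIX-only, untested sketch of that polling idea, reusing the process object from the subprocess example above:
import select

# wait up to 1 second for the child's stdout to have data before reading
ready, _, _ = select.select([process.stdout], [], [], 1.0)
if ready:
    line = process.stdout.readline()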
I think I can point you to a good answer for the first part of your question.
1. Is it possible to capture Python interpreter's output from a Python
script?
The answer is "yes", and personally I like the following lifted from the examples in the PEP 343 -- The "with" Statement document.
from contextlib import contextmanager
import sys

@contextmanager
def stdout_redirected(new_stdout):
    saved_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield None
    finally:
        sys.stdout.close()
        sys.stdout = saved_stdout
And used like this:
with stdout_redirected(open("filename.txt", "w")):
    print "Hello world"
A nice aspect of it is that it can be applied selectively around just a portion of a script's execution, rather than its entire extent, and stays in effect even when unhandled exceptions are raised within its context. If you re-open the file in append-mode after its first use, you can accumulate the results into a single file:
with stdout_redirected(open("filename.txt", "w")):
    print "Hello world"
print "screen only output again"
with stdout_redirected(open("filename.txt", "a")):
    print "Hello world2"
Of course, the above could also be extended to also redirect sys.stderr to the same or another file. Also see this answer to a related question.
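For instance, an untested sketch of that extension, mirroring the stdout version above:
from contextlib import contextmanager
import sys

@contextmanager
def stderr_redirected(new_stderr):
    saved_stderr = sys.stderr
    sys.stderr = new_stderr
    try:
        yield None
    finally:
        sys.stderr.close()
        sys.stderr = saved_stderr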
Actually, you definitely can, and it's beautiful, ugly, and crazy at the same time!
You can replace sys.stdout and sys.stderr with StringIO objects that collect the output.
Here's an example, save it as evil.py:
import sys
import StringIO
s = StringIO.StringIO()
sys.stdout = s
print "hey, this isn't going to stdout at all!"
print "where is it ?"
sys.stderr.write('It actually went to a StringIO object, I will show you now:\n')
sys.stderr.write(s.getvalue())
When you run this program, you will see that:
- nothing went to stdout (where print usually prints to)
- the first string that gets written to stderr is the one starting with 'It'
- the next two lines are the ones that were collected in the StringIO object
Replacing sys.stdout/err like this is an application of what's called monkeypatching. Opinions vary on whether or not this is 'supported', and it is definitely an ugly hack, but it has saved my bacon when trying to wrap around external stuff once or twice.
Tested on Linux, not on Windows, but it should work just as well. Let me know if it works on Windows!
You want subprocess. Look specifically at Popen in 17.1.1 and communicate in 17.1.2.
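A small sketch of that pattern (untested; 'child.py' is a placeholder for the program you want to run):
import subprocess

proc = subprocess.Popen(['python', 'child.py'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate('some input\n')  # feeds stdin, waits for exit
print 'stdout:', out
print 'stderr:', err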
In which context are you asking?
Are you trying to capture the output from a program you start on the command line?
If so, then this is how to execute it:
somescript.py | your-capture-program-here
and to read the output, just read from standard input.
If, on the other hand, you're executing that script or cmd.exe or similar from within your program, and want to wait until the script/program has finished and capture all its output, then you need to look at the library calls you use to start that external program; most likely there is a way to ask it to give you access to its output and to wait for completion.