Goal
I'd like to make my curses Python application display its output on a Linux machine's first physical console (TTY1) by adding it to /etc/inittab, reloading init with telinit q and so on.
I'd like to avoid the hacky approach of using I/O redirection when starting it from /etc/inittab with:
1:2345:respawn:/path/to/app.py > /dev/tty1 < /dev/tty1
What I'm after is doing it natively from within my app, similar to the way getty does it, i.e. you use a command-line argument to tell it which TTY to listen on:
S0:2345:respawn:/sbin/getty -L ttyS1 115200 vt100
Example code
For simplicity, let's say I've written this very complex app that, when invoked, prints some content using ncurses routines.
import curses

class CursesApp(object):
    def __init__(self, stdscr):
        self.stdscr = stdscr
        # Code producing some output, accepting user input, etc.
        # ...

curses.wrapper(CursesApp)
The code I already have does everything I need, except that it only shows its output on the terminal it's run from. When invoked from inittab without the hacky redirection I mentioned above, it works but there's no output on TTY1.
I know that init doesn't redirect input and output by itself, so that's expected.
How would I need to modify my existing code to send its output to the requested TTY instead of STDOUT?
PS. I'm not asking how to add support for command line arguments, I already have this but removed it from the code sample for brevity.
This is rather simple. Just open the terminal device once for input and once for output; then duplicate the input descriptor onto the process's file descriptor 0, and the output descriptor onto file descriptors 1 and 2. Then close the original handles to the TTY:
import os

with open('/dev/tty6', 'rb') as inf, open('/dev/tty6', 'wb') as outf:
    os.dup2(inf.fileno(), 0)   # stdin
    os.dup2(outf.fileno(), 1)  # stdout
    os.dup2(outf.fileno(), 2)  # stderr
I tested this with the cmd module running on TTY6:
import cmd
cmd.Cmd().cmdloop()
Works perfectly. With curses, though, it's apparent from the garbled display that something is missing: the TERM environment variable:
os.environ['TERM'] = 'linux'
Execute all these statements before even importing curses and it should work.
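Putting it all together, a minimal sketch of the whole sequence might look like this (the TTY path is hard-coded as /dev/tty6 for illustration; in practice it would come from your command-line argument):

import os

# Must be set before curses is initialized
os.environ['TERM'] = 'linux'

# Rebind stdin/stdout/stderr to the requested TTY
with open('/dev/tty6', 'rb') as inf, open('/dev/tty6', 'wb') as outf:
    os.dup2(inf.fileno(), 0)
    os.dup2(outf.fileno(), 1)
    os.dup2(outf.fileno(), 2)

import curses  # imported only after the descriptors and TERM are set up

class CursesApp(object):
    def __init__(self, stdscr):
        self.stdscr = stdscr
        stdscr.addstr(0, 0, 'Hello from the requested TTY')
        stdscr.refresh()
        stdscr.getkey()

curses.wrapper(CursesApp)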
Related
I have two Python files (main.py and main_test.py). The file main_test.py is executed from within main.py. When I do not use a log file, this is what gets printed out:
Main file: 17:41:18
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
When I use a log file and execute main.py > log, I get the following:
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
Main file: 17:41:18
Also, when I use python3 main.py | tee log to print the output and log it at the same time, it waits and prints everything only after the script has finished. In addition, the reversed order remains.
Questions
How can I fix the reversed print out?
How can I print out results simultaneously in terminal and log them in a correct order?
Python files for replication
main.py
import os
import time
import datetime
import pytz
python_file_name = 'main_test'+'.py'
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
print(f'Main file: {curr_time}')
cwd = os.path.join(os.getcwd(), python_file_name)
os.system(f'python3 {cwd}')
main_test.py
import pytz
import datetime
import time
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
for i in range(3):
    curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
    print(f'Executed file: {curr_time}')
    time.sleep(1)
When you run a script like this:
python main.py>log
The shell redirects output from the script to a file called log. The subprocess launched by os.system() inherits that redirected file descriptor, so its output lands in the same file; the reversed order comes from buffering. When stdout is a file rather than a terminal, Python block-buffers it, so 'Main file: ...' sits in the parent's buffer until the interpreter exits, while the child flushes its own lines when it terminates a few seconds earlier.
The same buffering explains the tee behaviour: with a pipe on stdout, Python again block-buffers, so tee receives nothing to print until each process flushes its buffer on exit. That is also why the reversal remains.
Why bother with shells at all, though? Why not write a few functions to call, and import the other Python module to call its functions? Or, if you need things to run in parallel (which they don't in your example), look at multiprocessing.
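For instance, a sketch of the import approach (run() is a hypothetical name for the refactored loop):

# main_test.py, refactored to expose its loop as a function
import time
import datetime
import pytz

time_zone = pytz.timezone('US/Eastern')

def run():
    for i in range(3):
        curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
        print(f'Executed file: {curr_time}')
        time.sleep(1)

# main.py would then simply do:
#   import main_test
#   main_test.run()

Because everything now runs in a single process sharing one stdout buffer, the order is preserved even when you redirect to a file.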
In direct response to your questions:
"How can I fix the reversed print out?"
Don't use redirection and write to the file directly from the script; or ensure you use the same redirection when calling other scripts from the first (that will get messy); or capture the output of the subprocesses and pipe it to the standard out of your main script.
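Given the buffering behaviour described above, the smallest fix within the existing scripts is to flush explicitly before handing off to the child (a sketch; running the scripts with python3 -u, which disables buffering, achieves the same):

# in main.py: force the parent's line out before the child starts writing
print(f'Main file: {curr_time}', flush=True)
os.system(f'python3 {cwd}')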
"How can I print out results simultaneously in terminal and log them in a correct order?"
You should probably just do it in the script; otherwise this is not really a Python question, and you should try SuperUser or similar sites to see if there's some way to have tee or similar tools write through live.
In general though, unless you have really strong reasons to run the other functionality in separate shells, you should look at solving your problems within the Python script. And if you can't, you can use something like Popen or its convenience wrappers to capture the subscript's output and do what you need with it, instead of relying on tools that may or may not be available on the host OS running your script.
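For example, a sketch using subprocess.run (assuming Python >= 3.7 for capture_output) that captures the child's output and then writes it to both the terminal and a log, in order:

import subprocess
import sys

result = subprocess.run([sys.executable, 'main_test.py'],
                        capture_output=True, text=True)

with open('log', 'a') as logfile:
    sys.stdout.write(result.stdout)  # echo to the terminal
    logfile.write(result.stdout)     # and append to the log

Note that this collects the child's output only once it has exited; for live output you would read line by line from a Popen pipe instead.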
I am using Spyder for Python and sometime I would like to print the console into a log file (in cases where the output is quite long) and sometimes I just want to have the output at the console. For this purpose I use the following construction in my Python files:
In the beginning of the file:
import sys
# specify if the output should be printed on a separate log file or on the console
printLogToFile = False
if printLogToFile == True:
    # Specify output file for the logs
    sys.stdout = open('C:/Users/User 1/logfile.txt', 'w')
At the end of the file:
# Close the log file if output is printed on a log file and not on the console
if printLogToFile == True:
    sys.stdout.close()
    sys.stdout = sys.__stdout__
Basically, whenever my boolean variable printLogToFile is False, everything is printed to the console as it should be, and whenever it is True, everything is printed into the log file. However, once I run the file just once with printLogToFile=True, this can't be reversed any longer. Even when the variable is False, everything is still printed into the log file and not to the console. What is even stranger is that for other Python files, which have no connection to this one, output is no longer printed to the console either. The only way to solve this problem is to close Spyder and restart it.
Do you have any idea why this is happening and how to avoid this? I'd appreciate every comment.
The console in Spyder is an IPython console, not a plain Python console, so I think IPython is doing something with stdout that causes your approach to fail.
The docs for sys.__stdout__ say:
It can also be used to restore the actual files to known working file objects in case they have been overwritten with a broken object. However, the preferred way to do this is to explicitly save the previous stream before replacing it, and restore the saved object.
In other words, try:
if printLogToFile:
    prev_stdout = sys.stdout
    sys.stdout = open('C:/Users/User 1/logfile.txt', 'w')

# code that generates the output goes here

if printLogToFile:
    sys.stdout.close()
    sys.stdout = prev_stdout
As an alternative, based on a couple of related answers and assuming Python >= 3.7, you can use contextlib and a with statement to selectively capture the output of some of your code. This seems to work for me in Spyder 4 and 5:
from contextlib import redirect_stdout, nullcontext

if printLogToFile:
    f = open('myfile.txt', 'w')
    cm = redirect_stdout(f)
else:
    cm = nullcontext()

with cm:
    # code that generates the output goes here
    ...
If you want to execute the whole of your Python script myscript.py and capture everything it outputs, it's probably easier to leave your script unmodified and call it from a wrapper script:
# put this in the same folder as myscript.py
from contextlib import redirect_stdout

with redirect_stdout(open('myfile.txt', 'w')):
    import myscript
If you want anything more flexible than that, it's probably time to start using logging.
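For instance, a minimal logging sketch that writes every message to both the console and a file (the file name is illustrative):

import logging

logger = logging.getLogger('myscript')
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())             # console (stderr)
logger.addHandler(logging.FileHandler('logfile.txt'))  # file

logger.info('this line goes to the console and to logfile.txt')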
To add an ad hoc debugger breakpoint in a Python script, I can insert the line
import pdb; pdb.set_trace()
Pdb reads from standard input, so this doesn't work if the script itself also reads from standard input. As a workaround, on a Unix-like system, I can tell pdb to read from the terminal:
import pdb; pdb.Pdb(stdin=open('/dev/tty', 'r'), stdout=open('/dev/tty', 'w')).set_trace()
This works, but unlike with a plain pdb.set_trace, I don't get the benefit of the command-line editing provided by the readline library (arrow keys, etc.).
How can I enter pdb without interfering with the script's stdin and stdout, and still get command-line editing?
Ideally the same code should work in both Python 2 and Python 3. Compatibility with non-Unix systems would be a bonus.
Toy program as a test case:
#!/usr/bin/env python
import sys

for line in sys.stdin:
    #import pdb; pdb.set_trace()
    import pdb; pdb.Pdb(stdin=open('/dev/tty', 'r'), stdout=open('/dev/tty', 'w')).set_trace()
    sys.stdout.write(line)
Usage: { echo one; echo two; } | python cat.py
I hope I have not missed anything important, but it seems you cannot really do this in an entirely trivial way: readline only gets used if pdb.Pdb (resp. cmd.Cmd, which it subclasses) has use_rawinput set to non-zero, which in turn would mean ignoring your stdin and mixing input for the debugger and the script itself. That said, the best I've come up with so far is:
#!/usr/bin/env python3
import os
import sys
import pdb

pdb_inst = pdb.Pdb()

# Keep a duplicate of the original (piped) stdin for the script's own use
stdin_called = os.fdopen(os.dup(0))

# Point file descriptor 0 at the controlling terminal instead
console_new = open('/dev/tty')
os.dup2(console_new.fileno(), 0)
console_new.close()
sys.stdin = os.fdopen(0)

for line in stdin_called:
    pdb_inst.set_trace()
    sys.stdout.write(line)
It is relatively invasive to your original script, although it could at least be placed outside of it and imported and called, or used as a wrapper.
I've duplicated the incoming STDIN to a new file descriptor and opened that as stdin_called. Then (based on your example) I've opened /dev/tty for reading, replaced the process' file descriptor 0 (for STDIN; strictly speaking, it should use the value returned by sys.stdin.fileno()) with the one I've just opened, and also reassigned a corresponding file-like object to sys.stdin. This way the program's loop and pdb use their own input streams, while pdb gets to interact with what appears to be a "normal" console STDIN that it is happy to enable readline on.
It isn't pretty, but it should do what you were after and hopefully provides useful hints. When in pdb, it uses readline (line editing, history, completion) if available:
$ { echo one; echo two; } | python3 cat.py
> /tmp/so/cat.py(16)<module>()
-> sys.stdout.write(line)
(Pdb) c
one
> /tmp/so/cat.py(15)<module>()
-> pdb_inst.set_trace()
(Pdb) con[TAB][TAB]
condition cont continue
(Pdb) cont
two
Note that starting with Python 3.7 you could use breakpoint() instead of import pdb; pdb.Pdb().set_trace() for convenience, and you could also check the result of the dup2 call to make sure the file descriptor was created/replaced as expected.
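For instance, a sketch of wiring breakpoint() to the instance above (assuming Python >= 3.7 and the pdb_inst from the previous snippet):

import sys
sys.breakpointhook = pdb_inst.set_trace  # breakpoint() now drops into our Pdb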
EDIT: As mentioned earlier and noted in a comment by the OP, this is both ugly and invasive to the script. It doesn't get any prettier, but we can employ a few tricks to reduce the impact on the script's surroundings. One such option I've hacked together:
import sys

# Add this: BEGIN
import os
import pdb
import inspect

pdb_inst = pdb.Pdb()

class WrapSys:
    def __init__(self):
        self.__stdin = os.fdopen(os.dup(0))
        self.__console = open('/dev/tty')
        os.dup2(self.__console.fileno(), 0)
        self.__console.close()
        self.__console = os.fdopen(0)
        self.__sys = sys

    def __getattr__(self, name):
        if name == 'stdin':
            # Hand the terminal to pdb; everything else gets the original stdin
            if any(f.filename.endswith("pdb.py") for f in inspect.stack()):
                return self.__console
            else:
                return self.__stdin
        else:
            return getattr(self.__sys, name)

sys = WrapSys()
# Add this: END

for line in sys.stdin:
    pdb_inst.set_trace()  # Inject breakpoint
    sys.stdout.write(line)
I have not dug all the way through, but as is, pdb/cmd seems to need not only sys.stdin but also for it to use fd 0 in order for readline to kick in. The above example takes things up a notch and, within our script, hijacks what sys stands for in order to present a different meaning for sys.stdin whenever code from pdb.py is on the stack. One obvious caveat: if anything other than pdb also expects and depends on sys.stdin using fd 0, it would still be out of luck (or would read its input from a different stream if it just went for it).
Today I managed to run my first Python script ever. I'm a newb, on a Windows 7 machine.
When I run python.exe and enter the following (Python is installed in C:/Python27):
import os
os.chdir('C:\\Pye\\')
from decoder import *
decode("12345")
I get the desired result in the Python command prompt window, so the code works fine. Then I tried to output those results to a text file, just so I don't have to copy-paste it all manually from the prompt window. After a bit of Googling (again, I'm kinda guessing what I'm doing here) I came up with this:
I wrote an "a.py" script in the C:/Pye directory, and it looked like this:
from decoder import *
decode("12345")
And then I wrote a 01.py file that looked like this:
import subprocess

with open("result.txt", "w+") as output:
    subprocess.call(["python", "c:/Pye/a.py"], stdout=output)
I see that result.txt gets created in the directory, but it's 0 bytes. The same happens if I create an empty result.txt beforehand and then execute 01.py (I use Python Launcher).
Any ideas where am I screwing things up?
You didn't print anything in a.py. Change it to this:
from decoder import *
print(decode("12345"))
In the Python shell, the result is printed automatically, but the shell is just a helper. In a file, you have to tell it explicitly.
When you run python and enter commands, output is printed to standard out (the console by default) because you're using the shell. What is printed in the Python shell is just a representation of the object returned by that line of code. It's not actually equivalent to explicitly calling print.
When you run python with a file argument, it executes that script, line by line, without printing any variables to stdout unless you explicitly call "print()" or write directly to stdout.
Consider changing your script to call print explicitly:
print(decode("12345"))
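Alternatively, if the goal is just to get the result into result.txt, you could skip subprocess entirely and write the file directly from a.py (a sketch, assuming decode() returns something that str() can render):

from decoder import *

with open("result.txt", "w") as output:
    output.write(str(decode("12345")))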
I'm working on a wrapper script for invocations of the Ninja C/C++ build system. The script is in Python, and one thing it should do is log the output from Ninja and the underlying compiler, but without suppressing standard output.
The part that gives me trouble is that Ninja seems to detect whether or not it is writing to a terminal, so simply catching the output and sending it to standard output ends up changing it (most notably, when on a terminal Ninja does not fill the screen with a list of warning-free and error-free build files, but overwrites the line of the last successfully built translation unit as a new one comes in). Is there any way to let Ninja write to the terminal while still capturing its output? The writing to the terminal should happen as the Ninja subprocess runs, but the capturing of said output may wait until the subprocess has completed.
pty.spawn() allows you to log output to a file while hoodwinking the Ninja subprocess into thinking that it is working with a terminal (tty):
import os
import pty

logfile = open('logfile', 'wb')

def read(fd):
    # Called by pty.spawn() whenever the child writes to the pty:
    # log the raw bytes, then return them so they still reach the terminal.
    data = os.read(fd, 1024)
    logfile.write(data)
    return data

pty.spawn("ninja", read)
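A usage sketch with arguments and cleanup (the "-C build" arguments are illustrative; pty.spawn() returns the child's wait status once it exits, and os.waitstatus_to_exitcode requires Python >= 3.9):

import sys

status = pty.spawn(['ninja', '-C', 'build'], read)
logfile.close()
sys.exit(os.waitstatus_to_exitcode(status))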