I'm working on a Python wrapper script for invocations of the Ninja C/C++ build system. One thing it should do is log the output from Ninja and the underlying compiler, but without suppressing standard output.
The part that gives me trouble is that Ninja detects whether or not it is writing to a terminal, so simply catching the output and sending it to standard output ends up changing it (most notably, when attached to a terminal Ninja does not fill the screen with a list of warning- and error-free build files, but overwrites the line of the last successfully built translation unit as a new one comes in). Is there any way to let Ninja write to the terminal while still capturing its output? The writing to the terminal should happen as the Ninja subprocess runs, but the capturing of said output may wait until the subprocess has completed.
pty.spawn() allows you to log output to a file while hoodwinking the Ninja subprocess into thinking that it is working with a terminal (tty):
import os
import pty

logfile = open('logfile', 'wb')

def read(fd):
    # pty.spawn() calls this whenever the child writes; whatever we return
    # is echoed to the real terminal, so we log a copy and pass it through.
    data = os.read(fd, 1024)
    logfile.write(data)
    return data

pty.spawn("ninja", read)
Related
I have two Python files (main.py and main_test.py). The file main_test.py is executed within main.py. When I do not use a log file this is what gets printed out:
Main file: 17:41:18
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
When I use a log file and execute main.py>log, then I get the following:
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
Main file: 17:41:18
Also, when I use python3 main.py | tee log to print out and log the output, it waits and prints out after finishing everything. In addition, the problem of reversing remains.
Questions
How can I fix the reversed print out?
How can I print out results simultaneously in terminal and log them in a correct order?
Python files for replication
main.py
import os
import time
import datetime
import pytz
python_file_name = 'main_test'+'.py'
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
print(f'Main file: {curr_time}')
cwd = os.path.join(os.getcwd(), python_file_name)
os.system(f'python3 {cwd}')
main_test.py
import pytz
import datetime
import time
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
for i in range(3):
    curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
    print(f'Executed file: {curr_time}')
    time.sleep(1)
When you run a script like this:
python main.py>log
The shell redirects output from the script to a file called log. The subshell launched by os.system() inherits that redirection, so its output lands in the same file; the reversal you see comes from buffering. When standard output is a terminal, Python line-buffers it, but when it is redirected to a file (or a pipe) it is block-buffered: the parent's 'Main file' line sits in the buffer while the child runs, writes, and flushes its own output on exit, and is only flushed when the parent itself exits, which is why it appears last.
The same buffering explains the tee case: with python3 main.py | tee log, standard output is a pipe, so nothing reaches tee until each process flushes on exit. tee itself writes through as soon as data arrives; it simply has nothing to write until then, and the reversed order remains for the same reason.
Why bother with shells at all though? Why not write a few functions to call, and import the other Python module to call its functions? Or, if you need things to run in parallel (which they didn't in your example), look at multiprocessing.
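As an illustration of the import approach, a minimal sketch (it assumes main_test.py is reworked to expose a function, here hypothetically named run(), instead of running its loop at import time):

# main_test.py, reworked
def run():
    for i in range(3):
        print('Executed file: ...')

# main.py
import main_test

print('Main file: ...')
main_test.run()   # same process, same stdout, so the order is preserved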
In direct response to your questions:
"How can I fix the reversed print out?"
Don't use redirection and instead write to the file directly from the script; or ensure you use the same redirection when calling other scripts from the first (that will get messy); or capture the output of the subprocesses and write it to the standard out of your main script yourself. Whichever you pick, flush the parent's output (print(..., flush=True), or run Python with -u) before spawning the child, or the buffering described above will reorder it again; see the sketch after these answers.
"How can I print out results simultaneously in terminal and log them in a correct order?"
You should probably just do it in the script; otherwise this is not really a Python question, and you should try Super User or similar sites to see if there's some way to have tee or similar tools write through live.
In general though, unless you have really strong reasons to have the other functionality running in other shells, you should look at solving your problems in the Python script. And if you can't, you can use something like Popen or derivatives to capture the subscript's output and do what you need, instead of relying on tools that may or may not be available on the host OS running your script.
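To make the flushing fix concrete, here is a minimal sketch of main.py with the timestamp logic elided; with the flush in place, both python3 main.py > log and python3 main.py | tee log keep the expected order, and running the interpreter as python3 -u main.py achieves the same thing without code changes:

import os
import sys

print('Main file: ...', flush=True)           # push the line out of the block buffer now
os.system(f'{sys.executable} main_test.py')   # the child inherits stdout and writes after us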
I'm writing a simple CLI application using Python.
I have a list of records that I want to print in the terminal, and I would like to output them just like git log does: a partial list from which you can load more using the down arrow, and which you quit by typing "q" (basically like the output of less, but without opening and closing a file).
How does git log do that?
You can pipe directly to a pager; the approach from this answer should work.
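For reference, piping straight to the pager might look like this minimal sketch (it assumes less is available on PATH; the records are made up for testing):

import subprocess

text = '\n'.join(f'record {i}' for i in range(400))

# less reads keystrokes from the controlling terminal even when its
# stdin is a pipe, so the usual navigation and "q" still work
pager = subprocess.Popen(['less', '-R'], stdin=subprocess.PIPE)
pager.communicate(text.encode())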
Alternatively, you can use a temporary file:
import os
import tempfile
import subprocess

# File contents for testing, replace with whatever
file = '\n'.join(f"{i} abc 123"*15 for i in range(400))

# Save the file to the disk
with tempfile.NamedTemporaryFile('w+', delete=False) as f:
    f.write(file)

# Run `less` on the saved file
subprocess.check_call(["less", f.name])

# Delete the temporary file now that we are done with it.
os.unlink(f.name)
The device you are looking for is called a pager. There is a pipepager function inside pydoc; it is not documented in the linked pydoc docs, but using an interactive Python console you can learn that
>>> help(pydoc.pipepager)
Help on function pipepager in module pydoc:
pipepager(text, cmd)
Page through text by feeding it to another program.
therefore it seems that you should use it as follows:
import pydoc
pydoc.pipepager("your text here", "less")
with the limitation that it depends on the availability of the less command.
How does git log do that?
git log pipes its output through a pager (less by default) whenever standard output is a terminal, and the pager is started with flags that make it exit immediately if everything fits on one screen. You can check that by running git log (if the repo doesn't have a lot of commits, you can resize the terminal before running the command so the output no longer fits) and then checking the running processes: ps aux | grep less
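You can mimic that behaviour in your own CLI. A minimal sketch (it assumes less is on PATH; FRX is the flag combination git itself defaults to: -F quits at once if the text fits on one screen, -R passes colour codes through, -X avoids clearing the screen on exit):

import subprocess
import sys

text = '\n'.join(f'record {i}' for i in range(200))

if sys.stdout.isatty():
    subprocess.run(['less', '-FRX'], input=text.encode())
else:
    # output is redirected or piped: skip the pager, write straight through
    sys.stdout.write(text + '\n')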
I am trying to compile a script utilizing multiprocessing into a Windows executable. When I compiled it, I at first ran into the same issue as in "Why python executable opens new window instance when function by multiprocessing module is called on windows". Following the accepted answer I adjusted my script such that
from multiprocessing import freeze_support

# my functions

if __name__ == "__main__":
    freeze_support()
    # my script
And this again works perfectly when run as a script. However, when I compile and run it I encounter an error dialog (the screenshot is not reproduced here), part of which I had underlined in green. That specific line refers to
freeze_support()
in my script. Furthermore, the error is not actually raised on this line, but when my script goes to multiprocess, which is something like:
p = multiprocessing.Process(target=my_function, args=[my_list])
p.start()
p1 = multiprocessing.Process(target=my_function, args=[my_list])
p1.start()
p.join()
p1.join()
Is this an error in the multiprocessing module (specifically line 148) or am I misunderstanding the answer I linked, or something else?
I'll also note that the script does work correctly when compiled, but you have to click "OK" on an error message for every multiprocess that is spawned (quite a lot), and every error message is exactly the same. Would this mean I am improperly ending the process with p.join()?
I've also tried the solution at Python 3.4 multiprocessing does not work with py2exe which recommends adding
multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
to your script, yet this causes an error even in script form (not even compiled yet):
FileNotFoundError: [WinError 2] The system cannot find the file specified
Thanks for the help!
freeze_support documentation: https://docs.python.org/2/library/multiprocessing.html#multiprocessing.freeze_support
This appears to have been a problem for quite some time - I found references going back to 2014, at least. Since it appears to be harmless, the general recommendation is to suppress the error by replacing sys.stdout (and sys.stderr, which is flushed on the next line) with a dummy. Try this:
import os
import sys
from multiprocessing import freeze_support

if __name__ == '__main__':
    if sys.stdout is None:
        sys.stdout = sys.stderr = open(os.devnull, 'w')
    freeze_support()
This is not an issue of the multiprocessing library or py2exe per se but a side effect of the way you run the application. The py2exe documentation contains some discussion on this topic:
A program running under Windows can be of two types: a console
program or a windows program. A console program is one that runs in
the command prompt window (cmd). Console programs interact with users
using three standard channels: standard input, standard output and
standard error […].
As opposed to a console application, a windows application interacts
with the user using a complex event-driven user interface and
therefore has no need for the standard channels whose use in such
applications usually results in a crash.
Py2exe will work around these issues automatically in some cases, but at least one of your processes has no attached standard output (sys.stdout is None), which means that sys.stdout.flush() is effectively None.flush(), which yields the error you are getting. The documentation linked above has an easy fix that redirects all outputs to files.
import sys
sys.stdout = open("my_stdout.log", "w")
sys.stderr = open("my_stderr.log", "w")
Simply add those lines at the entry point of your processes. There is also a relevant documentation page on the interactions between Py2Exe and subprocesses.
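In this question's terms, that means guarding each worker before anything touches the standard streams. A minimal sketch (my_function and my_list are the names from the question; the body is elided, and os.devnull can be swapped for real log files as the py2exe docs suggest):

import os
import sys

def my_function(my_list):
    # A windowed (non-console) executable gives spawned processes no attached
    # streams, so install real file objects before anything calls flush().
    if sys.stdout is None:
        sys.stdout = open(os.devnull, 'w')
        sys.stderr = open(os.devnull, 'w')
    ...  # the actual work goes here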
I have a Python script which, when run, logs information to the terminal. I want to send this logging information to a text file.
To achieve this, at the beginning of the file I insert
import subprocess
subprocess.call(['script', 'logfile'])
and at the end of the file I put
subprocess.call(['exit'])
The problem with this is that when it calls the first command, script logfile, the script stops there.
Any suggestions on how I could make this work would be really helpful. Thanks in advance!
The problem is that subprocess.call isn't returning until the shell spawned by script exits, at which point your Python script will resume.
The simplest way to do what you want is to call script itself with your Python script as an argument. Instead of
#!/usr/bin/python
import subprocess
subprocess.call(['script', 'logfile'])
# Rest of your Python code
subprocess.call(['exit'])
you will use
#!/usr/bin/python
import os
import sys

if '_underscript' not in os.environ:
    os.environ['_underscript'] = "yes"
    cmd_args = ['script', 'logfile', 'python'] + sys.argv
    os.execvp('script', cmd_args)

# Rest of your Python code
The environment variable prevents your script from entering an infinite loop of re-running itself with script. When you run your Python script, it first checks its environment for a variable that should not yet exist. If the variable is absent, the script sets it and then calls script to re-run itself; execvp replaces the running script with that call, so nothing else in the first invocation executes. The second time your script runs, the variable _underscript does exist, so the if block is skipped and the rest of your script runs as intended.
Seems like that's the expected behaviour of subprocess.call(...). If you want to capture the output of the script to a file, you'll need to open a new file handler in write mode, and tell the subprocess.call where to direct the stdout, which is the terminal output you typically would see.
Try:
import subprocess

with open('/tmp/mylogfile.log', 'w') as f:
    subprocess.call(['/path/to/script'], stdout=f)
Then in the terminal you can run tail /tmp/mylogfile.log to confirm.
The final subprocess.call(['exit']) isn't required for what you're trying to achieve; exit is a shell builtin rather than an executable, so that call would fail anyway.
You can read more in the Python docs, depending on which version of Python you're using: https://docs.python.org/2/library/subprocess.html
The file doesn't need to pre-exist. Hope that helps!
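If you also want to keep the live terminal output while logging it, one option is a small tee implemented with Popen. A minimal sketch (/path/to/script is a placeholder, as above; text=True needs Python 3.7+):

import subprocess

with open('/tmp/mylogfile.log', 'w') as log:
    proc = subprocess.Popen(['/path/to/script'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            text=True)
    for line in proc.stdout:
        print(line, end='')   # live copy on the terminal
        log.write(line)       # logged copy
    proc.wait()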
Goal
I'd like to make my curses Python application display its output on a Linux machine's first physical console (TTY1) by adding it to /etc/inittab, reloading init with telinit q and so on.
I'd like to avoid a hacky way of using IO redirection when starting it from /etc/inittab with:
1:2345:respawn:/path/to/app.py > /dev/tty1 < /dev/tty1
What I'm after is doing it natively from within my app, similar to the way getty does it, i.e. you use a command line argument to tell it on which TTY to listen to:
S0:2345:respawn:/sbin/getty -L ttyS1 115200 vt100
Example code
For simplicity, let's say I've written this very complex app that, when invoked, prints some content using ncurses routines.
import curses

class CursesApp(object):
    def __init__(self, stdscr):
        self.stdscr = stdscr

        # Code producing some output, accepting user input, etc.
        # ...

curses.wrapper(CursesApp)
The code I already have does everything I need, except that it only shows its output on the terminal it's run from. When invoked from inittab without the hacky redirection I mentioned above, it works but there's no output on TTY1.
I know that init doesn't redirect input and output by itself, so that's expected.
How would I need to modify my existing code to send its output to the requested TTY instead of STDOUT?
PS. I'm not asking how to add support for command line arguments, I already have this but removed it from the code sample for brevity.
This is rather simple. Just open the terminal device once for input and once for output; then duplicate the input descriptor to the active process' file descriptor 0, and output descriptor over file descriptors 1 and 2. Then close the other handles to the TTY:
import os
import sys

with open('/dev/tty6', 'rb') as inf, open('/dev/tty6', 'wb') as outf:
    os.dup2(inf.fileno(), 0)
    os.dup2(outf.fileno(), 1)
    os.dup2(outf.fileno(), 2)
I tested this with the cmd module running on TTY6:
import cmd
cmd.Cmd().cmdloop()
Works perfectly. With curses, it is apparent from the garbled display that something is missing: the TERM environment variable:
os.environ['TERM'] = 'linux'
Execute all these statements before even importing curses and it should work.
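Putting it all together for the TTY1 case from the question, a minimal sketch (the device path is hard-coded here, whereas the real app would take it from a command line argument, and the process needs permission to open /dev/tty1, which it has when started from inittab):

import os

os.environ['TERM'] = 'linux'   # set before curses ever queries the terminal

with open('/dev/tty1', 'rb') as inf, open('/dev/tty1', 'wb') as outf:
    os.dup2(inf.fileno(), 0)
    os.dup2(outf.fileno(), 1)
    os.dup2(outf.fileno(), 2)

import curses   # imported only after the redirection is in place

def app(stdscr):
    # stand-in for the question's CursesApp
    stdscr.addstr(0, 0, 'running on tty1')
    stdscr.refresh()
    stdscr.getkey()

curses.wrapper(app)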