I have three scripts: starttest.py kicks off the execution of methods called in test.py, and those methods are defined in module.py.
There are many print statements in each of the files, and I want to capture every print statement in my log file from starttest.py itself. I tried reassigning sys.stdout in starttest.py, but that only captures the print statements from starttest.py; it has no effect on the print statements in test.py and module.py.
Any suggestions for capturing the print statements from all of the files in a single place?
Before importing anything from test.py or module.py, replace the sys.stdout file object with one of your liking:
import sys
sys.stdout = open("test-output.txt", "wt")
# Import the rest
If you're running on a Unix-like operating system, there is a safer method that does not need to replace the file object reference, especially since overwriting sys.stdout does not guarantee that the previous object is destroyed:
import os
import sys
# Duplicate the file's descriptor onto stdout's descriptor (fd 1), so anything
# written to stdout lands in the file; 0o644 is the octal file mode.
fd = os.open("test-output.txt", os.O_WRONLY | os.O_CREAT, 0o644)
os.dup2(fd, sys.stdout.fileno())
os.close(fd)
Note that the above trick is used by almost all daemonization implementations for Python.
Even though not directly related, remember also that you can use your shell to redirect command output to a file (works on Windows too):
python starttest.py > test-output.txt
Maybe look at the logging module that comes with Python; a minimal sketch follows below.
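For instance, a minimal sketch (file name and format are placeholders, and the handlers= argument to basicConfig needs Python 3.3+): configure logging once in starttest.py, and have every module log through logging.getLogger(__name__) instead of print:

import logging

# Configure the root logger once, before importing test.py or module.py.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s: %(message)s",
    handlers=[
        logging.FileHandler("test-output.txt"),  # everything goes to the file...
        logging.StreamHandler(),                 # ...and to the console
    ],
)

logging.getLogger(__name__).info("replaces a print() call in any module")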
Related
I have two Python files (main.py and main_test.py). The file main_test.py is executed within main.py. When I do not use a log file this is what gets printed out:
Main file: 17:41:18
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
When I use a log file and run main.py > log, I get the following:
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
Main file: 17:41:18
Also, when I use python3 main.py | tee log to print and log the output, it waits and prints everything out only after finishing. In addition, the problem of the reversed order remains.
Questions
How can I fix the reversed print out?
How can I print out results simultaneously in terminal and log them in a correct order?
Python files for replication
main.py
import os
import time
import datetime
import pytz
python_file_name = 'main_test'+'.py'
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
print(f'Main file: {curr_time}')
cwd = os.path.join(os.getcwd(), python_file_name)
os.system(f'python3 {cwd}')
main_test.py
import pytz
import datetime
import time
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
for i in range(3):
    curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
    print(f'Executed file: {curr_time}')
    time.sleep(1)
When you run a script like this:
python main.py>log
The shell redirects output from the script to a file called log. However, if the script launches other scripts in their own subshell (which is what os.system() does), the output of those does not get captured.
What is surprising about your example is that you'd see anything at all when redirecting, since the output should have been redirected and no longer be echoed to the terminal, so perhaps there's something you're leaving out here.
Also, tee waits for EOF on standard in, or for some error to occur, so the behaviour you're seeing there makes sense. This is intended behaviour.
Why bother with shells at all, though? Why not write a few functions to call, import the other Python module, and call its functions, as sketched below? Or, if you need things to run in parallel (which they didn't in your example), look at multiprocessing.
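For instance, a minimal sketch of the import approach, assuming main_test.py is refactored so its loop lives in a function (run() is a hypothetical name):

# main_test.py: wrap the loop in a function instead of running it at import time
def run():
    for i in range(3):
        print('Executed file: ...')

# main.py: import and call it directly
import main_test

print('Main file: ...')
main_test.run()  # same process, same stdout, so ordering and redirection just work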
In direct response to your questions:
"How can I fix the reversed print out?"
Don't use redirection and instead write to a file directly from the script; or ensure you use the same redirection when calling other scripts from the first (that will get messy); or capture the output from the subprocesses in the subshell and pipe it to the standard out of your main script.
"How can I print out results simultaneously in terminal and log them in a correct order?"
You should probably just do it in the script; otherwise this is not really a Python question, and you should try Super User or similar sites to see if there's some way to have tee or similar tools write through live.
In general, though, unless you have really strong reasons to have the other functionality running in other shells, you should look at solving your problems within the Python script. And if you can't, you can use something like Popen or its derivatives to capture the subscript's output and do what you need, instead of relying on tools that may or may not be available on the host OS running your script; see the sketch below.
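As a minimal sketch of that last suggestion (file names are placeholders), the parent can read the child's output line by line and write it both to its own stdout and to a log file, a tee in pure Python:

import subprocess
import sys

with open('log', 'w') as log_file:
    proc = subprocess.Popen(
        [sys.executable, 'main_test.py'],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # fold stderr into the same stream
        universal_newlines=True,   # decode bytes to text
    )
    for line in proc.stdout:       # each line arrives as the child produces it
        sys.stdout.write(line)     # echo to the terminal...
        log_file.write(line)       # ...and to the log, in order
    proc.wait()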
I want to run multiple Python scripts (let's say they all start from a main script called main.py) which write to one log file whose name contains the date and time of creation. I also need the log output written to both the console and the file. This is Python 2. I tried many different ways with no success.
Example: main.py runs the scripts python1.py and python2.py one after the other, and all three Python scripts write to the same log file, which has a date and time in its name, and the log shows on the console while running.
Also, can something like this be done through a Python script which is separate from these files? For example, a fourth file called log_to_one_file.py?
If somebody knows how to make it happen, I will be glad to know...
From the docs:
Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process.
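A minimal sketch along those lines (the logger name and file name pattern are placeholders; since shared loggers only work within one interpreter process, main.py should import the other scripts rather than spawn them):

# main.py
import datetime
import logging

log_name = datetime.datetime.now().strftime('run-%Y-%m-%d_%H-%M-%S.log')
logger = logging.getLogger('shared')
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler(log_name))  # date and time in the file name
logger.addHandler(logging.StreamHandler())        # echoed to the console

logger.info('main starting')
# import python1  # python1.py logs via logging.getLogger('shared').info(...)
# import python2  # same logger object, same file, same console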
That sounds like a homework exercise, which assumes the non-existence of ready-made Python modules.
In that case it would be better to look at Python's file operations and create a function that receives the log content and the log file path as arguments, and appends the content to that file after prefixing it with the date and time. Then you can create a file in the same folder called log_to_one_file.py, place the function there, and import it into any file that needs logging using from log_to_one_file import function.
In case a more plug-in solution is needed, a more advanced answer is that you could overwrite the default sys.stdout object and attach a file stream to it, like this:
import sys

class Logger(object):
    def __init__(self, path):
        self.terminal = sys.stdout
        self.log = open(path, "a")

    def write(self, message):
        self.terminal.write(message)  # echo to the console...
        self.log.write(message)       # ...and append to the log file

    def flush(self):
        self.terminal.flush()
        self.log.flush()

sys.stdout = Logger(path)  # path is the log file's location
In the above solution you can prefix the message with a date and time inside the write method of the Logger class.
Then, after this piece of code has run, whenever you call print in your program, it will automatically write to the file and log to your console, using the formatting you have set it to have.
You can read up on Python context managers if you want to apply the above solution in a more limited scope; see the sketch below.
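For instance, a minimal sketch of scoping the redirection with a context manager (Logger is the class defined above; the file name is a placeholder):

from contextlib import contextmanager
import sys

@contextmanager
def logged_stdout(path):
    original = sys.stdout
    sys.stdout = Logger(path)  # the Logger from above tees to console and file
    try:
        yield
    finally:
        sys.stdout.log.close()
        sys.stdout = original  # restore the real stdout on exit

with logged_stdout('run.log'):
    print('captured in both places')
print('back to console only')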
For the life of me I can't figure this one out.
I have two applications built in Python, so two projects in different folders. Is there a command to say, in the first application, something like run file2 from documents/project2/test2.py?
I tried something like os.system('') and exec(), but those only seem to work if the file is in the same folder. How can I give a command a path like documents/project2 and then, for example:
exec(documents/project2 python test2.py) ?
Short version:
Is there a command that runs python test2.py while test2.py lives in a completely different folder/project?
Thanks for all feedback!
There's a number of approaches to take.
1 - Import the .py
If the path to the other Python script can be made relative to your project, you can simply import the .py. This will cause all the code at the 'root' level of the script to be executed and makes functions as well as type and variable definitions available to the script importing it.
Of course, this only works if you control how and where everything is installed. It's the most preferable solution, but only works in limited situations.
from ..other_package import myscript  # relative imports use the from ... import form
2 - Evaluate the code
You can load the contents of the Python file like any other text file and execute the contents. This is considered more of a security risk, but given the interpreted nature of Python in normal use not that much worse than an import under normal circumstances.
Here's how:
with open('/path/to/myscript.py', 'r') as f:
    exec(f.read())
Note that, if you need to pass values into the code inside the script, or get values out of it, you probably want to use files for that in this case.
I'd consider this the least preferable solution, due to it being a bit inflexible and not very secure, but it's definitely very easy to set up.
3 - Call it like any other external program
From a Python script, you can call any other executable, that includes Python itself with another script.
Here's how:
from subprocess import run

run(['python', 'path/to/myscript.py'])  # pass the command as a list of arguments
This is generally the preferable way to go about it. You can use the command line to interface with the script, and capture the output.
You can also pipe in text with stdin= or capture the output from the script with stdout=, using subprocess.Popen directly.
For example, take this script, called quote.py
import sys
text = sys.stdin.read()
print(f'In the words of the poet:\n"{text}"')
This takes any text from standard in and prints it, with some extra text, to standard out like any Python script. You could call it like this:
dir | python quote.py
To use it from another Python script:
from subprocess import Popen, PIPE
s_in = b'something to say\nright here\non three lines'
p = Popen(['python', 'quote.py'], stdin=PIPE, stdout=PIPE)
s_out, _ = p.communicate(s_in)
print('Here is what the script produced:\n\n', s_out.decode())
Try this:
exec(open("FilePath").read())
It should work if you got the file path correct.
Mac example:
exec(open("/Users/saudalfaris/Desktop/Test.py").read())
Windows example:
exec(open("C:\Projects\Python\Test.py").read())
I'm trying to do some simple IPC in Python as follows: One Python process launches another with subprocess. The child process sends some data into a pipe and the parent process receives it.
Here's my current implementation:
# parent.py
import pickle
import os
import subprocess
import sys
read_fd, write_fd = os.pipe()
if hasattr(os, 'set_inheritable'):
    os.set_inheritable(write_fd, True)
child = subprocess.Popen((sys.executable, 'child.py', str(write_fd)), close_fds=False)
try:
    with os.fdopen(read_fd, 'rb') as reader:
        data = pickle.load(reader)
finally:
    child.wait()
assert data == 'This is the data.'
# child.py
import pickle
import os
import sys
with os.fdopen(int(sys.argv[1]), 'wb') as writer:
    pickle.dump('This is the data.', writer)
On Unix this works as expected, but if I run this code on Windows, I get the following error, after which the program hangs until interrupted:
Traceback (most recent call last):
File "child.py", line 4, in <module>
with os.fdopen(int(sys.argv[1]), 'wb') as writer:
File "C:\Python34\lib\os.py", line 978, in fdopen
return io.open(fd, *args, **kwargs)
OSError: [Errno 9] Bad file descriptor
I suspect the problem is that the child process isn't inheriting the write_fd file descriptor. How can I fix this?
The code needs to be compatible with Python 2.7, 3.2, and all subsequent versions. This means that the solution can't depend on either the presence or the absence of the changes to file descriptor inheritance specified in PEP 446. As implied above, it also needs to run on both Unix and Windows.
(To answer a couple of obvious questions: The reason I'm not using multiprocessing is because, in my real-life non-simplified code, the two Python programs are part of Django projects with different settings modules. This means they can't share any global state. Also, the child process's standard streams are being used for other purposes and are not available for this.)
UPDATE: After setting the close_fds parameter, the code now works in all versions of Python on Unix. However, it still fails on Windows.
subprocess.PIPE is implemented for all platforms. Why don't you just use this?
If you want to manually create and use an os.pipe(), you need to take care of the fact that Windows does not support fork(). It instead uses CreateProcess(), which by default does not make the child inherit open files. But there is a way: each individual file descriptor can be made explicitly inheritable. This requires calling the Win32 API. I have implemented this in gipc; see the _pre/post_createprocess_windows() methods there.
As #Jan-Philip Gehrcke suggested, you could use subprocess.PIPE instead of os.pipe():
#!/usr/bin/env python
# parent.py
import sys
from subprocess import check_output
data = check_output([sys.executable or 'python', 'child.py'])
assert data.decode().strip() == 'This is the data.'
check_output() uses stdout=subprocess.PIPE internally.
You could use obj = pickle.loads(data) if child.py uses data = pickle.dumps(obj).
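A minimal sketch of that pickle-over-stdout variant (written for Python 3, where the binary stream is sys.stdout.buffer; on Python 2, sys.stdout can be used directly):

# child.py: dump the pickled object to the binary stdout stream
import pickle
import sys

pickle.dump('This is the data.', sys.stdout.buffer)

# parent.py: capture the child's stdout and unpickle it
import pickle
import subprocess
import sys

data = subprocess.check_output([sys.executable, 'child.py'])
assert pickle.loads(data) == 'This is the data.'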
And the child.py could be simplified:
#!/usr/bin/env python
# child.py
print('This is the data.')
If the child process is written in Python then, for greater flexibility, you could import the child script as a module and call its functions instead of using subprocess. You could use the multiprocessing or concurrent.futures modules if you need to run some Python code in a different process.
If you can't use standard streams, then your Django applications could use sockets to talk to one another, as sketched below.
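For example, a minimal sketch using multiprocessing.connection from the standard library (the port and authkey are placeholders); it avoids the standard streams entirely and works for two unrelated processes:

# receiver.py: one process listens for pickled objects over a local socket
from multiprocessing.connection import Listener

listener = Listener(('localhost', 6000), authkey=b'secret')
conn = listener.accept()
print(conn.recv())   # any picklable object comes through
conn.close()
listener.close()

# sender.py: the other process connects and sends
from multiprocessing.connection import Client

conn = Client(('localhost', 6000), authkey=b'secret')
conn.send('This is the data.')
conn.close()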
The reason I'm not using multiprocessing is because, in my real-life non-simplified code, the two Python programs are part of Django projects with different settings modules. This means they can't share any global state.
This seems bogus. Under the hood, multiprocessing may itself use the subprocess module. If you don't want to share global state, then don't share it; that is the default for multiple processes. You should probably ask a more specific question about how to organize the communication between the various parts of your project for your particular case.
I am writing a test_examples.py to test the execution of a folder of python examples. Currently I use glob to parse the folder and then use subprocess to execute each python file. The issue is that some of these files are plots and they open a Figure window that halts until the window is closed.
A lot of the questions on this issue offer solutions from within the file, but how could I suppress the output whilst running the file externally without any modification?
What I have done so far is:
import subprocess as sb
import glob
from nose import with_setup
def test_execute():
    files = glob.glob("../*.py")
    files.sort()
    for fl in files:
        try:
            sb.call(["ipython", "--matplotlib=Qt4", fl])
        except:
            assert False, "File: %s ran with some errors\n" % (fl)
This kind of works, in that it suppresses the figures, but it doesn't throw any exceptions (even if the program has an error). I am also not 100% sure what it is doing. Is it appending all of the figures to Qt4, or will the figures be removed from memory when each script has finished?
Ideally I would like to run each .py file and capture its stdout and stderr, then use the exit condition to report the stderr and fail the test. Then, when I run nosetests, it will run the examples folder of programs and check that they all run.
You could force matplotlib to use the Agg backend (which won't open any windows) by inserting the following lines at the top of each source file:
import matplotlib
matplotlib.use('Agg')
Here's a one-liner shell command that will dynamically insert these lines at the top of my_script.py (without modifying the file on disk) before piping the output to the Python interpreter for execution:
~$ sed "1i import matplotlib\nmatplotlib.use('Agg')\n" my_script.py | python
You should be able to make the equivalent call using subprocess, like this:
p1 = sb.Popen(["sed", "1i import matplotlib\nmatplotlib.use('Agg')\n", fl],
              stdout=sb.PIPE)
exit_cond = sb.call(["python"], stdin=p1.stdout)
You could capture the stderr and stdout from your scripts by passing the stdout= and stderr= arguments to sb.call(); a sketch follows below. This would, of course, only work in Unix environments that have the sed utility.
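For instance, a minimal sketch of that capture, building on the Popen pipeline above (the log file name is a placeholder); sb.call() returns only the exit code, but it will write the child's streams into any file object you hand it:

with open(fl + '.out', 'w') as out:
    exit_cond = sb.call(["python"], stdin=p1.stdout,
                        stdout=out, stderr=sb.STDOUT)  # fold stderr into the file
assert exit_cond == 0, "File: %s ran with some errors\n" % (fl)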
Update
This is actually quite an interesting problem. I thought about it a bit more, and I think this is a more elegant solution (although still a bit of a hack):
#!/usr/bin/python
import sys
import os
import glob
from contextlib import contextmanager
import traceback
set_backend = "import matplotlib\nmatplotlib.use('Agg')\n"
@contextmanager
def redirected_output(new_stdout=None, new_stderr=None):
    save_stdout = sys.stdout
    save_stderr = sys.stderr
    if new_stdout is not None:
        sys.stdout = new_stdout
    if new_stderr is not None:
        sys.stderr = new_stderr
    try:
        yield None
    finally:
        sys.stdout = save_stdout
        sys.stderr = save_stderr

def run_exectests(test_dir, log_path='exectests.log'):
    test_files = glob.glob(os.path.join(test_dir, '*.py'))
    test_files.sort()
    passed = []
    failed = []
    with open(log_path, 'w') as f:
        with redirected_output(new_stdout=f, new_stderr=f):
            for fname in test_files:
                print(">> Executing '%s'" % fname)
                try:
                    code = compile(set_backend + open(fname, 'r').read(),
                                   fname, 'exec')
                    exec(code, {'__name__': '__main__'}, {})
                    passed.append(fname)
                except:
                    traceback.print_exc()
                    failed.append(fname)
    print(">> Passed %i/%i tests: " % (len(passed), len(test_files)))
    print("Passed: " + ', '.join(passed))
    print("Failed: " + ', '.join(failed))
    print("See %s for details" % log_path)
    return passed, failed

if __name__ == '__main__':
    run_exectests(*sys.argv[1:])
Conceptually this is very similar to my previous solution: it works by reading in the test scripts as strings and prepending them with a couple of lines that will import matplotlib and set the backend to a non-interactive one. The string is then compiled to Python bytecode and executed. The main advantage is that this ought to be platform-independent, since sed is not required.
The {'__name__':'__main__'} trick with the globals is necessary if, like me, you tend to write your scripts like this:
def run_me():
    ...

if __name__ == '__main__':
    run_me()
A few points to consider:
If you try to run this function from within an ipython session where you've already imported matplotlib and set an interactive backend, the set_backend trick won't work and you'll still get figures popping up. The easiest way is to run it directly from the shell (~$ python exectests.py testdir/ logfile.log), or from an (i)python session where you haven't set an interactive backend for matplotlib. It should also work if you run it in a different subprocess from within your ipython session.
I'm using the contextmanager trick from this answer to redirect stdout and stderr to a log file. Note that this isn't threadsafe, but I think it's pretty unusual for scripts to open subprocesses.
Coming to this late, but I am trying to figure something similar out myself, and this is what I have come up with so far. Basically, if your plots are calling, for example, matplotlib.pyplot.show to show the plot, you can mock that method out using a patch decorator. Something like:
from unittest.mock import patch

@patch('matplotlib.pyplot.show')  # passes a mock object to the decorated function
def test_execute(mock_show):
    assert mock_show() == None  # the mock swallows the call and returns None
    files = glob.glob("../*.py")
    files.sort()
    for fl in files:
        try:
            sb.call(["ipython", fl])
        except:
            assert False, "File: %s ran with some errors\n" % (fl)
Basically the patch decorator should replace any call to matplotlib.pyplot.show within the decorated function with a mock object that doesn't do anything. At least that's how it's supposed to work in theory. In my application, my terminal is still trying to open plots and this is resulting in errors. I hope it works better for you, and I will update if I figure out something wrong in the above that is leading to my issue.
Edit: for completeness, you might be generating figures with a call to matplotlib.pyplot.figure() or matplotlib.pyplot.subplots(), in which case these are what you would mock out instead of matplotlib.pyplot.show(). Same syntax as above, you would just use:
@patch('matplotlib.pyplot.figure')
or:
@patch('matplotlib.pyplot.subplots')