Redirect stdout to a file in Python?

How do I redirect stdout to an arbitrary file in Python?
When a long-running Python script (e.g., a web application) is started from within an ssh session and backgrounded, and the ssh session is then closed, the application will raise IOError and fail the moment it tries to write to stdout. I needed to find a way to make the application and its modules output to a file rather than stdout, to prevent failure due to IOError. Currently, I employ nohup to redirect output to a file, and that gets the job done, but I was wondering whether there is a way to do it without using nohup, out of curiosity.
I have already tried sys.stdout = open('somefile', 'w'), but this does not seem to prevent some external modules from still outputting to the terminal (or maybe the sys.stdout = ... line did not fire at all). I know it works in the simpler scripts I've tested it on, but I haven't had time to test it on a web application yet.

If you want to do the redirection within the Python script, setting sys.stdout to a file object does the trick:
# for python3
import sys
with open('file', 'w') as sys.stdout:
    print('test')
A far more common method is to use shell redirection when executing (same on Windows and Linux):
$ python3 foo.py > file
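If you also want stderr in the file, or want to append rather than truncate, the usual shell idioms apply:
$ python3 foo.py > file 2>&1   # capture stderr as well
$ python3 foo.py >> file       # append instead of overwriting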

There is the contextlib.redirect_stdout() function in Python 3.4+:
from contextlib import redirect_stdout
with open('help.txt', 'w') as f:
    with redirect_stdout(f):
        print('it now prints to `help.txt`')
It is similar to:
import sys
from contextlib import contextmanager
@contextmanager
def redirect_stdout(new_target):
    old_target, sys.stdout = sys.stdout, new_target  # replace sys.stdout
    try:
        yield new_target  # run some code with the replaced stdout
    finally:
        sys.stdout = old_target  # restore to the previous value
that can be used on earlier Python versions. The latter version is not reusable or reentrant, though it can be made so if desired, as sketched below.
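As an illustration, here is a minimal sketch of a reusable, reentrant class-based variant (the name ReusableRedirect is purely illustrative, not from any library):
import sys

class ReusableRedirect:
    # A reusable/reentrant stand-in for the generator version above.
    def __init__(self, new_target):
        self._new_target = new_target
        self._old_targets = []  # a stack, so nested or repeated use works

    def __enter__(self):
        self._old_targets.append(sys.stdout)  # save whatever stdout is now
        sys.stdout = self._new_target
        return self._new_target

    def __exit__(self, exc_type, exc_value, traceback):
        sys.stdout = self._old_targets.pop()  # restore the previous stdout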
Neither version redirects stdout at the file descriptor level, though, e.g.:
import os
import sys
from contextlib import redirect_stdout

stdout_fd = sys.stdout.fileno()
with open('output.txt', 'w') as f, redirect_stdout(f):
    print('redirected to a file')
    os.write(stdout_fd, b'not redirected')
    os.system('echo this also is not redirected')
b'not redirected' and 'echo this also is not redirected' are not redirected to the output.txt file.
To redirect at the file descriptor level, os.dup2() could be used:
import os
import sys
from contextlib import contextmanager
def fileno(file_or_fd):
    fd = getattr(file_or_fd, 'fileno', lambda: file_or_fd)()
    if not isinstance(fd, int):
        raise ValueError("Expected a file (`.fileno()`) or a file descriptor")
    return fd
@contextmanager
def stdout_redirected(to=os.devnull, stdout=None):
    if stdout is None:
        stdout = sys.stdout

    stdout_fd = fileno(stdout)
    # copy stdout_fd before it is overwritten
    # NOTE: `copied` is inheritable on Windows when duplicating a standard stream
    with os.fdopen(os.dup(stdout_fd), 'wb') as copied:
        stdout.flush()  # flush library buffers that dup2 knows nothing about
        try:
            os.dup2(fileno(to), stdout_fd)  # $ exec >&to
        except ValueError:  # filename
            with open(to, 'wb') as to_file:
                os.dup2(to_file.fileno(), stdout_fd)  # $ exec > to
        try:
            yield stdout  # allow code to be run with the redirected stdout
        finally:
            # restore stdout to its previous value
            # NOTE: dup2 makes stdout_fd inheritable unconditionally
            stdout.flush()
            os.dup2(copied.fileno(), stdout_fd)  # $ exec >&copied
The same example works now if stdout_redirected() is used instead of redirect_stdout():
import os
import sys

stdout_fd = sys.stdout.fileno()
with open('output.txt', 'w') as f, stdout_redirected(f):
    print('redirected to a file')
    os.write(stdout_fd, b'it is redirected now\n')
    os.system('echo this is also redirected')
print('this goes back to stdout')
The output that previously was printed on stdout now goes to output.txt for as long as the stdout_redirected() context manager is active.
Note: stdout.flush() does not flush C stdio buffers on Python 3, where I/O is implemented directly on read()/write() system calls. To flush all open C stdio output streams, you could call libc.fflush(None) explicitly if some C extension uses stdio-based I/O:
try:
    import ctypes
    from ctypes.util import find_library
except ImportError:
    libc = None
else:
    try:
        libc = ctypes.cdll.msvcrt  # Windows
    except OSError:
        libc = ctypes.cdll.LoadLibrary(find_library('c'))

def flush(stream):
    try:
        libc.fflush(None)
        stream.flush()
    except (AttributeError, ValueError, IOError):
        pass  # unsupported
You could use the stdout parameter to redirect other streams, not only sys.stdout, e.g., to merge sys.stderr and sys.stdout:
def merged_stderr_stdout(): # $ exec 2>&1
    return stdout_redirected(to=sys.stdout, stdout=sys.stderr)
Example:
from __future__ import print_function
import sys
with merged_stderr_stdout():
    print('this is printed on stdout')
    print('this is also printed on stdout', file=sys.stderr)
Note: stdout_redirected() mixes buffered I/O (sys.stdout usually) and unbuffered I/O (operations on file descriptors directly). Beware, there could be buffering issues.
To answer your edit: you could use python-daemon to daemonize your script, and use the logging module (as @erikb85 suggested) instead of print statements and merely redirecting stdout for your long-running Python script that you currently run using nohup.
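A minimal sketch of that combination (python-daemon is a third-party package; the log path and messages here are placeholders):
import logging
import daemon  # pip install python-daemon

def main():
    logging.basicConfig(filename='/tmp/myapp.log',
                        level=logging.INFO,
                        format='%(asctime)s %(levelname)s %(message)s')
    logging.info('service started')  # instead of print()

with daemon.DaemonContext():  # detaches from the terminal, closes std streams
    main()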

You can also try this, which is much better:
import sys
class Logger(object):
    def __init__(self, filename="Default.log"):
        self.terminal = sys.stdout
        self.log = open(filename, "a")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

sys.stdout = Logger("yourlogfilename.txt")
print "Hello world !" # this is should be saved in yourlogfilename.txt

The other answers didn't cover the case where you want forked processes to share your new stdout.
To do that:
from os import open, close, dup, O_WRONLY

old = dup(1)
close(1)
open("file", O_WRONLY)  # should open on 1, the lowest free descriptor

# ... do stuff, and then restore:
close(1)
dup(old)  # should dup to 1
close(old)  # get rid of leftovers
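For illustration, here is a sketch of the same dance with a child process inheriting the redirected descriptor (the file name is arbitrary):
import os
import subprocess

old = os.dup(1)                                 # save the real stdout
with open('shared.log', 'w') as f:
    os.dup2(f.fileno(), 1)                      # fd 1 now points at the file
    subprocess.run(['echo', 'from the child'])  # the child inherits fd 1
    print('from the parent', flush=True)        # lands in the same file
os.dup2(old, 1)                                 # restore the terminal
os.close(old)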

Quoted from PEP 343 -- The "with" Statement (added import statement):
Redirect stdout temporarily:
import sys
from contextlib import contextmanager
@contextmanager
def stdout_redirected(new_stdout):
    save_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield None
    finally:
        sys.stdout = save_stdout
Used as follows:
with open(filename, "w") as f:
    with stdout_redirected(f):
        print "Hello world"
This isn't thread-safe, of course, but neither is doing this same dance manually. In single-threaded programs (for example in scripts) it is a popular way of doing things.

import sys
sys.stdout = open('stdout.txt', 'w')
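If you take this route, note that sys.__stdout__ keeps a reference to the interpreter's original stream, so you can restore it afterwards; a small sketch:
import sys

sys.stdout = open('stdout.txt', 'w')
print('goes to the file')
sys.stdout.close()
sys.stdout = sys.__stdout__  # restore the original stream
print('back on the terminal')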

Here is a variation of Yuda Prawira's answer:
- implement flush() and all the file attributes
- write it as a contextmanager
- capture stderr too
import contextlib, sys
@contextlib.contextmanager
def log_print(file):
    # capture all output to a log file while still printing it
    class Logger:
        def __init__(self, file):
            self.terminal = sys.stdout
            self.log = file

        def write(self, message):
            self.terminal.write(message)
            self.log.write(message)

        def __getattr__(self, attr):
            return getattr(self.terminal, attr)

    logger = Logger(file)
    _stdout = sys.stdout
    _stderr = sys.stderr
    sys.stdout = logger
    sys.stderr = logger
    try:
        yield logger.log
    finally:
        sys.stdout = _stdout
        sys.stderr = _stderr
with log_print(open('mylogfile.log', 'w')):
    print('hello world')
    print('hello world on stderr', file=sys.stderr)

# you can capture the output to a string with:
# with log_print(io.StringIO()) as log:
#     ....
#     print('[captured output]', log.getvalue())

You need a terminal multiplexer like tmux or GNU screen.
I'm surprised that a small comment by Ryan Amos on the original question is the only mention of a solution far preferable to all the others on offer, no matter how clever the Python trickery may be and how many upvotes it has received. Further to Ryan's comment, tmux is a nice alternative to GNU screen.
But the principle is the same: if you ever find yourself wanting to leave a terminal job running while you log out, head to the cafe for a sandwich, pop to the bathroom, go home (etc.), and then later reconnect to your terminal session from anywhere or any computer as though you'd never been away, terminal multiplexers are the answer. Think of them as VNC or remote desktop for terminal sessions. Anything else is a workaround. As a bonus, when the boss and/or partner comes in and you inadvertently ctrl-w / cmd-w your terminal window instead of your browser window with its dodgy content, you won't have lost the last 18 hours' worth of processing!
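For reference, the basic tmux round trip looks like this (the session name is arbitrary):
$ tmux new -s mywork      # start a session and launch your script inside it
$ # press Ctrl-b then d to detach; the job keeps running
$ tmux attach -t mywork   # reattach later, from any ssh session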

Based on this answer: https://stackoverflow.com/a/5916874/1060344, here is another way I figured out, which I use in one of my projects. Whatever you replace sys.stderr or sys.stdout with, you have to make sure that the replacement complies with the file interface, especially if you are doing this because stderr/stdout is used by some other library that is not under your control. That library may be using other methods of the file object.
Check out this approach, where I still let everything go to stderr/stdout (or any file for that matter) and also send the message to a log file using Python's logging facility (but you can really do anything with this):
import logging

class FileToLogInterface(file):  # Python 2: subclasses the built-in file type
    '''
    Interface to make sure that every time anything is written to stderr,
    it is also forwarded to a file.
    '''

    def __init__(self, *args, **kwargs):
        if 'cfg' not in kwargs:
            raise TypeError('argument cfg is required.')
        else:
            if not isinstance(kwargs['cfg'], config.Config):
                raise TypeError(
                    'argument cfg should be a valid '
                    'PostSegmentation configuration object i.e. '
                    'postsegmentation.config.Config')
        self._cfg = kwargs['cfg']
        kwargs.pop('cfg')
        self._logger = logging.getLogger('access_log')
        super(FileToLogInterface, self).__init__(*args, **kwargs)

    def write(self, msg):
        super(FileToLogInterface, self).write(msg)
        self._logger.info(msg)

Programs written in other languages (e.g. C) have to do special magic (called double-forking) expressly to detach from the terminal (and to prevent zombie processes). So, I think the best solution is to emulate them.
A plus of re-executing your program is that you can choose redirections on the command line, e.g. /usr/bin/python mycoolscript.py 2>&1 1>/dev/null
See this post for more info: What is the reason for performing a double fork when creating a daemon?
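For the curious, a bare-bones sketch of the classic double fork (UNIX only; error handling omitted, and a proper daemon library is preferable in practice):
import os
import sys

def daemonize():
    if os.fork() > 0:       # first fork: the shell's child exits
        sys.exit(0)
    os.setsid()             # become session leader, detach from the tty
    if os.fork() > 0:       # second fork: ensure we can't reacquire a tty
        sys.exit(0)
    sys.stdout.flush()
    sys.stderr.flush()
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):    # point stdin/stdout/stderr at /dev/null
        os.dup2(devnull, fd)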

I know this question is answered (using python abc.py > output.log 2>&1), but I still have to say:
When writing your program, don't write to stdout. Always use logging to output whatever you want. That will give you a lot of freedom in the future when you want to redirect, filter, or rotate the output files.
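For example, a minimal sketch of what that buys you (the logger name and file are placeholders):
import logging

# decided once at startup, not inside the modules:
logging.basicConfig(
    filename='app.log',  # drop this argument to log to the console instead
    level=logging.INFO,
    format='%(asctime)s %(name)s %(levelname)s %(message)s',
)

log = logging.getLogger(__name__)
log.info('processed %d records', 42)  # instead of print()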

As mentioned by @jfs, most solutions will not properly handle some types of stdout output, such as that from C extensions. There is a module on PyPI that takes care of all this, called wurlitzer. You just need its sys_pipes context manager. It's as easy as:
from contextlib import redirect_stdout
import os
from wurlitzer import sys_pipes
log = open("test.log", "a")
with redirect_stdout(log), sys_pipes():
print("print statement")
os.system("echo echo call")

Based on previous answers on this post, I wrote this class for myself as a more compact and flexible way of redirecting the output of pieces of code (here, just to a list) and ensuring that the output is restored afterwards.
class out_to_lt():
    def __init__(self, lt):
        if type(lt) == list:
            self.lt = lt
        else:
            raise Exception("Need to pass a list")

    def __enter__(self):
        import sys
        self._sys = sys
        self._stdout = sys.stdout
        sys.stdout = self
        return self

    def write(self, txt):
        self.lt.append(txt)

    def __exit__(self, type, value, traceback):
        self._sys.stdout = self._stdout
Used as:
lt = []
with out_to_lt(lt) as o:
print("Test 123\n\n")
print(help(str))
Updating. Just found a scenario where I had to add two extra methods, but it was easy to adapt:
class out_to_lt():
    ...
    def isatty(self):
        # True: running in a real terminal; False: piped, redirected, or cron
        return True

    def flush(self):
        pass

There are other versions using context managers, but nothing this simple. I actually just googled to double-check it would work, and was surprised not to see it, so for other people looking for a quick solution that is safe and directed at only the code within the context block, here it is:
import sys
with open('test_file', 'w') as sys.stdout:
    print('Testing 1 2 3')
Tested like so:
$ cat redirect_stdout.py
import sys
with open('test_file', 'w') as sys.stdout:
    print('Testing 1 2 3')
$ python redirect_stdout.py
$ cat test_file
Testing 1 2 3

Related

The simplest interface to let subprocess output go to both a file and stdout/stderr?

I want something with an effect similar to cmd > >(tee -a {{ out.log }}) 2> >(tee -a {{ err.log }} >&2) in a Python subprocess, without calling tee. Basically, write stdout to both stdout and the out.log file, and write stderr to both stderr and err.log. I know I could use a loop to handle it, but since I already have lots of Popen and subprocess.run calls in my code and I don't want to rewrite the entire thing, I wonder if there is any easier interface provided by some package that would just allow me to do something like:
subprocess.run(["ls", "-l"], stdout=some_magic_file_object(sys.stdout, 'out.log'), stderr=some_magic_file_object(sys.stderr, 'out.log') )
No simple way as far as I can tell, but here is a way:
import os

class Tee:
    def __init__(self, *files, bufsize=1):
        files = [x.fileno() if hasattr(x, 'fileno') else x for x in files]
        read_fd, write_fd = os.pipe()
        pid = os.fork()
        if pid:
            os.close(read_fd)
            self._fileno = write_fd
            self.child_pid = pid
            return
        os.close(write_fd)
        while buf := os.read(read_fd, bufsize):
            for f in files:
                os.write(f, buf)
        os._exit(0)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def fileno(self):
        return self._fileno

    def close(self):
        os.close(self._fileno)
        os.waitpid(self.child_pid, 0)
This Tee object takes a list of file objects (i.e. objects that either are integer file descriptors, or have a fileno method). It creates a child process that reads from its own fileno (which is what subprocess.run will write to) and writes that content to all of the files it was provided.
There's some lifecycle management needed, as its file descriptor must be closed and the child process must be waited on afterwards. For that, you either have to manage it manually by calling the Tee object's close method, or use it as a context manager as shown below.
Usage:
import subprocess
import sys
logfile = open('out.log', 'w')
stdout_magic_file_object = Tee(sys.stdout, logfile)
stderr_magic_file_object = Tee(sys.stderr, logfile)
# Use the file objects with as many subprocess calls as you'd like here
subprocess.run(["ls", "-l"], stdout=stdout_magic_file_object, stderr=stderr_magic_file_object)
# Close the files after you're done with them.
stdout_magic_file_object.close()
stderr_magic_file_object.close()
logfile.close()
A cleaner way would be to use context managers, shown below. It would require more refactoring though, so you may prefer manually closing the files instead.
import subprocess
import sys
with open('out.log', 'w') as logfile:
    with Tee(sys.stdout, logfile) as stdout, Tee(sys.stderr, logfile) as stderr:
        subprocess.run(["ls", "-l"], stdout=stdout, stderr=stderr)
One issue with this approach is that the child process writes to stdout immediately, so Python's own buffered output will often get mixed up in it. You can work around this by using Tee on a temp file plus the log file, and then printing the content of the temp file (and deleting it) once the Tee context block is exited. Making a subclass of Tee that does this automatically would be straightforward, but using it would be a bit cumbersome, since you would need to exit the context block (or otherwise have it run some code) before the subprocess output appears.
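A cheap partial mitigation is to flush Python's own stream before each call, so pending Python-side output lands in order (a sketch, assuming the Tee objects from the earlier usage example):
import subprocess
import sys

sys.stdout.flush()  # push any buffered Python output first
subprocess.run(['ls', '-l'], stdout=stdout_magic_file_object,
               stderr=stderr_magic_file_object)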


Python os.dup2 redirect enables output buffering on Windows Python consoles

I'm using a strategy based around os.dup2 (similar to examples on this site) to redirect C/Fortran-level output into a temporary file for capturing.
The only problem I've noticed is that if you use this code from an interactive shell on Windows (either python.exe or ipython), it has the strange side effect of enabling output buffering in the console.
Before capture, sys.stdout is some kind of file object that returns True for isatty(). Typing print('hi') causes hi to be output directly.
After capture, sys.stdout points to exactly the same file object, but print('hi') no longer shows anything until sys.stdout.flush() is called.
Below is a minimal example script, "test.py":
import os, sys, tempfile

class Capture(object):
    def __init__(self):
        super(Capture, self).__init__()
        self._org = None   # Original stdout stream
        self._dup = None   # Original system stdout descriptor
        self._file = None  # Temporary file to write stdout to

    def start(self):
        self._org = sys.stdout
        sys.stdout = sys.__stdout__
        fdout = sys.stdout.fileno()
        self._file = tempfile.TemporaryFile()
        self._dup = None
        if fdout >= 0:
            self._dup = os.dup(fdout)
            os.dup2(self._file.fileno(), fdout)

    def stop(self):
        sys.stdout.flush()
        if self._dup is not None:
            os.dup2(self._dup, sys.stdout.fileno())
            os.close(self._dup)
        sys.stdout = self._org
        self._file.seek(0)
        out = self._file.readlines()
        self._file.close()
        return out

def run():
    c = Capture()
    c.start()
    os.system('echo 10')
    print('20')
    x = c.stop()
    print(x)

if __name__ == '__main__':
    run()
Opening a command prompt and running the script works fine. This produces the expected output:
python.exe test.py
Running it from a Python shell does not:
python.exe
>>> import test
>>> test.run()
>>> print('hello?')
No output is shown until stdout is flushed:
>>> import sys
>>> sys.stdout.flush()
Does anybody have any idea what's going on?
Quick info:
- The issue appears on Windows, not on Linux (so probably not on Mac).
- Same behaviour in both Python 2.7.6 and Python 2.7.9.
- The script should capture C/Fortran output, not just Python output.
- It runs without errors on Windows, but afterwards print() no longer flushes.
I could confirm a related problem with Python 2 on Linux, but not with Python 3.
The basic problem is
>>> sys.stdout is sys.__stdout__
True
Thus you are using the original sys.stdout object all the time. And when you do the first output, in Python 2 it executes the isatty() system call once for the underlying file and stores the result.
You should open an altogether new file and replace sys.stdout with it.
Thus the proper way to write the Capture class would be:
import sys
import tempfile
import time
import os

class Capture(object):
    def __init__(self):
        super(Capture, self).__init__()

    def start(self):
        self._old_stdout = sys.stdout
        self._stdout_fd = self._old_stdout.fileno()
        self._saved_stdout_fd = os.dup(self._stdout_fd)
        self._file = sys.stdout = tempfile.TemporaryFile(mode='w+t')
        os.dup2(self._file.fileno(), self._stdout_fd)

    def stop(self):
        os.dup2(self._saved_stdout_fd, self._stdout_fd)
        os.close(self._saved_stdout_fd)
        sys.stdout = self._old_stdout
        self._file.seek(0)
        out = self._file.readlines()
        self._file.close()
        return out

def run():
    c = Capture()
    c.start()
    os.system('echo 10')
    x = c.stop()
    print(x)
    time.sleep(1)
    print("finished")

run()
With this program, in both Python 2 and Python 3, the output will be:
['10\n']
finished
with the first line appearing on the terminal instantaneously, and the second after a one-second delay.
This would fail for code that does `from sys import stdout`, however. Luckily, not much code does that.
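That caveat looks like this in practice: the imported name keeps pointing at the old object, so reassigning sys.stdout does not affect it:
from sys import stdout  # binds the current stdout object to a local name
import sys

sys.stdout = open('log.txt', 'w')
stdout.write('still goes to the terminal\n')  # the old object, unaffected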

Redirecting stdout to "nothing" in python

I have a large project consisting of a sufficiently large number of modules, each printing something to standard output. As the project has grown, the large number of print statements has made the program considerably slower.
So I now want to decide at runtime whether or not to print anything to stdout. I cannot make changes in the modules, as there are plenty of them. (I know I can redirect stdout to a file, but even this is considerably slow.)
So my question is: how do I redirect stdout to nothing, i.e. how do I make the print statement do nothing?
# I want to do something like this.
sys.stdout = None  # this obviously gives an error, as a NoneType object has no write method
Currently the only idea I have is to make a class with a write method that does nothing, and redirect stdout to an instance of that class:
class DontPrint(object):
    def write(*args):
        pass

dp = DontPrint()
sys.stdout = dp
Is there an inbuilt mechanism in Python for this? Or is there something better?
Cross-platform:
import os
import sys
f = open(os.devnull, 'w')
sys.stdout = f
On Windows:
f = open('nul', 'w')
sys.stdout = f
On Linux:
f = open('/dev/null', 'w')
sys.stdout = f
A nice way to do this is to create a small context manager that you wrap your prints in. You then just use it in a with-statement to silence all output.
Python 2:
import os
import sys
from contextlib import contextmanager
@contextmanager
def silence_stdout():
    old_target = sys.stdout
    try:
        with open(os.devnull, "w") as new_target:
            sys.stdout = new_target
            yield new_target
    finally:
        sys.stdout = old_target
with silence_stdout():
    print("will not print")

print("this will print")
Python 3.4+:
Python 3.4 has a context manager like this built in, so you can simply use contextlib like this:
import contextlib
with contextlib.redirect_stdout(None):
    print("will not print")

print("this will print")
If the code you want to suppress writes directly to sys.stdout, using None as the redirect target won't work. Instead you can use:
import contextlib
import sys
import os
with contextlib.redirect_stdout(open(os.devnull, 'w')):
    sys.stdout.write("will not print")

sys.stdout.write("this will print")
If your code writes to stderr instead of stdout, you can use contextlib.redirect_stderr instead of redirect_stdout.
Running this code only prints the second line of output, not the first:
$ python test.py
this will print
This works cross-platform (Windows, Linux, and Mac OS X) and is cleaner than the other answers, imho.
If you're on Python 3.4 or higher, there's a simple and safe solution using the standard library:
import contextlib
with contextlib.redirect_stdout(None):
print("This won't print!")
(At least on my system) it appears that writing to os.devnull is about 5x faster than writing to a DontPrint class, i.e.:
#!/usr/bin/python
# (Python 2 syntax)
import os
import sys
import datetime

ITER = 10000000

def printlots(out, it, st="abcdefghijklmnopqrstuvwxyz1234567890"):
    temp = sys.stdout
    sys.stdout = out
    i = 0
    start_t = datetime.datetime.now()
    while i < it:
        print st
        i = i + 1
    end_t = datetime.datetime.now()
    sys.stdout = temp
    print out, "\n took", end_t - start_t, "for", it, "iterations"

class devnull():
    def write(*args):
        pass

printlots(open(os.devnull, 'wb'), ITER)
printlots(devnull(), ITER)
gave the following output:
<open file '/dev/null', mode 'wb' at 0x7f2b747044b0>
took 0:00:02.074853 for 10000000 iterations
<__main__.devnull instance at 0x7f2b746bae18>
took 0:00:09.933056 for 10000000 iterations
If you're in a Unix environment (Linux included), you can redirect output to /dev/null:
python myprogram.py > /dev/null
And for Windows:
python myprogram.py > nul
You can just mock it.
import mock
import sys

sys.stdout = mock.MagicMock()
Your class will work just fine (with the exception of the write() method name -- it needs to be called write(), lowercase). Just make sure you save a copy of sys.stdout in another variable.
If you're on a *NIX, you can do sys.stdout = open('/dev/null', 'w'), but this is less portable than rolling your own class.
How about this:
from contextlib import ExitStack, redirect_stdout
import os
with ExitStack() as stack:
    if should_hide_output():
        null_stream = open(os.devnull, "w")
        stack.enter_context(null_stream)
        stack.enter_context(redirect_stdout(null_stream))
    noisy_function()
This uses the features in the contextlib module to hide the output of whatever command you are trying to run, depending on the result of should_hide_output(), and then restores the output behavior after that function is done running.
If you want to hide standard error output, then import redirect_stderr from contextlib and add a line saying stack.enter_context(redirect_stderr(null_stream)).
The main downside is that this only works in Python 3.4 and later versions.
sys.stdout = None
This is OK for the print() case, but it can cause an error if you call any method of sys.stdout, e.g. sys.stdout.write(). There is a note in the docs:
Under some conditions stdin, stdout and stderr as well as the original values __stdin__, __stdout__ and __stderr__ can be None. It is usually the case for Windows GUI apps that aren't connected to a console and Python apps started with pythonw.
Supplement to iFreilicht's answer: it works for both Python 2 and 3.
import sys

class NonWritable:
    def write(self, *args, **kwargs):
        pass

class StdoutIgnore:
    def __enter__(self):
        self.stdout_saved = sys.stdout
        sys.stdout = NonWritable()
        return self

    def __exit__(self, *args):
        sys.stdout = self.stdout_saved

with StdoutIgnore():
    print("This won't print!")
If you don't want to deal with resource allocation or rolling your own class, you may want to use TextIO from Python's typing module. It has all the required methods stubbed out by default.
import sys
from typing import TextIO
sys.stdout = TextIO()
There are a number of good answers in the flow, but here is my Python 3 answer (for when sys.stdout.fileno() isn't supported anymore):
import os
import sys

oldstdout = os.dup(1)
oldstderr = os.dup(2)
oldsysstdout = sys.stdout
oldsysstderr = sys.stderr

# Cancel all stdout outputs (will be lost) - optionally also cancel stderr
def cancel_stdout(stderr=False):
    sys.stdout.flush()
    devnull = open('/dev/null', 'w')
    os.dup2(devnull.fileno(), 1)
    sys.stdout = devnull
    if stderr:
        os.dup2(devnull.fileno(), 2)
        sys.stderr = devnull

# Redirect all stdout outputs to a file - optionally also redirect stderr
def reroute_stdout(filepath, stderr=False):
    sys.stdout.flush()
    file = open(filepath, 'w')
    os.dup2(file.fileno(), 1)
    sys.stdout = file
    if stderr:
        os.dup2(file.fileno(), 2)
        sys.stderr = file

# Restore stdout to default - and stderr
def restore_stdout():
    sys.stdout.flush()
    sys.stdout.close()
    os.dup2(oldstdout, 1)
    os.dup2(oldstderr, 2)
    sys.stdout = oldsysstdout
    sys.stderr = oldsysstderr
To use it:
Cancel all stdout and stderr outputs with:
cancel_stdout(stderr=True)
Route all stdout (but not stderr) to a file:
reroute_stdout('output.txt')
To restore stdout and stderr:
restore_stdout()
Why don't you try this?
sys.stdout.close()
sys.stderr.close()
I will add an example to the numerous answers here:
import argparse
import contextlib

class NonWritable:
    def write(self, *args, **kwargs):
        pass

parser = argparse.ArgumentParser(description='my program')
parser.add_argument("-p", "--param", help="my parameter", type=str, required=True)

# with contextlib.redirect_stdout(None):  # no effect, as `argparse` writes to `stderr`
# with contextlib.redirect_stderr(None):  # AttributeError: 'NoneType' object has no attribute 'write'
with contextlib.redirect_stderr(NonWritable):  # this works!
    args = parser.parse_args()
The normal output would be:
>python TEST.py
usage: TEST.py [-h] -p PARAM
TEST.py: error: the following arguments are required: -p/--param
I use this. Redirect stdout to a string, which you subsequently ignore. I use a context manager to save and restore the original setting for stdout.
from io import StringIO
...
with StringIO() as out:
    with stdout_redirected(out):
        ...  # Do your thing
where stdout_redirected is defined as:
import sys
from contextlib import contextmanager

@contextmanager
def stdout_redirected(new_stdout):
    save_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield None
    finally:
        sys.stdout = save_stdout

How to suppress console output in Python?

I'm using Pygame/SDL's joystick module to get input from a gamepad. Every time I call its get_hat() method it prints to the console. This is problematic, since I use the console to help me debug, and now it gets flooded with SDL_JoystickGetHat value:0: 60 times every second. Is there a way I can disable this, either through an option in Pygame/SDL or by suppressing console output while the function is called? I saw no mention of this in the Pygame documentation.
edit: This turns out to be due to debugging being turned on when the SDL library was compiled.
Just for completeness, here's a nice solution from Dave Smith's blog:
from contextlib import contextmanager
import sys, os
@contextmanager
def suppress_stdout():
    with open(os.devnull, "w") as devnull:
        old_stdout = sys.stdout
        sys.stdout = devnull
        try:
            yield
        finally:
            sys.stdout = old_stdout
With this, you can use context management wherever you want to suppress output:
print("Now you see it")
with suppress_stdout():
print("Now you don't")
To complete charles's answer, there are two context managers built into Python, redirect_stdout and redirect_stderr, which you can use to redirect and/or suppress a command's output to a file or a StringIO variable.
import contextlib
with contextlib.redirect_stdout(None):
    do_thing()
For a more complete explanation, read the docs.
A quick update: in some cases passing None might raise reference errors (e.g. keras.models.Model.fit calls sys.stdout.write, which will be problematic); in that case, pass an io.StringIO() or os.devnull.
You can get around this by assigning the standard out/error (I don't know which one it's going to) to the null device. In Python, the standard out/error files are sys.stdout/sys.stderr, and the null device is os.devnull, so you do:
sys.stdout = open(os.devnull, "w")
sys.stderr = open(os.devnull, "w")
This should disable these error messages completely. Unfortunately, this will also disable all console output. To get around this, disable output right before calling the get_hat() method, and then restore it by doing:
sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__
which restores standard out and error to their original value.
Here's the relevant block of code from joystick.c (via SVN at http://svn.seul.org/viewcvs/viewvc.cgi/trunk/src/joystick.c?view=markup&revision=2652&root=PyGame)
value = SDL_JoystickGetHat (joy, _index);
#ifdef DEBUG
printf("SDL_JoystickGetHat value:%d:\n", value);
#endif
if (value & SDL_HAT_UP) {
Looks like a problem with having debugging turned on.
Building on @charleslparker's answer:
from contextlib import contextmanager
import sys, os
@contextmanager
def suppress_stdout():
    with open(os.devnull, "w") as devnull:
        old_stdout = sys.stdout
        sys.stdout = devnull
        try:
            yield
        finally:
            sys.stdout = old_stdout

print("Now you see it")
with suppress_stdout():
    print("Now you don't")
Tests
>>> with suppress_stdout():
...     os.system('play /mnt/Vancouver/programming/scripts/PHASER.WAV')
/mnt/Vancouver/programming/scripts/PHASER.WAV:
File Size: 1.84k Bit Rate: 90.4k
Encoding: Unsigned PCM
Channels: 1 # 8-bit
Samplerate: 11025Hz
Replaygain: off
Duration: 00:00:00.16
In:100% 00:00:00.16 [00:00:00.00] Out:1.79k [!=====|=====!] Clip:0
Done.
Use this to completely suppress os.system() output:
>>> with suppress_stdout():
...     os.system('play /mnt/Vancouver/programming/scripts/PHASER.WAV >/dev/null 2>&1')
>>> ## successfully executed

>>> import time
>>> with suppress_stdout():
...     for i in range(3):
...         os.system('play /mnt/Vancouver/programming/scripts/PHASER.WAV >/dev/null 2>&1')
...         time.sleep(0.5)
>>> ## successfully executed
Useful (e.g.) to signal completion of long-running scripts.
I use pythonw.exe (on Windows) instead of python.exe.
In other OSes, you could also redirect output to /dev/null.
And in order to still see my debug output, I am using the logging module.
As Demolishun mentions in an answer to a closed duplicate question, there is a thread talking about this issue. The thread is from August 2009, and one of the developers says the debug code was left in by accident. I had installed Pygame 1.9.1 from pip, and the debug output is still present.
To get around it for now, I downloaded the source from pygame.org, removed the print statements from src/joystick.c and compiled the code.
I am on OS X 10.7.5 for what it's worth.
If you are on a Debian or Ubuntu machine, you can simply recompile pygame without the messages:
cd /tmp
sudo apt-get build-dep pygame
apt-get source pygame
vim pygame-1.9.1release+dfsg/src/joystick.c
# search for the printf("SDL.. messages and put a // in front
apt-get source --compile pygame
sudo dpkg -i python-pygame_1.9.1release+dfsg-9ubuntu1_amd64.deb
Greetings
Max
The solutions using os.devnull can cause synchronization issues with multiprocessing, with multiple processes waiting on the same resource. At least this is what I encountered on Windows.
The following solution might be better:
import sys

class DummyOutput(object):
    def __init__(self, *args, **kwargs):
        pass

    def write(self, *args, **kwargs):
        pass

class suppress_stdout_stderr(object):
    def __init__(self):
        self.stdout = sys.stdout
        self.stderr = sys.stderr

    def __enter__(self, *args, **kwargs):
        out = DummyOutput()
        sys.stdout = out
        sys.stderr = out

    def __exit__(self, *args, **kwargs):
        sys.stdout = self.stdout
        sys.stderr = self.stderr

with suppress_stdout_stderr():
    print(123, file=sys.stdout)
    print(123, file=sys.stderr)
