I am running a Tornado web server and trying to redirect all its log output to a file using the following command, but I don't see any output in the file:
/usr/bin/python -u index.py 2>&1 >> /tmp/tornado.log
I pass the -u option to the Python interpreter, but I still don't see any output logged to my log file.
However, I see the output on stdout when I do the following
/usr/bin/python index.py
Tornado uses the built-in logging module and writes its log records to stderr by default. (Note in passing that the shell redirection in the question is in the wrong order: 2>&1 >> /tmp/tornado.log points stderr at the terminal before stdout is appended to the file; >> /tmp/tornado.log 2>&1 would capture both streams.) The more robust approach, though, is to attach a file handler to the root logger and set its level to NOTSET so it records everything, or to some other level if you want to filter.
Reference docs: logging, logging.handlers
Example that works with Tornado's logging:
import logging
# the root logger is created upon the first import of the logging module
# create a file handler to add to the root logger
filehandler = logging.FileHandler(
    filename='test.log',
    mode='a',
    encoding=None,
    delay=False,
)
# set the file handler's level to your desired logging level, e.g. INFO
filehandler.setLevel(logging.INFO)
# create a formatter for the file handler
formatter = logging.Formatter('%(asctime)s.%(msecs)d [%(name)s](%(process)d): %(levelname)s: %(message)s')
# add filters if you want your handler to only handle events from specific loggers
# e.g. "main.sub.classb" or something like that. I'll leave this commented out.
# filehandler.addFilter(logging.Filter(name='root.child'))
# set the root logger's level to be at most as high as your handler's
if logging.root.level > filehandler.level:
    # note: setLevel is a method, so it must be called, not assigned to
    logging.root.setLevel(filehandler.level)
# finally, add the handler to the root. after you do this, the root logger will write
# records to file.
logging.root.addHandler(filehandler)
More often than not, I actually want to suppress Tornado's loggers: I have my own, I catch their exceptions anyway, and they just end up polluting my logs. This is where adding a filter to your file handlers can come in very handy.
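For instance, a minimal sketch of such a filter (the ExcludeTornadoFilter name is mine, not part of any library): a logging.Filter subclass attached to the file handler rejects every record whose logger name starts with "tornado":

```python
import logging

class ExcludeTornadoFilter(logging.Filter):
    """Drop records emitted by any logger in the 'tornado' hierarchy."""
    def filter(self, record):
        # returning a falsy value tells the handler to skip the record
        return not record.name.startswith("tornado")

filehandler = logging.FileHandler("app.log")
filehandler.addFilter(ExcludeTornadoFilter())
logging.root.addHandler(filehandler)
```

With this in place, your own loggers still reach the file, while tornado.access, tornado.application, etc. are silently dropped by that handler.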
Related
I have a Python script and I want info messages to be written to the console, while warning, error, and critical messages go to a file. How can I do that?
I tried this:
import logging
console_log = logging.getLogger("CONSOLE")
console_log.setLevel(logging.INFO)
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)
console_log.addHandler(stream_handler)
file_log = logging.getLogger("FILE")
file_log.setLevel(logging.WARNING)
file_handler = logging.FileHandler('log.txt')
file_handler.setLevel(logging.WARNING)
file_log.addHandler(file_handler)
def log_to_console(message):
    console_log.info(message)

def log_to_file(message):
    file_log.warning(message)
log_to_console("THIS SHOULD SHOW ONLY IN CONSOLE")
log_to_file("THIS SHOULD SHOW ONLY IN FILE")
but the message that should appear only in the file is going to the console too, and the message that should be in the console is duplicated. What am I doing wrong here?
What happens is that the two loggers you created propagate their records upwards to the root logger. The root logger does not have any handlers by default, but will use the lastResort handler if needed:
A "handler of last resort" is available through this attribute. This is a StreamHandler writing to sys.stderr with a level of WARNING, and is used to handle logging events in the absence of any logging configuration. The end result is to just print the message to sys.stderr.
Source from the Python documentation.
Inside the Python source code, you can see where the call is done.
Therefore, to solve your problem, you could set the console_log and file_log loggers' propagate attribute to False.
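A minimal sketch of that fix, keeping the two loggers from the question and just switching propagation off:

```python
import logging

console_log = logging.getLogger("CONSOLE")
console_log.propagate = False  # records no longer bubble up to the root logger
console_log.setLevel(logging.INFO)
console_log.addHandler(logging.StreamHandler())

file_log = logging.getLogger("FILE")
file_log.propagate = False  # so the lastResort handler never sees them
file_log.setLevel(logging.WARNING)
file_log.addHandler(logging.FileHandler("log.txt"))

console_log.info("THIS SHOULD SHOW ONLY IN CONSOLE")  # printed once, no duplicate
file_log.warning("THIS SHOULD SHOW ONLY IN FILE")     # no longer echoed to stderr
```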
On another note, I think you should refrain from instantiating several loggers for your use case. Just use one custom logger with two different handlers, each logging to a different destination.
Create a custom StreamHandler to log only the specified level:
import logging
class MyStreamHandler(logging.StreamHandler):
    def emit(self, record):
        # this ensures this handler will print only for the specified level
        if record.levelno == self.level:
            super().emit(record)
Then, use it:
my_custom_logger = logging.getLogger("foobar")
my_custom_logger.propagate = False
my_custom_logger.setLevel(logging.INFO)
stream_handler = MyStreamHandler()
stream_handler.setLevel(logging.INFO)
file_handler = logging.FileHandler("log.txt")
file_handler.setLevel(logging.WARNING)
my_custom_logger.addHandler(stream_handler)
my_custom_logger.addHandler(file_handler)
my_custom_logger.info("THIS SHOULD SHOW ONLY IN CONSOLE")
my_custom_logger.warning("THIS SHOULD SHOW ONLY IN FILE")
And it works without duplicates or misplaced logs.
I'd like to send all logs (including debug) to a file and to print only logs with a level of info or higher to stderr.
Currently, every module begins with this:
logger = logging.getLogger(__name__)
I haven't been able to achieve this without duplicating the logs (from the child and root loggers) and without modifying every module in my application.
What's the simplest way to do it?
You can configure two different handlers:
import logging
from logging import FileHandler, StreamHandler, INFO, DEBUG
file_handler = FileHandler("info.log")
file_handler.setLevel(DEBUG)
std_err_handler = StreamHandler()
std_err_handler.setLevel(INFO)
logging.basicConfig(handlers=[file_handler, std_err_handler], level=DEBUG)
logging.info("info") # writes to file and stderr
logging.debug("debug") # writes only to file
How can I redirect logging to a file within a context, also redirecting the logging of other modules (3rd party, e.g. requests, numpy, etc.) if they are called within that scope?
The use case is that we want to integrate another team's algorithms, and we need to write the algorithm's logs to an additional file that we can hand over to them for debugging purposes.
For example:
import some_func_3rd_party

some_func_3rd_party()  # logs will only be written to predefined handlers
logger.info("write only to predefined handlers")

with log2file("somefile.txt"):
    logger.info("write to file and predefined handlers")
    some_func_3rd_party()  # logs will be written to predefined handlers and to file

logger.info("write only to predefined handlers")
At the moment, I don't see a way of achieving what you want without accessing and modifying the root logger.
If you wanted a more targeted approach, you would need to know how the 3rd party library's logger is configured.
Please have a look at the answers to "Log all requests from the python-requests module" to better understand what I mean.
Here is one approach that might work in your particular case:
import contextlib
import logging
import requests
import sys
@contextlib.contextmanager
def special_logger(app_logger, log_file, log_level=logging.DEBUG):
    # Copy the handlers added to the app_logger (a copy, so that the
    # context-specific handlers below are not attached to app_logger itself).
    handlers = list(app_logger.handlers)
    # Add handlers specific to this context.
    handlers.append(logging.FileHandler(filename=log_file))
    handlers.append(logging.StreamHandler(stream=sys.stderr))
    # Get the root logger, set the logging level,
    # and add all the handlers above.
    root_logger = logging.getLogger()
    root_logger_level = root_logger.level
    root_logger.setLevel(log_level)
    for handler in handlers:
        root_logger.addHandler(handler)
    # Yield the modified root logger.
    yield root_logger
    # Clean up handlers.
    for handler in handlers:
        root_logger.removeHandler(handler)
    # Reset log level to what it was.
    root_logger.setLevel(root_logger_level)
# Get logger for this module.
app_logger = logging.getLogger('my_app')
# Add a handler logging to stdout.
sh = logging.StreamHandler(stream=sys.stdout)
app_logger.addHandler(sh)
app_logger.setLevel(logging.DEBUG)
app_logger.info("Logs go only to stdout.")
# 'requests' logs go to stdout only but won't be emitted in this case
# because root logger level is not set to DEBUG.
requests.get("http://www.google.com")
# Use the new context with modified root logger.
with special_logger(app_logger, 'my_app.log') as spec_logger:
    # The output will appear twice in the console because there is
    # one handler logging to stdout, and one to stderr.
    # This is for demonstration purposes only.
    spec_logger.info("Logs go to stdout, stderr, and file.")
    # 'requests' logs go to stdout, stderr, and file.
    requests.get("http://www.google.com")
app_logger.info("Logs go only to stdout.")
# 'requests' logs go to stdout only but won't be emitted in this case
# because root logger level is not set to DEBUG.
requests.get("http://www.google.com")
Overview
I want to use httpimport to load a logging library that is common to several scripts. This module generates logs of its own, which I do not know how to silence.
In other cases such as this one, I would have used
logging.getLogger('httpimport').setLevel(logging.ERROR)
but it did not work.
Details
The following code is a stub of the "common logging code" mentioned above:
# toconsole.py
import logging
import os
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s %(message)s')
handler_console = logging.StreamHandler()
level = logging.DEBUG if 'DEV' in os.environ else logging.INFO
handler_console.setLevel(level)
handler_console.setFormatter(formatter)
log.addHandler(handler_console)
# disable httpimport logging except for errors+
logging.getLogger('httpimport').setLevel(logging.ERROR)
A simple usage such as
import httpimport
httpimport.INSECURE = True
with httpimport.remote_repo(['githublogging'], 'http://localhost:8000/'):
    from toconsole import log

log.info('yay!')
gives the following output
[!] Using non HTTPS URLs ('http://localhost:8000//') can be a security hazard!
2019-08-25 13:56:48,671 yay!
yay!
The second (bare) yay! must be coming from httpimport, namely from its logging setup.
How can I disable the logging for such a module, or better - raise its level so that only errors+ are logged?
Note: this question was initially asked in the Issues section of the GitHub repository for httpimport, but the author did not know how to fix it either.
Author of httpimport here.
I totally forgot I was using the basicConfig logger thing.
It is fixed in master right now (0.7.2) and will be included in the next PyPI release:
https://github.com/operatorequals/httpimport/commit/ff2896c8f666c3f16b0f27716c732d68be018ef7
The reason this happens is that when you do import httpimport, httpimport performs the initial configuration of the logging machinery. This happens right here. It means the root logger already has a StreamHandler attached to it. Because of this, and because all loggers inherit from the root logger, when you do log.info('yay') the record not only goes through your Handler and Formatter, it also propagates all the way up to the root logger, which emits the message a second time.
Remember that whichever code calls basicConfig first when an application starts sets up the default configuration for the root logger, which is in turn inherited by all loggers unless otherwise specified.
If you have a complex logging configuration, you need to apply it before any third-party import that might call basicConfig. basicConfig is a no-op once the root logger has handlers: the first call seals the deal, and subsequent calls have no effect (unless you pass force=True, available since Python 3.8).
Solutions
You could do log.propagate = False and you will see that the 2nd yay will not show.
You could attach the Formatter directly to the root logger's existing Handler (without adding another Handler yourself) by doing something like this:
root = logging.getLogger('')
formatter = logging.Formatter('%(asctime)s %(message)s')
root_handler = root.handlers[0]
root_handler.setFormatter(formatter)
You could call basicConfig when you initialize your application (if you had such a config available, with initial Formatters, Handlers, etc., it would neatly attach everything to the root logger). Then each module would only do logger = logging.getLogger(__name__) and logger.info('some message'), and it would work the way you'd expect, because the records propagate all the way to the root logger, which already has your configuration.
You could remove the initial Handler that's present on the root logger by doing something like
root = logging.getLogger('')
root.handlers = []
... and many more solutions, but you get the idea.
Also note that logging.getLogger('httpimport').setLevel(logging.ERROR) works perfectly fine: no messages below logging.ERROR will be logged by that logger. The problem just wasn't coming from there.
If, however, you want to completely disable a logger, you can simply set logger.disabled = True (but again, the problem wasn't with the httpimport logger, as mentioned above).
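For completeness, a quick sketch of that disabled switch; the logger name and the record-collecting handler are mine, purely for demonstration:

```python
import logging

records = []

class CollectHandler(logging.Handler):
    """Demo handler that collects messages instead of printing them."""
    def emit(self, record):
        records.append(record.getMessage())

noisy = logging.getLogger("noisy_module")
noisy.propagate = False
noisy.setLevel(logging.DEBUG)
noisy.addHandler(CollectHandler())

noisy.error("before")          # handled normally
noisy.disabled = True
noisy.error("while disabled")  # silently dropped, whatever the level
noisy.disabled = False
noisy.error("after")           # handled again
```

After running this, records contains only "before" and "after".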
One example demonstrated
Replace your toconsole.py with this and you won't see the second yay:
import logging
import os
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
root_logger = logging.getLogger('')
root_handler = root_logger.handlers[0]
formatter = logging.Formatter('%(asctime)s %(message)s')
root_handler.setFormatter(formatter)
# or you could just keep your old code and just add log.propagate = False
# or any of the above solutions and it would work
logging.getLogger('httpimport').setLevel(logging.ERROR)
In my Django application I have set up logging to write all levels to a file, which works well.
During management commands (and only there), I want to log some levels to the console as well.
How can I (dynamically) set up the logging to achieve this?
It was actually quite easy: all I had to do was add a new handler to each logger I wanted to redirect:
loggernames = [ ... ]
level = logging.DEBUG
handler = logging.StreamHandler()
handler.setLevel(level)
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
for name in loggernames:
    logging.getLogger(name).addHandler(handler)
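If the extra console output should really be limited to the duration of the management command, the same handler can be detached again afterwards. A sketch under that assumption (the function name and logger names are mine, not Django's):

```python
import logging

def attach_console_logging(loggernames, level=logging.DEBUG):
    """Attach a shared console handler to each named logger and return
    a callable that detaches it again after the management command."""
    handler = logging.StreamHandler()
    handler.setLevel(level)
    handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
    loggers = [logging.getLogger(name) for name in loggernames]
    for logger in loggers:
        logger.addHandler(handler)

    def detach():
        for logger in loggers:
            logger.removeHandler(handler)
    return detach

# e.g. inside the management command's handle():
detach = attach_console_logging(["myapp", "myapp.tasks"])
logging.getLogger("myapp").warning("visible on the console during the command")
detach()
```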