I tried to enable logging for my Python library. I want all logs to go through my custom logger my_logger instead of root. So this is what I tried:
import logging
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
logging.warning("hello!")
And for some reason my custom logger didn't output its name (my_logger) in front of the message:
my hello!
WARNING:root:hello!
Simply swapping the order of the root logger and my logger fixed the issue:
import logging
logging.warning("hello!")
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
Output:
WARNING:root:hello!
WARNING:my_logger:my hello!
I do not ever want to use the root logger at all. Is it possible to get the WARNING:my_logger prefix in my output without logging to the root logger first?
It seems the logging module performs first-time configuration of the root logger during the first top-level logging call. You can run that configuration explicitly to get it out of the way before using your logger:
import logging
logging.basicConfig() # do the configuration
my_logger = logging.getLogger('my_logger')
my_logger.warning("my hello!")
logging.warning("hello!")
Result:
WARNING:my_logger:my hello!
WARNING:root:hello!
For a library that you expect others to import, the basicConfig() call should be made by the application importing the library, not by the library itself.
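If you want the library to be safely importable before any logging is configured, the convention documented in the logging HOWTO is for the library to attach a NullHandler to its own top-level logger and nothing more; a minimal sketch:
import logging

# Library-side setup: a NullHandler prevents the last-resort stderr
# output without choosing any real destination; the importing
# application configures actual handlers, e.g. via logging.basicConfig().
logging.getLogger('my_logger').addHandler(logging.NullHandler())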
Related
I've got a SageMaker instance running a Jupyter notebook. I'd like to use Python's logging module to write to a log file, but it doesn't work.
My code is pretty straightforward:
import logging
logger = logging.getLogger()
formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%y/%m/%d %H:%M:%S")
fhandler = logging.FileHandler("taxi_training.log")
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)
logger.debug("starting log...")
This should write a line to my file taxi_training.log but it doesn't.
I tried using the reload function from importlib, and I also tried setting the output stream to sys.stdout explicitly. Nothing is logged to the file or to CloudWatch.
Do I need to add anything to my SageMaker instance for this to work properly?
The Python logging module requires a logging level and one or more handlers to process output. If neither is explicitly defined, these settings are inherited from the root logger, whose default level is WARNING (30); records that find no handler anywhere fall back to a last-resort handler that writes to STDERR. These settings can be verified by adding the following lines to the bottom of your code:
# Verify levels and handlers
# (assumes a named logger; the root logger itself has no parent)
print("Parent Logger: " + logger.parent.name)
print("Parent Level: " + str(logger.parent.level))
print("Parent Handlers: " + str(logger.parent.handlers))
print("Logger Level: " + str(logger.level))
print("Logger Handlers: " + str(logger.handlers))
The easiest way to instantiate a handler and set a logging level is by running the logging.basicConfig() function (documentation). This sets a logging level and a STDERR stream handler on the root logger, which any child loggers created in the same code then use via propagation. Here is an example using the code provided:
import logging
logger = logging.getLogger('log')
logging.basicConfig(level=logging.INFO)  # set the root logging level and add a STDERR handler
logger.info(5)
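For the code in the question specifically, the debug() call is filtered out because the root logger's level is still WARNING; setting the level explicitly is enough. A minimal sketch of the corrected version, reusing the code from the question:
import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)  # the default root level (WARNING) filters out debug()

formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%y/%m/%d %H:%M:%S")
fhandler = logging.FileHandler("taxi_training.log")
fhandler.setFormatter(formatter)
logger.addHandler(fhandler)

logger.debug("starting log...")  # now written to taxi_training.log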
How can I redirect logging within a context to a file, and also redirect the logging of other modules (3rd party, e.g. requests, numpy, etc.) if they are called in that scope?
The use case is that we want to integrate algorithms of another team.
We need to output the algorithm's logs to an additional file so we can hand it to them for debugging purposes.
For example:
import some_func_3rd_party
some_func_3rd_party() # logs will only be written to predefined handlers
logger.info("write only to predefined handlers")
with log2file("somefile.txt"):
    logger.info("write to file and predefined handlers")
    some_func_3rd_party() # logs will be written to predefined handlers and to file

logger.info("write only to predefined handlers")
At the moment, I don't see a way of achieving what you want without accessing and modifying the root logger.
If you wanted a more targeted approach, you would need to know how the 3rd party library's logger is configured.
Please have a look at the answers to "Log all requests from the python-requests module" to better understand what I mean.
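For instance, if you knew the logger name the library uses, you could attach the extra file handler to that logger alone and leave the root logger untouched. A sketch, assuming the target is requests, whose traffic is logged under the urllib3 logger:
import logging

# Targeted alternative: add the file handler only to the named logger
# the third-party library uses ("urllib3" here, which requests logs
# through), instead of modifying the root logger.
lib_logger = logging.getLogger("urllib3")
lib_logger.setLevel(logging.DEBUG)
lib_logger.addHandler(logging.FileHandler("third_party.log"))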
Here is one approach that might work in your particular case:
import contextlib
import logging
import sys

import requests

@contextlib.contextmanager
def special_logger(app_logger, log_file, log_level=logging.DEBUG):
    # Copy the handlers added to the app_logger (copying avoids
    # accidentally mutating the app logger's own handler list).
    handlers = list(app_logger.handlers)
    # Add handlers specific to this context.
    handlers.append(logging.FileHandler(filename=log_file))
    handlers.append(logging.StreamHandler(stream=sys.stderr))
    # Get the root logger, set the logging level,
    # and add all the handlers above.
    root_logger = logging.getLogger()
    root_logger_level = root_logger.level
    root_logger.setLevel(log_level)
    for handler in handlers:
        root_logger.addHandler(handler)
    try:
        # Yield the modified root logger.
        yield root_logger
    finally:
        # Clean up handlers even if the with-block raised.
        for handler in handlers:
            root_logger.removeHandler(handler)
        # Reset log level to what it was.
        root_logger.setLevel(root_logger_level)

# Get logger for this module.
app_logger = logging.getLogger('my_app')

# Add a handler logging to stdout.
sh = logging.StreamHandler(stream=sys.stdout)
app_logger.addHandler(sh)
app_logger.setLevel(logging.DEBUG)

app_logger.info("Logs go only to stdout.")

# 'requests' logs propagate to the root logger but won't be emitted
# here because the root logger level is not set to DEBUG.
requests.get("http://www.google.com")

# Use the new context with the modified root logger.
with special_logger(app_logger, 'my_app.log') as spec_logger:
    # The output will appear twice in the console because there is
    # one handler logging to stdout and one to stderr.
    # This is for demonstration purposes only.
    spec_logger.info("Logs go to stdout, stderr, and file.")
    # 'requests' logs go to stdout, stderr, and the file.
    requests.get("http://www.google.com")

app_logger.info("Logs go only to stdout.")

# 'requests' logs again won't be emitted because the root logger
# level has been reset.
requests.get("http://www.google.com")
I configure the root logger:
logging.basicConfig(filename='logfile.log', level=logging.DEBUG)
Then I put log messages in my code like this:
logging.debug("This is a log message")
Question: How do I add a RotatingFileHandler such that my logs will be rotated?
Note: I do not want a logger instance which I then have to pass around everywhere.
You can do this by using the handlers kwarg of basicConfig (available since Python 3.3). Be aware that it must be an iterable of handlers and that you cannot use the filename argument together with it.
import logging
import logging.handlers

# Without maxBytes/backupCount the handler never actually rolls over,
# so set them explicitly (the sizes here are just an example).
rot_handler = logging.handlers.RotatingFileHandler('filename.txt', maxBytes=1_000_000, backupCount=5)
logging.basicConfig(level=logging.DEBUG, handlers=[rot_handler])
Link to relevant part of documentation: https://docs.python.org/3/library/logging.html#logging.basicConfig
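With that in place, the module-level calls from the question keep working unchanged and go through the rotating handler:
import logging

logging.debug("This is a log message")  # written to filename.txt, rotated as it grows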
I have the following setup:
../test/dirA
../test/Conftest.py
../test/Test_1.py
../test/logging.config
Code for Conftest.py
import logging.config
from os import path
import time
config_file_path = path.join(path.dirname(path.abspath(__file__)), 'logging.conf')
log_file_path = path.join(path.dirname(path.abspath(__file__)), 'logFile.log')
logging.config.fileConfig(config_file_path)
logger = logging.getLogger(__name__)
fh = logging.FileHandler(log_file_path)
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)
logging.info('done')
Code for ../test/Test_1.py
import logging

logger = logging.getLogger(__name__)

def test_foo():
    logger = logging.getLogger(__name__)
    logger.info("testing foo")

def test_foobar():
    logger = logging.getLogger(__name__)
    logger.info("testing foobar")
I need to see logs from both files in logFile.log, but currently I only see the log from conftest.py. What am I missing?
Also, I noticed that if I execute Conftest.py from the test folder (one dir up), for some reason it does not see logging.config.
Is my usage of conftest correct for logging?
How else can I achieve the same result?
Thank you
Update:
I used the approach described at https://blog.muya.co.ke/configuring-multiple-loggers-python/ and https://fangpenlin.com/posts/2012/08/26/good-logging-practice-in-python/.
With some changes, this addresses my question.
Another lesson I learned is to call
logging.getLogger(__name__)
at function level, not at module level. You can see lots of examples out there (including the articles above) that get the logger at module level. They look harmless, but there is a pitfall: loading the logging configuration from a file disables every logger that already exists at that point, so a logger fetched at module level, before fileConfig() runs, ends up disabled (unless you pass disable_existing_loggers=False).
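A minimal sketch of the pitfall, assuming a logging.conf next to the script:
import logging
import logging.config

# Created at import time, before the configuration is loaded.
logger = logging.getLogger(__name__)

# fileConfig() disables all pre-existing loggers by default
# (disable_existing_loggers=True), including the one above.
logging.config.fileConfig('logging.conf')

logger.info("silently dropped: this logger was disabled above")

# Either pass disable_existing_loggers=False, or fetch the logger
# inside functions, after the configuration has been loaded.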
Well, my first guess is that if you don't import Conftest.py in Test_1.py, the logger setup code is never reached and executed, so the logging calls in the Test_1 file end up in a default log file location.
Try this line to find the location of the log file:
logging.getLoggerClass().root.handlers[0].baseFilename
I want to find out how logging should be organised given that I write many scripts and modules that should feature similar logging. I want to be able to set the logging appearance and the logging level from the script, and I want the appearance and level to propagate to my modules and only my modules.
An example script could be something like the following:
import logging
import technicolor
import example_2_module
def main():
    verbose = True

    global log
    log = logging.getLogger(__name__)
    logging.root.addHandler(technicolor.ColorisingStreamHandler())

    # logging level
    if verbose:
        logging.root.setLevel(logging.DEBUG)
    else:
        logging.root.setLevel(logging.INFO)

    log.info("example INFO message in main")
    log.debug("example DEBUG message in main")

    example_2_module.function1()

if __name__ == '__main__':
    main()
An example module could be something like the following:
import logging

log = logging.getLogger(__name__)

def function1():
    print("printout of function 1")
    log.info("example INFO message in module")
    log.debug("example DEBUG message in module")
You can see that the module needs only minimal infrastructure to pick up the logging appearance and level set in the script. This has worked fine, but I've encountered a problem: other modules that also use logging. This can result in output being printed twice, and in very detailed debug logging from modules that are not my own.
How should I code this such that the logging appearance/level is set from the script but then used only by my modules?
You need to set the propagate attribute to False so that log messages do not propagate to ancestor loggers. Here is the documentation for Logger.propagate -- it defaults to True. So just:
import logging
log = logging.getLogger(__name__)
log.propagate = False
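The same attribute works in the other direction: to quiet a noisy third-party module without touching your own loggers, you could set propagate to False on its logger. A sketch with a hypothetical logger name:
import logging

# "chatty_lib" is a hypothetical third-party logger name; its records
# will no longer reach the root logger's handlers.
logging.getLogger("chatty_lib").propagate = False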