I am trying to add logging to a medium size Python project with minimal disruption. I would like to log from multiple modules to rotating files silently (without printing messages to the terminal). I have tried to modify this example and I almost have the functionality I need except for one issue.
Here is how I am attempting to configure things in my main script:
import logging
import logging.handlers
import my_module
LOG_FILE = 'logs\\logging_example_new.out'
#logging.basicConfig(level=logging.DEBUG)
# Define a Handler which writes messages to rotating log files.
handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=100000, backupCount=1)
handler.setLevel(logging.DEBUG) # Set logging level.
# Create message formatter.
formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
# Tell the handler to use this format
handler.setFormatter(formatter)
# Add the handler to the root logger
logging.getLogger('').addHandler(handler)
# Now, we can log to the root logger, or any other logger. First the root...
logging.debug('Root debug message.')
logging.info('Root info message.')
logging.warning('Root warning message.')
logging.error('Root error message.')
logging.critical('Root critical message.')
# Use a Logger in another module.
my_module.do_something() # Call function which logs a message.
Here is a sample of what I am trying to do in modules:
import logging
def do_something():
    logger = logging.getLogger(__name__)
    logger.debug('Module debug message.')
    logger.info('Module info message.')
    logger.warning('Module warning message.')
    logger.error('Module error message.')
    logger.critical('Module critical message.')
Now, here is my problem. I currently get the messages logged to the rotating files silently, but I only get the warning, error, and critical messages, despite setting handler.setLevel(logging.DEBUG).
If I uncomment logging.basicConfig(level=logging.DEBUG), then I get all the messages in the log files but I also get the messages printed to the terminal.
How do I get all messages at or above the specified threshold into my log files without outputting them to the terminal?
Thanks!
Update:
Based on this answer, it appears that calling logging.basicConfig(level=logging.DEBUG) automatically adds a StreamHandler to the root logger, and that you can remove it. When I removed it, leaving only my RotatingFileHandler, messages were no longer printed to the terminal. I am still wondering why I have to use logging.basicConfig(level=logging.DEBUG) to set the message level threshold when I am already setting handler.setLevel(logging.DEBUG). If anyone can shed a little more light on these issues, it would still be appreciated.
You need to set the logging level on the logger itself as well. By default, the level of the root logger is logging.WARNING.
Ex.
root_logger = logging.getLogger('')
root_logger.setLevel(logging.DEBUG)
# Add the handler to the root logger
root_logger.addHandler(handler)
The logger's log level determines what the logger will actually log (i.e. which messages get handed to its handlers). The handler's log level determines what it will actually handle (i.e. which messages are actually output to the file, stream, etc.). So you could potentially have multiple handlers attached to a logger, each handling a different log level.
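For instance, here is a minimal sketch of that idea (the handler names and the 'debug.log' file name are just illustrative): the logger hands everything from DEBUG up to its handlers, and each handler then applies its own threshold.
import logging

root_logger = logging.getLogger('')
root_logger.setLevel(logging.DEBUG)  # the logger hands DEBUG and above to its handlers

# A file handler that records everything the logger passes along.
file_handler = logging.FileHandler('debug.log')
file_handler.setLevel(logging.DEBUG)
root_logger.addHandler(file_handler)

# A console handler that only emits warnings and above.
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.WARNING)
root_logger.addHandler(console_handler)

root_logger.debug('Goes to the file only.')
root_logger.warning('Goes to the file and to the console.')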
Here's an SO answer that explains the way this works
Related
I have this Python code:
import logging
LOGGER = logging.getLogger(__name__)
LOGGER.info('test')
It does not get written to the console, so where does this get logged?
This does not get logged anywhere, because you did not configure any logging handlers. Without a handler configured, the log event goes nowhere. When no handlers are configured at all, a "last resort" handler steps in for events at WARNING level or above, but your event was only at INFO level.
If you put a line like this before it, you will see the message logged to the terminal:
logging.basicConfig(level=logging.INFO)
Basic config will add a StreamHandler writing to sys.stderr if you don't specify otherwise.
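Putting that together, a minimal sketch of the corrected script:
import logging

# Configure the root logger before emitting any events; by default this
# attaches a StreamHandler writing to sys.stderr.
logging.basicConfig(level=logging.INFO)

LOGGER = logging.getLogger(__name__)
LOGGER.info('test')  # now printed to the terminal, e.g. "INFO:__main__:test"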
I have spent far too much time on this and couldn't figure it out by myself.
I am writing some code that uses several modules, so I want to create a specific logger for my own messages, not for those from the other modules (there are far too many of them, so I don't want to set a different level for each one).
I have simplified my use case to:
import logging
logger = logging.getLogger('test')
logger.setLevel(level=logging.INFO)
print(logging.Logger.manager.loggerDict)
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
When I execute the previous code, I would expect my info message to be logged.
Instead I got:
python3 test.py
{'test': <Logger test (INFO)>}
This is a warning message
This is an error message
This is a critical message
The same happens with DEBUG: it always prints the warning/error/critical messages, but never debug/info.
If I raise the logging level (let's say ERROR), it works as expected.
python --version
Python 3.9.1
Your 'test' logger has no handlers attached, so its records propagate to the root logger, which has no handlers either; only the last-resort handler runs, and it only emits WARNING and above. Change the configuration at the package level / root logger (documentation):
logging.basicConfig(level=logging.INFO)
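With that in place, the original snippet behaves as expected. A sketch, keeping the same 'test' logger:
import logging

# Attach a handler to the root logger; without one, only WARNING and above
# reach the last-resort handler.
logging.basicConfig(level=logging.INFO)

logger = logging.getLogger('test')
logger.info('This is an info message')    # now printed
logger.debug('This is a debug message')   # still filtered out at INFO level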
Using the logging module to record the events:
import logging

# Configure a handler for the root logger; without one, DEBUG and INFO
# events would be dropped by the handler of last resort.
logging.basicConfig(format='%(levelname)s: %(message)s')

# Create a logger object (here, the root logger)
logger = logging.getLogger()

# Set the threshold of the logger to DEBUG
logger.setLevel(logging.DEBUG)

# Test messages
logger.debug("Debug Message")
logger.info("An information")
logger.warning("Warning")
logger.error("Error message")
logger.critical("Critical Message")
I have the following minimalist example of a logging test based on the Logging Cookbook:
import logging
logger = logging.getLogger('test')
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
print(logger.handlers)
logger.debug('hello world')
The above produces the following output:
$ python test_log.py
[<StreamHandler <stderr> (DEBUG)>]
As I've defined a handler and set the log level to debug, I was expecting the hello world log message to show up in the sample above.
If a logger's level isn't explicitly set, the system looks up the level of ancestor loggers until it gets to a logger whose level is explicitly set. In this case, it's the root logger which is the parent of the logger named 'test'. Setting the level of either this logger or the root logger to DEBUG will result in the log message being output. See this part of the documentation for the event information flow in Python logging.
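You can see this lookup directly; with only the snippet above executed, the 'test' logger has no level of its own and inherits WARNING from the root logger:
import logging

logger = logging.getLogger('test')
print(logger.level)                # 0 (NOTSET): no level set on this logger
print(logger.getEffectiveLevel())  # 30 (WARNING), inherited from the root logger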
So when we call logger.debug('hello world'), we generate a DEBUG event. However, the logger acts as the first filter before passing the event to its handlers. If the logger's effective level is of higher severity (like INFO), it filters the event out and the handler never receives it, even though the event matches the handler's level (DEBUG).
So the logger's level should be at least as low as the lowest level among its handlers. If you don't want the other handlers to emit DEBUG messages, you need to explicitly set their level higher, instead of leaving it unset, in which case they handle everything the logger passes along.
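For the snippet above, setting the level on the 'test' logger itself is enough. A minimal sketch:
import logging

logger = logging.getLogger('test')
logger.setLevel(logging.DEBUG)  # without this, the effective level is WARNING, inherited from root

ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
ch.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
logger.addHandler(ch)

logger.debug('hello world')  # now emitted to stderr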
I'm having a new problem where the logger is writing roughly every other line without the format, just the message.
My code:
import logging
from logging.handlers import RotatingFileHandler
from sys import argv

# Set up logging
LOG_FILE = argv[0][:-3] + '.log'
logging.basicConfig(
    filename=LOG_FILE,
    filemode='a',
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
handler = RotatingFileHandler(LOG_FILE, maxBytes=1000000)
logger.addHandler(handler)

def main():
    target, command, notify_address, wait = get_args(argv)
    logger.info('Checking status of %s every %d minutes.' % (target, wait))
    logger.info('Running %s and sending output to %s when online.' % (command, notify_address))
The vars returned from get_args() are all strings, even though wait is a number.
Note that I am not receiving any errors in my IDE or when running.
The output I am getting in my log file:
2015-02-12 16:26:27,483 - INFO - Checking status of <ip address> every 30 minutes.
Running <arbitrary bash command string> and sending output to <my email address> when online.
2015-02-12 16:26:27,483 - INFO - Running <arbitrary bash command string> and sending output to <my email address> when online.
What is causing the second logger.info() call to print twice, and only once with proper formatting?
I have another script that logs perfectly; I have no idea what I've done differently here. (I copy/pasted the logging setup section to be safe.)
Are you using loggers at different levels of your code? It sounds like the log messages could be propagating upwards. Try adding
logger.propagate = False
after you add the handler. You can check out the Python docs for a more detailed explanation here, but the relevant text below sounds exactly like what you're seeing.
Note: If you attach a handler to a logger and one or more of its ancestors, it may emit the same record multiple times. In general, you should not need to attach a handler to more than one logger - if you just attach it to the appropriate logger which is highest in the logger hierarchy, then it will see all events logged by all descendant loggers, provided that their propagate setting is left set to True. A common scenario is to attach handlers only to the root logger, and to let propagation take care of the rest.
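In your setup that would look roughly like the sketch below. One caveat: once propagation is disabled, the record no longer reaches the handler that basicConfig() configured, so the format set there no longer applies; you would also want a formatter on the RotatingFileHandler itself, otherwise its lines stay unformatted.
import logging
from logging.handlers import RotatingFileHandler
from sys import argv

LOG_FILE = argv[0][:-3] + '.log'
logging.basicConfig(
    filename=LOG_FILE,
    filemode='a',
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s')

logger = logging.getLogger(__name__)

handler = RotatingFileHandler(LOG_FILE, maxBytes=1000000)
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)
logger.propagate = False  # keep the record from also reaching the root logger's handler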
I have a pretty weird problem with the logging facility I use inside Django/Python. The logging has stopped working since I upgraded to Django 1.3. It seems to be related to the logging level and the DEBUG setting in the settings.py file.
1) When I log INFO messages and DEBUG=False, the logging doesn't happen; my file doesn't get appended.
2) When I log WARNING messages and DEBUG=False, the logging works exactly as I want it to; the file gets appended.
3) When I log INFO messages and DEBUG=True, the logging seems to work; the file gets appended.
How can I log INFO messages with DEBUG=False? It worked before Django 1.3... is there some mysterious setting that does the trick? Below is some sample code:
views.py:
import logging
import logging.handlers

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename='/opt/general.log',
                    filemode='a')

def create_log_file(filename, log_name, level=logging.INFO):
    handler = logging.handlers.TimedRotatingFileHandler(filename, 'midnight', 7, backupCount=10)
    handler.setLevel(level)
    formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s', '%a, %Y-%m-%d %H:%M:%S')
    handler.setFormatter(formatter)
    logging.getLogger(log_name).addHandler(handler)

create_log_file('/opt/test.log', 'testlog')
logger_test = logging.getLogger('testlog')
logger_test.info('testing the logging functionality')
With this code, the logging does not work in Django 1.3 with DEBUG set to False in the settings.py file. But when I do this instead:
logger_test.warning('testing the logging functionality')
This works perfectly when DEBUG is set to False. The DEBUG and INFO levels aren't being logged, but WARNING, ERROR and CRITICAL are doing their job...
Does anyone have an idea?
Since Django 1.3 contains its own logging configuration, you need to ensure that anything you're doing doesn't clash with it. For example, if the root logger has handlers already configured by Django by the time your module first gets imported, your basicConfig() call won't have any effect.
What you're describing is the default logging behaviour: WARNING and above get handled, while INFO and DEBUG are suppressed. It looks as if your basicConfig() call is not having any effect; you should consider replacing it with the appropriate logging configuration in settings.py, or at least investigating the root logger's level and which handlers are attached to it at the time of your basicConfig() call.
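For example, here is a quick diagnostic sketch to see what is already configured by the time your module runs, and a way to set the level you need without relying on basicConfig():
import logging

root = logging.getLogger()
print(root.level, root.handlers)  # e.g. 30 (WARNING) plus any handlers Django already attached

# If the root logger already has handlers, basicConfig() is a no-op, so set
# the level explicitly on the logger you actually use:
logger_test = logging.getLogger('testlog')
logger_test.setLevel(logging.INFO)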