In my current project there are thousands of lines of code that look like this:
logging.info("bla-bla-bla")
I don't want to change all these lines, but I would like to change the logging behavior. My idea is to replace the root logger with a different Experimental logger, which is configured by an ini file:
[loggers]
keys = root, Experimental

[handlers]
keys = logfile

[formatters]
keys = detailed

# fileConfig requires a root logger section, even if it gets no handlers
[logger_root]
level = WARNING
handlers =

[logger_Experimental]
level = DEBUG
qualname = Experimental
handlers = logfile
propagate = 0

[formatter_detailed]
format = %(asctime)s:%(name)s:%(levelname)s %(module)s:%(lineno)d: %(message)s

[handler_logfile]
class = FileHandler
args = ('experimental.log', 'a')
formatter = detailed
Now setting the new root logger is done by this piece of code:
logging.config.fileConfig(path_to_logger_config)
logging.root = logging.getLogger('Experimental')
Is redefining the root logger safe? Maybe there is a more convenient way?
I've tried Google and looked through Stack Overflow questions, but I didn't find an answer.
You're advised not to redefine the root logger in the way you describe. In general you should only use the root logger directly for small scripts - for larger applications, best practice is to use
logger = logging.getLogger(__name__)
in each module where you use logging, and then make calls to logger.info() etc.
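For example, here's a minimal sketch of that pattern (mymodule and do_work are illustrative names):

# mymodule.py -- any module that wants to log (hypothetical example)
import logging

logger = logging.getLogger(__name__)  # named after the module, e.g. 'mypackage.mymodule'

def do_work():
    # No handler setup here; configuration is left to the application entry point
    logger.info("doing some work")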
If all you want to do is to log to a file, why not just add a file handler to the root logger? You can do this using e.g.
import logging

if __name__ == '__main__':
    logging.basicConfig(filename='experimental.log', filemode='w')
    main()  # or whatever your main entry point is called
or via a configuration file.
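For instance, here is a rough dictConfig() equivalent of that basicConfig() call (the dict could equally be loaded from a JSON or YAML file; the handler name is arbitrary):

import logging.config

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'logfile': {
            'class': 'logging.FileHandler',
            'filename': 'experimental.log',
            'mode': 'w',
        },
    },
    'root': {
        'handlers': ['logfile'],
    },
})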
Update: When I say "you're advised", I mean by me, here in this answer ;-) While you may not run into any problems in your scenario, it's not good practice to overwrite a module attribute which hasn't been designed to be overwritten. For example, the root logger is an instance of a different class (which is not part of the public API), and there are other references to it in the logging machinery which would still point to the old value. Either of these facts could lead to hard-to-debug problems. Since the logging package allows a number of ways of achieving what you seem to want (seemingly, logging to a file rather than the console), then you should use those mechanisms that have been provided.
logger = logging.getLogger()
Leaving the name empty returns the root logger.
logger = logging.getLogger('name')
Gives you another logger.
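You can verify the difference in an interactive session:

>>> import logging
>>> logging.getLogger() is logging.root
True
>>> logging.getLogger('name') is logging.root
False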
Related
It seems that in Python, setting the logging level for loggers (including the root logger) is not applied until you use one of the logging module's module-level logging functions. Here's some code to show what I mean (I used Python 3.7):
import logging

if __name__ == "__main__":
    # Create a test logger and set its logging level to DEBUG
    test_logger = logging.getLogger("test")
    test_logger.setLevel(logging.DEBUG)

    # Log some debug messages
    # THESE ARE NOT PRINTED
    test_logger.debug("test debug 1")

    # Now use the logging module directly to log something
    logging.debug("whatever, you won't see this anyway")

    # Apparently the line above "fixed" the logging for the custom logger
    # and you should be able to see the message below
    test_logger.debug("test debug 2")
Output:
DEBUG:test:test debug 2
Maybe there's something I misunderstood about the configuration of the loggers; in that case, I'd appreciate knowing the correct way of doing it.
You didn't (explicitly) call logging.basicConfig, so the handler isn't configured correctly.
test_logger initially has no handler, because you didn't add one and the root logger doesn't have one yet. So although the message is "logged", nothing defines what that actually means.
When you call logging.debug, logging.basicConfig is called for you, because the root logger has no handler. At this time, a StreamHandler is created, but the root logger remains at the default level of INFO, and so nothing is sent to the new handler.
Now when you call test_logger.debug again, the record propagates up to the root logger's new StreamHandler, which actually outputs the log message to standard error.
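A minimal fix, then, is to configure the root logger yourself before the first logging call, for example:

import logging

if __name__ == "__main__":
    # Configure the root handler and level explicitly, up front
    logging.basicConfig(level=logging.DEBUG)

    test_logger = logging.getLogger("test")
    test_logger.setLevel(logging.DEBUG)
    test_logger.debug("test debug 1")  # now printed: DEBUG:test:test debug 1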
I've seen some very odd behaviour with the logging module. It started with a relatively complex project, but now I've seen it with the following script:
import logging
import os
# Uncomment the following line to remove handlers
# logging.getLogger().handlers = []
filePath = os.environ['userprofile'] + r'\Documents\log.txt'
logging.basicConfig(filename=filePath)
logging.debug('Gleep')
logging.shutdown()
This should simply write 'Gleep' to the log.txt file in your Documents folder. Currently it creates the file but writes nothing to it; however, I've inconsistently seen the following behaviours:
No log file being written at all.
Log file created, but nothing written to it.
Everything working fine.
The only way I've got it working before is to remove existing handlers (commented out in the example above).
This is on several machines in different locations.
So...am I doing something grotesquely wrong here? Why is the logging module acting this way?
I'm not sure how to prove/disprove/debug your 'other' situations, but maybe the following can help clarify what is happening in the code from your question:
First, setting logging.getLogger().handlers = [] should not be necessary, since logging.getLogger() returns the root logger, which has no handlers by default. Here is a fresh Python 3.7 shell:
>>> import logging
>>> logging.getLogger()
<RootLogger root (WARNING)>
>>> logging.getLogger().handlers
[]
(Note that in the absence of any handlers, a logger will fall back to lastResort, but that should be irrelevant here since you add a handler implicitly via basicConfig().)
Which brings you to logging.basicConfig(filename=filePath): this adds a FileHandler handler to the root logger. It does not touch the level of the root logger, which is WARNING by default, so your message won't pass the 'level test' and won't be emitted as a result.
>>> logging.root.getEffectiveLevel()
30
(This uses .getEffectiveLevel() rather than just the plain attribute because a logger will walk its hierarchy until it finds a level if its level is NOTSET.)
All that is to say: as you currently have it, you are logging from the root logger (level WARNING) a message object that has level DEBUG, so the message will go nowhere.
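So the minimal change to the script in the question is to pass a level to basicConfig as well (a sketch, keeping the Windows-specific path from the question):

import logging
import os

filePath = os.path.join(os.environ['userprofile'], 'Documents', 'log.txt')
logging.basicConfig(filename=filePath, level=logging.DEBUG)  # DEBUG now passes the level test
logging.debug('Gleep')  # this time it is written to the file
logging.shutdown()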
I have a situation where I want to create two separate logger objects in Python, each with their own independent handler. By "separate," I mean that I want to be able to pass a log statement to each object independently, without contaminating the other log.
main.py
import logging
import sys

from my_other_logger import init_other_logger

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO, handlers=[logging.StreamHandler(sys.stdout)])
other_logger = init_other_logger(__name__)
logger.info('Hello World') # Don't want to see this in the other logger
other_logger.info('Goodbye World') # Don't want to see this in the first logger
my_other_logger.py
import logging
import os, sys
def init_other_logger(namespace):
    logger = logging.getLogger(namespace)
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler(LOG_FILE_PATH)
    logger.addHandler(fh)
    formatter = logging.Formatter('%(levelname)s:%(name)s:%(message)s')
    fh.setFormatter(formatter)
    #logger.propagate = False
    return logger
The only configuration I've been able to identify as useful here is the logger.propagate property. Running the code above as-is pipes all log statements to both the log stream and the log file. When I have logger.propagate = False nothing is piped to the log stream, and both log objects again pipe their output to the log file.
How can I create one log object that sends logs to only one handler, and another log object that sends logs to another handler?
Firstly, let's see what's going on before we can head over to the solution.
logger = logging.getLogger(__name__): when you do this, you get or create a logger with the name '__main__'. Since this is the first call, it creates that logger.
other_logger = init_other_logger(__name__): when you do this, you again get or create a logger with the name '__main__'. Since this is the second call, it fetches the logger created above. So you're not really instantiating a new logger; you're getting a reference to the same logger created above. You can check this by adding a print after you call init_other_logger: print(logger is other_logger).
What happens next is that you add a FileHandler and a Formatter to the '__main__' logger (inside the init_other_logger function), and you invoke two log calls via the info() method. But you're doing it all with the same logger.
So this:
logger.info('Hello World')
other_logger.info('Goodbye World')
is essentially the same thing as this:
logger.info('Hello World')
logger.info('Goodbye World')
Now it's not so surprising anymore that both loggers output to both the file and stream.
Solution
So the obvious thing to do is to call your init_other_logger with another name.
I would recommend against the solution the other answer proposes, because that's NOT how things should be done when you need an independent logger. The documentation puts it nicely: you should NEVER instantiate a logger directly, but always via the logging module's getLogger function.
As we discovered above, when you call logging.getLogger(logger_name), it either gets or creates a logger with the name logger_name. So this works perfectly fine when you want a unique logger as well. However, remember that this function is idempotent, meaning it will create a logger with a given name only the first time you call it, and will return that same logger no matter how many times you call it afterwards.
So, for example:
a first call of the form logging.getLogger('the_rock') - creates your unique logger
a second call of the form logging.getLogger('the_rock') - fetches the above logger
You can see that this is particularly useful if you, for instance:
Have a logger configured with Formatters and Filters somewhere in your project, for instance in project_root/main_package/__init__.py.
Want to use that logger somewhere in a secondary package which sits in project_root/secondary_package/__init__.py.
In secondary_package/__init__.py you could do a simple call of the form: logger = logging.getLogger('main_package') and you'll use that logger with all its bells and whistles.
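A sketch of that layout (paths as in the list above; the handler and format details are just illustrative):

# project_root/main_package/__init__.py
import logging

logger = logging.getLogger('main_package')
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(name)s:%(levelname)s:%(message)s'))
logger.addHandler(handler)

# project_root/secondary_package/__init__.py
import logging

logger = logging.getLogger('main_package')  # fetches the logger configured above
logger.info('reuses main_package handlers and formatting')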
Attention!
Even if, at this point, you use your init_other_logger function to create a unique logger, it will still output to both the file and the console. Replace the line other_logger = init_other_logger(__name__) with other_logger = init_other_logger('the_rock') to create a unique logger and run the code again. You will still see the output written to both the console and the file.
Why ?
Because it will use both the FileHandler and the StreamHandler.
Why ?
Because of the way the logging machinery works: your logger emits its message via its own handlers, and the record then propagates all the way up to the root logger, where it is emitted by the StreamHandler you attached via the basicConfig call. So the propagate property you discovered is actually what you want here: you're creating a custom logger which should emit messages only via its manually attached handlers and pass them no further. Uncomment the logger.propagate = False line after creating the unique logger and you'll see that everything works as expected.
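Putting it all together, a sketch of the fixed setup (the logger name 'the_rock' and the file name are arbitrary):

import logging
import sys

logging.basicConfig(level=logging.INFO, handlers=[logging.StreamHandler(sys.stdout)])
logger = logging.getLogger(__name__)

other_logger = logging.getLogger('the_rock')
other_logger.setLevel(logging.DEBUG)
other_logger.addHandler(logging.FileHandler('other.log'))
other_logger.propagate = False  # stop records from bubbling up to the root StreamHandler

logger.info('Hello World')          # console only
other_logger.info('Goodbye World')  # file only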
Both of your handlers are installed on the same logger. This is why they aren't separate.
logger is other_logger because logging.getLogger(__name__) is logging.getLogger(__name__)
Either create a logger directly for the second log with logging.Logger(name) (I know the documentation says never to do this, but if you want an entirely independent logger this is how to do it), or use a different name for the second log when calling logging.getLogger().
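For example (names are hypothetical; note that a directly instantiated logger is never registered with the logging manager, so a later getLogger() call can't retrieve it):

import logging

# Entirely independent logger: no parent, so records never propagate to root
independent = logging.Logger('second_log')
independent.setLevel(logging.DEBUG)
independent.addHandler(logging.FileHandler('second.log'))
independent.info('goes only to second.log')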
Root logger doesn't log when (I think) it should:
import logging
# NOTE: I make sure to set the root logger level to logging.DEBUG
logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
logging.debug('This is a debugging test.')
From my understanding this should log something but it does nothing.
Quick Google searches didn't help me figure this issue out, and neither did the official documentation.
On the other hand, if I use logging.warning instead of logging.debug, it does work.
What am I doing wrong?
EDIT:
Checking the current level with logging.getLogger().getEffectiveLevel() shows me that the level is still 30, as it was before the call to logging.basicConfig.
Checking logging.getLogger().isEnabledFor(logging.DEBUG) confirms that the root logger isn't enabled for logging.DEBUG.
EDIT: It turned out the OP's (my) trouble was due to calling logging.debug before logging.basicConfig, which set the config to defaults, so the call to logging.basicConfig was ignored, as pointed out by @SmCaterpillar.
That said, this alternative solution, which allows you to force the root logger level after a basicConfig has already been set, could still be useful!
Found a "solution":
# […] Previous code
# Sets the root logger level
logging.getLogger().setLevel(logging.DEBUG)
logging.debug('Test') # Displays 'DEBUG:root:Test'
But it doesn't apply the format I gave in logging.basicConfig.
That said, I'm already satisfied: it's some kind of progress, and logging with not quite the format I expected is still better than no logging at all.
It appears that if you invoke logging.info() BEFORE you run logging.basicConfig, the logging.basicConfig call doesn't have any effect. In fact, no logging occurs.
Where is this behavior documented? I don't really understand.
You can remove the default handlers and reconfigure logging like this:
import logging

# If someone tried to log something before basicConfig is called, Python creates a default handler that
# goes to the console and will ignore further basicConfig calls. Remove the handler if there is one.
root = logging.getLogger()
if root.handlers:
    for handler in root.handlers:
        root.removeHandler(handler)
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
Yes.
You've asked to log something. Logging must, therefore, fabricate a default configuration. Once logging is configured... well... it's configured.
"With the logger object configured,
the following methods create log
messages:"
Further, you can read about creating handlers to prevent spurious logging. But that's more a hack for bad implementation than a useful technique.
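The documented form of that technique for libraries is a NullHandler attached to the library's top-level logger, so nothing is emitted unless the application configures logging:

import logging

# In a library's top-level module: records are swallowed (no lastResort
# fallback output) until the application configures logging itself
logging.getLogger(__name__).addHandler(logging.NullHandler())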
There's a trick to this.
No module should do anything except logging.getLogger() requests at the global level.
Only the if __name__ == "__main__": block should do any logging configuration.
If you do logging at the global level in a module, you may force logging to fabricate its default configuration.
Don't call logging.info globally in any module. If you absolutely think that you must have logging.info at the global level in a module, then you have to configure logging before doing imports. This leads to unpleasant-looking scripts.
This answer from Carlos A. Ibarra is right in principle; however, that implementation might break, since you are iterating over a list that might be changed by calling removeHandler(). This is unsafe.
Two alternatives are:
while len(logging.root.handlers) > 0:
    logging.root.removeHandler(logging.root.handlers[-1])
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
or:
logging.root.handlers = []
logging.basicConfig(format='%(asctime)s %(message)s',level=logging.DEBUG)
The first of these two, using the loop, is the safest (since any destruction code for the handler can be called explicitly inside the logging framework). Still, both are hacks, since we rely on logging.root.handlers being a list.
Here's the one piece of the puzzle that the above answers didn't mention... and then it will all make sense: the "root" logger -- which is used if you call, say, logging.info() before logging.basicConfig(level=logging.DEBUG) -- has a default logging level of WARNING.
That's why logging.info() and logging.debug() don't do anything: because you've configured them not to, by... um... not configuring them.
Possibly related (this one bit me): when NOT calling basicConfig, I didn't seem to be getting my debug messages, even though I set my handlers to DEBUG level. After a bit of hair-pulling, I found you have to set the level of the custom logger to be DEBUG as well. If your logger is set to WARNING, then setting a handler to DEBUG (by itself) won't get you any output on logger.info() and logger.debug().
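In other words, the record has to pass the logger's level check before any handler levels are even consulted; a minimal sketch:

import logging

logger = logging.getLogger('custom')
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)   # necessary, but not sufficient on its own
logger.addHandler(handler)

logger.debug('invisible')         # logger's effective level is still WARNING (inherited)

logger.setLevel(logging.DEBUG)    # the logger's own level must allow DEBUG too
logger.debug('now visible')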
Ran into this same issue today and, as an alternative to the answers above, here's my solution.
import logging
import sys
logging.debug('foo') # IRL, this call is from an imported module
if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO, force=True)
    logging.info('bar')  # without force=True, this is not printed to the console
Here's what the docs say about the force argument (added in Python 3.8).
"If this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed, before carrying out the configuration as specified by the other arguments."
A cleaner version of the answer given by @paul-kremer is:
while len(logging.root.handlers):
    logging.root.removeHandler(logging.root.handlers[-1])
Note: it is generally safe to assume logging.root.handlers will always be a list (see: https://github.com/python/cpython/blob/cebe9ee988837b292f2c571e194ed11e7cd4abbb/Lib/logging/__init__.py#L1253)
Here is what I did.
I wanted to log to a file whose name is configured in a config file, and also capture the debug logs of the config parsing itself.
TL;DR: This logs into a buffer until everything needed to configure the logger is available.
import logging
import logging.handlers
import sys

# Log everything into a MemoryHandler until the real logger is ready.
# The MemoryHandler never flushes automatically (its flushLevel of 100 is above CRITICAL),
# only on close. If the configuration loads successfully, the real logger is configured and
# set as the target of the MemoryHandler before it gets flushed by closing.
# This means that if the log ends up on the console, it is unfiltered by level.
root_logger = logging.getLogger()
root_logger.setLevel(logging.NOTSET)
stdout_logging_handler = logging.StreamHandler(sys.stderr)
tmp_logging_handler = logging.handlers.MemoryHandler(1024 * 1024, 100, stdout_logging_handler)
root_logger.addHandler(tmp_logging_handler)

# ApplicationConfig is this answer's own config-parsing helper
config: ApplicationConfig = ApplicationConfig.from_filename('config.ini')

# Because the records are already buffered, unwanted ones need to be removed.
# Wrap in list(): MemoryHandler expects its buffer to be a list, not a lazy filter object.
filtered_buffer = list(filter(lambda record: record.levelno >= config.main_config.log_level,
                              tmp_logging_handler.buffer))
tmp_logging_handler.buffer = filtered_buffer

root_logger.removeHandler(tmp_logging_handler)
logging.basicConfig(filename=config.main_config.log_filename, level=config.main_config.log_level, filemode='wt')

logging_handler = root_logger.handlers[0]
tmp_logging_handler.setTarget(logging_handler)
tmp_logging_handler.close()
stdout_logging_handler.close()