I have a Python script that uses the logging module to log messages to a file. In most cases it works just fine, but on some of my co-workers' machines it logs every message from every imported module to the console window. There doesn't seem to be any pattern to why it works on some machines and not on others.
Here is how I set up the logger:

import logging

# (parser is an argparse.ArgumentParser and main() are defined elsewhere in the script)
logger = logging.getLogger(__name__)

if __name__ == "__main__":
    args = parser.parse_args()
    # do this so that imported modules don't screw with logging
    logging.basicConfig(handlers=[logging.NullHandler()])
    logger.setLevel(logging.DEBUG)
    handler = logging.FileHandler(args.log)
    handler.setLevel(logging.DEBUG)
    formatter = logging.Formatter('[%(asctime)s] %(levelname)s: %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.info(str(args))
    main(args)
So what's happening is: first I set up my logger as a global variable, then I use basicConfig to install a NullHandler on the root logger so that imported modules can't log anything to my log file or my console window, and finally I point my logger at the log file provided in the program arguments.
This works exactly how I want on my machine: the log file is only written to when I use my logger object, no log messages are passed to my console window, and no log messages from imported modules end up in my log file.
However, on a few random machines, every single call to logging or logger also goes to the console window. The log file itself is fine; it only contains the messages I expect (the ones sent through logger).

For instance, when I do logger.info(str(args)),

INFO: Namespace(arg1='Foo', arg2='Bar', arg3='foobar')

is printed to the console window.
I can't think of any reason this would behave differently on different machines. It is the exact same script, the same version of Python, and the same version of the logging module. Any ideas?
Related
I am working on a project with multiple modules for a legacy piece of software in Python 2.7. I log statements from all the modules to a file called debg.log. The code is run from the main Python file, "Automation.py", which then calls multiple modules, one of which is, say, "builds.py". I want the log statements to be written to debg.log in the same sequence in which the statements are executed across the modules.

For example: Automation.py runs first and logs "This is the main file." and "It calls the other modules". It then calls the other module, builds.py, which logs "I am called by the first file". Control then returns to the main file, Automation.py, which logs "Program has ended."
In my program, when I use the logger, debg.log ends up containing only "Program has ended." So the message logged by the second module never makes it into the file.
Now, I also have to make sure the output in debg.log is fresh each time the program is executed, so I need mode='w'; the logs should not be appended to those of previous runs. That is where the problem lies: if I put mode='w' in the module builds.py, it truncates the logs from Automation.py, and when control goes back to Automation.py it truncates the logs from builds.py. Please explain how I should organize the file mode so that everything is printed sequentially and nothing gets truncated.
Below is the logger part of the code of the main module:

# Small snippet of logger code in Automation.py
import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(message)s')
file_handler = logging.FileHandler('debg.log', mode='w')
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler()
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
This is the logger part of the second module:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(message)s')
file_handler = logging.FileHandler('debg.log', mode='w')
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler()
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
As I said, I have tried several different scenarios by changing the file mode so that it prints to the log file correctly, but none of them worked. The only time it did print in a sequential manner was when the mode was left at its default, which is append. But then the logs just kept accumulating in debg.log across runs, which is not what I want.
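For what it's worth, a common way around the truncation problem (a sketch under my assumptions, not code from the original post) is to open debg.log exactly once, in Automation.py, and give builds.py a child logger with no FileHandler of its own. Records then propagate up to the single shared handler, so the file is truncated only at startup and the messages stay in execution order:

```python
import logging

# Sketch: configure the shared FileHandler once, in the main module only.
# Other modules create child loggers and rely on propagation, so debg.log
# is truncated exactly once, at startup.
main_log = logging.getLogger('automation')           # in Automation.py
main_log.setLevel(logging.INFO)
fh = logging.FileHandler('debg.log', mode='w')       # the only mode='w' open
fh.setFormatter(logging.Formatter('%(message)s'))
main_log.addHandler(fh)

# builds.py would contain only this -- no FileHandler of its own:
builds_log = logging.getLogger('automation.builds')  # child of 'automation'

main_log.info('This is the main file.')
builds_log.info('I am called by the first file')     # propagates up to fh
main_log.info('Program has ended.')
```

The logger names 'automation' and 'automation.builds' are illustrative; any dotted parent/child pair works the same way.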
It seems that in Python, setting the logging level for loggers (including the root logger) is not applied until you use one of the logging module's module-level logging functions. Here's some code to show what I mean (I used Python 3.7):
import logging

if __name__ == "__main__":
    # Create a test logger and set its logging level to DEBUG
    test_logger = logging.getLogger("test")
    test_logger.setLevel(logging.DEBUG)

    # Log some debug messages
    # THESE ARE NOT PRINTED
    test_logger.debug("test debug 1")

    # Now use the logging module directly to log something
    logging.debug("whatever, you won't see this anyway")

    # Apparently the line above "fixed" the logging for the custom logger
    # and you should be able to see the message below
    test_logger.debug("test debug 2")
Output:
DEBUG:test:test debug 2
Maybe there's something I misunderstood about the configuration of the loggers, in that case I'd appreciate to know the correct way of doing it.
You didn't (explicitly) call logging.basicConfig, so no handler was configured at the point of the first debug call.
test_logger initially has no handler, because you didn't add one and the root logger doesn't have one yet. So although the message is "logged", nothing defines what that actually means.
When you call logging.debug, logging.basicConfig is called for you, because the root logger has no handler. At this point a StreamHandler is created and attached to the root logger, but the root logger remains at its default level of WARNING, so this particular DEBUG message is not emitted.
Now when you call test_logger.debug again, the record propagates up to the root logger, whose StreamHandler actually outputs the log message to standard error.
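So the straightforward fix (a sketch restating the point above, not code from the question) is to configure the root handler explicitly before logging anything:

```python
import logging

logging.basicConfig(level=logging.DEBUG)   # attach a StreamHandler up front

test_logger = logging.getLogger("test")
test_logger.setLevel(logging.DEBUG)
test_logger.debug("test debug 1")          # now printed: DEBUG:test:test debug 1
```

basicConfig's default format string is '%(levelname)s:%(name)s:%(message)s', which is where the DEBUG:test: prefix comes from.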
I have a situation where I want to create two separate logger objects in Python, each with their own independent handler. By "separate," I mean that I want to be able to pass a log statement to each object independently, without contaminating the other log.
main.py
import logging
import sys

from my_other_logger import init_other_logger

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO, handlers=[logging.StreamHandler(sys.stdout)])
other_logger = init_other_logger(__name__)

logger.info('Hello World') # Don't want to see this in the other logger
other_logger.info('Goodbye World') # Don't want to see this in the first logger
my_other_logger.py
import logging
import os, sys
def init_other_logger(namespace):
    logger = logging.getLogger(namespace)
    logger.setLevel(logging.DEBUG)
    fh = logging.FileHandler(LOG_FILE_PATH)
    logger.addHandler(fh)
    formatter = logging.Formatter('%(levelname)s:%(name)s:%(message)s')
    fh.setFormatter(formatter)
    #logger.propagate = False
    return logger
The only configuration I've been able to identify as useful here is the logger.propagate property. Running the code above as-is pipes all log statements to both the log stream and the log file. When I have logger.propagate = False nothing is piped to the log stream, and both log objects again pipe their output to the log file.
How can I create one log object that sends logs to only one handler, and another log object that sends logs to another handler?
Firstly, let's see what's going on before we can head over to the solution.
logger = logging.getLogger(__name__): when you do this you're getting or creating a logger named '__main__'. Since this is the first call, it creates that logger.

other_logger = init_other_logger(__name__): when you do this you're again getting or creating the logger named '__main__'. Since this is the second call, it fetches the logger created above. So you're not really instantiating a new logger; you're getting a reference to the same one created above. You can check this by adding a print after the init_other_logger call: print(logger is other_logger).

What happens next is that you add a FileHandler and a Formatter to the '__main__' logger (inside the init_other_logger function) and make two log calls via info(). But you're doing it all with the same logger.
So this:
logger.info('Hello World')
other_logger.info('Goodbye World')
is essentially the same thing as this:
logger.info('Hello World')
logger.info('Goodbye World')
Now it's not so surprising anymore that both loggers output to both the file and stream.
Solution
So the obvious thing to do is to call your init_other_logger with another name.
I would recommend against the solution the other answer proposes, because that's not how things should be done when you need an independent logger. The documentation puts it nicely: you should never instantiate a logger directly, but always via the logging module's getLogger function.

As we discovered above, a call to logging.getLogger(logger_name) either gets or creates the logger named logger_name, so it works perfectly well when you want a unique logger too. Just remember the function is idempotent: it creates a logger with a given name only the first time, and returns that same logger on every later call with the same name, no matter how many times you call it.
So, for example:
a first call of the form logging.getLogger('the_rock') - creates your unique logger
a second call of the form logging.getLogger('the_rock') - fetches the above logger
You can see that this is particularly useful if you, for instance:
Have a logger configured with Formatters and Filters somewhere in your project, for instance in project_root/main_package/__init__.py.
Want to use that logger somewhere in a secondary package which sits in project_root/secondary_package/__init__.py.
In secondary_package/__init__.py you could do a simple call of the form: logger = logging.getLogger('main_package') and you'll use that logger with all its bells and whistles.
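The idempotence is easy to verify (a minimal sketch; 'main_package' is just an illustrative name):

```python
import logging

a = logging.getLogger('main_package')   # first call: creates the logger
a.addHandler(logging.StreamHandler())   # configure it once

b = logging.getLogger('main_package')   # later call, e.g. from secondary_package
assert a is b                           # same object, handlers and all
assert b.handlers == a.handlers
```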
Attention!
Even if, at this point, you use your init_other_logger function to create a unique logger, it will still output to both the file and the console. Replace the line other_logger = init_other_logger(__name__) with other_logger = init_other_logger('the_rock') to create a unique logger and run the code again: you will still see the output written to both the console and the file.
Why?
Because it will use both the FileHandler and the StreamHandler.
Why?
Because of the way the logging machinery works: your logger emits the message via its own handlers, then the record propagates all the way up to the root logger, where it is also handled by the StreamHandler you attached via the basicConfig call. So the propagate property you discovered is exactly what you want here, because you're creating a custom logger that should emit messages only via its manually attached handlers and not pass them any further. Uncomment the logger.propagate = False after creating the unique logger and you'll see that everything works as expected.
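Putting the whole answer together (a sketch; 'the_rock' and other.log are illustrative names, not from the question):

```python
import logging
import sys

logging.basicConfig(level=logging.INFO,
                    handlers=[logging.StreamHandler(sys.stdout)])
stream_logger = logging.getLogger(__name__)

file_logger = logging.getLogger('the_rock')    # distinct name -> distinct logger
file_logger.setLevel(logging.DEBUG)
fh = logging.FileHandler('other.log')
fh.setFormatter(logging.Formatter('%(levelname)s:%(name)s:%(message)s'))
file_logger.addHandler(fh)
file_logger.propagate = False                  # don't bubble up to root's StreamHandler

stream_logger.info('Hello World')    # console only
file_logger.info('Goodbye World')    # file only
```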
Both of your handlers are installed on the same logger. That is why they aren't separate.
logger is other_logger because logging.getLogger(__name__) is logging.getLogger(__name__)
Either create a logger directly for the second log with logging.Logger(name) (I know the documentation says never to do this, but if you want an entirely independent logger this is how to do it), or use a different name for the second log when calling logging.getLogger().
I can't understand why the following code does not produce my debug message, even though the effective level is appropriate (the output is just 10):
import logging
l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("Debug Mess!")
l.error(l.getEffectiveLevel())
while when I add this line after the import: logging.debug("Start...")
import logging
logging.debug("Start...")
l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("Debug Mess!")
l.error(l.getEffectiveLevel())
it produces following output:
DEBUG:root:Debug Mess!
ERROR:root:10
so even though "Start..." is not shown, it starts to log. Why?
It's on Python 3.5. Thanks
The top-level logging.debug(..) call calls the logging.basicConfig() function for you if no handlers have been configured yet on the root logger.
A call to logging.getLogger().debug() alone does not trigger that configuration, so you see no output: there are no handlers to send it to.
The Python 3 version of logging does have a logging.lastResort handler, used when no logging configuration exists, but this handler is configured to show only messages of level WARNING and up, which is why you see your ERROR-level message (the 10) printed to STDERR, but not your DEBUG-level message. In Python 2 you would instead get the message No handlers could be found for logger "root" printed once, on the first attempt to log anything. I'd not rely on the lastResort handler, however; instead, properly configure your logging hierarchy with a decent handler suited to your own needs.
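You can confirm the lastResort threshold directly (a quick check, not part of the original answer):

```python
import logging

# The module-level fallback handler exists and is capped at WARNING,
# which is why ERROR (40) gets through but DEBUG (10) does not.
print(logging.lastResort.level == logging.WARNING)  # True
```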
Either call logging.basicConfig() yourself, or manually add a handler on the root logger:
l = logging.getLogger()
l.addHandler(logging.StreamHandler())
The above basically does the same thing as a logging.basicConfig() call with no further arguments. The StreamHandler() created this way logs to STDERR and does not further filter on the message level. Note that a logging.basicConfig() call can also set the logging level for you.
The root logger (the default, top-level logger) and every other logger default to level WARNING, the third of the five standard levels: DEBUG < INFO < WARNING < ERROR < CRITICAL.

So at your first logging.debug('Start...'), you haven't yet set the root logger's level to DEBUG as in the following code, which is why you don't get the Start... output:
import logging
logging.basicConfig(level=logging.DEBUG)
logging.debug('starting...')
See the Python Logging HOWTO for details.
I have a package that relies on several different modules, each of which sets up its own logger. That allows me to log where each log message originates from, which is useful.
However, when using this code in an IPython/Jupyter notebook, I was having trouble controlling what got printed to the screen. Specifically, I was getting a lot of DEBUG-level messages that I didn't want to see.
How do I change the level of logs that get printed to the notebook?
More info:
I've tried to set up a root logger in the notebook as follows:
# In notebook
import logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Import the module
import mymodule
And then at the top of my modules, I have
# In mymodule.py
import logging
logger = logging.getLogger('mypackage.' + __name__)
logger.setLevel(logging.DEBUG)
logger.propagate = True
# Log some messages
logger.debug('debug')
logger.info('info')
When the module code is called in the notebook, I would expect the log records to propagate up and the top logger to print only the info statement. But both the debug and the info statements are shown.
Related links:
From this IPython issue, it seems that there are two different levels of logging that one needs to be aware of. One, which is set in the ipython_notebook_config file, only affects the internal IPython logging level. The other is the IPython logger, accessed with get_ipython().parent.log.
https://github.com/ipython/ipython/issues/8282
https://github.com/ipython/ipython/issues/6746
With current ipython/Jupyter versions (e.g. 6.2.1), the logging.getLogger().handlers list is empty after startup and logging.getLogger().setLevel(logging.DEBUG) has no effect, i.e. no info/debug messages are printed.
Inside ipython, you have to change an ipython configuration setting (and possibly work around ipython bugs), as well. For example, to lower the logging threshold to debug messages:
# workaround via specifying an invalid value first
%config Application.log_level='WORKAROUND'
# => fails, necessary on Fedora 27, ipython3 6.2.1
%config Application.log_level='DEBUG'
import logging
logging.getLogger().setLevel(logging.DEBUG)
log = logging.getLogger()
log.debug('Test debug')
For just getting the debug messages of one module (cf. the __name__ value in that module) you can replace the above setLevel() call with a more specific one:
logging.getLogger('some.module').setLevel(logging.DEBUG)
The root cause of this issue (from https://github.com/ipython/ipython/issues/8282) is that the Notebook creates a root logger by default (which is different from IPython default behavior!). The solution is to get at the handler of the notebook logger, and set its level:
# At the beginning of the notebook
import logging
logger = logging.getLogger()
assert len(logger.handlers) == 1
handler = logger.handlers[0]
handler.setLevel(logging.INFO)
With this, I don't need to set logger.propagate = True in the modules and it works.
Adding another solution because this one was easier for me. On startup of the IPython kernel:
import logging
logging.basicConfig(level=20)  # 20 == logging.INFO
Then this works:
logging.getLogger().info("hello")
>> INFO:root:hello
logging.info("hello")
>> INFO:root:hello
And if I have similar logging code in a function that I import and run, the message will display as well.
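As a footnote (mine, not the answer's): the numeric argument works because the named levels are plain integers, so level=20 above is the same as logging.INFO:

```python
import logging

# Named levels and their numeric values
assert logging.DEBUG == 10
assert logging.INFO == 20
assert logging.WARNING == 30
assert logging.ERROR == 40
assert logging.CRITICAL == 50
print(logging.getLevelName(20))  # INFO
```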