Here is the sample code from the source: https://docs.python.org/3/howto/logging.html
import logging
logging.basicConfig(filename='example.log', encoding='utf-8', level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
logging.error('And non-ASCII stuff, too, like Øresund and Malmö')
I thought that level=logging.DEBUG in the code below
logging.basicConfig(filename='example.log', encoding='utf-8', level=logging.DEBUG)
made logging accept only DEBUG messages. Then why do info, warning and error work?
The logging levels in Python are arranged in the following order:
CRITICAL
ERROR
WARNING
INFO
DEBUG
NOTSET
If you set the root logger level to logging.DEBUG when configuring, it will write all messages at that level and above.
Example:
import logging
logging.basicConfig(filename='example.log', encoding='utf-8', level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
logging.error('And non-ASCII stuff, too, like Øresund and Malmö')
Output:
DEBUG:root:This message should go to the log file
INFO:root:So should this
WARNING:root:And this, too
ERROR:root:And non-ASCII stuff, too, like Øresund and Malmö
If you set the root logger level to logging.ERROR when configuring, it will write only the ERROR and CRITICAL messages.
Example:
import logging
logging.basicConfig(filename='example.log', level=logging.ERROR)
logging.warning('this is warning')
logging.info('this is info')
logging.error('And non-ASCII stuff, too, like Øresund and Malmö')
logging.critical('And non-ASCII stuff, too, like Øresund and Malmö 2')
Output:
ERROR:root:And non-ASCII stuff, too, like Øresund and Malmö
CRITICAL:root:And non-ASCII stuff, too, like Øresund and Malmö 2
Log levels are actually integers and they set a threshold. For instance, logging.DEBUG is 10 and logging.INFO is 20. When you set a log level, you allow that level and anything higher: with DEBUG set, any level of 10 or greater will log. Filter objects are commonly used if you want a finer degree of control.
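You can verify the numeric values and the threshold behaviour yourself; a minimal sketch (the logger name "threshold-demo" is just illustrative):
import logging

# The level constants are plain integers.
print(logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
# 10 20 30 40 50

# setLevel() sets a threshold: records at that level or higher get through.
logger = logging.getLogger("threshold-demo")
logger.setLevel(logging.INFO)
print(logger.isEnabledFor(logging.DEBUG))  # False, 10 < 20
print(logger.isEnabledFor(logging.ERROR))  # True, 40 >= 20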
Related
Is there a way to save a program's input and output to a text file? Say I have a really simple application and I want to log the shell session to a "log.txt" file.
What's your name?
Peter
How old are you?
35
Where are you from?
Germany
Hi Peter, you're 35 years old and you're from Germany!
I believe that the logging module could be the way but I don't know how to use it.
In the official documentation there is a Basic Logging Tutorial.
A very simple example would be:
import logging
logging.basicConfig(filename='example.log', encoding='utf-8', level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
logging.error('And non-ASCII stuff, too, like Øresund and Malmö')
That's the explanation:
First you import logging, and after that you configure it with basicConfig indicating the file that you want to save the log messages to.
Then, every time you want to add something to the log, you use the logging functions debug(), info(), warning(), error() and critical(), which are named after the level or severity of the events they are used to track.
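Applied to the question above, a minimal sketch might look like this (the prompts and the "log.txt" name come from the question; the format string is just one possible choice):
import logging

logging.basicConfig(filename='log.txt', level=logging.INFO,
                    format='%(asctime)s %(message)s')

name = input("What's your name? ")
logging.info("What's your name? %s", name)
age = input("How old are you? ")
logging.info("How old are you? %s", age)
country = input("Where are you from? ")
logging.info("Where are you from? %s", country)

summary = f"Hi {name}, you're {age} years old and you're from {country}!"
print(summary)
logging.info(summary)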
It seems that in Python, setting the logging level for loggers (including the root logger) is not applied until you use one of the logging module's module-level logging functions. Here is some code to show what I mean (I used Python 3.7):
import logging
if __name__ == "__main__":
    # Create a test logger and set its logging level to DEBUG
    test_logger = logging.getLogger("test")
    test_logger.setLevel(logging.DEBUG)
    # Log some debug messages
    # THESE ARE NOT PRINTED
    test_logger.debug("test debug 1")
    # Now use the logging module directly to log something
    logging.debug("whatever, you won't see this anyway")
    # Apparently the line above "fixed" the logging for the custom logger
    # and you should be able to see the message below
    test_logger.debug("test debug 2")
Output:
DEBUG:test:test debug 2
Maybe there's something I misunderstood about the configuration of the loggers, in that case I'd appreciate to know the correct way of doing it.
You didn't (explicitly) call logging.basicConfig, so no handler has been configured yet.
test_logger initially has no handler, because you didn't add one and the root logger doesn't have one yet. So although the message is "logged", nothing defines what that actually means.
When you call logging.debug, logging.basicConfig is called for you, because the root logger has no handler. At this point a StreamHandler is created and attached to the root logger, but the root logger remains at its default level of WARNING, so that particular message is not emitted.
Now when you call test_logger.debug again, the record propagates up to the root logger's new StreamHandler, which actually writes the log message to standard error.
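One minimal way to make both messages from the snippet above appear is to configure the root logger explicitly before logging anything; a sketch under that assumption:
import logging

# Configure the root logger first: adds a StreamHandler and sets the level.
logging.basicConfig(level=logging.DEBUG)

test_logger = logging.getLogger("test")
test_logger.setLevel(logging.DEBUG)

test_logger.debug("test debug 1")  # DEBUG:test:test debug 1
test_logger.debug("test debug 2")  # DEBUG:test:test debug 2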
I need to raise the severity of all debug logs in my script and log them as INFO. I was intending to use a handler for this. I am importing a package that logs at DEBUG level so I cannot change the original logging level. I need them to be logged as INFO in my script.
You can consider using a module-specific logger. That way, in your script you can set the severity level to DEBUG while keeping the package's logger at its own level.
In doing so you create the module logger and set its level instead of the root logger's (changing the root logger would also affect the package's output).
This is an example of a module-specific logger:
import logging

# this creates a logger specific to "__name__", which is the name of your module
module_logger = logging.getLogger(__name__)
module_logger.setLevel(logging.DEBUG)
module_logger.log(...)
Now you can use this logger to log while keeping everything else the same; a runnable sketch follows below.
Hope that helps!
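A self-contained sketch of that idea (the handler and the message are only illustrative; the package's own logger is not touched):
import logging

# Module-specific logger; the root logger and any package loggers stay as they are.
module_logger = logging.getLogger(__name__)
module_logger.setLevel(logging.DEBUG)

# Give it its own handler so its DEBUG output is visible regardless of the root level.
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
module_logger.addHandler(handler)

module_logger.debug("debug message from this module only")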
I can't understand why the following code does not produce my debug message even though the effective level is appropriate (the output is just 10):
import logging
l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("Debug Mess!")
l.error(l.getEffectiveLevel())
But when I add this line right after the import: logging.debug("Start..."), as in
import logging
logging.debug("Start...")
l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("Debug Mess!")
l.error(l.getEffectiveLevel())
it produces the following output:
DEBUG:root:Debug Mess!
ERROR:root:10
So even though "Start..." is not shown, logging starts to work. Why?
This is on Python 3.5. Thanks.
The top-level logging.debug(..) call calls the logging.basicConfig() function for you if no handlers have been configured yet on the root logger.
A call to logging.getLogger().debug() does not trigger that implicit configuration, so you don't see any output: there are no handlers to show it on.
The Python 3 version of the logging module does have a logging.lastResort handler, used when no logging configuration exists, but this handler is configured to show only messages of level WARNING and up, which is why you see your ERROR-level message (10) printed to STDERR but not your DEBUG-level message. In Python 2 you would instead get the message No handlers could be found for logger "root", printed just once for the first attempt to log anything. I'd not rely on the lastResort handler, however; instead, properly configure your logging hierarchy with a decent handler configured for your own needs.
Either call logging.basicConfig() yourself, or manually add a handler on the root logger:
l = logging.getLogger()
l.addHandler(logging.StreamHandler())
The above basically does the same thing as a logging.basicConfig() call with no further arguments. The StreamHandler() created this way logs to STDERR and does not further filter on the message level. Note that a logging.basicConfig() call can also set the logging level for you.
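For the snippet in the question, either variant makes the DEBUG message appear; a sketch (pick one, not both):
import logging

# Variant 1: let basicConfig() create the handler and set the level in one call.
logging.basicConfig(level=logging.DEBUG)
l = logging.getLogger()
l.debug("Debug Mess!")    # DEBUG:root:Debug Mess!

# Variant 2: add a handler yourself (no formatting by default).
# l = logging.getLogger()
# l.setLevel(logging.DEBUG)
# l.addHandler(logging.StreamHandler())
# l.debug("Debug Mess!")  # Debug Mess!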
The root logger (the default, top-level logger) defaults to the WARNING level, and other loggers fall back to the root's level; the five levels in order are DEBUG < INFO < WARNING < ERROR < CRITICAL.
So at your first logging.debug("Start...") call you had not yet set the root logger level to DEBUG, as the following code does, and that is why the "Start..." output does not appear:
import logging
logging.basicConfig(level=logging.DEBUG)
logging.debug('starting...')
See the Python Logging HOWTO for details.
I want to output some strings to a log file and I want the log file to be continuously updated.
I have looked into Python's logging module and found that it is mostly about formatting and concurrent access.
Please let me know if I am missing something, or about any other way of doing it.
Usually I do the following:
import logging

# logging
LOG = "/tmp/ccd.log"
logging.basicConfig(filename=LOG, filemode="w", level=logging.DEBUG)
# console handler
console = logging.StreamHandler()
console.setLevel(logging.ERROR)
logging.getLogger("").addHandler(console)
The logging part initialises logging's basic configuration. After that I set up a console handler that prints some logging information separately. Usually my console output is set to show only errors (logging.ERROR), while the detailed output goes to the LOG file.
Your log messages will now be written to the file. For instance, using:
logger = logging.getLogger(__name__)
logger.debug("hiho debug message")
or even
logging.debug("next line")
should work.
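To sanity-check the split between file and console, a self-contained sketch of the same setup (file name and messages are just placeholders): the DEBUG record lands only in the file, while the ERROR record also reaches the console.
import logging

LOG = "/tmp/ccd.log"
logging.basicConfig(filename=LOG, filemode="w", level=logging.DEBUG)

console = logging.StreamHandler()
console.setLevel(logging.ERROR)
logging.getLogger("").addHandler(console)

logging.debug("goes to the log file only")
logging.error("goes to the log file and to the console")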
Doug Hellmann has a nice guide.
To add my 10 cents with regard to using logging: I only recently discovered the logging module and was put off at first, maybe just because it initially looks like a lot of work, but it's really simple and incredibly handy.
This is the setup that I use. Similar to Mkind's answer, but it includes a timestamp.
import logging

# Set up logging
log = "bot.log"
logging.basicConfig(filename=log, level=logging.DEBUG,
                    format='%(asctime)s %(message)s', datefmt='%d/%m/%Y %H:%M:%S')
logging.info('Log Entry Here.')
Which will produce something like:
22/09/2015 14:39:34 Log Entry Here.
You can log to a file with the Logging API.
Example: http://docs.python.org/2/howto/logging.html#logging-to-a-file
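In the same spirit, a minimal file-logging sketch (the file name and message are placeholders); the default filemode is 'a', so the file is appended to across runs and stays continuously updated:
import logging

logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')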