I have a number of scripts that use logging.error(msg) style logging messages.
I originally had these handled by a FileHandler but now I need to change it over to a SysLogHandler. I can't get the SysLogHandler to work, by which I mean there are no exceptions but nothing ends up in the logs.
On the other hand, I can get messages logged through syslog.syslog with this code:
import syslog
syslog.openlog(ident="LOG_IDENTIFIER", logoption=syslog.LOG_PID, facility=syslog.LOG_LOCAL3)
syslog.syslog('syslog test msg local3')
and this is what I'm using for the sysloghandler:
import logging
import logging.handlers
logger = logging.getLogger('myLogger')
handler = logging.handlers.SysLogHandler(address='/dev/log', facility=19)
logger.addHandler(handler)
logging.error('sysloghandler test msg local3')
facility=19 because that's the value of SysLogHandler.LOG_LOCAL3.
I'm thinking the issue is the address setting. The log files are located in /var/log, but passing that as the address doesn't seem to work. I've tried not specifying an address (so it defaults to localhost port 514), but still nothing gets logged.
I need to know either how I can determine the correct address info or how to troubleshoot the issue if that isn't the problem.
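One thing worth checking before the address: in the snippet above, the handler is attached to 'myLogger', but the message is sent through logging.error(), i.e. the root logger, so the record never reaches the SysLogHandler at all. Here is a minimal sketch of a setup that should log to local3, assuming /dev/log is your system's syslog socket:
import logging
import logging.handlers

logger = logging.getLogger('myLogger')
logger.setLevel(logging.DEBUG)
# LOG_LOCAL3 is the symbolic constant for facility 19
handler = logging.handlers.SysLogHandler(
    address='/dev/log',
    facility=logging.handlers.SysLogHandler.LOG_LOCAL3)
logger.addHandler(handler)
logger.error('sysloghandler test msg local3')  # note: logger, not logging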
I am trying to understand how fabric's logger module works.
I run on the command line:
$ fab -I task-1
I of course get output to the console showing me the execution of the task on each of the remote hosts connected to.
But how can I redirect the output of errors to a logfile on my local machine and put a timestamp on it?
Does Fabric's logger module provide this, or should I use Python's logging module? Either way, I am not sure how to implement it.
Unfortunately, Fabric does not feature logging to a file (see issue #57).
But there is a workaround using the logging module, which I find pretty nice.
First, configure your logger:
import logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename="out.log",
    filemode='a'
)
And then wrap the portions of your code which are likely to throw errors in a try/except block like this:
try:
    # code that may fail
except:
    logging.exception('Error:')
The logger will print 'Error:' and the exception's stack trace to "out.log".
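Put together, a hypothetical fabfile.py might look like this (Fabric 1.x API assumed; note that a failed run() aborts by raising SystemExit, hence the broad except clause):
import logging
from fabric.api import run, task

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename="out.log",
    filemode='a'
)

@task
def task_1():
    try:
        run("uname -a")  # any remote command that might fail
    except BaseException:  # Fabric's abort raises SystemExit, so catch broadly
        logging.exception('Error:')  # timestamped traceback lands in out.log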
I can't understand why the following code does not produce my debug message, even though the effective level is appropriate (the output is just 10):
import logging
l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("Debug Mess!")
l.error(l.getEffectiveLevel())
whereas when I add logging.debug("Start...") right after the import:
import logging
logging.debug("Start...")
l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("Debug Mess!")
l.error(l.getEffectiveLevel())
it produces following output:
DEBUG:root:Debug Mess!
ERROR:root:10
so even though "Start..." is not shown, it starts to log. Why?
It's on Python 3.5. Thanks
The module-level logging.debug(..) call invokes the logging.basicConfig() function for you if no handlers have been configured yet on the root logger.
A call to logging.getLogger().debug() does not trigger that, so no handlers exist and there is nothing to show the output on.
The Python 3 version of the logging module does have a logging.lastResort handler, used when no logging configuration exists, but this handler is configured to only show messages of level WARNING and up, which is why you see your ERROR level message (10) printed to STDERR, but not your DEBUG level message. In Python 2 you would instead get the message No handlers could be found for logger "root" printed once, on the first attempt to log anything. I'd not rely on the lastResort handler, however; instead, properly configure your logging hierarchy with a decent handler configured for your own needs.
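You can see that threshold directly; a minimal demo, assuming a fresh interpreter:
import logging

l = logging.getLogger()
l.setLevel(logging.DEBUG)
l.debug("dropped")   # 10 is below lastResort's WARNING threshold
l.warning("shown")   # written to STDERR by logging.lastResort
print(logging.lastResort.level == logging.WARNING)  # True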
Either call logging.basicConfig() yourself, or manually add a handler on the root logger:
l = logging.getLogger()
l.addHandler(logging.StreamHandler())
The above basically does the same thing as a logging.basicConfig() call with no further arguments. The StreamHandler() created this way logs to STDERR and does not further filter on the message level. Note that a logging.basicConfig() call can also set the logging level for you.
The root logger (the default, top-level logger) has a default level of WARNING, the third of the five standard levels: DEBUG < INFO < WARNING < ERROR < CRITICAL. Every other logger inherits that effective level unless told otherwise.
So at your first logging.debug('Start...'), you haven't yet set the root log level to DEBUG as in the following code, which is why you don't get the "Start..." output:
import logging
logging.basicConfig(level=logging.DEBUG)
logging.debug('starting...')
See the Python logging HOWTO for details.
I have a package that relies on several different modules, each of which sets up its own logger. That allows me to log where each log message originates from, which is useful.
However, when using this code in an IPython/Jupyter notebook, I was having trouble controlling what got printed to the screen. Specifically, I was getting a lot of DEBUG-level messages that I didn't want to see.
How do I change the level of logs that get printed to the notebook?
More info:
I've tried to set up a root logger in the notebook as follows:
# In notebook
import logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Import the module
import mymodule
And then at the top of my modules, I have
# In mymodule.py
import logging
logger = logging.getLogger('mypackage.' + __name__)
logger.setLevel(logging.DEBUG)
logger.propagate = True
# Log some messages
logger.debug('debug')
logger.info('info')
When the module code is called in the notebook, I would expect the logs to be propagated up and the top-level logger to print only the info statement. But both the debug and the info statements get shown.
Related links:
From this IPython issue, it seems that there are two different levels of logging that one needs to be aware of. One, which is set in the ipython_notebook_config file, only affects the internal IPython logging level. The other is the IPython logger, accessed with get_ipython().parent.log.
https://github.com/ipython/ipython/issues/8282
https://github.com/ipython/ipython/issues/6746
With current ipython/Jupyter versions (e.g. 6.2.1), the logging.getLogger().handlers list is empty after startup and logging.getLogger().setLevel(logging.DEBUG) has no effect, i.e. no info/debug messages are printed.
Inside ipython, you have to change an ipython configuration setting (and possibly work around ipython bugs), as well. For example, to lower the logging threshold to debug messages:
# workaround via specifying an invalid value first
%config Application.log_level='WORKAROUND'
# => fails, necessary on Fedora 27, ipython3 6.2.1
%config Application.log_level='DEBUG'
import logging
logging.getLogger().setLevel(logging.DEBUG)
log = logging.getLogger()
log.debug('Test debug')
For just getting the debug messages of one module (cf. the __name__ value in that module) you can replace the above setLevel() call with a more specific one:
logging.getLogger('some.module').setLevel(logging.DEBUG)
The root cause of this issue (from https://github.com/ipython/ipython/issues/8282) is that the Notebook creates a root logger by default (which is different from IPython default behavior!). The solution is to get at the handler of the notebook logger, and set its level:
# At the beginning of the notebook
import logging
logger = logging.getLogger()
assert len(logger.handlers) == 1
handler = logger.handlers[0]
handler.setLevel(logging.INFO)
With this, I don't need to set logger.propagate = True in the modules and it works.
Adding another solution because it was easier for me. On startup of the IPython kernel:
import logging
logging.basicConfig(level=20)  # 20 == logging.INFO
Then this works:
logging.getLogger().info("hello")
>> INFO:root:hello
logging.info("hello")
>> INFO:root:hello
And if I have similar logging code in a function that I import and run, the message will display as well.
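For example, a hypothetical helpers.py imported into the notebook behaves the same way:
# helpers.py
import logging

def greet():
    logging.getLogger(__name__).info("hello from a module")
Calling helpers.greet() in the notebook then prints INFO:helpers:hello from a module.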
I want to output some strings to a log file and I want the log file to be continuously updated.
I have looked into Python's logging module and found that it is mostly about formatting and concurrent access.
Please let me know if I am missing something, or if there is any other way of doing it.
Usually I do the following:
# logging
LOG = "/tmp/ccd.log"
logging.basicConfig(filename=LOG, filemode="w", level=logging.DEBUG)
# console handler
console = logging.StreamHandler()
console.setLevel(logging.ERROR)
logging.getLogger("").addHandler(console)
The logging part initialises logging's basic configurations. In the following I set up a console handler that prints out some logging information separately. Usually my console output is set to output only errors (logging.ERROR) and the detailed output in the LOG file.
Your log messages will now be written to the file. For instance, using:
logger = logging.getLogger(__name__)
logger.debug("hiho debug message")
or even
logging.debug("next line")
should work.
Doug Hellmann has a nice guide.
To add my ten cents with regard to using logging: I only recently discovered the logging module and was put off at first, maybe because it initially looks like a lot of work, but it's really simple and incredibly handy.
This is the setup that I use. Similar to Mkind's answer, but it includes a timestamp.
# Set up logging
log = "bot.log"
logging.basicConfig(filename=log, level=logging.DEBUG,
                    format='%(asctime)s %(message)s',
                    datefmt='%d/%m/%Y %H:%M:%S')
logging.info('Log Entry Here.')
Which will produce something like:
22/09/2015 14:39:34 Log Entry Here.
You can log to a file with the Logging API.
Example: http://docs.python.org/2/howto/logging.html#logging-to-a-file
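The gist of the linked example, as a minimal sketch:
import logging

logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')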
It appears that if you invoke logging.info() BEFORE you run logging.basicConfig, the logging.basicConfig call doesn't have any effect. In fact, no logging occurs.
Where is this behavior documented? I don't really understand.
You can remove the default handlers and reconfigure logging like this:
# if someone tried to log something before basicConfig is called, Python creates a default handler that
# goes to the console and will ignore further basicConfig calls. Remove the handler if there is one.
root = logging.getLogger()
if root.handlers:
    for handler in root.handlers:
        root.removeHandler(handler)
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
Yes.
You've asked to log something. Logging must, therefore, fabricate a default configuration. Once logging is configured... well... it's configured.
"With the logger object configured,
the following methods create log
messages:"
Further, you can read about creating handlers to prevent spurious logging. But that's more a hack for a bad implementation than a useful technique.
There's a trick to this.
No module can do anything except logging.getLogger() requests at the global level.
Only the if __name__ == "__main__": block can do logging configuration.
If you do logging at the global level in a module, then you may force logging to fabricate its default configuration.
Don't do logging.info globally in any module. If you absolutely think that you must have logging.info at a global level in a module, then you have to configure logging before doing imports. This leads to unpleasant-looking scripts.
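A minimal sketch of that structure (module and script names are hypothetical):
# mymodule.py: at import time, only ask for a logger; never configure or log
import logging

logger = logging.getLogger(__name__)

def do_work():
    logger.info("working")

# main.py: configuration happens only under the main guard
import logging
import mymodule

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    mymodule.do_work()  # INFO:mymodule:working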
This answer from Carlos A. Ibarra is in principle right; however, that implementation might break, since you are iterating over a list that may be changed by calling removeHandler(). This is unsafe.
Two alternatives are:
while len(logging.root.handlers) > 0:
    logging.root.removeHandler(logging.root.handlers[-1])
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
or:
logging.root.handlers = []
logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)
The first of these two, using the loop, is the safer (since any destruction code for the handler can be called explicitly inside the logging framework). Still, both are hacks, since we rely on logging.root.handlers being a list.
Here's the one piece of the puzzle that the above answers didn't mention... and then it will all make sense: the "root" logger -- which is used if you call, say, logging.info() before logging.basicConfig(level=logging.DEBUG) -- has a default logging level of WARNING.
That's why logging.info() and logging.debug() don't do anything: because you've configured them not to, by... um... not configuring them.
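A minimal demonstration, assuming a fresh interpreter:
import logging

logging.info("dropped")   # the first call configures a handler, but root stays at WARNING
logging.warning("shown")  # WARNING:root:shown
logging.basicConfig(level=logging.DEBUG)  # no-op: a handler already exists
logging.info("still dropped")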
Possibly related (this one bit me): when NOT calling basicConfig, I didn't seem to be getting my debug messages, even though I set my handlers to DEBUG level. After a bit of hair-pulling, I found you have to set the level of the custom logger to be DEBUG as well. If your logger is set to WARNING, then setting a handler to DEBUG (by itself) won't get you any output on logger.info() and logger.debug().
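A minimal sketch of that gotcha (the logger name is hypothetical):
import logging

logger = logging.getLogger("demo")
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)  # the handler alone is not enough
logger.addHandler(handler)
logger.debug("dropped")          # the logger's effective level is still WARNING
logger.setLevel(logging.DEBUG)   # the logger itself must be lowered too
logger.debug("now shown")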
Ran into this same issue today and, as an alternative to the answers above, here's my solution.
import logging
import sys
logging.debug('foo') # IRL, this call is from an imported module
if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO, force=True)
    logging.info('bar')  # without force=True, this is not printed to the console
Here's what the docs say about the force argument.
If this keyword argument is specified as true, any existing handlers attached to the root logger are removed and closed, before carrying out the configuration as specified by the other arguments.
A cleaner version of the answer given by @paul-kremer is:
while len(logging.root.handlers):
    logging.root.removeHandler(logging.root.handlers[-1])
Note: it is generally safe to assume logging.root.handlers will always be a list (see: https://github.com/python/cpython/blob/cebe9ee988837b292f2c571e194ed11e7cd4abbb/Lib/logging/__init__.py#L1253)
Here is what I did.
I wanted to log to a file whose name is configured in a config file, and also capture the debug logs of the config parsing itself.
TL;DR: this logs into a buffer until everything needed to configure the real logger is available.
import logging
import logging.handlers
import sys

# Log everything into a MemoryHandler until the real logger is ready.
# The MemoryHandler never flushes automatically (flushLevel 100 is above CRITICAL), only on close.
# If the configuration was loaded successfully, the real logger is configured and set as target of
# the MemoryHandler before it gets flushed by closing.
# This means that if the log falls back to stderr, it is unfiltered by level.
root_logger = logging.getLogger()
root_logger.setLevel(logging.NOTSET)
stderr_logging_handler = logging.StreamHandler(sys.stderr)
tmp_logging_handler = logging.handlers.MemoryHandler(1024 * 1024, 100, stderr_logging_handler)
root_logger.addHandler(tmp_logging_handler)

config: ApplicationConfig = ApplicationConfig.from_filename('config.ini')

# Because the records are already buffered, unwanted ones need to be removed.
# Materialize into a list: MemoryHandler expects its buffer to be a real sequence, not an iterator.
tmp_logging_handler.buffer = [record for record in tmp_logging_handler.buffer
                              if record.levelno >= config.main_config.log_level]

root_logger.removeHandler(tmp_logging_handler)
logging.basicConfig(filename=config.main_config.log_filename, level=config.main_config.log_level, filemode='wt')
logging_handler = root_logger.handlers[0]
tmp_logging_handler.setTarget(logging_handler)
tmp_logging_handler.close()  # closing flushes the buffered records into the file handler
stderr_logging_handler.close()