Python log python interpreter output

I tried to build a logger that also logs the full output from the Python interpreter, i.e. what you normally see on the console.
This is my attempt; it did not work:
import logging
import sys

logger = logging.getLogger()
stdoutStreamHandler = logging.StreamHandler(sys.stdout)
stderrStreamHandler = logging.StreamHandler(sys.stderr)
fileHandler = logging.FileHandler(logpath)  # logpath: path to the log file
logger.addHandler(stdoutStreamHandler)
logger.addHandler(stderrStreamHandler)
logger.addHandler(fileHandler)

I'm unsure what you mean by logging the full output from the Python interpreter: do you mean the command-line interpreter, or the interpreter running in the background at run time?
Either way, this may help: the trace module.
It lets you execute any code under the tracer and also write the results out to a file:
import sys
import trace

# create a Trace object, telling it what to ignore, and whether to
# do tracing or line-counting or both.
tracer = trace.Trace(
    ignoredirs=[sys.prefix, sys.exec_prefix],
    trace=0,
    count=1)

# run the new command using the given tracer
tracer.run('main()')

# make a report, placing output in the current directory
r = tracer.results()
r.write_results(show_missing=True, coverdir=".")
I have not yet used it myself, but it could be possible to execute
logger.debug(r)
which would create a record in your log file at the debug level; you could, of course, change that to info, warning, error or critical.
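As an aside, if the goal is to capture everything the program writes to stdout and stderr into the log file, a common pattern (not part of the trace answer above) is to replace sys.stdout and sys.stderr with file-like objects that forward each write to a logger. A minimal sketch; the class name StreamToLogger is my own:
import logging
import sys

class StreamToLogger:
    """File-like object that forwards writes to a logger."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        # print() calls write() for the message and again for '\n';
        # skip empty writes and log each line separately
        for line in message.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())

    def flush(self):
        pass  # nothing is buffered, but the file-like interface needs it

logging.basicConfig(filename='console.log', level=logging.DEBUG)
sys.stdout = StreamToLogger(logging.getLogger('STDOUT'), logging.INFO)
sys.stderr = StreamToLogger(logging.getLogger('STDERR'), logging.ERROR)

print('this line now ends up in console.log')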


Python Logging Module Inconsistent Behaviour

I've seen some very odd behaviour with the logging module. It started with a relatively complex project, but now I've seen it with the following script:
import logging
import os
# Uncomment the following line to remove handlers
# logging.getLogger().handlers = []
filePath = os.environ['userprofile'] + r'\Documents\log.txt'
logging.basicConfig(filename=filePath)
logging.debug('Gleep')
logging.shutdown()
This should simply write 'Gleep' to the log.txt file in your Documents folder. Currently it creates the file but writes nothing to it. However, I've inconsistently seen the following behaviour:
No log file being written at all.
Log file created, but nothing written to it.
Everything working fine.
The only way I've got it working before is to remove existing handlers (commented out in the example above).
This is on several machines in different locations.
So...am I doing something grotesquely wrong here? Why is the logging module acting this way?
I'm not sure how to prove/disprove/debug your 'other' situations, but maybe the following can help clarify what is happening in the code from your question:
First, setting logging.getLogger().handlers = [] should not be necessary, since logging.getLogger() is the root logger by default and has no handlers. Here is a fresh Python 3.7 shell:
>>> import logging
>>> logging.getLogger()
<RootLogger root (WARNING)>
>>> logging.getLogger().handlers
[]
(Note that in the absence of any handlers, a logger will fall back to lastResort, but that should be irrelevant here since you add a handler implicitly via basicConfig().)
Which brings you to logging.basicConfig(filename=filePath): this adds a FileHandler to the root logger. It does not touch the level of the root logger, which is WARNING by default, so your message won't pass the 'level test' and won't be emitted as a result.
>>> logging.root.getEffectiveLevel()
30
(This uses .getEffectiveLevel() rather than just the plain attribute because a logger will walk its hierarchy until it finds a level if its level is NOTSET.)
All that is to say: as you currently have it, you are logging from the root logger (level WARNING) a message object that has level DEBUG, so the message will go nowhere.
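In other words, passing a level to basicConfig() should make the example behave as expected. A minimal sketch of the corrected script:
import logging
import os

filePath = os.environ['userprofile'] + r'\Documents\log.txt'
# level=logging.DEBUG lowers the root logger's level so DEBUG records pass
logging.basicConfig(filename=filePath, level=logging.DEBUG)
logging.debug('Gleep')
logging.shutdown()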

Save each level of logging in a different file

I am working with the Python logging module, but I don't know how to get each logging level written to a different file.
For example, I want the debug-level messages (and only debug) saved in the file debug.log.
This is the code I have so far, using handlers and a different file for each logging level, but it does not work: the debug file receives the messages of all the other levels as well. I don't know if what I want is even possible.
import logging as _logging
self.logger = _logging.getLogger(__name__)
_handler_debug = _logging.FileHandler(self.debug_log_file)
_handler_debug.setLevel(_logging.DEBUG)
self.logger.addHandler(_handler_debug)
_handler_info = _logging.FileHandler(self.info_log_file)
_handler_info.setLevel(_logging.INFO)
self.logger.addHandler(_handler_info)
_handler_warning = _logging.FileHandler(self.warning_log_file)
_handler_warning.setLevel(_logging.WARNING)
self.logger.addHandler(_handler_warning)
_handler_error = _logging.FileHandler(self.error_log_file)
_handler_error.setLevel(_logging.ERROR)
self.logger.addHandler(_handler_error)
_handler_critical = _logging.FileHandler(self.critical_log_file)
_handler_critical.setLevel(_logging.CRITICAL)
self.logger.addHandler(_handler_critical)
self.logger.debug("debug test")
self.logger.info("info test")
self.logger.warning("warning test")
self.logger.error("error test")
self.logger.critical("critical test")
And the debug.log contains this:
debug test
info test
warning test
error test
critical test
I'm working with classes, so I have adapted the code a bit.
Possible duplicate of: python logging specific level only.
Please see the accepted answer there.
You need to use filters for each handler that restrict the output to the log level supplied.
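For illustration, a minimal sketch of such a filter; the class name LevelOnlyFilter is my own, and only the debug handler is shown:
import logging

class LevelOnlyFilter(logging.Filter):
    """Let through only records of exactly one level."""
    def __init__(self, level):
        super().__init__()
        self.level = level

    def filter(self, record):
        return record.levelno == self.level

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

debug_handler = logging.FileHandler('debug.log')
debug_handler.addFilter(LevelOnlyFilter(logging.DEBUG))
logger.addHandler(debug_handler)

logger.debug('debug test')  # lands in debug.log
logger.info('info test')    # filtered out of debug.log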

Logging always prints basic output, despite never being told to

When I try to log from my script, it always generates basic logging output, even though nothing in the script configures it to do so.
The code is the following:
import logging
import sys

def set_logger():
    res_logger = logging.getLogger(__name__)
    file_handler = logging.FileHandler(filename='%s.log' % __file__)
    res_logger.setLevel(logging.INFO)
    stdout_handler = logging.StreamHandler(sys.stdout)
    file_handler.setLevel(logging.INFO)
    stdout_handler.setLevel(logging.WARNING)
    file_formatter = logging.Formatter('[%(asctime)s]\t{%(filename)s:%(lineno)d}\t%(levelname)s\t- %(message)s')
    file_handler.setFormatter(file_formatter)
    stdout_formatter = logging.Formatter('[%(asctime)s]\t{%(filename)s:%(lineno)d}\t%(levelname)s\t- %(message)s')
    stdout_handler.setFormatter(stdout_formatter)
    res_logger.addHandler(file_handler)
    res_logger.addHandler(stdout_handler)
    return res_logger

logger = set_logger()
logger.error("TEST")
The output with this setup should be [2018-04-17 15:50:31,601] {filename.py:60} ERROR - TEST both on stdout and in the filename.py.log file.
However, the actual output is
ERROR:__main__:TEST
[2018-04-17 15:57:37,756] {filename.py:60} ERROR - TEST
What's even weirder: I can't use logging.error('test') to produce basic output, even if I never declare a logger; it just produces nothing. And this ONLY happens in this script. If I copy-paste the same code into a new script, it all works fine. There is no logging configuration being done anywhere in the script, as you can see here.
All nine occurrences of "logging" in the script are shown in the image. The only place "logger" is used is to produce the logs.
I can't see why it's not working.
Something else is almost certainly configuring logging and adding another handler that writes to the console. For example, if I copy your code to footest.py, add import logging, sys at the top, and run it using
python -c "import footest"
I get the output you would expect. But if I insert a line
logging.debug('foo')
and then run the python command again, I get the extra line. The logging.debug('foo') doesn't output anything, as the default level is WARNING, but because the root logger has no handlers yet it implicitly calls basicConfig(), which adds the handler that is then used when the logger.error('TEST') at the bottom is executed.
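A minimal sketch of that effect, assuming a script where nothing else has configured logging:
import logging

logging.debug('foo')  # root logger has no handlers, so this implicitly calls
                      # basicConfig(), attaching a StreamHandler to the root

logger = logging.getLogger(__name__)
logger.error('TEST')  # propagates to the root handler and prints ERROR:__main__:TEST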

python logger - can only be run once

I am testing the python logger with the jupyter notebook.
When I run the following example code in a freshly started kernel, it works and create the log file with the right content.
import logging
logging.basicConfig(filename='/home/depot/wintergreen/example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
However, if I try to rerun the same code with, for instance, the filename changed from example.log to example.log2, nothing happens: the file example.log2 is not created.
I ended up devising that test because it seemed to me that logging would only work the very first time I ran it. What am I doing wrong here?
You are right: basicConfig() uses your kwargs only once, because after the first call the root logger already has a handler in logging.root.handlers. If you look at the source code:
def basicConfig(**kwargs):
    ...
    _acquireLock()
    try:
        if len(root.handlers) == 0:
            ...
    finally:
        _releaseLock()
So, since your len(root.handlers) != 0, the actual assignment of the provided arguments never happens.
HOW TO CHANGE WITHOUT RESTARTING:
The only solution I came up with for reconfiguring via basicConfig() without restarting the kernel is to remove all handlers from the root logger first:
for handler in logging.root.handlers[:]:  # iterate over a copy, since we mutate the list
    logging.root.removeHandler(handler)
After that, the root logger has no handlers and you are free to set anything you want.
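Note that on Python 3.8 and later, basicConfig() accepts a force keyword that does this removal for you; a minimal sketch:
import logging

# force=True removes and closes any existing root handlers before reconfiguring
logging.basicConfig(filename='example.log2', level=logging.DEBUG, force=True)
logging.debug('This now goes to example.log2')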
The issue is that the basicConfig() function is designed to be run only once.
Per the docs: the first time it runs, it creates a StreamHandler with a default Formatter and adds it to the root logger. On any later call, the function "does nothing if the root logger already has handlers configured for it".
One possible solution is to clear the previous handler using logging.root.removeHandler. Alternatively, you can directly access the stream attribute holding the open stream used by the handler:
>>> import logging
>>> logging.basicConfig(filename='abc.txt') # 1st call to basicConfig
>>> h = logging.root.handlers[0] # get the handler
>>> h.stream.close() # close the current stream
>>> h.stream = open('def.txt', 'a') # set-up a new stream
FWIW, basicConfig() was a late addition to the logging module and was intended as a simplified short-cut API for common cases. In general, whenever you have problems with basicConfig(), it means that it is time to use the full API which is a little less convenient but gives you more control:
import logging
# First pass
h = logging.StreamHandler(open('abc.txt', 'a'))
h.setLevel(logging.DEBUG)
h.setFormatter(logging.Formatter('%(asctime)s | %(message)s'))
logging.root.addHandler(h)
logging.critical('The GPU is melting')
# Later passes
logging.root.removeHandler(h)
h = logging.StreamHandler(open('def.txt', 'a'))
h.setLevel(logging.DEBUG)
h.setFormatter(logging.Formatter('%(asctime)s | %(message)s'))
logging.root.addHandler(h)
logging.critical('The CPU is getting hot too')

Change level logged to IPython/Jupyter notebook

I have a package that relies on several different modules, each of which sets up its own logger. That allows me to log where each log message originates from, which is useful.
However, when using this code in an IPython/Jupyter notebook, I was having trouble controlling what got printed to the screen. Specifically, I was getting a lot of DEBUG-level messages that I didn't want to see.
How do I change the level of logs that get printed to the notebook?
More info:
I've tried to set up a root logger in the notebook as follows:
# In notebook
import logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Import the module
import mymodule
And then at the top of my modules, I have
# In mymodule.py
import logging
logger = logging.getLogger('mypackage.' + __name__)
logger.setLevel(logging.DEBUG)
logger.propagate = True
# Log some messages
logger.debug('debug')
logger.info('info')
When the module code is called in the notebook, I would expect the logs to be propagated up and the top logger to print only the info log statement. But both the debug and the info statements are shown.
Related links:
From this IPython issue, it seems that there are two different levels of logging that one needs to be aware of. One, which is set in the ipython_notebook_config file, only affects the internal IPython logging level. The other is the IPython logger, accessed with get_ipython().parent.log.
https://github.com/ipython/ipython/issues/8282
https://github.com/ipython/ipython/issues/6746
With current IPython/Jupyter versions (e.g. 6.2.1), the logging.getLogger().handlers list is empty after startup, and logging.getLogger().setLevel(logging.DEBUG) alone has no effect, i.e. no info/debug messages are printed.
Inside IPython you have to change an IPython configuration setting (and possibly work around IPython bugs) as well. For example, to lower the logging threshold to debug messages:
# workaround via specifying an invalid value first
%config Application.log_level='WORKAROUND'
# => fails, necessary on Fedora 27, ipython3 6.2.1
%config Application.log_level='DEBUG'
import logging
logging.getLogger().setLevel(logging.DEBUG)
log = logging.getLogger()
log.debug('Test debug')
For just getting the debug messages of one module (cf. the __name__ value in that module) you can replace the above setLevel() call with a more specific one:
logging.getLogger('some.module').setLevel(logging.DEBUG)
The root cause of this issue (from https://github.com/ipython/ipython/issues/8282) is that the Notebook attaches a handler to the root logger by default (which is different from IPython's default behaviour!). The solution is to get at that handler and set its level:
# At the beginning of the notebook
import logging
logger = logging.getLogger()
assert len(logger.handlers) == 1
handler = logger.handlers[0]
handler.setLevel(logging.INFO)
With this, I don't need to set logger.propagate = True in the modules and it works. (It works because propagated records are filtered by each handler's level, not by the root logger's level, so setting the handler's level is what matters here.)
Adding another solution, because it was easier for me. On startup of the IPython kernel:
import logging
logging.basicConfig(level=20)  # 20 == logging.INFO
Then this works:
logging.getLogger().info("hello")
>> INFO:root:hello
logging.info("hello")
>> INFO:root:hello
And if I have similar logging code in a function that I import and run, the message will display as well.
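For instance, a minimal sketch of that last point; mymodule here is a hypothetical module name:
# mymodule.py (hypothetical)
import logging

logger = logging.getLogger(__name__)

def do_work():
    logger.info('working')  # propagates to the root handler set up by basicConfig()

# in the notebook
import logging
logging.basicConfig(level=20)  # 20 == logging.INFO

import mymodule
mymodule.do_work()
# >> INFO:mymodule:working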
