Logging not captured on behave steps - python

Ok, so in my environment.py file I am able to log stuff with:
logging.basicConfig(level=logging.DEBUG, filename="example.log")

def before_feature(context, feature):
    logging.info("test logging")
but when I am inside the steps file I cannot get logging to work:
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

@given("we have a step")
def step_impl(context):
    logger.debug("Test logging 2")
The logging message inside the step does not show up. I am using the python behave module. Any ideas?
I have tried enabling and disabling logcapture when I run behave but it makes no difference.

By default, behave captures logs during feature execution and only displays them for failing scenarios.
To disable this, you can set
log_capture=false
in behave.ini
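For example, a minimal behave.ini could look like this:

[behave]
log_capture = false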
Or, you can use the --no-logcapture command line option
Further reading: Behave API Reference, Behave LogCapture

What worked for me:
behave --no-capture --no-capture-stderr --no-logcapture
and adding the following snippet in environment.py:

def after_step(context, step):
    print("")

Why: I discovered that behave does not log the last print statement of a step. So I just added an empty print after each step with the snippet above.
Hope it helps

Importing logging from environment.py in steps.py solved the problem for me:
from features.environment import logging
I am not sure, but I guess the problem is that every time you import logging it overwrites your previous configs, because disable_existing_loggers is True by default. (Here is the documentation paragraph explaining this.)
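For reference, a common alternative is to configure logging once in the before_all hook in environment.py, so that step modules only need a logging.getLogger(__name__) call. A minimal sketch (file name and level are illustrative):

# features/environment.py
import logging

def before_all(context):
    # Configure the root logger once, before any features run.
    logging.basicConfig(level=logging.DEBUG, filename="behave.log")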


Python context management and verbosity

To facilitate testing code as I write it, I include verbosity in almost every module I write, as follows:

class MyObj(object):
    def __init__(self, arg0, kwarg0="default", verbosity=0):
        self.a0 = arg0
        self.k0 = kwarg0
        self.vb = verbosity

    def my_method(self):
        if self.vb > 2:
            print(f"{self} is doing a thing now...")

or

def my_func(arg0, arg1, verbosity=0):
    if verbosity > 2:
        print(f"doing something to {arg0} and {arg1}...")
    if verbosity > 5:  # Added on later edit
        import ipdb; ipdb.set_trace()  # to clarify requirement
    do_something()
The executable scripts that import these will have collected (from the command line or elsewhere) a verbosity argument which gets passed all the way down the stack.
It's occurred to me to use a context manager so that I wouldn't have to initialize this variable at every level of the stack, something like having this in the driver script:

with args.verbosity as vb:
    my_func("x", "y")

Can I do that and then use vb in my_func without having to include it in the signature? Is there a better way to achieve this kind of control?
SUBSEQUENT EDIT: It's clear from the first answers (thank you for those) that I need to check out the logging module, but in some cases I want to stop execution in the middle to inspect things at a particular stack level (see the ipdb code I added with this edit). Would you still recommend that I use logging? (I'm assuming there's a way to get the logging level if I felt compelled to occasionally litter my code with if statements like that one.)
Finally, I'm still interested in whether the context management solution would be expected to work (even if it's not the optimal solution).
To facilitate testing code as I write it, I include verbosity in almost every module I write ...
Don't litter your code with if-statements and prints for this kind of purpose. It makes the code messy, repetitive and less readable.
The use-case is exactly what stdlib logging is for: you can unconditionally log events which describe what the program is doing, at various verbosity levels, and the messages will be displayed - or not - depending on the logging system's configuration.
import logging

log = logging.getLogger(__name__)

def my_func(arg0, arg1):
    log.info("doing something to %s and %s...", arg0, arg1)
    do_something()

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG, format="%(message)s")
    my_func(123, 456)
In the example above, the message prints because it is logged at level INFO, which is above the DEBUG threshold that I've configured logging with. If you configure logging at level WARNING instead, it won't display.
Generally the user will control the logging configuration settings (levels, formats, streams, files) via a config file, environment variables, or command-line arguments. It is up to the end-user to choose the specific logging configuration that meets their needs, as the developer you can just log events anytime. No need to worry about where the log events end up going to, or if they end up going anywhere at all.
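As for the ipdb requirement from the question's later edit: the configured level can be queried directly, so no separate verbosity flag is needed. A minimal sketch using Logger.isEnabledFor():

import logging

log = logging.getLogger(__name__)

def my_func(arg0, arg1):
    log.info("doing something to %s and %s...", arg0, arg1)
    # Drop into the debugger only when the effective level is DEBUG;
    # isEnabledFor() checks the level without emitting a log record.
    if log.isEnabledFor(logging.DEBUG):
        import ipdb; ipdb.set_trace()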
Another way to do this is levels of logging. For example, Python's builtin logging module has error, warning, info, and debug levels:
import logging
logger = logging.getLogger()
logger.info('Normal message')
logger.debug('Message that only gets printed with high verbosity')
Simply configure the logging level to debug, warn, etc., and you're basically done! Plus you get lots of native logging goodies.
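For instance, a minimal configuration for the snippet above could be (without some configuration, the root logger defaults to WARNING, so the info and debug calls would show nothing):

import logging

logging.basicConfig(level=logging.DEBUG)  # root logger now shows DEBUG and above

logger = logging.getLogger()
logger.info('Normal message')  # displayed
logger.debug('Message that only gets printed with high verbosity')  # also displayed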

Save each level of logging in a different file

I am working with the Python logging module, but I don't know how to write each logging level to a different file.
For example, I want the debug-level messages (and only debug) saved in the file debug.log.
This is the code I have so far, using handlers and a different file for each logging level, but it does not work as I wish: the debug file receives the messages of all the other levels too.
import logging as _logging

self.logger = _logging.getLogger(__name__)

_handler_debug = _logging.FileHandler(self.debug_log_file)
_handler_debug.setLevel(_logging.DEBUG)
self.logger.addHandler(_handler_debug)

_handler_info = _logging.FileHandler(self.info_log_file)
_handler_info.setLevel(_logging.INFO)
self.logger.addHandler(_handler_info)

_handler_warning = _logging.FileHandler(self.warning_log_file)
_handler_warning.setLevel(_logging.WARNING)
self.logger.addHandler(_handler_warning)

_handler_error = _logging.FileHandler(self.error_log_file)
_handler_error.setLevel(_logging.ERROR)
self.logger.addHandler(_handler_error)

_handler_critical = _logging.FileHandler(self.critical_log_file)
_handler_critical.setLevel(_logging.CRITICAL)
self.logger.addHandler(_handler_critical)

self.logger.debug("debug test")
self.logger.info("info test")
self.logger.warning("warning test")
self.logger.error("error test")
self.logger.critical("critical test")
And the debug.log contains this:
debug test
info test
warning test
error test
critical test
I'm working with classes, so I have adapted the code a bit.
Possible duplicate of: python logging specific level only
Please see the accepted answer there.
You need to add a filter to each handler that restricts its output to the supplied log level. A handler's level is only a minimum threshold, which is why your debug file receives messages of every level.
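A minimal sketch of that filter approach (the file names are illustrative):

import logging

class LevelFilter(logging.Filter):
    # Pass only records at exactly one level.
    def __init__(self, level):
        super().__init__()
        self.level = level

    def filter(self, record):
        return record.levelno == self.level

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

debug_handler = logging.FileHandler("debug.log")
debug_handler.addFilter(LevelFilter(logging.DEBUG))
logger.addHandler(debug_handler)

info_handler = logging.FileHandler("info.log")
info_handler.addFilter(LevelFilter(logging.INFO))
logger.addHandler(info_handler)

logger.debug("debug test")  # written only to debug.log
logger.info("info test")    # written only to info.log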

python logger - can only be run once

I am testing the python logger with the jupyter notebook.
When I run the following example code in a freshly started kernel, it works and create the log file with the right content.
import logging
logging.basicConfig(filename='/home/depot/wintergreen/example.log',level=logging.DEBUG)
logging.debug('This message should go to the log file')
logging.info('So should this')
logging.warning('And this, too')
However, if I try to rerun the same code with, for instance, the filename changed from example.log to example.log2, nothing happens: the file example.log2 is not created.
I ended up devising that test because it seemed to me that logging would only work the very first time I ran it. What am I doing wrong here?
You are right: .basicConfig() uses your kwargs only once. After the first call, the root logger already has a handler in logging.root.handlers, and if you look at the source code

def basicConfig(**kwargs):
    ...
    _acquireLock()
    try:
        if len(root.handlers) == 0:
            ...
    finally:
        _releaseLock()

you can see that since len(root.handlers) != 0 on later calls, the provided arguments are never applied.
How to change the config without restarting: the only solution I came up with for re-running .basicConfig() without restarting the kernel is to remove all handlers from the root logger first:

for handler in logging.root.handlers[:]:  # iterate over a copy, since we mutate the list
    logging.root.removeHandler(handler)

This removes all handlers from the root logger, and after that you are good to set anything you want.
The issue is that the basicConfig() function is designed to be run only once.
Per the docs: The first time it runs, it "creates a StreamHandler with a default Formatter and adding it to the root logger". However on the second time, the "function does nothing if the root logger already has handlers configured for it".
One possible solution is to clear the previous handler using logging.root.removeHandler. Alternatively, you can directly access the stream attribute, which holds the open stream used by the StreamHandler instance:
>>> import logging
>>> logging.basicConfig(filename='abc.txt') # 1st call to basicConfig
>>> h = logging.root.handlers[0] # get the handler
>>> h.stream.close() # close the current stream
>>> h.stream = open('def.txt', 'a') # set-up a new stream
FWIW, basicConfig() was a late addition to the logging module and was intended as a simplified short-cut API for common cases. In general, whenever you have problems with basicConfig(), it means that it is time to use the full API which is a little less convenient but gives you more control:
import logging
# First pass
h = logging.StreamHandler(open('abc.txt', 'a'))
h.setLevel(logging.DEBUG)
h.setFormatter(logging.Formatter('%(asctime)s | %(message)s'))
logging.root.addHandler(h)
logging.critical('The GPU is melting')
# Later passes
logging.root.removeHandler(h)
h = logging.StreamHandler(open('def.txt', 'a'))
h.setLevel(logging.DEBUG)
h.setFormatter(logging.Formatter('%(asctime)s | %(message)s'))
logging.root.addHandler(h)
logging.critical('The CPU is getting hot too')
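One more option worth knowing: on Python 3.8 and newer, basicConfig() accepts a force keyword argument that removes and closes any existing root handlers before applying the new configuration, so re-running it in the same kernel just works:

import logging

# Python 3.8+: force=True tears down existing root handlers first
logging.basicConfig(filename='example.log2', level=logging.DEBUG, force=True)
logging.debug('This now goes to the new file')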

How can I see log messages when unit testing in PyCharm?

I'm sure this is a simple fix, but I'd like to view log messages in the PyCharm console while running a unit test. The modules I'm testing have their own loggers, and normally I'd set a root logger to catch the debugging messages at a certain level, and pipe the other logs to a file. But I can't figure out how this works with unit tests.
I'm using the unittest2 module, and using PyCharm's automatic test discovery (which probably is based on nose, but I don't know).
I've tried fooling with the run configurations, but there doesn't seem to be a straightforward way to do this.
The PyCharm documentation isn't particularly helpful here either, if any of you work there.
In edit: It DOES appear that the console catches critical level log messages. I want to know if there is a way to configure this to catch debug level messages.
This post (Pycharm unit test interactive debug command line doesn't work) suggests adding the -s option to the build configuration, which does not produce the desired result.
The only solution I've found is to do the normal logging setup at the top of the file containing the tests (assuming you're just running one test or test class): logging.basicConfig(level=logging.DEBUG). Make sure you put that before most import statements, or the default logging for those modules will have already been set (that was a hard one to figure out!).
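A sketch of what that placement looks like (module and function names are placeholders):

# test_something.py
import logging
logging.basicConfig(level=logging.DEBUG)  # must run before the imports below

from mypackage import thing_under_test  # hypothetical module under test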
I also needed to add the --nologcapture option to nosetests as a param, in PyCharm 2016.3.2:
Run > Edit Configurations > Defaults > Python Tests > Nosetests: activate the checkbox for the Params option and add --nologcapture
In Edit Configurations:
- delete old configurations
- go to Defaults / Python tests / Nosetests
- add --nologcapture to "Additional Arguments"
Worked like a "charm" for me in PyCharm 2017.2.3.
Thanks bott
My problem was similar: I only saw messages with level WARNING or higher while running tests in PyCharm. A colleague suggested configuring the logger in the __init__.py in my tests directory.
# in tests/__init__.py
import logging
import sys
# Reconfiguring the logger here will also affect test running in the PyCharm IDE
log_format = '%(asctime)s %(levelname)s %(filename)s:%(lineno)d %(message)s'
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format=log_format)
Then I can simply log in the test code like:
import logging
logging.info('whatever')
I experienced this problem all of a sudden after updating PyCharm. I had been running my tests with the Unittest runner, and suddenly PyCharm decided the default should be the pytest runner. When I changed the default test runner back to Unittest, the logs appeared as I expected they would.
You can change the default test runner in Project Settings/Preferences >> Tools >> Python Integrated Tools.
I assume adding the --nologcapture flag as noted in other answers would probably have worked in my case as well, but I prefer a solution that's exposed by the IDE rather than a manual override.
BTW, choosing Unittest also solves an additional error that I got once when trying to run tests: No module named 'nose'.
The -s option mentioned in the question, in combination with adding a StreamHandler using stream=sys.stdout, did work for me:
import logging
import sys
logger = logging.getLogger('foo')
logger.addHandler(logging.StreamHandler(stream=sys.stdout))
logger.warning('foo')
Add a stream handler pointing to sys.stdout if a doctest or pytest is running:

import logging
import sys

logger = logging.getLogger()

def setup_doctest_logger(log_level: int = logging.DEBUG):
    """
    :param log_level:
    :return:

    >>> logger.info('test')     # there is no output in pycharm by default
    >>> setup_doctest_logger()
    >>> logger.info('test')     # now we have the output we want
    test
    """
    if is_pycharm_running():
        logger_add_streamhandler_to_sys_stdout()
    logger.setLevel(log_level)

def is_pycharm_running() -> bool:
    return ('docrunner.py' in sys.argv[0]) or ('pytest_runner.py' in sys.argv[0])

def logger_add_streamhandler_to_sys_stdout():
    stream_handler = logging.StreamHandler(stream=sys.stdout)
    logger.addHandler(stream_handler)

how to replace print debug message with the logging module

Up to now, I've been peppering my code with 'print debug message' and even 'if condition: print debug message'. But a number of people have told me that's not the best way to do it, and I really should learn how to use the logging module. After a quick read, it looks as though it does everything I could possibly want, and then some. It looks like a learning project in its own right, and I want to work on other projects now and simply use the minimum functionality to help me. If it makes any difference, I am on Python 2.6 and will be for the foreseeable future, due to library and legacy compatibilities.
All I want to do at the moment is pepper my code with messages that I can turn on and off section by section, as I manage to debug specific regions. As a 'hello_log_world', I tried this, and it doesn't do what I expected:
import logging
# logging.basicConfig(level=logging.DEBUG)
logging.error('first error')
logging.debug('first debug')
logging.basicConfig(level=logging.DEBUG)
logging.error('second error')
logging.debug('second debug')
You'll notice I'm using the really basic config, using as many defaults as possible, to keep things simple. But it appears that it's too simple, or that I don't understand the programming model behind logging.
I had expected that sys.stderr would end up with
ERROR:root:first error
ERROR:root:second error
DEBUG:root:second debug
... but only the two error messages appear. Setting level=DEBUG doesn't make the second one appear. If I uncomment the basicConfig call at the start of the program, all four get output.
Am I trying to run it at too simple a level?
What's the simplest thing I can add to what I've written there to get my expected behaviour?
Logging actually follows a hierarchy of levels (DEBUG -> INFO -> WARNING -> ERROR -> CRITICAL), and the default level is WARNING. The reason you see the two ERROR messages is that ERROR is above WARNING in the hierarchy.
As for the odd commenting behavior, the explanation is found in the logging docs (which as you say are a task unto themselves :) ):
The call to basicConfig() should come before any calls to debug(), info() etc. As it's intended as a one-off simple configuration facility, only the first call will actually do anything: subsequent calls are effectively no-ops.
However, you can use the setLevel method to get what you desire:
import logging
logging.getLogger().setLevel(logging.ERROR)
logging.error('first error')
logging.debug('first debug')
logging.getLogger().setLevel(logging.DEBUG)
logging.error('second error')
logging.debug('second debug')
The lack of an argument to getLogger() means that the root logger is modified. This is essentially one step before #del's (good) answer, where you start getting into multiple loggers, each with their own specific properties/output levels/etc.
Rather than modifying the logging levels in your code to control the output, you should consider creating multiple loggers, and setting the logging level for each one individually. For example:
import logging
first_logger = logging.getLogger('first')
second_logger = logging.getLogger('second')
logging.basicConfig()
first_logger.setLevel(logging.ERROR)
second_logger.setLevel(logging.DEBUG)
first_logger.error('first error')
first_logger.debug('first debug')
second_logger.error('second error')
second_logger.debug('second debug')
This outputs:
ERROR:first:first error
ERROR:second:second error
DEBUG:second:second debug
