How can I see log messages when unit testing in PyCharm? - python

I'm sure this is a simple fix, but I'd like to view log messages in the PyCharm console while running a unit test. The modules I'm testing have their own loggers, and normally I'd set a root logger to catch the debugging messages at a certain level, and pipe the other logs to a file. But I can't figure out how this works with unit tests.
I'm using the unittest2 module, with PyCharm's automatic test discovery (which is probably based on nose, but I'm not sure).
I've tried fooling with the run configurations, but there doesn't seem to be a straightforward way to do this.
The PyCharm documentation isn't particularly helpful here either, if any of you work there.
Edit: it does appear that the console catches CRITICAL-level log messages. I want to know if there is a way to configure this to catch DEBUG-level messages.
This post (Pycharm unit test interactive debug command line doesn't work) suggests adding the -s option to the run configuration, but that does not produce the desired result.

The only solution I've found is to do the normal logging setup at the top of the test file (assuming you're just running one test or test class): logging.basicConfig(level=logging.DEBUG). Make sure you put that before most import statements, or the default logging for those modules will already have been set (that was a hard one to figure out!).

I also needed to add the --nologcapture option to nosetests as a param in PyCharm 2016.3.2:
Run > Edit Configurations > Defaults > Python tests > Nosetests: tick the Params checkbox and add --nologcapture

In Edit Configurations:
delete old configurations
go to Defaults / Python tests / Nosetests
add --nologcapture to "Additional Arguments"
Worked like a "charm" for me in PyCharm 2017.2.3.
Thanks bott!

My problem was similar: I only saw messages with level WARNING or higher while running tests in PyCharm. A colleague suggested configuring the logger in the __init__.py inside my tests directory.
# in tests/__init__.py
import logging
import sys
# Reconfiguring the logger here will also affect test running in the PyCharm IDE
log_format = '%(asctime)s %(levelname)s %(filename)s:%(lineno)d %(message)s'
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format=log_format)
Then I can simply log in the test code like:
import logging
logging.info('whatever')

I experienced this problem suddenly after updating PyCharm. I had been running my tests with the Unittest runner, and PyCharm had decided the default should now be the pytest runner. When I changed the default test runner back to Unittest, the logs appeared as I expected.
You can change the default test runner in Project Settings/Preferences >> Tools >> Python Integrated Tools
I assume adding the --nologcapture flag as noted in other answers would probably have worked in my case as well, but I prefer a solution that's exposed by the IDE rather than a manual override.
BTW, choosing Unittest also resolved an additional error I got once when trying to run tests: No module named 'nose'.

The -s option mentioned in the question, in combination with adding a StreamHandler using stream=sys.stdout, did work for me:
import logging
import sys
logger = logging.getLogger('foo')
logger.addHandler(logging.StreamHandler(stream=sys.stdout))
logger.warning('foo')
Pycharm unit test interactive debug command line doesn't work

Add a stream handler, pointing to sys.stdout if doctest or pytest is running:
import logging
import sys
logger = logging.getLogger()
def setup_doctest_logger(log_level: int = logging.DEBUG):
    """
    :param log_level:
    :return:

    >>> logger.info('test')     # there is no output in pycharm by default
    >>> setup_doctest_logger()
    >>> logger.info('test')     # now we have the output we want
    test
    """
    if is_pycharm_running():
        logger_add_streamhandler_to_sys_stdout()
    logger.setLevel(log_level)

def is_pycharm_running() -> bool:
    return ('docrunner.py' in sys.argv[0]) or ('pytest_runner.py' in sys.argv[0])

def logger_add_streamhandler_to_sys_stdout():
    stream_handler = logging.StreamHandler(stream=sys.stdout)
    logger.addHandler(stream_handler)

Related

Why are loggers not propagating when I run pytest

I'm unable to get logging output from my unit tests and it looks like the reason is that all loggers except for the root logger have propagate set to false when I run pytest. I can run the following file, using pytest and python test_file.py. In the former logging.getLogger('one').propagate == True while in the latter logging.getLogger('one').propagate == False. Why is this?
test_file.py
import logging

def test_function():
    print()
    print('root', logging.getLogger().propagate)
    print('one', logging.getLogger('one').propagate)

if __name__ == '__main__':
    test_function()
How do I get all my loggers to propagate? Searching the internet only turns up questions about how to turn off propagation as if most people have the opposite experience that I do.
This isn't a problem with pytest. I searched the cpython and pytest source code for anything that would do this and found nothing. Then I stepped through the getLogger code and found that something was calling the logging.setLoggerClass function and replacing the logger class with a subclass that sets propagate to false. I set a breakpoint on setLoggerClass and found that a pytest plugin provided by ROS2 was doing this. I added the following to pytest.ini and now everything works great.
[pytest]
addopts = -p no:launch -p no:launch_ros
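The mechanism described above can be reproduced in plain Python. The subclass below is a hypothetical stand-in for what the ROS2 plugin installed via logging.setLoggerClass: every logger created after the swap gets propagate set to False.

```python
import logging

class NoPropagateLogger(logging.Logger):
    """Hypothetical logger class mimicking the plugin's behaviour."""
    def __init__(self, name, level=logging.NOTSET):
        super().__init__(name, level)
        self.propagate = False

logging.setLoggerClass(NoPropagateLogger)
demo = logging.getLogger('demo.one')    # created AFTER the class swap
print('demo.one', demo.propagate)       # -> demo.one False

logging.setLoggerClass(logging.Logger)  # restore the default
```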

How to set logging level in a unit test for Python

I am running a particular unit test in my library, and I would like to get more logging from it. I am running it in the following way:
python -m unittest my_module.my_submodule_test.MyTestClass.test_my_testcase
Because I'm running it this way, the bottom part of my_submodule_test.py, where I set the log level, never runs:
if __name__ == '__main__':
    logging.getLogger().setLevel(logging.DEBUG)
    unittest.main()
How can I programmatically set the log level so that I can get more detailed logging from my test?
Also, I'd like to be able to set the logging via a command-line argument. Is there a way to do that?
One way to do this is in the setUp method of your TestCase subclass:
class MyTestClass(unittest.TestCase):
    def setUp(self):
        logging.basicConfig(level=logging.DEBUG)
        logging.getLogger().setLevel(logging.DEBUG)
You can use either logging.basicConfig(level=logging.DEBUG), or logging.getLogger().setLevel(logging.DEBUG).
This should activate logging for your whole project (unless you are changing the level for loggers further down in the hierarchy that you care about).
I do not know of a way to do this from the command line, unfortunately.
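One workaround for the command-line part (my own sketch, not from the original answer): read the level from an environment variable in setUp. "LOG_LEVEL" here is an invented variable name; you would run e.g. LOG_LEVEL=DEBUG python -m unittest my_module.my_submodule_test.

```python
import logging
import os
import unittest

class MyTestClass(unittest.TestCase):
    def setUp(self):
        # "LOG_LEVEL" is a made-up name; unknown values fall back to WARNING.
        level_name = os.environ.get("LOG_LEVEL", "WARNING").upper()
        logging.getLogger().setLevel(getattr(logging, level_name, logging.WARNING))

    def test_my_testcase(self):
        logging.getLogger(__name__).debug("shown only when LOG_LEVEL=DEBUG")
```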

Python Fabric logging errors

I am trying to understand how fabric's logger module works.
I run on the command line:
$ fabfile -I task-1
I of course get output to the console showing me the execution of the task on each of the remote hosts connected to.
But how can I redirect the error output to a logfile on my local machine, with a timestamp on it?
Does Fabric's logger module provide this, or should I use Python's logging module? Either way, I am not sure how to implement it.
Unfortunately, Fabric does not feature logging to a file (see issue #57)
But there is a workaround using the logging module, which I find pretty nice.
First, configure your logger:
import logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s:%(levelname)s:%(name)s:%(message)s',
    filename="out.log",
    filemode='a'
)
And then wrap the portions of your code that are likely to throw errors in a try/except block like this:
try:
    ...  # your Fabric calls go here
except Exception:
    logging.exception('Error:')
The logger will write 'Error:' and the exception's stack trace to "out.log".

Logging not captured on behave steps

OK, so in my environment.py file I am able to log things with:
logging.basicConfig(level=logging.DEBUG, filename="example.log")

def before_feature(context, feature):
    logging.info("test logging")
but when I am inside the steps file I cannot perform logging:
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

@given("we have a step")
def step_impl(context):
    logger.debug("Test logging 2")
The logging message inside the step does not show up. I am using the python behave module. Any ideas?
I have tried enabling and disabling logcapture when I run behave but it makes no difference.
By default, behave captures logs during feature execution and only displays them on failure.
To disable this, add the following to behave.ini:
[behave]
log_capture = false
Or use the --no-logcapture command line option.
Further Reading: Behave API Reference, Behave LogCapture
What worked for me:
behave --no-capture --no-capture-stderr --no-logcapture
and adding the following snippet to environment.py:
def after_step(context, step):
    print("")
Why: I discovered that behave does not log the last print statement of a step, so I just added an empty print after each step with the snippet above.
Hope it helps.
Importing logging from environment.py in steps.py solved the problem for me:
from features.environment import logging
I am not sure, but I guess the problem is that every time you import logging it rewrites your previous configs, because disable_existing_loggers is True by default. (Here is the documentation paragraph explaining this.)

Change level logged to IPython/Jupyter notebook

I have a package that relies on several different modules, each of which sets up its own logger. That allows me to log where each log message originates from, which is useful.
However, when using this code in an IPython/Jupyter notebook, I was having trouble controlling what got printed to the screen. Specifically, I was getting a lot of DEBUG-level messages that I didn't want to see.
How do I change the level of logs that get printed to the notebook?
More info:
I've tried to set up a root logger in the notebook as follows:
# In notebook
import logging
logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# Import the module
import mymodule
And then at the top of my modules, I have
# In mymodule.py
import logging
logger = logging.getLogger('mypackage.' + __name__)
logger.setLevel(logging.DEBUG)
logger.propagate = True
# Log some messages
logger.debug('debug')
logger.info('info')
When the module code is called in the notebook, I would expect the logs to be propagated up and the top-level logger to print only the info statement. But both the debug and the info statements are shown.
Related links:
From this IPython issue, it seems that there are two different levels of logging that one needs to be aware of. One, which is set in the ipython_notebook_config file, only affects the internal IPython logging level. The other is the IPython logger, accessed with get_ipython().parent.log.
https://github.com/ipython/ipython/issues/8282
https://github.com/ipython/ipython/issues/6746
With current ipython/Jupyter versions (e.g. 6.2.1), the logging.getLogger().handlers list is empty after startup and logging.getLogger().setLevel(logging.DEBUG) has no effect, i.e. no info/debug messages are printed.
Inside ipython, you have to change an ipython configuration setting (and possibly work around ipython bugs), as well. For example, to lower the logging threshold to debug messages:
# workaround via specifying an invalid value first
%config Application.log_level='WORKAROUND'
# => fails, necessary on Fedora 27, ipython3 6.2.1
%config Application.log_level='DEBUG'
import logging
logging.getLogger().setLevel(logging.DEBUG)
log = logging.getLogger()
log.debug('Test debug')
For just getting the debug messages of one module (cf. the __name__ value in that module) you can replace the above setLevel() call with a more specific one:
logging.getLogger('some.module').setLevel(logging.DEBUG)
The root cause of this issue (from https://github.com/ipython/ipython/issues/8282) is that the Notebook creates a root logger by default (which is different from IPython default behavior!). The solution is to get at the handler of the notebook logger, and set its level:
# At the beginning of the notebook
import logging
logger = logging.getLogger()
assert len(logger.handlers) == 1
handler = logger.handlers[0]
handler.setLevel(logging.INFO)
With this, I don't need to set logger.propagate = True in the modules and it works.
Adding another solution because it was easier for me. On startup of the IPython kernel:
import logging
logging.basicConfig(level=20)
Then this works:
logging.getLogger().info("hello")
>> INFO:root:hello
logging.info("hello")
>> INFO:root:hello
And if I have similar logging code in a function that I import and run, the message will display as well.
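For reference, the numeric argument in basicConfig(level=20) maps directly onto the named level constants:

```python
import logging

# basicConfig(level=20) is the same as basicConfig(level=logging.INFO)
assert logging.INFO == 20
assert logging.DEBUG == 10
assert logging.getLevelName(20) == 'INFO'
```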
