How not to print captured log on success in python-behave - python

I am trying to capture the logs of my python-behave tests (produced by the logging module) by using this command to run them:
behave -f pretty --logging-level INFO --capture "features/file_system_operations.feature"
This executes the tests, but even though they are passing, all my INFO lines are printed to screen instead of being captured, e.g.:
When I change the logging level to warning, the code properly responds and no lines are printed:
behave -f pretty --logging-level WARNING --capture "features/file_system_operations.feature"
This results in a clean printout:
How can I ask behave to have INFO logging lines printed only if a test fails?

I think I managed to find out what was going on, and it seems like logging being imported multiple times is the root cause. I am not sure if this is a feature or a bug, but I will try to patch around the issue by extracting the logging functionality into its own module.
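For reference, a rough sketch of what that extraction might look like; the log_util module name and get_logger helper are made up for illustration, and the idea is simply that no step module adds its own handlers, so behave's log capture handler stays in charge of output:
# log_util.py -- hypothetical shared logging module
import logging

def get_logger(name='file_system_operations'):
    # do not attach handlers here; leave output to behave's --logging-level capture
    return logging.getLogger(name)

# in a step module:
#   from log_util import get_logger
#   log = get_logger()
#   log.info("copying file ...")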

Related

How do I capture logs with colour formatters in python behave?

I would like to have coloured output in logging captured by python behave and print it in case of test failures. For example, this logging captured by pytest:
While a similar test on behave produces logs in this format:
I tried using the coloredlogs package, but it completely prevented log capturing and the logs were printed to stdout whether or not a test failed. Is there an alternative method/framework that works with behave?

Completely disable py.test stdin/stdout capturing, within a test (similar to py.test -s)

(This is meant as a much more specific version of How can I make py.test tests accept interactive input?).
I'm using py.test, and I'm trying to make stdin and stdout work normally -- i.e. print() should immediately print to the terminal, and input() (this is Python 3.5) should accept a line of input interactively.
(I am trying to make my test cases themselves do this; I know this isn't normal for unit testing, and I am not trying to have my test cases test something else that uses stdin and stdout.)
Running pytest with the -s flag does pretty much what I want, but I would rather have this as an option set within my test modules or individual test functions, especially as I only want to use it on some tests. capsys.disabled() as a context manager doesn't cut it -- it only deals with stdout and stderr, not stdin.
Is there a way to do this, either a pytest option to call via marker/decorator/context manager, or a way to "re-connect" sys.stdin/stdout within my module?
Thank you.
http://doc.pytest.org/en/latest/capture.html explains how to disable the stdout capturing:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
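If you want capturing disabled for a whole test, a hedged sketch of the same idea wrapped in a fixture (the no_capture name is just illustrative, and note this still only affects stdout and stderr, not stdin):
import pytest

@pytest.fixture
def no_capture(capsys):
    # keep output capture disabled for the entire duration of the test
    with capsys.disabled():
        yield

def test_interactive_output(no_capture):
    print('this goes straight to the terminal')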

PyInstaller packaged application works fine in Console mode, crashes in Window mode

I am building a fairly complex application using Python and PySide. Finally the day of the release is nearing so I want to build this application as an exe.
However, I have a strange problem on my hands. I've used PyInstaller (version 2, by the way) in the past and never had this happen to me.
Basically, when I build the application with the --console flag, it works fine, but it opens the console window. When I build the application with the window flag (-w), it doesn't work so well. It starts and everything, but there are all these weird glitches. For example, loading a text file often raises a BadFileDescriptor error (which doesn't happen in console mode), and the application crashes after performing a certain task. What's worse, the task is a loop: it performs fine the first time, but when it starts again, it crashes.
When I looked at the minidump file there were some errors about memory access violation of QtGui4.dll file. Again, this doesn't happen in console mode.
Anyone have any ideas?
The BadFileDescriptor error and the consequent memory access violation are caused by the fact that the stdout of applications in windowed mode is a fixed-size buffer.
So, if you are writing to stdout, either with print or through sys.stdout directly, after some time you'll see those errors.
You can fix this by:
Removing/commenting out the writes to stdout
Using logging instead of printing to stdout
Redirecting stdout at the beginning of the execution of your application. This is the solution that requires the least code to be changed, even though I think moving the debugging statements to logging would be the better choice.
To redirect stdout you can use this kind of code:
import sys
import tempfile
sys.stdout = tempfile.TemporaryFile()
sys.stderr = tempfile.TemporaryFile()
Do this just before executing your program. You can also use some custom object to put the output in "log" files or whatever; the important thing is that the output should not fill the fixed-size buffer.
For example you could do something like this to be able to take advantage of the logging module without changing too much code:
import sys
import logging
debug_logger = logging.getLogger('debug')
debug_logger.write = debug_logger.debug  # consider all prints as debug information
debug_logger.flush = lambda: None  # this may be called when printing
# debug_logger.setLevel(logging.DEBUG)  # activate debug logger output
sys.stdout = debug_logger
The downside of this approach is that print executes more than one call to stdout.write for each line:
>>> print 'test'
DEBUG:debug:test
DEBUG:debug:
If you want, you can probably avoid this kind of behaviour by writing a real write function that calls the_logger.debug only with "full lines".
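For example, a rough sketch of such a wrapper (the StreamToLogger name is purely illustrative) could look like this:
import logging
import sys

class StreamToLogger(object):
    """File-like object that forwards only complete lines to a logger."""
    def __init__(self, logger, level=logging.DEBUG):
        self.logger = logger
        self.level = level
        self._buffer = ''

    def write(self, message):
        self._buffer += message
        while '\n' in self._buffer:
            line, self._buffer = self._buffer.split('\n', 1)
            if line:  # skip the empty write that print adds for the newline
                self.logger.log(self.level, line)

    def flush(self):
        if self._buffer:
            self.logger.log(self.level, self._buffer)
            self._buffer = ''

sys.stdout = StreamToLogger(logging.getLogger('debug'))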
Anyway, I think this kind of solution should only be temporary, and be used only until the prints are ported to calls to logging.debug.
(Obviously the loggers should write to a file and not to stdout to avoid the errors.)

Getting Python's coverage.py to gather coverage for the module that imports it?

I've been toying around with coverage.py, but can't seem to get it to gather coverage for the __main__ module.
I'm on Windows, and like to hack up scripts using IDLE. The edit-hit-F5 cycle is really convenient, fast, and fun. Unfortunately, it doesn't look like coverage.py is able (or willing) to gather coverage of the main module -- in the code below, it reports that no data is collected. My code looks like this:
import coverage
cov = coverage.coverage()
cov.start()
def CodeUnderTest():
    print 'do stuff'
    return True
assert CodeUnderTest()
cov.stop()
cov.save()
cov.html_report()
Anyone have any ideas? I've tried various options to coverage, but to no avail. It seems like the environment IDLE creates isn't very friendly towards coverage, since sys.modules['__main__'] points to an idle.pyw file, not the file it's running.
You haven't said what behavior you are seeing, but I would expect that the two lines in CodeUnderTest would show as covered, while none of the other lines in the file would. Coverage.py can't measure execution that happened before it was started, and here it isn't started until after the module has been executed. For example, the import coverage line has already been executed by the time coverage is started. Additionally, once coverage has been started, it isn't until the next function call that measurement truly begins.
The simplest way to run coverage.py is to use it from the command line. That way, you know that it is starting as early as possible:
$ coverage run my_prog.py arg1 arg2 ...
If you must use it programmatically, arrange your file so that all the execution you're interested in happens inside a function that is invoked after coverage is started.
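A minimal sketch of that rearrangement, reusing the CodeUnderTest example from the question (the main wrapper function is just illustrative):
import coverage

def CodeUnderTest():
    print 'do stuff'
    return True

def main():
    # everything we want measured runs inside functions called after cov.start()
    assert CodeUnderTest()

cov = coverage.coverage()
cov.start()
main()
cov.stop()
cov.save()
cov.html_report()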

Logging every step and action by Python in a Large Script

I have finally finished creating a large Python script, but now I need a logger for it. I have input steps, prompts, function calls, while loops, etc. in the script.
The logger also has to log successful operations.
I couldn't find a suitable answer. I'm searching on the internet again, and wanted to ask here too.
What's your opinion?
Thanks
There's a logging module in the standard library. Basic usage is very simple; in every module that needs to do logging, put
logger = logging.getLogger(__name__)
and log with, e.g.,
logger.info("Doing something interesting")
logger.warning("Oops, something's not right")
Then in the main module, put something like
logging.basicConfig(level=logging.INFO)
to print all log messages with a severity of INFO or higher to standard error. The module is very configurable; see its documentation for details.
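As a rough sketch of how those pieces fit together in a single script, assuming you also want the messages written to a file (the copy_files step and the mylogs.log filename are made up for illustration):
import logging

logger = logging.getLogger(__name__)

def copy_files(src, dst):
    logger.info("Copying %s -> %s", src, dst)
    # ... do the actual work here ...
    logger.info("Copy finished successfully")

if __name__ == '__main__':
    logging.basicConfig(
        level=logging.INFO,
        filename='mylogs.log',  # log to a file instead of standard error
        format='%(asctime)s %(name)s %(levelname)s: %(message)s',
    )
    copy_files('a.txt', 'b.txt')
    logger.info("Script finished without errors")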
