I would like to have coloured output in logging captured by python behave and print it in case of test failures. For example, this logging captured by pytest:
A similar test on behave produces logs in this format:
I tried using the coloredlogs framework, but it completely prevented log capturing, and logs were printed to stdout regardless of whether a test failed. Is there an alternative method or framework that works with behave?
I am trying to capture the logs of my python-behave tests (produced by the logging module) by using this command to run them:
behave -f pretty --logging-level INFO --capture "features/file_system_operations.feature"
This executes the tests, but even though they are passing, all my info lines are printed to screen instead of being captured, e.g.:
When I change the logging level to WARNING, the code responds properly and no lines are printed:
behave -f pretty --logging-level WARNING --capture "features/file_system_operations.feature"
This results in a clean printout:
How can I ask behave to have INFO logging lines printed only if a test fails?
I think I managed to find out what was going on: logging being imported multiple times seems to be the root cause. I am not sure if this is a feature or a bug, but I will try to work around the issue by extracting the logging functionality into its own module.
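The workaround above can be sketched as a small shared logging module. This is a hedged illustration (the module layout and function name are hypothetical, not from the original post), assuming the goal is that every step file obtains the same configured logger without adding duplicate handlers:

```python
import logging

def get_logger(name="features"):
    """Return a shared logger, configuring it only on first use.

    Hypothetical helper: keeping configuration in one place avoids
    re-running logging setup when several step files import logging
    themselves, which is the duplicate-setup problem described above.
    """
    logger = logging.getLogger(name)
    if not logger.handlers:  # configure exactly once
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
        )
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```

Step files would then import this helper instead of configuring logging themselves, so behave's capture machinery sees a single, consistently configured logger.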
(This is meant as a much more specific version of How can I make py.test tests accept interactive input?).
I'm using py.test, and I'm trying to make stdin and stdout work normally -- i.e. print() should immediately print to the terminal, and input() (this is Python 3.5) should accept a line of input interactively.
(I am trying to make my test cases do this; I know this isn't normal for unit testing. I am not trying to have my test cases test something else that uses stdin and stdout.)
Running pytest with the -s flag does pretty much what I want, but I would rather have this as an option set within my test modules or individual test functions, especially as I only want to use it on some tests. capsys.disabled() as a context manager doesn't cut it -- it only deals with stdout and stderr, not stdin.
Is there a way to do this, either a pytest option to call via marker/decorator/context manager, or a way to "re-connect" sys.stdin/stdout within my module?
Thank you.
http://doc.pytest.org/en/latest/capture.html shows how to disable stdout capturing:
def test_disabling_capturing(capsys):
    print('this output is captured')
    with capsys.disabled():
        print('output not captured, going directly to sys.stdout')
    print('this output is also captured')
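As the question notes, capsys.disabled() only covers stdout and stderr. For stdin, one hedged option (a sketch, not a pytest feature) is a small context manager that temporarily swaps the standard streams; in an interactive test you would pass handles opened on /dev/tty, which assumes a POSIX terminal is available:

```python
import sys
from contextlib import contextmanager

@contextmanager
def swap_stdio(new_in, new_out):
    """Temporarily replace sys.stdin/sys.stdout, restoring them afterwards.

    Sketch only: pytest replaces these streams for capturing, so swapping
    in terminal handles lets input()/print() talk to the real terminal.
    """
    saved_in, saved_out = sys.stdin, sys.stdout
    sys.stdin, sys.stdout = new_in, new_out
    try:
        yield
    finally:
        sys.stdin, sys.stdout = saved_in, saved_out
```

Usage inside a test might look like `with open('/dev/tty') as tin, open('/dev/tty', 'w') as tout, swap_stdio(tin, tout): answer = input('name? ')` -- again assuming a controlling terminal exists.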
All Maya script logs and errors are printed in the History tab. This is the output of all commands and Python scripts.
To make debugging scripts easier, I want all the logs sent somewhere on the server. How can I intercept the output and send it from my script? Then I will do whatever is necessary, and route the output either to a remote console or to files on the server.
In short, the task is to intercept the output. How can this be done?
You can also redirect Script Editor history using Maya's scriptEditorInfo command found here:
An example usage of this would be something like:
import maya.cmds as cmds
outfile = r'/path/to/your/outfile.txt'
# begin output capture
cmds.scriptEditorInfo(historyFilename=outfile, writeHistory=True)
# stop output capture
cmds.scriptEditorInfo(writeHistory=False)
There is also cmdFileOutput, which you can either call interactively or enable/disable via a registry entry for MAYA_CMD_FILE_OUTPUT; documentation here
Lastly, you can augment Maya startup with the -log flag to write the Output Window text to another location. With this, however, you do not get the Script Editor output, but it could be all you need given what you are trying to log.
It sounds like you need a real-time error tracker like Sentry. Sentry has logging modules made exactly for this purpose: they communicate server/client logging with richer error/debug handling.
Here is an example of rerouting the Maya Script Editor to a terminal.
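The common technique behind these approaches is intercepting Python's sys.stdout. A minimal, hedged sketch (not Maya-specific; the class name and log path are illustrative) that tees everything written to stdout into a second sink, which could just as well be a file or a socket to a server:

```python
import sys

class TeeStream:
    """Write-through wrapper: forwards writes to the original stream
    and also copies them to a second sink (file, socket wrapper, etc.)."""

    def __init__(self, original, sink):
        self.original = original
        self.sink = sink

    def write(self, text):
        self.original.write(text)
        self.sink.write(text)

    def flush(self):
        self.original.flush()
        self.sink.flush()

# Usage sketch (path is illustrative):
# with open('/tmp/maya_session.log', 'a') as log:
#     sys.stdout = TeeStream(sys.stdout, log)
#     try:
#         print('this line goes to the console and the log')
#     finally:
#         sys.stdout = sys.stdout.original
```

Sending the sink's contents to a remote server is then an ordinary I/O problem, independent of Maya.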
I'm writing a pytest plugin that needs to warn the user about anomalies encountered during the collection phase, but I don't find any way to consistently send output to the console from inside my pytest_generate_tests function.
Output from print and from the logging module only appears in the console when adding the -s option. All logging-related documentation I found refers to logging inside tests, not from within a plugin.
In the end I used the pytest-warning infrastructure via the undocumented _warn() method of the pytest config object, which is passed to, or otherwise accessible from, various hooks. For example:
def pytest_generate_tests(metafunc):
    [...]
    if warning_condition:
        metafunc.config._warn("Warning condition encountered.")
    [...]
This way you get additional pytest-warnings in the one-line summary if any were reported, and you can see the warning details by adding the '-r w' option to the pytest command line.
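A less private alternative (hedged; exact presentation depends on your pytest version) is to emit a standard Python warning from the hook, since pytest collects warnings raised during collection into its warnings summary. A sketch, with `warning_condition` as a placeholder for the plugin's actual check:

```python
import warnings

def pytest_generate_tests(metafunc):
    # 'warning_condition' stands in for your plugin's real anomaly check.
    warning_condition = True
    if warning_condition:
        warnings.warn(UserWarning("Warning condition encountered."))
```

This avoids depending on the undocumented _warn() method, at the cost of routing through the generic warnings machinery.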
I'm running a couple of instances of pytest.main() and once they are all complete I want to quickly see the failures across all the runs without rooting through all the individual reports. How can I do that?
Do I have to parse the textual reports, or can I get py.test to return an object with failure data? (So far as I've seen, it just returns an integer.)
I use Allure reports (https://docs.qameta.io/allure/#_pytest) for that.
You can run each pytest.main() with the option --alluredir=, where each instance has a different path, for example /path/to/reports/report1 and /path/to/reports/report2.
After all runs are completed, you can generate one combined report by running the command allure serve /path/to/reports. More about report generation here: https://docs.qameta.io/allure/#_get_started
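If you would rather stay within pytest itself, a hedged alternative (the class and attribute names are my own) is to pass a small plugin object to each pytest.main() call and let it record failing test ids through the pytest_runtest_logreport hook, which pytest invokes with a report for each test phase:

```python
class FailureCollector:
    """Hypothetical plugin object: pytest calls pytest_runtest_logreport
    for each phase of each test; we keep the ids of failed 'call' phases."""

    def __init__(self):
        self.failures = []

    def pytest_runtest_logreport(self, report):
        # Only the 'call' phase reflects the test body itself
        # (setup/teardown failures are reported separately).
        if report.failed and report.when == "call":
            self.failures.append(report.nodeid)

# Usage sketch: share one collector across several runs.
# import pytest
# collector = FailureCollector()
# pytest.main(["tests/suite_a"], plugins=[collector])
# pytest.main(["tests/suite_b"], plugins=[collector])
# print(collector.failures)  # all failing test ids across both runs
```

This gives you a plain Python list of failures without parsing any textual report.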