I'm running a couple of instances of pytest.main(), and once they have all completed I want to quickly see the failures across all the runs without rooting through each individual report. How can I do that?
Do I have to parse the textual reports, or can I get py.test to return an object with failure data? (As far as I've seen, it just returns an integer exit code.)
I use Allure reports (https://docs.qameta.io/allure/#_pytest) for that.
You can run each pytest.main() with the --alluredir= option, giving each instance a different path, for example /path/to/reports/report1 and /path/to/reports/report2.
After all runs have completed, you can generate one combined report by running the command allure serve /path/to/reports. More about generating reports here: https://docs.qameta.io/allure/#_get_started
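A minimal driver sketch of that setup, assuming the allure-pytest plugin is installed (the test paths and the report root are illustrative):

import pytest

# Each pytest.main() call writes its Allure results into its own
# subdirectory under the shared report root.
runs = {
    "report1": ["tests/suite_a"],
    "report2": ["tests/suite_b"],
}
for name, targets in runs.items():
    pytest.main(targets + ["--alluredir=/path/to/reports/" + name])

# Then, from a shell, serve the combined report:
#   allure serve /path/to/reports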
When I apply or run a Python script in Power BI, I see the following:
Two Python instances executing the same script
Both instances using a similar amount of resources
Both generating their own log files
This is redundant: it takes twice as long, uses all my resources, and is super annoying!
Why is this the case? Can it be avoided or fixed somehow so that only one Python instance executes the script to generate data for the data source?
I have a set of Behave (1.2.6) features and scenarios that are all working correctly individually. But based on certain initial conditions, I need to run subsets of them in a specific order. I know it's not the right way to do BDD (each test should be independent, with its own setup and teardown), but these are integration tests against an actual deployed web app (no mocking), and the setup and teardown take far too long.
I could drive it from a shell script that runs each subset in a separate behave run, but I'd rather have a Python driver function that examines the initial conditions, runs the requested set of tests in the right order, and outputs combined summary stats.
So how can I invoke a Behave scenario from a Python function?
You can import the main function and run it:
from behave.__main__ import main
main("--tags smoke")
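Building on that, a minimal driver sketch along the lines the question describes (the tag names and the condition flag are hypothetical); main() takes a command-line string and returns the exit status:

from behave.__main__ import main

def run_suites(db_is_fresh):
    # Pick the subsets and their order from the initial conditions.
    tag_runs = ["@seed", "@smoke", "@sync"] if db_is_fresh else ["@smoke"]
    results = {}
    for tags in tag_runs:
        results[tags] = main("--tags " + tags)  # 0 means every scenario passed
    return results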
I have multiple recurring tasks scheduled to run several times per day to keep some different data stores in sync with one another. The settings for the 'Actions' tab are as follows:
Action: Start a Program
Program/script: C:\<path to script>.py
Add arguments:
Start in: C:\<directory of script>
I can run the Python files just fine from the command line, either by navigating to the file's location first and invoking python there, or even by invoking python without navigating.
For some reason, the scripts just won't run as a scheduled task. I've checked all over and tried various things, like making sure the user profile is set correctly and has all the necessary privileges, which it does. These scripts had been working for several weeks with no problems, so something has changed that we haven't been able to identify yet.
Any suggestions?
Have you tried using:
Action: Start a Program
Program/script: C:\<path to python.exe>\python.exe
Add arguments: C:\<path to script>\script.py
That launches the Python interpreter explicitly instead of relying on the .py file association, which may not resolve correctly under the account the task runs as.
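If the task still dies silently, one way to get evidence is to have the script log any crash next to itself. A sketch, with main() standing in for the actual sync logic:

import datetime
import pathlib
import traceback

def main():
    ...  # the actual sync logic goes here

if __name__ == "__main__":
    try:
        main()
    except Exception:
        # With no console attached, a failed scheduled run still leaves a trace.
        log = pathlib.Path(__file__).with_suffix(".log")
        with log.open("a") as f:
            f.write(datetime.datetime.now().isoformat() + "\n")
            f.write(traceback.format_exc() + "\n")
        raise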
I'm writing a pytest plugin that needs to warn the user about anomalies encountered during the collection phase, but I can't find any way to reliably send output to the console from inside my pytest_generate_tests function.
Output from print and from the logging module only appears on the console when the -s option is given. All the logging-related documentation I've found refers to logging from inside tests, not from within a plugin.
In the end I used the pytest-warnings infrastructure via the undocumented _warn() method of the pytest config object, which is passed to, or otherwise reachable from, the various hooks. For example:
def pytest_generate_tests(metafunc):
    [...]
    if warning_condition:
        metafunc.config._warn("Warning condition encountered.")
    [...]
This way you get a pytest-warnings count in the one-line summary whenever any warning was reported, and you can see the warning details by adding the -r w option to the pytest command line.
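On newer pytest versions, where the private _warn() method no longer exists, the standard warnings module should give the same effect, since pytest captures warnings and shows them in its warnings summary. A sketch (warning_condition is a placeholder, as above):

import warnings
import pytest

def pytest_generate_tests(metafunc):
    if warning_condition:  # placeholder condition, as above
        warnings.warn(pytest.PytestWarning("Warning condition encountered."))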
I have a script that takes a file input plus some info, runs a couple of (possibly interdependent) programs on it using the subprocess module, and distributes the output across the file system.
Only a few parts can be tested in isolation by traditional unit testing, so I'm looking for a convenient way to automate the integration testing (checking that the output files exist in the right locations, in the right number, with the right sizes, and so on).
I initially thought that the setUp and tearDown methods of the default unittest module could help me, but they are re-run with each test rather than once for the whole test suite, so that's not an option. Is there any way to make the unittest module run a global setUp and tearDown once? Or an alternative module/tool that I could use? Eclipse/PyDev integration would be a bonus.
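For reference, unittest does ship suite-level fixtures since Python 2.7: setUpClass/tearDownClass run once per test class, and setUpModule/tearDownModule run once per module. A minimal sketch of the class-level variant, where run_pipeline, clean_up, and the output file name are hypothetical stand-ins for the actual script invocation, cleanup, and artifacts:

import unittest

class IntegrationTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Run the whole pipeline once, shared by every test in this class.
        cls.output_dir = run_pipeline()  # hypothetical helper returning a Path

    @classmethod
    def tearDownClass(cls):
        clean_up(cls.output_dir)  # hypothetical cleanup helper

    def test_outputs_exist(self):
        self.assertTrue((self.output_dir / "result.dat").exists())

    def test_output_not_empty(self):
        self.assertGreater((self.output_dir / "result.dat").stat().st_size, 0)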