I'm using pytest with some parameterized tests. However, in more recent versions of pytest, where keyword matching has become more complex, I can't figure out how to match a specific parameterization of a test.
If I run my tests they look like
test_abc[backend_generator0-1]
test_abc[backend_generator0-2]
etc. But I can't figure out how to run a specific test parameterization.
pytest -k "test_abc[backend_generator0-2]"
gives Syntax Error
test_simple_delay[backend_generator1not 2]
I've tried various attempts at escaping the - to match only the specific test but without success.
This python 2.7 on pytest 2.3.5
You don't need -k or any escaping for this. Use the test's node ID directly (quote it so the shell doesn't expand the brackets):
py.test 'test_abc[backend_generator0-1]'
You can just do py.test -k "test_abc and generator0", I guess.
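For illustration, here is a minimal parametrized test in the spirit of the question (the file name test_backends.py and the lambda generators are made up for this sketch), followed by both ways of selecting a single parameterization:

# test_backends.py -- hypothetical sketch
import pytest

# the lambdas have no usable string id, so pytest falls back to ids like
# backend_generator0 / backend_generator1; n contributes "1" or "2"
@pytest.mark.parametrize("n", [1, 2])
@pytest.mark.parametrize("backend_generator",
                         [lambda n: list(range(n)), lambda n: tuple(range(n))])
def test_abc(backend_generator, n):
    assert len(backend_generator(n)) == n

Run one exact parameterization by node ID, or approximate it with -k keywords:

pytest 'test_backends.py::test_abc[backend_generator0-2]'
pytest -k "test_abc and backend_generator0 and 2"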
I am trying to measure the code coverage of my pytest tests. I tried following the quick start guide of coverage (https://coverage.readthedocs.io/en/6.4.1/).
When I run my test with the following command, everything seems fine
coverage run -m pytest tests/
===================================== test session starts ======================================
platform linux -- Python 3.10.4, pytest-7.1.2, pluggy-1.0.0
rootdir: /home/arnaud/Documents/Github/gotcha
collected 4 items
tests/preprocessing/test_preprocessing.py .... [100%]
====================================== 4 passed in 0.30s =======================================
However, when I try to access the report with either of those commands,
coverage report
coverage html
I get the following message:
No source for code: '<project_directory>/config-3.py'.
I have not found an appropriate solution to this problem so far.
It is possible to ignore errors using the command
coverage html -i
which solved my issue.
This issue is usually caused by stale coverage result files, so you can either:
remove the old coverage result files (see the commands below), or...
run the coverage command with the -i flag in order to ignore the errors - you can read more about that in the official coverage docs: https://coverage.readthedocs.io/en/6.4.1/cmd.html#reporting
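For example, dropping the stale data file with coverage's erase command before re-running:

coverage erase                 # deletes the old .coverage data file
coverage run -m pytest tests/
coverage report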
Another possible solution is to specify the source option. In my case, rather than the whole project (source = .), I specified the actual source folder (e.g. src). This can either be done on the command line:
coverage run --source=src -m pytest tests/
or include it in your .coveragerc file:
[run]
source = src
...
I was getting this same issue because of a specific library I was importing*, but I never figured out why that library affected coverage, and others didn't.
Though this might just be a workaround, it makes sense to just check your source folder, and ignoring all errors (with -i) isn't much better.
* The library uses opencv-python-headless, which I think is the root cause of this issue.
Can you explain the message F6401 that appears when I run pylint? The full text is: "pylint-pytest plugin cannot enumerate and collect pytest fixtures. Please run `pytest --fixtures --collect-only path/to/current/module.py` and resolve any potential syntax error or package dependency issues (cannot-enumerate-pytest-fixtures)". What is the reason for it?
I would like to know how it works and why it appears, and why its output varies: with the same code, sometimes two modules are flagged, sometimes more. It has been driving me crazy.
I did run pytest --fixtures --collect-only without any unusual output, and my tests were normal.
Description:
My existing code was fine after tidying it up with pylint, pytest, and isort; everything worked. Then I added a new package, executor, with three modules: base.py holding the abstract base class, and two corresponding implementation modules (local.py and docker.py).
I ran isort, and pylint was still fine.
Then I imported the base class and the two implementation classes in the package's __init__.py file and added a factory method.
When I ran pylint again, the output told me that some of the test modules have F6401 problems.
Again, I want to emphasize that everything was fine until I added this package. Merely adding its source code makes this exception appear.
What makes it even more confusing is that the flagged modules don't contain any fixtures. I ran pylint again and found that F6401 was reported for even more test modules (several times more than the last run).
I've been using pylint on a new project to check a module-by-module migration, and when I get to this module I can't continue.
OS env
python 3.7
os: Deepin(base Debian)
IDE: Pycharm
Package versions
pylint 3.0.0a3
pylint-pytest 1.1.2
pyparsing 2.4.7
pytest 6.2.3
pytest-asyncio 0.14.0
pytest-cov 2.11.1
pytest-mock 3.5.1
I have filed an issue about this question.
After debugging the source code, I found that the cause of my problems was an error raised in pylint-pytest while it ran pytest to collect fixtures from my source code; pylint-pytest then passed that error on to pylint.
My source code had a type-annotation error, which made pytest's fixture collection fail for that module, and the failure was surfaced through pylint. Why the output differs between runs is still not clear to me.
From debugging the source code, we know that pylint-pytest registers itself with pylint, and when pylint checks all files, it passes the files to pylint-pytest's FixtureChecker.
https://github.com/reverbc/pylint-pytest/blob/62676386f80989cc0373d77bc5dc74acc635fd7a/pylint_pytest/checkers/fixture.py#L92-L142
The visit_module method in the FixtureChecker hands the file to pytest, effectively running pytest <module_file> --fixtures --collect-only, and at the same time loads the FixtureCollector plugin into that pytest run.
https://github.com/reverbc/pylint-pytest/blob/62676386f80989cc0373d77bc5dc74acc635fd7a/pylint_pytest/checkers/fixture.py#L125-L131
In pytest_collectreport, if an error is reported by pytest, it is recorded, and the error information is later passed on to pylint.
https://github.com/reverbc/pylint-pytest/blob/62676386f80989cc0373d77bc5dc74acc635fd7a/pylint_pytest/checkers/fixture.py#L24-L34
I don't think this logic makes sense. pytest should only collect fixtures from the test modules, so instead of collecting fixtures from all modules, pylint-pytest should filter out the non-test source files while pylint checks them. A hypothetical filter is sketched below.
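For illustration only, a filter of the kind I mean (the helper is my own sketch, not the plugin's code; the patterns mirror pytest's default test-file naming):

import os

def is_test_module(path):
    # roughly pytest's default python_files patterns (test_*.py, *_test.py)
    # plus conftest.py; a real implementation should honor the project's
    # configured patterns
    name = os.path.basename(path)
    return (name.startswith("test_")
            or name.endswith("_test.py")
            or name == "conftest.py")

visit_module could then return early when is_test_module(node.file) is false, instead of invoking pytest on plain source files.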
At this point, my doubts have been cleared up. Thanks.
The documentation for pytest suggests you can skip certain imports:
https://docs.pytest.org/en/latest/skipping.html#skipping-on-a-missing-import-dependency
We are trying to run pylint under pytest, and in some cases importing tensorflow causes issues because of system dependencies. The documentation shows a way of skipping the import in code; is it possible to skip imports like this from the pytest command line?
There is no such feature in pytest, so you should do this directly in code (usually in a conftest.py).
A hacky workaround to do the same directly at the command line would be:
python -c "import pytest; pytest.importorskip('tensorflow'); pytest.main()"
Better would be to use one of the existing hooks to add your own command-line option to pytest, so it can be specified clearly like --no-tensorflow or whatever.
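For example, a sketch of that hook-based approach in conftest.py (the flag name --no-tensorflow and the tensorflow marker are assumptions for this sketch, not built-in pytest features):

# conftest.py -- hypothetical sketch
import pytest

def pytest_addoption(parser):
    parser.addoption("--no-tensorflow", action="store_true", default=False,
                     help="skip tests that need tensorflow")

def pytest_collection_modifyitems(config, items):
    if not config.getoption("--no-tensorflow"):
        return
    skip_tf = pytest.mark.skip(reason="--no-tensorflow given")
    for item in items:
        # assumes tensorflow-dependent tests are tagged @pytest.mark.tensorflow
        if "tensorflow" in item.keywords:
            item.add_marker(skip_tf)

Tests that need the library would then be tagged with @pytest.mark.tensorflow, and running pytest --no-tensorflow skips them without the import ever happening.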
I test my SciPy installation using
python -c "import scipy; scipy.test('full', verbose=2)"
A single test (test_face) fails, while all others either pass or xfail. This one test fails because the dependency bz2 is missing, which is fine. How can I skip this test entirely while still running all the others?
I'm using SciPy 1.2.0 with pytest 4.0.2.
I found a working solution using the extra_argv argument, which passes the arguments on to pytest. From the pytest docs, -k "not test_face" can be used to deselect exactly this test. In total then,
python -c "import scipy; scipy.test('full', verbose=2, extra_argv=['-k not test_face'])"
achieves what I wanted.
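If you would rather bypass scipy.test() entirely, pytest's --pyargs option can target the installed package directly; note that scipy.test('full') adds some options of its own, so this is only roughly equivalent:

python -m pytest --pyargs scipy -k "not test_face"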
I have written a unit test, MachineSettings_test.py, for a program of mine; it has the following form:
import unittest
import MachineSettings as MS

class TestMachineSettings(unittest.TestCase):
    def setUp(self):
        [...]

    def testStringRepresentation(self):
        [...]

    def testCasDict(self):
        [...]

if __name__ == "__main__":
    unittest.main()
I am a little bit confused by the following fact:
If I run
python -m unittest -v MachineSettings_test
I get as output
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
i.e. Python does not recognize the tests inside the test module.
But if I just run
python MachineSettings_test.py
Everything works fine and I get as output
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
This is confusing to me, and I could not find any similar question here yet, so I posted it.
The version of Python that I am (forced to be) using is 2.6, but I could not find anything in the documentation that makes this case special.
Does anyone have an idea?
Thanks
From the documentation:
Changed in version 2.7: In earlier versions it was only possible to run individual test methods and not modules or classes.
And you're trying to run tests for whole modules with Python 2.6.
Apparently you can't even run individual test methods with -m unittest in Python 2.6. See this question for details.
You might want to try nose or nose2.
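For example, with nose installed, running your module's tests looks something like this (using your file name):

pip install nose
nosetests -v MachineSettings_test.py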