I've only just started learning about testing, and so I'm just starting out by trying to put together and run some very simple unit tests using py.test.
Example test_script.py:
import pytest
def test_func():
    assert True

pytest.main('-v')
Running this gives:
============================= test session starts ==============================
platform win32 -- Python 3.3.1 -- pytest-2.3.4 -- C:\Program Files (x86)\Python33\python.exe
collecting ... collected 1 items
test_script.py:3: test_func PASSED
=========================== 1 passed in 0.12 seconds ===========================
If I replace -v with -s to view stdout (and disable pytest capturing of stdout), the tests run twice:
============================= test session starts ==============================
platform win32 -- Python 3.3.1 -- pytest-2.3.4
============================= test session starts ==============================
platform win32 -- Python 3.3.1 -- pytest-2.3.4
collected 1 items
test_script.py .
=========================== 1 passed in 0.04 seconds ===========================
collected 1 items
test_script.py .
=========================== 1 passed in 0.12 seconds ===========================
Should the tests run twice here? I did search, but couldn't find anything obvious in the documentation (though may have been looking in the wrong place).
That's a funny one :)
Here is what happens: Python executes test_script.py and thus runs pytest.main("-s"), which goes back to the file system and collects test_script.py as a test module. When pytest imports test_script, pytest.main(...) is invoked again during collection. This second invocation does not import test_script again, because it is already in sys.modules, but it does execute the test function. When collection finishes (and the inner pytest.main run has already executed the test once), the outer pytest.main invocation executes the test function as well. Everything clear? :)
If you want to avoid this, you need to wrap the pytest.main invocation like this:
if __name__ == "__main__":
    pytest.main("-s")
This block does not execute on a normal import, but it does execute when you issue python test_script.py, because Python sets __name__ to "__main__" when it runs a script from the command line, and to "test_script" on a normal import test_script.
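Putting it together, the whole test_script.py from the example above looks like this with the guard added; running python test_script.py now executes the test only once:
import pytest

def test_func():
    assert True

if __name__ == "__main__":
    pytest.main("-s")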
I've been working on a git hook program, and in it I'm using subprocess to run pytest, which is producing some strange indentation in the stdout. This is running in git bash on Windows, so it's certainly possible that it's a newline problem, but note that adding the universal_newlines=True kwarg didn't help either.
My calling code:
process_result = subprocess.run(['pytest', './test'], text=True)
My output:
$ git commit -m "t"
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /mnt/c/Users/[usr]/Documents/barb
collected 1 item
test/test_cli.py . [100%]
============================== 1 passed in 0.56s ===============================
Any ideas on how to fix this issue?
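One thing to try (purely an assumption, since the thread has no confirmed fix): capture pytest's output in the hook and re-emit it with normalized line endings, so the hook controls how the text reaches git bash:
import subprocess
import sys

# Hedged sketch: assumes the odd indentation comes from \r\n line endings
# passing through git bash. Capture pytest's output, normalize newlines,
# then propagate the exit code back to the hook.
result = subprocess.run(['pytest', './test'], capture_output=True, text=True)
sys.stdout.write(result.stdout.replace('\r\n', '\n'))
sys.stderr.write(result.stderr.replace('\r\n', '\n'))
sys.exit(result.returncode)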
I have a file test_pytest.py
def test_dummy():
    assert 1 + 1 == 2
I can run pytest from within a Python console in an IDE such as Spyder (my use case is Blender, but the behavior is general for an embedded Python console):
pytest.main(['-v','test_pytest.py'])
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /home/elfnor/anaconda3/bin/python
cachedir: Documents/python/.cache
rootdir: /home/elfnor/Documents/python, inifile:
collecting ... collected 1 items
Documents/python/test_pytest.py::test_dummy PASSED
=========================== 1 passed in 0.01 seconds ===========================
I then add another test to test_pytest.py, and save the file.
def test_dummy():
    assert 1 + 1 == 2

def test_another():
    assert 2 + 2 == 4
I then rerun pytest.main(['-v', 'test_pytest.py']), but pytest doesn't find the new test.
I've tried reloading the pytest module, but no luck. If I kill the console (in Spyder), re-import pytest and rerun, the new test is found.
I haven't found a way to kill the console in Blender (without killing Blender). Is there a different way to force pytest to find new tests when it is used from a console?
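A sketch of one possible workaround (an assumption, not a documented fix): drop the previously imported test module from sys.modules so the next pytest.main() call re-imports the edited file instead of reusing the cached module object.
import sys
import importlib
import pytest

# Forget the stale module and invalidate import caches before rerunning.
sys.modules.pop('test_pytest', None)
importlib.invalidate_caches()
pytest.main(['-v', 'test_pytest.py'])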
I am creating a conda recipe and have added run_test.py. These are unittest classes.
Unfortunately, when there are errors, the package is still created.
My question: how do I inform conda that the tests failed, so that it does not continue with the package build?
run_test.py contains:
import unittest

suite = unittest.TestLoader().discover("../tests/unitTest")  # , pattern="test[AP][la]*[sr].py")
unittest.TextTestRunner(verbosity=2).run(suite)
I do add the files in meta.yaml
test:
  files:
    - ../tests/unittest/
This is the output:
Ran 16 tests in 2.550s
FAILED (errors=5)
===== PACKAGE-NAME-None-np18py27_0 OK ====
I want to stop the build
The script needs to exit nonzero. If the tests fail, call sys.exit(1) in the script.
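A minimal sketch of the discover-based run_test.py from the question, adjusted so it exits nonzero on failure:
import sys
import unittest

# Run the discovered suite and make the script's exit status reflect the
# result, so conda-build aborts the package build when any test fails.
suite = unittest.TestLoader().discover("../tests/unitTest")
result = unittest.TextTestRunner(verbosity=2).run(suite)
sys.exit(0 if result.wasSuccessful() else 1)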
In run_test.py, you can invoke unittest.main() by doing the following:
if __name__ == "__main__":
    unittest.main()
The conda build process will automatically fail if the tests do not succeed. This link will help demonstrate: invoking unittest main method
I can run files with plain test_* functions without any problems; however, when I try to run a file with the tests contained in a subclass of unittest.TestCase, I get the following result:
W:\dev\Scripts\python.exe "C:\Program Files (x86)\JetBrains\PyCharm 3.0.1\helpers\pycharm\pytestrunner.py" -p pytest_teamcity W:/dev/datakortet/xfr/setup/tests
Testing started at 3:31 PM ...
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5
plugins: cov, xdist
collected 0 items / 1 skipped
========================== 1 skipped in 0.57 seconds ===========================
Process finished with exit code 0
Empty test suite.
When I run the same tests from the command line:
(dev) w:\dev\datakortet\xfr\setup\tests>py.test test_setup_views.py
========================================================================================= test session starts ====
platform win32 -- Python 2.7.3 -- pytest-2.3.5
plugins: cov, xdist
collected 6 items
test_setup_views.py ......
====================================================================================== 6 passed in 4.15 seconds ==
(dev) w:\dev\datakortet\xfr\setup\tests>
Do I need to add anything to the tests? (I don't have a test suite or a test runner, since py.test doesn't require this...)
Go to the file that contains your unittest tests. Then open Python Integrated Tools inside of Settings and set the Default test runner to Unittest.
After that, you can simply open your unittest file and run it, and it will perform the tests.
Alternatively, right-click the directory where the tests are located, and you should see an option like "Run Unittests in test.py". That will run all your tests.
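For reference, py.test itself collects plain unittest.TestCase subclasses without any suite or runner; a minimal sketch (the class and method names here are placeholders, not from the original project):
import unittest

class TestSetupViews(unittest.TestCase):
    def test_something(self):
        self.assertEqual(2 + 2, 4)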
We have been using nosetest for running and collecting our unittests (which are all written as python unittests which we like). Things we like about nose:
uses standard python unit tests (we like the structure this imposes).
supports reporting coverage and test output in xml (for jenkins).
What we are missing is a good way to run tests in isolated processes while maintaining good error reporting (we are testing C++ libraries through Python, so segfaults should not be catastrophic). nosepipe seems to be no longer maintained, and we have had some problems with it.
We are trying to figure out whether we should:
- fix/use nosepipe,
- switch to nose2 and write nosepipe2, or
- use pytest or some other testing framework.
We would prefer an approach with a good community. Our problem (C++ plugins requiring good isolation) seems like it should be a common one, but googling has not turned up solutions that are maintained. Advice from more experienced heads is appreciated.
pytest has the xdist plugin, which provides the --boxed option to run each test in a controlled subprocess. Here is a basic example:
# content of test_module.py
import pytest
import os
import time
# run the test function 50 times with different arguments
@pytest.mark.parametrize("arg", range(50))
def test_func(arg):
    time.sleep(0.05)  # each test takes a while
    if arg % 19 == 0:
        os.kill(os.getpid(), 15)
If you run this with:
$ py.test --boxed
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev8
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov
collecting ... collected 50 items
test_module.py f..................f..................f...........
================================= FAILURES =================================
_______________________________ test_func[0] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[19] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[38] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
=================== 3 failed, 47 passed in 3.41 seconds ====================
You'll see that a couple of tests are reported as crashing, indicated by the lower-case f and the respective failure summary. You can also use the xdist-provided parallelization feature to speed up your testing:
$ py.test --boxed -n3
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev8
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov
gw0 I / gw1 I / gw2 I
gw0 [50] / gw1 [50] / gw2 [50]
scheduling tests via LoadScheduling
..f...............f..................f............
================================= FAILURES =================================
_______________________________ test_func[0] _______________________________
[gw0] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[19] _______________________________
[gw2] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[38] _______________________________
[gw2] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
=================== 3 failed, 47 passed in 2.03 seconds ====================
In principle, just distributing tests to parallel subprocesses may often suffice and avoids the overhead of starting a boxed process for each test. This currently only works if you have fewer crashing tests than the -n number of processes, because a dying test process is not restarted. This limitation could probably be removed without too much effort. Meanwhile you will have to use the safe boxing option.
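As for the coverage and XML reporting mentioned in the question, pytest covers that as well: --junitxml produces JUnit-style XML for Jenkins, and the pytest-cov plugin adds coverage reporting, e.g. (package name and paths here are placeholders):
$ py.test --boxed --junitxml=results.xml --cov=mypackage --cov-report=xml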