I can run files with plain test_* functions without any problems; however, when I try to run a file with the tests contained in a subclass of unittest.TestCase, I get the following result:
W:\dev\Scripts\python.exe "C:\Program Files (x86)\JetBrains\PyCharm 3.0.1\helpers\pycharm\pytestrunner.py" -p pytest_teamcity W:/dev/datakortet/xfr/setup/tests
Testing started at 3:31 PM ...
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5
plugins: cov, xdist
collected 0 items / 1 skipped
========================== 1 skipped in 0.57 seconds ===========================
Process finished with exit code 0
Empty test suite.
When I run the same tests from the command line:
(dev) w:\dev\datakortet\xfr\setup\tests>py.test test_setup_views.py
========================================================================================= test session starts ====
platform win32 -- Python 2.7.3 -- pytest-2.3.5
plugins: cov, xdist
collected 6 items
test_setup_views.py ......
====================================================================================== 6 passed in 4.15 seconds ==
(dev) w:\dev\datakortet\xfr\setup\tests>
Do I need to add anything to the tests? (I don't have a test suite or a test runner, since py.test doesn't require them...)
Go to the file that contains your unittest tests. Then open Python Integrated Tools inside Settings and set the default test runner to Unittest.
After that, you can simply open your unittest file and run it, and it will perform the tests.
Alternatively, right-click the directory where your tests are located and you should see an option like "Run Unittests in test.py". That will run all your tests.
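For reference, py.test collects plain unittest.TestCase subclasses without needing a suite or a runner; a minimal sketch (the class and method names here are illustrative, not taken from the original project):

import unittest

class TestSetupViews(unittest.TestCase):
    def test_view_renders(self):
        # a trivial assertion; real tests would exercise the views
        self.assertTrue(True)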
Suppose we have installed a huge library like SageMath. Let's consider a trivial test file:
from sage.all_cmdline import *  # import sage library

class TestClass:
    def test_method(self):
        assert True
It runs in about 1.5 sec with nosetests:
$ time nosetests test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
nosetests test.py 1.38s user 0.14s system 97% cpu 1.567 total
Whereas with pytest it runs for ~4.5 sec!
platform linux -- Python 3.8.2, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/user/pytest, inifile: pytest.ini
plugins: profiling-1.7.0
collecting 1 item
/usr/lib/python3.8/site-packages/sage/misc/sage_unittest.py:20:
PytestCollectionWarning: cannot collect test class 'TestSuite' because it has a __init__ constructor (from: test.py)
class TestSuite(object):
collected 1 item
test.py . [100%]
====================================================================== 1 passed in 3.26s ======================================================================
pytest test.py 3.86s user 0.46s system 101% cpu 4.253 total
Judging by the warning, pytest collects some tests from the library itself, or maybe something else is going on.
The question is: how can pytest startup be sped up in cases like this, where a huge library has to be loaded? And how can collecting tests from that huge library be avoided?
P.S. See detailed discussion on the subject: https://github.com/pytest-dev/pytest/issues/7111
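One thing worth trying (an assumption on my part, not from the thread): the warning appears because the star import pulls sage's TestSuite class into the test module's namespace, where pytest's default Test* class pattern matches it. Narrowing the collection patterns in pytest.ini avoids that spurious scan (the import cost of sage itself remains):

# pytest.ini -- keep pytest from matching classes pulled in by
# "from sage.all_cmdline import *", such as sage's TestSuite
[pytest]
python_files = test_*.py
python_classes = TestClass

Importing only the names the tests actually need, instead of star-importing, has the same effect.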
I have a file test_pytest.py:
def test_dummy():
    assert 1 + 1 == 2
I can run pytest from within a Python console in an IDE such as Spyder (my use case is Blender, but the behavior is general for an embedded Python console):
pytest.main(['-v','test_pytest.py'])
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-2.9.2, py-1.4.31, pluggy-0.3.1 -- /home/elfnor/anaconda3/bin/python
cachedir: Documents/python/.cache
rootdir: /home/elfnor/Documents/python, inifile:
collecting ... collected 1 items
Documents/python/test_pytest.py::test_dummy PASSED
=========================== 1 passed in 0.01 seconds ===========================
I then add another test to test_pytest.py, and save the file.
def test_dummy():
    assert 1 + 1 == 2

def test_another():
    assert 2 + 2 == 4
Then I rerun pytest.main(['-v','test_pytest.py']), but pytest doesn't find the new test.
I've tried reloading the pytest module, but no luck. If I kill the console (in Spyder), re-import pytest and rerun, the new test is found.
I haven't found a way to kill the console in Blender (without killing Blender). Is there a different way to force pytest to find new tests when it is used from a console?
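One workaround that may help (my own suggestion, not from the original post): the already-imported test module lingers in sys.modules, so dropping it before rerunning forces pytest to re-import the file from disk:

import sys
import pytest

# forget the previously imported test module so the next run
# re-imports test_pytest.py from disk (module name assumed)
sys.modules.pop('test_pytest', None)
pytest.main(['-v', 'test_pytest.py'])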
I am trying to run a series of test cases on Ubuntu with py.test, but it is not collecting my test cases from a folder. I use unittest to write the test cases.
On Windows I use this command:
py.test -v "folder with test cases" > log_file.txt
The output from Windows:
============================= test session starts =============================
platform win32 -- Python 2.6.3 -- py-1.4.20 -- pytest-2.5.2 -- C:\Python26\python.exe
plugins: capturelog
collecting ... collected 27 items
With the same command on Ubuntu, the output is:
============================= test session starts ==============================
platform linux2 -- Python 2.7.3 -- pytest-1.3.4
test path 1: TestScenario01/
=============================== in 0.01 seconds ===============================
I use different Python versions because on Windows Scapy works only with Python 2.6. Another difference is that py-1.4.20 and pytest-2.5.2 appear on Windows; I have them installed on Ubuntu too.
I managed to get the tests to start. I had to remove the old py-1.3.4 module, install py-1.4.2, and also upgrade pytest to version 2.5.2. Now it works.
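For reference, the upgrade can be done with pip along these lines (the exact pins are my assumption, matching the working Windows setup rather than commands from the original post):

$ pip uninstall py
$ pip install "py>=1.4.20" "pytest==2.5.2"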
I've only just started learning about testing, so I'm starting out by putting together and running some very simple unit tests using py.test.
Example test_script.py:
import pytest

def test_func():
    assert True

pytest.main('-v')
Running this gives:
============================= test session starts ==============================
platform win32 -- Python 3.3.1 -- pytest-2.3.4 -- C:\Program Files (x86)\Python33\python.exe
collecting ... collected 1 items
test_script.py:3: test_func PASSED
=========================== 1 passed in 0.12 seconds ===========================
If I replace -v with -s to view stdout (i.e., disable pytest's capturing of stdout), the tests run twice:
============================= test session starts ==============================
platform win32 -- Python 3.3.1 -- pytest-2.3.4
============================= test session starts ==============================
platform win32 -- Python 3.3.1 -- pytest-2.3.4
collected 1 items
test_script.py .
=========================== 1 passed in 0.04 seconds ===========================
collected 1 items
test_script.py .
=========================== 1 passed in 0.12 seconds ===========================
Should the tests run twice here? I did search, but couldn't find anything obvious in the documentation (though I may have been looking in the wrong place).
That's a funny one :)
Here is what happens: Python executes test_script.py and thus runs pytest.main("-s"), which goes back to the file system and collects test_script.py as a test module. When pytest imports test_script, pytest.main(...) is invoked again during collection. The second invocation does not import test_script again, because it is already in sys.modules, but it does execute the test function. When collection has finished (and the inner pytest.main run has executed the test once), the test function is executed again by the outer pytest.main invocation. Everything clear? :)
If you want to avoid this, you need to wrap the pytest.main invocation like this:
if __name__ == "__main__":
    pytest.main("-s")
This invocation will not execute on a normal import, but it will execute when you issue python test_script.py, because Python sets __name__ to "__main__" when a script is run from the command line, whereas it is set to "test_script" on a normal import test_script.
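Putting it together, the guarded version of test_script.py looks like this (note that newer pytest versions expect the arguments as a list rather than a single string):

import pytest

def test_func():
    assert True

# runs only when the script is executed directly, not when pytest
# imports this file as a test module during collection
if __name__ == "__main__":
    pytest.main(["-s"])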
We have been using nosetests to run and collect our unit tests (which are all written as Python unittest tests). Things we like about nose:
uses standard Python unit tests (we like the structure this imposes).
supports reporting coverage and test output in XML (for Jenkins).
What we are missing is a good way to run tests in isolated processes while maintaining good error reporting (we are testing C++ libraries through Python, so segfaults should not be catastrophic). nosepipe seems to be no longer maintained, and we have had some problems with it.
We are trying to figure out whether we should:
- fix/use nosepipe,
- switch to nose2 and write nosepipe2, or
- use pytest or some other testing framework.
We would prefer an approach with a good community. Our problem (C++ plugins requiring good isolation) seems like it might be a common one, but my searching has not turned up solutions that are maintained. Advice from more experienced heads is appreciated.
pytest has the xdist plugin, which provides the --boxed option to run each test in a controlled subprocess. Here is a basic example:
# content of test_module.py
import pytest
import os
import time

# run the test function 50 times with different arguments
@pytest.mark.parametrize("arg", range(50))
def test_func(arg):
    time.sleep(0.05)  # each test takes a while
    if arg % 19 == 0:
        os.kill(os.getpid(), 15)
If you run this with:
$ py.test --boxed
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev8
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov
collecting ... collected 50 items
test_module.py f..................f..................f...........
================================= FAILURES =================================
_______________________________ test_func[0] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[19] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[38] _______________________________
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
=================== 3 failed, 47 passed in 3.41 seconds ====================
You'll see that a couple of tests are reported as crashing, indicated
by lower-case f and the respective failure summary. You can also use
the xdist-provided parallelization feature to speed up your testing:
$ py.test --boxed -n3
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev8
plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov
gw0 I / gw1 I / gw2 I
gw0 [50] / gw1 [50] / gw2 [50]
scheduling tests via LoadScheduling
..f...............f..................f............
================================= FAILURES =================================
_______________________________ test_func[0] _______________________________
[gw0] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[19] _______________________________
[gw2] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
______________________________ test_func[38] _______________________________
[gw2] linux2 -- Python 2.7.3 /home/hpk/venv/1/bin/python
/home/hpk/tmp/doc-exec-420/test_module.py:6: running the test CRASHED with signal 15
=================== 3 failed, 47 passed in 2.03 seconds ====================
In principle, just distributing tests to parallel subprocesses may often suffice and avoids the overhead of starting a boxed process for each test. However, this currently only works if you have fewer crashing tests than the -n number of processes, because a dying test process is not restarted. This limitation could probably be removed without too much effort. Meanwhile, you will have to use the safe boxing option.
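On the reporting side mentioned in the question, pytest covers the same Jenkins needs; a hedged sketch (the package name is an illustrative placeholder, and --cov/--cov-report come from the pytest-cov plugin):

$ py.test --junitxml=results.xml --cov=mypackage --cov-report=xml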