I have some twisted.trial tests in my project, that is, tests inheriting from twisted.trial.unittest.TestCase.
I need to pass some trial options to my tests, specifically the --reactor option of the twisted.trial command-line utility. Is there some way for me to pass it through pytest? My thinking is: I add something to pytest.ini, and pytest would somehow launch my trial unittest TestCase with this option. Is that possible at the moment?
Here is a sample to reproduce this. Take the following unit test:
# test_reactor.py
from twisted.trial.unittest import TestCase

class CrawlTestCase(TestCase):
    def test_if_correct_reactor(self):
        from twisted.internet import reactor
        from twisted.internet.asyncioreactor import AsyncioSelectorReactor
        assert isinstance(reactor, AsyncioSelectorReactor)
Now run it with trial with the --reactor flag:
python -m twisted.trial --reactor=asyncio test_reactor
test_reactor
  CrawlTestCase
    test_if_correct_reactor ... [OK]
-------------------------------------------------------------------------------
Ran 1 tests in 0.042s
PASSED (successes=1)
Now run it without the --reactor flag:
python -m twisted.trial test_reactor
test_reactor
  CrawlTestCase
    test_if_correct_reactor ... [ERROR]
===============================================================================
[ERROR]
Traceback (most recent call last):
File "/home/pawel/.../test_reactor.py", line 8, in test_if_correct_reactor
assert isinstance(reactor, AsyncioSelectorReactor)
builtins.AssertionError:
test_reactor.CrawlTestCase.test_if_correct_reactor
-------------------------------------------------------------------------------
Ran 1 tests in 0.081s
FAILED (errors=1)
Now run it with pytest:
py.test test_reactor.py
============================================================================================================ test session starts =============================================================================================================
platform linux -- Python 3.9.4, pytest-6.2.3, py-1.11.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/pawel/../aaomw, configfile: pytest.ini
plugins: Faker-8.1.3, hypothesis-6.10.1, benchmark-3.4.1
collected 1 item
test_reactor.py F
The question is: how do I pass something to pytest so that it passes it on to trial? Is there something I can put in pytest.ini so that the reactor option is passed to twisted.trial?
If what I'm trying to do is not possible, please provide proof that it is not possible; that would also be an acceptable answer, and perhaps something needs to be changed in pytest to make this kind of thing possible.
After installing the pytest-twisted plugin I get a --reactor flag and a proper way of installing and launching the reactor.
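To answer the original pytest.ini part of the question, a minimal sketch of wiring the flag in permanently, assuming pytest-twisted is installed and your version of it accepts asyncio as a --reactor choice:
# pytest.ini
[pytest]
# pytest-twisted provides the --reactor option; addopts makes pytest
# apply it on every run without typing it on the command line
addopts = --reactor=asyncio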
Related
I have done the following:
cloned pandas-dev/pandas
within vs code, I have installed the Remote Development extension pack
I have opened this folder within a container, as instructed here
But now, if I try to discover tests, it fails. Here is the output of the Python test log:
python /root/.vscode-server/extensions/ms-python.python-2020.6.89148/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir /workspaces/pandas-dev -s --cache-clear pandas
Test Discovery failed:
Error: /workspaces/pandas-dev/pandas/util/_test_decorators.py:97: MatplotlibDeprecationWarning: The 'warn' parameter of use() is deprecated since Matplotlib 3.1 and will be removed in 3.3. If any parameter follows 'warn', they should be pass as keyword, not positionally.
mod.use("Agg", warn=True)
/workspaces/pandas-dev/pandas/util/_test_decorators.py:97: MatplotlibDeprecationWarning: The 'warn' parameter of use() is deprecated since Matplotlib 3.1 and will be removed in 3.3. If any parameter follows 'warn', they should be pass as keyword, not positionally.
mod.use("Agg", warn=True)
============================= test session starts ==============================
platform linux -- Python 3.7.7, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
rootdir: /workspaces/pandas-dev, inifile: setup.cfg, testpaths: pandas
plugins: forked-1.1.2, asyncio-0.12.0, cov-2.10.0, hypothesis-5.16.2, xdist-1.32.0
collected 0 items
============================ no tests ran in 0.01s =============================
ERROR: file not found: pandas
Traceback (most recent call last):
File "/root/.vscode-server/extensions/ms-python.python-2020.6.89148/pythonFiles/testing_tools/run_adapter.py", line 22, in <module>
main(tool, cmd, subargs, toolargs)
File "/root/.vscode-server/extensions/ms-python.python-2020.6.89148/pythonFiles/testing_tools/adapter/__main__.py", line 100, in main
parents, result = run(toolargs, **subargs)
File "/root/.vscode-server/extensions/ms-python.python-2020.6.89148/pythonFiles/testing_tools/adapter/pytest/_discovery.py", line 44, in discover
raise Exception("pytest discovery failed (exit code {})".format(ec))
Exception: pytest discovery failed (exit code 4)
I am able to discover tests absolutely fine for the same repo when developing outside the container, i.e. with the dev environment built from source.
How can I discover tests when developing inside the container?
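One thing worth checking (an assumption on my part based on the "file not found: pandas" error, not a confirmed fix) is the working directory the extension uses for discovery. The setting names below exist in the ms-python extension; the values are hypothetical and need adjusting to your container layout:
// .vscode/settings.json
{
    // run discovery from the repo root so the "pandas" testpath resolves
    "python.testing.cwd": "/workspaces/pandas-dev",
    "python.testing.pytestEnabled": true,
    "python.testing.pytestArgs": ["pandas"]
}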
I've recently moved from Python 2.7 to Python 3.8. There's a strange new phenomenon when running the tests, which can be reproduced with this simple example:
from django.test import TestCase
from users.models import User

class TestWTF(TestCase):
    def setUp(self):
        self.user = User.objects.create(email='admin@project.com')

    def test_failure(self):
        self.assertTrue(False)

    def test_success(self):
        pass
When test_failure() fails, the self.user object doesn't get removed from the DB. It seems like the promised rollback just doesn't happen: test_success() and all subsequent tests in the same class fail with a UNIQUE constraint violation when setUp() tries to create the object again.
It doesn't happen in Python 2.
Partial output from the pytest setup I'm using:
$ pytest -W ignore -s
================================================= test session starts ==================================================
platform linux -- Python 3.8.2, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
django: settings: project_sites.settings.tests (from ini)
rootdir: /home/emes/devel/project/project, inifile: pytest.ini
plugins: django-3.8.0, env-0.6.2, case-1.5.3, cov-2.8.1
collected 2 items
[...]
=============================================== short test summary info ================================================
FAILED deals/tests/test_wtf.py::TestWTF::test_failure - AssertionError: False is not true
FAILED deals/tests/test_wtf.py::TestWTF::test_success - django.db.utils.IntegrityError: UNIQUE constraint failed: use...
================================================== 2 failed in 22.76s ==================================================
Edit: I'm using Django 1.11.26.
I had the same issue with Django 2.2 on Python 3.6.
I was using pytest 5.4.1 and pytest-django 3.8.0.
Updating pytest-django to version 3.9.0 resolved the issue, so I believe it was an issue in 3.8.0.
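For completeness, a minimal sketch of applying that fix, assuming pip manages the environment:
# upgrade pytest-django to a version that includes the fix
pip install --upgrade 'pytest-django>=3.9.0'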
Suppose we have installed a huge library like SageMath. Consider a trivial test file:
from sage.all_cmdline import *  # import sage library

class TestClass:
    def test_method(self):
        assert True
It runs for about 1.5 sec with nosetests:
$ time nosetests test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
nosetests test.py 1.38s user 0.14s system 97% cpu 1.567 total
Whereas with pytest it runs for ~4.5 sec!
platform linux -- Python 3.8.2, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /home/user/pytest, inifile: pytest.ini
plugins: profiling-1.7.0
collecting 1 item
/usr/lib/python3.8/site-packages/sage/misc/sage_unittest.py:20:
PytestCollectionWarning: cannot collect test class 'TestSuite' because it has a __init__ constructor (from: test.py)
class TestSuite(object):
collected 1 item
test.py . [100%]
====================================================================== 1 passed in 3.26s ======================================================================
pytest test.py 3.86s user 0.46s system 101% cpu 4.253 total
It looks (according to the warning) like pytest collects some tests from the library itself, or maybe something else is going on.
The question is: how can I speed up pytest startup in cases like this, where a huge library has to be loaded? And how can I avoid collecting tests from that huge library?
P.S. See the detailed discussion on the subject: https://github.com/pytest-dev/pytest/issues/7111
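Regarding the collection warning specifically, here is a sketch of one workaround, based on the assumption that the star import pulls sage's TestSuite class into the test module's namespace, where pytest's default Test* class pattern matches it:
# test.py -- use a module alias instead of a star import, so sage's
# Test*-named classes never appear in this module's namespace and
# pytest has nothing extra to (attempt to) collect
import sage.all_cmdline as sage

class TestClass:
    def test_method(self):
        assert True
Note that this only silences the spurious collection attempt; importing the library itself still dominates the startup time.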
I am trying to execute the following test suite:
import unittest
from Login_Page import LoginPageAndLogout

def test_suite():
    # get all tests from classes
    login_test = unittest.TestLoader().loadTestsFromTestCase(LoginPageAndLogout)

    # create a test suite
    all_tests = unittest.TestSuite([
        login_test
    ])

    # run the suite
    unittest.TextTestRunner(verbosity=2).run(all_tests)
from PyCharm's terminal using the command:
sudo pytest selenium-tests/testSuite.py -vvv -s
and a part of the output is the following:
============================================================================================================ test session starts ============================================================================================================
platform linux2 -- Python 2.7.14, pytest-3.1.3, py-1.4.34, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/osboxes/PycharmProjects/WebTesting, inifile:
plugins: cov-2.5.1
collected 3 items
selenium-tests/testSuite.py::LoginPageAndLogout::test_failed_login <- selenium-tests/Login_Page.py PASSED
selenium-tests/testSuite.py::LoginPageAndLogout::test_login <- selenium-tests/Login_Page.py FAILED
selenium-tests/testSuite.py::test_suite test_failed_login (Login_Page.LoginPageAndLogout) ... ok
test_login (Login_Page.LoginPageAndLogout) ... ok
----------------------------------------------------------------------
Ran 2 tests in 55.993s
The structure of my Login_Page.py file is:
class LoginPageAndLogout(unittest.TestCase):
    def setUp(self):
        # ...

    # login with incorrect credentials to get error message
    def test_failed_login(self):
        # ...

    # login with correct credentials
    def test_login(self):
        # ...

    def tearDown(self):
        # ...
As you can see from the output, I have 2 tests, but the terminal collects three items instead and runs each test twice. Is there a way to get only the PASSED/FAILED execution, not the ... ok one?
If I comment out unittest.TextTestRunner(verbosity=2).run(all_tests), my tests execute only once, but I get the ... ok result instead of the PASSED/FAILED output I want; so I see the pytest execution results instead of the unittest runner results.
How can I run my suite from the terminal using the unittest runner only?
The solution to this was quite easy, as I had just misunderstood how my unit tests were being executed all this time: pytest was collecting both the imported TestCase and the test_suite function, and the function then ran the whole suite a second time through TextTestRunner.
The only thing I had to do was comment out the whole test_suite function in my testSuite.py file and just import, at the top of the file, the classes from the test scripts I wanted to execute, as in the sketch below.
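A minimal sketch of the trimmed-down file (module and class names are the ones from the question):
# testSuite.py -- pytest collects the imported TestCase classes on its own;
# no manual TestLoader/TestSuite/TextTestRunner is needed
from Login_Page import LoginPageAndLogout  # noqa: F401 (imported for collection)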
Now my tests run only once, and I can still execute all my scripts at once, without typing them into my command one by one, using the exact same command:
sudo pytest selenium-tests/testSuite.py -vvv -s
The output of that command is now:
osboxes@osboxes:~/PycharmProjects/WebTesting$ sudo pytest selenium-tests/testSuite.py -vvv -s
========================================================================================================================= test session starts ==========================================================================================================================
platform linux2 -- Python 2.7.14, pytest-3.1.3, py-1.4.34, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/osboxes/PycharmProjects/WebTesting, inifile:
plugins: cov-2.5.1
collected 2 items
selenium-tests/testSuite.py::LoginPageAndLogout::test_failed_login <- selenium-tests/Login_Page.py PASSED
selenium-tests/testSuite.py::LoginPageAndLogout::test_login <- selenium-tests/Login_Page.py PASSED
====================================================================================================================== 2 passed in 58.81 seconds =======================================================================================================================
osboxes@osboxes:~/PycharmProjects/WebTesting$
I can run files with plain test_* functions without any problems; however, when I try to run a file with tests contained in a subclass of unittest.TestCase, I get the following result:
W:\dev\Scripts\python.exe "C:\Program Files (x86)\JetBrains\PyCharm 3.0.1\helpers\pycharm\pytestrunner.py" -p pytest_teamcity W:/dev/datakortet/xfr/setup/tests
Testing started at 3:31 PM ...
============================= test session starts ==============================
platform win32 -- Python 2.7.3 -- pytest-2.3.5
plugins: cov, xdist
collected 0 items / 1 skipped
========================== 1 skipped in 0.57 seconds ===========================
Process finished with exit code 0
Empty test suite.
When I run the same tests from the command line:
(dev) w:\dev\datakortet\xfr\setup\tests>py.test test_setup_views.py
========================================================================================= test session starts ====
platform win32 -- Python 2.7.3 -- pytest-2.3.5
plugins: cov, xdist
collected 6 items
test_setup_views.py ......
====================================================================================== 6 passed in 4.15 seconds ==
(dev) w:\dev\datakortet\xfr\setup\tests>
Do I need to add anything to the tests? (I don't have a test suite or a test runner, since py.test doesn't require this...)
Go to the file that contains your unittest tests. Then open Python Integrated Tools inside Settings and set the Default test runner to Unittest.
After that, you can just go into your unittest file and run it, and it will perform the tests.
Or you can right-click the directory where your test files are located, and you should see "Run Unittests in test.py" or something similar. That will run all your tests.