I'm trying to write a simple test with an assert statement, like this:
def sqr(x):
    return x**2 + 1

def test_sqr():
    assert kvadrat(2) == 4

! pytest
and it returns
Does anyone have any idea what might be wrong?
For pytest to find the tests, they have to be in a file in the current directory or a sub-directory, and the filename has to begin with test (although that's customizable; see the note after the output below).
So if you use (inside a Jupyter notebook):
%%file test_sth.py
def sqr(x):
    return x**2 + 1

def test_sqr():
    assert kvadrat(2) == 4
That creates a file named test_sth.py with the given contents.
Then run ! pytest and it should work (well, fail):
============================= test session starts =============================
platform win32 -- Python 3.6.3, pytest-3.3.0, py-1.5.2, pluggy-0.6.0
rootdir: D:\JupyterNotebooks, inifile:
plugins: hypothesis-3.38.5
collected 1 item
test_sth.py F [100%]
================================== FAILURES ===================================
__________________________________ test_sqr ___________________________________
    def test_sqr():
>       assert kvadrat(2) == 4
E       NameError: name 'kvadrat' is not defined

test_sth.py:5: NameError
========================== 1 failed in 0.12 seconds ===========================
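As an aside on the "customizable" part: the filename pattern pytest uses for collection can be changed with the python_files option in a pytest config file. A minimal sketch (the extra check_*.py pattern is just an illustrative choice, not something from the question):

# pytest.ini
[pytest]
python_files = test_*.py check_*.py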
I believe my problem may be that I am not giving a spec parameter to the patch method. And you can see that autospec is not working either: both the commented and un-commented lines give the same result. Searching on "python mock parameter spec" does not help, as "spec" is too generic a word...
I have a function I want to test:
% cat fixTextFiles.py:
import os
from unittest.mock import patch

def fixFile(tables, dir, filename):
    [...]
    return None
And a test that I am trying to write:
import os
import tableColumns
import unittest
import fixTextFiles
from unittest.mock import create_autospec

def test_fix_files(mocker):
    rootdir = 'data_20209999/CalAccess/DATA/'
    files = [rootdir + 'TEXT_MEMO.TSV', rootdir + 'SMRY.TSV']
    mocker.patch('os.listdir', return_value=files)
    mocker.patch('fixTextFiles.fixFile', return_value=None, autospec=True)
    #mock_function = create_autospec('fixTextFiles.fixFile', return_value=None)
    fixTextFiles.fixFiles(tableColumns.readTableColumns(), 'data_20209999/CalAccess/DATA', 'TEXT_MEMO.TSV')
And the result:
============================= test session starts ==============================
platform darwin -- Python 3.7.3, pytest-6.0.0, py-1.9.0, pluggy-0.13.1
rootdir: /Users/ray/Projects/CalAccessImpls/open_calaccess_data_py
plugins: mock-3.2.0
collected 3 items
fetchSoSData_test.py .. [ 66%]
fixTextFiles_test.py F [100%]
=================================== FAILURES ===================================
________________________________ test_fix_files ________________________________
mocker = <pytest_mock.plugin.MockFixture object at 0x109623e80>

    def test_fix_files(mocker):
        rootdir = 'data_20209999/CalAccess/DATA/'
        files = [rootdir + 'TEXT_MEMO.TSV', rootdir + 'SMRY.TSV']
        mocker.patch('os.listdir', return_value=files)
        mocker.patch('fixTextFiles.fixFile', return_value=None, autospec=True)
        #mock_function = create_autospec('fixTextFiles.fixFile', return_value=None)
>       fixTextFiles.fixFiles(tableColumns.readTableColumns(), 'data_20209999/CalAccess/DATA', 'TEXT_MEMO.TSV')
E       TypeError: fixFiles() takes 1 positional argument but 3 were given

fixTextFiles_test.py:21: TypeError
=========================== short test summary info ============================
FAILED fixTextFiles_test.py::test_fix_files - TypeError: fixFiles() takes 1 p...
========================= 1 failed, 2 passed in 0.24s ==========================
Yes, this was a really dumb example of pilot error....
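For the record, the spec/autospec machinery mentioned in the question is still worth knowing: an autospecced mock rejects calls that don't match the real function's signature. A minimal sketch, using a made-up function greet rather than the code above:

from unittest.mock import create_autospec

def greet(name):
    return "hello " + name

# create_autospec copies greet's signature onto the mock
mock_greet = create_autospec(greet, return_value=None)

mock_greet("world")        # fine: matches the real signature
mock_greet("a", "b", "c")  # raises TypeError: too many positional arguments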
Ok, I'm struggling with something that is literally blowing my mind.
Although my actual code is different, this basically nails down the issue. Assume this sample code:
import pytest

@pytest.mark.parametrize('type', (
    pytest.param('stability', marks=pytest.mark.stability),
    pytest.param('integration', marks=pytest.mark.integration),
))
@pytest.mark.integration
@pytest.mark.stability
def test_meh(type):
    assert type == 'integration'
And the following is the output of running that test.
$ pytest -m integration test_meeh.py
========================================================= test session starts =========================================================
platform darwin -- Python 3.6.4, pytest-3.1.2, py-1.5.2, pluggy-0.4.0
rootdir: /Users/yzT/Desktop, inifile:
collected 2 items
test_meeh.py F
============================================================== FAILURES ===============================================================
_________________________________________________________ test_meh[stability] _________________________________________________________
type = 'stability'
    @pytest.mark.parametrize('type', (
        pytest.param('stability', marks=pytest.mark.stability),
        pytest.param('integration', marks=pytest.mark.integration),
    ))
    @pytest.mark.integration
    @pytest.mark.stability
    def test_meh(type):
>       assert type == 'integration'
E       AssertionError: assert 'stability' == 'integration'
E         - stability
E         + integration

test_meeh.py:10: AssertionError
========================================================= 1 tests deselected ==========================================================
=============================================== 1 failed, 1 deselected in 0.07 seconds ================================================
$ pytest -m stability test_meeh.py
========================================================= test session starts =========================================================
platform darwin -- Python 3.6.4, pytest-3.1.2, py-1.5.2, pluggy-0.4.0
rootdir: /Users/yzT/Desktop, inifile:
collected 2 items
test_meeh.py .
========================================================= 1 tests deselected ==========================================================
=============================================== 1 passed, 1 deselected in 0.02 seconds ================================================
What is going on here? Why does it run the stability case when I use -m integration, and the integration case when I use -m stability?
Looks like there are two issues with the code sample that you provided.
First, you shouldn't mark both individual parametrized tests and the test function. As soon as the -m option you provided matches the mark decorator on the function, the test will be selected.
Here's a minimal comparison. With both the marks on the function and individual parametrized tests:
# parametrized_tests.py
import pytest
# parametrized_tests.py
import pytest

@pytest.mark.parametrize('smiley', [
    pytest.param(':)', marks=[pytest.mark.happy]),
    pytest.param(':(', marks=[pytest.mark.unhappy]),
])
@pytest.mark.happy
@pytest.mark.unhappy
def test_smiley(smiley):
    assert smiley == ':)'
You will have two tests collected and selected:
$ pytest -m happy --collect-only parametrized_tests.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: [redacted], inifile:
collected 2 items
<Module 'parametrized_tests.py'>
<Function 'test_smiley[:)]'>
<Function 'test_smiley[:(]'>
====================================================================================== no tests ran in 0.01 seconds ======================================================================================
But if only the parametrized tests are marked:
# parametrized_tests.py
import pytest
# parametrized_tests.py
import pytest

@pytest.mark.parametrize('smiley', [
    pytest.param(':)', marks=[pytest.mark.happy]),
    pytest.param(':(', marks=[pytest.mark.unhappy]),
])
def test_smiley(smiley):
    assert smiley == ':)'
You will get two tests collected, but only one selected, as you expect:
$ pytest -m happy --collect-only parametrized_tests.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: [redacted], inifile:
collected 2 items / 1 deselected
<Module 'parametrized_tests.py'>
<Function 'test_smiley[:)]'>
====================================================================================== 1 deselected in 0.01 seconds ======================================================================================
Second, there seems to be a puzzling bug (or undocumented "feature") in pytest.param, where using the exact name of the mark as the parameter value makes the test not selected.
From your (slightly modified) code:
# mark_name.py
import pytest

@pytest.mark.parametrize('type_', [
    pytest.param('integration', marks=[pytest.mark.integration]),
    pytest.param('stability', marks=[pytest.mark.stability]),
])
def test_meh(type_):
    assert type_ == 'integration'
If I try to run the integration tests only, it won't select any:
$ pytest -m integration mark_name.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: [redacted], inifile:
collected 2 items / 2 deselected
But simply modifying the value (in this case I only made it uppercase) makes everything work as expected:
# mark_name.py
import pytest

@pytest.mark.parametrize('type_', [
    pytest.param('INTEGRATION', marks=[pytest.mark.integration]),
    pytest.param('STABILITY', marks=[pytest.mark.stability]),
])
def test_meh(type_):
    assert type_ == 'INTEGRATION'
And now I can select the tests properly:
$ pytest -m integration mark_name.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: [redacted], inifile:
collected 2 items / 1 deselected
mark_name.py . [100%]
================================================================================= 1 passed, 1 deselected in 0.01 seconds =================================================================================
$ pytest -m stability mark_name.py
========================================================================================== test session starts ===========================================================================================
platform darwin -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: [redacted], inifile:
collected 2 items / 1 deselected
mark_name.py F [100%]
================================================================================================ FAILURES ================================================================================================
__________________________________________________________________________________________ test_meh[STABILITY] ___________________________________________________________________________________________
type_ = 'STABILITY'
    @pytest.mark.parametrize('type_', [
        pytest.param('INTEGRATION', marks=[pytest.mark.integration]),
        pytest.param('STABILITY', marks=[pytest.mark.stability]),
    ])
    def test_meh(type_):
>       assert type_ == 'INTEGRATION'
E       AssertionError: assert 'STABILITY' == 'INTEGRATION'
E         - STABILITY
E         + INTEGRATION

mark_name.py:9: AssertionError
================================================================================= 1 failed, 1 deselected in 0.08 seconds =================================================================================
I suspect that's related to the string value being used as the ID of the test, but would suggest opening a GitHub issue if that's something you don't want to work around.
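If the test-ID theory is right, one possible workaround (an untested sketch, not verified against that pytest version) is to keep the original values but give the parameters explicit ids that differ from the mark names:

# mark_name.py
import pytest

@pytest.mark.parametrize('type_', [
    pytest.param('integration', marks=[pytest.mark.integration], id='int'),
    pytest.param('stability', marks=[pytest.mark.stability], id='stab'),
])
def test_meh(type_):
    assert type_ == 'integration'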
Also, totally unrelated, but it's better not to define the argument as type, because you are shadowing the builtin function type. PEP-8 suggests using a trailing underscore to prevent a name clash.
I have a lot of tests that basically do the same actions but with different data, so I wanted to implement them using pytest. I managed to do it in a typical JUnit way, like this:
import pytest
import random

d = {1: 'Hi', 2: 'How', 3: 'Are', 4: 'You ?'}

def setup_function(function):
    print("setUp", flush=True)

def teardown_function(function):
    print("tearDown", flush=True)

@pytest.mark.parametrize("test_input", [1, 2, 3, 4])
def test_one(test_input):
    print("Test with data " + str(test_input))
    print(d[test_input])
    assert True
Which gives me the following output
C:\Temp>pytest test_prueba.py -s
============================= test session starts =============================
platform win32 -- Python 3.6.5, pytest-3.5.0, py-1.5.3, pluggy-0.6.0
rootdir: C:\Temp, inifile:
collected 4 items
test_prueba.py
setUp
Test with data 1
Hi
.tearDown
setUp
Test with data 2
How
.tearDown
setUp
Test with data 3
Are
.tearDown
setUp
Test with data 4
You ?
.tearDown
========================== 4 passed in 0.03 seconds ===========================
The problem now is that I would like to perform some actions in the setup and teardown that need access to the test_input value.
Is there any elegant solution for this?
Maybe to achieve this I should use the parametrization or the setup/teardown in a different way?
If that's the case, can someone give an example of data-driven testing with a parametrized setup and teardown?
Thanks!!!
Parametrizing a test is more for specifying raw inputs and expected outputs. If you need access to the parameter in the setup, then it's more part of a fixture than a test.
So you might like to try:
import pytest

d = {"good": "SUCCESS", "bad": "FAIL"}

def thing_that_uses_param(param):
    print("param is", repr(param))
    yield d.get(param)
    print("test done")

@pytest.fixture(params=["good", "bad", "error"])
def parameterized_fixture(request):
    param = request.param
    yield from thing_that_uses_param(param)

def test_one(parameterized_fixture):
    assert parameterized_fixture.lower() == "success"
Which outputs:
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 -- c:\Users\User\AppData\Local\Programs\Python\Python35-32\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\User\Documents\python, inifile:
collecting ... collected 3 items
a.py::test_one[good] PASSED [ 33%]
a.py::test_one[bad] FAILED [ 66%]
a.py::test_one[error] FAILED [100%]
================================== FAILURES ===================================
________________________________ test_one[bad] ________________________________
parameterized_fixture = 'FAIL'
    def test_one(parameterized_fixture):
>       assert parameterized_fixture.lower() == "success"
E       AssertionError: assert 'fail' == 'success'
E         - fail
E         + success

a.py:28: AssertionError
---------------------------- Captured stdout setup ----------------------------
param is 'bad'
-------------------------- Captured stdout teardown ---------------------------
test done
_______________________________ test_one[error] _______________________________
parameterized_fixture = None
    def test_one(parameterized_fixture):
>       assert parameterized_fixture.lower() == "success"
E       AttributeError: 'NoneType' object has no attribute 'lower'

a.py:28: AttributeError
---------------------------- Captured stdout setup ----------------------------
param is 'error'
-------------------------- Captured stdout teardown ---------------------------
test done
===================== 2 failed, 1 passed in 0.08 seconds ======================
However, that requires that you create a parameterized fixture for each set of parameters you might want to use with a fixture.
You could alternatively mix and match the parametrize mark and a fixture that reads those params, but that requires the test to use specific names for the parameters. You'll also need to make sure such names are unique so they don't conflict with any other fixtures trying to do the same thing. For instance:
import pytest

d = {"good": "SUCCESS", "bad": "FAIL"}

def thing_that_uses_param(param):
    print("param is", repr(param))
    yield d.get(param)
    print("test done")

@pytest.fixture
def my_fixture(request):
    if "my_fixture_param" not in request.funcargnames:
        raise ValueError("could use a default instead here...")
    param = request.getfuncargvalue("my_fixture_param")
    yield from thing_that_uses_param(param)

@pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
    assert my_fixture.lower() == "success"
Which outputs:
============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-3.4.0, py-1.5.2, pluggy-0.6.0 -- c:\Users\User\AppData\Local\Programs\Python\Python35-32\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\User\Documents\python, inifile:
collecting ... collected 3 items
a.py::test_two[good] PASSED [ 33%]
a.py::test_two[bad] FAILED [ 66%]
a.py::test_two[error] FAILED [100%]
================================== FAILURES ===================================
________________________________ test_two[bad] ________________________________
my_fixture = 'FAIL', my_fixture_param = 'bad'
#pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
> assert my_fixture.lower() == "success"
E AssertionError: assert 'fail' == 'success'
E - fail
E + success
a.py:25: AssertionError
---------------------------- Captured stdout setup ----------------------------
param is 'bad'
-------------------------- Captured stdout teardown ---------------------------
test done
_______________________________ test_two[error] _______________________________
my_fixture = None, my_fixture_param = 'error'
#pytest.mark.parametrize("my_fixture_param", ["good", "bad", "error"])
def test_two(my_fixture, my_fixture_param):
> assert my_fixture.lower() == "success"
E AttributeError: 'NoneType' object has no attribute 'lower'
a.py:25: AttributeError
---------------------------- Captured stdout setup ----------------------------
param is 'error'
-------------------------- Captured stdout teardown ---------------------------
test done
===================== 2 failed, 1 passed in 0.08 seconds ======================
I think what you are looking for is yield fixtures.
You can make an autouse fixture that runs something before and after every test,
and you can access all the test metadata (marks, parameters, etc.).
You can read about it here.
Access to the parameters is via the fixture argument called request; a minimal sketch follows below.
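Here is that sketch, using the names from the question; note that reading the parametrized value through request.node.callspec is an assumption about pytest internals, so verify it against your pytest version:

import pytest

d = {1: 'Hi', 2: 'How', 3: 'Are', 4: 'You ?'}

@pytest.fixture(autouse=True)
def around_every_test(request):
    # callspec only exists for parametrized tests, hence the getattr guard
    callspec = getattr(request.node, "callspec", None)
    test_input = callspec.params.get("test_input") if callspec else None
    print("setUp with", test_input, flush=True)
    yield
    print("tearDown with", test_input, flush=True)

@pytest.mark.parametrize("test_input", [1, 2, 3, 4])
def test_one(test_input):
    print(d[test_input])
    assert True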
IMO, set_up and tear_down should not access test_input values. If you want it that way, there is probably some problem in your test logic.
set_up and tear_down must be independent of the values used by the test. However, you can use another fixture to get the task done.
I have a lot of tests that basically do the same actions but with different data
In addition to Dunes' answer, which relies solely on pytest, this part of your question makes me think that pytest-cases could be useful to you, too, especially if some test data should be parametrized while other data should not.
See this other post for an example, and also the documentation of course. I'm the author by the way ;)
When I call pytest.main(...) in Python, it will display the unit test messages in the output window, such as
============================= test session starts =============================
platform win32 -- Python 3.6.4, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: xxx, inifile: pytest.ini
plugins: cov-2.5.1, hypothesis-3.38.5
collected 1 item
..\test_example.py F [100%]
================================== FAILURES ===================================
_______________________________ test_something ________________________________
    def test_something():
>       assert 1 == 0
E       assert 1 == 0

..\test_example.py:5: AssertionError
========================== 1 failed in 0.60 seconds ===========================
My question is simply: how can I get the messages above into a string object? The documentation doesn't say anything about this, and what pytest.main() returns is only an integer representing the exit code.
https://docs.pytest.org/en/latest/usage.html#calling-pytest-from-python-code
I don't know pytest, but you can try redirecting the standard output to a file. Something like this:
import sys
sys.stdout = open("log.txt", "w")
After this you can read the file to get the strings. What do you want to do with the strings?
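If the goal is a string object rather than a file, a sketch using only the standard library is to wrap pytest.main in contextlib.redirect_stdout; whether it captures every line of pytest's report may depend on the pytest version, so treat this as something to verify:

import io
from contextlib import redirect_stdout

import pytest

buf = io.StringIO()
with redirect_stdout(buf):
    exit_code = pytest.main(["test_example.py"])  # test file name taken from the question's output

report = buf.getvalue()  # the whole test session output as one string
print("exit code:", exit_code)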
I have this testing code:
import pytest

def params():
    dont_skip = pytest.mark.skipif(False, reason="don't skip")
    return [dont_skip("foo"), dont_skip("bar")]

@pytest.mark.skipif(True, reason="always skip")
@pytest.mark.parametrize("param", params())
@pytest.mark.skipif(True, reason="really always skip please")
def test_foo(param):
    assert False
Yet test_foo is not skipped, even though there are skipif decorators attached to test_foo (I tried in both orders, as you can see above):
============================= test session starts ==============================
platform darwin -- Python 3.5.0, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /Volumes/Home/Users/Waleed/tmp/python/explainerr/test, inifile:
collected 2 items
test/test_example.py FF
=================================== FAILURES ===================================
________________________________ test_foo[foo] _________________________________
param = 'foo'
    @pytest.mark.skipif(True, reason="always skip")
    @pytest.mark.parametrize("param", params())
    @pytest.mark.skipif(True, reason="really always skip")
    def test_foo(param):
>       assert False
E       assert False

test/test_example.py:13: AssertionError
________________________________ test_foo[bar] _________________________________
param = 'bar'
    @pytest.mark.skipif(True, reason="always skip")
    @pytest.mark.parametrize("param", params())
    @pytest.mark.skipif(True, reason="really always skip")
    def test_foo(param):
>       assert False
E       assert False

test/test_example.py:13: AssertionError
=========================== 2 failed in 0.01 seconds ===========================
If I change this line
dont_skip = pytest.mark.skipif(False, reason="don't skip")
to
dont_skip = pytest.mark.skipif(True, reason="don't skip")
then it skips the test cases:
============================= test session starts ==============================
platform darwin -- Python 3.5.0, pytest-2.8.5, py-1.4.31, pluggy-0.3.1
rootdir: /Volumes/Home/Users/Waleed/tmp/python/explainerr/test, inifile:
collected 2 items
test/test_example.py ss
========================== 2 skipped in 0.01 seconds ===========================
How do I get pytest.mark.skipif to work when also using skippable parameters with pytest.mark.parametrize? I'm using Python 3.5.0 with Pytest 2.8.5.
The pytest version that you are using is very old. I ran your code in my environment and both tests are skipped by the first skipif mark.
python3 -m pytest test_b.py
==================================================================================== test session starts =====================================================================================
platform darwin -- Python 3.6.5, pytest-3.9.1, py-1.7.0, pluggy-0.8.0
rootdir:<redacted>, inifile:
collected 2 items
test_b.py ss [100%]
====================================================================================== warnings summary ======================================================================================
test_b.py:9: RemovedInPytest4Warning: Applying marks directly to parameters is deprecated, please use pytest.param(..., marks=...) instead.
For more details, see: https://docs.pytest.org/en/latest/parametrize.html
@pytest.mark.skipif(True, reason="always skip")
test_b.py:9: RemovedInPytest4Warning: Applying marks directly to parameters is deprecated, please use pytest.param(..., marks=...) instead.
For more details, see: https://docs.pytest.org/en/latest/parametrize.html
@pytest.mark.skipif(True, reason="always skip")
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== 2 skipped, 2 warnings in 0.03 seconds ============================================================================
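Following the warning's advice, the same test rewritten with pytest.param(..., marks=...) might look like the sketch below; on a modern pytest the function-level skipif then skips both cases:

import pytest

dont_skip = pytest.mark.skipif(False, reason="don't skip")

@pytest.mark.skipif(True, reason="always skip")
@pytest.mark.parametrize("param", [
    pytest.param("foo", marks=dont_skip),
    pytest.param("bar", marks=dont_skip),
])
def test_foo(param):
    assert False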