Split a test into different functions with pytest - python

I'm using pytest and have multiple tests to run to check an issue.
I would like to split all tests into different functions like this:
# test_myTestSuite.py
@pytest.mark.issue(123)
class MyTestSuite():
    def test_part_1(self):
        result = do_something()
        assert result == True

    def test_part_2(self):
        result = do_an_other_something()
        assert result == 'ok'
Of course, I implemented the issue option in conftest.py:
# conftest.py
def pytest_addoption(parser):
    group = parser.getgroup('Issues')
    group.addoption('--issue', action='store',
                    dest='issue', default=0,
                    help='')
But I don't know how to hook in once after MyTestSuite has run and check that all of its tests passed correctly.
Does anyone have any ideas?
PS: this is my first post on StackOverflow.

Try using a return value as the simplest kind of positive confirmation that a test ran, as shown below.
@pytest.mark.issue(123)
class MyTestSuite():
    def test_part_1(self):
        result = do_something()
        assert result == True
        return 'tp1', True

    def test_part_2(self):
        result = do_an_other_something()
        assert result == 'ok'
        return 'tp2', True
...and then, wherever you run your tests from:
x = MyTestSuite().test_part_1()
if x[1] == True:
    print('Test %s completed correctly' % x[0])
The result after running test_part_1:
Test tp1 completed correctly, or...
AssertionError.
Collecting assertion errors:
collected_errors = []

def test_part_1():
    testname = 'tp1'
    try:
        result = do_something()
        assert result == True
        return testname, True
    except Exception as error:
        info = (testname, error)
        collected_errors.append(info)
You can find more assertion flavours here on SO.
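If you go the collected_errors route, you still need something that looks at the list once the tests have run. One possible sketch (my own addition, not part of the answer above, assuming the collected_errors list and the test functions live in the same test module) is a module-scoped autouse fixture that reports whatever was gathered:

import pytest

# assumes the collected_errors list defined above is in this same module

@pytest.fixture(scope='module', autouse=True)
def report_collected_errors():
    # everything before the yield runs before the first test in the module,
    # everything after it runs once the last test has finished
    yield
    for testname, error in collected_errors:
        print('{} collected error: {!r}'.format(testname, error))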

Related

How to provide default values to single parameterized pytest fixture with arguments?

I am using pytest parametrized fixtures, which have variable return values.
This is a simplified example:
import pytest

@pytest.fixture
def mocked_value(mock_param):
    if mock_param:
        mocked_value = "true"
    else:
        mocked_value = "false"
    yield mocked_value

@pytest.mark.parametrize(("mock_param"), [True, False])
def test_both_mocked_parameters(mocked_value, mock_param):
    assert mocked_value == str(mock_param).lower()

@pytest.mark.parametrize(("mock_param"), [True])
def test_just_one_mocked_param(mocked_value, mock_param):
    assert mocked_value == "true"
Is there a way to make the pytest fixture have a default param given to it? Something like this, except built into the single fixture definition:
def _mocked_function(mock_param):
    if mock_param:
        mocked_value = "true"
    else:
        mocked_value = "false"
    return mocked_value

@pytest.fixture
def mocked_value_with_default():
    yield _mocked_function(True)

@pytest.fixture
def mocked_value(mock_param):
    yield _mocked_function(mock_param)

@pytest.mark.parametrize(("mock_param"), [True, False])
def test_both_mocked_parameters(mocked_value, mock_param):
    assert mocked_value == str(mock_param).lower()

def test_just_one_mocked_param(mocked_value_with_default):
    assert mocked_value_with_default == "true"
The above works; however, it would be much cleaner to have just one fixture definition handling both use cases. How do I do this?
You could use fixture parametrization:
https://docs.pytest.org/en/stable/example/parametrize.html#indirect-parametrization
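As a concrete illustration (a minimal sketch based on that documentation page, not tested against your code), the fixture can read its value from request.param and fall back to a default when the test is not parametrized:

import pytest

@pytest.fixture
def mocked_value(request):
    # request.param only exists when the fixture is parametrized indirectly;
    # fall back to a default of True otherwise
    mock_param = getattr(request, "param", True)
    yield "true" if mock_param else "false"

@pytest.mark.parametrize("mocked_value", [True, False], indirect=True)
def test_both_mocked_parameters(mocked_value):
    assert mocked_value in ("true", "false")

def test_just_one_mocked_param(mocked_value):
    # no parametrization here, so the fixture uses its default (True)
    assert mocked_value == "true"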

Test a function that contains assert statement and doesn't return anything

If, for example, I have a function like this:
from typing import List

def func(inputs: List):
    """Function to test.

    Args:
        inputs: list of tuples
    """
    for elem in inputs:
        assert elem[0].attr1 == elem[1].attr1
        assert elem[0].attr2 == elem[1].attr2
        if hasattr(elem[0], "attr3") and hasattr(elem[1], "attr3"):
            assert elem[0].attr3 == elem[1].attr3
How can I write the test for this function using pytest?
I know we can use this code (source: https://docs.pytest.org/en/stable/assert.html):
import pytest

def myfunc():
    raise ValueError("Exception 123 raised")

def test_match():
    with pytest.raises(ValueError, match=r".* 123 .*"):
        myfunc()
to test a function that raises a ValueError, but what about assert?
Thank you!
As per one of the comments, you can simply check for AssertionError:
import pytest

def myfunc():
    # some code
    assert 0 == 1, "boom"  # raises AssertionError

def test_myfunc():
    with pytest.raises(AssertionError, match=r".*boom.*"):
        myfunc()
If an AssertionError hasn't been raised by myfunc(), test_myfunc() will fail with:
Failed: DID NOT RAISE <class 'AssertionError'>
If an AssertionError with a different message has been raised by myfunc(), the test will fail with:
AssertionError: Regex pattern ... does not match ...
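For the func() from the question itself, a minimal sketch could look like this (the SimpleNamespace objects are stand-ins invented here for illustration, and the import of func is an assumption about where it lives):

import pytest
from types import SimpleNamespace

from mymodule import func  # assumption: func is importable from your own module

def test_func_accepts_matching_pairs():
    a = SimpleNamespace(attr1=1, attr2=2)
    b = SimpleNamespace(attr1=1, attr2=2)
    func([(a, b)])  # should not raise

def test_func_rejects_mismatched_pairs():
    a = SimpleNamespace(attr1=1, attr2=2)
    b = SimpleNamespace(attr1=1, attr2=99)  # attr2 differs, so an assert inside func should fire
    with pytest.raises(AssertionError):
        func([(a, b)])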

Are my unit tests written correctly to test my code's functions?

I'm trying to write unit tests for my Python project.
I have a gestioner.py file which checks the user input and sees whether it is valid (from an allowed list) or not (from a 'danger_list'),
and also checks whether the entered command contains any suspicious characters, to prevent security attacks.
Now I want to add unit tests for the functions in gestioner.py.
I was following the write-up in the Stack Overflow question Writing unit tests in Python: How do I start?
and then the py.test documentation (https://docs.pytest.org/en/latest/contents.html),
and I've written the following test_gestioner.py to start my unit tests, shown below.
All three of my tests seem to pass, all the time. But I doubt whether these tests are really testing my code correctly, because even when I expect a test to fail, it still passes.
Am I doing this unit testing of my code correctly?
It appears to me that I am not, so please suggest where I am making a mistake in my unit tests.
Here is my main script, gestioner.py:
import sys
import re

class CmdGestioner:
    _allowed_list = ['fichier', 'binfile', 'prep', 'rep']
    _danger_list = ['gen', 'regen']

    def __init__(self):
        None

    def chk_cmds(self, mycmd):
        print('chk_cmds: lets look term at {}'.format(mycmd))
        regex = re.compile(r'[&`%;$><!#]')
        msg = bool(regex.search(mycmd))
        print('My command: ' + mycmd)
        if msg is True:
            print(msg)
            print('Found a suspicious entry in the command : {}'.format(mycmd))
            return -1
        else:
            print('Command seems clean {}'.format(mycmd))
            return 0

    def is_valid(self, in_command):
        valid = True
        cmd_mtg_str = ''.join(str(elem) for elem in in_command)
        for term in cmd_mtg_str.split():
            if term.strip('--') not in self._allowed_list:
                if term.strip('--') in self._forbidden_list:
                    valid = False
                    print('Found a forbidden Sec_Service command ' + term)
                else:
                    print('%s not found in list so need to runs checks on this!' % (term))
                    checkResult = self.chk_cmds(term)
                    if checkResult == -1:
                        valid = False
            else:
                print('Found in list so term %s is valid' % (term))
        return valid

test_command = ' '.join(str(elem) for elem in sys.argv[1:])
cmd = CmdGestioner()
status = cmd.is_valid(test_command)
print('\nStatus for command: %s is %s' % (test_command, status))
and my unit test file (test_gestioner.py)
import sys
import re
import unittest
import pytest
from gestioner import CmdGestioner

cmd = CmdGestioner()

def test_chk_cmds_valid():
    mycmd = "--binfile testbinfile"
    cmd.chk_cmds(mycmd)
    assert 'Status for command: ' + mycmd + ' is True'

def test_chk_cmds_danger():
    mycmd = "--gen genf"
    cmd.chk_cmds(mycmd)
    assert 'Status for command: ' + mycmd + ' is False'

def test_chk_cmds_attacks():
    mycmd = ";ls"
    cmd.chk_cmds(mycmd)
    assert 'Found a suspicious entry in the command : '.format(mycmd)
    assert 'Status for command: ' + mycmd + ' is False'
My Unit tests being executed:
PS C:\Work\Questions_forums\SecService_messaging> py.test -v
========================= test session starts =========================
platform win32 -- Python 3.7.4, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- c:\users\SecUser\appdata\local\programs\python\python37\python.exe
cachedir: .pytest_cache
metadata: {'Python': '3.7.4', 'Platform': 'Windows-10-10.0.18362-SP0', 'Packages': {'pytest': '5.0.1', 'py': '1.8.0', 'pluggy': '0.12.0'}, 'Plugins': {'html': '1.21.1', 'metadata': '1.8.0'}}
rootdir: C:\Work\Questions_forums\SecService_messaging
plugins: html-1.21.1, metadata-1.8.0
collected 3 items
test_gestioner.py::test_chk_cmds_valid PASSED [ 33%]
test_gestioner.py::test_chk_cmds_danger PASSED [ 66%]
test_gestioner.py::test_chk_cmds_attacks PASSED [100%]
========================= 3 passed in 0.03 seconds =========================
No, your unit tests are incorrectly written. The chk_cmds method doesn't return a string; it prints to standard out, yet your assert statements look as if they expect the result of chk_cmds to equal what was printed. Instead, each assert statement is simply checking whether a string literal is truthy, and any string of one character or more is considered true, so these tests will always pass. For the assert statements to work as they're written, you'd need to return the expected string from the chk_cmds method, or test something else. For example:
assert cmd.is_valid('rep')
assert cmd.chk_cmds(mycmd) == -1
assert cmd.chk_cmds(mycmd) == 0
It looks like you may come from a C programming background with the -1 and 0 return values. If you rewrote those values as True and False it would improve your code and your assertions. The following refactor:
import re

class CmdGestioner:
    _allowed_list = ['fichier', 'binfile', 'prep', 'rep']
    _danger_list = ['gen', 'regen']

    def chk_cmds(self, mycmd):
        # returns True when the command is clean, False when it looks suspicious
        regex = re.compile(r'[&`%;$><!#]')
        msg = bool(regex.search(mycmd))
        if msg is True:
            return False
        else:
            return True

    def is_valid(self, in_command):
        valid = True
        cmd_mtg_str = ''.join(str(elem) for elem in in_command)
        for term in cmd_mtg_str.split():
            if term.strip('--') not in self._allowed_list:
                if term.strip('--') in self._danger_list:
                    valid = False
                else:
                    if not self.chk_cmds(term):
                        valid = False
        return valid
is both more readable (print statements omitted to see the proposed changes more clearly) and it allows assertions to be rewritten as
assert cmd.chk_cmds(mycmd)
Because the method now returns True or False, you don't have to add a conditional to the assertion.
As currently written, your assertions will always evaluate to true, since you're checking a literal string rather than standard out.
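Putting that together, a hedged sketch of what the rewritten tests could look like (assuming the refactored CmdGestioner above, where chk_cmds and is_valid return booleans):

from gestioner import CmdGestioner

cmd = CmdGestioner()

def test_chk_cmds_clean_command():
    # no suspicious characters, so the refactored chk_cmds should return True
    assert cmd.chk_cmds("--binfile testbinfile")

def test_chk_cmds_suspicious_command():
    # ';' matches the suspicious-character regex, so chk_cmds should return False
    assert not cmd.chk_cmds(";ls")

def test_is_valid_rejects_danger_command():
    # 'gen' is in _danger_list, so is_valid should return False
    assert not cmd.is_valid("--gen genf")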

How to control the incremental test case in Pytest

@pytest.mark.incremental
class Test_aws():
    def test_case1(self):
        # ----- some code here -----
        result = someMethodTogetResult
        assert result[0] == True
        orderID = result[1]

    def test_case2(self):
        result = someMethodTogetResult  # can only be performed once test case 1 runs successfully
        assert result == True

    def test_deleteOrder_R53HostZonePrivate(self):
        result = someMethodTogetResult
        assert result[0] == True
The current behavior is: if test 1 passes then test 2 runs, and if test 2 passes then test 3 runs.
What I need is:
test_case3 should run if test_case1 passed; test_case2 should not change this behavior in any way. Any thoughts here?
I guess you are looking for pytest-dependency which allows setting conditional run dependencies between tests. Example:
import random
import pytest

class TestAWS:
    @pytest.mark.dependency
    def test_instance_start(self):
        assert random.choice((True, False))

    @pytest.mark.dependency(depends=['TestAWS::test_instance_start'])
    def test_instance_stop(self):
        assert random.choice((True, False))

    @pytest.mark.dependency(depends=['TestAWS::test_instance_start'])
    def test_instance_delete(self):
        assert random.choice((True, False))
test_instance_stop and test_instance_delete will run only if test_instance_start succeeds and skip otherwise. However, since test_instance_delete does not depend on test_instance_stop, the former will execute no matter what the result of the latter test is. Run the example test class several times to verify the desired behaviour.
To complement hoefling's answer, another option is to use pytest-steps to perform incremental testing. This can help you in particular if you wish to share some kind of incremental state/intermediate results between the steps.
However it does not implement advanced dependency mechanisms like pytest-dependency, so use the package that better suits your goal.
With pytest-steps, hoefling's example would look like this:
import random
from pytest_steps import test_steps, depends_on

def step_instance_start():
    assert random.choice((True, False))

@depends_on(step_instance_start)
def step_instance_stop():
    assert random.choice((True, False))

@depends_on(step_instance_start)
def step_instance_delete():
    assert random.choice((True, False))

@test_steps(step_instance_start, step_instance_stop, step_instance_delete)
def test_suite(test_step):
    # Execute the step
    test_step()
EDIT: there is a new 'generator' mode to make it even easier:
import random
from pytest_steps import test_steps, optional_step

@test_steps('step_instance_start', 'step_instance_stop', 'step_instance_delete')
def test_suite():
    # First step (Start)
    assert random.choice((True, False))
    yield

    # Second step (Stop)
    with optional_step('step_instance_stop') as stop_step:
        assert random.choice((True, False))
    yield stop_step

    # Third step (Delete)
    with optional_step('step_instance_delete') as delete_step:
        assert random.choice((True, False))
    yield delete_step
Check the documentation for details. (I'm the author of this package by the way ;) )
You can use the pytest-ordering package to order your tests using pytest marks. The author of the package explains the usage here.
Example:
@pytest.mark.first
def test_first():
    pass

@pytest.mark.second
def test_2():
    pass

@pytest.mark.order5
def test_5():
    pass

How can I determine if a test passed or failed by examining the Item object passed to the pytest_runtest_teardown?

Pytest allows you to hook into the teardown phase for each test by implementing a function called pytest_runtest_teardown in a plugin:
def pytest_runtest_teardown(item, nextitem):
    pass
Is there an attribute or method on item that I can use to determine whether the test that just finished running passed or failed? I couldn't find any documentation for pytest.Item and hunting through the source code and playing around in ipdb didn't reveal anything obvious.
You may also consider call.excinfo in pytest_runtest_makereport:
def pytest_runtest_makereport(item, call):
    if call.when == 'setup':
        print('Called after setup for test case is executed.')
    if call.when == 'call':
        print('Called after test case is executed.')
        print('-->{}<--'.format(call.excinfo))
    if call.when == 'teardown':
        print('Called after teardown for test case is executed.')
The call object contains a whole bunch of additional information (test start time, stop time, etc.).
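For instance (my own illustrative addition, not part of the original answer), the same hook could use the timing fields on call:

def pytest_runtest_makereport(item, call):
    # call.start and call.stop are epoch timestamps recorded by pytest for each phase
    if call.when == 'call':
        print('{} took {:.3f}s'.format(item.nodeid, call.stop - call.start))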
Refer:
http://doc.pytest.org/en/latest/_modules/_pytest/runner.html
def pytest_runtest_makereport(item, call):
    when = call.when
    duration = call.stop - call.start
    keywords = dict([(x, 1) for x in item.keywords])
    excinfo = call.excinfo
    sections = []
    if not call.excinfo:
        outcome = "passed"
        longrepr = None
    else:
        if not isinstance(excinfo, ExceptionInfo):
            outcome = "failed"
            longrepr = excinfo
        elif excinfo.errisinstance(pytest.skip.Exception):
            outcome = "skipped"
            r = excinfo._getreprcrash()
            longrepr = (str(r.path), r.lineno, r.message)
        else:
            outcome = "failed"
            if call.when == "call":
                longrepr = item.repr_failure(excinfo)
            else:  # exception in setup or teardown
                longrepr = item._repr_failure_py(excinfo,
                                                 style=item.config.option.tbstyle)
    for rwhen, key, content in item._report_sections:
        sections.append(("Captured %s %s" % (key, rwhen), content))
    return TestReport(item.nodeid, item.location,
                      keywords, outcome, longrepr, when,
                      sections, duration)
The Node class doesn't have any information regarding the status of the last test; however, we do have the total number of failed tests so far (in item.session.testsfailed), and we can use it:
We can add a new member to the item.session object (not so nice, but you gotta love Python!). This member, item.session.last_testsfailed_status, saves the previous value of testsfailed.
If testsfailed > last_testsfailed_status, the test that just ran has failed.
import pytest
import logging

logging.basicConfig(
    level='INFO',
    handlers=(
        logging.StreamHandler(),
        logging.FileHandler('log.txt')
    )
)

@pytest.mark.hookwrapper
def pytest_runtest_teardown(item, nextitem):
    outcome = yield
    if not hasattr(item.session, 'last_testsfailed_status'):
        item.session.last_testsfailed_status = 0
    if item.session.testsfailed and item.session.testsfailed > item.session.last_testsfailed_status:
        logging.info('Last test failed')
    item.session.last_testsfailed_status = item.session.testsfailed
Initially, I was also struggling to get the test status so that I could use it to build a custom report.
But after further analysis of the pytest_runtest_makereport hook function, I was able to see the various attributes of its 3 params (item, call, and report).
Let me list some of them:
Call:
excinfo (this further drills down to carry the traceback, if any)
start (start time of the test as a float value since epoch)
stop (stop time of the test as a float value since epoch)
when (can take values - setup, call, teardown)
item:
_fixtureinfo (contains info about any fixtures you have used)
nodeid (the test name assumed by pytest)
cls (contains the class info of the test; by info I mean the variables which were declared and accessed in the test class)
funcargs (the parameters you have passed to your test, along with their values)
report:
outcome (this carries the test status)
longrepr (contains the failure info, including the traceback)
when (can take values - setup, call, teardown; note that the report's contents depend on this value)
FYI: there are other attributes for all of the above 3 params; I have mentioned only a few.
Below is a code snippet depicting how I have hooked the function and used it.
def pytest_runtest_makereport(item, call, __multicall__):
    report = __multicall__.execute()
    if (call.when == "call") and hasattr(item, '_failed_expect'):
        report.outcome = "failed"
        summary = 'Failed Expectations:%s' % len(item._failed_expect)
        item._failed_expect.append(summary)
        report.longrepr = str(report.longrepr) + '\n' + ('\n'.join(item._failed_expect))
    if call.when == "call":
        ExTest.name = item.nodeid
        func_args = item.funcargs
        ExTest.parameters_used = dict((k, v) for k, v in func_args.items() if v and not hasattr(v, '__dict__'))
        # [(k, v) for k, v in func_args.items() if v and not hasattr(v, '__dict__')]
        t = datetime.fromtimestamp(call.start)
        ExTest.start_timestamp = t.strftime('%Y-%m-%d::%I:%M:%S %p')
        ExTest.test_status = report.outcome
        # TODO Get traceback info (call.excinfo.traceback)
    return report
Hook wrappers are the way to go: let all the default hooks run, then look at their results.
The example below shows two methods for detecting whether a test has failed (add it to your conftest.py):
import logging

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # Because this is a hookwrapper, calling `yield` lets the actual hooks run & returns a `_Result`
    result = yield
    # Get the actual `TestReport` which the hook(s) returned, having done the hard work for you
    report = result.get_result()

    # Method 1: `report.longrepr` is either None or a failure representation
    if report.longrepr:
        logging.error('FAILED: %s', report.longrepr)
    else:
        logging.info('Did not fail...')

    # Method 2: `report.outcome` is always one of ['passed', 'failed', 'skipped']
    if report.outcome == 'failed':
        logging.error('FAILED: %s', report.longrepr)
    elif report.outcome == 'skipped':
        logging.info('Skipped')
    else:  # report.outcome == 'passed'
        logging.info('Passed')
See TestReport documentation for details of longrepr and outcome
(It doesn't use pytest_runtest_teardown as the OP requested but it does easily let you check for failure)
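If you do want the information inside pytest_runtest_teardown, as the OP asked, one common pattern (a sketch adapted from the pytest documentation's recipe for making test result information available elsewhere) is to stash each phase's report on the item from a makereport hookwrapper and read it back in teardown:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    result = yield
    report = result.get_result()
    # store one report per phase: item.rep_setup, item.rep_call, item.rep_teardown
    setattr(item, "rep_" + report.when, report)

def pytest_runtest_teardown(item, nextitem):
    rep = getattr(item, "rep_call", None)
    if rep is not None and rep.failed:
        print("%s failed during its call phase" % item.nodeid)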
