I have a bunch of unit tests in my test file. However, there is one test I would like to skip only when running the tests from the command line. I know how to always skip it (@unittest.skip), but I want to somehow skip it only when running the test file from the command line. Is this possible?
Something like this:
test_all_my_tests.py -exclude test_number_five()
Thanks
Great question.
One idea could be to call the script with arguments and specify in those arguments which tests to skip.
Then in your script you would parse the passed arguments and call your tests accordingly.
Your input would look like:
test_all_my_tests.py -exclude 5
and in the Python script you would check for a "-exclude" argument and read the value that follows it.
Good luck!
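A minimal sketch of that idea, assuming a hypothetical --exclude option parsed with argparse (neither of which is in the original question); the extra arguments are stripped from sys.argv before unittest.main() sees them:
import argparse
import sys
import unittest

# Parse and remove the hypothetical --exclude option before unittest reads sys.argv.
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("--exclude", default=None)
args, remaining = parser.parse_known_args()
sys.argv = [sys.argv[0]] + remaining

class MyTests(unittest.TestCase):

    @unittest.skipIf(args.exclude == "test_number_five", "excluded on the command line")
    def test_number_five(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
Running python test_all_my_tests.py --exclude test_number_five would then skip that one test, while a plain python test_all_my_tests.py runs everything.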
You can have a look at @unittest.skipIf() or even implement your own skip decorator:
Example
Here is a working example where I implemented a custom decorator:
def skipIfOverCounter(obj):
This decorator is attached to all tests like this:
@skipIfOverCounter
def test_upper(self):
The decorator increments a count and compares it to the console argument.
Output
Implemented 3 unit tests:
test_upper()
test_isupper()
test_split()
Then I called python .\unittests.py 0:
Skipped test 0
Ran 'test_isupper'
Ran 'test_split'
With param = 1: python .\unittests.py 1
Skipped test 1
Ran 'test_split'
Ran 'test_upper'
Skip the last test: python .\unittests.py 2
Skipped test 2
Ran 'test_isupper'
Ran 'test_upper'
Full working sample
import sys
import unittest

SKIP_INDEX = 0
COUNTER = 0

# Read the index of the test to skip from the command line and remove it
# so that unittest.main() does not try to interpret it.
if len(sys.argv) > 1:
    SKIP_INDEX = int(sys.argv.pop())

def skipIfOverCounter(obj):
    global COUNTER
    if SKIP_INDEX == COUNTER:
        print(f"Skipped test {COUNTER}")
        COUNTER = COUNTER + 1
        return unittest.skip("Skipped test")(obj)
    COUNTER = COUNTER + 1
    return obj

class TestStringMethods(unittest.TestCase):

    @skipIfOverCounter
    def test_upper(self):
        print("Ran 'test_upper'")
        self.assertEqual('foo'.upper(), 'FOO')

    @skipIfOverCounter
    def test_isupper(self):
        print("Ran 'test_isupper'")
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    @skipIfOverCounter
    def test_split(self):
        print("Ran 'test_split'")
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
You could even extend this by adapting the decorator to only execute the first two tests, or something similar (see the sketch below).
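A sketch of one such extension, reusing the COUNTER global from the sample above and a hypothetical MAX_TESTS limit (both the name and the threshold are assumptions, not part of the original answer):
# Hypothetical variant: only the first MAX_TESTS tests are executed.
MAX_TESTS = 2

def skipIfBeyondLimit(obj):
    global COUNTER
    if COUNTER >= MAX_TESTS:
        COUNTER = COUNTER + 1
        return unittest.skip("Beyond test limit")(obj)
    COUNTER = COUNTER + 1
    return obj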
Related
I want to test a file called ninja.py, written in Python 3.6.
# File ninja.py
def what_to_do_result(result):
    # Send a mail, write something in a file, play a song or whatever
    pass

def my_function(a, b):
    # Step 1
    result = a + b
    # Step 2
    if result == 3:
        what_to_do_result(result)
    elif result == 5:
        what_to_do_result(result + 1)
    else:
        return True
I have started writing a test file called test_ninja.py and wrote some unit tests. I use pytest.
import pytest

class MyTestException(Exception):
    pass

def run_side_effect(*args, **kwargs):
    raise MyTestException(kwargs["result"])

@pytest.fixture(name="resource")
def setup_fixture():
    # Some code here
    pass

class TestNinja:

    @staticmethod
    def setup_method():
        # Function called before each test
        pass

    @staticmethod
    def teardown_method():
        # Function called after each test
        pass

    @staticmethod
    def test_my_function(mocker, resource):
        # How to do this ???
        mocker.patch("ninja.what_to_do_result", return_value=None, side_effect=run_side_effect)
        # Then the test
        assert 1 == 1  # -> This works
        with pytest.raises(MyTestException):
            ninja_function(a=1, b=2)
        assert ninja_function(a=5, b=10)
The point is that I want to mock the function ninja.what_to_do_result and apply a side effect (= run a function).
I want the side effect to use the parameters (kwargs) of the function what_to_do_result.
But I don't know how to do this.
For example: there are multiple possibilities (in step 2, what_to_do_result can be called for result == 3 or result == 5), which correspond to two different use cases I want to test.
Can you help me?
I could not find the related section in the documentation linked below.
Link to the documentation: https://github.com/pytest-dev/pytest-mock
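One possible approach, sketched here as an illustration rather than a verified answer: the side_effect function of a mock is called with the same arguments as the mock itself, and since ninja.my_function calls what_to_do_result positionally, a positional parameter in the side effect captures the value:
import pytest
import ninja

class MyTestException(Exception):
    pass

def run_side_effect(result):
    # 'result' is whatever my_function passed to what_to_do_result
    raise MyTestException(result)

def test_my_function(mocker):
    mocker.patch("ninja.what_to_do_result", side_effect=run_side_effect)
    with pytest.raises(MyTestException) as exc_info:
        ninja.my_function(a=1, b=2)      # result == 3
    assert exc_info.value.args[0] == 3
    with pytest.raises(MyTestException) as exc_info:
        ninja.my_function(a=2, b=3)      # result == 5, so called with 6
    assert exc_info.value.args[0] == 6
    assert ninja.my_function(a=5, b=10)  # falls through to 'return True'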
I'm having some issues while creating a unit test for an internal variable.
My structure is:
[1] my_animal.py contains Myclass and method: do_bite()
my_animal.py
class Myclass():
    def do_bite(self):
        return 1
[2] my_module.py contains jobMain(""), which uses the method from my_animal.py
my_module.py
import sys
from my_animal import Myclass

def jobMain(directoryPath):
    flag = -1
    result = Myclass()
    if result.do_bite() is None:
        flag = 0
    if result.do_bite() is 1:
        flag = 1
    if result.do_bite() is 2:
        flag = 2
[3] my_test.py contains the unit tests for jobMain in my_module.py
my_test.py
import pytest
from unittest import mock

from my_module import jobMain

# Mock Myclass.do_bite to return None
@pytest.fixture
def mock_dobite0():
    with mock.patch('my_module.Myclass') as mocked_animal:
        mocked_animal.return_value.do_bite.return_value = None
        yield

# Mock Myclass.do_bite to return 1
@pytest.fixture
def mock_dobite1():
    with mock.patch('my_module.Myclass') as mocked_animal:
        mocked_animal.return_value.do_bite.return_value = 1
        yield

# Mock Myclass.do_bite to return 2
@pytest.fixture
def mock_dobite2():
    with mock.patch('my_module.Myclass') as mocked_animal:
        mocked_animal.return_value.do_bite.return_value = 2
        yield

# My unit tests for the do_bite() method
def test_dobite0(mock_Myclass, mock_dobite0):
    jobMain("")

def test_dobite1(mock_Myclass, mock_dobite1):
    jobMain("")

def test_dobite2(mock_Myclass, mock_dobite2):
    jobMain("")
My question is: how do I test the 'flag' variable inside jobMain?
'flag' must be assigned the correct value (e.g. do_bite() == 1 => flag == 1).
The variable flag only exists in the scope of jobMain. If you want to use the variable outside jobMain, the most common ways are:
1) return the value
This is quite obvious. Since jobMain is a function, it returns a value. Without an explicit return statement you return None. You could just
def jobmain(pth):
    # do stuff and assign flag
    return flag

# and inside tests
assert jobmain("") == 1
2) Use a class instead
If you want jobMain to remember some state, it is common practice to use an object. Then flag would be an attribute of the object and could be accessed from outside after you call any method of JobMain. For example:
class JobMain:

    def __init__(self):
        self.flag = -1

    def run(self, pth):
        result = Myclass()
        if result.do_bite() is None:
            self.flag = 0
        if result.do_bite() is 1:
            self.flag = 1
        if result.do_bite() is 2:
            self.flag = 2

# and inside the test
job = JobMain()
job.run("")
assert job.flag == 1
Note
I just copy-pasted your code for setting the flag. Note that do_bite() still gets called by the later if statements even after an earlier branch has matched, because they are not chained with elif. Also, when testing against a number, you should use == instead of is (a restructured sketch follows).
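A sketch of the same run method with a single do_bite() call, chained elif branches, and == comparisons (same behaviour, just restructured):
    def run(self, pth):
        bite = Myclass().do_bite()
        if bite is None:
            self.flag = 0
        elif bite == 1:
            self.flag = 1
        elif bite == 2:
            self.flag = 2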
How to test 'flag' parameter inside JobMain?
You don't. It's an internal variable. Testing it would be glass-box testing; the test will break if the implementation changes.
Instead, test the effect of flag. This is black-box testing. Only the interface is tested. If the implementation changes the test still works allowing the code to be aggressively refactored.
Note: If you don't hard code result = Myclass() you don't need to mock. Pass it in as an argument with the default being Myclass().
def jobMain(directoryPath, result=Myclass()):
Then you don't need to patch Myclass(). Instead, pass in a mock object.
# I don't know unittest.mock very well, but something like this.
mock = Mock(Myclass)
mock.do_bite.return_value = 2
jobMain('', result=mock)
This also makes the code more flexible outside of testing.
My test case looks like this. Following is the code:
@patch('something.mysqlclient')
@patch('something.esclient')
def testcase1(mysql, esclient):
    esclient.return_value = 1
    mysql.return_value = 3
    assert something.modeul1.esclient == 1
    assert something.modeul1.mysql == 3
Decorators are applied from bottom to top, so the mock arguments must be listed in that order:
@patch('something.mysqlclient')
@patch('something.esclient')
def testcase1(esclient, mysql):
    pass
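A sketch of how the corrected argument order could then be used, keeping the hypothetical something module from the question (the attribute names here are assumptions, not the asker's real code):
from unittest.mock import patch
import something

@patch('something.mysqlclient')
@patch('something.esclient')
def testcase1(esclient, mysql):
    # the decorator closest to the function supplies the first argument
    esclient.return_value = 1
    mysql.return_value = 3
    assert something.esclient() == 1
    assert something.mysqlclient() == 3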
@pytest.mark.incremental
class Test_aws():

    def test_case1(self):
        # ----- some code here ----
        result = someMethodTogetResult
        assert result[0] == True
        orderID = result[1]

    def test_case2(self):
        # can only be performed once test case 1 has run successfully
        result = someMethodTogetResult
        assert result == True

    def test_deleteOrder_R53HostZonePrivate(self):
        result = someMethodTogetResult
        assert result[0] == True
The current behavior is if test 1 passes then test 2 runs and if test 2 passes then test 3 runs.
What I need is:
test_case 3 should run if test_case 1 passed; test_case 2 should not change that behavior. Any thoughts here?
I guess you are looking for pytest-dependency which allows setting conditional run dependencies between tests. Example:
import random
import pytest

class TestAWS:

    @pytest.mark.dependency()
    def test_instance_start(self):
        assert random.choice((True, False))

    @pytest.mark.dependency(depends=['TestAWS::test_instance_start'])
    def test_instance_stop(self):
        assert random.choice((True, False))

    @pytest.mark.dependency(depends=['TestAWS::test_instance_start'])
    def test_instance_delete(self):
        assert random.choice((True, False))
test_instance_stop and test_instance_delete will run only if test_instance_start succeeds and skip otherwise. However, since test_instance_delete does not depend on test_instance_stop, the former will execute no matter what the result of the latter test is. Run the example test class several times to verify the desired behaviour.
To complement hoefling's answer, another option is to use pytest-steps to perform incremental testing. This can help you in particular if you wish to share some kind of incremental state/intermediate results between the steps.
However it does not implement advanced dependency mechanisms like pytest-dependency, so use the package that better suits your goal.
With pytest-steps, hoefling's example would look like this:
import random
from pytest_steps import test_steps, depends_on

def step_instance_start():
    assert random.choice((True, False))

@depends_on(step_instance_start)
def step_instance_stop():
    assert random.choice((True, False))

@depends_on(step_instance_start)
def step_instance_delete():
    assert random.choice((True, False))

@test_steps(step_instance_start, step_instance_stop, step_instance_delete)
def test_suite(test_step):
    # Execute the step
    test_step()
EDIT: there is a new 'generator' mode to make it even easier:
import random
from pytest_steps import test_steps, optional_step

@test_steps('step_instance_start', 'step_instance_stop', 'step_instance_delete')
def test_suite():
    # First step (Start)
    assert random.choice((True, False))
    yield

    # Second step (Stop)
    with optional_step('step_instance_stop') as stop_step:
        assert random.choice((True, False))
    yield stop_step

    # Third step (Delete)
    with optional_step('step_instance_delete') as delete_step:
        assert random.choice((True, False))
    yield delete_step
Check the documentation for details. (I'm the author of this package by the way ;) )
You can use the pytest-ordering package to order your tests using pytest marks. The author of the package explains the usage here.
Example:
@pytest.mark.first
def test_first():
    pass

@pytest.mark.second
def test_2():
    pass

@pytest.mark.order5
def test_5():
    pass
Hi, how can I generate test methods dynamically for a list or for a number of files?
Say I have file1, file2 and fileN with input values in JSON. Now I need to run the same test for multiple values, like below:
class Test_File(unittest.TestCase):
    def test_$FILE_NAME(self):
        return_val = validate_data($FILE_NAME)
        assert return_val
I am using the following command to run py.test and generate the HTML and JUnit reports:
py.test test_rotate.py --tb=long --junit-xml=results.xml --html=results.html -vv
At present I am manually defining the methods as below:
def test_lease_file(self):
    return_val = validate_data(lease_file)
    assert return_val

def test_string_file(self):
    return_val = validate_data(string_file)
    assert return_val

def test_data_file(self):
    return_val = validate_data(data_file)
    assert return_val
Please let me know how I can get py.test to dynamically generate the test methods while still producing the reports.
I am expecting exactly what is described in this blog: "http://eli.thegreenplace.net/2014/04/02/dynamically-generating-python-test-cases"
But the above blog uses unittest, and if I use that approach I am not able to generate the HTML and JUnit reports.
When I use parametrize as below, I get an error saying the test requires 2 arguments:
test_case = []

class Memory_utlization(unittest.TestCase):

    @classmethod
    def setup_class(cls):
        fname = "test_order.txt"
        with open(fname) as f:
            content = f.readlines()
        file_names = []
        for i in content:
            file_names.append(i.strip())
        data = tuple(file_names)
        test_case.append(data)
        logging.info(test_case)  # here test_case=[('dhcp_lease.json'),('dns_rpz.json'),]

    @pytest.mark.parametrize("test_file", test_case)
    def test_eval(self, test_file):
        logging.info(test_case)
When I execute the above, I get the following error:
> testMethod()
E TypeError: test_eval() takes exactly 2 arguments (1 given)
This might help you with this.
Your test class would then look like
class Test_File():

    @pytest.mark.parametrize(
        'file', [
            (lease_file,),
            (string_file,),
            (data_file,)
        ]
    )
    def test_file(self, file):
        assert validate_data(file)
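As a side note: pytest's @pytest.mark.parametrize does not work on methods of unittest.TestCase subclasses, which is likely why the earlier attempt raised the TypeError and why the class above does not inherit from unittest.TestCase. A minimal self-contained sketch of the same idea, with a hypothetical validate_data stub so it runs on its own:
import pytest

FILE_NAMES = ['dhcp_lease.json', 'dns_rpz.json']  # e.g. read from test_order.txt

def validate_data(file_name):
    # hypothetical stand-in for the real validation logic
    return bool(file_name)

@pytest.mark.parametrize('file_name', FILE_NAMES)
def test_file(file_name):
    # each file shows up as its own entry in the JUnit/HTML reports,
    # e.g. test_file[dhcp_lease.json]
    assert validate_data(file_name)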