I am trying to use pytest's parametrize, and I want to supply the test cases from a different function. I have tried this:
test_input = []
rarp_input1 = ""
rarp_output1 = ""
count = 1

def test_first_rarp():
    global test_input
    config = ConfigParser.ConfigParser()
    config.read(sys.argv[2])
    global rarp_input1
    global rarp_output1
    rarp_input1 = config.get('rarp', 'rarp_input1')
    rarp_input1 = dpkt.ethernet.Ethernet(rarp_input1)
    rarp_input2 = config.get('rarp', 'rarp_input2')
    rarp_output1 = config.getint('rarp', 'rarp_output1')
    rarp_output2 = config.get('rarp', 'rarp_output2')
    dict_input = []
    dict_input.append(rarp_input1)
    dict_output = []
    dict_output.append(rarp_output1)
    global count
    test_input.append((dict_input[0], count, dict_output[0]))
    # assert test_input == [something something,someInt]

@pytest.mark.parametrize("test_input1,test_input2,expected1", test_input)
def test_mod_rarp(test_input1, test_input2, expected1):
    global test_input
    assert mod_rarp(test_input1, test_input2) == expected1
But the second test case is getting skipped. It says
test_mod_rarp1.py::test_mod_rarp[test_input10-test_input20-expected10]
Why is the test case getting skipped? I have checked that neither the function nor the input is wrong, because the following code works fine:
#pytest.mark.parametrize("test_input1,test_input2,expected1,[something something,someInt,someInt])
def test_mod_rarp(test_input1,test_input2,expected1):
assert mod_rarp(test_input1,test_input2) == expected1
I have not put the actual inputs here; they are correct anyway. I also have a config file from which I read the inputs using ConfigParser. test_mod_rarp1.py is the name of the Python file where I am doing this. I basically want to know whether we can access variables (test_input in my example) from other functions to use in parametrize, in case that is the problem here. If we can't, how do I change the scope of the variable?
Parametrization happens at collection time, before any test code runs, which is why a test parametrized with data that is only generated at run time gets skipped: the parameter list is still empty when the test is collected.
The ideal way to achieve what you are trying to do is fixture parametrization.
The example below should clear things up, and you can then apply the same logic to your case.
import pytest

input = []

def generate_input():
    global input
    input = [10, 20, 30]

@pytest.mark.parametrize("a", input)
def test_1(a):
    assert a < 25

def generate_input2():
    return [10, 20, 30]

@pytest.fixture(params=generate_input2())
def a(request):
    return request.param

def test_2(a):
    assert a < 25
Output:
<SKIPPED:>pytest_suites/test_sample.py::test_1[a0]
********** test_2[10] **********
<EXECUTING:>pytest_suites/test_sample.py::test_2[10]
Collected Tests
TEST::pytest_suites/test_sample.py::test_1[a0]
TEST::pytest_suites/test_sample.py::test_2[10]
TEST::pytest_suites/test_sample.py::test_2[20]
TEST::pytest_suites/test_sample.py::test_2[30]
Note that test_1 was skipped because parametrization happened before generate_input() was ever executed, while test_2 gets parametrized as required.
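Applying the same idea to your case, the config reading can move into a plain helper that runs at collection time and feeds a parametrized fixture. The following is only a sketch under a few assumptions: 'rarp.cfg' is a placeholder for your real config path (sys.argv is not a reliable way to pass it under pytest), and mod_rarp and dpkt are importable exactly as in your file:
import ConfigParser  # configparser on Python 3
import dpkt
import pytest

def generate_rarp_cases():
    # runs while the module is collected, so the data exists before parametrization
    config = ConfigParser.ConfigParser()
    config.read('rarp.cfg')  # placeholder path instead of sys.argv[2]
    rarp_input1 = dpkt.ethernet.Ethernet(config.get('rarp', 'rarp_input1'))
    rarp_output1 = config.getint('rarp', 'rarp_output1')
    return [(rarp_input1, 1, rarp_output1)]

@pytest.fixture(params=generate_rarp_cases())
def rarp_case(request):
    return request.param

def test_mod_rarp(rarp_case):
    # mod_rarp is assumed to be imported from your module under test
    test_input1, test_input2, expected1 = rarp_case
    assert mod_rarp(test_input1, test_input2) == expected1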
I am trying to force myself to understand how decorators work and how I might use them to run a function multiple times.
I am trying to simulate datasets with three variables that vary in their sample size and in whether the sampling is conditional or not.
So I create the population distribution that I am sampling from:
from numpy.random import normal, negative_binomial, binomial
import pandas as pd

population_N = 100000
data = pd.DataFrame({
    "Variable A": normal(0, 1, population_N),
    "Variable B": negative_binomial(1, 0.5, population_N),
    "Variable C": binomial(1, 0.5, population_N)
})
Rather than doing the following:
sample_20 = data.sample(20)
sample_50 = data.sample(50)
condition = data["Variable B"] != 0
sample_20_non_random = data[condition].sample(20)
sample_50_non_random = data[condition].sample(50)
I wanted to simplify things and make them more efficient. So I started with a super simple function where I can pass whether the sample should be random or not.
def simple_function(data_frame, type = "random"):
    if (type == "random"):
        sample = data_frame.sample(sample_size)
    else:
        condition = data_frame["Variable B"] != 0
        sample = data_frame[condition].sample(sample_size)
    return sample
But I want to do this for more than one sample size. So I thought that, rather than writing a for-loop, which can be slow, I could maybe just use a decorator. I have tried but failed to understand their logic, so I thought this could be good practice to understand them better.
import functools

def decorator(cache = {}, **case):
    def inner(function):
        function_name = function.__name__
        if function_name not in cache:
            cache[function_name] = function
        @functools.wraps(function)
        def wrapped_function(**kwargs):
            if cache[function_name] != function:
                cache[function_name](**case)
            else:
                function(**case)
        return wrapped_function
    return inner
@decorator(sample_size = [20, 50])
def sample(data_frame, type = "random"):
    if (type == "random"):
        sample = data_frame.sample(sample_size)
    else:
        condition = data_frame["Variable B"] != 0
        sample = data_frame[condition].sample(sample_size)
    return sample
I guess what I am not understanding is how the inheritance of the arguments works and how that then affects the iteration over the function in the decorator.
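For what it is worth, here is a minimal working sketch of a decorator that does run the wrapped sampler once per sample size; the names (repeat_with_sample_sizes, the returned dict) are illustrative rather than a definitive design:
import functools

def repeat_with_sample_sizes(sample_sizes):
    def inner(function):
        @functools.wraps(function)
        def wrapped_function(*args, **kwargs):
            # call the original function once per size and collect the results
            return {size: function(*args, sample_size=size, **kwargs)
                    for size in sample_sizes}
        return wrapped_function
    return inner

@repeat_with_sample_sizes([20, 50])
def sample(data_frame, sample_size, type="random"):
    if type == "random":
        return data_frame.sample(sample_size)
    condition = data_frame["Variable B"] != 0
    return data_frame[condition].sample(sample_size)

samples = sample(data)                         # {20: 20-row sample, 50: 50-row sample}
conditional_samples = sample(data, type="non-random")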
I am reading my test data from a Python file as follows.
# testdata.py -- it's a list of tuples
TEST_DATA = [
    (
        {"test_scenario": "1"}, {"test_case_id": 1}
    ),
    (
        {"test_scenario": "2"}, {"test_case_id": 2}
    )
]
Now I use this test data as part of a pytest test file.
# test.py
import pytest
import testdata
from testrailthingy import testcaseid

test_data = testdata.TEST_DATA
start = 0

class TestOne():
    @pytest.mark.parametrize(("test_scenario,testcase_id"), test_data)
    @testcaseid.marktc(test_data[start][1]["test_case_id"])
    def testfunction(self, test_scenario, testcase_id):
        global start
        start = start + 1
        # Doing the test here.
Now when I print start, it changes its value continuously. But when I try to retrieve the pytest results, I still keep getting start = 0, due to which my test case ID isn't being recorded properly.
Can I either pass the marker from within the function, or is there a way to change the value of start dynamically in this example?
P.S. This is the best way that I am able to store my test data currently.
Here's how I have testcaseid.marktc defined:
# testrailthingy.py
class testcaseid(object):
    @staticmethod
    def marktc(*ids):
        return pytest.mark.testrail(ids=ids)
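Not a full answer, but one pattern worth noting: pytest can attach marks to individual parametrized cases at collection time with pytest.param, which avoids mutating start at run time altogether. A sketch, assuming the testdata and testrailthingy modules shown above:
import pytest
import testdata
from testrailthingy import testcaseid

test_data = [
    pytest.param(scenario, case, marks=testcaseid.marktc(case["test_case_id"]))
    for scenario, case in testdata.TEST_DATA
]

class TestOne(object):
    @pytest.mark.parametrize("test_scenario,testcase_id", test_data)
    def testfunction(self, test_scenario, testcase_id):
        # each case already carries its own testrail mark here
        pass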
What I am trying to do is skip tests that are not supported by the code I am testing. My pytest suite runs against an embedded system that could have different versions of code running. I want to mark my tests so that they only run if they are supported by the target.
I have added a pytest_addoption method:
def pytest_addoption(parser):
    parser.addoption(
        '--target-version',
        action='store', default='28',
        help='Version of firmware running in target')
I created a fixture to decide whether the test should run:
@pytest.fixture(autouse=True)
def version_check(request, min_version: int = 0, max_version: int = 10000000):
    version_option = int(request.config.getoption('--target-version'))
    if min_version and version_option < min_version:
        pytest.skip('Version number is lower than the version required to run this test '
                    f'({min_version} vs {version_option})')
    if max_version and version_option > max_version:
        pytest.skip('Version number is higher than the version required to run this test '
                    f'({max_version} vs {version_option})')
And I mark the tests like this:
@pytest.mark.version_check(min_version=24)
def test_this_with_v24_or_greater():
    print('Test passed')

@pytest.mark.version_check(max_version=27)
def test_not_supported_after_v27():
    print('Test passed')

@pytest.mark.version_check(min_version=13, max_version=25)
def test_works_for_range_of_versions():
    print('Test passed')
When running the tests I just want to add --target-version 22 to the arguments and have only the right tests run. I haven't been able to figure out how to pass the arguments from @pytest.mark.version_check(max_version=27) to version_check.
Is there a way to do this or am I completely off track and should be looking at something else to accomplish this?
You are not far from a solution, but you're mixing up markers with fixtures; they are not the same, even if you give them the same name. You can, however, read the markers of each test function in your version_check fixture and skip the test depending on what the version_check marker provides, if it is set. Example:
@pytest.fixture(autouse=True)
def version_check(request):
    version_option = int(request.config.getoption('--target-version'))
    # request.node is the current test item;
    # query the "version_check" marker of the current test item
    version_marker = request.node.get_closest_marker('version_check')
    # if the test item was not marked, there is no version restriction
    if version_marker is None:
        return
    # arguments of @pytest.mark.version_check(min_version=10) are in marker.kwargs
    # arguments of @pytest.mark.version_check(0, 1, 2) would be in marker.args
    min_version = version_marker.kwargs.get('min_version', 0)
    max_version = version_marker.kwargs.get('max_version', 10000000)
    # the rest is your logic unchanged
    if version_option < min_version:
        pytest.skip('Version number is lower than the version required to run this test '
                    f'({min_version} vs {version_option})')
    if version_option > max_version:
        pytest.skip('Version number is higher than the version required to run this test '
                    f'({max_version} vs {version_option})')
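One small follow-up: on recent pytest versions an unregistered custom marker like version_check triggers a warning (and an error under --strict-markers), so it is worth registering it next to the pytest_addoption hook. A sketch of that registration in conftest.py; the invocation then stays exactly as you wanted, e.g. pytest --target-version 22:
# conftest.py
def pytest_configure(config):
    config.addinivalue_line(
        'markers',
        'version_check(min_version, max_version): '
        'run the test only against matching target firmware versions')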
I'm trying to divide up my SConstruct file into blocks of code, where each block is controlled by an Alias and no code is run by default, i.e. just by running scons.
The Aliases are of course run from the command line e.g. (in the example below):
scons h
Here is some example code. It appears to work OK. However, I have three questions.
Is there a better way to do this?
More specifically, I don't understand how the target arguments in the Alias call
get passed to the h and h3 action functions. I notice if I leave them blank the
build does not work. However there is no obvious way for the targets to be passed
to these functions, since they do not take any arguments.
Relatedly, the documentation says that action functions require target, source,
and env arguments. These action functions don't have these but work anyway. How come?
Code follows:
#!/usr/bin/python
Default(None)

def h(env):
    x = env.Program("hello1", "hello1.c")
    y = env.Program("hello2", "hello2.c")
    return 0

def h3(env):
    y = env.Program("hello3", "hello3.c")
    return 0

env = Environment()
env.AddMethod(h, "HELLO")
env.AddMethod(h3, "HELLO3")
env.Alias("h", ["hello1", "hello2"], env.HELLO())
env.Alias("h3", ["hello3"], env.HELLO3())
To answer your first question: yes, there is a better way.
env = Environment()
# h:
x = env.Program("hello1", "hello1.c")
y = env.Program("hello2", "hello2.c")
env.Alias("h", [x,y])
# equivalently: env.Alias("h", ["hello1", "hello2"])
# h3
y = env.Program("hello3", "hello3.c")
env.Alias("h3", y)
Default(None)
Alternatively, if you like grouping your Program() calls in a subroutine, that's okay, too. You just don't need AddMethod() for what you're doing:
env = Environment()

def h(env):
    x = env.Program("hello1", "hello1.c")
    y = env.Program("hello2", "hello2.c")
    return x, y

def h3(env):
    return env.Program("hello3", "hello3.c")

env.Alias("h", h(env))
env.Alias("h3", h3(env))
Default(None)
I'm using nosetests, and in two separate files I have two tests. Both run fine when run individually, but when run together, the mock from the first test messes up the results in the second test. How do I ensure that all mocks/patches are reset after a test function finishes, so that I get a clean test on every run?
If possible, an explanation in terms of my tests would be particularly appreciated. My first test looks like:
def test_list_all_channel(self):
    from notification.models import Channel, list_all_channel_names
    channel1 = Mock()
    channel2 = Mock()
    channel3 = Mock()
    channel1.name = "ch1"
    channel2.name = "ch2"
    channel3.name = "ch3"
    channel_list = [channel1, channel2, channel3]
    Channel.all = MagicMock()
    Channel.all.return_value = channel_list
    print Channel
    channel_name_list = list_all_channel_names()
    self.assertEqual("ch1", channel_name_list[0])
    self.assertEqual("ch2", channel_name_list[1])
    self.assertEqual("ch3", channel_name_list[2])
And my second test is:
def test_can_list_all_channels(self):
    add_channel_with_name("channel1")
    namelist = list_all_channel_names()
    self.assertEqual("channel1", namelist[0])
But the return value of Channel.all() is still set to the list from the first function, so I get "ch1" is not equal to "channel1". Any suggestions? Thank you very much!
Look up https://docs.python.org/3/library/unittest.mock.html#patch
At the start of your test, create the patch and start it (the target has to be the full import path of the attribute you are replacing):
p = patch('notification.models.Channel.all', new=MagicMock(return_value=channel_list))
p.start()
At the end:
p.stop()
This will ensure that your mocks are isolated to the test.
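A variation on the same idea that cleans up automatically even if the test body raises is to use patch as a context manager (or hand p.stop to self.addCleanup). The target string below is assumed from the imports in the question; adjust 'notification.models.Channel.all' to wherever Channel is actually looked up, and on Python 2 import from mock instead of unittest.mock:
from unittest.mock import patch, MagicMock, Mock

def test_list_all_channel(self):
    from notification.models import list_all_channel_names
    channel1, channel2, channel3 = Mock(), Mock(), Mock()
    channel1.name, channel2.name, channel3.name = "ch1", "ch2", "ch3"
    with patch('notification.models.Channel.all',
               new=MagicMock(return_value=[channel1, channel2, channel3])):
        channel_name_list = list_all_channel_names()
    # the patch is undone when the with block exits, so later tests
    # see the real Channel.all again
    self.assertEqual("ch1", channel_name_list[0])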