Pytest class scope parametrization - python

I have a couple of fixtures that do some initialization that is rather expensive. Some of those fixtures can take parameters, altering their behaviour slightly.
Because these are so expensive, I wanted to initialise them once per test class. However, pytest does not destroy and reinitialise the fixtures on the next permutation of parameters.
See this example: https://gist.github.com/vhdirk/3d7bd632c8433eaaa481555a149168c2
I would expect that StuffStub would be a different instance when DBStub is recreated for parameters 'foo' and 'bar'.
Did I misunderstand something? Is this a bug?

I've recently encountered the same problem and wanted to share another solution. In my case the graph of fixtures that required regenerating for each parameter set was very deep and not easy to control. An alternative is to bypass the pytest parametrization system and programmatically generate the test classes, like so:
import pytest
import random

def make_test_class(name):
    class TestFoo:
        @pytest.fixture(scope="class")
        def random_int(self):
            return random.randint(1, 100)

        def test_something(self, random_int):
            assert random_int and name == "foo"

    return TestFoo

TestFooGood = make_test_class("foo")
TestFooBad = make_test_class("bar")
TestFooBad2 = make_test_class("wibble")
You can see from this that three tests are run: one passes (where "foo" == "foo") and the other two fail, but the class-scoped fixtures have been recreated for each generated class.

This is not a bug. There is no relation between the fixtures, so one of them is not going to be called again just because the other one was (due to having multiple params).
In your case db is called twice because db_factory, which it uses, has 2 params. The stuff fixture, on the other hand, is called only once because stuff_factory has only one item in params.
You should get what you expect if stuff included db_factory as well, even without actually using its output (db_factory would not be called more than twice):
@pytest.fixture(scope="class")
def stuff(stuff_factory, db_factory):
    return stuff_factory()
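To see the mechanism in isolation, here is a minimal self-contained sketch (with stand-in fixtures, not the ones from the gist): because stuff depends on the parametrized db_factory, pytest sets stuff up again for each parameter, even though stuff never uses its value.

import pytest

@pytest.fixture(scope="class", params=["foo", "bar"])
def db_factory(request):
    # stand-in for the expensive, parametrized fixture
    return request.param

@pytest.fixture(scope="class")
def stuff(db_factory):
    # depending on db_factory ties this fixture to db_factory's parameters,
    # so a fresh instance is created for "foo" and again for "bar"
    return object()

class TestStuff:
    seen = []

    def test_fresh_instance(self, stuff):
        # collected once per db_factory parameter; each run sees a new 'stuff'
        assert stuff not in TestStuff.seen
        TestStuff.seen.append(stuff)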

Related

Pytest: parametrizing tests that require a slow initialization

I want to do tests with randomized parameters of a class with a very slow init method. The tests themselves are very quick, but require a time-consuming initialization step.
Of course, I do something like this:
@pytest.mark.parametrize("params", LIST_OF_RANDOMIZED_PARAMS)
def test_one(params):
    state = very_slow_initialization(params)
    assert state.fast_test()

@pytest.mark.parametrize("params", LIST_OF_RANDOMIZED_PARAMS)
def test_two(params):
    state = very_slow_initialization(params)
    assert state.another_fast_test()
From my unsuccessful tries so far I've learnt:
initializing a test class with a parametrized setup_class(params) method is not supported
using a fixture that initializes the class still calls the slow initialization every time
I could create a list with all initialized states in advance, but they demand a lot of memory. Furthermore, I sometimes like to run a lot of randomized tests overnight and just stop them the next morning. For that I would need to know precisely how many tests to run so that all initializations finish before then.
If possible I would prefer a solution that runs both tests for the first parameter, then runs both with the second parameter and so on.
There is probably a really simple solution for this.
pytest fixtures are a solution for you. The lifetime of a fixture might be a single test, a class, a module, or the whole test session.
fixture management scales from simple unit to complex functional testing, allowing to parametrize fixtures and tests according to configuration and component options, or to re-use fixtures across function, class, module or whole test session scopes.
Per the Fixture availability section, you need to define the fixture in the class or at module level.
Consider using module-scoped ones (note that the initialization is launched only once):
import pytest

@pytest.fixture(scope="module")
def heavy_context():
    # Use your LIST_OF_RANDOMIZED_PARAMS randomized parameters here
    # to initialize whatever you want.
    print("Slow fixture initialized")
    return ["I'm heavy"]

def test_1(heavy_context):
    print(f"\nUse of heavy context: {heavy_context[0]}")

def test_2(heavy_context):
    print(f"\nUse of heavy context: {heavy_context[0]}")
Tests output:
...
collecting ... collected 2 items
test_basic.py::test_1 Slow fixture initialized
PASSED [ 50%]
Use of heavy context: I'm heavy
test_basic.py::test_2 PASSED [100%]
Use of heavy context: I'm heavy
Now, if you need it to be assertion-safe (releasing resources even when a test fails), consider creating heavy_context in a context-manager manner (more details here: Fixture, Running multiple assert statements safely):
import pytest

@pytest.fixture(scope="module")
def heavy_context():
    print("Slow context initialized")
    obj = ["I'm heavy"]
    # It is mandatory to put the deinitialization into the "finally" block,
    # otherwise in case of an exception it won't be executed.
    try:
        yield obj[0]
    finally:
        print("Slow context released")

def test_1(heavy_context):
    # Note that heavy_context here is the value we yielded
    # (obj[0]) from the heavy_context fixture.
    print(f"\nUse of heavy context: {heavy_context}")

def test_2(heavy_context):
    print(f"\nUse of heavy context: {heavy_context}")
Output:
collecting ... collected 2 items
test_basic.py::test_1 Slow context initialized
PASSED [ 50%]
Use of heavy context: I'm heavy
test_basic.py::test_2 PASSED [100%]
Use of heavy context: I'm heavy
Slow context released
============================== 2 passed in 0.01s ===============================
Process finished with exit code 0
Could you perhaps run the tests one after another without initializing the object again, e.g.:
@pytest.mark.parametrize("params", LIST_OF_RANDOMIZED_PARAMS)
def test_one(params):
    state = very_slow_initialization(params)
    assert state.fast_test()
    assert state.another_fast_test()
or using separate functions for organization:
@pytest.mark.parametrize("params", LIST_OF_RANDOMIZED_PARAMS)
def test_main(params):
    state = very_slow_initialization(params)
    step_one(state)
    step_two(state)

def step_one(state):
    assert state.fast_test()

def step_two(state):
    assert state.another_fast_test()
Although it's a test script, you can still use functions to organize your code. In the version with separate functions you may even declare a fixture, in case the state may be needed in other tests, too:
@pytest.fixture(scope="module", params=LIST_OF_RANDOMIZED_PARAMS)
def state(request):
    return very_slow_initialization(request.param)

def test_main(state):
    step_one(state)
    step_two(state)

def step_one(state):
    assert state.fast_test()

def step_two(state):
    assert state.another_fast_test()
I hope I didn't make a mistake here, but it should work like this.

Pytest: Parameterize unit test using a fixture that uses another fixture as input

I am new to parametrization and fixtures and still learning. I found a few posts that use indirect parametrization, but it is difficult for me to implement based on what I have in my code. I would appreciate any ideas on how I could achieve this.
I have a couple of fixtures in my conftest.py that supply input files to a function get_fus_output() in my test file. That function processes the input and generates two dataframes to compare in my testing. Further, I am subsetting those two dataframes based on a common value ('Fus_id') to test them individually. So the output of this function would be [(Truth_df1, test_df1), (Truth_df2, test_df2), ...], just to parametrize the testing of each of these test and truth dataframes. Unfortunately, I am not able to use this in my test function test_annotation_match, since this function needs a fixture.
I am not able to feed the fixture as input to another fixture to parametrize. Yes, it is not supported in pytest, but I am not able to figure out a workaround with indirect parametrization.
# fixtures from conftest.py
@pytest.fixture(scope="session")
def test_input_df(fixture_path):
    fus_bkpt_file = os.path.join(fixture_path, 'test_bkpt.tsv')
    test_input_df = pd.read_csv(fus_bkpt_file, sep='\t')
    return test_input_df

@pytest.fixture
def test_truth_df(fixture_path):
    test_fus_out_file = os.path.join(fixture_path, 'test_expected_output.tsv')
    test_truth_df = pd.read_csv(test_fus_out_file, sep='\t')
    return test_truth_df

@pytest.fixture
def res_path():
    return utils.get_res_path()

# test script
@pytest.fixture
def get_fus_output(test_input_df, test_truth_df, res_path):
    param_list = []
    # get output from script
    script_out = ex_annot.run(test_input_df, res_path)
    for index, row in test_input_df.iterrows():
        fus_id = row['Fus_id']
        param_list.append((get_frame(test_truth_df, fus_id), get_frame(script_out, fus_id)))
    # param_list e.g.: [(Truth_df1, test_df1), (Truth_df2, test_df2), ...]
    print(param_list)
    return param_list

@pytest.mark.parametrize("get_fus_output", [test_input_df, test_truth_df, res_path], indirect=True)
def test_annotation_match(get_fus_output):
    test, expected = get_fusion_output
    assert_frame_equal(test, expected, check_dtype=False, check_like=True)
# OUTPUT
================================================================================ ERRORS ================================================================================
_______________________________________________________ ERROR collecting test_annotations.py _______________________________________________________
test_annotations.py:51: in <module>
    @pytest.mark.parametrize("get_fus_output", [test_input_df, test_truth_df, res_path], indirect=True)
E   NameError: name 'test_input_df' is not defined
======================================================================= short test summary info ========================================================================
ERROR test_annotations.py - NameError: name 'test_input_df' is not defined
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=========================================================================== 1 error in 1.46s ===========================================================================
I'm not 100% sure I understand what you are trying to do here, but I think your understanding of parameterization and the role of fixtures is incorrect. It seems like you are trying to use the fixtures to create the parameter lists for your tests, which isn't really the right way to go about it (and the way you are doing it certainly won't work, as you are seeing).
To fully explain how to fix this, first, let me give a little background about how parameterization and fixtures are meant to be used.
Parameterization
I don't think anything here should be new, but just to make sure we are on the same page:
Normally, in Pytest, one test_* function is one test case:
def test_square():
    assert square(3) == 9
If you want to do the same test but with different data, you can write separate tests:
def test_square_pos():
    assert square(3) == 9

def test_square_frac():
    assert square(0.5) == 0.25

def test_square_zero():
    assert square(0) == 0

def test_square_neg():
    assert square(-3) == 9
This isn't great, because it violates the DRY principle. Parameterization is the solution to this. You turn one test case into several by providing a list of test parameters:
@pytest.mark.parametrize('test_input,expected',
                         [(3, 9), (0.5, 0.25), (0, 0), (-3, 9)])
def test_square(test_input, expected):
    assert square(test_input) == expected
Fixtures
Fixtures are also about DRY code, but in a different way.
Suppose you are writing a web app. You might have several tests that need a connection to the database. You can add the same code to each test to open and set up a test database, but that's definitely repeating yourself. If you, say, switch databases, that's a lot of test code to update.
Fixtures are functions that allow you to do some setup (and potentially teardown) that can be used for multiple tests:
import sqlite3
import pytest

@pytest.fixture
def db_connection():
    # Open a temporary database in memory
    db = sqlite3.connect(':memory:')
    # Create a table of test orders to use
    db.execute('CREATE TABLE orders (id, customer, item)')
    db.executemany('INSERT INTO orders (id, customer, item) VALUES (?, ?, ?)',
                   [(1, 'Max', 'Pens'),
                    (2, 'Rachel', 'Binders'),
                    (3, 'Max', 'White out'),
                    (4, 'Alice', 'Highlighters')])
    return db

def test_get_orders_by_name(db_connection):
    orders = get_orders_by_name(db_connection, 'Max')
    assert orders == [(1, 'Max', 'Pens'),
                      (3, 'Max', 'White out')]

def test_get_orders_by_name_nonexistent(db_connection):
    orders = get_orders_by_name(db_connection, 'John')
    assert orders == []
Fixing Your Code
Ok, so with that background out of the way, let's dig into your code.
The first problem is with your @pytest.mark.parametrize decorator:
@pytest.mark.parametrize("get_fus_output", [test_input_df, test_truth_df, res_path], indirect=True)
This isn't the right situation to use indirect. Just like tests can be parameterized, fixtures can be parameterized, too. It's not very clear from the docs (in my opinion), but indirect is just an alternative way to parameterize fixtures. That's totally different from using a fixture in another fixture, which is what you want.
In fact, for get_fus_output to use the test_input_df, test_truth_df, and res_path fixtures, you don't need the @pytest.mark.parametrize line at all. In general, any argument to a test function or fixture is automatically assumed to be a fixture if it's not otherwise used (e.g. by the @pytest.mark.parametrize decorator).
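For example (a hypothetical sketch, not your exact code), a fixture can use other fixtures simply by naming them as arguments:

@pytest.fixture
def combined(test_input_df, test_truth_df, res_path):
    # all three arguments are resolved as fixtures automatically;
    # no @pytest.mark.parametrize and no indirect= is involved
    return test_input_df, test_truth_df, res_path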
So, your existing @pytest.mark.parametrize isn't doing what you expect. How do you parameterize your test then? This is getting into the bigger problem: you are trying to use the get_fus_output fixture to create the parameters for test_annotation_match. That isn't the sort of thing you can do with a fixture.
When Pytest runs, first it collects all the test cases, then it runs them one by one. Test parameters have to be ready during the collection stage, but fixtures don't run until the testing stage. There is no way for code inside a fixture to help with parameterization. You can still generate your parameters programmatically, but fixtures aren't the way to do it.
You'll need to do a few things:
First, convert get_fus_output from a fixture to a regular function. That means removing the @pytest.fixture decorator, but you've also got to update it not to use the test_input_df, test_truth_df, and res_path fixtures. (If nothing else needs them as fixtures, you can convert them all to regular functions, in which case you probably want to put them in their own module outside of conftest.py, or just move them into the same test script.)
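As a rough sketch of that first step (assuming a plain FIXTURE_PATH constant in place of the fixture_path fixture, and that pd, ex_annot, utils, and get_frame are importable exactly as in your original code):

import os
import pandas as pd

FIXTURE_PATH = 'tests/fixtures'  # hypothetical replacement for the fixture_path fixture

def get_fus_output():
    # a plain function, callable at collection time; no fixtures involved
    test_input_df = pd.read_csv(os.path.join(FIXTURE_PATH, 'test_bkpt.tsv'), sep='\t')
    test_truth_df = pd.read_csv(os.path.join(FIXTURE_PATH, 'test_expected_output.tsv'), sep='\t')
    script_out = ex_annot.run(test_input_df, utils.get_res_path())
    param_list = []
    for _, row in test_input_df.iterrows():
        fus_id = row['Fus_id']
        param_list.append((get_frame(test_truth_df, fus_id), get_frame(script_out, fus_id)))
    return param_list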
Then, @pytest.mark.parametrize needs to use that function to get a list of parameters:
@pytest.mark.parametrize("expected,test", get_fus_output())
def test_annotation_match(expected, test):
    assert_frame_equal(test, expected, check_dtype=False, check_like=True)

Mock variable in function

For unit testing, I want to mock a variable inside a function, such as:
def function_to_test(self):
    foo = get_complex_data_structure()  # Do not test this
    do_work(foo)                        # Test this
In my unit test, I don't want to be dependent on what get_complex_data_structure() would return, and therefore I want to set the value of foo manually.
How do I accomplish this? Is this the place for @patch.object?
Just use @patch() to mock out get_complex_data_structure():
@patch('module_under_test.get_complex_data_structure')
def test_function_to_test(self, mocked_function):
    foo_mock = mocked_function.return_value
When the function under test then calls get_complex_data_structure(), a mock object is returned and stored in the local name foo: the very same object that mocked_function.return_value references in the test above. You can use that value to check that do_work() was passed the right object, for example.
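Putting it together, something along these lines might work (a sketch only; it assumes both functions live in a hypothetical module named module_under_test and that function_to_test is exercised on some instance self.obj created in setUp):

from unittest.mock import patch

@patch('module_under_test.do_work')
@patch('module_under_test.get_complex_data_structure')
def test_function_to_test(self, mocked_get, mocked_do_work):
    # the decorator closest to the function provides the first mock argument
    foo_mock = mocked_get.return_value               # this is what 'foo' becomes inside the function
    self.obj.function_to_test()                      # 'obj' is a hypothetical object under test
    mocked_do_work.assert_called_once_with(foo_mock) # do_work received the mocked structure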
Assuming that get_complex_data_structure is a function1, you can just patch it using any of the various mock.patch utilities:
with mock.patch.object(the_module, 'get_complex_data_structure', return_value=something):
    val = function_to_test()
    ...
They can be used as decorators or context managers, or explicitly started and stopped using the start and stop methods.2
1 If it's not a function, you can always factor that code out into a simple utility function which returns the complex data structure.
2 There are a million ways to use mocks -- it pays to read the docs to figure out all the ways you can set the return value, etc.

Python: Modify SetUp based on TestCase in unittest.TestCase

I want each test case, when running the setUp function, to declare a different value of "x". Is there a way I can achieve this in the setUp function? Sample code is below. How do I change the PSEUDO CODE in the setUp function?
import random
import unittest

class TestSequenceFunctions(unittest.TestCase):
    def setUp(self):
        # ***PSEUDO CODE***
        x = 10  # if test_shuffle uses setUp()
        x = 20  # if test_choice uses setUp()
        x = 30  # if test_sample uses setUp()
        # ***PSEUDO CODE***

    def test_shuffle(self):
        # test_shuffle
        ...

    def test_choice(self):
        # test_choice
        ...

    def test_sample(self):
        # test_sample
        ...

if __name__ == '__main__':
    unittest.main()
I could achieve this by writing each test case in a different file, but that would drastically increase the number of files.
One unittest file thematically captures tests that all cover similar features. The setup is used to get that feature into a testable state.
Move the assignment of x into the actual test method (keep x = 0 in the setup if you want every test to have an x). It makes it clearer, when reading the test, exactly what is happening and how it is being tested. You shouldn't have conditional logic that affects how tests work inside your setup function, because you are introducing complexity into the test's preconditions, which means you have a much larger surface area for errors.
Perhaps I am missing the point, but the assignment in your pseudo code could just be moved to the start of the corresponding test. If the "assignment" is more complex, or spans multiple tests, then just create functions outside the test case but inside the file, and have the corresponding tests invoke whatever functions are supposed to be part of their "setUp".
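As a sketch of what both answers suggest (the helper names and values here are only placeholders):

import random
import unittest

def shuffle_test_data():
    # plain module-level helper; not a test and not part of setUp
    return list(range(10))

def choice_test_data():
    return list(range(20))

class TestSequenceFunctions(unittest.TestCase):
    def test_shuffle(self):
        data = shuffle_test_data()   # per-test "setup" lives in the test itself
        random.shuffle(data)
        self.assertEqual(sorted(data), list(range(10)))

    def test_choice(self):
        data = choice_test_data()
        self.assertIn(random.choice(data), data)

if __name__ == '__main__':
    unittest.main()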

How to write functional/integration tests in Selenium Python

I'm new to testing and I would like to
1) test the login
2) create a folder
3) add content (a page) into the folder
I have each of the tests written and they work, but obviously I would like to build them on top of each other, e.g. in order to do 3 I need to do 1 then 2; in order to do 2 I need to do 1. This is my basic test structure:
class TestSelenium(unittest.TestCase):
    def setUp(self):
        # Create a new instance of the Firefox driver
        self.driver = webdriver.Firefox()

    def testLogin(self):
        print('1')
        ...

    def testFolderCreation(self):
        print('2')
        ...

    def testContentCreation(self):
        print('3')
        ...

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()
At first, I thought the tests would run in order and the second function would continue where the first one left off, but I've found this is not the case; each test seems to start over. I've also realized that they execute in reverse order: I get an output of 3, 2, 1 in the terminal. How should I achieve what I want? If I call the previous functions before I run the one I want, I feel like I'm repetitively testing the same thing over and over, since each one is a test (e.g. in testContentCreation I would call testLogin and then testFolderCreation, and inside testFolderCreation call testLogin again; if I were to do more, testLogin would be called many times). Should I instead turn the previous steps into regular non-test functions and have the final test function call them in order? If I do it that way, then if any of the steps fail the last one fails, and there would be one big test function.
Any suggestions on how you should write this type of test?
Also, why are the tests running in reverse order?
Thanks!
You are seeing what you are seeing, I think, because you are making some incorrect assumptions about the assumptions unittest makes. Each test case is assumed to be a self-contained entity, so no run order is imposed. In addition, setUp() and tearDown() operate before and after each individual test. If you want setup/teardown shared by the whole class, you need to write classmethods named setUpClass() and tearDownClass() (see the sketch below). You may also want to look into the TestSuite class. More here: http://docs.python.org/library/unittest.html
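A rough sketch of that approach (the URL and the _login helper are placeholders, not your actual application):

import unittest
from selenium import webdriver

class TestSelenium(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # one browser shared by every test in the class instead of one per test
        cls.driver = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

    def _login(self):
        # not named test_*, so unittest never collects it as a test;
        # tests that need a logged-in session call it explicitly
        self.driver.get('http://example.com/login')
        # ... fill in the form and submit ...

    def test_folder_creation(self):
        self._login()
        # ... create the folder and assert it exists ...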
Keep in mind that when the unittest library does test discovery (reflecting on your test case class to find the test methods to run), it does not see the order in which you defined the methods; by default the discovered names are sorted alphabetically, which is why testContentCreation, testFolderCreation, testLogin run in that order and you see 3, 2, 1.
