I want each test case to get a different value of "x" from the setUp function while it loads. Is there a way I can achieve this in setUp? Sample code is below. How should I change the PSEUDO CODE in the setUp function?
import random
import unittest

class TestSequenceFunctions(unittest.TestCase):

    def setUp(self):
        # ***PSEUDO CODE***
        x = 10  # if test_shuffle uses setUp()
        x = 20  # if test_choice uses setUp()
        x = 30  # if test_sample uses setUp()
        # ***PSEUDO CODE***

    def test_shuffle(self):
        # test_shuffle
        pass

    def test_choice(self):
        # test_choice
        pass

    def test_sample(self):
        # test_sample
        pass

if __name__ == '__main__':
    unittest.main()
I could achieve this by writing each test case in a different file, but that would drastically increase the number of files.
One unittest file thematically captures tests that all cover similar features. The setup is used to get that feature into a testable state.
Move that assignment of x into the actual test method (keep x = 0 in setUp if you want every test to actually have an x). It makes it clearer, when reading the test, exactly what is happening and how it is being tested. You shouldn't have conditional logic that affects how tests work inside your setUp function, because you are introducing complexity into the tests' preconditions, which means you have a much larger surface area for errors.
Perhaps I am missing the point, but the assignment in your pseudo code could just be moved to the start of the corresponding test. If the "assignment" is more complex, or spans multiple tests, then create functions outside the test case but inside the file, and have the corresponding tests invoke whichever functions are supposed to be part of their "setUp".
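For illustration, a minimal sketch of both suggestions; the helper name make_x is hypothetical and stands in for whatever more complex setup logic you have:

import unittest

def make_x(value):
    # shared helper for any "assignment" logic that spans multiple tests
    return value

class TestSequenceFunctions(unittest.TestCase):
    def setUp(self):
        self.x = 0  # default, so every test has an x

    def test_shuffle(self):
        self.x = make_x(10)
        # ... exercise shuffling with self.x ...

    def test_choice(self):
        self.x = make_x(20)
        # ... exercise choice with self.x ...

if __name__ == '__main__':
    unittest.main()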
Related
I have a couple of fixtures that do some initialization that is rather expensive. Some of those fixtures can take parameters, altering their behaviour slightly.
Because these are so expensive, I wanted to do the initialization once per test class. However, pytest does not destroy and reinitialize the fixtures on the next permutation of parameters.
See this example: https://gist.github.com/vhdirk/3d7bd632c8433eaaa481555a149168c2
I would expect that StuffStub would be a different instance when DBStub is recreated for parameters 'foo' and 'bar'.
Did I misunderstand something? Is this a bug?
I've recently encountered the same problem and wanted to share another solution. In my case the graph of fixtures that required regenerating for each parameter set was very deep and it's not so easy to control. An alternative is to bypass the pytest parametrization system and programmatically generate the test classes like so:
import pytest
import random

def make_test_class(name):
    class TestFoo:
        @pytest.fixture(scope="class")
        def random_int(self):
            return random.randint(1, 100)

        def test_something(self, random_int):
            assert random_int and name == "foo"

    return TestFoo

TestFooGood = make_test_class("foo")
TestFooBad = make_test_class("bar")
TestFooBad2 = make_test_class("wibble")
You can see from this that three tests are run: one passes (where "foo" == "foo") and the other two fail, but you can also see that the class-scoped fixtures have been recreated.
This is not a bug. There is no relation between the fixtures, so one of them is not going to be called again just because the other one was (due to having multiple params).
In your case db is called twice because db_factory, which it uses, has 2 params. The stuff fixture, on the other hand, is called only once because stuff_factory has only one item in params.
You should get what you expect if stuff depended on db_factory as well, without actually using its output (db_factory would not be called more than twice):
@pytest.fixture(scope="class")
def stuff(stuff_factory, db_factory):
    return stuff_factory()
When parametrizing tests and fixtures in pytest, pytest seems to eagerly evaluate all parameters and to construct some test-list data structure before starting to execute the tests.
This is a problem in 2 situations:
when you have many parameter values (e.g. from a generator) - the generator and the test itself may run fast, but all those parameter values eat up all the memory
when parametrizing a fixture with different kind of expensive resources, where you only can afford to run one resource at the same time (e.g. because they listen on the same port or something like that)
Thus my question: is it possible to tell pytest to evaluate the parameters on the fly (i.e. lazily)?
EDIT: my first reaction would be "that is exactly what parametrized fixtures are for": a function-scoped fixture is a lazy value, called just before the test node is executed, and by parametrizing the fixture you can predefine as many variants (for example from a database key listing) as you like.
import pytest
from pytest_cases import fixture_plus

@fixture_plus
def db():
    return <todo>

@fixture_plus
@pytest.mark.parametrize("key", [<list_of keys>])
def sample(db, key):
    return db.get(key)

def test_foo(sample):
    return sample
That being said, in some (rare) situations you still need lazy values in a parametrize function, and you do not wish these to be the variants of a parametrized fixture. For those situations, there is now a solution also in pytest-cases, with lazy_value. With it, you can use functions in the parameter values, and these functions get called only when the test at hand is executed.
Here is an example showing two coding styles (switch the use_partial boolean arg to True to enable the other alternative):
from functools import partial
from random import random

import pytest
from pytest_cases import lazy_value

database = [random() for i in range(10)]

def get_param(i):
    return database[i]

def make_param_getter(i, use_partial=False):
    if use_partial:
        return partial(get_param, i)
    else:
        def _get_param():
            return database[i]
        return _get_param

many_lazy_parameters = (make_param_getter(i) for i in range(10))

@pytest.mark.parametrize('a', [lazy_value(f) for f in many_lazy_parameters])
def test_foo(a):
    print(a)
Note that lazy_value also has an id argument if you wish to customize the test ids. The default is to use the function's __name__, and support for partial functions is on the way.
You can parametrize fixtures the same way, but remember that you have to use @fixture_plus instead of @pytest.fixture. See the pytest-cases documentation for details.
I'm the author of pytest-cases by the way ;)
As for your second question: the link to the manual proposed in the comments seems like exactly what one should do. It allows you "to setup expensive resources like DB connections or subprocess only when the actual test is run".
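A minimal sketch of that idea; expensive_connect() is a hypothetical stand-in for whatever costly setup you have:

import pytest

@pytest.fixture(scope="session")
def db_connection():
    # the expensive resource is only created when a test actually requests it
    conn = expensive_connect()  # hypothetical helper, replace with your real setup
    yield conn                  # hand the connection to the tests
    conn.close()                # teardown after the last test that used it

def test_uses_db(db_connection):
    assert db_connection is not None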
But as for your first question, it seems that such a feature is not implemented. You may pass a generator directly to parametrize like so:
@pytest.mark.parametrize('data', data_gen)
def test_gen(data):
    ...
But pytest will call list() on your generator, so the RAM problem persists here as well.
I've also found some GitHub issues that shed more light on why pytest does not handle generators lazily. It seems to be a design problem: "it's not possible to correctly manage parametrization having a generator as value" because "pytest would have to collect all those tests with all the metadata... collection happens always before test running".
There are also some references to hypothesis or nose's yield-based tests in such cases. But if you still want to stick with pytest, there are some workarounds:
If you somehow know the number of generated params, you may do the following:
import pytest

def get_data(N):
    for i in range(N):
        yield list(range(N))

N = 3000
data_gen = get_data(N)

@pytest.mark.parametrize('ind', range(N))
def test_yield(ind):
    data = next(data_gen)
    assert data
So here you parametrize over an index (which is not so useful by itself - it just tells pytest how many runs to make) and generate the data inside each run.
You may also wrap the run with memory_profiler.
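A sketch of the wrapper implied by the profiler output below; the file name run_test.py and the @profile placement come from that output, the rest is an assumption:

# run_test.py
import pytest
from memory_profiler import profile

@profile
def to_profile():
    pytest.main(['test.py'])

if __name__ == '__main__':
    to_profile()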
Results (46.53s):
3000 passed

Filename: run_test.py

Line #    Mem usage    Increment   Line Contents
================================================
     5     40.6 MiB     40.6 MiB   @profile
     6                             def to_profile():
     7     76.6 MiB     36.1 MiB       pytest.main(['test.py'])
And compare with the straightforward version:
@pytest.mark.parametrize('data', data_gen)
def test_yield(data):
    assert data
Which 'eats' much more memory:
Results (48.11s):
3000 passed

Filename: run_test.py

Line #    Mem usage    Increment   Line Contents
================================================
     5     40.7 MiB     40.7 MiB   @profile
     6                             def to_profile():
     7    409.3 MiB    368.6 MiB       pytest.main(['test.py'])
If you want to parametrize your test over other params at the same time, you may generalize the previous approach a bit, like so:
data_gen = get_data(N)

@pytest.fixture(scope='module', params=range(len_of_gen_if_known))
def fix():
    huge_data_chunk = next(data_gen)
    return huge_data_chunk

@pytest.mark.parametrize('other_param', ['aaa', 'bbb'])
def test_one(fix, other_param):
    data = fix
    ...
So we use a fixture at module scope here in order to "preset" our data for the parametrized test. Note that right here you may add another test and it will receive the generated data as well. Simply add it after test_one:
@pytest.mark.parametrize('param2', [15, 'asdb', 1j])
def test_two(fix, param2):
    data = fix
    ...
NOTE: if you do not know the number of generated items, you may use this trick: set some approximate value (ideally a bit higher than the expected number of generated tests) and treat the leftover tests as passed (or skipped) when the generator stops with StopIteration, which happens once all the data has been generated.
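A sketch of that trick, reusing the get_data generator from the example above; the approximate count and the decision to skip exhausted slots are assumptions:

import pytest

APPROX_COUNT = 3500        # deliberately a bit higher than the expected 3000 items
data_gen = get_data(3000)  # generator from the earlier example

@pytest.mark.parametrize('ind', range(APPROX_COUNT))
def test_yield(ind):
    try:
        data = next(data_gen)
    except StopIteration:
        pytest.skip("generator exhausted, nothing left to test")
    assert data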
Another possibility is to use factories as fixtures. Here you embed your generator into a fixture and iterate over it inside your test until it ends. The disadvantage is that pytest will treat this as a single test (with possibly a whole bunch of checks inside) and will fail as soon as one of the generated items fails. In other words, compared to the parametrize approach, not all pytest statistics/features are available.
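A minimal sketch of such a factory-style fixture, again assuming the get_data generator from above:

import pytest

@pytest.fixture
def data_factory():
    # hand the generator itself to the test
    return get_data(3000)

def test_all_chunks(data_factory):
    # a single pytest test that consumes the whole generator;
    # the first failing chunk aborts the loop, as noted above
    for data in data_factory:
        assert data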
And yet another option is to call pytest.main() in a loop, something like:
for data_chunk in data_gen:
    # set up the test for this chunk of data
    pytest.main(['test'])
The following is not about iterators themselves, but rather a way to save more time/RAM when you have a parametrized test:
Simply move some of the parametrization inside the test. Example:
#pytest.mark.parametrize("one", list_1)
#pytest.mark.parametrize("two", list_2)
def test_maybe_convert_objects(self, one, two):
...
Change to:
#pytest.mark.parametrize("one", list_1)
def test_maybe_convert_objects(self, one):
for two in list_2:
...
It's similar to factories but even easier to implement. It not only reduces RAM several times over, but also the time spent collecting metainfo. The drawback is that, for pytest, this is a single test covering all values of two. It works smoothly with "simple" tests - if there are special marks (xfail and the like) or something similar inside, there might be problems.
I've also opened a corresponding issue, where additional info/tweaks about this problem might appear.
You may find this workaround useful:
from datetime import datetime, timedelta
from time import sleep

import pytest

@pytest.mark.parametrize(
    'lazy_params',
    [
        lambda: (datetime.now() - timedelta(days=1), datetime.now()),
        lambda: (datetime.now(), datetime.now() + timedelta(days=1)),
    ],
)
def test_it(lazy_params):
    yesterday, today = lazy_params()
    print(f'\n{yesterday}\n{today}')
    sleep(1)
    assert yesterday < today
Sample output:
========================================================================= test session starts ==========================================================================
platform darwin -- Python 3.7.7, pytest-5.3.5, py-1.8.1, pluggy-0.13.1 -- /usr/local/opt/python/bin/python3.7
cachedir: .pytest_cache
rootdir: /Users/apizarro/tmp
collected 2 items
test_that.py::test_it[<lambda>0]
2020-04-14 18:34:08.700531
2020-04-15 18:34:08.700550
PASSED
test_that.py::test_it[<lambda>1]
2020-04-15 18:34:09.702914
2020-04-16 18:34:09.702919
PASSED
========================================================================== 2 passed in 2.02s ===========================================================================
I am trying to define a test method. Currently I am not receiving any errors, but the test is not actually running. The test is trying to make sure that only the first word in a string that is in list_first_words is being returned.
import unittest

class TestSong(unittest.TestCase):
    def first_words_list(self):
        self.assertEqual(Song().firstwords(["hello world"]), ["hello"])

if __name__ == "__main__":
    unittest.main()
Code that is being tested:
def firstwords(self, large_song_list):
    all_first_words = []  # Create an empty list
    for track in large_song_list:
        first_word = track.trackName.partition(' ')[0]
        all_first_words.append(first_word)
    return all_first_words
You need to rename the test method to test_first_words_list.
Tests are discovered by unittest only when they start with the word test. See "Organizing Test Code" in the documentation for more details.
As described in the documentation:
A testcase is created by subclassing unittest.TestCase. The three individual tests are defined with methods whose names start with the letters test. This naming convention informs the test runner about which methods represent tests.
So, you need to rename the method so that its name starts with test.
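For reference, a minimal sketch of the renamed method; Song is assumed to be importable from your own module, and the assertion is copied unchanged from the question:

import unittest

class TestSong(unittest.TestCase):
    def test_first_words_list(self):  # the test_ prefix lets the runner discover it
        self.assertEqual(Song().firstwords(["hello world"]), ["hello"])

if __name__ == "__main__":
    unittest.main()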
I'd like every assertion in a TestCase to actually be tested, even if the first one fails. In my situation, all the assertions are of the same nature.
I have something that evaluates formulas written as Python objects (think of them as formulas written as strings to be eval'd). I'd like to do something like:
class MyTest(TestCase):
    def test_something(self):
        for id in ids:
            for expression in get_formulas(id):
                for variable in extract_variables(expression):
                    self.assertIn(variable, list_of_all_variables)
=> I want to see printed all of the variables that are not in the list_of_all_variables!
This is necessary for me to review all my so-called formulas and be able to correct errors.
Some more context:
I'm having a variable number of tests to perform (depending on a list of IDs written in a versioned data file) in one app.
To have a variable number of TestCase instances, I did write a base class (mixin), then build on-the-fly classes with the use of 3-args type function (that is creating classes).
This way, I have n tests, corresponding to the n different ids. It's a first step, but what I want is that each and every assertion in those tests gets tested and the corresponding assertion errors get printed.
As referenced in the question Continuing in Python's unittest when an assertion fails, failing at the first assertion error is the hardcoded behavior of the TestCase class.
So instead of changing its behavior, I generated a lot of different test_... methods on my classes, in the following style:
import sys

from django.test import TestCase

# The list of all objects against which the tests have to be performed
formids = [12, 124, 234]

# get_formulas returns a list of formulas I have to test independently, linked to a formid
formulas = {id: get_formulas(id) for id in formids}

current_module = sys.modules[__name__]

def test_formula_method(self, formula):
    # Does some assertions
    self.assertNotEqual(formula.id, 0)

for formid in formids:
    attrs = {'formid': formid}
    for f in formulas[formid]:
        # f=f so the 2nd arg to test_formula_method stays local
        # and is not overwritten by the last one in the loop
        attrs['test_formula_%s' % f.name] = lambda self, f=f: test_formula_method(self, f)
    klass_name = "TestForm%s" % formid
    klass = type(klass_name, (TestCase,), attrs)
    setattr(current_module, klass_name, klass)
I'm new to testing and I would like to
1) test the login
2) create a folder
3) add content (a page) into the folder
I have each of the tests written and they work, but obviously I would like to build them on top of each other: in order to do 3 I need to do 1 and then 2, and in order to do 2 I need to do 1. This is my basic test structure:
import unittest
from selenium import webdriver

class TestSelenium(unittest.TestCase):
    def setUp(self):
        # Create a new instance of the Firefox driver
        self.driver = webdriver.Firefox()

    def testLogin(self):
        print('1')
        ...

    def testFolderCreation(self):
        print('2')
        ...

    def testContentCreation(self):
        print('3')
        ...

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()
At first, I thought the tests would run in order and the second method would continue where the first one left off, but I've found this is not the case; each test seems to start over. I've also noticed that they execute in reverse order: I get an output of 3, 2, 1 in the terminal. How should I achieve what I want? If I call the previous functions before running the one I want, I feel like I'm repetitively testing the same thing over and over, since each one is a test (e.g., in testContentCreation I would call testLogin and then testFolderCreation, and inside testFolderCreation call testLogin again; with more steps, testLogin would be called many times). Should I instead turn the previous steps into regular non-test functions and have the final test call the previous ones in order? If I do it that way, then if any of the steps fails the last one fails, and there would be one big test function.
Any suggestions on how you should write this type of test?
Also, why are the tests running in reverse order?
Thanks!
You are seeing what you are seeing, I think, because you are making some incorrect assumptions about the assumptions unittest makes. Each test case is assumed to be a self-contained entity, so there is no run order imposed. In addition, setUp() and tearDown() run before and after each individual test. If you want class-level setup/teardown, define the class methods setUpClass() and tearDownClass(). You may also want to look into the TestSuite class. More here: http://docs.python.org/library/unittest.html
Keep in mind that when the unittest library does test discovery (reflects on your TestCase class to find the test methods to run), the default loader collects the method names and sorts them alphabetically: testContentCreation, testFolderCreation, testLogin. That is why your output appears as 3, 2, 1. In any case, you should not rely on any particular run order between test methods.
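For illustration, a rough sketch of the class-level setup described above, with the three steps folded into one ordered workflow test; the helper bodies are placeholders to fill in with your own Selenium steps:

import unittest
from selenium import webdriver

class TestSelenium(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # one browser instance shared by the whole class
        cls.driver = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

    def _login(self):
        ...  # log in with self.driver

    def _create_folder(self):
        ...  # create the folder

    def test_full_workflow(self):
        # the steps run in the order you need, inside a single test
        self._login()
        self._create_folder()
        # add content to the folder here and assert on the result

if __name__ == '__main__':
    unittest.main()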