I'm building a test suite with py.test that starts by generating randomly simulated files (file0.txt, file1.txt, etc.), and the filenames are stored in an initialization object. The tests themselves are then generated by pytest_generate_tests.
Tests are generated from a YAML file which includes an input string like cat %s and a substitution string like file*.txt; pytest_generate_tests creates one test per file the pattern matches. So I need the files to exist before pytest_generate_tests is called, otherwise no files are matched.
Before I encountered this issue, I had an initialization fixture in conftest.py:
@pytest.fixture(scope="session", autouse=True)
def initializer(request):
    # ...do some stuff with the request
    return InitializeTests(...)

class InitializeTests():
    def __init__(self, n):
        # ...generate n files
which I could then use in the file tests_0.py:
def test_a(initializer, input_string):
    # ...
and test_a's are generated by:
def pytest_generate_tests(metafunc):
    input_strings = manipulate_the_yaml_file()  # This requires the files to exist.
    if "input_string" in metafunc.fixturenames:
        metafunc.parametrize("input_string", input_strings)
I then tried using a global variable to hold the initializer and share it across files, as explained here: I put the initialization at the top of pytest_generate_tests and called conftest.initializer from within test_a, but then the initialization step gets run for every test method I add (test_b, etc.).
So the question is, how can I run a method before pytest_generate_tests and keep the instance of the initialization class across all tests in the session?
Just writing the problem down suggested an obvious solution for the second approach using globals:
if "initializer" not in globals():
    initialize()
where initialize creates the global variable initializer, so it is only created once. However, I don't really like working with globals, and I suspect fixtures or some other py.test technique could do this more cleanly, so I'd gladly hear a better answer.
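For what it's worth, one pattern that avoids a bare global (a sketch, not from the original post; the argument to InitializeTests is a placeholder) is to do the file generation in pytest_configure, which pytest calls once before collection and therefore before pytest_generate_tests, stash the instance on the config object, and hand it to tests through a thin session-scoped fixture:

# conftest.py
import pytest

def pytest_configure(config):
    # Runs once, before collection, so the simulated files already exist
    # when pytest_generate_tests expands the YAML patterns.
    config._initializer = InitializeTests(5)   # placeholder argument

@pytest.fixture(scope="session")
def initializer(request):
    # Tests keep requesting "initializer" as before; it just hands back
    # the instance created in pytest_configure.
    return request.config._initializer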
Related
I have a handful of fixtures in conftest.py that work well inside actual test functions. However, I would like to parameterize some tests using pytest_generate_tests() based on the data in some of these fixtures.
What I'd like to do (simplified):
-- conftest.py --

# my fixture returns a list of device names.
@pytest.fixture(scope="module")
def device_list(something):
    return ['dev1', 'dev2', 'dev3', 'test']

-- test001.py --

# generate tests using the device_list fixture I defined above.
def pytest_generate_tests(metafunc):
    metafunc.parametrize('devices', itertools.chain(device_list), ids=repr)

# A test that is parametrized by the above function.
def test_do_stuff(devices):
    assert "dev" in devices

# Output should/would be:
dev1: pass
dev2: pass
dev3: pass
test: FAIL
Of course, the problem I'm hitting is that in pytest_generate_tests(), it complains that device_list is undefined. If I try to pass it in, pytest_generate_tests(metafunc, device_list), I get an error.
E pluggy.callers.HookCallError: hook call must provide argument 'device_list'
The reason I want to do this is that I use that device_list list inside a bunch of different tests in different files, so I want to use pytest_generate_tests() to parametrize tests using the same list.
Is this just not possible? What is the point of using pytest_generate_tests() if I have to duplicate my fixtures inside that function?
From what I've gathered over the years, fixtures are pretty tightly coupled to pytest's post-collection stage. I've tried a number of times to do something similar, and it's never really quite worked out.
Instead, you could make a function that does the things your fixture would do, and call that inside the generate_tests hook. Then if you need it still as a fixture, call it again (or save the result or whatever).
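A sketch of that suggestion (the file split and names are illustrative, and the plain helper could just as well live in a small shared module instead of conftest.py):

# conftest.py
import pytest

def get_device_list():
    # Plain function: callable from pytest_generate_tests or anywhere else.
    return ['dev1', 'dev2', 'dev3', 'test']

@pytest.fixture(scope="module")
def device_list():
    # Keep a fixture for tests that want the list injected directly.
    return get_device_list()

# test001.py
from conftest import get_device_list

def pytest_generate_tests(metafunc):
    if 'devices' in metafunc.fixturenames:
        metafunc.parametrize('devices', get_device_list(), ids=repr)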
@pytest.fixture(scope="module", autouse=True)
def device_list(something):
    device_list = ['dev1', 'dev2', 'dev3', 'test']
    return device_list
By using autouse=True in the pytest fixture decorator you can ensure that pytest_generate_tests has access to device_list.
This article provides a workaround. Have a look at the section Hooks at the rescue, and you'll find this:
import importlib

def load_tests(name):
    # Load module which contains test data
    tests_module = importlib.import_module(name)
    # Tests are to be found in the variable `tests` of the module
    for test in tests_module.tests.iteritems():
        yield test

def pytest_generate_tests(metafunc):
    """This allows us to load tests from external files by
    parametrizing tests with each test case found in a data_X
    file
    """
    for fixture in metafunc.fixturenames:
        if fixture.startswith('data_'):
            # Load associated test data
            tests = load_tests(fixture)
            metafunc.parametrize(fixture, tests)
As you can see, it loads the test data for every fixture whose name is prefixed with data_ by importing a module of that name.
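To make the convention concrete, a data module and a test consuming it might look roughly like this (module, fixture and test names are made up; under Python 3 the iteritems() call above would be items()):

# data_wordcount.py -- the hook imports this module because a test asks for a data_wordcount fixture
tests = {
    "empty": ("", 0),
    "two_words": ("hello world", 2),
}

# test_wordcount.py
def test_word_count(data_wordcount):
    # Each parametrized case is one (name, value) pair yielded from `tests`.
    name, (text, expected) = data_wordcount
    assert len(text.split()) == expected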
Due to circular-import issues which are common with Celery tasks in Django, I'm often importing Celery tasks inside of my methods, like so:
# some code omitted for brevity
# accounts/models.py
def refresh_library(self, queue_type="regular"):
    from core.tasks import refresh_user_library

    refresh_user_library.apply_async(
        kwargs={"user_id": self.user.id}, queue=queue_type
    )
    return 0
In my pytest test for refresh_library, I'd only like to test that refresh_user_library (the Celery task) is called with the correct args and kwargs. But this isn't working:
# tests/test_accounts_models.py
#mock.patch("accounts.models.UserProfile.refresh_library.refresh_user_library")
def test_refresh_library():
Error is about refresh_library not having an attribute refresh_user_library.
I suspect this is due to the fact that the task(refresh_user_library) is imported inside the function itself, but I'm not too experienced with mocking so this might be completely wrong.
If you don't want to actually execute apply_async but only make sure you are passing the correct arguments, you need to mock it. In your question you're patching the wrong path. You should do:
# tests/test_accounts_models.py
#mock.patch("core.tasks.rehresh_user_library.apply_sync")
def test_refresh_library():
In your refresh_library method, refresh_user_library is just a local name, not an attribute of the method. What you want to patch is the real qualified name of the function you want to mock:
#mock.patch("core.tasks.refresh_user_library")
def test_refresh_library():
# you test here
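A fuller version of that test might look like this (a sketch; how the UserProfile instance gets built is an assumption, and make_user_profile is a hypothetical helper):

# tests/test_accounts_models.py
from unittest import mock

@mock.patch("core.tasks.refresh_user_library")
def test_refresh_library(mock_task):
    profile = make_user_profile()   # hypothetical factory for a UserProfile
    profile.refresh_library(queue_type="regular")
    # The local import inside refresh_library picks up the mock, so the
    # call arguments can be asserted on it.
    mock_task.apply_async.assert_called_once_with(
        kwargs={"user_id": profile.user.id}, queue="regular"
    )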
I am just getting started with Django, so this may be something stupid, but I am not even sure what to google at this point.
I have a method that looks like this:
def get_user(self, user):
    return Utilities.get_userprofile(user)
The get_userprofile method in the Utilities class looks like this:
@staticmethod
def get_userprofile(user):
    return UserProfile.objects.filter(user_auth__username=user)[0]
When I include this in the view, everything is fine. When I write a test case to use any of the methods inside the Utility class, I get None back:
Two test cases:
def test_stack_overflow(self):
    a = ObjName()
    print(a.get_user('admin'))

def test_Utility(self):
    print(Utilities.get_user('admin'))
Results:
Creating test database for alias 'default'...
None
..None
.
----------------------------------------------------------------------
Can someone tell me why this is working in the view, but not working inside of the test case and does not generate any error messages?
Thanks
Verify whether your unit tests comply with the following:
TestClass must be written in a file named test*.py
TestClass must subclass unittest.TestCase
TestClass should have a setUp function to create objects in the database (this is the usual place, but object creation can happen in the test functions as well)
TestClass functions should start with test so that they are identified and run by the ./manage.py test command
TestClass may have a tearDown function to properly end the unit test case.
Test Case Execution process:
When you run ./manage.py test, Django sets up a test database (test_your_database_name), creates all the objects mentioned in the setUp function (usually), and executes the test functions in the order they appear in the class. Once all test functions have run, it looks for a tearDown function, executes it if present, and destroys the test database.
The problem may be that you have not created the required objects in setUp (or elsewhere in the TestClass), so the test database is empty.
Could you post the entire traceback and the test file so we can help you better?
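If the empty test database is indeed the cause, a setUp along these lines should make the query succeed (a sketch; the field names are guesses based on the user_auth__username lookup, and the import path for UserProfile is project-specific):

from django.contrib.auth.models import User
from django.test import TestCase

class UtilitiesTest(TestCase):
    def setUp(self):
        # The test database starts empty, so create the rows the filter expects.
        auth_user = User.objects.create_user(username="admin", password="secret")
        UserProfile.objects.create(user_auth=auth_user)   # assumes this field name

    def test_get_userprofile(self):
        self.assertIsNotNone(Utilities.get_userprofile("admin"))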
I've read the pytest documentation. Section 7.4.3 gives instructions for registering markers. I have followed the instructions exactly, but it doesn't seem to have worked for me.
I'm using Python 2.7.2 and pytest 2.5.1.
I have a pytest.ini file at the root of my project. Here is the entire contents of that file:
[pytest]
python_files=*.py
python_classes=Check
python_functions=test
rsyncdirs = . logs
rsyncignore = docs archive third_party .git procs
markers =
mammoth: mark a test as part of the Mammoth regression suite
A little background to give context: The folks that created the automation framework I am working on no longer work for the company. They created a custom plugin that extended the functionality of the default pytest.mark. From what I understand, the only thing the custom plugin does is make it so that I can add marks to a test like this:
@pytest.marks(CompeteMarks.MAMMOTH, CompeteMarks.QUICK_TEST_A, CompeteMarks.PROD_BVT)
def my_test(self):
instead of like this:
@pytest.mark.mammoth
@pytest.mark.quick_test_a
@pytest.mark.prod_bvt
def my_test(self):
The custom plugin code remains present in the code base. I do not know if that has any negative effect on trying to register a mark, but thought it was worth mentioning if someone knows otherwise.
The problem I'm having is that when I execute the following command on the command line, I do NOT see my mammoth mark listed among the other registered marks.
py.test --markers
The output returned after running the above command is this:
#pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
#pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
#pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: #parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
#pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
#pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
#pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
What am I doing wrong and how can I get my mark registered?
One more piece of info: I had applied the mammoth mark to a single test (shown below) when I ran the py.test --markers command:
@pytest.mark.mammoth
def my_test(self):
If I understand your comments correctly the project layout is the following:
~/customersites/
~/customersites/automation/
~/customersites/automation/pytest.ini
Then invoking py.test as follows:
~/customersites$ py.test --markers
will make py.test look for a configuration file in ~/customersites/ and subsequently all the parents: ~/, /home/, /. In this case this will not make it find pytest.ini.
However, when you invoke it with one or more arguments, py.test will try to interpret each argument as a file or directory and start looking for a configuration file from that directory and its parents. It then iterates through all arguments in order until it finds the first configuration file.
So with the above directory layout invoking py.test as follows will make it find pytest.ini and show the markers registered in it:
~/customersites$ py.test automation --markers
as now py.test will first look in ~/customersites/automation/ for a configuration file before going up the directory tree and looking in ~/customersites/. But since it finds one in ~/customersites/automation/pytest.ini it stops there and uses that.
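If juggling the configuration file location is awkward, reasonably recent pytest versions also let a conftest.py register markers programmatically through the pytest_configure hook, which is equivalent to the markers section of pytest.ini:

# conftest.py (next to the tests, or at the directory you invoke py.test from)
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "mammoth: mark a test as part of the Mammoth regression suite"
    )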
Have you tried here?
From the docs:
API reference for mark related objects
class MarkGenerator
Factory for MarkDecorator objects - exposed as a pytest.mark singleton instance.
Example:
import pytest

@pytest.mark.slowtest
def test_function():
    pass
will set a slowtest MarkInfo object on the test_function object.
class MarkDecorator(name, args=None, kwargs=None)
A decorator for test functions and test classes. When applied it will
create MarkInfo objects which may be retrieved by hooks as item keywords.
MarkDecorator instances are often created like this:
mark1 = pytest.mark.NAME # simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value) # parametrized MarkDecorator
and can then be applied as decorators to test functions:
@mark2
def test_function():
    pass
My test file is basically:
class Test(unittest.TestCase):
    def testOk(self):
        pass

if __name__ == "__main__":
    expensiveSetup()
    try:
        unittest.main()
    finally:
        cleanUp()
However, I do wish to run my test through Netbeans testing tools, and to do that I need unittests that don't rely on an environment setup done in main. Looking at Caching result of setUp() using Python unittest - it recommends using Nose. However, I don't think Netbeans supports this. I didn't find any information indicating that it does. Additionally, I am the only one here actually writing tests, so I don't want to introduce additional dependencies for the other 2 developers unless they are needed.
How can I do the setup and cleanup once for all the tests in my TestSuite?
The expensive setup here is creating some files with dummy data, as well as setting up and tearing down a simple xml-rpc server. I also have 2 test classes, one testing locally and one testing all methods over xml-rpc.
If you use Python >= 2.7 (or unittest2 for Python >= 2.4 & <= 2.6), the best approach would be to use
def setUpClass(cls):
    # ...
setUpClass = classmethod(setUpClass)
to perform some initialization once for all tests belonging to the given class.
And to perform the cleanup, use:
@classmethod
def tearDownClass(cls):
    # ...
See also the unittest standard library documentation on setUpClass and tearDownClass classmethods.
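Applied to the question's scenario, that could look roughly like this (the helper functions for file generation and the xml-rpc server are placeholders):

import unittest

class LocalTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once, before any test in this class.
        create_dummy_files()                 # placeholder helper
        cls.server = start_xmlrpc_server()   # placeholder helper

    @classmethod
    def tearDownClass(cls):
        # Runs once, after the last test in this class.
        cls.server.shutdown()
        remove_dummy_files()                 # placeholder helper

    def test_ok(self):
        self.assertTrue(True)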
This is what I do:
class TestSearch(unittest.TestCase):
    """General Search tests for...."""
    matcher = None
    counter = 0
    num_of_tests = None

    def setUp(self):  # pylint: disable-msg=C0103
        """Only instantiate the matcher once"""
        if self.matcher is None:
            self.__class__.matcher = Matcher()
            self.__class__.num_of_tests = len(filter(self.isTestMethod, dir(self)))
        self.__class__.counter = self.counter + 1

    def tearDown(self):  # pylint: disable-msg=C0103
        """And kill it when done"""
        if self.counter == self.num_of_tests:
            print 'KILL KILL KILL'
            del self.__class__.matcher
Sadly (because I do want my tests to be independent and deterministic), I do this a lot, because system tests that take less than 5 minutes are also important.
First of all, what S. Lott said. However!, you do not want to do that. There is a reason setUp and tearDown are wrapped around each test: they help preserve the determinism of testing.
Otherwise, if some test places the system in a bad state, your next tests may fail. Ideally, each of your tests should be independent.
Also, if you insist on doing it this way, instead of writing by hand self.runTest1(), self.runTest2(), you might want to do a bit of introspection in order to find the methods to run.
Won't package-level initialization do it for you? From the Nose Wiki:
nose allows tests to be grouped into test packages. This allows package-level setup; for instance, if you need to create a test database or other data fixture for your tests, you may create it in package setup and remove it in package teardown once per test run, rather than having to create and tear it down once per test module or test case.
To create package-level setup and teardown methods, define setup and/or teardown functions in the __init__.py of a test package. Setup methods may be named setup, setup_package, setUp, or setUpPackage; teardown may be named teardown, teardown_package, tearDown or tearDownPackage. Execution of tests in a test package begins as soon as the first test module is loaded from the test package.
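In practice that means something like this in the test package's __init__.py (a sketch; the helpers for file generation and the xml-rpc server are placeholders):

# tests/__init__.py
_server = None

def setup_package():
    # Runs once per test run, before any test in this package.
    global _server
    create_dummy_files()                 # placeholder helper
    _server = start_xmlrpc_server()      # placeholder helper

def teardown_package():
    # Runs once per test run, after all tests in this package.
    _server.shutdown()
    remove_dummy_files()                 # placeholder helper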
You can keep a flag recording whether expensiveSetup() has run or not:
_expensive_setup_has_run = False

class ExpensiveSetupMixin(unittest.TestCase):
    def setUp(self):
        # single leading underscore: a double-underscore name would be
        # mangled inside the class body and break the global lookup
        global _expensive_setup_has_run
        super(ExpensiveSetupMixin, self).setUp()
        if not _expensive_setup_has_run:
            expensiveSetup()
            _expensive_setup_has_run = True
Or some variation of this. Maybe ping the xml-rpc server and create a new one if it isn't answering.
But the unit-testing way, AFAIK, is to set up and tear down per test even if it is expensive.
I know nothing about Netbeans, but I thought I should mention zope.testrunner and its support for a nifty thing: layers. Basically, you do the test setup in separate classes and attach those classes to the tests. These classes can inherit from each other, forming layers of setups. The testrunner will then only call each setup once, saving its state in memory, and instead of setting up and tearing down for every test it will simply copy the relevant layer context as a setup.
This speeds up test setup a lot, and is used when you test Zope products and Plone, where the test setup often requires you to start a Plone CMS server, create a Plone site and add loads of content, a process that can take upwards of half a minute. Doing that for each test method is obviously impossible, but with layers it is done only once. This shortens the test setup and protects the test methods from each other, which means that the testing remains deterministic.
So I don't know if zope.testrunner will work for you, but it's worth a try.
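For a feel of what that looks like, a layer is roughly a class with class-level setUp/tearDown that test cases point at via a layer attribute (a rough sketch, assuming zope.testrunner; the server helper is a placeholder):

import unittest

class ServerLayer(object):
    """Set up once for every test class that declares layer = ServerLayer."""

    @classmethod
    def setUp(cls):
        cls.server = start_xmlrpc_server()   # placeholder helper

    @classmethod
    def tearDown(cls):
        cls.server.shutdown()

class RemoteTests(unittest.TestCase):
    layer = ServerLayer

    def test_server_is_up(self):
        self.assertTrue(ServerLayer.server is not None)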
It should be possible to do this by defining startTestRun and stopTestRun on the unittest.TestResult class. See this answer: https://stackoverflow.com/a/64892396/2679740
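A sketch of that idea, reusing the question's expensiveSetup/cleanUp helpers (startTestRun and stopTestRun are real unittest.TestResult hooks called once per run; monkey-patching them in like this is just one way to wire it up):

import unittest

_orig_start = unittest.TestResult.startTestRun
_orig_stop = unittest.TestResult.stopTestRun

def startTestRun(self):
    # Called exactly once, before the first test of the run.
    expensiveSetup()
    _orig_start(self)

def stopTestRun(self):
    # Called exactly once, after the last test of the run.
    _orig_stop(self)
    cleanUp()

unittest.TestResult.startTestRun = startTestRun
unittest.TestResult.stopTestRun = stopTestRun

if __name__ == "__main__":
    unittest.main()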
You can ensure setUp and tearDown execute only once if you have just one test method, runTest. This method can do whatever else it wants. Just be sure you don't have any methods with names that start with test.
class MyExpensiveTest( unittest.TestCase ):
    def setUp( self ):
        self.resource = owThatHurts()

    def tearDown( self ):
        self.resource.flush()
        self.resource.finish()

    def runTest( self ):
        self.runTest1()
        self.runTest2()

    def runTest1( self ):
        self.assertEquals(...)

    def runTest2( self ):
        self.assertEquals(...)
It doesn't automagically figure out what to run. If you add a test method, you also have to update runTest.
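If the manual bookkeeping bothers you, runTest can discover its helper methods by introspection, as hinted at in an earlier answer (a sketch; the check_ prefix is just a convention chosen here):

import unittest

class MyExpensiveTest(unittest.TestCase):
    # setUp/tearDown as in the answer above, omitted here.

    def runTest(self):
        # Find every check_* helper so adding one does not require editing runTest.
        for name in sorted(dir(self)):
            if name.startswith("check_"):
                getattr(self, name)()

    def check_addition(self):
        self.assertEqual(1 + 1, 2)

    def check_concatenation(self):
        self.assertEqual("ab", "a" + "b")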