In my code I have:
def fn1():
    """
    fn1 description

    Parameters
    ----------
    components_list : list
        List of IDs of the reference groups

    Return
    ------
    ret : str
        A string representing the aggregate expression (See example)

    Example
    -------
    >>> reference_groups = [1,2,3,4]
    >>> expression = suite.getComponentsExpression(reference_groups)
    (tier1 and tier2) or (my_group)
    """
    # some code here
When I run ./manage.py test, it tries to run the example as a doctest. Is there a way to prevent this from happening, e.g. a CLI option to skip docstrings?
Django 1.6 introduced a new test runner that does not run doctests, so one way to prevent these tests from running is to upgrade Django!
If that's not possible, the proper fix would be to subclass DjangoTestSuiteRunner, disable the doctest loading, and tell Django to use your new test runner via the TEST_RUNNER setting.
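For reference, a minimal sketch of such a runner, assuming Django < 1.6 and its django.test.simple.DjangoTestSuiteRunner (the NoDoctestRunner class and the myproject.testrunner module path are illustrative names, not something Django provides):

import doctest
import unittest

from django.test.simple import DjangoTestSuiteRunner  # Django < 1.6

def _iter_tests(suite_or_test):
    # flatten nested TestSuites into individual test cases
    if isinstance(suite_or_test, unittest.TestSuite):
        for item in suite_or_test:
            for test in _iter_tests(item):
                yield test
    else:
        yield suite_or_test

class NoDoctestRunner(DjangoTestSuiteRunner):
    def build_suite(self, *args, **kwargs):
        suite = super(NoDoctestRunner, self).build_suite(*args, **kwargs)
        # keep everything except tests that were collected from docstrings
        return unittest.TestSuite(
            test for test in _iter_tests(suite)
            if not isinstance(test, doctest.DocTestCase)
        )

Then point settings.py at it:

TEST_RUNNER = 'myproject.testrunner.NoDoctestRunner'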
Or if that's too tricky, a quick hack would be to change your docstrings so that they don't look like doctests, e.g.
>.> reference_groups = [1,2,3,4]
I defined the following fixture in a test file:
import os

import pytest
from dotenv import load_dotenv, find_dotenv
from packaging import version  # for comparing version numbers

load_dotenv(find_dotenv())

VERSION = os.environ.get("VERSION")
API_URL = os.environ.get("API_URL")
@pytest.fixture()
def skip_before_version():
    """
    Creates a fixture that takes parameters.
    Skips a test if it depends on certain features implemented in a certain version.

    :parameter target_version:
    :parameter type: string
    """
    def _skip_before(target_version):
        less_than = version.parse(VERSION) < version.parse(target_version)
        return pytest.mark.skipif(less_than)
    return _skip_before

skip_before = skip_before_version()("0.0.1")
I want to use skip_before as a fixture in certain tests. I call it like this:
# @skip_before_version("0.0.1")  # tried this before and got the same error, so tried reworking it...
@when(parsers.cfparse("{categories} are added as categories"))
def add_categories(skip_before, create_tree, categories):  # now putting the fixture alongside parameters
    pass
When I run this, I get the following error:
Fixture "skip_before_version" called directly. Fixtures are not meant to be called directly,
but are created automatically when test functions request them as parameters.
See https://docs.pytest.org/en/stable/fixture.html for more information about fixtures, and
https://docs.pytest.org/en/stable/deprecations.html#calling-fixtures-directly about how to update your code.
How is this still being called directly? How can I fix this?
If I understand your goal correctly, you want to be able to skip tests based on a version restriction specifier. There are many ways to do that; I can suggest an autouse fixture that will skip the test based on a custom marker condition. Example:
import os

import pytest
from packaging.specifiers import SpecifierSet

VERSION = "1.2.3"  # read from environment etc.

@pytest.fixture(autouse=True)
def skip_based_on_version_compat(request):
    # get the version_compat marker
    version_compat = request.node.get_closest_marker("version_compat")
    if version_compat is None:  # test is not marked
        return
    if not version_compat.args:  # no specifier passed to the marker
        return
    spec_arg = version_compat.args[0]
    spec = SpecifierSet(spec_arg)
    if VERSION not in spec:
        pytest.skip(f"Current version {VERSION} doesn't match test specifiers {spec_arg!r}.")
The fixture skip_based_on_version_compat will be invoked on each test, but will only do something if the test is marked with @pytest.mark.version_compat. Example tests:
@pytest.mark.version_compat(">=1.0.0")
def test_current_gen():
    assert True

@pytest.mark.version_compat(">=2.0.0")
def test_next_gen():
    raise NotImplementedError()
With VERSION = "1.2.3", the first test will be executed and the second one will be skipped. Notice the invocation of pytest.skip to skip the test immediately. Returning pytest.mark.skip from the fixture would get you nothing, since markers are evaluated long before the fixture runs.
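If you also register the custom marker, pytest won't warn about an unknown mark on newer versions (a sketch assuming an ini-style config; the description text is just illustrative):

[pytest]
markers =
    version_compat(spec): run the test only if the current version satisfies the given specifier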
Also, I noticed you are writing gherkin tests (using pytest-bdd, presumably). With the above approach, skipping whole scenarios should also be possible:
@pytest.mark.version_compat(">=1.0.0")
@scenario("my.feature", "my scenario")
def test_scenario():
    pass
Alternatively, you can mark the scenarios in feature files:
Feature: Foo
    Lorem ipsum dolor sit amet.

    @version_compat(">=1.0.0")
    Scenario: doing future stuff
        Given foo is implemented
        When I see foo
        Then I do bar
and use pytest-bdd's own hooks:
import re

def pytest_bdd_apply_tag(tag, function):
    matcher = re.match(r'^version_compat\("(?P<spec_arg>.*)"\)$', tag)
    if matcher is None:  # not a version_compat tag - fall back to the default behaviour
        return None
    spec_arg = matcher.groupdict()["spec_arg"]
    spec = SpecifierSet(spec_arg)
    if VERSION not in spec:
        marker = pytest.mark.skip(
            reason=f"Current version {VERSION} doesn't match restriction {spec_arg!r}."
        )
        marker(function)
    return True
Unfortunately, neither custom fixtures nor markers will work for skipping single steps (and you would still be skipping the whole scenario anyway, since a scenario is an atomic test unit in gherkin). I didn't find a reliable way to make pytest-bdd steps cooperate with pytest machinery; they seem to be simply ignored. Nevertheless, you can easily write a custom decorator serving the same purpose:
import functools

def version_compat(spec_arg):
    def deco(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            spec = SpecifierSet(spec_arg)
            if VERSION not in spec:
                pytest.skip(f"Current version {VERSION} doesn't match test specifiers {spec_arg!r}.")
            return func(*args, **kwargs)
        return wrapper
    return deco
Using the version_compat decorator in a step:
@when('I am foo')
@version_compat(">=2.0.0")
def i_am_foo():
    ...
Pay attention to the ordering: placing the decorator outside of pytest-bdd's own decorators will not trigger it (I guess worth opening an issue, but meh).
I have some similar unit tests in Python.
They are so similar that only one argument changes.
class TestFoo(TestCase):

    def test_typeA(self):
        self.assertTrue(foo(bar=TYPE_A))

    def test_typeB(self):
        self.assertTrue(foo(bar=TYPE_B))

    def test_typeC(self):
        self.assertTrue(foo(bar=TYPE_C))

    ...
Obviously this is not very DRY, and if you have even 4-5 different options the code gets very repetitive.
Now I could do something like this:
class TestFoo(TestCase):

    BAR_TYPES = (
        TYPE_A,
        TYPE_B,
        TYPE_C,
        ...
    )

    def _foo_test(self, bar_type):
        self.assertTrue(foo(bar=bar_type))

    def test_foo_bar_type(self):
        for bar_type in self.BAR_TYPES:
            self._foo_test(bar_type)
Which works; however, when an exception gets raised, how will I know whether _foo_test failed with argument TYPE_A, TYPE_B, or TYPE_C?
Perhaps there is a better way of structuring these very similar tests?
What you are trying to do is essentially a parameterized test. This feature isn't included in Django or the standard Python unittest module, but a number of libraries provide it: nose-parameterized, py.test, ddt.
My favorite so far is ddt: it most closely resembles NUnit/JUnit-style parameterized tests, is pretty lightweight, doesn't get in your way, and doesn't require a dedicated test runner (as nose-parameterized does). The way it helps you here is that it modifies the test name to include all parameters, so you can clearly see which case failed just by looking at the test name.
With ddt your example would look like this:
import ddt

@ddt.ddt
class TestFoo(TestCase):

    @ddt.data(TYPE_A, TYPE_B, TYPE_C)
    def test_foo_bar_type(self, bar_type):
        self.assertTrue(foo(bar=bar_type))
In that case names will look like test_foo_bar_type__TYPE_A (technically, it constructs something like [test_name]__[repr(parameter_1)]__[repr(parameter_2)]).
As a bonus, it is much cleaner (no helper method), and you get three test methods instead of one. The advantage here is that you can exercise various code paths in a method and get one test case per path (though a certain amount of thinking is needed; sometimes it's better to have a dedicated test for some code paths).
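For comparison, a rough sketch of the same test using py.test's built-in parametrization. This assumes a plain test function rather than a TestCase method (pytest.mark.parametrize does not work on unittest.TestCase methods) and that foo and the TYPE_* constants are importable; the myapp.things path is hypothetical:

import pytest

from myapp.things import foo, TYPE_A, TYPE_B, TYPE_C  # hypothetical import path

@pytest.mark.parametrize("bar_type", [TYPE_A, TYPE_B, TYPE_C])
def test_foo_bar_type(bar_type):
    # each parameter becomes its own test with its own id in the output
    assert foo(bar=bar_type)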
Most TestCase assertion methods, including assertTrue, take an optional msg argument.
If you change your BAR_TYPES tuple to include the variable names, then you can include this in the message that is shown when the assertion fails.
class TestFoo(TestCase):

    BAR_TYPES = (
        ('TYPE_A', TYPE_A),
        ('TYPE_B', TYPE_B),
        ('TYPE_C', TYPE_C),
        ...
    )

    def _foo_test(self, var_name, bar_type):
        self.assertTrue(foo(bar=bar_type), var_name)

    def test_foo_bar_type(self):
        for (var_name, bar_type) in self.BAR_TYPES:
            self._foo_test(var_name, bar_type)
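Another option, assuming Python 3.4+ is available: unittest's subTest context manager reports the failing parameter for you, so the helper method and the name tuple are not needed:

class TestFoo(TestCase):

    BAR_TYPES = (TYPE_A, TYPE_B, TYPE_C)

    def test_foo_bar_type(self):
        for bar_type in self.BAR_TYPES:
            with self.subTest(bar_type=bar_type):
                # a failure is reported together with the offending bar_type value,
                # and the loop continues with the remaining types
                self.assertTrue(foo(bar=bar_type))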
I've read the pytest documentation. Section 7.4.3 gives instructions for registering markers. I have followed the instructions exactly, but it doesn't seem to have worked for me.
I'm using Python 2.7.2 and pytest 2.5.1.
I have a pytest.ini file at the root of my project. Here is the entire contents of that file:
[pytest]
python_files=*.py
python_classes=Check
python_functions=test
rsyncdirs = . logs
rsyncignore = docs archive third_party .git procs
markers =
    mammoth: mark a test as part of the Mammoth regression suite
A little background to give context: The folks that created the automation framework I am working on no longer work for the company. They created a custom plugin that extended the functionality of the default pytest.mark. From what I understand, the only thing the custom plugin does is make it so that I can add marks to a test like this:
@pytest.marks(CompeteMarks.MAMMOTH, CompeteMarks.QUICK_TEST_A, CompeteMarks.PROD_BVT)
def my_test(self):
instead of like this:
@pytest.mark.mammoth
@pytest.mark.quick_test_a
@pytest.mark.prod_bvt
def my_test(self):
The custom plugin code remains present in the code base. I do not know if that has any negative effect on trying to register a mark, but thought it was worth mentioning if someone knows otherwise.
The problem I'm having is that when I execute the following command on the command line, I do NOT see my mammoth mark listed among the other registered marks.
py.test --markers
The output returned after running the above command is this:
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
What am I doing wrong and how can I get my mark registered?
One more piece of info: I had applied the mammoth mark to a single test (shown below) when I ran the py.test --markers command:
@pytest.mark.mammoth
def my_test(self):
If I understand your comments correctly, the project layout is the following:
~/customersites/
~/customersites/automation/
~/customersites/automation/pytest.ini
Then invoking py.test as follows:
~/customersites$ py.test --markers
will make py.test look for a configuration file in ~/customersites/ and subsequently in all of its parents: ~/, /home/, /. In this case it will not find pytest.ini.
However, when you invoke it with one or more arguments, py.test will try to interpret each argument as a file or directory and start looking for a configuration file in that directory and its parents. It then iterates through the arguments in order until it finds the first configuration file.
So with the above directory layout invoking py.test as follows will make it find pytest.ini and show the markers registered in it:
~/customersites$ py.test automation --markers
as now py.test will first look in ~/customersites/automation/ for a configuration file before going up the directory tree and looking in ~/customersites/. But since it finds one in ~/customersites/automation/pytest.ini it stops there and uses that.
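Equivalently, to illustrate the same discovery rule, you can run py.test from inside the directory that contains pytest.ini:

~/customersites$ cd automation
~/customersites/automation$ py.test --markers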
Have you tried here?
From the docs:
API reference for mark related objects
class MarkGenerator[source]
Factory for MarkDecorator objects - exposed as a pytest.mark singleton
instance.
Example:
import pytest

@pytest.mark.slowtest
def test_function():
    pass
will set a slowtest MarkInfo object on the test_function object.
class MarkDecorator(name, args=None, kwargs=None)[source]
A decorator for test functions and test classes. When applied it will
create MarkInfo objects which may be retrieved by hooks as item keywords.
MarkDecorator instances are often created like this:
mark1 = pytest.mark.NAME # simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value) # parametrized MarkDecorator
and can then be applied as decorators to test functions:
@mark2
def test_function():
    pass
I've noticed that, when my Python unit tests contain a docstring at the top of the test function, the framework sometimes prints it in the test output. Normally, the test output contains one test per line:
<test name> ... ok
If the test has a docstring of the form
"""
test that so and so happens
"""
then all is well. But if the test has a docstring all on one line:
"""test that so and so happens"""
then the test output takes more than one line and includes the doc like this:
<test name>
test that so and so happens ... ok
I can't find where this behavior is documented. Is there a way to turn it off?
The first line of the docstring is used; the responsible method is TestCase.shortDescription(), which you can override in your testcases:
class MyTests(unittest.TestCase):
    # ....

    def shortDescription(self):
        return None
By always returning None you turn the feature off entirely. If you want to format the docstring differently, it's available as self._testMethodDoc.
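For example, here is a small sketch that reformats the description instead of suppressing it (collapsing the whole docstring onto one line is just one illustrative choice):

class MyTests(unittest.TestCase):
    def shortDescription(self):
        doc = self._testMethodDoc
        # use the whole docstring, collapsed onto one line, instead of only its first line
        return " ".join(doc.split()) if doc else None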
This is an improved version of MartijnPieters' excellent answer.
Instead of overriding that method for every test, it is more convenient (at least for me) to add the following file to your list of tests. Name the file test_[whatever you want].py.
test_config.py
import unittest

# Hide docstrings from verbose mode
unittest.TestCase.shortDescription = lambda x: None
This code snippet could also be placed in the __init__.py files of the test folder.
In my case, I just added it to the root folder of my project, scripts, since I use discovery via python -m unittest from scripts to run all the unit tests of my project. As this is the only test*.py file at that directory level, it will load before any other test.
(I tried the snippet in the __init__.py of the root folder, but it didn't seem to work, so I stuck with the file approach.)
BTW: I actually prefer lambda x: "\t" instead of lambda x: None
After reading this I made a plugin for nosetests to avoid the boilerplate.
https://github.com/MarechJ/nosenodocstrings
From the Python documentation (http://docs.python.org/library/unittest.html):
import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def tearDown(self):
        self.widget.dispose()
        self.widget = None

    def test_default_size(self):
        self.assertEqual(self.widget.size(), (50,50),
                         'incorrect default size')

    def test_resize(self):
        self.widget.resize(100,150)
        self.assertEqual(self.widget.size(), (100,150),
                         'wrong size after resize')
Here is how to invoke those test cases:
def suite():
    suite = unittest.TestSuite()
    suite.addTest(WidgetTestCase('test_default_size'))
    suite.addTest(WidgetTestCase('test_resize'))
    return suite
Is it possible to pass a parameter custom_parameter into WidgetTestCase, like this:
class WidgetTestCase(unittest.TestCase):
    def setUp(self, custom_parameter):
        self.widget = Widget('The widget')
        self.custom_parameter = custom_parameter
?
What I've done is, in the test_suite module, simply add:
WidgetTestCase.CustomParameter = "some_address"
The simplest solutions are the best :)
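In other words, a minimal sketch of this approach (the attribute and value are whatever you assign from the suite module): the test case reads a class attribute instead of taking a setUp argument:

class WidgetTestCase(unittest.TestCase):
    CustomParameter = None  # overridden from the suite module before the tests run

    def setUp(self):
        self.widget = Widget('The widget')
        self.custom_parameter = self.CustomParameter

# in the suite module:
WidgetTestCase.CustomParameter = "some_address"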
I've found a way to do this, but it's a bit of a kludge.
Basically, what I do is add, to the TestCase, an __init__ method which defines a 'default' parameter and a __str__ so that we can distinguish cases:
class WidgetTestCase(unittest.TestCase):
    def __init__(self, methodName='runTest'):
        self.parameter = default_parameter
        unittest.TestCase.__init__(self, methodName)

    def __str__(self):
        ''' Override this so that we know which instance it is '''
        return "%s(%s) (%s)" % (self._testMethodName, self.parameter, unittest._strclass(self.__class__))
Then in suite(), I iterate over my test parameters, replacing the default parameter with one specific to each test:
def suite():
    suite = unittest.TestSuite()
    for test_parameter in test_parameters:
        loadedtests = unittest.TestLoader().loadTestsFromTestCase(WidgetTestCase)
        for t in loadedtests:
            t.parameter = test_parameter
        suite.addTests(loadedtests)
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(OtherWidgetTestCases))
    return suite
where OtherWidgetTestCases are tests which don't need to be parameterised.
For instance I have a bunch of tests on real data for which a suite of tests need to be applied to each, but I also have some synthetic data sets, designed to test certain edge cases not normally present in the data, and I only need to apply certain tests to those, so they get their own tests in OtherWidgetTestCases.
This is something that has been on my mind recently. Yes, it is very possible to do. I called it scenario testing, but I think parameterized may be more accurate. I put a proof of concept up as a gist here. In short, it is a metaclass that allows you to define a scenario and run the tests against it a bunch of times. With it, your example could look something like this:
class WidgetTestCase(unittest.TestCase):
    __metaclass__ = ScenarioMeta

    class widget_width(ScenarioTest):
        scenarios = [
            dict(widget_in=Widget("One Way"), expected_tuple=(50, 50)),
            dict(widget_in=Widget("Another Way"), expected_tuple=(100, 150))
        ]

        def __test__(self, widget_in, expected_tuple):
            self.assertEqual(widget_in.size, expected_tuple)
When run, the metaclass writes out 2 separate tests, so the output would be something like:
$ python myscerariotest.py -v
test_widget_width_0 (__main__.widget_width) ... ok
test_widget_width_1 (__main__.widget_width) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.001s
OK
As you can see the scenarios are converted to tests at runtime.
Now I am not yet sure if this is even a good idea. I use it in tests where I have a lot of text centric cases that repeat the same assertions on slightly different data, which helps me to catch the little edge cases. But the classes in that gist do work and I believe it accomplishes what you are after.
Note that with some trickery the test cases can be given names and even pulled from an external source like a text file or database. It's not documented yet, but some digging around in the metaclass should get you started. There is also some more info and examples in my post here.
Edit
This is an ugly hack that I do not support anymore. The implementation should have been done as a subclass of TestCase, not as a hacked metaclass. Live and learn. An even better solution would be to use nose generators.
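For illustration, a minimal sketch of the nose test-generator style mentioned above (assuming nose as the runner and the Widget class from the question; plain asserts are used because generator tests are module-level functions, not TestCase methods):

def check_widget_width(widget_in, expected_tuple):
    assert widget_in.size() == expected_tuple

def test_widget_width():
    # nose runs one test per yielded (function, *args) tuple
    scenarios = [
        (Widget("One Way"), (50, 50)),
        (Widget("Another Way"), (100, 150)),
    ]
    for widget_in, expected_tuple in scenarios:
        yield check_widget_width, widget_in, expected_tuple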
I don't believe so; the signature of setUp needs to be what unittest is expecting. AFAIK, setUp is automagically called within the test case's run method as setUp(), so you're not going to be able to pass anything in unless you override run to pass in the variable you want. But I think what you want defeats the purpose of unit testing. Don't try to apply a DRY philosophy here; each unit you're testing should be a part of a class or even part of a function/method.
I don't think this is a good idea. Unit tests should be thorough enough that you test all functionality in your cases, so passing in different parameters shouldn't be required.
You mention you're passing in a web address; this is almost certainly not a good idea. What happens if you try to run the tests on a machine where the net connection is down? Your tests should be:
Automatic - they will run on all machines and platforms where your app is supported, without user intervention. They shouldn't rely on the external environment to pass. This means (amongst other things) that relying on a properly set up connection to the Internet is a bad idea. You can get around this by providing dummy data: instead of passing in a URL to a resource, abstract away the data source and pass in a data stream or whatever (see the sketch after this list). This is especially easy in Python, since you can make use of Python's duck typing to present a stream-like object (Python frequently uses a "file-like" object for this very reason!).
Thorough - your unit tests should have 100% code coverage, and cover all possible situations. You want to test your code with multiple sites? Instead, test your code with all the possible features that a site may include. Without knowing more about what your application does, I can't offer much advice on this point.
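A tiny illustration of the "file-like object" point above (count_widgets and its data format are hypothetical, purely for the example):

import io

def count_widgets(data_source):
    # the production code only needs something it can iterate line by line,
    # so a real file, a network response, or a test stub all work
    return sum(1 for line in data_source if line.strip())

def test_count_widgets_with_dummy_data():
    # no network connection required: a StringIO quacks like the
    # file-like object the production code expects
    fake_feed = io.StringIO("widget-1\nwidget-2\n\n")
    assert count_widgets(fake_feed) == 2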
Now, it looks like your tests are going to be heavily data-driven. There are many tools that allow you to define data sets for unit tests and load them in the tests. Check out Python test fixtures, for example.
I realise that this isn't the answer you're looking for, but I think you'll have more joy in the long run if you follow these principles.