I may be blind and missing something in the Python unittest framework (Python 2.7.10). I'm trying to mark a class as an expected failure, but only if the class is run on Windows; other platforms work correctly. So the basic concept would be:
@unittest.expectedFailureIf(sys.platform.startswith("win"), "Windows Fails")
class MyTestCase(unittest.TestCase):
    # some class here
As mentioned, neither Python 2 nor Python 3 (as of 3.8) has this built in.
You can pretty easily create this yourself, however, by defining it at the top of your file:
def expectedFailureIf(condition):
    """The test is marked as an expectedFailure if the condition is satisfied."""
    def wrapper(func):
        if condition:
            return unittest.expectedFailure(func)
        else:
            return func
    return wrapper
Then you can do essentially as you suggest (I have not added reason, as that isn't in the existing expectedFailure):
class MyTestCase(unittest.TestCase):
    # some class here
    @expectedFailureIf(sys.platform.startswith("win"))
    def test_known_to_fail_on_windows_only(self):
        ...
From the documentation https://docs.python.org/2/library/unittest.html#skipping-tests-and-expected-failures:
There is no expectedFailureIf(); you can use expectedFailure() or skipIf(sys.platform.startswith("win"), "Windows Fails").
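For example, a minimal sketch of the skipIf alternative (the test name here is just illustrative):
import sys
import unittest

class MyTestCase(unittest.TestCase):
    @unittest.skipIf(sys.platform.startswith("win"), "Windows Fails")
    def test_known_to_fail_on_windows_only(self):
        ...
Note the semantic difference: skipIf does not run the test at all on Windows, while the expectedFailureIf helper above still runs it and merely tolerates the failure.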
I am creating tests for some controller objects that obviously have dependencies. I want to test that they interact correctly with those dependencies without instantiating them, for obvious reasons (database connections). So I have a class like
class A:
    def __init__(self, some_dependency: InterfaceB):
        self.some_dependency: InterfaceB = some_dependency

    def do_thing(self, x):
        self.some_dependency.do_subtask(necessary_data=x)
And I want to test that it's correctly passing the value down to its dependency so I do something like this
def test_do_thing():
    a: A = A(some_dependency=create_autospec(InterfaceB))
    a.do_thing(1)
    assert a.some_dependency.do_subtask.called_with(necessary_data=1)
but the code completion/linter is going to complain that do_subtask is a function and doesn't have a called_with property, so I have to do
do_subtask: Mock = cast(Mock, a.some_dependency.do_subtask)
assert do_subtask.called_with(necessary_data=1)
which, if you imagine a controller with multiple dependencies, or even just multiple methods of one dependency, gets pretty tedious, is not very DRY, and is in fact very WET (What? Eeek Terrible). Is there any solution that does any one of the following, in order of preference?
Magically hints that A is of type A, but all of its dependencies are autospecced, so they're of the type that they are, while their methods are Mocks?
A type hint that says a variable is an autospec of a type, so it has all the methods of the type, but they're actually Mock objects?
Just a less repetitive way to declare to the PyCharm linter that they're mocks?
I don't know whether my answer will be useful to you (and if not, excuse me); it certainly won't answer all of your questions, and I haven't checked anything about PyCharm hints.
I ran the code below in my IDE, PyCharm, and the test passes successfully.
I changed your assert because I use unittest.mock.
import unittest
from unittest.mock import create_autospec

class InterfaceB:
    def do_subtask(self, necessary_data):
        pass

class A:
    def __init__(self, some_dependency: InterfaceB):
        self.some_dependency: InterfaceB = some_dependency

    def do_thing(self, x):
        self.some_dependency.do_subtask(necessary_data=x)

class MyTestCase(unittest.TestCase):
    def test_do_thing(self):
        a: A = A(some_dependency=create_autospec(InterfaceB))
        a.do_thing(1)
        # ---> Note that I have changed your assert!
        # assert a.some_dependency.do_subtask.called_with(necessary_data=1)
        a.some_dependency.do_subtask.assert_called_once_with(necessary_data=1)

if __name__ == '__main__':
    unittest.main()
As you can see, I have defined a class InterfaceB with a method do_subtask().
The method test_do_thing() verifies only that executing do_thing(1) calls do_subtask(necessary_data=1) exactly once.
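For the type-hinting part of the question, which the code above doesn't address, one hedged option is a tiny cast helper (the name as_mock is my own, not from any library) so the repetition collapses to a single call:
from typing import cast
from unittest.mock import Mock

def as_mock(obj: object) -> Mock:
    # Purely a typing aid: tells the IDE/type checker to treat the
    # autospecced attribute as the Mock it actually is at runtime.
    return cast(Mock, obj)

# usage inside the test
as_mock(a.some_dependency.do_subtask).assert_called_once_with(necessary_data=1)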
I have encountered something mysterious when using the patch decorator from the mock package together with a pytest fixture.
I have two modules:
-----test folder
-------func.py
-------test_test.py
in func.py:
def a():
    return 1

def b():
    return a()
in test_test.py:
import pytest
from func import a, b
from mock import patch, Mock

@pytest.fixture(scope="module")
def brands():
    return 1

mock_b = Mock()

@patch('test_test.b', mock_b)
def test_compute_scores(brands):
    a()
It seems that the patch decorator is not compatible with pytest fixtures. Does anyone have any insight into this? Thanks
When using pytest fixture with mock.patch, test parameter order is crucial.
If you place a fixture parameter before a mocked one:
from unittest import mock
@mock.patch('my.module.my.class')
def test_my_code(my_fixture, mocked_class):
then the mock object will be in my_fixture and mocked_class will be searched for as a fixture:
fixture 'mocked_class' not found
But, if you reverse the order, placing the fixture parameter at the end:
from unittest import mock
@mock.patch('my.module.my.class')
def test_my_code(mocked_class, my_fixture):
then all will be fine.
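For instance, a minimal self-contained sketch of the working ordering (the patch target os.getcwd is just an arbitrary illustration):
import os
from unittest import mock
import pytest

@pytest.fixture
def my_fixture():
    return 42

@mock.patch('os.getcwd')                      # the mocked argument comes first...
def test_my_code(mocked_getcwd, my_fixture):  # ...and the fixture comes last
    mocked_getcwd.return_value = '/tmp'
    assert os.getcwd() == '/tmp'              # the patch is in effect
    assert my_fixture == 42                   # the fixture value was injected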
As of Python 3.3, the mock module has been pulled into the unittest library. There is also a backport (for earlier versions of Python) available as the standalone library mock.
Combining these 2 libraries within the same test-suite yields the above-mentioned error:
E fixture 'fixture_name' not found
Within your test-suite's virtual environment, run pip uninstall mock, and make sure you aren't using the backported library alongside the core unittest library. When you re-run your tests after uninstalling, you would see ImportErrors if this were the case.
Replace all instances of this import with from unittest.mock import <stuff>.
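For example:
# before: the backported library
from mock import patch, Mock

# after: the standard-library version (Python 3.3+)
from unittest.mock import patch, Mock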
Hopefully this answer on an old question will help someone.
First off, the question doesn't include the error, so we don't really know what's up. But I'll try to provide something that helped me.
If you want a test decorated with a patched object, then in order for it to work with pytest you could just do this:
@mock.patch('mocked.module')
def test_me(*args):
    mocked_module = args[0]
Or for multiple patches:
@mock.patch('mocked.module1')
@mock.patch('mocked.module')
def test_me(*args):
    # patches are applied bottom-up, so args[0] belongs to 'mocked.module'
    mocked_module, mocked_module1 = args
pytest looks up fixtures by the parameter names in the test function/method signature. Using the *args parameter gives us a good workaround for that lookup phase. So, to include a fixture alongside patches, you could do this:
# from question
@pytest.fixture(scope="module")
def brands():
    return 1

@mock.patch('mocked.module1')
def test_me(brands, *args):
    mocked_module1 = args[0]
This worked for me running python 3.6 and pytest 3.0.6.
If you have multiple patches to be applied, the order in which they are injected is important:
# from question
@pytest.fixture(scope="module")
def brands():
    return 1

# notice the order
@patch('my.module.my.class1')
@patch('my.module.my.class2')
def test_list_instance_elb_tg(mocked_class2, mocked_class1, brands):
    pass
This doesn't address your question directly, but there is the pytest-mock plugin which allows you to write this instead:
def test_compute_scores(brands, mock):
    mock_b = mock.patch('test_test.b')
    a()
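In recent versions of pytest-mock the fixture is named mocker rather than mock; a hedged sketch of the same test with the newer name (module path taken from the question):
def test_compute_scores(brands, mocker):
    mock_b = mocker.patch('test_test.b')
    a()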
a) For me, the solution was to use a with block inside the test function instead of a @patch decorator above the test function:
class TestFoo:
    def test_baa(self, my_fixture):
        with patch(
            'module.Class.function_to_patch',
            MagicMock(return_value='mocked_result')
        ) as mocked_function_to_patch:
            result = my_fixture.baa('mocked_input')
            assert result == 'mocked_result'
            mocked_function_to_patch.assert_has_calls([
                call('mocked_input')
            ])
This solution does work inside classes (which I use to structure/group my test methods). With the with block you don't need to worry about the order of the arguments. I find it more explicit than the injection mechanism, but the code becomes ugly if you patch more than one variable. If you need to patch many dependencies, that might be a signal that your tested function does too many things and that you should refactor it, e.g. by extracting some of the functionality into extra functions.
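If you do end up needing more than one patch, one hedged way to keep the with block readable is to stack the patches in a single with statement (the second target, module.Class.other_function, is assumed for illustration):
from unittest.mock import MagicMock, call, patch

class TestFoo:
    def test_baa(self, my_fixture):
        with patch('module.Class.function_to_patch',
                   MagicMock(return_value='mocked_result')) as mocked_one, \
             patch('module.Class.other_function',
                   MagicMock(return_value='other_result')) as mocked_two:
            result = my_fixture.baa('mocked_input')
            assert result == 'mocked_result'
            mocked_one.assert_has_calls([call('mocked_input')])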
b) If you are outside classes and do want a patched object to be injected as an extra argument of a test method, note that if you define the mock as the second argument of the @patch decorator, no mock will be injected into the test function:
@patch('path.to.foo', MagicMock(return_value='foo_value'))
def test_baa(self, my_fixture, mocked_foo):
does not work.
=> Make sure to pass the path as the only argument to the decorator. Then define the return value inside the test function:
@patch('path.to.foo')
def test_baa(self, my_fixture, mocked_foo):
    mocked_foo.return_value = 'foo_value'
(Unfortunately, this does not seem to work inside classes.)
The fixture(s) are injected first, then the variables from the @patch decorations (e.g. 'mocked_foo').
The name of the injected fixture 'my_fixture' needs to be correct. It needs to match the name of the decorated fixture function (or the explicit name used in the fixture decoration).
The name of the injected patch variable 'mocked_foo' does not follow a distinct naming pattern. You can choose it as you like, independent of the path in the corresponding @patch decoration.
If you inject several patched variables, note that the order is reversed: the mocked instance belonging to the last @patch decoration is injected first:
@patch('path.to.foo')
@patch('path.to.qux')
def test_baa(self, my_fixture, mocked_qux, mocked_foo):
    mocked_foo.return_value = 'foo_value'
I had the same problem, and the solution for me was to use the mock library at version 1.0.1 (before that I was using unittest.mock at version 2.6.0). Now it works like a charm :)
I have some relatively complex integration tests in my Python code. I simplified them greatly with a custom decorator and I'm really happy with the result. Here's a simple example of what my decorator looks like:
def specialTest(fn):
    def wrapTest(self):
        # do some important stuff
        pass
    return wrapTest
Here's what a test may look like:
class Test_special_stuff(unittest.TestCase):
    @specialTest
    def test_something_special(self):
        pass
This works great and is executed by PyCharm's test runner without a problem. However, when I run a test from the command line using nose, it skips any test with the @specialTest decorator.
I have tried naming the decorator testSpecial so that it matches the default rules, but then my fn parameter doesn't get passed.
How can I get Nose to execute those test methods and treat the decorator as it is intended?
SOLUTION
Thanks to madjar, I got this working by restructuring my code to look like this, using functools.wraps and changing the name of the wrapper:
from functools import wraps

def specialTest(fn):
    @wraps(fn)
    def test_wrapper(self, *args, **kwargs):
        # do some important stuff
        pass
    return test_wrapper

class Test_special_stuff(unittest.TestCase):
    @specialTest
    def test_something_special(self):
        pass
If I remember correctly, nose loads tests based on their names (functions whose names begin with test_). In the snippet you posted, you do not copy the __name__ attribute of the original function onto your wrapper function, so the name of the returned function is wrapTest and nose decides it's not a test.
An easy way to copy the attributes of the function onto the new one is to use functools.wraps.
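A quick sketch that shows the difference (the decorator and test names here are just for demonstration):
from functools import wraps

def without_wraps(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def with_wraps(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@without_wraps
def test_plain():
    pass

@with_wraps
def test_decorated():
    pass

print(test_plain.__name__)      # 'wrapper' -> nose would not collect this
print(test_decorated.__name__)  # 'test_decorated' -> nose collects it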
I am trying to get started with unittest, but I am having a problem getting setUpClass() to work. Here is my test code...
import unittest

class TestRepGen(unittest.TestCase):
    """ Contains methods for training data testing """
    testvar = None

    @classmethod
    def setUpClass(cls):
        cls.testvar = 6

    def test_method(self):
        """ Ensure data is selected """
        self.assertIsNotNone(self.testvar, "self.testvar is None!")

# run tests
if __name__ == '__main__':
    unittest.main()
The assert error message is displayed, indicating that self.testvar is None and has not been changed by setUpClass(). Is there something wrong with my code?
I get the same result whether I run the code from within my IDE (Wing) or directly from the command line. For the record, I am using Python 3.2.1 under Windows 7.
setUpClass is new to the unittest framework as of 2.7 and 3.2. If you want to use it with an older version you will need to use nose as your test runner instead of unittest.
I'm going to guess you aren't calling this test class directly, but a derived class.
If that's the case, you have to call up to setUpClass() manually -- it's not automatically called.
class TestB(TestRepGen):
    @classmethod
    def setUpClass(cls):
        super(TestB, cls).setUpClass()
Also, according to the docs, class-level fixtures are implemented by the test suite. So, if you're calling TestRepGen or test_method in some weird way you didn't post, setUpClass might not get run.
Ok - it's hands-up time to admit that it was my mistake. I was using 3.1.2 when I should have been (and thought I was) using 3.2.1. I have now changed to the correct version, all is well and the test is passing. Many thanks to all who replied (and sorry for wasting your time :-( ).
I've written a function which opens a vim editor with the given filename when called. How can I unit test this type of operation?
To unit test something like this you must mock/stub out your dependencies. In this case, let's say you are launching vim by calling os.system("vim").
In your unit test you can stub out that function call doing something like:
import os

def launchVim():
    os.system("vim")

def testThatVimIsLaunched():
    try:
        realSystem = os.system
        called = []

        def stubSystem(command):
            if command == "vim":
                called.append(True)

        os.system = stubSystem
        launchVim()  # function under test
        assert(called == [True])
    finally:
        os.system = realSystem
For more details on mocking and stubbing take a look at this article
Update: I added the try/finally to restore the original system function as suggested by Dave Kirby
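For comparison, a hedged sketch of the same test written with mock.patch (available as unittest.mock on Python 3.3+), which restores os.system automatically:
import os
from unittest import mock

def launchVim():
    os.system("vim")

def testThatVimIsLaunched():
    with mock.patch("os.system") as mocked_system:
        launchVim()  # function under test
        mocked_system.assert_called_once_with("vim")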
This is no longer unit testing but integration testing. Why do you need to launch vim? Usually you would 'mock' this, simulate the process spawning, and rely on the fact that Python's subprocess module is well tested.
To accomplish this in your code, you can, for example, subclass the class that implements your functionality and override the method that's responsible for spawning. Then test this subclass. I.e.
class VimSpawner(object):  # your actual code, to be tested
    ...

    def spawn(self):
        ...  # do subprocess magic

    def other_logic(self):
        ...
        self.spawn()

class TestableVimSpawner(VimSpawner):
    def spawn(self):
        ...  # mock the spawning
        self.ididit = True

class Test(..):
    def test_spawning(self):
        t = TestableVimSpawner()
        t.other_logic()
        self.failUnless(t.ididit)