How to run fixture before mark.parametrize - python

Sample_test.py
@pytest.mark.parametrize(argnames="key", argvalues=ExcelUtils.getinputrows(__name__), scope="session")
def test_execute():
    # Do something
conftest.py
@pytest.fixture(name='setup', autouse=True, scope="session")
def setup_test(pytestconfig):
    dict['environment'] = "QA"
As shown in the code above, I need to run the setup fixture before the test_execute method, because the getinputrows method requires the environment to read the sheet. Unfortunately, the parametrize decorator is evaluated before setup_test runs. Is there any way to make this possible?

You need to call the function inside the test, not in the decorator:
@pytest.mark.parametrize("key", [ExcelUtils.getinputrows], scope="session")
def test_execute(key):
    key(__name__)
    # Do something
or bind __name__ to the function call beforehand, but again, call the function inside your test:
#pytest.mark.parametrize("key", [lambda: ExcelUtils.getinputrows(__name__)], scope="session")
def test_execute(key):
key()
#Do something
Mind you, I don't fully understand what you're doing, so these examples may or may not make sense.
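If the parameters themselves depend on the environment, another option worth considering is to build them at collection time with the pytest_generate_tests hook instead of the decorator, since the hook runs after conftest.py has been imported. A minimal sketch, assuming getinputrows can be given the environment explicitly (a hypothetical two-argument form; adapt it to whatever ExcelUtils actually exposes):
# conftest.py - sketch; import ExcelUtils from wherever it lives in your project
def pytest_generate_tests(metafunc):
    if "key" in metafunc.fixturenames:
        environment = "QA"  # or read it from an ini option / env variable via metafunc.config
        rows = ExcelUtils.getinputrows(metafunc.module.__name__, environment)  # hypothetical signature
        metafunc.parametrize("key", rows, scope="session")
With that in place, test_execute just declares key as an argument and drops the parametrize decorator.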

Related

Dynamically Defining a Dummy Decorator

I am using a nice tool called https://github.com/rkern/line_profiler
To use it, you need to put a decorator @profile at multiple places in the script to indicate which functions should be profiled. Then you execute the script via
kernprof -l script_to_profile.py
Obviously, when running the script by itself via python script_to_profile.py, the decorator is not defined and hence the script crashes.
I know how to define an identity decorator and I can pass a flag from the command line and define it in the main script depending on how the flag is set. However, I don't know how to pass the decorator definition (or the flag) to modules I load so they don't crash at the moment they are loaded. Any ideas?
def profile(func):
    return func
A very simple way would be to check if something named profile exists, and if it doesn't, then define it to be your identity decorator. Something like this.
try:
    profile
except NameError:
    def profile(func):
        return func
You could go a little further and make sure it's something callable — probably not necessary:
import typing
try:
    profile
except NameError:
    profile = None
if not isinstance(profile, typing.Callable):
    def profile(func):
        return func
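Putting that guard at the top of each module that uses @profile also covers the "modules I load" part of the question: when you run under kernprof, profile is injected into builtins, so the try succeeds; under plain python the fallback identity decorator is defined and the module imports cleanly. A minimal sketch of such a module (the module and function names are made up for illustration):
# mymodule.py (hypothetical) - safe to import with or without kernprof
try:
    profile  # injected into builtins by kernprof -l
except NameError:
    def profile(func):  # no-op fallback for ordinary `python` runs
        return func

@profile
def heavy_computation(n):
    return sum(i * i for i in range(n))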

How do I patch a method registered by a decorator in Python's datashape?

I'm using the datashape Python package and registering a new type with the @datashape.discover.register decorator. I'd like to test that when I call datashape.discover on an object of the type I'm registering, it calls the function being decorated. I'd also like to do this with good unit testing principles, meaning not actually executing the function being decorated, as it would have side effects I don't want in the test. However, this isn't working.
Here's some sample code to demonstrate the problem:
myfile.py:
@datashape.discover.register(SomeType)
def discover_some_type(data):
    ...  # some stuff I don't want done in a unit test
test_myfile.py:
class TestDiscoverSomeType(unittest.TestCase):
    @patch('myfile.discover_some_type')
    def test_discover_some_type(self, mock_discover_some_type):
        file_to_discover = SomeType()
        datashape.discover(file_to_discover)
        mock_discover_some_type.assert_called_with(file_to_discover)
The issue seems to be that the function I want mocked is only mocked in the body of the test; it was not mocked when it was decorated (i.e. when it was imported). The discover.register function essentially registers the decorated function internally so it can be looked up when discover() is called with an argument of the given type. Unfortunately, it registers the real function, not the patched version I want, so the real function always gets called.
Any thoughts on how to be able to patch the function being decorated and assert that it is called when datashape.discover is called?
Here's a solution I've found that's only a little hacky:
sometype.py:
def discover_some_type(data):
    ...  # some stuff I don't want done in a unit test
discovery_channel.py:
import sometype
@datashape.discover.register(SomeType)
def discover_some_type(data):
    return sometype.discover_some_type(data)
test_sometype.py:
class TestDiscoverSomeType(unittest.TestCase):
    @patch('sometype.discover_some_type')
    def test_discover_some_type(self, mock_discover_some_type):
        import discovery_channel
        file_to_discover = SomeType()
        datashape.discover(file_to_discover)
        mock_discover_some_type.assert_called_with(file_to_discover)
The key is that you have to patch out whatever will actually do stuff before you import the module that has the decorated function that will register the patched function to datashape. This unfortunately means that you can't have your decorated function and the function doing the discovery in the same module (so things that should logically go together are now apart). And you have the somewhat hacky import-in-a-function in your unit test (to trigger the discover.register). But at least it works.
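For reference, the same idea can be written a bit more compactly with patch() as a context manager, so the import of the registering module happens while the real function is already patched. A sketch using the same hypothetical module names as above (SomeType imported from wherever it is defined):
# test_sometype.py - compact variant of the same approach
import unittest
from unittest.mock import patch
import datashape

class TestDiscoverSomeType(unittest.TestCase):
    def test_discover_some_type(self):
        with patch('sometype.discover_some_type') as mock_discover_some_type:
            import discovery_channel  # registration wraps the patched function
            file_to_discover = SomeType()
            datashape.discover(file_to_discover)
            mock_discover_some_type.assert_called_with(file_to_discover)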

Can python decorators be executed or skipped depending on an earlier decorator result?

I learnt from this stack_overflow_entry that in Python decorators are applied in the order they appear in the source.
So how should the following code snippet behave?
@unittest.skip("Something no longer supported")
@skipIf(not something_feature_enabled, "Requires extra crunchy cookies to run")
def test_this():
    ...
The first decorator (noted below) asks the test runner to completely skip test_this():
@unittest.skip("Something no longer supported")
while the second decorator asks the test runner to skip running test_this() conditionally:
@skipIf(not something_feature_enabled, "Requires extra crunchy cookies to run")
So does it mean that test_this won't be run at all unless we put the conditional skip decorator first?
Also, is there any way in Python to define dependent execution of decorators? e.g.
#skipIf("Something goes wrong")
#skipIf(not something_feature_enabled, "Requires extra crunchy cookies to run")
#log
#send_email
def test_this():
....
The idea is to enable execution of @log and @send_email if @skipIf("Something goes wrong") is true.
Apologies if I am missing something very obvious.
I think you may be missing a key point: a decorator is just a function that gets passed a function and returns a function.
So, these are identical:
@log
def test_this():
    pass

def test_this():
    pass
test_this = log(test_this)
And likewise:
#skip("blah")
def test_this():
pass
def test_this():
pass
test_this = skip("blah")(test_this)
Once you understand that, all of your questions become pretty simple.
First, yes, skip(…) is being used to decorate skipIf(…)(test), so if it skips the thing it decorates, test will never get called.
And the way to define the order in which decorators get called is to write them in the order you want them called.
If you want to do that dynamically, you'd do so by applying the decorators dynamically in the first place. For example:
for deco in sorted_list_of_decorators:
    test = deco(test)
Also, is there any way in Python to define dependent execution of decorators?
No, they all get executed. More relevant to what you're asking, each decorator gets applied to the decorated function, not to the decorator.
But you can always just pass a decorator to a conditional decorator:
def decorate_if(cond, deco):
    return deco if cond else lambda f: f
Then:
#skipIf("Something goes wrong")
#decorate_if(something_feature_enabled, log)
#decorate_if(something_feature_enabled, send_email)
def test_this():
pass
Simple, right?
Now, the log and send_email decorators will be applied if something_feature_enabled is truthy; otherwise a decorator that doesn't decorate the function in any way and just returns it unchanged will get applied.
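As a quick sanity check, here's a toy example (not from the original answer) showing decorate_if applying or skipping a decorator based on a flag:
def shout(func):
    def wrapper(*args, **kwargs):
        print("about to call", func.__name__)
        return func(*args, **kwargs)
    return wrapper

def decorate_if(cond, deco):
    return deco if cond else lambda f: f

@decorate_if(True, shout)
def greet():
    print("hello")

@decorate_if(False, shout)
def wave():
    print("wave")

greet()  # prints "about to call greet", then "hello"
wave()   # prints only "wave"; shout was never applied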
But what if you can't pass the decorator, because the function is already decorated? Well, if you define each decorator to expose the function it wraps, you can always unwrap it. If you always use functools.wraps (which you generally should unless you have a reason not to, and which you can easily emulate even when you do), the wrapped function is always available as __wrapped__. So, you can easily write a decorator that conditionally removes the outermost level of decoration:
def undecorate_if(cond):
    def decorate(f):
        return f.__wrapped__ if cond else f
    return decorate
And again, if you're trying to do this dynamically, you're probably going to be decorating dynamically. So, an easier solution is to just skip the decorator(s) you don't want by removing them from the decos iterable before they get applied.
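To make the unwrapping idea concrete, here's a minimal sketch assuming the inner decorator uses functools.wraps (which is what sets the __wrapped__ attribute):
import functools

def log(func):
    @functools.wraps(func)  # exposes func as wrapper.__wrapped__
    def wrapper(*args, **kwargs):
        print("calling", func.__name__)
        return func(*args, **kwargs)
    return wrapper

def undecorate_if(cond):
    def decorate(f):
        return f.__wrapped__ if cond else f  # peel off the outermost wrapper
    return decorate

@undecorate_if(True)  # removes the @log layer when the condition is truthy
@log
def test_this():
    pass

test_this()  # runs the undecorated function; nothing is printed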

Destroy a mock in Python after the test

Let's say I have a couple of tests like these:
class TestMyTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.my_lib = MyLib()
    def my_first_test(self):
        self.my_lib.my_function = Mock(return_value=True)
        self.assertTrue(self.my_lib.run_my_function(), 'my function failed')
    def my_second_test(self):
        # Some other test that calls self.my_lib.my_function...
And let's say I have something like this in MyLib:
class MyLib(object):
    def my_function(self):
        # This function does a whole bunch of stuff using an external API
        # ...
    def run_my_function(self):
        result = self.my_function()
        # Does some more stuff
        # ...
In my_first_test I am mocking my_lib.my_function and returning True when the function is executed. In this example, my assertion calls run_my_function(), another function from the same library that, among other things, calls my_lib.my_function. But when my_second_test is executed I want the real function to be called, not the mocked one. So I guess I would need to destroy the mock somehow after running my_first_test, maybe during tearDown(). How do I destroy that mock?
I edited my original question to add more details since it looks like it was not that clear, sorry about that.
You can do this:
class TestYourLib(unittest.TestCase):
    def setUp(self):
        self.my_lib = MyLib()
    def test_my_first_test(self):
        self.my_lib.my_function = Mock(return_value=True)
        self.assertTrue(self.my_lib.run_my_function(), 'my function failed')
    def test_my_second_test(self):
        # Some other test that calls self.my_lib.my_function...
Then the Mock is "destroyed" by passing out of scope when setUp is called for the next test case.
Destroying the mock won't do it. You'll either have to re-assign self.my_lib.my_function or call Mock(return_value=True) in a different manner.
The first is what Patrick seems to suggest.
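Another option, not mentioned in the answers above, is to apply the mock with unittest.mock.patch.object, which automatically undoes the patch when the with-block exits, so a class-level MyLib instance gets its real method back for the next test. A sketch (MyLib as defined in the question):
import unittest
from unittest import mock

class TestMyTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.my_lib = MyLib()

    def test_my_first_test(self):
        # The patch is reverted when the with-block exits, so later tests
        # see the real my_function again.
        with mock.patch.object(self.my_lib, 'my_function', return_value=True):
            self.assertTrue(self.my_lib.run_my_function(), 'my function failed')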

Nose ignores test with custom decorator

I have some relatively complex integration tests in my Python code. I simplified them greatly with a custom decorator and I'm really happy with the result. Here's a simple example of what my decorator looks like:
def specialTest(fn):
    def wrapTest(self):
        # do some important stuff
        pass
    return wrapTest
Here's what a test may look like:
class Test_special_stuff(unittest.TestCase):
    @specialTest
    def test_something_special(self):
        pass
This works great and is executed by PyCharm's test runner without a problem. However, when I run a test from the command line using Nose, it skips any test with the @specialTest decorator.
I have tried naming the decorator testSpecial, so it matches the default rules, but then my fn parameter doesn't get passed.
How can I get Nose to execute those test methods and treat the decorator as it is intended?
SOLUTION
Thanks to madjar, I got this working by restructuring my code to look like this, using functools.wraps and changing the name of the wrapper:
from functools import wraps

def specialTest(fn):
    @wraps(fn)
    def test_wrapper(self, *args, **kwargs):
        # do some important stuff
        pass
    return test_wrapper

class Test_special_stuff(unittest.TestCase):
    @specialTest
    def test_something_special(self):
        pass
If I remember correctly, nose loads tests based on their names (functions whose names begin with test_). In the snippet you posted, you do not copy the __name__ attribute of the function onto your wrapper function, so the name of the returned function is wrapTest and nose decides it's not a test.
An easy way to copy the attributes of the function to the new one is to use functools.wraps.
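A quick way to see the difference (a toy sketch, just to illustrate what wraps changes):
from functools import wraps

def without_wraps(fn):
    def wrapper(self):
        return fn(self)
    return wrapper

def with_wraps(fn):
    @wraps(fn)
    def wrapper(self):
        return fn(self)
    return wrapper

@without_wraps
def test_one(self):
    pass

@with_wraps
def test_two(self):
    pass

print(test_one.__name__)  # 'wrapper'  -> nose will not collect it
print(test_two.__name__)  # 'test_two' -> matches nose's test name pattern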
