Python mock function with only specific argument

I'm new to Python and I'm trying to mock a function only when a specific argument is passed. If anything other than the desired argument is passed, I'd like to call the original function instead.
In Python 2.7 I tried something like this:
from foo import config
def test_something(self):
    original_config = config  # config is a module.

    def side_effect(key):
        if key == 'expected_argument':
            return mocked_result
        else:
            return original_config.get(key)

    config.get = Mock(side_effect=side_effect)
    # actually_test_something...
This won't work because original_config is not a copy of config; it references the same module, so side_effect ends up calling the mock again in an infinite loop. I could clone the config module instead, but that seems like overkill.
Is there something similar to RSpec's mocks that I could use? e.g.:
obj.stub(:message).with('an_expected_argument').and_return('a_mocked_result')
Any help would be appreciated. Thanks.

You'd need to store a reference to the unpatched function first:
def test_something(self):
    original_config_get = config.get

    def side_effect(key):
        if key == 'expected_argument':
            return mocked_result
        else:
            return original_config_get(key)

    config.get = Mock(side_effect=side_effect)
Here original_config_get references the original function before you replaced it with a Mock() object.
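If you would rather not assign to config.get directly (the mock leaks into later tests unless you restore it yourself), below is a minimal sketch of the same idea using patch.object, which restores the original when the block exits. mocked_result and 'expected_argument' are placeholders from the question; on Python 2.7 the import comes from the mock backport rather than unittest.mock.
from mock import patch  # on Python 3: from unittest.mock import patch

def test_something(self):
    original_config_get = config.get

    def side_effect(key):
        if key == 'expected_argument':
            return mocked_result
        return original_config_get(key)

    # patch.object swaps config.get for a Mock with our side_effect and
    # puts the original back when the with-block ends.
    with patch.object(config, 'get', side_effect=side_effect):
        ...  # actually test something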

Related

Programmatically register function as a test function in pytest

I would like to programmatically add or mark a function as a test-case in pytest, so instead of writing
def test_my_function():
    pass
I would like to do something like this (pseudo-API; I know that neither pytest.add_test nor pytest.testcase exists under that name):
def a_function_specification():
    pass

pytest.add_test(a_function_specification)
or
I would like to do something like
@pytest.testcase
def a_function_specification():
    pass
Basically, I would like to write a test-case-generating decorator that doesn't work quite like pytest.mark/parametrize, which is why I started digging into the internals, but I haven't found an obvious way to do this for Python code.
The YAML example in the pytest docs seems to use pytest.Item, but I have a hard time mapping this to something that would work within Python code rather than as part of a non-Python-file test collection.
Starting from version 2.6, pytest supports:
nose-style __test__ attribute on modules, classes and functions, including unittest-style Classes. If set to False, the test will not be collected.
So, you need to add this attribute.
One approach is:
def not_a_test1():
    assert 1 + 2 == 3

not_a_test1.__test__ = True
Another is:
def make_test(func):
    func.__test__ = True
    return func

@make_test
def not_a_test2():
    assert 1 + 2 == 3

Find and call any function by its name given in a string

It is a tricky question; I need to know one thing. I have two functions with different functionality, and a third function that decides which of the two to use. That decision is passed in as an argument. The code below should make it clearer.
# Present in project/testing/local/funtion_one.py
def testing_function_one(par1, par2, par3):
    """Do something, e.g. add all parameter values."""
    sum_params = par1 + par2 + par3
    return sum_params

# Present in project/testing/local/funtion_two.py
def testing_function_two(par1, par2, par3, par4, par5):
    """Do something, e.g. add all parameter values."""
    sum_params = par1 + par2 + par3 + par4 + par5
    return sum_params
# Present in project/testing/function_testing.py
def general_function_testing(function_name, function_path, funtion_params, extra_params):
    """
    function_name: name of the function to call, e.g. testing_function_one or testing_function_two.
    function_path: path to the file where the function is located.
    funtion_params: arguments for the function being called.
    """
Now, based on the parameter details above, I need to know how to call the required function using its path, pass the parameters to that function, and handle the differing number of parameters each particular function takes.
I am looking for something like:
funt_res = function_name(funtion_params)
# After getting the result, do something with the other params.
new_res = funt_res * extra_params

if __name__ == "__main__":
    function_name = "testing_function_two"
    function_path = "project/testing/local/funtion_two.py"
    # Values to pass to the testing_function_two function, e.g.:
    funtion_params = {"par1": 2, "par2": 2, "par3": 4, "par4": 6, "par5": 8}
    extra_params = 50
    res = general_function_testing(function_name, function_path,
                                   funtion_params, extra_params)
Tried:
# This part works only when calling_funtion_name is present in the same
# file; otherwise it raises an error. For me, it should search the whole
# project or a specified path.
f_res = globals()["calling_funtion_name"](*args, **kwargs)
print('f_ress', f_res)
If the above is not clear, let me know and I will try to explain with other examples.
Though it is possible, in Python one rarely needs to pass a function by its name as a string, especially if the goal is simply for the function to be called at its destination. The reason is that functions are themselves "first-class objects" in Python: they can be assigned to new variable names (which simply reference the function) and passed as arguments to other functions.
So, if one wants to pass sin from the math module to be used as a numeric function inside some other code, instead of general_function_testing('sin', 'math', ...) one can simply write:
import math
general_function_testing(math.sin, ...)
And the function called with this parameter can simply use whatever name it has for the parameter to call the passed function:
def general_function_testing(target_func, ...):
    ...
    result = target_func(argument)
    ...
While it is possible to retrieve a function from its name and module name given as strings, it is much more cumbersome due to nested packages: the code retrieving the function has to take care of any "."s in the "function path", as you call it, make careful use of the built-in __import__ (which allows one to import a module given its name as a string, though it has a weird API), and then retrieve the function from the module using a getattr call. And all of this just to get a reference to the function object itself, which could have been passed as a parameter from the very first moment.
The example above, done via strings, could be:
import sys

def general_function_testing(func_name, func_path, ...):
    ...
    __import__(func_path)  # imports the module where the function lives; func_path is a string
    module = sys.modules[func_path]  # retrieves the module object itself
    target_func = getattr(module, func_name)
    result = target_func(argument)
    ...
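For completeness, here is a minimal sketch of the same lookup using importlib.import_module, which handles dotted module paths more cleanly than __import__. call_by_name is a made-up helper name, and the dotted module path assumes the project is importable as a package:
import importlib

def call_by_name(module_path, func_name, *args, **kwargs):
    # Import the module from its dotted path, fetch the function by name,
    # and call it with whatever arguments were supplied.
    module = importlib.import_module(module_path)
    target_func = getattr(module, func_name)
    return target_func(*args, **kwargs)

res = call_by_name('project.testing.local.funtion_two', 'testing_function_two',
                   **{"par1": 2, "par2": 2, "par3": 4, "par4": 6, "par5": 8})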

pytest - how to assert if a method of a class is called inside a method

I am trying to figure out how to know whether a method of a class is called inside another method.
Following is the code for the unit test:
# test_unittes.py file
def test_purge_s3_files(mocker):
    args = Args()
    mock_s3fs = mocker.patch('s3fs.S3FileSystem')
    segment_obj = segments.Segmentation()
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called()
Inside the purge_s3_files method, bulk_delete is called, but the assertion fails, saying that the method was expected to have been called and was not:
mocker = <pytest_mock.plugin.MockerFixture object at 0x7fac28d57208>

    def test_purge_s3_files(mocker):
        args = Args()
        mock_s3fs = mocker.patch('s3fs.S3FileSystem')
        segment_obj = segments.Segmentation(environment='qa',
                                            verbose=True,
                                            args=args)
        segment_obj.purge_s3_files('sample')
>       mock_s3fs.bulk_delete.assert_called()
E       AssertionError: Expected 'bulk_delete' to have been called.
I don't know how to test this and how to assert if the method is called!
Below you can find the method being tested:
# segments.py file
import s3fs

def purge_s3_files(self, prefix=None):
    bucket = 'sample_bucket'
    files = []
    fs = s3fs.S3FileSystem()
    if fs.exists(f'{bucket}/{prefix}'):
        files.extend(fs.ls(f'{bucket}/{prefix}'))
    else:
        print(f'Directory {bucket}/{prefix} does not exist in s3.')
    print(f'Purging S3 files from {bucket}/{prefix}.')
    print(*files, sep='\n')
    fs.bulk_delete(files)
The problem you are facing is that the mock you are setting up mocks out the class, but you are not using the instance when checking your mock. In short, this should fix your problem (though there might be another issue, explained further below):
m = mocker.patch('s3fs.S3FileSystem')
mock_s3fs = m.return_value  # (or m())
There might be a second problem: you may not be referencing the right path to the object you want to mock.
Depending on what is considered your project root (see your comment here), your mock would need to be referenced accordingly:
mocker.patch('app.segments.s3fs.S3FileSystem')
The rule of thumb is that you always want to patch the object where it is used, i.e. in the module you are testing.
If you are able to use your debugger (or print to your console), you will (hopefully :)) see that your expected call count lives inside the return_value of your mock object: inspecting the mock while running your code shows the call_count attribute set to 1 on the return value's bulk_delete. Pointing back to what I mentioned at the beginning of the answer, by making that change you will now be able to use the intended mock_s3fs.bulk_delete.assert_called().
Putting it together, your test with that modification runs as expected (note: you should also set up the expected behaviour of, and assert on, the other fs methods you are calling in there):
def test_purge_s3_files(mocker):
    m = mocker.patch("app.segments.s3fs.S3FileSystem")
    mock_s3fs = m.return_value  # (or m())
    segment_obj = segments.Segmentation(environment='qa',
                                        verbose=True,
                                        args=args)
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called()
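For example, the "set up the expected behaviour" part could look like the following sketch; the stubbed return values here are made up purely for illustration:
def test_purge_s3_files(mocker):
    args = Args()
    m = mocker.patch("app.segments.s3fs.S3FileSystem")
    mock_s3fs = m.return_value
    mock_s3fs.exists.return_value = True                        # pretend the prefix exists
    mock_s3fs.ls.return_value = ['sample_bucket/sample/a.csv']  # made-up listing
    segment_obj = segments.Segmentation(environment='qa', verbose=True, args=args)
    segment_obj.purge_s3_files('sample')
    mock_s3fs.exists.assert_called_once_with('sample_bucket/sample')
    mock_s3fs.bulk_delete.assert_called_once_with(['sample_bucket/sample/a.csv'])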
Python mock testing depends on where the mock is being used, so you have to mock the function where it is imported.
E.g.
app/r_executor.py
def r_execute(file):
    # do something
    ...
But the actual function call happens in another namespace:
analyse/news.py
from app.r_executor import r_execute

def analyse():
    r_execute(file)
To mock this I should use
mocker.patch('analyse.news.r_execute')
# not mocker.patch('app.r_executor.r_execute')
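A minimal sketch of how that patch might be used in a pytest-mock test, based on the hypothetical analyse example above:
from analyse.news import analyse

def test_analyse(mocker):
    # Patch r_execute where analyse() looks it up, i.e. in analyse.news.
    mock_r_execute = mocker.patch('analyse.news.r_execute')
    analyse()
    mock_r_execute.assert_called_once()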

Python: how to get a function based on whether it matches a string assigned to it [duplicate]

I have a function name stored in a variable like this:
myvar = 'mypackage.mymodule.myfunction'
and I now want to call myfunction like this
myvar(parameter1, parameter2)
What's the easiest way to achieve this?
funcdict = {
    'mypackage.mymodule.myfunction': mypackage.mymodule.myfunction,
    ....
}

funcdict[myvar](parameter1, parameter2)
It's much nicer to be able to just store the function itself, since functions are first-class objects in Python.
import mypackage
myfunc = mypackage.mymodule.myfunction
myfunc(parameter1, parameter2)
But, if you have to import the package dynamically, then you can achieve this through:
mypackage = __import__('mypackage')
mymodule = getattr(mypackage, 'mymodule')
myfunction = getattr(mymodule, 'myfunction')
myfunction(parameter1, parameter2)
Bear in mind, however, that all of that work applies to whatever scope you're currently in. If you don't persist them somehow, you can't count on them staying around after you leave the local scope.
def f(a, b):
    return a + b

xx = 'f'
print(eval('%s(%s,%s)' % (xx, 2, 3)))
OUTPUT
5
Easiest
eval(myvar)(parameter1, parameter2)
You don't have a function "pointer". You have a function "name".
While this works well, you will have a large number of folks telling you it's "insecure" or a "security risk".
Why not store the function itself? myvar = mypackage.mymodule.myfunction is much cleaner.
modname, funcname = myvar.rsplit('.', 1)
getattr(sys.modules[modname], funcname)(parameter1, parameter2)
eval(compile(myvar,'<str>','eval'))(myargs)
compile(..., 'eval') allows only a single expression, so there can't be arbitrary commands after a call, or there will be a SyntaxError. Then a tiny bit of validation can at least constrain the expression to something within your power, like testing that it starts with 'mypackage'.
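A minimal sketch of that validation idea, where 'mypackage' stands in for whatever prefix you actually trust and call_trusted is a made-up helper name:
import mypackage.mymodule  # the trusted (hypothetical) package must already be imported in this scope

def call_trusted(name_string, *args, **kwargs):
    # Only allow dotted names under the prefix we control.
    if not name_string.startswith('mypackage.'):
        raise ValueError('refusing to evaluate %r' % name_string)
    # 'eval' mode accepts a single expression only, so trailing statements raise SyntaxError here.
    code = compile(name_string, '<str>', 'eval')
    return eval(code)(*args, **kwargs)

result = call_trusted(myvar, parameter1, parameter2)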
I ran into a similar problem while creating a library to handle authentication. I want the app owner using my library to be able to register a callback with the library for checking authorization against LDAP groups the authenticated person is in. The configuration is getting passed in as a config.py file that gets imported and contains a dict with all the config parameters.
I got this to work:
>>> class MyClass(object):
...     def target_func(self):
...         print "made it!"
...
...     def __init__(self, config):
...         self.config = config
...         self.config['funcname'] = getattr(self, self.config['funcname'])
...         self.config['funcname']()
...
>>> instance = MyClass({'funcname': 'target_func'})
made it!
Is there a more Pythonic way to do this?

Python - update current function __name__ attribute programmatically

I'm currently using nose to perform some tests, and when using generators with nose+xunit output you need to set the current function's __name__ attribute to properly control the name of the test in the xunit output (see here for example).
Since I don't want to hard-code the name of the function each time like this:
def my_function():
    for foo in bar:
        fn = lambda: some_generated_test(foo)
        fn.description = foo.get('name')
        my_function.__name__ = foo.get('name')
        yield fn
How can I programmatically reference the current function and set __name__?
I tried sys._getframe(), which yields various properties of the current function (name, etc.), and tried to use it with setattr(*something*, "__name__", some_test_name), but that didn't work, as I couldn't work out which part of sys._getframe() references the function object.
Finally found a solution via SO: https://stackoverflow.com/a/4506081/1808861
A lot more complicated than I expected, but I can now:
def my_function():
    for foo in bar:
        fn = lambda: some_generated_test(foo)
        fn.description = foo.get('name')
        setattr(get_func(), "__name__", foo.get('name'))
        yield fn
The xunit output then contains the generator's data name entry.
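For reference, get_func comes from that linked answer and retrieves the currently executing function object via the call stack; a rough sketch of the idea (assuming the caller is a plain module-level function) looks like:
import inspect

def get_func():
    # Look one frame up (at the caller of get_func) and fetch the function
    # object by name from that frame's globals. This assumes the caller is
    # a module-level function, not a method or closure.
    caller = inspect.currentframe().f_back
    return caller.f_globals[caller.f_code.co_name]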
