I have a method in a django model that does a computation relative to the current time. Here is a snippet:
def next_date(self):
    now = datetime.now()
    trial_expires = max(self.date_status_changed + timedelta(self.trial_days), now)
    return timezone.datetime(trial_expires.year, trial_expires.month + 1, 1, tzinfo=trial_expires.tzinfo)
What's the proper way to test this in django/python using unittest? What I'd like to do is be able to hard code some values for "now" in the test so I can try the various edge cases. Ideally, I'd like to avoid relying on the current time and date in the test.
One approach would be to modify my method to accept an optional parameter that would override the 'now' value it uses. Does python have any functions to do something similar without having to modify my method signature?
You could extract datetime.now() as a parameter:
def next_date(nowfunc=datetime.now):
    now = nowfunc()
    ...
or as a class' dependency:
class X:
    def __init__(self, nowfunc=datetime.now):
        self._nowfunc = nowfunc

    def next_date(self):
        now = self._nowfunc()
        ...
And pass a mock function with the required result from your tests.
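For example, a minimal sketch of such a test, assuming the class X above implements next_date as in the question (the fixed timestamp is just a placeholder):
from datetime import datetime
from unittest import TestCase

class NextDateTest(TestCase):
    def test_next_date_with_frozen_now(self):
        # Inject a stub clock instead of relying on the real current time.
        fixed_now = datetime(2020, 1, 31, 12, 0, 0)
        obj = X(nowfunc=lambda: fixed_now)
        result = obj.next_date()
        # With the question's implementation and an already-expired trial,
        # this should be the first day of the following month.
        self.assertEqual((result.year, result.month, result.day), (2020, 2, 1))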
But if you don't want to modify the signature, use patches. Note that datetime.now cannot be patched on the built-in datetime type itself, so patch the datetime name in the module where your model is defined (shown here as myapp.models; adjust to your module path):
@patch('myapp.models.datetime')
def test_next_date(self, mock_datetime):
    mock_datetime.now.return_value = ...  # required result
    # the rest of the test
Related
This is a tricky question, and I need to know one thing:
I have two functions with different functionality, plus a third function that decides which of the two to use. That decision is passed in as an argument. The code below should make it clearer.
# Present in project/testing/local/funtion_one.py
def testing_function_one(par1, par2, par3):
    """Do something, e.g. add all the parameter values."""
    sum_params = par1 + par2 + par3
    return sum_params

# Present in project/testing/local/funtion_two.py
def testing_function_two(par1, par2, par3, par4, par5):
    """Do something, e.g. add all the parameter values."""
    sum_params = par1 + par2 + par3 + par4 + par5
    return sum_params
# Present in project/testing/function_testing.py
def general_function_testing(function_name, function_path, function_params, extra_params):
    """
    function_name: any function, e.g. testing_function_one or testing_function_two
    function_path: path to where the function is located
    function_params: arguments for the function being called
    extra_params: extra value used after the function returns
    """
Based on the parameter details above, how do I call the required function using its path, pass the parameters to that function, and handle passing the right number of parameters for that particular function?
I am looking for something like:
func_res = function_name(function_params)
# After getting the result, do something with the other params.
new_res = func_res * extra_params
if __name__ == "__main__":
    function_name = "testing_function_two"
    function_path = "project/testing/local/funtion_two.py"
    # Values to pass to testing_function_two, e.g.:
    function_params = {"par1": 2, "par2": 2, "par3": 4, "par4": 6, "par5": 8}
    extra_params = 50
    res = general_function_testing(function_name, function_path,
                                   function_params, extra_params)
What I have tried:
# This part works only when the called function is present in the same
# file; otherwise it raises an error. I need it to look across the whole
# project (or a specified path) instead.
f_res = globals()["calling_function_name"](*args, **kwargs)
print('f_res', f_res)
Anyone is welcome to try this one. If the above is not clear, let me know and I will try to explain with other examples.
Though possible, in Python one will rarely need to pass a function by its name as a string, especially if the desired result is simply for the function to be called at its destination. The reason is that functions are themselves "first class objects" in Python: they can be assigned to new variable names (which will simply reference the function) and passed as arguments to other functions.
So, if one wants to pass sin from the module math to be used as a numeric function inside some other code, instead of general_function_testing('sin', 'math', ...) one can simply write:
import math
general_function_testing(math.sin, ...)
And the function called with this parameter can simply use whatever name it has for the parameter to call the passed function:
def general_function_testing(target_func, ...):
    ...
    result = target_func(argument)
    ...
While it is possible to retrieve a function from its name and module name as strings, it is much more cumbersome due to nested packages: the code retrieving the function has to take care of any "."s in the "function path" (as you call it), make careful use of the built-in __import__ (which allows one to import a module given its name as a string, though it has a weird API), and then retrieve the function from the module using a getattr call. And all of this just to obtain a reference to the function object itself, which could have been passed as a parameter from the very first moment.
The example above, done via strings, could be:
import sys

def general_function_testing(func_name, func_path, ...):
    ...
    # Imports the module where the function lives; func_path is the module's dotted name as a string.
    __import__(func_path)
    # Retrieves the module object itself from sys.modules.
    module = sys.modules[func_path]
    target_func = getattr(module, func_name)
    result = target_func(argument)
    ...
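For completeness, a slightly simpler sketch of the same idea using importlib, assuming func_path is a dotted module name (for example "project.testing.local.funtion_two") rather than a file-system path; the helper name call_by_name is just for illustration:
import importlib

def call_by_name(func_name, func_path, *args, **kwargs):
    # importlib.import_module handles dotted package paths directly.
    module = importlib.import_module(func_path)
    target_func = getattr(module, func_name)
    return target_func(*args, **kwargs)

# Hypothetical usage, reusing the names from the question:
# res = call_by_name("testing_function_two", "project.testing.local.funtion_two",
#                    **{"par1": 2, "par2": 2, "par3": 4, "par4": 6, "par5": 8})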
I am trying to call a dynamic test function created using exec(). When I invoke it through globals(), pytest treats the parameters as fixtures and fails with the error: fixture 'Template_SI' not found.
Can someone help with how to pass dynamic parameters using globals()[function_name](params)?
import pytest
import input_csv

datalist = input_csv.csvdata()

def display():
    return 10 + 5

for data in datalist:
    functionname = data['TCID']
    parameters = [data['Template_name'], data['File_Type']]
    body = 'print(display())'

    def createfunc(name, *params, code):
        exec('''
@pytest.mark.regression
def {}({}):
    {}'''.format(name, ', '.join(params), code), globals(), globals())

    createfunc(functionname, data['Template_name'], data['File_Type'], code=body)
    templateName = data['Template_name']
    fileType = data['File_Type']
    globals()[functionname](templateName, fileType)
It looks like you're trying to automate the generation of lots of different tests based on different input data. If that's the case, using exec is probably not the best way to go.
pytest provides parametrized tests: https://docs.pytest.org/en/6.2.x/example/parametrize.html
which accomplish test generation by hiding all the details inside Metafunc.parametrize().
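As a rough sketch of the parametrize approach, assuming each row returned by input_csv.csvdata() carries the TCID, Template_name and File_Type fields from your snippet (the test body is only a placeholder):
import pytest
import input_csv

datalist = input_csv.csvdata()

@pytest.mark.regression
@pytest.mark.parametrize(
    "template_name, file_type",
    [(data['Template_name'], data['File_Type']) for data in datalist],
    ids=[data['TCID'] for data in datalist],  # one test case per CSV row, named by its TCID
)
def test_template(template_name, file_type):
    # Placeholder body: put the real checks for each template/file type here.
    assert template_name and file_type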
If you really want to generate the tests yourself, consider adapting Metafunc to your own purposes, or, alternatively, check the unittest framework.
I am trying to figure out how to verify that a method of a class is called inside another method.
Following is the code for the unit test:
# test_unittes.py file
def test_purge_s3_files(mocker):
    args = Args()
    mock_s3fs = mocker.patch('s3fs.S3FileSystem')
    segment_obj = segments.Segmentation()
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called()
Inside the purge_s3_files method, bulk_delete is called, but the assertion fails saying the method was expected to have been called and was not!
mocker = <pytest_mock.plugin.MockerFixture object at 0x7fac28d57208>

    def test_purge_s3_files(mocker):
        args = Args()
        mock_s3fs = mocker.patch('s3fs.S3FileSystem')
        segment_obj = segments.Segmentation(environment='qa',
                                            verbose=True,
                                            args=args)
        segment_obj.purge_s3_files('sample')
>       mock_s3fs.bulk_delete.assert_called()
E       AssertionError: Expected 'bulk_delete' to have been called.
I don't know how to test this and how to assert if the method is called!
Below you can find the method being tested:
# segments.py file
import s3fs

def purge_s3_files(self, prefix=None):
    bucket = 'sample_bucket'
    files = []
    fs = s3fs.S3FileSystem()
    if fs.exists(f'{bucket}/{prefix}'):
        files.extend(fs.ls(f'{bucket}/{prefix}'))
    else:
        print(f'Directory {bucket}/{prefix} does not exist in s3.')
    print(f'Purging S3 files from {bucket}/{prefix}.')
    print(*files, sep='\n')
    fs.bulk_delete(files)
The problem you are facing is that the mock you are setting up replaces the class, while the methods you want to check are called on the instance. In short, this should fix your problem (there might be another issue, explained further below):
m = mocker.patch('s3fs.S3FileSystem')
mock_s3fs = m.return_value  # (or m())
There might be a second problem: you may not be referencing the right path to what you want to mock.
Depending on what is considered your project root (see your comment), your mock would need to be referenced accordingly:
mocker.patch('app.segments.s3fs.S3FileSystem')
The rule of thumb is that you always want to patch the object where it is looked up, i.e. in the module you are testing.
If you are able to use your debugger (or print to your console), you will (hopefully :)) see that the expected call count is recorded on the return_value of your mock object: its call_count attribute is set to 1. Pointing back to what I mentioned at the beginning of the answer, by making that change you will now be able to use the intended mock_s3fs.bulk_delete.assert_called().
Putting it together, your test with this modification runs as expected (note that you should also set up the expected behaviour of, and assert on, the other fs methods you call in there):
def test_purge_s3_files(mocker):
    args = Args()
    m = mocker.patch("app.segments.s3fs.S3FileSystem")
    mock_s3fs = m.return_value  # (or m())
    segment_obj = segments.Segmentation(environment='qa',
                                        verbose=True,
                                        args=args)
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called()
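For instance, here is a sketch extending that test to drive the "directory exists" branch and check what gets deleted; the file names are placeholders:
def test_purge_s3_files_deletes_listed_files(mocker):
    args = Args()
    m = mocker.patch("app.segments.s3fs.S3FileSystem")
    mock_s3fs = m.return_value
    # Make fs.exists() succeed so purge_s3_files lists and deletes files.
    mock_s3fs.exists.return_value = True
    mock_s3fs.ls.return_value = ['sample_bucket/sample/file1', 'sample_bucket/sample/file2']
    segment_obj = segments.Segmentation(environment='qa', verbose=True, args=args)
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called_once_with(
        ['sample_bucket/sample/file1', 'sample_bucket/sample/file2'])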
Python mock patching depends on where the mocked object is used, so you have to mock the function where it is imported, not where it is defined.
E.g.
app/r_executor.py

def r_execute(file):
    # do something
But the actual function call happens in another namespace ->
analyse/news.py

from app.r_executor import r_execute

def analyse():
    r_execute(file)
To mock this I should use
mocker.patch('analyse.news.r_execute')
# not mocker.patch('app.r_executor.r_execute')
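A minimal sketch of how that patch would be used in a test, assuming the module layout above and the pytest-mock mocker fixture:
from analyse import news

def test_analyse_uses_r_execute(mocker):
    # Patch the name that analyse() actually looks up at call time.
    mock_r_execute = mocker.patch('analyse.news.r_execute')
    news.analyse()
    mock_r_execute.assert_called_once()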
When trying to unit test the code snippet below, I run into the rate limit that the decorator wrapping calc_something imposes. It seems that I can't override RAND_RATE in my unit tests, since by the time I import the module containing my implementation the decorator has already wrapped my function. How can I solve that issue?
RAND_RATE = 20
RAND_PERIOD = 10

@limits(calls=RAND_RATE, period=RAND_PERIOD)
def calc_something():
    ...
Without knowing exactly what limits does, we don't know what (if anything) can be patched. Instead, leave the base implementation undecorated for use by unit tests; calc_something is then defined separately as the result of applying limits manually.
RAND_RATE = 20
RAND_PERIOD = 10

def _do_calc():
    ...

calc_something = limits(calls=RAND_RATE, period=RAND_PERIOD)(_do_calc)
Now in your tests, you can define any decorated version you like:
test_me = limits(10, 5)(my_module._do_calc)
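For example, a sketch of a test that bypasses the production rate limit by exercising the undecorated implementation directly (my_module and the assertion are placeholders):
import unittest
import my_module

class CalcSomethingTest(unittest.TestCase):
    def test_do_calc_without_rate_limit(self):
        # Calling the undecorated implementation keeps the test unthrottled.
        for _ in range(100):
            result = my_module._do_calc()
        self.assertIsNotNone(result)  # placeholder assertion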
I'm currently using nose to perform some tests, and when using generators with nose+xunit output you need to set the current function's __name__ attribute to properly control the name of the test in the xunit output (see here for example).
Since I don't want to hard-code the name of the function each time like this:
def my_function():
    for foo in bar:
        fn = lambda: some_generated_test(foo)
        fn.description = foo.get('name')
        my_function.__name__ = foo.get('name')
        yield fn
How can I programmatically reference the function and set __name__?
I tried sys._getframe(), which yields various properties about the current function (name etc.), and attempted to use it with setattr(<something>, "__name__", some_test_name), but that didn't work since I couldn't figure out which part of sys._getframe() references the function object.
Finally found a solution via SO: https://stackoverflow.com/a/4506081/1808861
A lot more complicated than I expected, but I can now:
def my_function():
    for foo in bar:
        fn = lambda: some_generated_test(foo)
        fn.description = foo.get('name')
        setattr(get_func(), "__name__", foo.get('name'))
        yield fn
The xunit output then contains the generator's data name entry.
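For reference, one possible shape of such a helper, adapted loosely from the linked answer. This is only a sketch and assumes the generator is a module-level function, so the currently executing function can be looked up by name in its module's globals:
import sys

def get_func():
    # Frame of the caller, i.e. the generator function currently executing.
    frame = sys._getframe(1)
    # Look the function object up by name in the caller's module namespace.
    return frame.f_globals[frame.f_code.co_name]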