I'm currently using nose to perform some tests, and when using generators with nose+xunit output you need to set the current function's __name__ attribute to properly control the name of the test in the xunit output (see here for example).
Since I don't want to hard-code the name of the function each time like this:
def my_function():
    for foo in bar:
        fn = lambda: some_generated_test(foo)
        fn.description = foo.get('name')
        my_function.__name__ = foo.get('name')
        yield fn
How can I programmatically get a reference to the current function and set its __name__?
I tried sys._getframe(), which exposes various properties of the current frame (the name and so on), and attempted setattr(*something*, "__name__", some_test_name), but that didn't work because I couldn't figure out which part of sys._getframe() actually references the function object.
Finally found a solution via SO: https://stackoverflow.com/a/4506081/1808861
A lot more complicated than I expected, but I can now:
def my_function():
    for foo in bar:
        fn = lambda: some_generated_test(foo)
        fn.description = foo.get('name')
        setattr(get_func(), "__name__", foo.get('name'))
        yield fn
The xunit output then contains the generator's data name entry.
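For reference, the get_func() helper from the linked answer locates the calling function object by walking back from the current stack frame; a minimal sketch of the idea (assuming CPython, and not a verbatim copy of the linked code):

import gc
import sys

def get_func():
    """Return the function object of the caller, located via its code object."""
    caller_code = sys._getframe(1).f_code
    # Functions keep their code object in __code__, so search the code
    # object's referrers for the function that owns it.
    for obj in gc.get_referrers(caller_code):
        if getattr(obj, "__code__", None) is caller_code:
            return obj
    raise RuntimeError("could not locate the calling function object")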
This is a tricky question; I need to know one thing.
I have two functions with different functionality, plus a third function that decides which of the two to use. That decision is passed as an argument. The code below should make it clearer.
# Present in project/testing/local/funtion_one.py
def testing_function_one(par1, par2, par3):
    """Do something, e.g. add all parameter values."""
    sum_parms = par1 + par2 + par3
    return sum_parms
# Present in project/testing/local/funtion_two.py
def testing_function_two(par1, par2, par3, par4, par5):
    """Do something, e.g. add all parameter values."""
    sum_parms = par1 + par2 + par3 + par4 + par5
    return sum_parms
# Present in project/testing/function_testing.py
def general_function_testing(function_name, function_path, function_params, extra_params):
    """
    function_name: any function, e.g. testing_function_one or testing_function_two
    function_path: path to the file where the function is located
    function_params: arguments for the function being called
    extra_params: used on the result after the call
    """
Now, based on the parameter details above, how do I call the required function using its path, pass the parameters to it, and handle the varying number of parameters for that particular function?
I am looking for something like:
    funt_res = function_name(function_params)
    # After getting the result, do something with the other params.
    new_res = funt_res * extra_params

if __name__ == "__main__":
    function_name = "testing_function_two"
    function_path = "project/testing/local/funtion_two.py"
    # Values to pass to testing_function_two, e.g.:
    function_params = {"par1": 2, "par2": 2, "par3": 4, "par4": 6, "par5": 8}
    extra_params = 50
    res = general_function_testing(function_name, function_path,
                                   function_params, extra_params)
Tried:
# This part works only when the called function is present in the same
# file; otherwise it raises an error. I need it to look up the function
# across the whole project, or in a specified path.
f_res = globals()["calling_function_name"](*args, **kwargs)
print('f_res', f_res)
Can anyone help with this? If the above is not clear, let me know and I will try to explain with other examples.
Though it is possible, in Python you will rarely need to pass a function by its name as a string, especially if the goal is simply for the function to be called at its destination. The reason is that functions are themselves first-class objects in Python: they can be assigned to new variable names (which simply reference the function) and passed as arguments to other functions.
So, if one wants to pass sin from the math module to be used as a numeric function inside some other code, instead of general_function_testing('sin', 'math', ...) one can simply write:
import math
general_function_testing(math.sin, ...)
And the function called with this parameter can simply use whatever name it has for that parameter to call the passed function:
def general_function_testing(target_func, ...):
    ...
    result = target_func(argument)
    ...
While it is possible to retrieve a function from its name and module name given as strings, it is much more cumbersome due to nested packages: the code retrieving the function would have to take care of any "."s in the "function path" as you call it, make careful use of the built-in __import__ (which allows one to import a module given its name as a string, though it has a weird API), and then retrieve the function from the module with a getattr call. And all that just to get a reference to the function object itself, which could have been passed as a parameter from the very first moment.
The example above, done via strings, could be:
import sys

def general_function_testing(func_name, func_path, ...):
    ...
    __import__(func_path)  # imports the module where the function lives; func_path is a string
    module = sys.modules[func_path]  # retrieves the module object itself
    target_func = getattr(module, func_name)
    result = target_func(argument)
    ...
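If the "function path" can be given as a dotted module path (an assumption on my part, e.g. project.testing.local.funtion_two rather than a file-system path), importlib.import_module handles nested packages more cleanly than __import__; a rough sketch:

import importlib

def general_function_testing(function_name, module_path, function_params, extra_params):
    # import_module accepts a dotted path and returns the module object directly.
    module = importlib.import_module(module_path)
    target_func = getattr(module, function_name)
    # Unpack the parameter dict as keyword arguments.
    funt_res = target_func(**function_params)
    # After getting the result, do something with the other params.
    return funt_res * extra_params

res = general_function_testing(
    "testing_function_two",
    "project.testing.local.funtion_two",
    {"par1": 2, "par2": 2, "par3": 4, "par4": 6, "par5": 8},
    50,
)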
I am trying to figure out how to check whether a method of a class is called inside another method.
Following is the code for the unit test:
# test_unittes.py file
def test_purge_s3_files(mocker):
    args = Args()
    mock_s3fs = mocker.patch('s3fs.S3FileSystem')
    segment_obj = segments.Segmentation()
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called()
Inside the purge_s3_files method, bulk_delete is called, but the assertion fails, saying the method was expected to be called and was not:
mocker = <pytest_mock.plugin.MockerFixture object at 0x7fac28d57208>

    def test_purge_s3_files(mocker):
        args = Args()
        mock_s3fs = mocker.patch('s3fs.S3FileSystem')
        segment_obj = segments.Segmentation(environment='qa',
                                            verbose=True,
                                            args=args)
        segment_obj.purge_s3_files('sample')
>       mock_s3fs.bulk_delete.assert_called()
E       AssertionError: Expected 'bulk_delete' to have been called.
I don't know how to test this and how to assert that the method is called.
Below you can find the method being tested:
# segments.py file
import s3fs

def purge_s3_files(self, prefix=None):
    bucket = 'sample_bucket'
    files = []
    fs = s3fs.S3FileSystem()
    if fs.exists(f'{bucket}/{prefix}'):
        files.extend(fs.ls(f'{bucket}/{prefix}'))
    else:
        print(f'Directory {bucket}/{prefix} does not exist in s3.')
    print(f'Purging S3 files from {bucket}/{prefix}.')
    print(*files, sep='\n')
    fs.bulk_delete(files)
The problem you are facing is that the mock you are setting up replaces the class, but the code under test calls methods on an instance, so you need to make your assertions against the instance mock rather than the class mock. In short, this should fix your problem (there might be another issue, explained further below):
m = mocker.patch('s3fs.S3FileSystem')
mock_s3fs = m.return_value  # (or m())
There might be a second problem: you may not be referencing the right path to what you want to mock.
Depending on what is considered your project root (going by your comment), your patch target would need to be referenced accordingly:
mocker.patch('app.segments.s3fs.S3FileSystem')
The rule of thumb is that you always want to patch where the object is used in the code under test, not where it is defined.
If you inspect the mock in a debugger (or print it to your console), you will (hopefully :)) see that the expected call count lives on the return_value of your mock object; running your code, its call_count attribute is set to 1. Pointing back to what I mentioned at the beginning of the answer, by making that change you will now be able to use the intended mock_s3fs.bulk_delete.assert_called().
Putting it together, your working test with modification runs as expected (note, you should also set up the expected behaviour and assert the other fs methods you are calling in there):
def test_purge_s3_files(mocker):
    m = mocker.patch("app.segments.s3fs.S3FileSystem")
    mock_s3fs = m.return_value  # (or m())
    segment_obj = segments.Segmentation(environment='qa',
                                        verbose=True,
                                        args=args)
    segment_obj.purge_s3_files('sample')
    mock_s3fs.bulk_delete.assert_called()
Python mocking depends on where the mocked object is looked up, so you have to patch the function where it is imported and used, not where it is defined.
E.g.:
app/r_executor.py
def r_execute(file):
    # do something
    ...
But the actual function call happens in another namespace:
analyse/news.py
from app.r_executor import r_execute

def analyse(file):
    r_execute(file)
To mock this I should use
mocker.patch('analyse.news.r_execute')
# not mocker.patch('app.r_executor.r_execute')
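A minimal test sketch of that rule (assuming pytest-mock's mocker fixture and the hypothetical analyse/news.py layout above; the test file name and argument are made up):

# test_news.py (hypothetical file name)
from analyse import news

def test_analyse_calls_r_execute(mocker):
    # Patch the name that analyse.news looks up at call time.
    mock_r_execute = mocker.patch('analyse.news.r_execute')
    news.analyse('some_file.R')
    mock_r_execute.assert_called_once_with('some_file.R')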
I am currently developing an automated function tester in Python.
The purpose of this application is to automatically test if functions are returning an expected return type based on their defined hints.
Currently I have two test functions (one which fails and one which passes), along with the rest of my code, in one file. My code uses the globals() function to scan the Python file for all existing functions, isolating user-defined functions and excluding the default ones.
This initial iteration works well. Now I am trying to import the function and use it from another .py file.
When I run it in the other .py file it still returns results for the functions from the original file instead of the new test-cases in the new file.
Original File - The Main Application
from math import floor
import random

# Declaring test variables
test_string = 'test_string'
test_float = float(random.random() * 10)
test_int = int(floor(random.random() * 10))

# Currently supported test types (input and return)
supported_types = ['int', 'float', 'str']
autotest_result = {}

def int_ret(number: int) -> str:
    string = "cactusmonster"
    return string

def false_test(number: int) -> str:
    floating = 3.2222
    return floating

def test_typematching():
    # Record the annotated return type of every user-defined function.
    for name in list(globals()):
        if not name.startswith('__'):
            try:
                return_type = str((globals()[name].__annotations__)['return'])
                autotest_result.update({name: return_type.replace("<class '", "").replace("'>", "")})
            except:
                continue
    # Call each function with a matching test value and compare the actual
    # return type against the annotated one.
    for func in autotest_result:
        if autotest_result[func] is not None:
            this_func = globals()[func].__annotations__
            for arg in this_func:
                if arg != 'return':
                    input_type = str(this_func[arg]).replace("<class '", "").replace("'>", "")
                    for available in supported_types:
                        if available == input_type:
                            func_return = globals()[func]("test_" + input_type)
                            actual_return_type = str(type(func_return)).replace("<class '", "").replace("'>", "")
                            if actual_return_type == autotest_result[func]:
                                autotest_result[func] = 'Passed'
                            else:
                                autotest_result[func] = 'Failed'
    return autotest_result
Test File - Where I Am Importing The "test_typematching()" Function
from auto_test import test_typematching

print(test_typematching())

def int_ret_newfile(number: int) -> str:
    string = "cactusmonster"
    # print(string)
    # return type(number)
    return string
Regardless of whether I run my main "auto_test.py" file or the "tester.py" file, I still get the following output:
{'int_ret': 'Passed', 'false_test': 'Failed'}
I am guessing this means that even when I call the function from tester.py, it still just scans auto_test.py itself. I would like it to scan the file from which the function is currently being called; for example, I expect it to test the int_ret_newfile function of tester.py.
Any advice or help would be much appreciated.
globals() is a bit of a misnomer. It gets the calling module's __dict__. (Python's true "global" namespace is actually builtins.)
How can globals() get its caller's __dict__ when it's defined in the builtins module? Here's a clue:
PyObject *
PyEval_GetGlobals(void)
{
    PyThreadState *tstate = _PyThreadState_GET();
    PyFrameObject *current_frame = _PyEval_GetFrame(tstate);
    if (current_frame == NULL) {
        return NULL;
    }
    assert(current_frame->f_globals != NULL);
    return current_frame->f_globals;
}
globals() is one of those builtins that's implemented in C (in CPython), but you get the gist. It reads the frame globals from the current stack frame, so in Python,
import inspect
inspect.currentframe().f_globals
would do the same thing as globals(). But you can't just put this in a function and expect it to work the same way, because calling it would add a stack frame, and that frame's globals depend on the function's .__globals__ attribute, which is set to the .__dict__ of the module that defined it. You want the caller's frame.
def myglobals():
    """Behaves like the builtin globals(), but written in Python!"""
    return inspect.currentframe().f_back.f_globals
You could do the same thing in test_typematching. But walking up the stack to the previous frame like that is a weird thing to do. It can be surprising and brittle. It amounts to passing the caller's frame as an implicit hidden argument, something that normally is not supposed to matter. Consider what happens if you wrap it in a decorator. Now which stack frame are you getting the globals from?
So really, you should be passing in globals() as an explicit argument to test_typematching(), like test_typematching(globals()). A defined and documented parameter would be much less confusing than implicit introspection. "Explicit is better than implicit".
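For example, a minimal sketch of that change (the namespace parameter name is my invention, and the snippet only shows the namespace plumbing, not the full pass/fail logic):

def test_typematching(namespace=None):
    # Fall back to this module's own globals if no namespace is passed in.
    if namespace is None:
        namespace = globals()
    results = {}
    for name, obj in namespace.items():
        if name.startswith('__') or not callable(obj):
            continue
        return_annotation = getattr(obj, '__annotations__', {}).get('return')
        if return_annotation is not None:
            results[name] = return_annotation
    return results

Then tester.py would call test_typematching(globals()), and the scan happens over tester.py's own names.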
Still, Python's standard library does do this kind of thing occasionally, with globals() itself being a notable example. And exec() can use the current namespace if you don't give it a different one. It's also how super() can now work without arguments in Python 3. So stack frame inspection does have precedent for this kind of use case.
I have a module testing system in Python where individual modules call something like:
class Hello(object):
    _DOC_ATTR = {'greeting': '''
        a greeting message.

        >>> h = Hello()
        >>> h.greeting = 'hi there'
        >>> h.greeting
        'hi there'
        '''}

    def __init__(self):
        self.greeting = "hello"

class Test(unittest.TestCase):
    pass  # tests here

if __name__ == '__main__':
    import tester
    tester.test(Test)
Inside tester, I run the tests in Test along with a doctest on "__main__". This works great and has worked fine for a long time. Our specialized _DOC_ATTR dictionary documents individual attributes on the class when we build with Sphinx. However, doctests within this dictionary are not run. What I would like to do, within tester.test(), is run doctests on the values in each class's _DOC_ATTR as well.
The problem that I'm having is trying to find a way within tester.test() to figure out all the variables (specifically classes) defined in __main__. I've tried looking at relevant places in traceback to no avail. I thought that because I was passing in a class from __main__, namely __main__.Test that I'd be able to use the .__module__ from Test to get access to the local variables there, but I can't figure out how to do it.
I would rather not need to alter the call to tester.test(Test) since it's used in hundreds of modules and I've trained all the programmers working on the project to follow this paradigm. Thanks for any help!
I think that I may have found the answer:
import inspect

stacks = inspect.stack()
if len(stacks) > 1:
    outerFrame = stacks[1][0]
else:
    outerFrame = stacks[0][0]

localVariables = outerFrame.f_locals
for lv in list(localVariables.keys()):
    lvk = localVariables[lv]
    if inspect.isclass(lvk):
        docattr = getattr(lvk, '_DOC_ATTR', None)
        if docattr is not None:
            pass  # ... do something with docattr ...
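To actually run the doctests stored in _DOC_ATTR, the elided "do something with docattr" step could use doctest's parser and runner; a sketch using the names from the snippet above (this would sit inside that innermost if block):

import doctest

parser = doctest.DocTestParser()
runner = doctest.DocTestRunner(verbose=False)
for attr_name, attr_doc in docattr.items():
    # Build a DocTest from the attribute's documentation string and run it,
    # using the caller's namespace as globals so names like Hello resolve.
    test = parser.get_doctest(attr_doc, localVariables,
                              name='%s.%s' % (lvk.__name__, attr_name),
                              filename=None, lineno=0)
    runner.run(test)
runner.summarize()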
Another solution: since we are passing the Test class in, and since a runTest function needs to be defined for it to run, one could also use func_globals on that function (its __globals__ attribute in Python 3). Note that it cannot be a function inherited from a superclass, such as __init__, so this may have limited use in wider cases.
import inspect

localVariables = Test.runTest.func_globals  # Test.runTest.__globals__ in Python 3
for lv in list(localVariables.keys()):
    lvk = localVariables[lv]
    if inspect.isclass(lvk):
        pass  # ... etc.
I'm new to Python and I'm trying to mock a function only when a specific argument is passed. If anything other than the desired argument is passed, I'd like to call the original function instead.
In Python 2.7 I tried something like this:
from foo import config

def test_something(self):
    original_config = config  # config is a module.
    def side_effect(key):
        if key == 'expected_argument':
            return mocked_result
        else:
            return original_config.get(key)
    config.get = Mock(side_effect=side_effect)
    # actually_test_something...
It won't work because original_config is not a copy of config; it references the same module, so the side effect ends up calling the mock again in infinite recursion. I could try cloning the config module instead, but that seems like overkill.
Is there something similar to RSpec's mocks that I could use? For example:
obj.stub(:message).with('an_expected_argument').and_return('a_mocked_result')
Any help would be appreciated. Thanks.
You'd need to store a reference to the unpatched function first:
def test_something(self):
    original_config_get = config.get
    def side_effect(key):
        if key == 'expected_argument':
            return mocked_result
        else:
            return original_config_get(key)
    config.get = Mock(side_effect=side_effect)
Here original_config_get references the original function before you replaced it with a Mock() object.
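An alternative sketch using patch.object as a context manager, so the original config.get is restored automatically when the test finishes (assuming Python 3's unittest.mock, or the mock backport on 2.7; mocked_result is just a placeholder value, and config is the module imported from foo as in the question):

from unittest.mock import patch

def test_something(self):
    original_config_get = config.get
    mocked_result = 'a_mocked_result'  # placeholder for illustration

    def side_effect(key):
        # Only the expected argument is faked; everything else falls
        # through to the real, unpatched config.get.
        if key == 'expected_argument':
            return mocked_result
        return original_config_get(key)

    with patch.object(config, 'get', side_effect=side_effect):
        assert config.get('expected_argument') == mocked_result
        # Other keys still hit the original implementation here.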