Dynamic function creation using globals() in Python

I am trying to call a dynamic method created using exec(). After looking the function up via globals(), pytest treats the parameters as fixtures and fails with the error fixture 'Template_SI' not found.
Can someone help with how to pass dynamic parameters when calling globals()[function_name](params)?
import pytest
import input_csv

datalist = input_csv.csvdata()


def display():
    return 10 + 5


for data in datalist:
    functionname = data['TCID']
    parameters = [data['Template_name'], data['File_Type']]
    body = 'print(display())'

    def createfunc(name, *params, code):
        exec('''
@pytest.mark.regression
def {}({}):
    {}'''.format(name, ', '.join(params), code), globals(), globals())

    createfunc(functionname, data['Template_name'], data['File_Type'], code=body)
    templateName = data['Template_name']
    fileType = data['File_Type']
    globals()[functionname](templateName, fileType)

It looks like you're trying to automate the generation of lots of different tests based on different input data. If that's the case, using exec is probably not the best way to go.
pytest provides parametrized tests: https://docs.pytest.org/en/6.2.x/example/parametrize.html
which accomplish test generation by hiding all the details inside Metafunc.parametrize().
If you really want to generate the tests yourself, consider adapting Metafunc to your own purposes, or alternatively check out the unittest framework.
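For illustration, here is a minimal sketch of what the data-driven tests from the question could look like with @pytest.mark.parametrize; it assumes input_csv.csvdata() returns the same list of dicts with TCID, Template_name and File_Type keys as above:
import pytest
import input_csv

datalist = input_csv.csvdata()


def display():
    return 10 + 5


@pytest.mark.regression
@pytest.mark.parametrize(
    "template_name, file_type",
    [(d['Template_name'], d['File_Type']) for d in datalist],
    ids=[d['TCID'] for d in datalist],
)
def test_template(template_name, file_type):
    # each CSV row becomes one generated test case, named after its TCID
    assert display() == 15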

Related

Mocking a path which is given in a method - Python

How can I mock the path ".test/locations.yml"? It does not exist in the project where I run my test; it only exists in the CI environment.
When I test my function get_matches_mr, it fails because the location file is not found.
Do you have any idea?
Code
def read_location_file():
    locations_file_path = os.path.join(".test/location.yml")
    if not os.path.isfile(locations_file_path):
        raise RuntimeError("Location file not found: " + locations_file_path)
    with open(locations_file_path, "r") as infile:
        location_file = yaml.safe_load(infile.read())
        test_locations = location_file["paths"]
        return test_locations

def get_matches_mr(self):
    merge_request = MergeRequest()
    locations = self.read_location_file()
    data_locations = merge_request.get_matches(locations)
    return data_locations
As suggested in the comment, I would also say the best way to test such a scenario is to mock read_location_file. Mocking file system methods like os.path.join would tie the test to one particular implementation, which is bad practice: the unit test suite should not know about implementation details, only about the interfaces to be tested. Usually, in test-driven development, you write the test before the logic is implemented, so you would not even know that os.path.join is used.
The following code shows how to mock the read_location_file method. It assumes the class containing your two methods is called ClassToBeTested (replace with your actual class name):
import os.path

from class_to_test import ClassToBeTested


def test_function_to_test(tmpdir, monkeypatch):
    def mockreturn(self):
        # note the self parameter: the mock replaces a method on the class
        return [
            os.path.join(tmpdir, "sample/path/a"),
            os.path.join(tmpdir, "sample/path/b"),
            os.path.join(tmpdir, "sample/path/c"),
        ]

    monkeypatch.setattr(ClassToBeTested, 'read_location_file', mockreturn)
    c = ClassToBeTested()
    assert c.get_matches_mr()
Note: I use the fixtures tmpdir and monkeypatch, which are both pytest built-ins:
See this answer for some info about tmpdir (in the linked answer I explained tmp_path, which provides the same concept as tmpdir; the difference is that tmp_path returns a pathlib.Path object while tmpdir returns a py.path.local object).
monkeypatch is a pytest fixture that provides methods for mocking/patching objects.
Split your function into two parts:
Finding and opening the correct file.
Reading and parsing the opened file.
Your function only does the second part; the caller can be responsible for the first part.
def read_location_file(infile):
    location_file = yaml.safe_load(infile.read())
    test_locations = location_file["paths"]
    return test_locations
Your test code can then use something like io.StringIO to verify that your function can parse it correctly.
def test_read_location():
    assert read_location_file(io.StringIO("...")) == ...
Your production code will handle opening the file:
with open(location_file_path) as f:
    locations = read_location_file(f)
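For instance, a concrete version of that test could look like the following; the YAML body and expected paths are invented for illustration:
import io

def test_read_location():
    sample_yaml = "paths:\n  - sample/path/a\n  - sample/path/b\n"
    assert read_location_file(io.StringIO(sample_yaml)) == ["sample/path/a", "sample/path/b"]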

Python: know function is called from a unittest?

Is there a way to know in Python if a function is called from the context of a unittest execution or a debugging run?
For context, I am trying to unittest code that uses functions which perform a database call. In order to avoid database calls during the test of that function (DB calls are tested separately), I am trying to make the DB IO functions aware of their environment, so that they mock the DB when called within a unittest and log additional variables during a debug run.
My current approach is to read/write environment variables, but it seems a little bit of an overkill, and I think Python must have a better mechanism for that.
Edit:
Here is the example of a function I am trying to unittest:
from Database_IO import Database_read

def some_function(significance_level, time_range):
    data = Database_read(time_range)
    significant_data = data > significance_level
    return significant_data
In my opinion, if you write your function to behave in a different way when tested, you are not really testing it.
To test the function I'd mock.patch() the database object, and then check that it has been used correctly in your function.
The most difficult thing when you start using the mock library is finding the correct object to replace.
In your example, if your_module imports the Database_read object from the Database_IO module, you can test it with code similar to the following:
from unittest import mock  # on Python 2, use the external mock package

with mock.patch('your_module.Database_read') as dbread_mock:
    # prepare the dbread_mock
    dbread_mock.return_value = 10
    # execute a test call
    retval = some_function(3, 'some range')
    # check the result
    dbread_mock.assert_called_with('some range')

Find variables defined in other module (python)

I have a module testing system in Python where individual modules call something like:
class Hello(object):
    _DOC_ATTR = {'greeting': '''
        a greeting message.

        >>> h = Hello()
        >>> h.greeting = 'hi there'
        >>> h.greeting
        'hi there'
        '''}

    def __init__(self):
        self.greeting = "hello"

class Test(unittest.TestCase):
    pass  # tests here

if __name__ == '__main__':
    import tester
    tester.test(Test)
inside tester, I run the tests in Test along with a doctest on "__main__". This works great and has worked fine for a long time. Our specialized _DOC_ATTR dictionary documents individual attributes on the class when we build with Sphinx. However, doctests within this dictionary are not run. What I would like to do, within tester.test(), is run doctests on the values in each class's _DOC_ATTR as well.
The problem I'm having is finding a way, within tester.test(), to discover all the variables (specifically classes) defined in __main__. I've tried looking at relevant places in traceback to no avail. I thought that because I was passing in a class from __main__, namely __main__.Test, I'd be able to use Test's .__module__ attribute to get access to the local variables there, but I can't figure out how to do it.
I would rather not need to alter the call to tester.test(Test) since it's used in hundreds of modules and I've trained all the programmers working on the project to follow this paradigm. Thanks for any help!
I think that I may have found the answer:
import inspect

stacks = inspect.stack()
if len(stacks) > 1:
    outerFrame = stacks[1][0]
else:
    outerFrame = stacks[0][0]

localVariables = outerFrame.f_locals
for lv in list(localVariables.keys()):
    lvk = localVariables[lv]
    if inspect.isclass(lvk):
        docattr = getattr(lvk, '_DOC_ATTR', None)
        if docattr is not None:
            pass  # ... do something with docattr ...
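For completeness, one way to actually run the doctests stored in a _DOC_ATTR dictionary is sketched below, using the standard doctest module; the helper name and the globals handed to the parser are assumptions for illustration:
import doctest

def run_doc_attr_tests(cls):
    # hypothetical helper: run the doctest embedded in each _DOC_ATTR value
    parser = doctest.DocTestParser()
    runner = doctest.DocTestRunner()
    for attr_name, doc in getattr(cls, '_DOC_ATTR', {}).items():
        name = '%s.%s' % (cls.__name__, attr_name)
        # make the class available to the example code, e.g. Hello()
        test = parser.get_doctest(doc, {cls.__name__: cls}, name, None, 0)
        runner.run(test)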
Another solution: since we are passing the Test class in, and since a runTest function must be defined for it to run, one could also use the func_globals attribute on that function. Note that it cannot be a function inherited from a superclass, such as __init__, so this may have limited use for wider cases.
import inspect

localVariables = Test.runTest.func_globals
for lv in list(localVariables.keys()):
    lvk = localVariables[lv]
    if inspect.isclass(lvk):
        pass  # ... etc.
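Note that func_globals is the Python 2 attribute name; on Python 3 the same mapping is available as Test.runTest.__globals__.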

Is there a Python test framework suitable to run simulations

I am looking into characterising some software by running many simulations with different parameters.
Each simulation can essentially be treated as a test with different input parameters.
The test specification lists the different parameters:
param_a = 1
param_b = range(1, 10)
param_c = {'package_1': 1.1, 'params': [1, 2, 34]}
function = algo_1
and that would generate a list of tests:
['test-0': {'param_a': 1, 'param_b': 1, 'param_c': ...},
 'test-1': {'param_a': 1, 'param_b': 2, 'param_c': ...},
 ...]
and call the function with these parameters. The return value of the function is the test result, which should be reported in a friendly way, e.g.:
test-0: performance = X%, accuracy = Y%, runtime = Zsec ...
For example, Erlang's Common Test and Quickcheck are very suitable for this task, and provide HTML reporting of the tests.
Is there anything similar in Python?
You could give Robot Framework a chance. It is easy/native to call your Python code from Robot test cases, and you get nice HTML reports as well. If you get blocked, you can find help on SO (tag robotframework) or on the Robot User Mailing List.
Considering the lack of available packages, here is an implementation of a couple of the wanted features:
test definition:
A Python file that contains a config variable, which is a dictionary of static requirements, and a variables variable, which is a dictionary of varying requirements (stored as lists).
config = {'db': 'database_1'}
variables = {'threshold': [1, 2, 3, 4]}
The test specification is imported using imp, after parsing the arguments of the script into args:
testspec = imp.load_source("testspec", args.test)
test generation:
The list of tests is generated using a modified version of product from itertools:
from itertools import izip, product

def my_product(dicts):
    return (dict(izip(dicts, x)) for x in product(*dicts.itervalues()))

def generate_tests(testspec):
    return [dict(testspec.config.items() + x.items()) for x in my_product(testspec.variables)]
which returns:
[{'db': 'database_1', 'threshold': 1},
 {'db': 'database_1', 'threshold': 2},
 {'db': 'database_1', 'threshold': 3},
 {'db': 'database_1', 'threshold': 4}]
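As an aside, izip and itervalues no longer exist in Python 3; an equivalent of these helpers there (a sketch, not part of the original answer) would be:
from itertools import product

def my_product(dicts):
    return (dict(zip(dicts, x)) for x in product(*dicts.values()))

def generate_tests(testspec):
    return [{**testspec.config, **x} for x in my_product(testspec.variables)]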
dynamic module loading:
To load the correct module database_1 under the generic name db, I again used imp, in combination with the testspec, inside the class that uses the module:
dbModule = testspec['db']
global db
db = imp.load_source('db', 'config/' + dbModule + '.py')
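Note that imp is deprecated in modern Python; an importlib-based equivalent (an assumption, not part of the original answer) would be:
import importlib.util

def load_module(name, path):
    # load a module from an explicit file path, as imp.load_source did
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

db = load_module('db', 'config/' + dbModule + '.py')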
pretty printing:
not much here, just logging to terminal.

Most Pythonic way to provide function metadata at compile time?

I am building a very basic platform in the form of a Python 2.7 module. This module has a read-eval-print loop where entered user commands are mapped to function calls. Since I am trying to make it easy to build plugin modules for my platform, the function calls will be from my Main module to an arbitrary plugin module. I'd like a plugin builder to be able to specify the command that he wants to trigger his function, so I've been looking for a Pythonic way to remotely enter a mapping in the command->function dict in the Main module from the plugin module.
I've looked at several things:

Method name parsing: the Main module would import the plugin module and scan it for method names that match a certain format. For example, it might add the download_file_command(file) method to its dict as "download file" -> download_file_command. However, getting a concise, easy-to-type command name (say, "dl") requires that the function's name also be short, which isn't good for code readability. It also requires the plugin developer to conform to a precise naming format.

Cross-module decorators: decorators would let the plugin developer name his function whatever he wants and simply add something like @Main.register("dl"), but they would necessarily require that I both modify another module's namespace and keep global state in the Main module. I understand this is very bad.

Same-module decorators: using the same logic as above, I could add a decorator that adds the function's name to some command name -> function mapping local to the plugin module and retrieve the mapping to the Main module with an API call. This requires that certain methods always be present or inherited, though, and, if my understanding of decorators is correct, the function will only register itself the first time it is run and will unnecessarily re-register itself every subsequent time thereafter.
Thus, what I really need is a Pythonic way to annotate a function with the command name that should trigger it, and that way can't be the function's name. I need to be able to extract the command name->function mapping when I import the module, and any less work on the plugin developer's side is a big plus.
Thanks for the help, and my apologies if there are any flaws in my Python understanding; I'm relatively new to the language.
Building on the first part of @ericstalbot's answer, you might find it convenient to use a decorator like the following.
################################################################################
import functools

def register(command_name):
    def wrapped(fn):
        @functools.wraps(fn)
        def wrapped_f(*args, **kwargs):
            return fn(*args, **kwargs)
        wrapped_f.__doc__ += "(command=%s)" % command_name
        wrapped_f.command_name = command_name
        return wrapped_f
    return wrapped
################################################################################
@register('cp')
def copy_all_the_files(*args, **kwargs):
    """Copy many files."""
    print "copy_all_the_files:", args, kwargs
################################################################################
print "Command Name: ", copy_all_the_files.command_name
print "Docstring   : ", copy_all_the_files.__doc__
copy_all_the_files("a", "b", keep=True)
Output when run:
Command Name: cp
Docstring : Copy many files.(command=cp)
copy_all_the_files: ('a', 'b') {'keep': True}
User-defined functions can have arbitrary attributes, so you could specify that plug-in functions have an attribute with a certain name. For example:
def a():
    return 1

a.command_name = 'get_one'
Then, in your module you could build a mapping like this:
import inspect  # from the standard library
import plugin

mapping = {}
for v in plugin.__dict__.itervalues():
    if inspect.isfunction(v) and hasattr(v, 'command_name'):
        mapping[v.command_name] = v
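(On Python 3, iterate over plugin.__dict__.values() instead, since dict.itervalues() was removed.)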
To read about arbitrary attributes for user-defined functions, see the docs.
There are two parts to a plugin system:
Discover plugins
Trigger some code execution in a plugin
The proposed solutions in your question address only the second part.
There are many ways to implement both, depending on your requirements. For example, to enable plugins, they could be specified in a configuration file for your application:
plugins = some_package.plugin_for_your_app
          another_plugin_module
          # ...
To implement loading of the plugin modules:
import importlib

plugins = [importlib.import_module(name) for name in config.get("plugins")]
To get a dictionary mapping command name -> function:
commands = {name: func
            for plugin in plugins
            for name, func in plugin.get_commands().items()}
A plugin author can use any method to implement get_commands(), e.g., using prefixes or decorators; your main application shouldn't care, as long as get_commands() returns the command dictionary for each plugin.
For example, some_plugin.py (full source):
def f(a, b):
    return a + b

def get_commands():
    return {"add": f, "multiply": lambda x, y: x * y}
It defines two commands: add and multiply.
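As an illustration of the decorator approach mentioned above, a plugin could also build that dictionary internally; the register helper below is invented for this sketch:
_commands = {}

def register(name):
    # record each decorated function under its command name
    def deco(fn):
        _commands[name] = fn
        return fn
    return deco

@register("add")
def add(a, b):
    return a + b

@register("multiply")
def multiply(x, y):
    return x * y

def get_commands():
    # hand the main application a copy of the command dictionary
    return dict(_commands)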
