I have a Python main program that imports another module (called actions) with multiple functions. The main program should run some things, obtain a string (e.g. 'goto(114)'), and then run actions.goto(114), where 114 is the argument to the function goto(x) in actions.
I've tried the obvious, which was just trying to run the string, but that did not work. I've also found the globals() method, which would work if goto(x) were inside my main module, and I've found the getattr method, but I haven't seen an example in which both the function name and the argument are passed, so I'm kind of lost here.
#main.py
import actions

def main():
    getc = 'goto(114)'
    result = actions.getc  # this should be actions.goto(114)
    print(result)

#actions.py
def goto(x):
    # code
    return something
The actual program gets the string from a .txt file that another program wrote; I just made the example this way so that it's simple to understand.
One option is to use __getattribute__ on the action module to get the function goto, then call it with the parsed argument. You'd need to parse the string like so:
import re
import action

getc = 'goto(114)'
# Capture the function name and the digits inside the parentheses
func, arg = re.search(r'(\w+)\((\d+)\)', getc).groups()
# __getattribute__ allows you to look up an attribute by a string name
f = action.__getattribute__(func)
# f is now the function action.goto, still uncalled;
# call it with the arg converted to int
result = f(int(arg))
The regex might need to be refined a bit, but it captures the name of the function and the argument wrapped in parentheses. __getattribute__ gets the function object from action and returns it uncalled, so you can call it later.
For multiple arguments you can leverage the ast library:
import re
import ast

# I'm going to use a sample class as a stand-in for action
class action:
    def goto(*args):
        print(args)

getc = 'goto(114, "abc", [1,2,3])'
func, args = re.search(r'(\w+)\((.*)\)', getc).groups()
# literal_eval safely converts the text into Python data
# structures, and fails if its argument is not a plain literal
args = ast.literal_eval('(%s)' % args)
f = getattr(action, func)
f(*args)
# prints (114, 'abc', [1, 2, 3])
The easier option (proceed with caution) would be to use eval:
cmd = 'action.%s' % getc
result = eval(cmd)
Note that this is considered bad practice in the Python community, though there are examples in the standard library that use it. It is not safe for unvalidated input, and is easily exploited if you don't control the source file.
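If the set of commands the .txt file may contain is known in advance, a whitelist dispatch table gives you the convenience of eval without its risks. A minimal sketch; the actions class here is a stand-in for the real module, and goto's body is placeholder behavior:

```python
import re

# Stand-in for the real actions module from the question.
class actions:
    @staticmethod
    def goto(x):
        return x * 2  # placeholder behavior

# Whitelist of callables the text file is allowed to invoke.
DISPATCH = {'goto': actions.goto}

def run_command(cmd):
    """Parse a 'name(arg)' string and call the whitelisted function."""
    name, arg = re.search(r'(\w+)\((\d+)\)', cmd).groups()
    if name not in DISPATCH:
        raise ValueError('unknown command: %s' % name)
    return DISPATCH[name](int(arg))

print(run_command('goto(114)'))  # 228
```

Anything not listed in the dict raises ValueError instead of being executed.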
This is a tricky question; I need to know one thing.
I have two functions with different functionality, and a third function that decides which of the two to use. That decision is passed in as an argument. The code below should make it clear.
# Present in project/testing/local/funtion_one.py
def testing_function_one(par1, par2, par3):
    """do something, maybe add all param values"""
    sum_params = par1 + par2 + par3
    return sum_params

# Present in project/testing/local/funtion_two.py
def testing_function_two(par1, par2, par3, par4, par5):
    """do something, maybe add all param values"""
    sum_params = par1 + par2 + par3 + par4 + par5
    return sum_params
# Present in project/testing/function_testing.py
def general_function_testing(function_name, function_path, funtion_params, extra_params):
    """
    function_name: could be testing_function_one or testing_function_two
    function_path: path to where the function is located.
    funtion_params: arguments for the called function.
    """
Now, given the parameter details above, how do I call the required function using its path, pass the params to that function, and handle passing the right number of params for that particular function?
I am looking for something like:
funt_res = function_name(funtion_params)
# After getting the result, do something with the other params.
new_res = funt_res * extra_params

if __name__ == "__main__":
    function_name = "testing_function_two"
    function_path = "project/testing/local/funtion_two.py"
    # funtion_params holds the values to pass to testing_function_two, e.g.:
    funtion_params = {"par1": 2, "par2": 2, "par3": 4, "par4": 6, "par5": 8}
    extra_params = 50
    res = general_function_testing(function_name, function_path,
                                   funtion_params, extra_params)
Tried:
# This part works only when calling_funtion_name is present in the
# same file; otherwise it raises an error. For me it should search
# the whole project, or a specified path.
f_res = globals()["calling_funtion_name"](*args, **kwargs)
print('f_res', f_res)
Anyone can try this one...
If the above is not clear, let me know and I will try to explain with other examples.
Though it is possible, in Python one will rarely need to pass a function by its name as a string, especially if the desired result is for the function to be called at its destination. The reason is that functions are themselves "first-class objects" in Python: they can be assigned to new variable names (which simply reference the function) and be passed as arguments to other functions.
So, if one wants to pass sin from the module math to be used as a numeric function inside some other code, instead of general_function_testing('sin', 'math', ...) one can simply write:
import math
general_function_testing(math.sin, ...)
And the function called with this parameter can simply use whatever name it has for the parameter to call the passed function:
def general_function_testing(target_func, ...):
    ...
    result = target_func(argument)
    ...
While it is possible to retrieve a function from its name and module name as strings, it's much more cumbersome due to nested packages: the code retrieving the function would have to take care of any "."s in the "function path", as you call it, make careful use of the built-in __import__ (which allows one to import a module given its name as a string, though it has a weird API), and then retrieve the function from the module with a getattr call. And all this just to get a reference to the function object itself, which could have been passed as a parameter from the very first moment.
The example above doing it via strings could be:
import sys

def general_function_testing(func_name, func_path, ...):
    ...
    __import__(func_path)  # imports the module, func_path being a dotted string
    module = sys.modules[func_path]  # retrieves the module object itself
    target_func = getattr(module, func_name)
    result = target_func(argument)
    ...
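For what it's worth, importlib.import_module wraps __import__'s weird API and handles dotted module paths directly. A sketch, using math.sin as a stand-in for the real module and function names:

```python
import importlib

def get_function(module_path, func_name):
    """Import a module by its dotted name and return one of its functions."""
    module = importlib.import_module(module_path)
    return getattr(module, func_name)

# Retrieve math.sin purely from strings, then call it normally.
sin = get_function('math', 'sin')
print(sin(0.0))  # 0.0
```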
Let's say you have the following code:
def set_args():
    # Set arguments
    parser = argparse.ArgumentParser()
    parser.add_argument("-test", help='test')
    return parser

def run_code():
    def fileCommand():
        print("the file type is\n")
        os.system("file " + filename).read().strip()

def main():
    parser = set_args()
    args = parser.parse_args()
What is the best way to call that fileCommand() function from main()?
I have the following code which of course doesn't work:
def main():
    parser = set_args()
    args = parser.parse_args()
    # If -test was given,
    if args.test:
        filename = args.test
        fileCommand()
So if you were to run python test.py -test test.txt, it should run the file command on it to get the file type.
I know if I keep messing around I'll get it one way or another, but I typically start to overcomplicate things, so it's harder down the line later. So what is the proper way to call nested functions?
Thanks in advance!
Python inner functions (or nested functions) have many use cases, but in none of them is accessing the nested function from outside an option. A nested function is created to have access to the parent function's variables, and calling it from outside of the parent function conflicts with that principle.
You can call the function from the parent function or make it a decorator:
Calling the nested function from the parent would be:
import os

def run_code(filename):
    def fileCommand():
        print("the file type is\n")
        # os.popen (unlike os.system) returns a file object whose
        # output can be read
        print(os.popen("file " + filename).read().strip())
    fileCommand()
If you describe more about your use case and the reason why you want to run the code like this, I can give more details about how you can implement your code.
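For completeness: if the inner function really must be reachable from outside, the parent can return it (as a closure) instead of calling it. A minimal sketch, with the shell call replaced by a plain string so it stays self-contained:

```python
def run_code(filename):
    def file_command():
        # Stand-in for the real os.popen("file " + filename) call.
        return "the file type of %s" % filename
    # Expose the closure to the caller instead of calling it here.
    return file_command

cmd = run_code("test.txt")
print(cmd())  # the file type of test.txt
```

The returned function still sees filename through the closure, so the parent's variables remain accessible.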
I am new to Python and exploring it. I have many different functions in one file and want to expose those functions to a client.
This is my app.py file:
import sys

def get_data(e_id, t_id):
    pass  # some code

def get_2_data(e_id, t_id):
    pass  # some code

def get_3_data(t_id):
    pass  # some code

if __name__ == '__main__':
    get_data(sys.argv[1], sys.argv[2])
Here I want to get a specific function's data. Currently I am running python app.py 1234 1.
The function called under main is the one that runs.
But I want get_3_data()'s data, or someone else may want to fetch get_2_data(). How do I call a particular function, and how do I expose those functions? Any suggestions? I don't want any HTTP call or API; I want to call it by method name.
The clean way to do this is by using the module argparse.
Here is how to do this:
import argparse

def get_data(args):
    e_id = args.e_id
    t_id = args.t_id
    print(f'Function get_data. e_id: {e_id}. t_id: {t_id}')

def get_data_2(args):
    e_id = args.e_id
    t_id = args.t_id
    print(f'Function get_data_2. e_id: {e_id}. t_id: {t_id}')

def get_data_3(args):
    t_id = args.t_id
    print(f'Function get_data_3. t_id: {t_id}')

if __name__ == '__main__':
    # Create the arguments parser
    argparser = argparse.ArgumentParser()
    # Create the subparsers
    subparsers = argparser.add_subparsers()

    # Add a parser for the first function
    get_data_parser = subparsers.add_parser('get_data')
    # Set the function to call
    get_data_parser.set_defaults(func=get_data)
    # Add its arguments
    get_data_parser.add_argument('e_id')
    get_data_parser.add_argument('t_id')

    # Add a parser for the second function
    get_data_2_parser = subparsers.add_parser('get_data_2')
    get_data_2_parser.set_defaults(func=get_data_2)
    get_data_2_parser.add_argument('e_id')
    get_data_2_parser.add_argument('t_id')

    # Add a parser for the third function
    get_data_3_parser = subparsers.add_parser('get_data_3')
    get_data_3_parser.set_defaults(func=get_data_3)
    get_data_3_parser.add_argument('t_id')

    # Get the arguments from the command line
    args = argparser.parse_args()
    # Call the selected function
    args.func(args)
As shown in this example, you will have to change your functions a little bit:
They will take only one argument, called args (created in the main block with args = argparser.parse_args()).
Then, in each function, you get the needed parameters by name (the names added with add_argument, as in get_data_3_parser.add_argument('t_id')). So, to get the argument called t_id, you write t_id = args.t_id.
argparse documentation: https://docs.python.org/3/library/argparse.html
argparse tutorial: https://docs.python.org/3/howto/argparse.html
From the documentation:
Many programs split up their functionality into a number of
sub-commands, for example, the svn program can invoke sub-commands
like svn checkout, svn update, and svn commit. Splitting up
functionality this way can be a particularly good idea when a program
performs several different functions which require different kinds of
command-line arguments. ArgumentParser supports the creation of such
sub-commands with the add_subparsers() method.
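The sub-command mechanism described above can also be exercised without a real command line by handing parse_args an explicit argv list. A stripped-down sketch, wiring up only the third function:

```python
import argparse

def get_data_3(args):
    # Return instead of print so the result is easy to check.
    return 'get_data_3 called with t_id=%s' % args.t_id

argparser = argparse.ArgumentParser()
subparsers = argparser.add_subparsers()

p3 = subparsers.add_parser('get_data_3')
p3.set_defaults(func=get_data_3)
p3.add_argument('t_id')

# Equivalent of running: python app.py get_data_3 1
args = argparser.parse_args(['get_data_3', '1'])
print(args.func(args))  # get_data_3 called with t_id=1
```

Passing a list to parse_args is also how argparse-based scripts are usually unit-tested.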
I am currently developing an automated function tester in Python.
The purpose of this application is to automatically test if functions are returning an expected return type based on their defined hints.
Currently I have two test functions (one that fails and one that passes), along with the rest of my code, in one file. My code uses the globals() function to scan the Python file for all existing functions, isolating user-made functions and excluding the default ones.
This initial iteration works well. Now I am trying to import the function and use it from another .py file.
When I run it in the other .py file, it still returns results for the functions from the original file instead of the new test cases in the new file.
Original File - The Main Application
from math import floor
import random

# declaring test variables
test_string = 'test_string'
test_float = float(random.random() * 10)
test_int = int(floor(random.random() * 10))

# Currently supported test types (input and return)
supported_types = ['int', 'float', 'str']
autotest_result = {}

def int_ret(number: int) -> str:
    string = "cactusmonster"
    return string

def false_test(number: int) -> str:
    floating = 3.2222
    return floating

def test_typematching():
    for name in list(globals()):
        if not name.startswith('__'):
            try:
                return_type = str(globals()[name].__annotations__['return'])
                autotest_result.update({name: return_type.replace("<class '", "").replace("'>", "")})
            except (AttributeError, KeyError, TypeError):
                continue
    for func in autotest_result:
        if autotest_result[func] is not None:
            this_func = globals()[func].__annotations__
            for arg in this_func:
                if arg != 'return':
                    input_type = str(this_func[arg]).replace("<class '", "").replace("'>", "")
                    for available in supported_types:
                        if available == input_type:
                            # look up the matching test variable, e.g. test_int
                            func_return = globals()[func](globals()["test_" + input_type])
                            actual_return_type = str(type(func_return)).replace("<class '", "").replace("'>", "")
                            if actual_return_type == autotest_result[func]:
                                autotest_result[func] = 'Passed'
                            else:
                                autotest_result[func] = 'Failed'
    return autotest_result
Test File - Where I Am Importing The "test_typematching()" Function
from auto_test import test_typematching

print(test_typematching())

def int_ret_newfile(number: int) -> str:
    string = "cactusmonster"
    # print(string)
    # return type(number)
    return string
Regardless of whether I run my main auto_test.py file or the tester.py file, I still get the following output:
{'int_ret': 'Passed', 'false_test': 'Failed'}
I am guessing this means that even when running the function from auto_test.py in my tester.py file, it still just scans itself. I would like it to scan the file from which the function is currently being called; for example, I expect it to test the int_ret_newfile function of tester.py.
Any advice or help would be much appreciated.
globals() is a bit of a misnomer. It gets the calling module's __dict__. (Python's true "global" namespace is actually builtins.)
How can globals() get its caller's __dict__ when it's defined in the builtins module? Here's a clue:
PyObject *
PyEval_GetGlobals(void)
{
    PyThreadState *tstate = _PyThreadState_GET();
    PyFrameObject *current_frame = _PyEval_GetFrame(tstate);
    if (current_frame == NULL) {
        return NULL;
    }
    assert(current_frame->f_globals != NULL);
    return current_frame->f_globals;
}
globals() is one of those builtins that's implemented in C (in CPython), but you get the gist. It reads the frame globals from the current stack frame, so in Python,
import inspect
inspect.currentframe().f_globals
would do the same thing as globals(). But you can't just put this in a function and expect it to work the same way, because calling it would add a stack frame, and that frame's globals depends on the function's .__globals__ attribute, which is set to the .__dict__ of the module that defined it. You want the caller's frame.
def myglobals():
    """Behaves like the builtin globals(), but written in Python!"""
    return inspect.currentframe().f_back.f_globals
You could do the same thing in test_typematching. But walking up the stack to the previous frame like that is a weird thing to do. It can be surprising and brittle. It amounts to passing the caller's frame as an implicit hidden argument, something that normally is not supposed to matter. Consider what happens if you wrap it in a decorator. Now which stack frame are you getting the globals from?
So really, you should be passing in globals() as an explicit argument to test_typematching(), like test_typematching(globals()). A defined and documented parameter would be much less confusing than implicit introspection. "Explicit is better than implicit".
Still, Python's standard library does do this kind of thing occasionally, with globals() itself being a notable example. And exec() can use the current namespace if you don't give it a different one. It's also how super() can now work without arguments in Python 3. So stack frame inspection does have precedent for this kind of use case.
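A sketch of the explicit-namespace approach; check_annotations and sample are illustrative names, not the asker's real code. The function inspects whatever namespace dict it is handed, so each calling module decides what gets scanned by passing its own globals():

```python
def check_annotations(namespace):
    """Report the declared return annotation of every annotated
    function found in the given namespace dict."""
    results = {}
    for name, obj in namespace.items():
        if callable(obj) and hasattr(obj, '__annotations__'):
            ret = obj.__annotations__.get('return')
            if ret is not None:
                results[name] = ret.__name__
    return results

def sample(x: int) -> str:  # hypothetical function under test
    return str(x)

# The caller passes its own globals() explicitly.
print(check_annotations(globals()))  # {'sample': 'str'}
```

With this shape, tester.py would call check_annotations(globals()) and get its own functions scanned, with no stack-frame tricks involved.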
I'd like to write a method in Python which dynamically reads a module and creates a list of all the functions in that module. Then I'd like to loop through this list and call each function. So far I have the following code:
import mymodule
from inspect import getmembers, isfunction

def call_the_functions():
    functions_list = [f for f in getmembers(mymodule) if isfunction(f[1])]
    for name, func in functions_list:
        result = func()
My problem is that my program is crashing because some of the functions require arguments. I'd like to do something like the following, but don't know how:
for name, func in functions_list:
    args = [""] * func.expectedNumberOfArguments()
    result = func(*args)
Am I going about this the right way? (I'm basically writing a unit test, and the first check is simply that the functions return an object of the right type, regardless of the arguments they are called with.)
Your approach is fundamentally flawed. If written carefully, the functions will reject arguments of invalid type by raising TypeError or asserting. Failing that, they will try to access an attribute or method on the parameter and promptly get an AttributeError.
It is futile to try to avoid writing unit tests that know something about the functions being tested.
You could use inspect.getargspec():
In [17]: def f(x, z=2, *args, **kwargs): pass
In [18]: inspect.getargspec(f)
Out[18]: ArgSpec(args=['x', 'z'], varargs='args', keywords='kwargs', defaults=(2,))
Whether it's meaningful to call functions that you know nothing about with arbitrary arguments is a different question...
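Note that inspect.getargspec was deprecated and finally removed in Python 3.11; inspect.signature is the modern replacement, and it also makes it easy to count the required parameters:

```python
import inspect

def f(x, z=2, *args, **kwargs):
    pass

sig = inspect.signature(f)
print(list(sig.parameters))  # ['x', 'z', 'args', 'kwargs']

# Parameters with no default that are not *args/**kwargs must be supplied.
required = [
    p for p in sig.parameters.values()
    if p.default is inspect.Parameter.empty
    and p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)
]
print(len(required))  # 1
```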
You can use the inspect module and getargspec. Here is a simple example:
import inspect

def myfunc(x):
    return x * 2

print(inspect.getargspec(myfunc))
gives:
ArgSpec(args=['x'], varargs=None, keywords=None, defaults=None)
Some functions might be generators, so your test strategy of calling them might not give you what you expect; inspect.isgeneratorfunction() will allow you to test for that.
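A quick illustration of that check:

```python
import inspect

def normal():
    return 1

def gen():
    yield 1  # the yield makes this a generator function

# Calling gen() returns a generator object rather than 1,
# which isgeneratorfunction lets you detect up front.
print(inspect.isgeneratorfunction(normal))  # False
print(inspect.isgeneratorfunction(gen))     # True
```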