I'd like to have a set of functions that can be called for specific input types. To give a brief example, let's say I have the following description in a JSON:
{
"type": "location",
"precision": 100
}
and I have 2 functions such as
fun1(type, param)  # Here param is intended as the precision
fun2(type, param)  # Here param is intended as another variable
However, I want the description to match only fun1, which has the correct type and param. The Python type of param can be the same for both functions, but with a different meaning. Moreover, there can be multiple params to check.
Does Python have something handy to handle this?
Let's suppose you have already loaded your JSON into a dict in Python.
There are many approaches to this job, so I will write down only a few of them here and demonstrate just some of those.
Function decorators that verify whether the dictionary contains the right values before calling the function. -- In my opinion, this approach is best for short scripts.
If/else chain over your types -- I think this approach is best for long-term maintenance (a sketch is included below).
A check at the beginning of each function deciding whether it should run. -- For when you don't care about structure and just want short code in the shortest possible time.
A map from type to the correct function -- this approach gives good performance.
Demonstration of first approach
First, we have to make a function that generates decorators.
def dec_gen(the_type: str):
    def dec(func):
        def inner(d: dict):
            if d.get('type') == the_type:
                func(d)
        return inner
    return dec
Let's change fun1 a little bit.
@dec_gen('location')
def fun1(d: dict):
    ...your code...
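A quick hypothetical check of the result (my example; the body runs only for the matching type):

fun1({'type': 'location', 'precision': 100})   # body runs
fun1({'type': 'other', 'precision': 100})      # body is skipped

Demonstration of second approach
For completeness, a minimal sketch of the if/else chain (my own illustration, not part of the original answer; fun2 and the 'other' type are hypothetical):

def dispatch(d: dict):
    if d.get('type') == 'location':
        fun1(d)
    elif d.get('type') == 'other':
        fun2(d)
    else:
        raise ValueError('unknown type: %r' % d.get('type'))

dispatch({'type': 'location', 'precision': 100})  # routes to fun1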
Demonstration of third approach
Let's change fun1 a little bit (again).
def fun1(d: dict):
    if d.get('type') == 'location':
        ...your code...
If you write such a header for all of fun1, fun2, ..., funn, you can just pass the dictionary to each of them, and the body will run only in the few that match.
Of course, this one can get terribly slow for many different types and large N, but there is no requirement on speed in your question.
Demonstration of fourth approach
See the other answer.
The easiest way is probably to use a dictionary for the mapping and (optionally) associate every function with an appropriate attribute to keep track:
# untested
def func1(data, param):
    pass  # do something

func1.type = "location"

def func2(data, param):
    pass  # do something

func2.type = "something_else"

funcs = [func1, func2]
type_func_map = {func.type: func for func in funcs}

# apply the function to data:
def apply_matching_func(data, param):
    func = type_func_map.get(data["type"])
    if func:
        return func(data, param)
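For illustration, a hypothetical call with the JSON from the question (forwarding the precision as param is my assumption of how it would be used):

data = {"type": "location", "precision": 100}
apply_matching_func(data, data["precision"])  # dispatches to func1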
Related
Given the following code,
def myfunc(a=None, b=None, c=None, **kw):
    func(arga=a, argb=b, **kw)
    # do something with c

def func(arga=None, argb=None, argc=None):
    ...
Can I replicate part of the signature of func, namely the missing args, without imitating every missing arg of func manually?
To put it more simply, I want to see argc among the keywords of myfunc, so that myfunc? would be different: it would show myfunc(a=None, b=None, c=None, argc=None).
functools.wraps allows for wrapping a complete function, and using partial can subtract args, but I don't know how to add them.
Yes, it is possible, though not trivial.
Python's introspection capabilities allow you to check all parameters the target function declares, and it is possible to build a new function programmatically that will include those attributes automatically.
I have written this for a project of mine, and had exposed the relevant code as my answer here: Signature-changing decorator: properly documenting additional argument
I will not mark this as duplicate, since the other question is more worried about documenting the new function.
If you want to give it a try with your code, maybe with something simpler, you can check the inspect.signature call from the standard library, which lets you discover everything about the parameters and default arguments of the target function.
Building a new function from this information is a bit trickier, but possible - one can always resort to an exec call, which can create a new function from a string template. The answer linked above follows this line.
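As a minimal sketch (Python 3; my own illustration, not the code from the linked answer), you can graft func's extra parameters onto myfunc's reported signature via the __signature__ attribute, which inspect.signature and IPython's ? both honor:

import inspect

def func(arga=None, argb=None, argc=None):
    pass

def myfunc(a=None, b=None, c=None, **kw):
    func(arga=a, argb=b, **kw)

# keep myfunc's own parameters, minus the **kw catch-all
sig_my = inspect.signature(myfunc)
own = [p for p in sig_my.parameters.values()
       if p.kind is not inspect.Parameter.VAR_KEYWORD]
# pull in func's parameters that myfunc does not already forward explicitly
extra = [p.replace(kind=inspect.Parameter.KEYWORD_ONLY)
         for name, p in inspect.signature(func).parameters.items()
         if name not in ('arga', 'argb')]
myfunc.__signature__ = sig_my.replace(parameters=own + extra)

print(inspect.signature(myfunc))  # (a=None, b=None, c=None, *, argc=None)

Note this only changes what introspection reports; the extra argument still reaches func through **kw at call time.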
I'm not sure what is being asked here either, but I have some alternative code to functools.partial that might be adapted.
Edit: The difference here from partial is that the mkcall argument is a string rather than a series of arguments. This string can then be formatted and analysed according to whatever requirements apply before the target function is called.
def mkcall(fs, globals=None, locals=None):
    class func:
        def __init__(f, fcnm=None, params=None, globals=None, locals=None):
            f.nm = fcnm
            f.pm = params
            f.globals = globals
            f.locals = locals
        def __call__(f):
            s = f.nm + f.pm
            return eval(s, f.globals, f.locals)  # return the evaluated result
    if '(' in fs:
        funcn, lbr, r = fs.partition('(')
        tp = lbr + r
        newf = func(funcn, tp, globals, locals)
        callf = newf.__call__
    else:
        callf = eval(fs, globals, locals)
    return callf

# call examples
# mkcall("func(arg)")
# mkcall("func")
Issue: I have 2 functions that both require the same nested functions to operate so they're currently copy-pasted into each function. These functions cannot be combined as the second function relies on calling the first function twice. Unnesting the functions would result in the addition of too many parameters.
Question: Is it better to run the nested functions in the first function and append their values to an object to be fed into the 2nd function, or is it better to copy and paste the nested functions?
Example:
def func_A(thing):
    def sub_func_A(thing):
        thing += 1
        return thing
    return sub_func_A(thing)

def func_B(thing):
    def sub_func_B(thing):
        thing += 1
        return thing
    val_A, val_B = func_A(5), func_A(5)
    return sub_func_B(val_A), sub_func_B(val_B)
Imagine these functions couldn't be combined, and the nested function relied on so many parameters that moving it outside and calling it would be too cluttered.
The "better option" depends on a few factors -:
The type of optimization you want to achieve.
The time taken by the functions to execute.
If the optimization to be achieved here is the time taken to execute the second function, then it depends on how long the nested function takes to run. If that time is less than the time taken to store its output when it is first called by the first function, then it is better to copy-paste the nested functions.
If, on the other hand, the nested function takes longer to execute than storing its output does, then it is better to execute it the first time and store its output for future use.
Further, as mentioned by @DarylG in the comments, a class-based approach can also be used, wherein the nested function (subfunction) is a private method (accessible only from the class's own components), while the two functions (func_A and func_B) are public and can thus be used and accessed widely from the outside. Implemented in code, it might look something like this:
class MyClass:
    def __init__(self):
        # set up any shared state here
        ...
    def __subfunc(self, thing):
        # PRIVATE SUBFUNC
        thing += 1
        return thing
    def func_A(self, thing):
        # PUBLIC FUNC A
        return self.__subfunc(thing)
    def func_B(self, thing):
        # PUBLIC FUNC B
        val_A, val_B = self.func_A(5), self.func_A(5)
        return self.__subfunc(val_A), self.__subfunc(val_B)
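A quick check of how it would be used (my example, following the numbers above):

obj = MyClass()
print(obj.func_B(0))  # (7, 7): func_A(5) -> 6, then __subfunc(6) -> 7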
There have been several discussions on "returning multiple values in Python", e.g. 1, 2.
This is not the "multiple-value-return" pattern I'm trying to find here.
No matter what you use (tuple, list, dict, an object), it is still a single return value and you need to parse that return value (structure) somehow.
The real benefit of multiple return values is in the upgrade process. For example, originally you have:
def func():
    return 1

print func() + func()
Then you decided that func() can return some extra information but you don't want to break previous code (or modify them one by one). It looks like
def func():
    return 1, "extra info"

value, extra = func()
print value  # 1 (expected)
print extra  # extra info (expected)
print func() + func()  # (1, 'extra info', 1, 'extra info') (not expected; we want the previous behaviour, i.e. 2)
The previous code (func() + func()) is broken; you have to fix it.
I don't know whether I made the question clear... You can see the CLISP example. Is there an equivalent way to implement this pattern in Python?
EDIT: I put the above clisp snippets online for your quick reference.
Let me put two use cases here for multiple return value pattern. Probably someone can have alternative solutions to the two cases:
Better support for smooth upgrades. This is shown in the above example.
Simpler client-side code. See the alternative solutions I have so far below. Using exceptions can make the upgrade process smooth, but it costs more code.
Current alternatives: (they are not "multi-value-return" constructions, but they can be engineering solutions that satisfy some of the points listed above)
tuple, list, dict, an object. As is said, you need certain parsing on the client side, e.g. if ret.success == True: blabla, and you need ret = func() before that. It's much cleaner to write if func() == True: blabla.
Use Exception. As is discussed in this thread, when the "False" case is rare, it's a nice solution. Even in this case, the client side code is still too heavy.
Use an arg, e.g. def func(main_arg, detail=[]). The detail can be a list or dict or even an object depending on your design. func() returns only the original simple value; details go into the detail argument. The problem is that the client needs to create a variable before the invocation in order to hold the details.
Use a "verbose" indicator, e.g. def func(main_arg, verbose=False). When verbose == False (default; and the way client is using func()), return original simple value. When verbose == True, return an object which contains simple value and the details.
Use a "version" indicator. Same as "verbose" but we extend the idea there. In this way, you can upgrade the returned object for multiple times.
Use global detail_msg. This is like the old C-style error_msg. In this way, functions can always return simple values. The client side can refer to detail_msg when necessary. One can put detail_msg in global scope, class scope, or object scope depending on the use cases.
Use generator. yield simple_return and then yield detailed_return. This solution is nice in the callee's side. However, the caller has to do something like func().next() and func().next().next(). You can wrap it with an object and override the __call__ to simplify it a bit, e.g. func()(), but it looks unnatural from the caller's side.
Use a wrapper class for the return value. Override the class's methods to mimic the behaviour of original simple return value. Put detailed data in the class. We have adopted this alternative in our project in dealing with bool return type. see the relevant commit: https://github.com/fqj1994/snsapi/commit/589f0097912782ca670568fe027830f21ed1f6fc (I don't have enough reputation to put more links in the post... -_-//)
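For illustration, a minimal sketch of that eighth alternative (my own reconstruction, not the project's actual code): since bool cannot be subclassed, subclass int so the wrapper still behaves like the original truth value while carrying the details:

class BoolWithDetail(int):
    # acts like the original bool in if-tests and comparisons,
    # but carries extra information on the side
    def __new__(cls, value, detail=None):
        obj = super(BoolWithDetail, cls).__new__(cls, bool(value))
        obj.detail = detail
        return obj

ret = BoolWithDetail(True, "extra info")
if ret:                  # old client code keeps working
    print ret.detail     # new clients can read the details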
Here are some solutions:
Based on @yupbank's answer, I formalized it into a decorator; see github.com/hupili/multiret.
The 8th alternative above says we can wrap a class. This is the current engineering solution we adopted. In order to wrap more complex return values, we may use a metaclass to generate the required wrapper class on demand. I have not tried it, but this sounds like a robust solution.
Try inspect?
I gave it a try; it's not very elegant, but at least it's doable... and it works :)
import inspect
from functools import wraps
import re

def f1(*args):
    return 2

def f2(*args):
    return 3, 3

PATTERN = dict()
PATTERN[re.compile(r'(\w+) f()')] = f1
PATTERN[re.compile(r'(\w+), (\w+) = f()')] = f2

def execute_method_for(call_str):
    for regex, f in PATTERN.iteritems():
        if regex.findall(call_str):
            return f()

def multi(f1, f2):
    def liu(func):
        @wraps(func)
        def _(*args, **kwargs):
            frame, filename, line_number, function_name, lines, index = \
                inspect.getouterframes(inspect.currentframe())[1]
            call_str = lines[0].strip()
            return execute_method_for(call_str)
        return _
    return liu

@multi(f1, f2)
def f():
    return 1

if __name__ == '__main__':
    print f()
    a, b = f()
    print a, b
Your case does need code editing. However, if you need a hack, you can use function attributes to return extra values without modifying the return values.
def attr_store(varname, value):
    def decorate(func):
        setattr(func, varname, value)
        return func
    return decorate

@attr_store('extra', None)
def func(input_str):
    func.extra = {'hello': input_str + " ,How r you?", 'num': 2}
    return 1

print(func("John") + func("Matt"))
print(func.extra)
Demo : http://codepad.org/0hJOVFcC
However, be aware that function attributes behave like static variables, so you need to assign values to them with care; appends and other in-place modifications will act on previously saved values.
The trick is to avoid hard-coding the actual operation where you process the result, and instead pass the operation in as a parameter. For your case, you can use the following code:
def x():
    # return 1
    return 1, 'x' * 1

def f(op, f1, f2):
    print eval(str(f1) + op + str(f2))

f('+', x(), x())
If you want a generic solution for more complicated situations, you can extend the f function and specify the processing operation via the op parameter, as sketched below.
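A minimal sketch of one such extension (my illustration, not the answer's code): dispatch through the operator module instead of eval, and unwrap tuple returns so old-style arithmetic keeps working:

import operator

OPS = {'+': operator.add, '*': operator.mul}

def f(op, f1, f2):
    # use only the first element when a function has grown a tuple return
    a = f1[0] if isinstance(f1, tuple) else f1
    b = f2[0] if isinstance(f2, tuple) else f2
    print OPS[op](a, b)

f('+', x(), x())  # prints 2, even though x() now returns (1, 'x')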
Is there a preferred, pythonic way to do what I would do in C++:
for s in str:
    if r = regex.match(s):
        print r.groups()
I really like that syntax, imo it's a lot cleaner than having temporary variables everywhere. The only other way that's not overly complex is
for s in str:
    r = regex.match(s)
    if r:
        print r.groups()
I guess I'm complaining about a pretty pedantic issue. I just miss the former syntax.
How about
for r in [regex.match(s) for s in str]:
    if r:
        print r.groups()
or a bit more functional
for r in filter(None, map(regex.match, str)):
    print r.groups()
Perhaps it's a bit hacky, but using a function object's attributes to store the last result allows you to do something along these lines:
def fn(regex, s):
    fn.match = regex.match(s)  # save result
    return fn.match

for s in strings:
    if fn(regex, s):
        print fn.match.groups()
Or more generically:
def cache(value):
    cache.value = value
    return value

for s in strings:
    if cache(regex.match(s)):
        print cache.value.groups()
Note that although the "value" saved can be a collection of things, this approach holds only one such value at a time, so more than one function may be required in situations where multiple values need to be saved simultaneously, such as in nested function calls, loops or other threads. So, in accordance with the DRY principle, rather than writing each one by hand, a factory function can help:
def Cache():
    def cache(value):
        cache.value = value
        return value
    return cache

cache1 = Cache()
for s in strings:
    if cache1(regex.match(s)):
        # use another at the same time
        cache2 = Cache()
        if cache2(somethingelse) != cache1.value:
            process(cache2.value)
        print cache1.value.groups()
        ...
There's a recipe to make an assignment expression, but it's very hacky. Your first option doesn't compile, so your second option is the way to go.
## {{{ http://code.activestate.com/recipes/202234/ (r2)
import sys

def set(**kw):
    assert len(kw) == 1
    a = sys._getframe(1)
    a.f_locals.update(kw)
    return kw.values()[0]

#
# sample
#
A = range(10)
while set(x=A.pop()):
    print x
## end of http://code.activestate.com/recipes/202234/ }}}
As you can see, production code shouldn't touch this hack with a ten foot, double bagged stick.
This might be an overly simplistic answer, but would you consider this:
for s in str:
    if regex.match(s):
        print regex.match(s).groups()
There is no pythonic way to do something that is not pythonic, and it's that way for a reason. First, allowing statements in the conditional part of an if statement would make the grammar pretty ugly: if you allowed assignment statements in if conditions, why not also allow if statements there? How would you actually write that? C-like languages don't have this problem, because they don't have assignment statements; they make do with just assignment expressions and expression statements.
The second reason is the way
if foo = bar:
    pass
looks very similar to
if foo == bar:
    pass
Even if you are clever enough to type the correct one, and even if most of the members of your team are sharp enough to notice it, are you sure that the one you are looking at now is exactly what is supposed to be there? It's not unreasonable for a new dev to see this and just "fix" it (one way or the other), and now it's definitely wrong.
Whenever I find that my loop logic is getting complex I do what I would do with any other bit of logic: I extract it to a function. In Python it is a lot easier than in some other languages to do this cleanly.
So extract the code that just generates the items of interest:
def matching(strings, regex):
    for s in strings:
        r = regex.match(s)
        if r: yield r
and then when you want to use it, the loop itself is as simple as they get:
for r in matching(strings, regex):
    print r.groups()
Yet another answer is to use the "Assign and test" recipe for allowing assigning and testing in a single statement published in O'Reilly Media's July 2002 1st edition of the Python Cookbook and also online at Activestate. It's object-oriented, the crux of which is this:
# from http://code.activestate.com/recipes/66061
class DataHolder:
    def __init__(self, value=None):
        self.value = value
    def set(self, value):
        self.value = value
        return value
    def get(self):
        return self.value
This can optionally be modified slightly by adding the custom __call__() method shown below to provide an alternative way to retrieve instances' values -- which, while less explicit, seems like a completely logical thing for a 'DataHolder' object to do when called, I think.
    def __call__(self):
        return self.value
Allowing your example to be re-written:
r = DataHolder()
for s in strings:
    if r.set(regex.match(s)):
        print r.get().groups()
        # or
        print r().groups()
As also noted in the original recipe, if you use it a lot, adding the class and/or an instance of it to the __builtin__ module to make it globally available is very tempting despite the potential downsides:
import __builtin__
__builtin__.DataHolder = DataHolder
__builtin__.data = DataHolder()
As I mentioned in my other answer to this question, it must be noted that this approach is limited to holding only one result/value at a time, so more than one instance is required to handle situations where multiple values need to be saved simultaneously, such as in nested function calls, loops or other threads. That doesn't mean you shouldn't use it or the other answer's approach, just that more effort will be required.
We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment.
We'd like to write fairly simple functions in Python, which take a few arguments - these would be things like times and temperatures and positions. Different functions would take different arguments, and the main application would contain user interface (something like a property grid) which allows the users to provide values for the Python function arguments.
So, for example function1 might take a time and a temperature, and function2 might take a position and a couple of times.
We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find a list of functions in a module, and (using inspect.getargspec) to get a list of arguments to each function.
However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, it's 'type' (high-level type - time, temperature, etc, not language-level type), and perhaps a 'friendly name' or description.
So, the question is: what are good 'pythonic' ways of adding this sort of information to a function?
The two possibilities I have thought of are:
Use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec)
Invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata.
Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.
Decorators are a good way to add metadata to functions. Add one that takes a list of types to append to a .params property or something:
def takes(*args):
    def _takes(fcn):
        fcn.params = args
        return fcn
    return _takes

@takes("time", "temp", "time")
def do_stuff(start_time, average_temp, stop_time):
    pass
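The UI-building side can then pair these tags with the argument names, e.g. (my example, using the getargspec call mentioned in the question):

import inspect
print zip(inspect.getargspec(do_stuff).args, do_stuff.params)
# [('start_time', 'time'), ('average_temp', 'temp'), ('stop_time', 'time')]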
I would use some kind of decorator:
class TypeProtector(object):
    def __init__(self, fun, types):
        self.fun, self.types = fun, types
    def __call__(self, *args, **kwargs):
        # validate args with self.types
        pass
        # run function
        return self.fun(*args, **kwargs)

def types(*args):
    def decorator(fun):
        # validate args count with fun parameters count
        pass
        # return covered function
        return TypeProtector(fun, args)
    return decorator

@types(Time, Temperature)
def myfunction(foo, bar):
    pass

myfunction('21:21', '32C')
print myfunction.types
The 'pythonic' way to do this is function annotations.
def DoSomething(critical_temp: "temperature", time: "time"):
    pass
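(Annotations are Python 3 syntax.) The attached metadata can then be read back from the standard __annotations__ attribute:

print(DoSomething.__annotations__)
# {'critical_temp': 'temperature', 'time': 'time'}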
For python 2.x, I like to use the docstring
def my_func(txt):
    """{
        "name": "Justin",
        "age": 15
    }"""
    pass
and it can be automatically assigned to the function object with this snippet:
import json

for f in list(globals()):  # copy the keys; the loop itself creates new globals
    if not hasattr(globals()[f], '__call__'):
        continue
    try:
        meta = json.loads(globals()[f].__doc__)
    except:
        continue
    for k, v in meta.items():
        setattr(globals()[f], k, v)
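Afterwards the metadata from the docstring is available as plain attributes (per the example above):

print my_func.name  # Justin
print my_func.age   # 15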