How to check arguments of a function? - Python

I have a function defined this way:
def f1(a, b, c=None, d=None):
    ...
How do I check that a and b are not equal to some value? E.g. I want to check that they are not empty strings like "" or " ".
I'm thinking about something like:
arguments = locals()
for item in arguments:
    check_attribute(item, arguments[item])
And then check that the arguments are not "" or " ". But in this case it will also try to check None values (which I don't want to do).

A typical approach would be:
import sys
...

def check_attribute(name, value):
    """Gives warnings on stderr if the value is an empty or whitespace string.
    All other values, including None, are OK and give no warning.
    """
    if isinstance(value, basestring) and (not value or value.isspace()):
        print >>sys.stderr, "Invalid value %r for argument %r" % (value, name)
or, of course, you could issue warnings, or raise exceptions if the problem is very serious according to your application's semantics.
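For instance, a minimal sketch of the warnings variant (same check as check_attribute above; Python 2, hence basestring):
import warnings

def check_attribute(name, value):
    # Same check as above, but routed through the warnings machinery
    # instead of printing to stderr directly:
    if isinstance(value, basestring) and (not value or value.isspace()):
        warnings.warn("Invalid value %r for argument %r" % (value, name))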
One should probably delegate all of the checking to a single function, instead of looping in the function whose args you're checking (the latter would be sticking "checking code" smack in the middle of application logic -- better keep it out of the way...):
def check_arguments(d):
    for name, value in d.iteritems():
        check_attribute(name, value)
and the function would be just:
def f1(a, b, c=None, d=None):
    check_arguments(locals())
    ...
You could, alternatively, write a decorator in order to be able to code
@checked_arguments
def f1(a, b, c=None, d=None):
    ...
(to get checking code even more "out of the way"), but this might be considered overkill unless you really have a lot of functions requiring exactly this kind of checks!
Argument-name introspection (while feasible, thanks to module inspect) is far less simple in a decorator than within the function itself, which is why my favorite design approach would be to eschew the decorator approach in this case (simplicity is seriously good;-).
Edit -- showing how to implement a decorator, since the OP explicitly asked for one (though without clarifying why).
The main problem (in Python 2.6 and earlier) is for the wrapper to construct a mapping equivalent to the locals() which Python makes for you, but needs to be done explicitly in a generic wrapper.
But -- if you use the new 2.7, inspect.getcallargs does it for you! So, the problem becomes much simpler, and the decorator perhaps worth doing in many more cases (if you're in 2.6 or earlier, I still recommend eschewing the decorator approach, which would be substantially more complicated, for such specialized uses).
So, here is all you need, in Python 2.7 (reusing the check_arguments function I defined above):
import functools
import inspect

def checked_arguments(f):
    @functools.wraps(f)
    def wrapper(*a, **k):
        d = inspect.getcallargs(f, *a, **k)
        check_arguments(d)
        return f(*a, **k)
    return wrapper
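A quick usage sketch, reusing the check_attribute and check_arguments functions defined above (the warning goes to stderr):
@checked_arguments
def f1(a, b, c=None, d=None):
    return a, b, c, d

f1("", "ok")  # warns about argument 'a', then calls f1 normally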
The difficulty in pre-2.7 versions comes entirely from the difficulty of implementing the equivalent of inspect.getcallargs -- so, I hope that, if you really need decorators of this kind, you can simply download Python 2.7 from www.python.org and install it on your box!-)
(If you do, you'll also get many more goodies besides, as well as a longer support cycle than just about any previous Python version, since 2.7 is slated to be the last release in the Python 2.* line).

Why can't you refer to the values by their names?
def f1(a, b, c=None, d=None):
    if not a.strip():
        print('a is empty')
If you have many arguments it is worth changing function signature to:
def f2(*args, c=None, d=None):
    for var in args:
        if not var.strip():
            raise ValueError('all elements should be non-empty')
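For example:
f2('x', 'y')   # passes silently
f2('x', '  ')  # raises ValueError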

for key, value in locals().items():
    if value is not None:
        check_attribute(key, value)
Though as others have said already, you can just check the arguments directly by name.

Related

How to map a function over all argument values, as a list, but have explicit argument names in the function definition

I want to define a function using explicit argument names ff(a,b,c) in the function definition, but I also want to map a function over all arguments to get a list:
def ff(a, b, c):
    return list(map(myfunc, [a, b, c]))
However, I don't want to explicitly write the parameter names inside the function as a, b, c. I want to do it like:
def ff(a, b, c):
    return list(map(myfunc, getArgValueList()))
getArgValueList() will retrieve the argument values in order and form a list. How do I do this? Is there a built-in function like getArgValueList()?
What you're trying to do is impossible without ugly hacks. You either take *args and get a sequence of parameter values that you can use as args:
def ff(*args):
    return list(map(myfunc, args))
… or you take three explicit parameters and use them by name:
def ff(a, b, c):
    return list(map(myfunc, (a, b, c)))
… but it's one or the other, not both.
Of course you can put those values in a sequence yourself if you want:
def ff(a, b, c):
    args = a, b, c
    return list(map(myfunc, args))
… but I'm not sure what that buys you.
If you really want to know how to write a getArgValueList function anyway, I'll explain how to do it. However, if you're looking to make your code more readable, more efficient, more idiomatic, easier to understand, more concise, or almost anything else, it will have the exact opposite effect. The only reason I could imagine doing something like this is if you had to generate functions dynamically or something—and even then, I can't think of a reason you couldn't just use *args. But, if you insist:
import inspect

def getArgValueList():
    frame = inspect.currentframe().f_back
    code = frame.f_code
    vars = code.co_varnames[:code.co_argcount]
    return [frame.f_locals[var] for var in vars]
If you want to know how it works, most of it's in the inspect module docs:
currentframe() gets the current frame—the frame of getArgValueList.
f_back gets the parent frame—the frame of whoever called getArgValueList.
f_code gets the code object compiled from the function body of whoever called getArgValueList.
co_varnames is a list of all local variables in that body, starting with the parameters.
co_argcount is a count of explicit positional-or-keyword parameters.
f_locals is a dict with a copy of the locals() environment of the frame.
This of course only works for a function that takes no *args, keyword-only args, or **kwargs, but you can extend it to work for them as well with a bit of work. (See co_kwonlyargcount, co_flags, CO_VARARGS, and CO_VARKEYWORDS for details.)
Also, this only works for CPython, not most other interpreters. And it could break in some future version, because it's pretty blatantly relying on implementation details of the interpreter.
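For example, a quick check with getArgValueList defined as above (CPython only; ff is just a demo caller):
def ff(a, b, c):
    return getArgValueList()

print(ff(1, 2, 3))  # [1, 2, 3]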
The *args construction will give you the arguments as a list:
>>> def f(*args): return list(map(lambda x:x+1, args))
>>> f(1,2,3)
[2, 3, 4]
If you are bound with the signature of f, you'll have to use the inspect module:
import inspect

def f(a, b, c):
    f_locals = locals()
    values = [f_locals[name] for name in inspect.signature(f).parameters]
    return list(map(lambda x: x + 1, values))
inspect.signature(f).parameters gives you the list of arguments in the correct order. The values are in locals().
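For example, with f defined as above:
>>> f(1, 2, 3)
[2, 3, 4]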

How to avoid type checking arguments to Python function

I'm creating instances of a class Foo, and I'd like to be able to instantiate these in a general way from a variety of types. You can't pass Foo a dict or list. Note that Foo is from a 3rd party code base - I can't change Foo's code.
I know that type checking function arguments in Python is considered bad form. Is there a more Pythonic way to write the function below (i.e. without type checking)?
def to_foo(arg):
    if isinstance(arg, dict):
        return dict([(key, to_foo(val)) for key, val in arg.items()])
    elif isinstance(arg, list):
        return [to_foo(i) for i in arg]
    else:
        return Foo(arg)
Edit: Using try/except blocks is possible. For instance, you could do:
def to_foo(arg):
    try:
        return Foo(arg)
    except ItWasADictError:
        return dict([(key, to_foo(val)) for key, val in arg.items()])
    except ItWasAListError:
        return [to_foo(i) for i in arg]
I'm not totally satisfied by this for two reasons: first, type checking seems like it addresses more directly the desired functionality, whereas the try/except block here seems like it's getting to the same place but less directly. Second, what if the errors don't cleanly map like this? (e.g. if passing either a list or dict throws a TypeError)
Edit: a third reason I'm not a huge fan of the try/except method here is I need to go and find what exceptions Foo is going to throw in those cases, rather than being able to code it up front.
If you're using Python 3.4 you can use functools.singledispatch, or a backport for earlier Python versions:
from functools import singledispatch

@singledispatch
def to_foo(arg):
    return Foo(arg)

@to_foo.register(list)
def to_foo_list(arg):
    return [Foo(i) for i in arg]

@to_foo.register(dict)
def to_foo_dict(arg):
    return {key: Foo(val) for key, val in arg.items()}
This is a fairly new construct for Python, but a common pattern in other languages. I'm not sure whether you'd call this pythonic or not, but it does feel better than writing isinstances everywhere. Though, in practice, singledispatch is probably just doing the isinstance checks for you internally.
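For example, assuming Foo simply wraps whatever scalar it's given:
to_foo(42)          # -> Foo(42)
to_foo([1, 2, 3])   # -> [Foo(1), Foo(2), Foo(3)]
to_foo({'a': 1})    # -> {'a': Foo(1)}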
The pythonic way to deal with your issue is to go ahead and assume (first) that arg is acceptable to Foo, and catch the error if it isn't:
try:
    x = Foo(arg)
except TypeError:  # or whatever Foo raises when given a dict or list
    # do other things
The phrase for this idea is "duck typing", and it's a popular pattern in python.
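For this particular case, a hedged duck-typing sketch might probe for dict-like behaviour rather than checking concrete types (note the hasattr('__iter__') test excludes strings only in Python 2; in Python 3 strings are iterable too, so you'd need an extra check):
def to_foo(arg):
    try:
        items = arg.items()  # quacks like a dict?
    except AttributeError:
        pass
    else:
        return dict((key, to_foo(val)) for key, val in items)
    if hasattr(arg, '__iter__'):  # quacks like a list (excludes str in Python 2)
        return [to_foo(i) for i in arg]
    return Foo(arg)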

Memoize a function so that it isn't reset when I rerun the file in Python

I often do interactive work in Python that involves some expensive operations that I don't want to repeat often. I'm generally running whatever Python file I'm working on frequently.
If I write:
import functools32

@functools32.lru_cache()
def square(x):
    print "Squaring", x
    return x*x
I get this behavior:
>>> square(10)
Squaring 10
100
>>> square(10)
100
>>> runfile(...)
>>> square(10)
Squaring 10
100
That is, rerunning the file clears the cache. This works:
try:
    safe_square
except NameError:
    @functools32.lru_cache()
    def safe_square(x):
        print "Squaring", x
        return x*x
but when the function is long it feels strange to have its definition inside a try block. I can do this instead:
def _square(x):
    print "Squaring", x
    return x*x

try:
    safe_square_2
except NameError:
    safe_square_2 = functools32.lru_cache()(_square)
but it feels pretty contrived (for example, in calling the decorator without an '@' sign)
Is there a simple way to handle this, something like:
@non_resetting_lru_cache()
def square(x):
    print "Squaring", x
    return x*x
?
Writing a script to be executed repeatedly in the same session is an odd thing to do.
I can see why you'd want to do it, but it's still odd, and I don't think it's unreasonable for the code to expose that oddness by looking a little odd, and having a comment explaining it.
However, you've made things uglier than necessary.
First, you can just do this:
@functools32.lru_cache()
def _square(x):
    print "Squaring", x
    return x*x

try:
    safe_square_2
except NameError:
    safe_square_2 = _square
There is no harm in attaching a cache to the new _square definition. It won't waste any time, or more than a few bytes of storage, and, most importantly, it won't affect the cache on the previous _square definition. That's the whole point of closures.
There is a potential problem here with recursive functions. It's already inherent in the way you're working, and the cache doesn't add to it in any way, but you might only notice it because of the cache, so I'll explain it and show how to fix it. Consider this function:
@lru_cache()
def _fact(n):
    if n < 2:
        return 1
    return _fact(n-1) * n
When you re-exec the script, even if you have a reference to the old _fact, it's going to end up calling the new _fact, because it's accessing _fact as a global name. It has nothing to do with the @lru_cache; remove that, and the old function will still end up calling the new _fact.
But if you're using the renaming trick above, you can just call the renamed version:
@lru_cache()
def _fact(n):
    if n < 2:
        return 1
    return fact(n-1) * n
Now the old _fact will call fact, which is still the old _fact. Again, this works identically with or without the cache decorator.
Beyond that initial trick, you can factor that whole pattern out into a simple decorator. I'll explain step by step below, or see this blog post.
Anyway, even with the less-ugly version, it's still a bit ugly and verbose. And if you're doing this dozens of times, my "well, it should look a bit ugly" justification will wear thin pretty fast. So, you'll want to handle this the same way you always factor out ugliness: wrap it in a function.
You can't really pass names around as objects in Python. And you don't want to use a hideous frame hack just to deal with this. So you'll have to pass the names around as strings, like this:
globals().setdefault('fact', _fact)
The globals function just returns the current scope's global dictionary. Which is a dict, which means it has the setdefault method, which means this will set the global name fact to the value _fact if it didn't already have a value, but do nothing if it did. Which is exactly what you wanted. (You could also use setattr on the current module, but I think this way emphasizes that the script is meant to be (repeatedly) executed in someone else's scope, not used as a module.)
So, here that is wrapped up in a function:
def new_bind(name, value):
    globals().setdefault(name, value)
… which you can turn into a decorator almost trivially:
def new_bind(name):
    def wrap(func):
        globals().setdefault(name, func)
        return func
    return wrap
Which you can use like this:
@new_bind('foo')
def _foo():
    print(1)
But wait, there's more! The func that new_bind gets is going to have a __name__, right? If you stick to a naming convention, like that the "private" name must be the "public" name with a _ prefixed, we can do this:
def new_bind(func):
    assert func.__name__[0] == '_'
    globals().setdefault(func.__name__[1:], func)
    return func
And you can see where this is going:
@new_bind
@lru_cache()
def _square(x):
    print "Squaring", x
    return x*x
There is one minor problem: if you use any other decorators that don't wrap the function properly, they will break your naming convention. So… just don't do that. :)
And I think this works exactly the way you want in every edge case. In particular, if you've edited the source and want to force the new definition with a new cache, you just del square before rerunning the file, and it works.
And of course if you want to merge those two decorators into one, it's trivial to do so, and call it non_resetting_lru_cache.
However, I'd keep them separate. I think it's more obvious what they do. And if you ever want to wrap another decorator around @lru_cache, you're probably still going to want @new_bind to be the outermost decorator, right?
What if you want to put new_bind into a module that you can import? Then it's not going to work, because it will be referring to the globals of that module, not the one you're currently writing.
You can fix that by explicitly passing your globals dict, or your module object, or your module name as an argument, like #new_bind(__name__), so it can find your globals instead of its. But that's ugly and repetitive.
You can also fix it with an ugly frame hack. At least in CPython, sys._getframe() can be used to get your caller's frame, and frame objects have a reference to their globals namespace, so:
import sys

def new_bind(func):
    assert func.__name__[0] == '_'
    g = sys._getframe(1).f_globals
    g.setdefault(func.__name__[1:], func)
    return func
Notice the big box in the docs that tells you this is an "implementation detail" that may only apply to CPython and is "for internal and specialized purposes only". Take this seriously. Whenever someone has a cool idea for the stdlib or builtins that could be implemented in pure Python, but only by using _getframe, it's generally treated almost the same as an idea that can't be implemented in pure Python at all. But if you know what you're doing, and you want to use this, and you only care about present-day versions of CPython, it will work.
There is no persistent_lru_cache in the stdlib. But you can build one pretty easily.
The functools source is linked directly from the docs, because this is one of those modules that's as useful as sample code as it is for using it directly.
As you can see, the cache is just a dict. If you replace that with, say, a shelf, it will become persistent automatically:
def persistent_lru_cache(filename, maxsize=128, typed=False):
    """new docstring explaining what filename does"""
    # same code as before up to here
    def decorating_function(user_function):
        cache = shelve.open(filename)
        # same code as before from here on.
Of course that only works if your arguments are strings. And it could be a little slow.
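For reference, here is a minimal self-contained sketch of the shelve idea without the rest of the lru_cache machinery (persistent_cache is a made-up name, and using repr(args) as the key is a simplification that assumes arguments with stable reprs):
import shelve

def persistent_cache(filename):
    def decorating_function(user_function):
        cache = shelve.open(filename)  # persists across runs
        def wrapper(*args):
            key = repr(args)  # shelve keys must be strings
            if key not in cache:
                cache[key] = user_function(*args)
            return cache[key]
        return wrapper
    return decorating_function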
So, you might want to instead keep it as an in-memory dict, and just write code that pickles it to a file atexit, and restores it from a file if present at startup:
def decorating_function(user_function):
    # ...
    try:
        with open(filename, 'rb') as f:
            cache = pickle.load(f)
    except Exception:
        cache = {}
    def cache_save():
        with lock:
            with open(filename, 'wb') as f:
                pickle.dump(cache, f)
    atexit.register(cache_save)
    # ...
    wrapper.cache_save = cache_save
    wrapper.cache_filename = filename
Or, if you want it to write every N new values (so you don't lose the whole cache on, say, an _exit or a segfault or someone pulling the cord), add this to the second and third versions of wrapper, right after the misses += 1:
if misses % N == 0:
    cache_save()
See here for a working version of everything up to this point (using save_every as the "N" argument, and defaulting to 1, which you probably don't want in real life).
If you want to be really clever, maybe copy the cache and save that in a background thread.
You might want to extend the cache_info to include something like number of cache writes, number of misses since last cache write, number of entries in the cache at startup, …
And there are probably other ways to improve this.
From a quick test, with save_every=1, this makes the cache on both get_pep and fib (from the functools docs) persistent, with no measurable slowdown to get_pep and a very small slowdown to fib the first time (note that fib(100) has 100097 hits vs. 101 misses…), and of course a large speedup to get_pep (but not fib) when you re-run it. So, just what you'd expect.
I can't say I won't just use @abarnert's "ugly frame hack", but here is the version that requires you to pass in the calling module's globals dict. I think it's worth posting given that decorator functions with arguments are tricky and meaningfully different from those without arguments.
def create_if_not_exists_2(my_globals):
    def wrap(func):
        if "_" != func.__name__[0]:
            raise Exception("Function names used in cine must begin with '_'")
        my_globals.setdefault(func.__name__[1:], func)
        def wrapped(*args):
            return func(*args)
        return wrapped
    return wrap
Which you can then use in a different module like this:
from functools32 import lru_cache
from cine import create_if_not_exists_2

@create_if_not_exists_2(globals())
@lru_cache()
def _square(x):
    print "Squaring", x
    return x*x

assert "_square" in globals()
assert "square" in globals()
I've gained enough familiarity with decorators during this process that I was comfortable taking a swing at solving the problem another way:
from functools32 import lru_cache

try:
    my_cine
except NameError:
    class my_cine(object):
        _reg_funcs = {}

        @classmethod
        def func_key(cls, f):
            try:
                name = f.func_name
            except AttributeError:
                name = f.__name__
            return (f.__module__, name)

        def __init__(self, f):
            k = self.func_key(f)
            self._f = self._reg_funcs.setdefault(k, f)

        def __call__(self, *args, **kwargs):
            return self._f(*args, **kwargs)

if __name__ == "__main__":
    @my_cine
    @lru_cache()
    def fact_my_cine(n):
        print "In fact_my_cine for", n
        if n < 2:
            return 1
        return fact_my_cine(n-1) * n

    x = fact_my_cine(10)
    print "The answer is", x
@abarnert, if you are still watching, I'd be curious to hear your assessment of the downsides of this method. I know of two:
You have to know in advance what attributes to look in for a name to associate with the function. My first stab at it only looked at func_name which failed when passed an lru_cache object.
Resetting a function is painful: del my_cine._reg_funcs[('__main__', 'fact_my_cine')], and the swing I took at adding a __delitem__ was unsuccessful.

Multiple Value Return Pattern in Python (not tuple, list, dict, or object solutions)

There were several discussions on "returning multiple values in Python", e.g. 1, 2.
This is not the "multiple-value-return" pattern I'm trying to find here.
No matter what you use (tuple, list, dict, an object), it is still a single return value and you need to parse that return value (structure) somehow.
The real benefit of multiple return values is in the upgrade process. For example, originally you have:
def func():
    return 1

print func() + func()
Then you decide that func() can return some extra information, but you don't want to break previous code (or modify it one call site at a time). It looks like:
def func():
    return 1, "extra info"

value, extra = func()
print value  # 1 (expected)
print extra  # extra info (expected)
print func() + func()  # (1, 'extra info', 1, 'extra info') (not expected; we want the previous behaviour, i.e. 2)
The previous code (func() + func()) is broken. You have to fix it.
I don't know whether I made the question clear... You can see the CLISP example. Is there an equivalent way to implement this pattern in Python?
EDIT: I put the above clisp snippets online for your quick reference.
Let me put two use cases here for multiple return value pattern. Probably someone can have alternative solutions to the two cases:
Better support smooth upgrade. This is shown in the above example.
Have simpler client-side code. See the alternative solutions I have so far below. Using exceptions can make the upgrade process smooth, but it costs more code.
Current alternatives: (they are not "multi-value-return" constructions, but they can be engineering solutions that satisfy some of the points listed above)
tuple, list, dict, an object. As is said, you need certain parsing on the client side, e.g. if ret.success == True: blabla. You need ret = func() before that. It's much cleaner to write if func() == True: blabla.
Use Exception. As is discussed in this thread, when the "False" case is rare, it's a nice solution. Even in this case, the client side code is still too heavy.
Use an arg, e.g. def func(main_arg, detail=[]). The detail can be list or dict or even an object depending on your design. The func() returns only original simple value. Details go to the detail argument. Problem is that the client need to create a variable before invocation in order to hold the details.
Use a "verbose" indicator, e.g. def func(main_arg, verbose=False). When verbose == False (default; and the way client is using func()), return original simple value. When verbose == True, return an object which contains simple value and the details.
Use a "version" indicator. Same as "verbose" but we extend the idea there. In this way, you can upgrade the returned object for multiple times.
Use global detail_msg. This is like the old C-style error_msg. In this way, functions can always return simple values. The client side can refer to detail_msg when necessary. One can put detail_msg in global scope, class scope, or object scope depending on the use cases.
Use generator. yield simple_return and then yield detailed_return. This solution is nice in the callee's side. However, the caller has to do something like func().next() and func().next().next(). You can wrap it with an object and override the __call__ to simplify it a bit, e.g. func()(), but it looks unnatural from the caller's side.
Use a wrapper class for the return value. Override the class's methods to mimic the behaviour of the original simple return value, and put the detailed data in the class (a minimal sketch follows this list). We have adopted this alternative in our project in dealing with a bool return type; see the relevant commit: https://github.com/fqj1994/snsapi/commit/589f0097912782ca670568fe027830f21ed1f6fc (I don't have enough reputation to put more links in the post... -_-//)
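To make alternative 8 concrete, here is a minimal sketch of such a wrapper for the int example at the top of this question (IntWithExtra is a made-up name; our project's version wraps bool instead):
class IntWithExtra(int):
    """Behaves like the old plain return value, but carries details."""
    def __new__(cls, value, extra=None):
        obj = super(IntWithExtra, cls).__new__(cls, value)
        obj.extra = extra
        return obj

def func():
    return IntWithExtra(1, "extra info")

print func() + func()  # 2 -- old callers keep working
print func().extra     # 'extra info' -- new callers read the details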
Here are some solutions:
Based on @yupbank's answer, I formalized it into a decorator, see github.com/hupili/multiret
The 8th alternative above says we can wrap a class. This is the current engineering solution we adopted. In order to wrap more complex return values, we may use meta class to generate the required wrapper class on demand. Have not tried, but this sounds like a robust solution.
Try inspect? I did some experimenting, and it's not very elegant, but at least it's doable... and it works :)
import inspect
from functools import wraps
import re

def f1(*args):
    return 2

def f2(*args):
    return 3, 3

PATTERN = dict()
PATTERN[re.compile(r'(\w+) f()')] = f1
PATTERN[re.compile(r'(\w+), (\w+) = f()')] = f2

def execute_method_for(call_str):
    for regex, f in PATTERN.iteritems():
        if regex.findall(call_str):
            return f()

def multi(f1, f2):
    def liu(func):
        @wraps(func)
        def _(*args, **kwargs):
            frame, filename, line_number, function_name, lines, index = \
                inspect.getouterframes(inspect.currentframe())[1]
            call_str = lines[0].strip()
            return execute_method_for(call_str)
        return _
    return liu

@multi(f1, f2)
def f():
    return 1

if __name__ == '__main__':
    print f()
    a, b = f()
    print a, b
Your case does need code editing. However, if you need a hack, you can use function attributes to return extra values, without modifying return values.
def attr_store(varname, value):
    def decorate(func):
        setattr(func, varname, value)
        return func
    return decorate

@attr_store('extra', None)
def func(input_str):
    func.extra = {'hello': input_str + " ,How r you?", 'num': 2}
    return 1

print(func("John") + func("Matt"))
print(func.extra)
Demo: http://codepad.org/0hJOVFcC
However, be aware that function attributes behave like static variables, and you will need to assign values to them with care; appends and other modifiers will act on previously saved values.
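For example, a mutable attribute accumulates state across calls (tracked is a made-up example reusing attr_store from above):
@attr_store('log', [])
def tracked(x):
    tracked.log.append(x)  # mutates the same list on every call
    return x

tracked(1)
tracked(2)
print(tracked.log)  # [1, 2] -- values from earlier calls are still there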
The trick is to avoid applying the actual operation when you process the results, and instead pass the operation in as a parameter. For your case, you can use the following code:
def x():
    # return 1
    return 1, 'x'*1

def f(op, f1, f2):
    print eval(str(f1) + op + str(f2))

f('+', x(), x())
If you want a generic solution for more complicated situations, you can extend the f function and specify the processing operation via the op parameter.

Why doesn't functools.partial return a real function (and how to create one that does)?

So I was playing around with currying functions in Python and one of the things that I noticed was that functools.partial returns a partial object rather than an actual function. One of the things that annoyed me about this was that if I did something along the lines of:
five = partial(len, 'hello')
five('something')
then we get
TypeError: len() takes exactly 1 argument (2 given)
but what I want to happen is
TypeError: five() takes no arguments (1 given)
Is there a clean way to make it work like this? I wrote a workaround, but it's too hacky for my taste (doesn't work yet for functions with varargs):
def mypartial(f, *args):
    argcount = f.func_code.co_argcount - len(args)
    params = ''.join('a' + str(i) + ',' for i in xrange(argcount))
    code = '''
def func(f, args):
    def %s(%s):
        return f(*(args+(%s)))
    return %s
''' % (f.func_name, params, params, f.func_name)
    exec code in locals()
    return func(f, args)
Edit: I think it might be helpful if I added more context. I'm writing a decorator that will automatically curry a function like so:
@curry
def add(a, b, c):
    return a + b + c

f = add(1, 2)  # f is a function
assert f(5) == 8
I want to hide the fact that f was created from a partial (maybe a bad idea :P). The message that the TypeError message above gives is one example of where whether something is a partial can be revealed. I want to change that.
This needs to be generalizable so EnricoGiampieri's and mgilson's suggestions only work in that specific case.
You definitely don't want to do this with exec.
You can find recipes for partial in pure Python, such as this one—many of them are mislabeled as curry recipes, so look for that as well. At any rate, these will show you the proper way to do it without exec, and you can just pick one and modify it to do what you want.
Or you could just wrap partial…
However, whatever you do, there's no way the wrapper can know that it's defining a function named "five"; that's just the name of the variable you store the function in. So if you want a custom name, you'll have to pass it in to the function:
five = my_partial('five', len, 'hello')
At that point, you have to wonder why this is any better than just defining a new function.
However, I don't think this is what you actually want anyway. Your ultimate goal is to define a @curry decorator that creates a curried version of the decorated function, with the same name (and docstring, arg list, etc.) as the decorated function. The whole idea of replacing the name of the intermediate partial is a red herring; use functools.wraps properly inside your curry function, and it won't matter how you define the curried function; it'll preserve the name of the original.
In some cases, functools.wraps doesn't work. And in fact, this may be one of those times—you need to modify the arg list, for example, so curry(len) can take either 0 or 1 parameter instead of requiring 1 parameter, right? See update_wrapper, and the (very simple) source code for wraps and update_wrapper to see how the basics work, and build from there.
Expanding on the previous: To curry a function, you pretty much have to return something that takes (*args) or (*args, **kw) and parse the args explicitly, and possibly raise TypeError and other appropriate exceptions explicitly. Why? Well, if foo takes 3 params, curry(foo) takes 0, 1, 2, or 3 params, and if given 0-2 params it returns a function that takes 0 through n-1 params.
The reason you might want **kw is that it allows callers to specify params by name—although then it gets much more complicated to check when you're done accumulating arguments, and arguably this is an odd thing to do with currying—it may be better to first bind the named params with partial, then curry the result and pass in all remaining params in curried style…
If foo has default-value or keyword args, it gets even more complicated, but even without those problems, you already need to deal with this problem.
For example, let's say you implement curry as a class that holds the function and all already-curried parameters as instance members. Then you'll have something like this:
def __call__(self, *args):
    if len(args) + len(self.curried_args) > self.fn.func_code.co_argcount:
        raise TypeError('%s() takes exactly %d arguments (%d given)' %
                        (self.fn.func_name, self.fn.func_code.co_argcount,
                         len(args) + len(self.curried_args)))
    self.curried_args += args
    if len(self.curried_args) == self.fn.func_code.co_argcount:
        return self.fn(*self.curried_args)
    else:
        return self
This is horribly oversimplified, but it shows how to handle the basics.
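For reference, here is a self-contained sketch along those lines that sidesteps the shared-state problem by returning a new object instead of mutating self (Python 2 attribute names to match the question; default and keyword arguments are not handled):
import functools

class curry(object):
    def __init__(self, fn, *args):
        functools.update_wrapper(self, fn)  # preserve fn's name and docstring
        self.fn = fn
        self.args = args

    def __call__(self, *args):
        all_args = self.args + args
        argcount = self.fn.func_code.co_argcount
        if len(all_args) > argcount:
            raise TypeError('%s() takes exactly %d arguments (%d given)' %
                            (self.fn.func_name, argcount, len(all_args)))
        if len(all_args) == argcount:
            return self.fn(*all_args)
        return curry(self.fn, *all_args)  # still partial: keep accumulating

@curry
def add(a, b, c):
    return a + b + c

assert add(1, 2)(5) == 8
assert add(1)(2)(5) == 8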
My guess is that the partial function just delays the execution of the function, and does not create a whole new function out of it.
My guess is that it is just easier to define a new function directly in place:
def five(): return len('hello')
This is a very simple line, won't clutter your code, and is quite clear, so I wouldn't bother writing a function to replace it, especially if you don't need this in a large number of cases.
