Given a simple Python function with an optional argument, like:
def wait(seconds=3):
    time.sleep(seconds)
How do I create a function that calls this and passes on an optional argument? For example, this does NOT work:
def do_and_wait(message, seconds=None):
    print(message)
    wait(seconds)
Note: I want to be able to call wait from other functions with the optional seconds argument without having to know and copy the current default seconds value in the underlying wait function to every other function which calls it.
As above, if I call it with the optional argument, like do_and_wait('hello', 2), then it works, but trying to rely on wait's default, e.g. calling it like do_and_wait('hello'), causes a TypeError because inside wait, seconds is None.
Is there a simple and clean way to make this work? I know I can abuse kwargs like this:
def do_and_wait(message, **kwargs):
    print(message)
    wait(**kwargs)
But that seems unclear to the reader and user of this function since there is no useful name on the argument.
Note: This is a stupidly simplified example.
I understand you've simplified your question, but I think what you mean is: how can one call a function with an optional argument that could be None? Does the following work for you?
import time

def func1(mess, sec):
    if sec is not None:
        time.sleep(sec)
    print(mess)

func1('success', sec=None)
I don't think you've quite explained your problem completely, because I don't expect the answer to be this simple, but I would just use the same default value (and data type) in do_and_wait() as wait() uses, like so:
def do_and_wait(message, seconds=3):
    print(message)
    wait(seconds)
After thinking a bit more, I came up with something like this; Han's answer suggested this and reminded me that I think a PEP even suggests it somewhere. This especially avoids having to know and copy the default value into every function that calls wait and wants to support a variable value for seconds.
def wait(seconds=None):
    time.sleep(seconds if seconds is not None else 3)

def do_and_wait(message, seconds=None):
    print(message)
    wait(seconds)

def something_else(callable, seconds=None):
    callable()
    wait(seconds)
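A variant of the same sentinel idea keeps the fallback in a single module-level constant, so not even wait's body repeats the literal 3 (a sketch; DEFAULT_WAIT is a name I made up):

```python
import time

DEFAULT_WAIT = 3  # single place where the fallback default lives (hypothetical name)

def wait(seconds=None):
    # None is the sentinel meaning "use the default"
    time.sleep(seconds if seconds is not None else DEFAULT_WAIT)

def do_and_wait(message, seconds=None):
    print(message)
    wait(seconds)

do_and_wait("hello", 0)  # 0 is passed straight through to time.sleep
```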
For example, I'd like to do something like: greet(,'hola'), where greet is:
def greet(person='stranger', greeting='hello'):
This would help greatly for testing while writing code
Upon calling a function you can use the variable names to make it even more clear what variable will assume which value. At the same time, if defaults are provided in the function definition, skipping variables when calling the function does not raise any errors. So, in short you can just do this:
def greet(person='stranger', greeting='hello'):
    print('{} {}'.format(greeting, person))

greet(greeting='hola')  # same as greet(person='stranger', greeting='hola')
# prints 'hola stranger'
Note that, as I said above this would not work if for example your function definition was like this:
def greet(person, greeting):
    print('{} {}'.format(greeting, person))
Since in this case, Python would complain that it does not know what to do with person; no default is supplied.
And by the way, the problem you are describing is most likely the very reason defaults are used in the first place.
Without knowing the other parameters, and only knowing that the parameter you want to change is in second position, you can use the inspect module to get the function signature and its associated default values.
Then make a copy of the defaults list and change the one at the index you want:
import inspect

def greet(person='stranger', greeting='hello'):
    print(person, greeting)

# getfullargspec replaces getargspec, which was removed in Python 3.11
argspec = inspect.getfullargspec(greet)
defaults = list(argspec.defaults)
defaults[1] = "hola"  # change the second default parameter
greet(**dict(zip(argspec.args, defaults)))
Assuming that all parameters have default values (otherwise the two lists are shifted relative to each other and the zip fails), that prints:
stranger hola
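On Python 3, inspect.signature offers a cleaner route to the same idea, since it maps parameter names to their defaults directly; a sketch under the same assumption that every parameter has a default:

```python
import inspect

def greet(person='stranger', greeting='hello'):
    # returning instead of printing, to make the result easy to check
    return '{} {}'.format(greeting, person)

sig = inspect.signature(greet)
# map each parameter name to its declared default value
defaults = {name: p.default for name, p in sig.parameters.items()}
defaults['greeting'] = 'hola'  # override by name instead of by position
print(greet(**defaults))  # hola stranger
```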
I have a setting where event handlers are always functions taking a single event argument.
But more often than not, I find myself writing handlers that don't use any of the event information. So I constantly write handlers of the form:
def handler(_):
    # react
In order to discard the argument.
But I wish I didn't have to, as sometimes I want to reuse handlers as general action functions that take no arguments, and sometimes I have existing zero-argument functions that I want to use as handlers.
My current solution is wrapping the function using a lambda:
def handler():
    # react

event.addHandler(lambda _: handler())
But that seems wrong for other reasons.
My intuitive understanding of a lambda is that it is first and foremost a description of a return value, and event handlers return nothing. I feel lambdas are meant to express pure functions, and here I'm using them only to cause side effects.
Another solution would be a general decorator discarding all arguments.
def discardArgs(func):
    def f(*args):
        return func()
    return f
But I need this in a great many places, and it seems silly having to import such a utility to every script for something so simple.
Is there a particularly standard or "pythonic" way of wrapping a function to discard all arguments?
Use *args:
def handler(*args):
    # react

Then handler can take 0 arguments, or any number of positional arguments.
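A quick illustration that one *args handler serves both roles (reactions is just a list I use here to observe the calls):

```python
reactions = []

def handler(*args):
    # ignore whatever event information was passed, if any
    reactions.append("react")

handler()         # used as a zero-argument action
handler("event")  # used as a one-argument event handler
print(len(reactions))  # 2
```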
I'm using Mock (http://www.voidspace.org.uk/python/mock/mock.html), and came across a particular mock case that I can't figure out the solution to.
I have a function with multiple calls to some_function that is being Mocked.
def function():
    some_function(1)
    some_function(2)
    some_function(3)
I only want to mock the first and third calls to some_function. The second call I want to go through to the real some_function.
I tried some alternatives with http://www.voidspace.org.uk/python/mock/mock.html#mock.Mock.mock_calls, but with no success.
Thanks in advance for the help.
It seems that the wraps argument could be what you want:
wraps: Item for the mock object to wrap. If wraps is not None then calling the
Mock will pass the call through to the wrapped object (returning the
real result and ignoring return_value).
However, since you only want the second call to not be mocked, I would suggest the use of mock.side_effect.
If side_effect is an iterable then each call to the mock will return
the next value from the iterable.
If you want to return a different value for each call, it's a perfect fit:
somefunction_mock.side_effect = [10, None, 10]
Only the first and third calls to somefunction will return 10.
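For illustration, here is that list-style side_effect in a runnable form, using unittest.mock (the stdlib descendant of the standalone mock library):

```python
from unittest.mock import Mock

some_function = Mock(side_effect=[10, None, 10])

print(some_function(1))  # 10
print(some_function(2))  # None
print(some_function(3))  # 10
```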
However, if you do need to call the real function on all but the second call, you can also pass side_effect a callable, though I find it pretty ugly (there might be a smarter way to do it):
class CustomMock(object):
    calls = 0

    def some_function(self, arg):
        self.calls += 1
        if self.calls != 2:
            return my_real_function(arg)
        else:
            return DEFAULT
somefunction_mock.side_effect = CustomMock().some_function
Even simpler than creating a CustomMock class:
def side_effect(*args, **kwargs):
    if side_effect.counter < 10:
        side_effect.counter += 1
        return my_real_function(*args, **kwargs)
    else:
        return DEFAULT

side_effect.counter = 0
mocked_method.side_effect = side_effect
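Putting the counter idea together into a runnable sketch (my_real_function is a stand-in here; returning DEFAULT makes the mock fall back to its configured return_value):

```python
from unittest.mock import Mock, DEFAULT

def my_real_function(arg):
    return arg * 2  # stand-in for the real implementation

def side_effect(*args, **kwargs):
    side_effect.counter += 1
    if side_effect.counter != 2:              # every call except the second
        return my_real_function(*args, **kwargs)
    return DEFAULT                            # second call: use the mock's return_value
side_effect.counter = 0

mocked = Mock(side_effect=side_effect, return_value="mocked")
print(mocked(1), mocked(2), mocked(3))  # 2 mocked 6
```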
I faced the same situation today. After some hesitation I found a different way to work around it.
At first, I had a function that looks like this:
def reboot_and_balabala(args):
    os.system('do some prepare here')
    os.system('reboot')
    sys.exit(0)
I want the first call to os.system to be invoked, otherwise the local file is not generated, and I cannot verify it.
But I really do not want the second call to os.system to be invoked, lol.
At first, I had a unittest similar to:
def test_reboot_and_balabala(self):
    with patch.object(os, 'system') as mock_system:
        # do some mock setup on mock_system; this is what I looked for,
        # but I did not find any easy and clear solution
        with patch.object(sys, 'exit') as mock_exit:
            my_lib.reboot_and_balabala(...)
            # assert mock invoke args
            # check generated files
But finally I realized that, after adjusting the code, I have a better code structure and better unit tests, in the following way:
def reboot():
    os.system('reboot')
    sys.exit(0)

def reboot_and_balabala(args):
    os.system('do some prepare here')
    reboot()
And then we can test that code with:
def test_reboot(self):
    with patch.object(os, 'system') as mock_system:
        with patch.object(sys, 'exit') as mock_exit:
            my_lib.reboot()
            mock_system.assert_called_once_with('reboot')
            mock_exit.assert_called_once_with(0)

def test_reboot_and_balabala(self):
    with patch.object(my_lib, 'reboot') as mock_reboot:
        my_lib.reboot_and_balabala(...)
        # check generated files here
        mock_reboot.assert_called_once()
This is not a direct answer to the question. But I think this is very inspiring.
So I was playing around with currying functions in Python and one of the things that I noticed was that functools.partial returns a partial object rather than an actual function. One of the things that annoyed me about this was that if I did something along the lines of:
five = partial(len, 'hello')
five('something')
then we get
TypeError: len() takes exactly 1 argument (2 given)
but what I want to happen is
TypeError: five() takes no arguments (1 given)
Is there a clean way to make it work like this? I wrote a workaround, but it's too hacky for my taste (doesn't work yet for functions with varargs):
def mypartial(f, *args):
    argcount = f.func_code.co_argcount - len(args)
    params = ''.join('a' + str(i) + ',' for i in xrange(argcount))
    code = '''
def func(f, args):
    def %s(%s):
        return f(*(args+(%s)))
    return %s
''' % (f.func_name, params, params, f.func_name)
    exec code in locals()
    return func(f, args)
Edit: I think it might be helpful if I added more context. I'm writing a decorator that will automatically curry a function like so:
@curry
def add(a, b, c):
    return a + b + c

f = add(1, 2)  # f is a function
assert f(5) == 8
I want to hide the fact that f was created from a partial (maybe a bad idea :P). The TypeError message above is one example of where the fact that something is a partial can be revealed. I want to change that.
This needs to be generalizable, so EnricoGiampieri's and mgilson's suggestions only work in that specific case.
You definitely don't want to do this with exec.
You can find recipes for partial in pure Python, such as this one—many of them are mislabeled as curry recipes, so look for that as well. At any rate, these will show you the proper way to do it without exec, and you can just pick one and modify it to do what you want.
Or you could just wrap partial…
However, whatever you do, there's no way the wrapper can know that it's defining a function named "five"; that's just the name of the variable you store the function in. So if you want a custom name, you'll have to pass it in to the function:
five = my_partial('five', len, 'hello')
At that point, you have to wonder why this is any better than just defining a new function.
However, I don't think this is what you actually want anyway. Your ultimate goal is to define a @curry decorator that creates a curried version of the decorated function, with the same name (and docstring, arg list, etc.) as the decorated function. The whole idea of replacing the name of the intermediate partial is a red herring; use functools.wraps properly inside your curry function, and it won't matter how you define the curried function, it'll preserve the name of the original.
In some cases, functools.wraps doesn't work. And in fact, this may be one of those times—you need to modify the arg list, for example, so curry(len) can take either 0 or 1 parameter instead of requiring 1 parameter, right? See update_wrapper, and the (very simple) source code for wraps and update_wrapper to see how the basics work, and build from there.
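A minimal sketch of the functools.wraps idea: the recursive closure below is my own simplification, not a full curry implementation, but it shows that the decorated function keeps its original name:

```python
from functools import wraps

def curry(fn):
    argcount = fn.__code__.co_argcount

    @wraps(fn)  # copies __name__, __doc__, etc. from fn onto the wrapper
    def curried(*args):
        if len(args) >= argcount:
            return fn(*args)
        # not enough arguments yet: return a function awaiting the rest
        return lambda *more: curried(*args, *more)
    return curried

@curry
def add(a, b, c):
    return a + b + c

print(add.__name__)  # add
print(add(1)(2)(3))  # 6
```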
Expanding on the previous: To curry a function, you pretty much have to return something that takes (*args) or (*args, **kw) and parse the args explicitly, and possibly raise TypeError and other appropriate exceptions explicitly. Why? Well, if foo takes 3 params, curry(foo) takes 0, 1, 2, or 3 params, and if given 0-2 params it returns a function that takes 0 through n-1 params.
The reason you might want **kw is that it allows callers to specify params by name—although then it gets much more complicated to check when you're done accumulating arguments, and arguably this is an odd thing to do with currying—it may be better to first bind the named params with partial, then curry the result and pass in all remaining params in curried style…
If foo has default-value or keyword args, it gets even more complicated, but even without those problems, you already need to deal with this problem.
For example, let's say you implement curry as a class that holds the function and all already-curried parameters as instance members. Then you'll have something like this:
def __call__(self, *args):
    if len(args) + len(self.curried_args) > self.fn.func_code.co_argcount:
        raise TypeError('%s() takes exactly %d arguments (%d given)' %
                        (self.fn.func_name, self.fn.func_code.co_argcount,
                         len(args) + len(self.curried_args)))
    self.curried_args += args
    if len(self.curried_args) == self.fn.func_code.co_argcount:
        return self.fn(*self.curried_args)
    else:
        return self
This is horribly oversimplified, but it shows how to handle the basics.
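For reference, a Python 3 rendering of that sketch (func_code and func_name became __code__ and __name__); it keeps the answer's simplification of accumulating state on the instance, so each curried chain needs a fresh wrapper:

```python
class curry:
    def __init__(self, fn):
        self.fn = fn
        self.curried_args = ()

    def __call__(self, *args):
        argcount = self.fn.__code__.co_argcount
        total = len(self.curried_args) + len(args)
        if total > argcount:
            raise TypeError('%s() takes exactly %d arguments (%d given)' %
                            (self.fn.__name__, argcount, total))
        self.curried_args += args  # oversimplified: state lives on the instance
        if len(self.curried_args) == argcount:
            return self.fn(*self.curried_args)
        return self

@curry
def add(a, b, c):
    return a + b + c

print(add(1, 2)(3))  # 6
```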
My guess is that the partial function just delays the execution of the function and does not create a whole new function out of it.
My guess is that it is just easier to directly define a new function in place:
def five(): return len('hello')
This is a very simple line, it won't clutter your code, and it is quite clear, so I wouldn't bother writing a function to replace it, especially if you don't need this in a large number of cases.
I am creating a module in Python that can take multiple arguments. What would be the best way to pass the arguments to the definition of a method?
def abc(arg):
    ...

abc({"host": "10.1.0.100", "protocol": "http"})

def abc(host, protocol):
    ...

abc("10.1.0.100", "http")

def abc(**kwargs):
    ...

abc(host="10.1.0.100", protocol="http")
Or something else?
Edit
I will actually have these arguments (username, password, protocol, host, password2), none of which are required.
def abc(host, protocol):
    ...

abc("10.1.0.100", "http")
abc(host="10.1.0.100", protocol="http")
Whether you call it using positional args or keyword args depends on the number of arguments and on whether you skip any. For your example I don't see much use in calling the function with keyword args.
Now some reasons why the other solutions are bad:
abc({"host" : "10.1.0.100", "protocol" : "http"})
This is a workaround for keyword arguments commonly used in languages which lack real keyword args (usually JavaScript). In those languages it is nice, but in Python it is just wrong since you gain absolutely nothing from it. If you ever want to call the functions with args from a dict, you can always do abc(**{...}) anyway.
def abc(**kwargs):
People expect the function to accept lots of - maybe even arbitrary - arguments. Besides that, they don't see what arguments are possible/required and you'd have to write code to require certain arguments on your own.
If all the arguments are known ahead of time, use an explicit argument list, optionally with default values, like def abc(arg1="hello", arg2="world", ...). This will make the code most readable.
When you call the function, you can use either abc("hello", "world") or abc(arg1="hello", arg2="world"). I use the longer form if there are more than 4 or 5 arguments; it's a matter of taste.
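A small sketch of that explicit style (the names and defaults are made up):

```python
def abc(arg1="hello", arg2="world"):
    # explicit parameters document themselves and supply defaults
    return "{} {}".format(arg1, arg2)

print(abc())              # hello world
print(abc("hi"))          # hi world
print(abc(arg2="there"))  # hello there
```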
This really depends on the context, the design and the purpose of your method.
If the parameters are defined and compulsory, then the best option is:
def abc(host, protocol):
    ...

abc('10.1.0.100', 'http')
in case the parameters are defined but optional, then:
def abc(host=None, protocol=None):  # I used None, but put a sensible default value here
    ...
You can call it through positional arguments, as abc('10.1.0.100', 'http'), or by name, as abc(protocol='http').
And if you don't know at first what arguments it will receive (for example in string formatting), then the best option is using the **kwargs argument:
def abc(**kwargs):
    ...
And in this way you must call it using named arguments.