I'm creating a function that takes in a callback function as an argument. I want to be able to use it like this:
def callback1(result, found_index):
    # do stuffs
    ...

def callback2(result):
    # do same stuffs even though it's missing the found_index parameter
    ...

somefunct(callback1)
somefunct(callback2)

# somefunct calls the callback function like this:
def somefunct(callback):
    # do stuffs, and assign result and found_index
    callback(result, found_index)  # should not throw error
For context, I am somewhat trying to replicate how JavaScript's callback functions work for the .forEach function on arrays. You can make a function that takes in only the array item on that specific iteration, or the array item and index, or even the array item, index, and original array:
let some_array = ["apple", "orange", "banana"];
function callback1(value, index) {
    console.log(`Item at index ${index}: ${value}`);
}

function callback2(value) {
    console.log(`Value: ${value}`);
}
some_array.forEach(callback1); // runs with no errors
some_array.forEach(callback2); // runs with no errors
Furthermore, I don't want to force the callback functions to use the * operator, but I'd still like to allow them to use it if needed. Thank you, wonderful people of Python.
(Posting this separately since it's fundamentally different to my other answer.)
If you need to pass a lot of values to some callbacks, without requiring other callbacks to declare a lot of unused parameters, a neat solution is to encapsulate all of those values in a single object. You can use collections.namedtuple to define a value type with named attributes, and then the callback can take one parameter and decide which attributes to use.
from collections import namedtuple
SomeFunctionResult = namedtuple('SomeFunctionResult', 'foo bar baz qux quz')
def some_function(callback):
    result = SomeFunctionResult('foo', 'bar', 'baz', 'qux', 'quz')
    callback(result)
Example:
>>> some_function(lambda r: print(r.foo, r.bar))
foo bar
>>> some_function(lambda r: print(r.baz, r.qux, r.quz))
baz qux quz
The downside is that this makes some_function less usable with existing functions which might expect to receive foo directly, rather than an object with a foo attribute. In that case, you have to write some_function(lambda r: blah(r.foo)) which is not as neat as some_function(blah).
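For the names in the original question, the same idea might look like this minimal sketch (the 'apple'/0 values are just placeholders for whatever somefunct actually computes):
from collections import namedtuple

CallbackArgs = namedtuple('CallbackArgs', 'result found_index')

def somefunct(callback):
    # do stuffs, then bundle everything the callback might want into one object
    args = CallbackArgs(result='apple', found_index=0)
    callback(args)

somefunct(lambda a: print(a.result))                 # prints: apple
somefunct(lambda a: print(a.result, a.found_index))  # prints: apple 0
Each callback declares exactly one parameter and simply ignores the attributes it does not need.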
The simplest approach would be to unify the signatures of your callbacks. Let's say you defined your forEach function as follows
def forEach(iterable, callback):
    for index, elem in enumerate(iterable):
        callback(elem, index)
You could then define Python analogs of the callback1 and callback2 JavaScript functions as:
def callback1(value, index):
    print(f"Item at index {index}: {value}")

def callback2(value, _index):
    print(f"Value: {value}")
Rather than performing any complicated parameter-count-reasoning, exception handling, or dynamic dispatch within forEach, we delegate the decision of how to handle the value and index arguments to the callbacks themselves. If you need to adapt a single-parameter callback to work with forEach, you could simply use a wrapper lambda that discards the second argument:
forEach(some_iterable, lambda value, _index: callback(value))
However, at this point, you just have an obfuscated for loop, which would be much more cleanly expressed as
for elem in some_iterable:
    callback(elem)
In this case, it is easier to ask for forgiveness than permission.
def some_function(callback):
    result = 'foo'
    found_index = 5
    try:
        callback(result, found_index)
    except TypeError:
        callback(result)
Example:
>>> some_function(print)
foo 5
>>> some_function(lambda x: print(x))
foo
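A callback written with *args also works with this approach, since the first call in the try block simply succeeds:
>>> some_function(lambda *args: print(*args))
foo 5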
This is a modified version of the Python snippet you provided that produced the error. It works with no problem: you just have to unify the number and type of the callback arguments for each callback function called within the main function, and define somefunct before calling it.
def callback1(result, found_index):
    # do stuffs
    result = "overridden result in callback 1"
    found_index = "overridden found_index in callback 1"
    print(result, found_index)

def callback2(result, found_index):
    # do same stuffs; found_index is accepted here even though callback2 never overrides it
    result = "overridden result in callback 2"
    print(result, found_index)

# somefunct calls the callback function like this:
def somefunct(callback):
    # do stuffs, and assign result and found_index
    result = "overridden result in somefunct"
    found_index = "overridden index in somefunct"
    callback(result, found_index)  # now this does not throw, since every callback accepts the two arguments somefunct passes
somefunct(callback1)
somefunct(callback2)
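Running this prints:
overridden result in callback 1 overridden found_index in callback 1
overridden result in callback 2 overridden index in somefunct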
Use optional arguments and check how many elements are returned, sort of like a switch case:
https://linux.die.net/diveintopython/html/power_of_introspection/optional_arguments.html
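A rough sketch of that idea, borrowing the names from the question (the extra parameter gets a default value, and the callback branches on whether it was supplied):
def callback(result, found_index=None):
    # acts like a small switch: use found_index only when the caller passed it
    if found_index is None:
        print(result)
    else:
        print(result, found_index)

def somefunct(callback):
    result, found_index = 'apple', 0   # placeholder values
    callback(result, found_index)      # works; found_index is optional
    callback(result)                   # also works

somefunct(callback)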
Related
I want to write a wrapper function which calls one function and passes the results to another function. The arguments and return types of the functions are the same, but I have a problem with returning lists and multiple values.
def foo():
    return 1,2

def bar():
    return (1,2)

def foo2(a,b):
    print(a,b)

def bar2(p):
    a,b=p
    print(a,b)

def wrapper(func,func2):
    a=func()
    func2(a)

wrapper(bar,bar2)
wrapper(foo,foo2)
I am searching for a syntax which works with both function pairs to use it in my wrapper code.
EDIT: The definitions of at least foo2 and bar2 should stay this way. Assume that they are from an external library.
There is no distinction. return 1,2 returns a tuple. Parentheses do not define a tuple; the comma does. foo and bar are identical.
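You can check this in the interpreter:
>>> foo() == bar()
True
>>> type(foo())
<class 'tuple'>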
As I overlooked until JacobIRR's comment, your problem is that you need to pass an actual tuple, not the unpacked values from a tuple, to bar2:
a = foo()
foo2(*a)
a = bar()
bar2(a)
I don't necessarily agree with the design, but following your requirements in the comments (the function definitions can't change), you can write a wrapper that tries to execute each version (packed vs. unpacked) since it sounds like you might not know what the function expects. The wrapper written below, argfixer, does exactly that.
def argfixer(func):
    def wrapper(arg):
        try:
            return func(arg)
        except TypeError:
            return func(*arg)
    return wrapper
def foo():
    return 1,2

def bar():
    return (1,2)

@argfixer
def foo2(a,b):
    print(a,b)

@argfixer
def bar2(p):
    a,b=p
    print(a,b)
a = foo()
b = bar()
foo2(a)
foo2(b)
bar2(a)
bar2(b)
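With the decorator applied, each of these four calls prints 1 2: argfixer tries the call with the tuple as a single argument first, and unpacks it only when that raises a TypeError.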
However, if you aren't able to put the @argfixer on the line before the function definitions, you could alternatively wrap them like this in your own script before calling them:
foo2 = argfixer(foo2)
bar2 = argfixer(bar2)
And as mentioned in previous comments/answers, return 1,2 and return (1,2) are equivalent and both return a single tuple.
This code does not run because of arg differences. It runs if you use def foo2(*args): and def bar2(*p):.
The return 1, 2 and return (1, 2) are equivalent. The comma operator just creates a tuple, whether it is enclosed in parentheses or not.
All programming languages that I know of return a single value from a function, so since you want to return multiple values, they must be wrapped in a collection type, in this case a tuple.
The problem is in the way you call the second function. Make it bar2(a) instead of bar2(*a), which breaks the tuple into separate arguments.
I recently had the following code in mind and wondered what was wrong with it. Previously I used the .get method of dictionaries with success, but now I wanted to pass arguments too, and this is where I noticed a somewhat weird behavior:
def string_encoder(nmstr):
    return nmstr.encode('UTF-8')

def int_adder(nr_int):
    return int(nr_int) + int(nr_int)

def selector(fun, val):
    return {'str_en': string_encoder(val),
            'nr_add': int_adder(val)}.get(fun, string_encoder(val))
selector('str_en', 'Test') -> ValueError
selector('str_en', 1) -> AttributeError
The calls above never run successfully.
To inspect the issue, I put together a small piece of code:
def p1(pstr):
    print('p1: ', pstr)
    return pstr

def p2(pstr):
    print('p2: ', pstr)
    return pstr

def selector_2(fun, val):
    return {'p1': p1(val),
            'p2': p2(val)}.get(fun, p2(val))
selector_2('p1', 'Test')
p1:  Test
p2:  Test
p2:  Test
Out[]: 'Test'
I would expect the call selector_2('p1', 'Test') to output only p1: Test and then return 'Test'.
But as it appears to me, every argument is evaluated, even if it is not selected. So my question is: Why is every argument evaluated with the .get method, or how can this behavior be explained?
dict creation is eager, as is argument evaluation. So before get even runs, you've called string_encoder twice, and int_adder once (and since the behaviors are largely orthogonal, you'll get an error for anything but a numeric str like "123").
You need to avoid calling the function until you know which one to call (and ideally, only call that function once).
The simplest solution is to have the dict and get call contain the functions themselves, rather than the result of calling them; you'll end up with whichever function wins, and you can then call that function. For example:
def selector(fun, val):
    # Removed (val) from all mentions of the functions
    return {'str_en': string_encoder,
            'nr_add': int_adder}.get(fun, string_encoder)(val)  # <- but used it to call the resulting function
Given string_encoder is your default, you could remove 'str_en' handling entirely to simplify to:
return {'nr_add': int_adder}.get(fun, string_encoder)(val)
which leads to the realization that you're not really getting anything out of the dict. dicts have cheap lookup, but you're rebuilding the dict every call, so you didn't save a thing. Given that you really only have two behaviors:
Call int_adder if fun is 'nr_add'
Otherwise, call string_encoder
the correct solution is just an if check, which is more efficient and easier to read:
def selector(fun, val):
    if fun == 'nr_add':
        return int_adder(val)
    return string_encoder(val)
    # Or if you love one-liners:
    # return int_adder(val) if fun == 'nr_add' else string_encoder(val)
If your real code has a lot of entries in the dict, not just two, one of which is unnecessary, then you can use a dict for performance, but build it once at global scope and reference it in the function so you're not rebuilding it every call (which loses all performance benefits of dict), e.g.:
# Built only once at global scope
_selector_lookup_table = {
    'str_en': string_encoder,
    'nr_add': int_adder,
    'foo': some_other_func,
    ...
    'baz': yet_another_func,
}

def selector(fun, val):
    # Reused in the function for each call
    return _selector_lookup_table.get(fun, default_func)(val)
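For example, assuming default_func here is bound to string_encoder:
>>> selector('nr_add', '21')
42
>>> selector('does_not_exist', 'Test')
b'Test'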
If you want to avoid evaluating the functions and only choose one, do this instead for your second block (the same syntax will also work for your first block):
def selector_2(fun, val):
    return {'p1': p1,
            'p2': p2}.get(fun)(val)
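Note that without a default, .get(fun) returns None for an unknown key, and None(val) will then raise a TypeError; pass a fallback function (e.g. .get(fun, p2)) if you want a default behavior instead.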
I was wondering if it was possible to set multiple keywords at once (via list?) in a function call.
For example, if you do:
foo, bar = 1, 2
print(foo, bar)
The output is 1 2.
For the function
def printer(foo, bar):
    print(foo, bar)
Is it possible to do something like:
printer([foo, bar] = [1,2])
where both keywords are being set with a list?
In particular, the reason why I ask is because I have a function that returns two variables, scale and offset:
def scaleOffset(...):
    # stuff happens here
    return [scale, offset]
I would like to pass both of these variables to a different function that accepts them as keywords, perhaps as a nested call.
def secondFunction(scale=None, offset=None):
    # more stuff
    ...
So far I haven't found a way of doing a call like this:
secondFunction([scale,offset] = scaleOffset())
To pass positional args from a list, unpack it with *:
arg_list = ["foo", "bar"]
my_func(*arg_list)
To pass kwargs, use a dictionary and unpack it with **:
kwarg_dict = {"keyword": "value"}
my_func(**kwarg_dict)
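Applied to your example, and assuming scaleOffset really does return [scale, offset] in that order, any of these works:
scale, offset = scaleOffset()
secondFunction(scale=scale, offset=offset)

# positionally, since scale and offset are the first two parameters
secondFunction(*scaleOffset())

# or by building the keyword dict from the returned pair
secondFunction(**dict(zip(('scale', 'offset'), scaleOffset())))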
def apply_twice(func,arg):
    return func(func(arg))

def add_five(x):
    return x+5

print(apply_twice(add_five,10))
The output I get is 20.
This one is actually confusing me; how is it working? Can anybody explain how this works by breaking it down?
The function apply_twice(func, arg) takes two arguments: a function object func and an argument arg to pass to that function.
In Python, functions can easily be passed around to other functions as arguments; they are not treated differently from any other argument type (i.e. they are first-class citizens).
Inside apply_twice, func is called twice in the line:
func(func(arg))
Which, alternatively, can be viewed in a more friendly way as:
res = func(arg)
func(res)
If you replace func with the name of the function passed in add_five you get the following:
res = add_five(arg) # equals: 15
add_five(res) # result: 20
which, of course, returns your expected result.
The key point to remember from this is that you shouldn't think of functions in Python as some special construct; functions are objects just like ints, lists, and everything else.
Expanding the code it executes as follows, starting with the print call:
apply_twice(add_five, 10)
add_five(add_five(10)) # add_five(10) = 15
add_five(15) # add_five(15) = 20
Which gives you the result: 20.
When apply_twice is called, you are passing in a function object and a value. As you can see in the apply_twice definition, wherever you see func, it is substituted with the function object passed in (in this case, add_five). The inner func(arg) call is evaluated first, and its result is then passed to add_five again in the outer return func( ... ) call.
What you need to understand here is that
apply_twice(func,arg)
is a higher-order function which accepts two arguments (another function named func and an argument arg). The way it works is that it first evaluates the inner call to the other function, then uses that value as an argument inside the higher-order function.
Remember we have a function add_five(x) which adds 5 to the argument supplied to it.
This function add_five(x) is then passed as an argument to another function called
apply_twice(func, arg), which returns func(func(arg)).
Now, splitting func(func(arg)) up, we have
func(arg)  # let's call it a
then func(func(arg)) == func(a), since a = func(arg)
and func is our add_five(x) function: after it adds 5, the value we get is reused as another fresh argument to add another 5 to it, which is why we get 20 as our result.
Another example is:
def test(func, arg):
    return func(func(arg))

def mult(x):
    return x * x

print(test(mult, 2))
which gives 16 as the result, since mult(2) is 4 and mult(4) is 16.
How can I assign the results of a function call to multiple variables when the results are stored by name (not index-able), in Python?
For example (tested in Python 3),
import random

# foo, as defined somewhere else where we can't or don't want to change it
def foo():
    t = random.randint(1,100)
    # put in a dummy class instead of just "return t,t+1"
    # because otherwise we could subscript or just A,B = foo()
    class Cat(object):
        x = t
        y = t + 1
    return Cat()
# METHOD 1
# clearly wrong; B should be exactly 1 more than A, but these point to fields of two different objects
A,B = foo().x, foo().y
print(A,B)
# METHOD 2
# correct, but requires two lines and an implicit variable
t = foo()
A,B = t.x, t.y
del t # don't really want t lying around
print(A,B)
# METHOD 3
# correct and one line, but an obfuscated mess
A,B = [ (t.x,t.y) for t in (foo(),) ][0]
print(A,B)
print(t) # this will raise an exception, but unless you know your python cold it might not be obvious before running
# METHOD 4
# Conforms to the suggestions in the links below without modifying the initial function foo or class Cat.
# While all subsequent calls are pretty, we have to use an otherwise meaningless shell function
def get_foo():
    t = foo()
    return t.x, t.y
A,B = get_foo()
What we don't want to do
If the results were indexable (if Cat extended tuple/list, if we had used a namedtuple, etc.), we could simply write A,B = foo() as indicated in the comment above the Cat class. That's what's recommended here, for example.
Let's assume we have a good reason not to allow that. Maybe we like the clarity of assigning from the variable names (if they're more meaningful than x and y) or maybe the object is not primarily a container. Maybe the fields are properties, so access actually involves a method call. We don't have to assume any of those to answer this question though; the Cat class can be taken at face value.
This question already deals with how to design functions/classes in the best way possible; if the function's expected return values are already well defined and do not involve tuple-like access, what is the best way to accept multiple values when returning?
I would strongly recommend either using multiple statements, or just keeping the result object without unpacking its attributes. That said, you can use operator.attrgetter for this:
from operator import attrgetter
a, b, c = attrgetter('a', 'b', 'c')(foo())
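Applied to the foo in the question, for example:
A, B = attrgetter('x', 'y')(foo())
Both attributes come from the same Cat instance here, so B is always A + 1.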