I have a simple example as so:
import numpy as np
func_dict1 = {0: np.sin, 1: np.cos, 2: np.tan}
out = map(func_dict1.get, np.array([0, 2, 0]))
Here I am picking out three functions by their dictionary keys. Now I want to pass unique arguments to each function like so:
[f(x) for f,x in zip(out, [3,1,2])]
which renders the output:
[0.1411200080598672, 1.557407724654902, 0.9092974268256817]
But how can I do this with map?
I thought this would work, but it does not:
map(out, [3,1,2])
Where am I going wrong? And is there any benefit to using map over list comprehension? My prior is that it is faster but I confess to not being an expert on the subject.
map is designed to take a single function and apply it to every item in an iterable. You are applying a different function to different items. I think the list comprehension is an elegant way of doing it.
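That said, map does accept multiple iterables, taking one item from each per step, so a two-argument lambda can pair each function with its argument. A sketch of that approach (using math functions here so the snippet is self-contained; the numpy version works the same way):

```python
import math

func_dict1 = {0: math.sin, 1: math.cos, 2: math.tan}
out = map(func_dict1.get, [0, 2, 0])

# map takes one item from each iterable per step, so the lambda
# receives a function and its argument together
res = list(map(lambda f, x: f(x), out, [3, 1, 2]))
print(res)  # [sin(3), tan(1), sin(2)]
```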
WARNING: you probably don't want to use map and this answer might confuse you more than it helps ;).
However, as you asked how you can make map do this and as it's python, let's take the challenge: one way to achieve what you want is by wrapping your out in an object that is callable (so behaves like a function) and on each call also advances to the next function. For example like this:
# yours
import numpy as np
func_dict1 = {0: np.sin, 1: np.cos, 2: np.tan}
out = map(func_dict1.get, np.array([0, 2, 0]))
# extend like this
class FuncIterCaller:
    def __init__(self, funcs):
        self.funcs = funcs

    def __call__(self, *args, **kwds):
        return next(self.funcs)(*args, **kwds)
res = map(FuncIterCaller(out), [3,1,2])
# to see what's inside:
print(list(res))
In Python we can assign a function to a variable. For example, the math.sin function:
sin = math.sin
rad = math.radians
print(sin(rad(my_number_in_degrees)))
Is there any easy way of assigning multiple functions (i.e., a function of a function) to a variable? For example:
sin = math.sin(math.radians)  # I cannot use this with brackets
print(sin(my_number_in_degrees))
Just create a wrapper function:
def sin_rad(degrees):
    return math.sin(math.radians(degrees))
Call your wrapper function as normal:
print(sin_rad(my_number_in_degrees))
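If defining a named wrapper feels heavy, the same thing fits on one line:

```python
import math

# one-line equivalent of the sin_rad wrapper
sin_rad = lambda degrees: math.sin(math.radians(degrees))
print(sin_rad(90))  # 1.0
```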
I think what the author wants is some form of function chaining. In general this is difficult, but it may be possible for functions that
take a single argument,
return a single value,
and whose return value is of the same type as the input type of the next function in the list.
Let us say that there is a list of functions that we need to chain, each of which takes a single argument and returns a single value. Also, the types are consistent. Something like this ...
functions = [np.sin, np.cos, np.abs]
Would it be possible to write a general function that chains all of these together? Well, we can use reduce, although Guido doesn't particularly like the map/reduce idioms and at one point planned to take those functions out of the language ...
Something like this ...
>>> reduce(lambda m, n: n(m), functions, 3)
0.99005908575986534
Now how do we create a function that does this? Well, just create a function that takes a value and returns a function:
import numpy as np
from functools import reduce  # required on Python 3

def chainFunctions(functions):
    def innerFunction(y):
        return reduce(lambda m, n: n(m), functions, y)
    return innerFunction

if __name__ == '__main__':
    functions = [np.sin, np.cos, np.abs]
    ch = chainFunctions(functions)
    print(ch(3))
You could write a helper function to perform the function composition for you and use it to create the kind of variable you want. A nice feature is that it can combine a variable number of functions, and the innermost one can accept any number of arguments.
import math
try:
    reduce
except NameError:  # Python 3
    from functools import reduce

def compose(*funcs):
    """Compose a group of functions (f(g(h(...)))) into a single composite function."""
    return reduce(lambda f, g: lambda *args, **kwargs: f(g(*args, **kwargs)), funcs)
sindeg = compose(math.sin, math.radians)
print(sindeg(90)) # -> 1.0
Assume there are some useful transformation functions, for example random_spelling_error, that we would like to apply n times.
My temporary solution looks like this:
def reapply(n, fn, arg):
    for _ in range(n):
        arg = fn(arg)
    return arg
reapply(3, random_spelling_error, "This is not a test!")
Is there a built-in or otherwise better way to do this?
It need not handle variable lengths args or keyword args, but it could. The function will be called at scale, but the values of n will be low and the size of the argument and return value will be small.
We could call this reduce, but that name was of course taken by a function that can do this and much more, and which was moved out of the builtins in Python 3. Here is Guido's argument:
So in my mind, the applicability of reduce() is pretty much limited to
associative operators, and in all other cases it's better to write out
the accumulation loop explicitly.
reduce is still available in Python 3 via the functools module. I don't really know that it's any more Pythonic, but here's how you could achieve it in one line:
from functools import reduce
def reapply(n, fn, arg):
    return reduce(lambda x, _: fn(x), range(n), arg)
Get rid of the custom function completely; you're trying to compress two readable lines into one confusing function call. Which one is easier to read and understand? Your way:
foo = reapply(3, random_spelling_error, foo)
Or a simple for loop that's one more line:
for _ in range(3):
    foo = random_spelling_error(foo)
Update: According to your comment
Let's assume that there are many transformation functions I may want to apply.
Why not try something like this:
modifiers = (random_spelling_error, another_function, apply_this_too)

for modifier in modifiers:
    for _ in range(3):
        foo = modifier(foo)
Or if you need a different number of repeats for different functions, try creating a list of tuples:
modifiers = [
    (random_spelling_error, 5),
    (another_function, 3),
    ...
]

for modifier, count in modifiers:
    for _ in range(count):
        foo = modifier(foo)
Some people like recursion; it is not always obviously 'better':
def reapply(n, fn, arg):
    if n:
        arg = reapply(n - 1, fn, fn(arg))
    return arg
reapply(1, lambda x: x**2, 2)
Out[161]: 4
reapply(2, lambda x: x**2, 2)
Out[162]: 16
Suppose I have a function like this:
from toolz.curried import *

@curry
def foo(x, y):
    print(x, y)
Then I can call:
foo(1,2)
foo(1)(2)
Both return the same as expected.
However, I would like to do something like this:
@curry.inverse  # hypothetical
def bar(*args, last):
    print(*args, last)
bar(1,2,3)(last)
The idea behind this is that I would like to pre-configure a function and then put it in a pipe like this:
pipe(data,
f1, # another function
bar(1,2,3) # unknown number of arguments
)
Then, bar(1,2,3)(data) would be called as a part of the pipe. However, I don't know how to do this. Any ideas? Thank you very much!
Edit:
A more illustrative example was asked for. Thus, here it comes:
import pandas as pd
from toolz.curried import *
df = pd.DataFrame(data)
def filter_columns(*args, df):
    return df[[*args]]
pipe(df,
transformation_1,
transformation_2,
filter_columns("date", "temperature")
)
As you can see, the DataFrame is piped through the functions, and filter_columns is one of them. However, the function is pre-configured and returns a function that only takes a DataFrame, similar to a decorator. The same behaviour could be achieved with this:
def filter_columns(*args):
    def f(df):
        return df[[*args]]
    return f
However, I would always have to run two calls then, e.g. filter_columns()(df), and that is what I would like to avoid.
Well, I am unfamiliar with the toolz module, but it looks like there is no easy way to curry a function with an arbitrary number of arguments, so let's try something else.
First, as an alternative to
def filter_columns(*args):
    def f(df):
        return df[*args]
    return f
(and, by the way, df[*args] is a syntax error)
to avoid filter_columns()(data), you can just grab the last positional argument and use slice notation to grab everything else, for example:
def filter_columns(*argv):
    df, columns = argv[-1], argv[:-1]
    return df[list(columns)]  # pandas expects a list of labels, not a tuple
And use as filter_columns(df), filter_columns("date", "temperature", df), etc.
And then use functools.partial to construct your new, well, partially applied, filter to build your pipe, for example:
from functools import partial
from toolz.curried import pipe  # always be explicit with your imports; the last thing you want is to import something that overwrites something else you use
pipe(df,
transformation_1,
transformation_2,
partial(filter_columns, "date", "temperature")
)
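Another stdlib-only option is to hide the double call inside a small decorator, so the configuration call itself returns the function that waits for the data. A sketch, with a plain dict standing in for the DataFrame (defer_last is a hypothetical name, not a toolz feature):

```python
from functools import partial

def defer_last(func):
    """Hypothetical decorator: calling the decorated function with the
    leading arguments returns a new callable awaiting the final argument."""
    def configured(*args, **kwargs):
        return partial(func, *args, **kwargs)
    return configured

@defer_last
def filter_columns(*args):
    *columns, data = args  # the last positional argument is the data
    return {k: data[k] for k in columns}

step = filter_columns("date", "temperature")  # one call to configure
print(step({"date": 1, "temperature": 2, "humidity": 3}))
# {'date': 1, 'temperature': 2}
```

A step built this way drops straight into pipe, since the configured object only needs the data argument.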
Is there any built-in version of this?
def unpack(f, a):
    return f(**a)  # or ``return f(*a)``
Why isn't unpack considered to be an operator and located in operator.*?
I'm trying to do something similar to this (but of course want a general solution to the same type of problem):
from functools import partial, reduce
from operator import add
data = [{'tag':'p','inner':'Word'},{'tag':'img','inner':'lower'}]
renderer = partial(unpack, "<{tag}>{inner}</{tag}>".format)
print(reduce(add, map(renderer, data)))
but without using lambdas or comprehensions.
That is not the way to go about this. How about
print(''.join('<{tag}>{inner}</{tag}>'.format(**d) for d in data))
Same behavior in a much more Pythonic style.
Edit: Since you seem opposed to using any of the nice features of Python, how about this:
def tag_format(x):
    return '<{tag}>{inner}</{tag}>'.format(tag=x['tag'], inner=x['inner'])

results = []
for d in data:
    results.append(tag_format(d))
print(''.join(results))
I don't know of an operator that does what you want, but you don't really need it to avoid lambdas or comprehensions:
from functools import reduce
from operator import add
data = [{'tag':'p','inner':'Word'},{'tag':'img','inner':'lower'}]
print(reduce(add, map("<{0[tag]}>{0[inner]}</{0[tag]}>".format, data)))
Seems like it would be possible to generalize something like this if you wanted.
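For the positional case (f(*a)), the standard library does in fact ship something close: itertools.starmap calls f(*item) for each item. As far as I know there is no keyword counterpart for f(**a):

```python
from functools import reduce
from itertools import starmap
from operator import add

data = [('p', 'Word'), ('img', 'lower')]

# starmap(f, seq) calls f(*item), i.e. the positional flavour of unpack
html = reduce(add, starmap('<{0}>{1}</{0}>'.format, data))
print(html)  # <p>Word</p><img>lower</img>
```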
In Python, is it possible to encapsulate exactly the common slice syntax and pass it around? I know that I can use slice() or __getitem__ to emulate slicing. But I want to pass the exact same syntax that I would put in the square brackets that would get used with __getitem__.
For example, suppose I wrote a function to return some slice of a list.
def get_important_values(some_list, some_condition, slice):
    elems = filter(some_condition, some_list)
    return elems[slice]
This works fine if I manually pass in a slice object:
In [233]: get_important_values([1,2,3,4], lambda x: (x%2) == 0, slice(0, None))
Out[233]: [2, 4]
But what I want to let the user pass is exactly the same slicing they would have used with __getitem__:
get_important_values([1,2,3,4], lambda x: (x%2) == 0, (0:-1) )
# or
get_important_values([1,2,3,4], lambda x: (x%2) == 0, (0:) )
Obviously this generates a syntax error. But is there any way to make this work, without writing my own mini parser for the x:y:t type slices, and forcing the user to pass them as strings?
Motivation
I could just make this example function return something directly sliceable, such as filter(some_condition, some_list), which will be the whole result as a list. In my actual example, however, the internal function is much more complicated, and if I know the slice that the user wants ahead of time, I can greatly simplify the calculation. But I want the user to not have to do much extra to tell me the slice ahead of time.
Perhaps something along the following lines would work for you:
class SliceMaker(object):
    def __getitem__(self, item):
        return item

make_slice = SliceMaker()

print(make_slice[3])
print(make_slice[0:])
print(make_slice[:-1])
print(make_slice[1:10:2, ...])
The idea is that you use make_slice[] instead of manually creating instances of slice. By doing this you'll be able to use the familiar square brackets syntax in all its glory.
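Combined with the question's function, the call site then reads almost like native slicing. A sketch (get_important_values rewritten with a list comprehension so the result is subscriptable in Python 3):

```python
class SliceMaker(object):
    def __getitem__(self, item):
        # hand back whatever slice (or index) was written in the brackets
        return item

make_slice = SliceMaker()

def get_important_values(some_list, some_condition, the_slice):
    return [x for x in some_list if some_condition(x)][the_slice]

print(get_important_values([1, 2, 3, 4], lambda x: x % 2 == 0, make_slice[0:]))
# [2, 4]
```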
In short, no. That syntax is only valid in the context of the [] operator. I might suggest accepting a tuple as input and then pass that tuple to slice(). Alternatively, maybe you could redesign whatever you're doing so that get_important_values() is somehow implemented as a sliceable object.
For example, you could do something like:
class ImportantValueGetter(object):
    def __init__(self, some_list, some_condition):
        self.some_list = some_list
        self.some_condition = some_condition

    def __getitem__(self, key):
        # Here key could be an int or a slice; you can do some type checking if necessary
        return list(filter(self.some_condition, self.some_list))[key]
You can probably do one better by turning this into a Container ABC of some sort but that's the general idea.
One way (for simple slices) would be to have the slice argument be either a dict or an int, i.e.
get_important_values([1, 2, 3, 4], lambda x: (x%2) == 0, {0: -1})
or
get_important_values([1, 2, 3, 4], lambda x: (x%2) == 0, 1)
then the syntax would stay more or less the same.
This wouldn't work, though, for cases where you want to do things like
some_list[0:6:10]
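For completeness, a minimal converter the function could apply internally to accept the {start: stop} / int convention (dict_to_slice is a hypothetical helper; it deliberately handles no step, which is exactly the limitation noted above):

```python
def dict_to_slice(spec):
    """Turn {start: stop} into a slice; pass ints through unchanged."""
    if isinstance(spec, int):
        return spec
    (start, stop), = spec.items()  # expect exactly one start/stop pair
    return slice(start, stop)

print(dict_to_slice({0: -1}))              # slice(0, -1, None)
print([1, 2, 3, 4][dict_to_slice({0: -1})])  # [1, 2, 3]
```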