Async lambda function in discord.py [duplicate] - python

I'm trying to do something like this:
mylist.sort(key=lambda x: await somefunction(x))
But I get this error:
SyntaxError: 'await' outside async function
Which makes sense because the lambda is not async.
I tried to use async lambda x: ... but that throws a SyntaxError: invalid syntax.
PEP 492 states:
Syntax for asynchronous lambda functions could be provided, but this construct is outside of the scope of this PEP.
But I could not find out if that syntax was implemented in CPython.
Is there a way to declare an async lambda, or to use an async function for sorting a list?

You can't. There is no async lambda, and even if there were, you couldn't pass it as a key function to list.sort(), since a key function is called synchronously and its result is not awaited. An easy work-around is to annotate your list yourself:
mylist_annotated = [(await some_function(x), x) for x in mylist]
mylist_annotated.sort()
mylist = [x for key, x in mylist_annotated]
Note that await expressions in list comprehensions are only supported in Python 3.6+. If you're using 3.5, you can do the following:
mylist_annotated = []
for x in mylist:
    mylist_annotated.append((await some_function(x), x))
mylist_annotated.sort()
mylist = [x for key, x in mylist_annotated]
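Put together, a minimal runnable version of this decorate-sort-undecorate approach could look like the following (some_function here is just a stand-in that simulates async work):
import asyncio

async def some_function(x):
    await asyncio.sleep(0)  # stand-in for real async work
    return x

async def main():
    mylist = [2, 3, 1]
    # decorate with the awaited key, sort, then undecorate
    mylist_annotated = [(await some_function(x), x) for x in mylist]
    mylist_annotated.sort()
    mylist = [x for key, x in mylist_annotated]
    print(mylist)  # [1, 2, 3]

asyncio.run(main())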

An "async lambda" can be emulated by combining a lambda with an async generator:1
key=lambda x: (await somefunction(x) for _ in '_').__anext__()
It is possible to move the ( ).__anext__() to a helper, which likely makes the pattern clearer as well:
def head(async_iterator): return async_iterator.__anext__()
key=lambda x: head(await somefunction(x) for _ in '_')
Note that the list.sort method and the sorted function in the standard library are not async. One needs an async version, such as asyncstdlib.sorted (disclaimer: I maintain this library):
import asyncstdlib as a
mylist = await a.sorted(mylist, key=lambda x: head(await somefunction(x) for _ in '_'))
Understanding the lambda ...: (...).__anext__() pattern
An "async lambda" would be an anonymous asynchronous function, or in other words an anonymous function evaluating to an awaitable. This is in parallel to how async def defines a named function evaluating to an awaitable.
The task can be split into two parts: An anonymous function expression and a nested awaitable expression.
An anonymous function expression is exactly what a lambda ...: ... is.
An awaitable expression is only allowed inside a coroutine function; however:
An (asynchronous) generator expression implicitly creates a (coroutine) function. As an async generator only needs async to run, it can be defined in a sync function (since Python 3.7).
An asynchronous iterable can be used as an awaitable via its __anext__ method.
These three parts are directly used in the "async lambda" pattern:
#   | regular lambda for the callable and scope
#   |         | async generator expression for an async scope
#   v         v                                   v first item as an awaitable
key=lambda x: (await somefunction(x) for _ in '_').__anext__()
The for _ in '_' in the async generator is only to have exactly one iteration. Any variant with at least one iteration will do.
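To see the pattern work in isolation, here is a minimal runnable sketch (somefunction is a placeholder coroutine) that awaits the emulated "async lambda" directly, without any sorting involved:
import asyncio

async def somefunction(x):
    await asyncio.sleep(0)  # placeholder for real async work
    return x * 2

def head(async_iterator): return async_iterator.__anext__()

key = lambda x: head(await somefunction(x) for _ in '_')

async def main():
    print(await key(21))  # 42

asyncio.run(main())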
¹ Be mindful whether an "async lambda" is actually needed in the first place, since async functions are first class just like regular functions. Just as lambda x: foo(x) is redundant and should just be foo, lambda x: (await bar(x) …) is redundant and should just be bar. The function body should do more than just call-and-await, such as 3 + await bar(x) or await bar(x) or await qux(x).

await cannot be included in a lambda function.
The solutions here can be shortened to:
from asyncio import run

my_list = [...]

async def some_function(x):
    ...

my_list.sort(key=lambda x: await some_function(x))  # raises a SyntaxError
my_list.sort(key=lambda x: run(some_function(x)))  # works
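One caveat: asyncio.run() starts a new event loop on every call, so this only works from synchronous code; calling it inside an already-running event loop raises a RuntimeError. A minimal runnable sketch (some_function here is a stand-in):
import asyncio

async def some_function(x):
    await asyncio.sleep(0)  # stand-in for real async work
    return x

my_list = [3, 1, 2]
# run() creates and tears down a fresh event loop per element, so this is
# only viable from synchronous code, never from inside a running loop.
my_list.sort(key=lambda x: asyncio.run(some_function(x)))
print(my_list)  # [1, 2, 3]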

If you already defined a separate async function, you can simplify MisterMiyagi's answer even a bit more:
mylist = await a.sorted(
    mylist,
    key=somefunction)
If you want to change the key after awaiting it, you can use asyncstdlib.apply:
mylist = await a.sorted(
    mylist,
    key=lambda x: a.apply(lambda after: 1 / after, some_function(x)))
Here is a complete example program:
import asyncio
import asyncstdlib as a

async def some_function(x):
    return x

async def testme():
    mylist = [2, 1, 3]
    mylist = await a.sorted(
        mylist,
        key=lambda x: a.apply(lambda after: 1 / after, some_function(x)))
    print(f'mylist is: {mylist}')

if __name__ == "__main__":
    asyncio.run(testme())

The answer from Sven Marnach has an edge case: if you sort a list in which two items produce the same sort key but are themselves not comparable, it will crash.
mylist = [{'score':50,'name':'bob'},{'score':50,'name':'linda'}]
mylist_annotated = [(x['score'], x) for x in mylist]
mylist_annotated.sort()
print([x for key, x in mylist_annotated])
Will give:
TypeError: '<' not supported between instances of 'dict' and 'dict'
Fortunately I had an easy solution - my data had a unique, sortable key in it, so I could put that as the second element:
mylist = [{'score':50,'name':'bob','unique_id':1},{'score':50,'name':'linda','unique_id':2}]
mylist_annotated = [(x['score'], x['unique_id'], x) for x in mylist]
mylist_annotated.sort()
print([x for key, unique, x in mylist_annotated])
I guess if your data doesn't have a naturally unique value in it, you can insert one before trying to sort? A uuid, maybe?
EDIT: As suggested in a comment (thanks!), you can also use operator.itemgetter:
import operator
mylist = [{'score':50,'name':'bob'},{'score':50,'name':'linda'}]
mylist_annotated = [(x['score'], x) for x in mylist]
mylist_annotated.sort(key=operator.itemgetter(0))
print([x for key, x in mylist_annotated])
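Applied back to the original async problem, the same fix might look like this sketch (some_function is a hypothetical coroutine that extracts the score):
import asyncio
import operator

async def some_function(d):
    await asyncio.sleep(0)  # stand-in for real async work
    return d['score']

async def main():
    mylist = [{'score': 50, 'name': 'bob'}, {'score': 50, 'name': 'linda'}]
    mylist_annotated = [(await some_function(x), x) for x in mylist]
    # sorting on the first element only, so the dicts are never compared
    mylist_annotated.sort(key=operator.itemgetter(0))
    print([x for key, x in mylist_annotated])

asyncio.run(main())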

Related

Can I use await in a lambda function? [duplicate]


asyncio.gather with generator expression

Why doesn't asyncio.gather work with a generator expression?
import asyncio
async def func():
    await asyncio.sleep(2)

# Works
async def call3():
    x = (func() for x in range(3))
    await asyncio.gather(*x)

# Doesn't work
async def call3():
    await asyncio.gather(func() for x in range(3))

# Works
async def call3():
    await asyncio.gather(*[func() for x in range(3)])

asyncio.run(call3())
The second variant gives:
[...]
File "test.py", line 13, in <genexpr>
await asyncio.gather(func() for x in range(3))
RuntimeError: Task got bad yield: <coroutine object func at 0x10421dc20>
Is this expected behavior?
await asyncio.gather(func() for x in range(3))
This doesn't work because it passes the generator object itself as the single argument to gather. gather doesn't expect an iterable; it expects coroutines as individual arguments. Which means you need to unpack the generator.
Unpack the generator:
await asyncio.gather(*(func() for i in range(10))) # star expands generator
We must unpack it because asyncio.gather expects its awaitables as separate positional arguments (i.e. asyncio.gather(coroutine0, coroutine1, coroutine2, coroutine3)), not as a single iterable.
Python uses * and ** for both packing and unpacking, depending on where they appear. In a function definition:
def foo(*args, **kwargs): ...
all non-keyword arguments get packed into a tuple args and all keyword arguments get packed into a new dict kwargs. Even a single argument passed in still gets packed into the tuple (*) or dict (**).
This is kind of a hybrid
>>> first, *i_take_the_rest, last = range(10)
>>> first, i_take_the_rest, last
(0, [1, 2, 3, 4, 5, 6, 7, 8], 9)
>>> *a, b = range(1)
>>> a, b
([], 0)
But here it unpacks:
combined_iterables = [*range(10),*range(3)]
merged_dict = {**first_dict,**second_dict}
So basically: on the left side of an assignment, or in a function/method definition like *foo, it packs values into a list or tuple (respectively). In function calls and in list/set/dict displays, however, it has the unpacking behavior.
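A short sketch showing both directions side by side (names are arbitrary):
def foo(*args, **kwargs):
    # *args packs positionals into a tuple; **kwargs packs keywords into a dict
    return args, kwargs

print(foo(1, 2, x=3))            # ((1, 2), {'x': 3})
print([*range(3), *range(2)])    # unpacking in a list display: [0, 1, 2, 0, 1]
print({**{'a': 1}, **{'b': 2}})  # unpacking in a dict display: {'a': 1, 'b': 2}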

How to take the logical and of a list of boolean functions in python

Say I have a list of functions li = [fun1, fun2, ..., funk] that take one argument and return a boolean. I'd like to compose them into a single function that returns the logical and of the return values of each of the functions when evaluated at its argument. (In other words, I'd like fun(x) = fun1(x) and fun2(x) and ... and funk(x).)
Is there an elegant way of doing this?
Use all to create the composite function
def func(x, lst):
    # lst is the list of functions; x is the argument passed to each of them
    return all(fun(x) for fun in lst)
And then call it as
func(x, [fun1, fun2, fun3, ..., funk])
If a lambda function is needed, you can do the following, though note it is against PEP 8 guidelines:
Always use a def statement instead of an assignment statement that binds a lambda expression directly to an identifier.
func = lambda x:all(fun(x) for fun in lst)
And call it as
func(x)
all will do it
all(func(x) for func in li)
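For example, with two hypothetical predicates (names made up for illustration):
def is_positive(x):
    return x > 0

def is_even(x):
    return x % 2 == 0

def func(x, lst):
    return all(fun(x) for fun in lst)

print(func(4, [is_positive, is_even]))   # True
print(func(-2, [is_positive, is_even]))  # False: not positive
print(func(3, [is_positive, is_even]))   # False: not even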

Aggregating an async generator to a tuple

In trying to aggregate the results from an asynchronous generator, like so:
async def result_tuple():
    async def result_generator():
        # some await things happening in here
        yield 1
        yield 2

    return tuple(num async for num in result_generator())
I get a
TypeError: 'async_generator' object is not iterable
when executing the async for line.
But PEP 530 seems to suggest that it should be valid:
Asynchronous Comprehensions
We propose to allow using async for inside list, set and dict comprehensions. Pending PEP 525 approval, we can also allow creation of asynchronous generator expressions.
Examples:
set comprehension: {i async for i in agen()};
list comprehension: [i async for i in agen()];
dict comprehension: {i: i ** 2 async for i in agen()};
generator expression: (i ** 2 async for i in agen()).
What's going on, and how can I aggregate an asynchronous generator into a single tuple?
In the PEP excerpt, the comprehensions are listed side-by-side in the same bullet list, but the generator expression is very different from the others.
There is no such thing as a "tuple comprehension". The argument to tuple() creates an asynchronous generator expression:
tuple(num async for num in result_generator())
The line is equivalent to tuple(result_generator()). tuple() then tries to iterate over the generator synchronously and raises the TypeError.
The other comprehensions will work, though, as the question expected. So it's possible to generate a tuple by first aggregating to a list, like so:
async def result_tuple():
    async def result_generator():
        # some await things happening in here
        yield 1
        yield 2

    return tuple([num async for num in result_generator()])
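Wrapped into a complete runnable program (with a trivial await standing in for the real async work):
import asyncio

async def result_generator():
    await asyncio.sleep(0)  # stand-in for real async work
    yield 1
    yield 2

async def result_tuple():
    # aggregate to a list first, then convert; there is no tuple comprehension
    return tuple([num async for num in result_generator()])

print(asyncio.run(result_tuple()))  # (1, 2)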

lambda returns lambda in python

Very rarely I'll come across some code in Python that uses an anonymous function which returns an anonymous function...?
Unfortunately I can't find an example on hand, but it usually takes the form like this:
g = lambda x,c: x**c lambda c: c+1
Why would someone do this? Maybe you can give an example that makes sense (I'm not sure the one I made makes any sense).
Edit: Here's an example:
swap = lambda a,x,y:(lambda f=a.__setitem__:(f(x,(a[x],a[y])),
f(y,a[x][0]),f(x,a[x][1])))()
You could use such a construct to do currying:
curry = lambda f, a: lambda x: f(a, x)
You might use it like:
>>> add = lambda x, y: x + y
>>> add5 = curry(add, 5)
>>> add5(3)
8
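For what it's worth, the standard library covers this particular case with functools.partial, which does the same thing without the nested lambdas:
from functools import partial

add = lambda x, y: x + y
add5 = partial(add, 5)
print(add5(3))  # 8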
swap = lambda a,x,y:(lambda f=a.__setitem__:(f(x,(a[x],a[y])),
f(y,a[x][0]),f(x,a[x][1])))()
See the () at the end? The inner lambda isn't returned, it's called.
The function does the equivalent of
def swap(a, x, y):
    a[x] = (a[x], a[y])
    a[y] = a[x][0]
    a[x] = a[x][1]
But let's suppose that we want to do this in a lambda. We cannot use assignments in a lambda. However, we can call __setitem__ for the same effect.
def swap(a, x, y):
    a.__setitem__(x, (a[x], a[y]))
    a.__setitem__(y, a[x][0])
    a.__setitem__(x, a[x][1])
But a lambda can only contain one expression. Since these are all function calls, we can wrap them up in a tuple:
def swap(a, x, y):
    (a.__setitem__(x, (a[x], a[y])),
     a.__setitem__(y, a[x][0]),
     a.__setitem__(x, a[x][1]))
However, all those __setitem__'s are getting me down, so let's factor them out:
def swap(a, x, y):
    f = a.__setitem__
    (f(x, (a[x], a[y])),
     f(y, a[x][0]),
     f(x, a[x][1]))
Dagnamit, I can't get away with adding another assignment! I know, let's abuse default parameters.
def swap(a, x, y):
    def inner(f=a.__setitem__):
        (f(x, (a[x], a[y])),
         f(y, a[x][0]),
         f(x, a[x][1]))
    inner()
Ok let's switch over to lambdas:
swap = lambda a, x, y: (lambda f=a.__setitem__: (f(x, (a[x], a[y])), f(y, a[x][0]), f(x, a[x][1])))()
Which brings us back to the original expression (plus/minus typos)
All of this leads back to the question: Why?
The function should have been implemented as
def swap(a, x, y):
    a[x], a[y] = a[y], a[x]
The original author went way out of their way to use a lambda rather than a function. It could be that they don't like nested functions for some reason; I don't know. All I'll say is that it's bad code (unless there is a mysterious justification for it).
It can be useful for temporary placeholders. Suppose you have a decorator factory:
@call_logger(log_arguments=True, log_return=False)
def f(a, b):
    pass
You can temporarily replace it with
call_logger = lambda *a, **kw: lambda f: f
It can also be useful if it indirectly returns a lambda:
import collections
collections.defaultdict(lambda: collections.defaultdict(lambda: collections.defaultdict(int)))
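For instance, that nested defaultdict lets you assign to arbitrarily deep keys without initializing the intermediate levels first (a small usage sketch):
import collections

tree = collections.defaultdict(
    lambda: collections.defaultdict(lambda: collections.defaultdict(int)))

tree['a']['b']['c'] += 1    # every missing level springs into existence
print(tree['a']['b']['c'])  # 1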
It's also useful for creating callable factories in the Python console.
And just because something is possible doesn't mean that you have to use it.
I did something like this just the other day to disable a test method in a unittest suite.
disable = lambda fn : lambda *args, **kwargs: None
@disable
def test_method(self):
    ... test code that I wanted to disable ...
Easy to re-enable it later.
This can be used to pull out some common repetitive code (there are of course other ways to achieve this in Python).
Maybe you're writing a logger, and you need to prepend the level to the log string. You might write something like:
import sys
prefixer = lambda prefix: lambda message: sys.stderr.write(prefix + ":" + message + "\n")
log_error = prefixer("ERROR")
log_warning = prefixer("WARNING")
log_info = prefixer("INFO")
log_debug = prefixer("DEBUG")
log_info("An informative message")
log_error("Oh no, a fatal problem")
This program prints out
INFO:An informative message
ERROR:Oh no, a fatal problem
It is most often used - at least in code I come across and that I write myself - to "freeze" a variable with the value it has at the point the lambda function is created. Otherwise, nonlocal variables reference the variable in the scope where they exist, which can sometimes lead to undesired results.
For example, say I want to create a list of ten functions, each one a multiplier for a scalar from 0 to 9. One might be tempted to write it like this:
>>> a = [(lambda j: i * j) for i in range(10)]
>>> a[9](10)
90
However, if you use any of the other generated functions, you get the same result:
>>> a[1](10)
90
That is because the "i" variable inside the lambda is not resolved when the lambda is created. Rather, Python keeps a reference to the "i" in the "for" statement - on the scope it was created (this reference is kept in the lambda function closure). When the lambda is executed, the variable is evaluated, and its value is the final one it had in that scope.
When one uses two nested lambdas like this:
>>> a = [(lambda k: (lambda j: k * j))(i) for i in range(10)]
The "i" variable is evaluated durint the execution of the "for" loop. Itś value is passed to "k" - and "k" is used as the non-local variable in the multiplier function we are factoring out. For each value of i, there will be a different instance of the enclosing lambda function, and a different value for the "k" variable.
So, it is possible to achieve the original intent :
>>> a = [(lambda k: (lambda j: k * j))(i) for i in range(10)]
>>> a[1](10)
10
>>> a[9](10)
90
>>>
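The same early binding can also be achieved with a default argument instead of a second lambda, which is the more common idiom:
>>> a = [lambda j, i=i: i * j for i in range(10)]
>>> a[1](10)
10
>>> a[9](10)
90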
It can be used to achieve a more continuation/trampolining style of programming; see Continuation-passing style.
Basically, with this you can modify functions instead of values.
One example I stumbled on recently: computing approximate derivatives (as functions) and using them as input functions elsewhere.
dx = 1/10**6
ddx = lambda f: lambda x: (f(x + dx) - f(x))/dx
f = lambda x: foo(x)
newton_method(func=ddx(f), x0=1, n=10)
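As a self-contained version of the derivative idea (newton_method and foo above are assumed to exist elsewhere; here a concrete function is checked directly):
dx = 1 / 10**6
ddx = lambda f: lambda x: (f(x + dx) - f(x)) / dx

square = lambda x: x * x
d_square = ddx(square)        # approximates the derivative 2*x
print(round(d_square(3), 3))  # 6.0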
