In Python 3.5.1 one can make use of await/async; however, to use it (as I understand), you need to have an awaitable object.
An awaitable object is an object that defines an __await__() method returning an iterator. More info here.
But I cannot find any example of implementing this, since most examples use some sort of asyncio.sleep(x) to mimic an awaitable object.
My ultimate goal is to make a simple websocket serial server; however, I can't get past this first step.
This is my (non-working) code:
import serial
import asyncio
connected = False
port = 'COM9'
#port = '/dev/ttyAMA0'
baud = 57600
timeout=1
class startser(object):
    def __init__(self, port, baud):
        self.port = port
        self.baud = baud

    def openconn(self):
        self.ser = serial.Serial(port, baud)

    async def readport(self):
        #gooo = await (self.ser.in_waiting > 0)
        read_byte = async self.ser.read(1).decode('ascii')
        self.handle_data(read_byte)
        print("42")

    def handle_data(self, data):
        print(data)
serr = startser(port, baud)
serr.openconn()

loop = asyncio.get_event_loop()
#loop.run_forever(serr.readport())
loop.run_until_complete(serr.readport())
loop.close()
print("finitto")

#with serial.Serial('COM9', 115200, timeout=1) as ser:
#    x = ser.read()          # read one byte
#    s = ser.read(10)        # read up to ten bytes (timeout)
#    line = ser.readline()   # read a '\n' terminated line
I guess there is still no answer because the question is not entirely clear.
You correctly said that
An awaitable object is an object that defines __await__() method returning an iterator
Not much to add here. Just return an iterator from that method.
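To make that concrete, here is a minimal sketch of a hand-written awaitable (the Ready class is purely illustrative; in practice an async def coroutine or an asyncio.Future already gives you an awaitable):

import asyncio

class Ready:
    """A minimal awaitable: __await__ returns an iterator (here, a generator)."""
    def __init__(self, value):
        self.value = value

    def __await__(self):
        if False:
            yield  # the bare yield makes __await__ a generator function; it never actually suspends
        return self.value  # becomes the result of `await Ready(...)`

async def main():
    print(await Ready(42))  # prints 42

asyncio.get_event_loop().run_until_complete(main())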
The only thing you need to understand is how it works: how asyncio or a similar framework achieves concurrency in a single thread. This is simple on a high level: just get all your code organized as iterators, then call them one by one until the values are exhausted.
So, for example, if you have two iterators, say the first one yields letters and the second one yields numbers, the event loop calls the first one and gets 'A', then it calls the second one and gets 1, then it calls the first one again and gets 'B', and so on, until the iterators are exhausted. Of course, each of these iterators can do whatever you want before yielding the next value. But the longer it takes, the longer the pause between "task switches" will be. You MUST keep every iteration short (a minimal sketch of this round-robin idea appears after the three rules below):
If you have inner loops, use async for: this allows switching tasks without explicit yielding.
If you have a lot of code which executes for tens or even hundreds of milliseconds, consider rewriting it in smaller pieces. In the case of legacy code, you can use hacks like await asyncio.sleep(0), which gives asyncio a chance to switch tasks at that point.
No blocking operations! This is the most important rule. Suppose you do something like socket.recv(): all tasks will be stopped until this call ends. This is why it is called async io in the standard library: you must use their implementations of all I/O functions, like BaseEventLoop.sock_recv().
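Here is that sketch: a toy "event loop" in plain Python (not asyncio) that interleaves two iterator-based tasks exactly as described above; the letters/numbers generators are just an illustration:

def letters():
    for ch in "AB":
        yield ch

def numbers():
    for n in (1, 2):
        yield n

# A toy "event loop": run each task until its next yield, round-robin.
tasks = [letters(), numbers()]
while tasks:
    for task in list(tasks):
        try:
            print(next(task))
        except StopIteration:
            tasks.remove(task)  # this task is finished
# Output: A, 1, B, 2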
I'd recommend you start (if you haven't yet) with the following docs:
https://pymotw.com/3/asyncio/
https://docs.python.org/3/library/asyncio.html
https://www.python.org/dev/peps/pep-0492
Related
I have an always-on video stream being processed in an infinite loop. Once a certain object is detected, a second I/O-bound method (let's refer to it as FuncIO) is triggered. Ideally, only one instance of FuncIO should run at a time. Once FuncIO completes, the parent loop should continue (i.e., wait for the next trigger of FuncIO).
Here is the pseudocode:
def FuncIO(self):
    if self._funcio_running:
        # Only 1 instance of FuncIO should run at a time.
        # Is this the best place to enforce this?
        return
    self._funcio_running = True
    PerformsBlockingIO()
    self._funcio_running = False
    return

def main_loop(self):
    while True:
        if detect_object():
            # Run FuncIO asynchronously
            pass
        else:
            # Performs other tasks.
            pass
I'm a bit new to asyncio so I would like to know if there is an existing design pattern I can use to handle this scenario.
Thanks!
As far as I understand from your question, you don't leverage the advantages of asynchronous functionality with this approach, and maybe you don't even need it here.
If I misunderstood and you do want to keep this asynchronous: asyncio has a dedicated lock mechanism (you can read more here); it would look something like this:
async def FuncIO(self):
    await lock.acquire()
    try:
        PerformsBlockingIO()
    finally:
        lock.release()
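For reference, asyncio.Lock also works as an async context manager, which acquires and releases the lock for you; a minimal sketch, reusing the question's PerformsBlockingIO placeholder:

import asyncio

lock = asyncio.Lock()

async def FuncIO(self):
    async with lock:          # acquired on entry, released on exit, even on error
        PerformsBlockingIO()  # note: a truly blocking call still stalls the event loop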
I have some hardware devices on my network that I need to read data from every 100 ms, and I need some async way to do it rather than waiting on each call.
One way to do it is to use threads; another is to use asyncio with the loop.run_in_executor method (which runs each call in a thread).
In both cases it will be async, so I really don't understand what asyncio gives us that threads do not.
Can someone explain what the advantage of using asyncio over threads is?
For example, how can I turn the following code into asyncio code?
def _send(self, data):
    """Send data over current socket

    :param data: registers value to write
    :type data: str (Python2) or class bytes (Python3)
    :returns: True if send ok or None if error
    :rtype: bool or None
    """
    # check link
    if self.__sock is None:
        self.__debug_msg('call _send on close socket')
        return None
    # send
    data_l = len(data)
    try:
        send_l = self.__sock.send(data)
    except socket.error:
        send_l = None
    # handle send error
    if (send_l is None) or (send_l != data_l):
        self.__last_error = const.MB_SEND_ERR
        self.__debug_msg('_send error')
        self.close()
        return None
    else:
        return send_l
This code is taken from the ModbusClient class.
Thanks
I believe that threads use each of your computer's hardware threads at the same time, which means that rather than reading/processing the data one at a time, you can run as many threads as your computer supports. This means that you are limited by the hardware you are programming on.
What asyncio allows you to do is to add the work to a future, so you get the data, but you don't do anything with it yet. Once a certain amount of time has passed, or a certain number of data points have been collected, you can process them all at once.
In this situation asyncio would be advantageous because you can add anywhere from zero to many thousands of tasks to your future and perform them all at once instead.
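To make that concrete for the question's _send example, here is a hedged sketch (this is not the ModbusClient API; the hosts, ports, and read size are placeholders) of querying several devices concurrently with asyncio streams and asyncio.gather:

import asyncio

async def send_to_device(host, port, data):
    # Connect, send the request, and read a reply; awaiting lets other
    # device queries run while this one is waiting on the network.
    reader, writer = await asyncio.open_connection(host, port)
    try:
        writer.write(data)
        await writer.drain()
        return await reader.read(256)  # placeholder read size
    finally:
        writer.close()

async def poll_all(devices, data):
    # Query every device concurrently instead of one after another.
    return await asyncio.gather(*(send_to_device(h, p, data) for h, p in devices))

# loop = asyncio.get_event_loop()
# replies = loop.run_until_complete(
#     poll_all([("10.0.0.5", 502), ("10.0.0.6", 502)], b"\x00\x01"))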
I want to use kqueue to monitor files for changes. I can see how to use select.kqueue() in a threaded way.
I'm searching for a way to use it with asyncio. I may have missed something really obvious here. I know that Python uses kqueue for asyncio on macOS. I'm happy with a solution that only works when the kqueue selector is used.
So far the only way I can see to do this is create a thread to continually kqueue.control() from another thread and then inject the events in with asyncio.loop.call_soon_threadsafe(). I feel like there should be a better way.
You can add the FD from the kqueue object as a reader to the event loop using loop.add_reader(). The event loop will then inform you when events are ready to collect.
There are two aspects of doing this which might seem odd to those familiar with kqueue:
select.kqueue.control is a one-shot method which first applies changes to the monitor and then waits for new events to arrive. Because we don't ever want it to block, the two actions must be split into one non-blocking call to modify the monitor and a second, later, non-blocking call to collect the resulting events.
Because we don't ever want to block, the timeout can never be used. This can be re-implemented with asyncio.wait_for().
There are more efficient ways to write this, but here's an example of how to completely replace select.kqueue.control with an async method (here named kqueue_control):
import asyncio
import select
from typing import Iterable, Optional

async def kqueue_control(kqueue: select.kqueue,
                         changes: Optional[Iterable[select.kevent]],
                         max_events: int,
                         timeout: Optional[int]):
    def receive_result():
        try:
            # Events are ready to collect; fetch them but do not block
            results = kqueue.control(None, max_events, 0)
        except Exception as ex:
            future.set_exception(ex)
        else:
            future.set_result(results)
        finally:
            loop.remove_reader(kqueue.fileno())

    # If this call is non-blocking then just execute it
    if timeout == 0 or max_events == 0:
        return kqueue.control(changes, max_events, 0)

    # Apply the changes, but DON'T wait for events
    kqueue.control(changes, 0)

    loop = asyncio.get_running_loop()
    future = loop.create_future()
    loop.add_reader(kqueue.fileno(), receive_result)

    if timeout is None:
        return await future
    else:
        return await asyncio.wait_for(future, timeout)
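As a usage illustration (BSD/macOS only; the path and the vnode filter choices are assumptions, not part of the answer above), watching a file for writes could look roughly like this:

import asyncio
import select

async def watch(path):
    kq = select.kqueue()
    f = open(path, "rb")  # kqueue monitors an open file descriptor
    change = select.kevent(f.fileno(),
                           filter=select.KQ_FILTER_VNODE,
                           flags=select.KQ_EV_ADD | select.KQ_EV_CLEAR,
                           fflags=select.KQ_NOTE_WRITE)
    while True:
        events = await kqueue_control(kq, [change], max_events=1, timeout=None)
        print("file changed:", events)

# asyncio.run(watch("/tmp/example.txt"))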
Here are two simple RequestHandlers:
import tornado.web
from tornado import gen
from tornado.concurrent import Future
from datetime import datetime as dt

global_futures = set()

class AsyncHandler(tornado.web.RequestHandler):
    @gen.coroutine
    def get(self):
        while True:
            future = Future()
            global_futures.add(future)
            s = yield future
            self.write(s)
            self.flush()

class AsyncHandler2(tornado.web.RequestHandler):
    @gen.coroutine
    def get(self):
        for f in global_futures:
            f.set_result(str(dt.now()))
        global_futures.clear()
        self.write("OK")
The first one "subscribes" to the stream, second one delivers message to all subscribers.
The problem is that I cannot have more than a handful (in my case 5-6) of subscribers. As soon as I subscribe more than that, the next request to the second handler simply hangs.
I assume this is happening because the first handler is not properly asynchronous. Is that because I am using a global object to store the list of subscribers?
How can I have more streaming requests open simultaneously, and what is a logical limit?
The problem is that global_futures is being modified while you're iterating over it: when AsyncHandler.get wakes up, it runs from one yield to the next, meaning it creates its next Future and adds it to the set before control is returned to AsyncHandler2. This is undefined and the behavior depends on where the iterator is in the set: sometimes the new future is inserted "behind" the iterator and everything is fine, sometimes it's inserted "in front of" the iterator and the same consumer handler will be woken up a second time (and insert a third copy of itself which may be in front or behind...). When you only have a few consumers you'll hit the "behind" case often enough that things will work, but with too many it becomes extremely unlikely to ever finish.
The solution is to copy global_futures before iterating over it instead of clearing it at the end:
@gen.coroutine
def get(self):
    fs = list(global_futures)
    global_futures.clear()
    for f in fs:
        f.set_result(str(dt.now()))
    self.write("OK")
Note that I think this is only a problem in Tornado 4.x and older. In Tornado 5 things were changed so that set_result no longer calls into the waiting handler immediately, so there is no more concurrent modification.
I'm trying to wrap my head around asyncio and aiohttp and for the first time in years programming makes me feel utterly stupid and incapable. Which is kind of beautiful, in a weirdo Zen way. But alas, there's work to get done.
I've got an existing class that can do numerous wondrous things on the web, like signing up to a web site, getting data, the works. And now I need like, 100 or 1000 of these little worker bees to sign up. Code looks roughly like this:
import requests

class Worker(object):
    def signup(self, ...):
        ...
        data = self.make_request(url, data)
        self.user_id = data.get("user_id")
        return self

    def make_request(self, url, data):
        response = requests.post(url, data=data)
        return response.json()

workers = [Worker().signup() for n in range(100)]
As you can see, we're using the requests module to make a POST request. However this is blocking, so we'll have to wait for worker N to finish signing up before we start signing up worker N+1. Fortunately, the original author of the Worker class (that sounds charmingly Marxist) in her infinite wisdom wrapped every HTTP call in the self.make_request method, so making the whole Worker non blocking should just be a matter of swapping out the requests library for a non-blocking one aaaaand bob's your uncle, right? This is how far I got:
class AsyncWorker(Worker):
    @asyncio.coroutine
    def make_request(self, url, data):
        response = yield from aiohttp.request('post', url, data=data)
        return (yield from response.json())

coroutines = [AsyncWorker().signup() for n in range(100)]
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.wait(coroutines))
loop.close()
But this raises an AttributeError: 'generator' object has no attribute 'get' in the signup method, where I do self.user_id = data.get("user_id"). And beyond that, I still don't have the workers in a neat dictionary. I'm aware that I'm most likely completely misunderstanding how asyncio works, but I already spent a day reading through various docs, mind-shattering tutorials by David Beazley, and masses of toy examples that are simple enough for me to understand but too simple to apply to this situation. How should I structure my worker and my async loop to sign up 100 workers in parallel and eventually get a list of all workers after they have signed up?
Once you use yield (or yield from) in a function, that function becomes a coroutine. This means you can't get a result by just calling it: you will get a generator object instead. You must at least do this:
@asyncio.coroutine
def some_coroutine(*args):
    # ...
    # ...
    result = yield from tasty.asyncio.function()
    return result

def coroutine_user():
    # data = some_coroutine() would give you a generator object instead of a result
    data = yield from some_coroutine()
    return data  # data here is a plain result: you can call your .get or whatever
Guess what happens when you call coroutine_user():
>>> coroutine_user()
<generator object coroutine_user at 0x7fe13b8a47e0>
Lack of the asyncio.coroutine decorator doesn't help at all: coroutines are contagious! To get a result in a function, you must use yield from, and that turns your function into another coroutine!
Though things aren't always that bad (usually you can manually iterate a generator object without relying on yield from), asyncio will specifically stop you from doing that: it breaks some internals (you can do it only from a Future or an asyncio.coroutine). So just use concurrent.futures or something similar unless you're going to turn all your code into coroutines. As an alternative, isolate all users of aiohttp.request from the usual methods and work with both coroutine-based async workers and synchronous plain old code. Diving into asyncio and actually refactoring all your code is an option too, obviously: you basically need to put yield from before every call to any method infected with asyncio.
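For illustration only (this is not the original author's code; the signup arguments and SIGNUP_URL are placeholders, and it assumes the pre-3.5-syntax aiohttp API used in the question), "putting yield from on every call up the chain" could look roughly like this:

import asyncio
import aiohttp

class AsyncWorker(Worker):
    @asyncio.coroutine
    def make_request(self, url, data):
        response = yield from aiohttp.request('post', url, data=data)
        return (yield from response.json())

    @asyncio.coroutine
    def signup(self, url, payload):
        # signup calls a coroutine, so it must become a coroutine itself
        data = yield from self.make_request(url, payload)
        self.user_id = data.get("user_id")
        return self

loop = asyncio.get_event_loop()
tasks = [AsyncWorker().signup(SIGNUP_URL, {}) for _ in range(100)]
# gather preserves order and returns the signed-up workers as a list
workers = loop.run_until_complete(asyncio.gather(*tasks))
loop.close()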