I'm developing a Blender operator that needs to run an expensive for loop to generate a procedural animation. It looks roughly like this:
class MyOperator(bpy.types.Operator):
    def execute(self, context):
        data = []
        for _ in range(10000):
            data.append(run_something_expensive())
        instantiate_data_into_blender(data)
        return {"FINISHED"}
The problem, of course, is that when I run it from Blender it takes a long time to finish and the UI becomes unresponsive. What is the recommended way to handle this? Is there a way to run this computation in another thread and potentially update the Blender scene as the results are generated? (i.e. running instantiate_data_into_blender every once in a while as data becomes available)
Take a look at the asyncio module.
First: import asyncio and get an event loop.
import asyncio
loop = asyncio.get_event_loop()
Second: create an async method in your operator.
async def massive_work(self, param):
    returnedData = await your.longrun.func(param)
    if returnedData == '':
        return False
    return True
Third: call the async method from your execute method:
allDone = loop.run_until_complete(self.massive_work(dataToWorkOn))
if not allDone:
    self.report({'ERROR'}, "something went wrong!")
    return {'CANCELLED'}
else:
    self.report({'INFO'}, "all done!")
    return {'FINISHED'}
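Note that your.longrun.func has to be awaitable. If the expensive work is a plain blocking function like run_something_expensive from the question, one option is to hand it to an executor so it becomes awaitable. A minimal sketch under that assumption (run_something_expensive is the question's own placeholder):

import asyncio

async def massive_work(loop):
    data = []
    for _ in range(10000):
        # Each blocking call runs in a worker thread; await yields its result.
        result = await loop.run_in_executor(None, run_something_expensive)
        data.append(result)
    return data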
I am trying to write a small GUI that can start an audio recording with one button and end the recording with another.
I have written a recorder class that essentially does the following:
class RecordAudio:
    def __init__(self):
        self.rec = True
    def start_recording(self):
        while self.rec:
            record()
    def end_recording(self):
        self.rec = False
What mechanism can I use so that the recording continues while still allowing me to stop it using end_recording()? Or, more precisely, what is the best practice for this problem?
I have tried making the start_recording function async, but this doesn't work, as start_recording never finishes its computation.
Basically I would like to be able to do something like:
import asyncio

rec = True

async def start_loop():
    global rec
    while rec:
        await asyncio.sleep(1)
        print("Slept another second")
    print("Stopped loop")

def stop_loop():
    global rec
    rec = False
    print("Stopping loop")

async def main():
    loop = asyncio.get_event_loop()
    loop.create_task(start_loop())
    await asyncio.sleep(2)
    stop_loop()
But where start_loop does not just sleep but continuously performs some endless task.
As said by Michael in the comments, I was using the wrong mode. The documentation gives two modes of operation: a blocking mode and a callback mode. In callback mode it is possible to start recording in one function, while another function changes the state so that the recording stops.
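For illustration, here is a minimal sketch of what the callback mode looks like, assuming the audio library in question is sounddevice (the question never names it, so treat the API details as an assumption):

import sounddevice as sd  # assumed library

recorded = []

def callback(indata, frames, time, status):
    # Runs on the audio thread for every block of input.
    recorded.append(indata.copy())

stream = sd.InputStream(samplerate=44100, channels=1, callback=callback)

def start_recording():
    stream.start()  # returns immediately; recording continues in the background

def end_recording():
    stream.stop()   # another function/button can stop it at any time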
I have 4 functions that I wrote in Python, and I want to make the first 3 of them asynchronous. So it should look like this:
x = 3.13

def one(x):
    return x**x

def two(x):
    return x*x*12-314

def three(x):
    return x+x+352+x**x

def final(x):
    one = one(x)
    two = two(x)
    three = three(x)
    return one, two, three
This is what I did:
async def one(x):
    return x**x

async def two(x):
    return x*x*12-314

async def three(x):
    return x+x+352+x**x

def final(x):
    loop = asyncio.get_event_loop()
    one = loop.create_task(one(x))
    two = loop.create_task(two(x))
    three = loop.create_task(three(x))
    #loop.run_until_complete('what should be here')
    #loop.close()
    return one, two, three
But I get this error (if the lines above are uncommented):
RecursionError: maximum recursion depth exceeded
I do not know what is wrong (I'm new to this). I have also tried to add this:
await asyncio.wait([one,two,three])
but to be honest I do not know where or why I should add it.
Without that, my code runs but it does not give me the results; it prints this:
(<Task pending name='Task-1' coro=<one() running at /Users/.../Desktop/....py:63>>, <Task pending name='Task-2' coro=<two() running at /Users/.../Desktop/...py:10>>, <Task pending name='Task-3' coro=<three() running at /Users/.../Desktop/...py:91>>)
Any help?
The major purpose of async syntax and its libraries is to hide the details of event loops. Prefer to use high-level APIs directly:
def final(x):
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(  # run_until_complete fetches the task results
        asyncio.gather(one(x), two(x), three(x))  # gather runs multiple tasks
    )

print(final(x))  # [35.5675357348548, -196.43720000000002, 393.8275357348548]
If you want to explicitly control concurrency, i.e. when functions are launched as tasks, it is simpler to do this inside another async function. In your example, making final an async function simplifies things by giving direct access to await and the ambient loop:
async def final(x):  # async def – we can use await inside
    # create each task in whatever loop this function runs in
    task_one = asyncio.create_task(one(x))
    task_two = asyncio.create_task(two(x))
    task_three = asyncio.create_task(three(x))
    # wait for completion and fetch the result of each task
    return await task_one, await task_two, await task_three

print(asyncio.run(final(x)))  # (35.5675357348548, -196.43720000000002, 393.8275357348548)
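Note that because one, two and three never await anything, the three tasks still execute one after another; the concurrency only pays off once each coroutine actually spends time awaiting something, e.g. some stand-in I/O:

async def one(x):
    await asyncio.sleep(1)  # stand-in for real asynchronous work (assumption)
    return x**x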
I need to start another process that runs in parallel with the while loop:
while True:
    # bunch of stuff happening
    if something_happens:
        # do something (here I have something that takes time, and the while
        # loop will 'pause' until this finishes. I need the while loop to
        # somehow continue looping in parallel with this process.)
        ...
I tried something like this:
while True:
    # bunch of stuff happening
    if something_happens:
        # Here I tried to call another script, but the while loop won't
        # continue. It just runs this script and finishes, but I need this
        # second script to run in parallel with the while loop.
        exec(open("filename.py").read())
You could use multiprocessing for this. Check the doc here
Here's a minimalistic example, hope this helps you.
import multiprocessing

number_of_processes = 5

def exec_process(filename):
    # your exec code goes here
    exec(open(filename).read())

p = multiprocessing.Pool(processes=number_of_processes)
while True:
    # bunch of stuff happening
    if something_happens:  # your condition
        p.apply_async(exec_process, (filename,))
p.close()  # once the loop is done submitting work
p.join()
Additionally, it is also good to use a callback, which acts as a kind of supervisor for your processes and lets you define terminating conditions.
Your definition could look like this:
def exec_process(filename):
    try:
        # do what it does
        return True
    except Exception:
        return False

def callback(result):
    if not result:
        # do what you want to do in case of failure:
        # something like p.terminate(), or
        # indicate failure to global variables
        pass

# Now the apply call becomes:
p.apply_async(exec_process, (filename,), callback=callback)
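A self-contained toy run of the pattern, with a made-up work function just to show the callback firing (names are illustrative, not from the question):

import multiprocessing

def work(n):
    return n * n

def on_done(result):
    print("callback got:", result)

if __name__ == "__main__":
    with multiprocessing.Pool(processes=2) as pool:
        for i in range(3):
            pool.apply_async(work, (i,), callback=on_done)
        pool.close()  # no more submissions
        pool.join()   # wait for workers; all callbacks have fired by now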
You can use asyncio to do that. Here's a fully working example of a basic producer/consumer:
import asyncio
import random
from datetime import datetime

from pydantic import BaseModel

class Measurement(BaseModel):
    data: float
    time: datetime

async def measure(queue: asyncio.Queue):
    for _ in range(2):  # a couple of measurement rounds, so the demo terminates
        # Replicate blocking call to receive data
        await asyncio.sleep(1)
        print("Measurement complete!")
        for i in range(3):
            data = Measurement(
                data=random.random(),
                time=datetime.utcnow()
            )
            await queue.put(data)
    await queue.put(None)  # sentinel: tell the consumer we are done

async def process(queue: asyncio.Queue):
    while True:
        data = await queue.get()
        if data is None:  # producer has finished
            break
        print(f"Got measurement! {data}")
        # Replicate pause for http request
        await asyncio.sleep(0.3)
        print("Sent data to server")

async def main():
    queue = asyncio.Queue()
    measurement = measure(queue)
    processor = process(queue)
    await asyncio.gather(processor, measurement)

asyncio.run(main())
I'm playing with Python's new(ish) asyncio stuff, trying to combine its event loop with traditional threading. I have written a class that runs the event loop in its own thread, to isolate it, and then provides a (synchronous) method that runs a coroutine on that loop and returns the result. (I realise this makes it a somewhat pointless example, because it necessarily serialises everything, but it's just a proof-of-concept.)
import asyncio
import aiohttp
from threading import Thread

class Fetcher(object):
    def __init__(self):
        self._loop = asyncio.new_event_loop()
        # FIXME Do I need this? It works either way...
        #asyncio.set_event_loop(self._loop)
        self._session = aiohttp.ClientSession(loop=self._loop)
        self._thread = Thread(target=self._loop.run_forever)
        self._thread.start()

    def __enter__(self):
        return self

    def __exit__(self, *e):
        self._session.close()
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join()
        self._loop.close()

    def __call__(self, url:str) -> str:
        # FIXME Can I not get a future from some method of the loop?
        future = asyncio.run_coroutine_threadsafe(self._get_response(url), self._loop)
        return future.result()

    async def _get_response(self, url:str) -> str:
        async with self._session.get(url) as response:
            assert response.status == 200
            return await response.text()

if __name__ == "__main__":
    with Fetcher() as fetcher:
        while True:
            x = input("> ")
            if x.lower() == "exit":
                break
            try:
                print(fetcher(x))
            except Exception as e:
                print(f"WTF? {e.__class__.__name__}")
To avoid this sounding too much like a "Code Review" question: what is the purpose of asyncio.set_event_loop, and do I need it in the above? It works fine with and without. Moreover, is there a loop-level method to invoke a coroutine and return a future? It seems a bit odd to do this with a module-level function.
You would need to use set_event_loop if you called get_event_loop anywhere and wanted it to return the loop created when you called new_event_loop.
From the docs:
If there’s need to set this loop as the event loop for the current context, set_event_loop() must be called explicitly.
Since you do not call get_event_loop anywhere in your example, you can omit the call to set_event_loop.
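To make the relationship concrete, a quick sketch:

import asyncio

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)             # register the loop for this thread
assert asyncio.get_event_loop() is loop  # get_event_loop now returns it
loop.close()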
I might be misinterpreting, but I think the comment by @dirn in the marked answer is incorrect in stating that get_event_loop works from a thread. See the following example:
import asyncio
import threading

async def hello():
    print('started hello')
    await asyncio.sleep(5)
    print('finished hello')

def threaded_func():
    el = asyncio.get_event_loop()
    el.run_until_complete(hello())

thread = threading.Thread(target=threaded_func)
thread.start()
This produces the following error:
RuntimeError: There is no current event loop in thread 'Thread-1'.
It can be fixed by:
- el = asyncio.get_event_loop()
+ el = asyncio.new_event_loop()
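With that change, the thread function might look like this (closing the loop afterwards is my addition):

def threaded_func():
    el = asyncio.new_event_loop()
    try:
        el.run_until_complete(hello())
    finally:
        el.close()  # clean up the loop this thread created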
The documentation also specifies that this trick (creating an event loop by calling get_event_loop) only works on the main thread:
If there is no current event loop set in the current OS thread, the OS thread is main, and set_event_loop() has not yet been called, asyncio will create a new event loop and set it as the current one.
Finally, the docs also recommend using get_running_loop instead of get_event_loop if you're on version 3.7 or higher.
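For reference, get_running_loop only works from inside a coroutine or callback, where a loop is guaranteed to be running; a tiny sketch:

import asyncio

async def show_loop():
    print(asyncio.get_running_loop())  # RuntimeError if no loop is running

asyncio.run(show_loop())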
I'm working with asynchronous programming and wrote a small wrapper class for thread-safe execution of coroutines, based on some ideas from this thread: python asyncio, how to create and cancel tasks from another thread. After some debugging, I found that it hangs when calling the Thread class's join() function (I overrode it only for testing). Thinking I had made a mistake, I basically copied the code that the OP said he used and tested it, only to find the same issue.
His mildly altered code:
import threading
import asyncio
from concurrent.futures import Future
import functools

class EventLoopOwner(threading.Thread):
    class __Properties:
        def __init__(self, loop, thread, evt_start):
            self.loop = loop
            self.thread = thread
            self.evt_start = evt_start

    def __init__(self):
        threading.Thread.__init__(self)
        self.__elo = self.__Properties(None, None, threading.Event())

    def run(self):
        self.__elo.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.__elo.loop)
        self.__elo.thread = threading.current_thread()
        self.__elo.loop.call_soon_threadsafe(self.__elo.evt_start.set)
        self.__elo.loop.run_forever()

    def stop(self):
        self.__elo.loop.call_soon_threadsafe(self.__elo.loop.stop)

    def _add_task(self, future, coro):
        task = self.__elo.loop.create_task(coro)
        future.set_result(task)

    def add_task(self, coro):
        self.__elo.evt_start.wait()
        future = Future()
        p = functools.partial(self._add_task, future, coro)
        self.__elo.loop.call_soon_threadsafe(p)
        return future.result()  # block until result is available

    def cancel(self, task):
        self.__elo.loop.call_soon_threadsafe(task.cancel)

async def foo(i):
    return 2 * i

async def main():
    elo = EventLoopOwner()
    elo.start()
    task = elo.add_task(foo(10))
    x = await task
    print(x)
    elo.stop(); print("Stopped")
    elo.join(); print("Joined")  # note: giving it a timeout does not fix it

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    assert isinstance(loop, asyncio.AbstractEventLoop)
    try:
        loop.run_until_complete(main())
    finally:
        loop.close()
About 50% of the time when I run it, it simply stalls and says "Stopped" but not "Joined". I've done some debugging and found that it correlates with the Task itself raising an exception. This doesn't happen every time, but since it occurs when I'm calling threading.Thread.join(), I have to assume it is related to the destruction of the loop. What could possibly be causing this?
The exception is simply: "cannot join current thread" which tells me that the .join() is sometimes being run on the thread from which I called it and sometimes from the ELO thread.
What is happening and how can I fix it?
I'm using Python 3.5.1 for this.
Note: This is not replicated on IDE One: http://ideone.com/0LO2D9