I just created a script which triggers a report from a specific API and then loads it into my database.
I already have something that works, but I would like to know if there is a more "precise" or efficient approach that doesn't require my script to loop over and over again.
My current script is the following:
import time

retry = 1
trigger_report(report_id)
while report_id.status() != 'Complete':
    time.sleep(retry * 1.3)
    retry += 1
load_report(report_id)
EDIT:
The API doesn't provide any wait-for-completion method; the most it offers is an endpoint that returns the status of the job.
It is a SOAP API.
As you said, it's a SOAP API, so this post doesn't hold much relevance anymore. But I put the work into it, so I'll post it anyway. :)
To answer your question: I don't see any method more efficient than polling (i.e., looping over and over again).
There are multiple ways to do it.
The first way is to implement some sort of callback that is triggered when the task is completed. It will look something like this:
import time

def expensive_operation(callback):
    time.sleep(20)
    callback(6)

expensive_operation(lambda x: print("Done", x))
As you can see, the message "Done 6" will be printed as soon as the operation has completed.
You can rewrite this with Future objects.
from concurrent.futures import Future
import threading
import time

def expensive_operation_impl():
    time.sleep(20)
    return 6

def expensive_operation():
    fut = Future()

    def _op_wrapper():
        try:
            result = expensive_operation_impl()
        except Exception as e:
            fut.set_exception(e)
        else:
            fut.set_result(result)

    thr = threading.Thread(target=_op_wrapper)
    thr.start()
    return fut

future = expensive_operation()
print(future.result())  # Will block until the operation is done.
Since this looks complicated, there are high-level functions that implement the thread scheduling for you.
from concurrent.futures import ThreadPoolExecutor
import time

def expensive_operation():
    time.sleep(20)
    return 6

executor = ThreadPoolExecutor(1)
future = executor.submit(expensive_operation)
print(future.result())
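Applied to your report example, a sketch might look like this (trigger_report, report_id.status(), and load_report come from your question; the polling still happens, just off the main thread, so the script can do other work in the meantime):

from concurrent.futures import ThreadPoolExecutor
import time

def report_job(report_id):
    # Same polling loop as in the question, wrapped in a function.
    trigger_report(report_id)
    retry = 1
    while report_id.status() != 'Complete':
        time.sleep(retry * 1.3)
        retry += 1
    load_report(report_id)

executor = ThreadPoolExecutor(1)
future = executor.submit(report_job, report_id)
# ... do other work here ...
future.result()  # Block only when the loaded report is actually needed.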
Use events rather than polling. There are a lot of options for how to implement events in Python; there was already a discussion about this here on Stack Overflow.
Here is a synthetic example that uses zope.event and an event handler:
import zope.event
import time

def trigger_report(report_id):
    # Do an expensive operation like a SOAP call.
    print('start expensive operation')
    time.sleep(5)
    print('5 seconds later...')
    zope.event.notify('Success')  # triggers the 'replied' function

def replied(event):  # this is the event handler
    # event contains the text 'Success'
    print(event)

def calling_function():
    zope.event.subscribers.append(replied)
    trigger_report('1')
But futures, as in the accepted answer, are also neat. Depends on what floats your boat.
Problem
It's very common for beginners to solve waiting for IO during concurrent processing in a way similar to this:
#!/usr/bin/env python3
"""Loop example."""
from time import sleep

WAITING: bool = True
COUNTER: int = 10

def process() -> None:
    """Non-blocking routine, that needs to be invoked periodically."""
    global COUNTER  # pylint: disable=global-statement
    print(f"Done in {COUNTER}.")
    COUNTER -= 1
    sleep(1)
    # Mimicking incoming IO callback
    if COUNTER <= 0:
        event()

def event() -> None:
    """Incoming IO callback routine."""
    global WAITING  # pylint: disable=global-statement
    WAITING = False

try:
    while WAITING:
        process()
except KeyboardInterrupt:
    print("Canceled.")
Possible applications might be servers that listen for incoming messages while still processing other internal work.
Possible Solution 1
Threading might in some cases be a good solution.
But after some research, it seems that threading adds a lot of overhead for the communication between the threads.
One example of this might be the 'Warning' in the osc4py3 package documentation below the headline 'No thread'.
I have also read somewhere the rule of thumb that 'threading is not suited for slow IO' (sorry, I lost the source of this rule).
Possible Solution 2
Asynchronous processing (with the asyncio package) might be another solution, especially because the aforementioned rule of thumb also says that 'asyncio is efficient for slow IO'.
What I tried
So I tried to rewrite this example with asyncio but failed completely, even after reading about Tasks, Futures, and Awaitables in the Python asyncio documentation.
My problem was the periodic (instead of one-time) call while waiting.
Of course infinite loops are possible, but all the examples I found on the internet still use 'while True' loops, which does not look like an improvement to me.
For example, this snippet:
import asyncio

async def work():
    while True:
        await asyncio.sleep(1)
        print("Task Executed")

loop = asyncio.get_event_loop()
try:
    asyncio.ensure_future(work())
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally:
    print("Closing Loop")
    loop.close()
Source: https://tutorialedge.net/python/concurrency/asyncio-event-loops-tutorial/#the-run_forever-method
What I want
To know the most elegant and efficient way of rewriting the general 'while True' loop from my first example.
If my 'while True' loop is still the best way to solve it (apart from my global variables), that's also fine with me.
I just want to improve my code, if possible.
What you describe is a kind of polling operation and is similar to busy waiting. You should rarely rely on those methods, as they can incur a serious performance penalty if used incorrectly. Instead, you should rely on concurrency primitives provided by the OS or by a concurrency library.
As said in a comment, you could rely on a condition or an event (and more broadly on mutexes) to schedule some code to run after an event occurs. For IO operations you can also rely on low-level OS facilities such as select, poll, and signals/interrupts.
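For illustration, here is a minimal sketch with threading.Event (the names are made up, and the timer simulates the IO layer; a real IO library would invoke the callback for you):

import threading

done = threading.Event()

def on_io_complete():
    # Invoked when the job finishes (here simulated with a timer).
    done.set()

# Simulate an IO operation that completes after one second.
threading.Timer(1.0, on_io_complete).start()

done.wait()  # Sleeps until set() is called: no busy loop, no polling.
print("IO finished")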
Possible applications might be servers that listen for incoming messages while still processing other internal work.
For such use cases you should really use a dedicated library to do that efficiently. For instance, here is an example of a minimal server developed with AsyncIO's low-level socket operations. Internally, AsyncIO probably uses the select system call and exposes a friendly interface with async-await.
Solution with asyncio:
#!/usr/bin/env python3
"""Asynchronous loop example."""
from typing import Callable
from asyncio import Event, get_event_loop

DONE = Event()

def callback():
    """Incoming IO callback routine."""
    DONE.set()

def process():
    """Non-blocking routine, that needs to be invoked periodically."""
    print('Test.')

try:
    loop = get_event_loop()
    run: Callable = lambda loop, processing: (
        processing(),
        loop.call_soon(run, loop, processing)
    )
    loop.call_soon(run, loop, process)
    loop.call_later(1, callback)  # Mimicking an incoming IO callback after 1 sec
    loop.run_until_complete(DONE.wait())
except KeyboardInterrupt:
    print("Canceled.")
finally:
    loop.close()
    print("Bye.")
I am working on a chatbot, where before I reply to the user I make a DB call to save the chat in a table. This happens each time the user types something, and it increases the response time.
So, to decrease the response time, this call needs to be made asynchronously.
How can I do this in Python 3?
I have read tutorials on the asyncio library, but I did not understand it completely and could not figure out how to make it work.
Another workaround is to use a queuing system, but that sounds like overkill.
Example:
request = get_request_from_chat()
res = call_some_function_to_prepare_response()
save_data()  # this should be called asynchronously
reply()  # this should not wait for save_data() to finish
Any suggestions are welcome.
Use loop.create_task(some_async_function()) to run an async function "in the background". For example, this answer shows how to do that in case of a trivial client-server communication.
In your case the pseudo-code would look like this:
request = await get_request_from_chat()
res = call_some_function_to_prepare_response()
loop = asyncio.get_event_loop()
loop.create_task(save_data()) # runs in the "background"
reply() # doesn't wait for save_data() to finish
For this to work, of course, the program must be written for asyncio and save_data must be a coroutine. For a chat server that's a good approach to follow anyway, so I would recommend giving asyncio a chance.
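A self-contained sketch of that idea, mirroring the pseudo-code above (asyncio.sleep stands in for the real DB write, and the final sleep only keeps this demo's loop alive; a real server's loop runs forever anyway):

import asyncio

async def save_data():
    await asyncio.sleep(1)  # stand-in for the real asynchronous DB write
    print("chat saved")

async def handle_message():
    res = "some reply"                                  # prepare the response
    asyncio.get_event_loop().create_task(save_data())   # runs in the background
    print(res)                                          # reply without waiting
    await asyncio.sleep(2)                              # demo only: keep the loop alive

asyncio.get_event_loop().run_until_complete(handle_message())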
Because you mentioned
Another workaround is to use a queuing system, but that sounds like
overkill.
I assume you are open to other solutions, so I will propose a multi-threading approach:
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def long_running_function(param1):
    print(param1)
    sleep(10)
    return "Complete"

with ThreadPoolExecutor(max_workers=10) as executor:
    future = executor.submit(long_running_function, "Param1")
    print(future.result(timeout=12))
Steps:
1) You create a ThreadPoolExecutor and define the maximum number of concurrent tasks.
2) You submit a function with the arguments it needs.
3) You call result() on the return value from submit() when you need the results.
Note that result() can throw an exception if an exception was thrown in the submitted function.
You can also check whether the result of your call is ready with future.done(), which returns True or False.
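For example, a small sketch of both points (the exception is raised deliberately to show the behavior):

from concurrent.futures import ThreadPoolExecutor

def failing_function():
    raise ValueError("something went wrong")

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(failing_function)
    try:
        future.result(timeout=12)  # re-raises the exception from the task
    except ValueError as e:
        print("The task raised:", e)
    print(future.done())  # True once the task has finished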
I am looking for a way to understand the ioloop in Tornado. I have read the official doc several times but can't understand it; specifically, why it exists.
from tornado.concurrent import Future
from tornado.httpclient import AsyncHTTPClient
from tornado.ioloop import IOLoop

def async_fetch_future():
    http_client = AsyncHTTPClient()
    future = Future()
    fetch_future = http_client.fetch("http://mock.kite.com/text")
    fetch_future.add_done_callback(
        lambda f: future.set_result(f.result()))
    return future

response = IOLoop.current().run_sync(async_fetch_future)
# Why get the current IO loop of this thread? Display IO, hard-drive IO, or network IO?
print(response.body)
I know what IO is: input and output, e.g. reading a hard drive, displaying a graph on the screen, getting keyboard input.
By definition, IOLoop.current() returns the current IO loop of this thread.
There are many IO devices on the laptop running this Python code. Which IO does IOLoop.current() refer to? I never heard of an IO loop in JavaScript/Node.js.
Furthermore, why do I care about this low-level thing if I just want to do a database query or read a file?
I never heard of an IO loop in JavaScript/Node.js.
In node.js, the equivalent concept is the event loop. The node event loop is mostly invisible because all programs use it - it's what's running in between your callbacks.
In Python, most programs don't use an event loop, so when you want one, you have to run it yourself. This can be a Tornado IOLoop, a Twisted Reactor, or an asyncio event loop (all of these are specific types of event loops).
Tornado's IOLoop is perhaps confusingly named - it doesn't do any IO directly. Instead, it coordinates all the different IO (mainly network IO) that may be happening in the program. It may help you to think of it as an "event loop" or "callback runner".
Rather than calling it an IOLoop, maybe EventLoop is clearer to understand.
IOLoop.current() doesn't really return an IO device, but a pure Python event loop, which is basically the same as asyncio.get_event_loop() or the underlying event loop in Node.js.
The reason you need an event loop just to do a database query is that you are using an event-driven structure to do the database query (in your example, you are doing an HTTP request).
Most of the time you do not need to care about this low-level structure. Instead, you just need to use the async and await keywords.
Let's say there is a library that supports asynchronous database access:
async def get_user(user_id):
    user = await async_cursor.execute("select * from user where user_id = %s" % user_id)
    return user
Then you just need to use this function in your handler:
class YourHandler(tornado.web.RequestHandler):
    async def get(self):
        user = await get_user(self.get_cookie("user_id"))
        if user is None:
            return self.finish("No such user")
        return self.finish("You are %s" % user.user_name)
In Bash, it is possible to execute a command in the background by appending &. How can I do it in Python?
while True:
    data = raw_input('Enter something: ')
    requests.post(url, data=data)  # Don't wait for it to finish.
    print('Sending POST request...')  # This should appear immediately.
Here's a hacky way to do it:
try:
    requests.get("http://127.0.0.1:8000/test/", timeout=0.0000000001)
except requests.exceptions.ReadTimeout:
    pass
Edit: for those of you who observed that this will not await a response, that is my understanding of the question: "fire and forget... do not wait for it to finish". There are much more thorough and complete ways to do it with threads or async if you need response context, error handling, etc.
I use multiprocessing.dummy.Pool. I create a singleton thread pool at the module level, and then use pool.apply_async(requests.get, [params]) to launch the task.
This command gives me a future, which I can add to a list with other futures indefinitely until I'd like to collect all or some of the results.
multiprocessing.dummy.Pool is, against all logic and reason, a THREAD pool and not a process pool.
Example (works in both Python 2 and 3, as long as requests is installed):
from multiprocessing.dummy import Pool
import requests

pool = Pool(10)  # Creates a pool with ten threads; more threads = more concurrency.
                 # "pool" is a module attribute; you can be sure there will only
                 # be one of them in your application,
                 # as modules are cached after initialization.

if __name__ == '__main__':
    futures = []
    for x in range(10):
        futures.append(pool.apply_async(requests.get, ['http://example.com/']))
    # futures is now a list of 10 futures.
    for future in futures:
        print(future.get())  # For each future, wait until the request is
                             # finished and then print the response object.
The requests will be executed concurrently, so running all ten of these requests should take no longer than the longest one. This strategy will only use one CPU core, but that shouldn't be an issue because almost all of the time will be spent waiting for I/O.
Elegant solution from Andrew Gorcester. In addition, without using futures, it is possible to use the callback and error_callback arguments (see the doc) in order to perform asynchronous processing:

from requests import Response

def on_success(r: Response):
    if r.status_code == 200:
        print(f'Post succeeded: {r}')
    else:
        print(f'Post failed: {r}')

def on_error(ex: Exception):
    print(f'Post request failed: {ex}')

# "pool" is the multiprocessing.dummy.Pool from the answer above.
pool.apply_async(requests.post, args=['http://server.host'],
                 kwds={'json': {'key': 'value'}},
                 callback=on_success, error_callback=on_error)
According to the doc, you should move to another library:
Blocking Or Non-Blocking?
With the default Transport Adapter in place, Requests does not provide
any kind of non-blocking IO. The Response.content property will block
until the entire response has been downloaded. If you require more
granularity, the streaming features of the library (see Streaming
Requests) allow you to retrieve smaller quantities of the response at
a time. However, these calls will still block.
If you are concerned about the use of blocking IO, there are lots of
projects out there that combine Requests with one of Python’s
asynchronicity frameworks.
Two excellent examples are
grequests and
requests-futures.
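For instance, a minimal sketch with requests-futures (assuming the package is installed; the URL is a placeholder):

from requests_futures.sessions import FuturesSession

session = FuturesSession()
future = session.get('http://example.com/')  # returns immediately
# ... do other work here ...
response = future.result()  # block only when the response is actually needed
print(response.status_code)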
Simplest and Most Pythonic Solution using threading
A simple way to send a POST/GET request, or to execute any other function without waiting for it to finish, is to use the built-in Python module threading.
import threading
import requests

def send_req():
    requests.get("http://127.0.0.1:8000/test/")

for x in range(100):
    threading.Thread(target=send_req).start()  # starts a new thread and continues
Other Important Features of threading
You can turn these threads into daemons using thread_obj.daemon = True
You can wait for one to finish executing and then continue using thread_obj.join()
You can check whether a thread is alive using thread_obj.is_alive(), which returns True or False
You can even check the active thread count with threading.active_count() (all of these are shown in the sketch below)
Official Documentation
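A small sketch exercising these features (reusing the send_req function and placeholder URL from above):

import threading
import requests

def send_req():
    requests.get("http://127.0.0.1:8000/test/")

t = threading.Thread(target=send_req)
t.daemon = True                  # a daemon thread won't keep the interpreter alive
t.start()
print(t.is_alive())              # True while the request is still in flight
print(threading.active_count())  # counts the main thread as well
t.join()                         # optionally wait for the thread to finish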
If you can write the code to be executed separately in a separate Python program, here is a possible solution based on subprocessing.
Otherwise you may find this question and its related answer useful: the trick is to use the threading library to start a separate thread that will execute the separated task.
A caveat with both approaches is the number of items (that is to say, the number of threads) you have to manage. If there are too many items in the parent, you may consider halting every batch of items until at least some threads have finished, but I think this kind of management is non-trivial.
For a more sophisticated approach you can use an actor-based approach; I have not used this library myself, but I think it could help in that case.
from multiprocessing.dummy import Pool
import requests

pool = Pool()

def on_success(r):
    print('Post succeeded')

def on_error(ex):
    print('Post request failed')

def call_api(url, data, headers):
    requests.post(url=url, data=data, headers=headers)

def pool_processing_create(url, data, headers):
    pool.apply_async(call_api, args=[url, data, headers],
                     callback=on_success, error_callback=on_error)
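A hypothetical call for illustration (URL, payload, and headers are made up):

pool_processing_create(
    url='http://example.com/api',
    data={'key': 'value'},
    headers={'Content-Type': 'application/x-www-form-urlencoded'},
)
pool.close()
pool.join()  # wait for outstanding requests before the program exits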
I'm trying to accomplish something without using threading.
I'd like to execute a function within a function, but I don't want the first function's flow to stop. It's just a procedure, I don't expect any return value, and I need the outer function to keep executing.
Here is a snippet of what I'd like to do:
def foo():
    a = 5
    dosomething()
    # I don't want to wait until dosomething finishes. Just call it and carry on.
    return a
Is there any way to do this?
Thanks in advance.
You can use concurrent.futures (https://docs.python.org/3/library/concurrent.futures.html) to achieve fire-and-forget behavior.
from concurrent.futures import ThreadPoolExecutor

# A module-level executor: a `with` block would wait for submitted
# tasks on exit, which would defeat fire-and-forget.
executor = ThreadPoolExecutor(max_workers=1)

def foo():
    a = 5
    future = executor.submit(dosomething)
    future.add_done_callback(on_something_done)
    # print(future.result())
    # continue without waiting for dosomething()
    # future.cancel()  # to cancel dosomething (only possible before it starts)
    # future.done()    # returns True if done
    return a

def on_something_done(future):
    print(future.result())
[updates]
concurrent.futures has been built in since Python 3.2;
for Python 2.x you can install the backport, futures 2.1.6, from PyPI.
Python is synchronous by default, so you'll have to use asynchronous processing to accomplish this.
While there are many ways to execute a function asynchronously, one way is to use python-rq. Python-rq allows you to queue jobs for processing in the background with workers. It is backed by Redis, and it is designed to have a low barrier to entry, so it should integrate into your web stack easily.
For example:
from rq import Queue, use_connection

def foo():
    use_connection()
    q = Queue()
    # do some things
    a = 5
    # now process something else asynchronously
    q.enqueue(do_something)
    # do more here
    return a
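Note that for the enqueued job to actually execute, at least one worker process must be running (started with the rq worker command) and connected to the same Redis instance.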