I want to create a simple scheduler in Tornado where, over the course of the app's lifetime, jobs with a (time, callback) pair are generated dynamically. For example:
Send a push notification 30 minutes before an event,
where this reminder is created only after the server creates the job, which may happen through a POST request.
I wanted to achieve this through PeriodicCallback, but I read that IOLoop.start() must be called after the PeriodicCallback is created. How can I add a PeriodicCallback to an already running IOLoop, or is there another way?
There is no requirement that PeriodicCallbacks be started before the IOLoop. You can start them while the IOLoop is running. You do have to schedule something before calling IOLoop.start(), since that call runs forever, but whatever you schedule on the IOLoop can go on to schedule other things.
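For example, a request handler can schedule work on the already running loop. A minimal sketch (send_push and the fixed 30-minute delay are placeholders; in practice you'd compute the delay from the event's time):

import tornado.ioloop
import tornado.web

def send_push(event_id):
    print("push notification for", event_id)

class EventHandler(tornado.web.RequestHandler):
    def post(self):
        # Schedule a one-shot callback on the running IOLoop; a
        # PeriodicCallback created here could likewise be start()ed.
        loop = tornado.ioloop.IOLoop.current()
        loop.call_later(30 * 60, send_push, "event-1")
        self.write("reminder scheduled")

app = tornado.web.Application([(r"/events", EventHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.current().start()  # runs forever; callbacks are added later

For a single reminder at a known time, call_later (or add_timeout with an absolute deadline) is a better fit than PeriodicCallback, which repeats at a fixed interval.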
I have an API route that calls a function (i.e. doSomethingForALongTime) that takes some time to finish. If we assume that the function doesn't return anything to the client, is there a workaround to just call the function and send status code 200 while the function is doing its job?
@application.route('/api', methods=['GET'])
def api_route():
    doSomethingForALongTime()
    return '', 200  # a bare int is not a valid Flask return value
Yes, with some caveats. The usual way to handle this is to use a 'task queue', such as celery or rq (there's a walk-through of how to use rq in chapter 22 of the flask mega tutorial). These approaches require that, at a minimum, you have Redis running and separate worker processes.
The idea is to hand a task off to the task queue in your handler (route), then return a response to the browser while a worker in a separate process picks the task up from the queue and runs it.
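A minimal sketch of that pattern with rq, assuming Redis is running locally and a worker process has been started with the rq worker command (do_something_for_a_long_time stands in for your slow function):

from flask import Flask
from redis import Redis
from rq import Queue

app = Flask(__name__)
queue = Queue(connection=Redis())  # defaults to redis://localhost:6379

def do_something_for_a_long_time():
    ...  # the slow work; this body executes inside the rq worker process

@app.route('/api', methods=['GET'])
def api_route():
    queue.enqueue(do_something_for_a_long_time)  # hand the task off
    return '', 200  # respond immediately, before the task finishes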
It's also possible to run a 'worker thread' in your app and have the handler queue up work for it. I have a proof-of-concept for that here, with the caveat that I've only used it for personal apps; it's only really suitable for a personal webapp.
I'm trying to find a way to constantly poll a server every x seconds from the ready() function of Django, basically something which looks like this:
from django.apps import AppConfig

class ApiConfig(AppConfig):
    name = 'api'

    def ready(self):
        import threading
        import time
        from django.conf import settings
        from api import utils

        def refresh_ndt_servers_list():
            while True:
                utils.refresh_servers_list()
                time.sleep(settings.WAIT_SECONDS_SERVER_POLL)

        thread1 = threading.Thread(target=refresh_ndt_servers_list)
        thread1.start()
I just want my utils.refresh_servers_list() to be executed when Django starts/is ready, and then re-executed (it populates my DB) every settings.WAIT_SECONDS_SERVER_POLL seconds, indefinitely. The problem is that if I run python manage.py migrate, the ready() function gets called and the process never finishes, since the thread keeps it alive. I would like to avoid calling this function during migrations.
Thanks!
AppConfig.ready() is meant to "... perform initialization tasks ..." and make your app ready to run / serve requests. Actual application logic should run after the Django app is initialized.
For launching a task at regular intervals, a cron job can be used.
Or, set up a periodic Celery task with celery beat.
Also, the task in question appears to perform database updates (it's good for those to be atomic). It may be critical for only a single instance of it to run at a time; one cron job or one celery beat entry takes care of that.
However, the next run may still start while the previous one has not finished, or a run may be launched manually for some reason, so adding locking logic to the task to ensure only one instance runs (or locking the database table for the run) may be desirable.
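A rough sketch of the celery beat approach (the module name, broker URL, and 30-second interval are assumptions; substitute your WAIT_SECONDS_SERVER_POLL value):

from celery import Celery

app = Celery('api_tasks', broker='redis://localhost:6379/0')

# beat sends this task to a worker every 30 seconds; running a single
# beat process gives you a single schedule.
app.conf.beat_schedule = {
    'refresh-servers-list': {
        'task': 'api_tasks.refresh_servers_list',
        'schedule': 30.0,  # seconds
    },
}

@app.task
def refresh_servers_list():
    from api import utils
    utils.refresh_servers_list()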
I'm trying to write a tornado web application that uses sqlalchemy in some request handlers. These handlers have two parts: one that takes a long time to complete, and another that uses sqlalchemy and is relatively fast.
I would like to make the slow part of the request asynchronous, but not the sqlalchemy part. Can I do something like the following code and be safe?
class ExampleHandler(BaseHandler):
    async def post(self):
        loop = asyncio.get_event_loop()
        await loop.run_in_executor(...)  # very slow (no sqlalchemy here)
        with self.db_session() as s:  # sqlalchemy session
            s.add(...)
            s.commit()
        self.render(...)
The idea is to have sqlalchemy still blocking, but have the computationally heavy part not block the application.
The Tornado web server uses asynchronous code to serve many requests from a single thread, sidestepping the need for the threading that Python's Global Interpreter Lock constrains. The GIL, as it is colloquially known, allows only one thread at a time to execute in a Python interpreter process. Tornado answers many requests simultaneously through its event loop, which performs one small task at a time. Let's use your own post handler to understand this better.
In this handler, when the Python interpreter reaches the await keyword, it pauses the execution of the function and queues it to be resumed later on the event loop. It then lets the event loop respond to other events that may have queued up, like accepting a new connection or servicing another handler.
When you block inside an asynchronous function, you freeze the entire event loop, because it cannot pause your function and service anything else. In practice, this means your web server will not accept or service any requests while your async function blocks. It will appear as if your web server is hanging, and indeed it is stuck.
To keep the server responsive, you have to find a way to execute your sqlalchemy query in an asynchronous non-blocking manner.
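One way to do that, sketched on top of the question's own handler (db_session and the elided arguments come from the question; very_slow_part is a placeholder), is to push the blocking SQLAlchemy section into the thread-pool executor as well:

import asyncio

class ExampleHandler(BaseHandler):
    async def post(self):
        loop = asyncio.get_event_loop()
        await loop.run_in_executor(None, self.very_slow_part)  # no sqlalchemy

        def db_part():
            # Blocking SQLAlchemy work, now running in a worker thread
            # instead of on the event loop. The session is created and
            # closed inside this thread.
            with self.db_session() as s:
                s.add(...)
                s.commit()

        await loop.run_in_executor(None, db_part)
        self.render(...)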
I am using Flask-SocketIO to create a real-time notification system. An external API server calls the socketio server in a separate thread via an RPC. The method invoked by the RPC creates a Celery task that, when consumed, calls a method that invokes socketio.emit(). However, the message doesn't seem to actually be sent, as no message is received by the JavaScript client.
My instinct tells me that because the Celery worker runs in a separate process, the socketio.emit() it calls is not the one serving the connected client, even though the code references what appear to be the same objects. The server is running gevent, and Celery is receiving and completing the tasks, as seen in the logs. Further, I have verified that socketio.emit() is being called by the Celery worker, and that when the task is called directly, bypassing Celery, socketio works as expected. Any ideas for how to get socketio to communicate correctly when it is invoked from a Celery task in a separate process?
Did you forget to add the message_queue?
socketio.init_app(app, message_queue='redis://localhost:6379/0')
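For completeness, the worker process then needs its own publisher connected to the same queue. A sketch following Flask-SocketIO's emit-from-an-external-process pattern (the celery instance and event name are assumptions):

from flask_socketio import SocketIO

# With only a message_queue and no app, this instance just publishes;
# the main gevent server picks the events up and relays them to clients.
external_sio = SocketIO(message_queue='redis://localhost:6379/0')

@celery.task
def notify_clients(payload):
    external_sio.emit('notification', payload)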
You can run Celery in multiprocessing or eventlet mode.
By default, Celery uses multiprocessing to set up a new process for each worker. Eventlet uses green threads within a single process (e.g. start the worker with celery -A <your_app> worker -P eventlet), which I believe is what you want in this scenario, since you want shared memory.
You may find this documentation useful.
We're trying to use the new Python 2.7 threading ability in Google App Engine, and it seems like the created thread is getting killed before it finishes running. Our scenario:
- User sends a message to the server
- We update the user's data
- We spawn a thread to do some more heavy-duty processing
- We return a response to the user without waiting for the heavy-duty processing to finish
My assumption was that the thread would continue to run after the request had returned, as long as it did not exceed the total request time limit. What we're seeing, though, is that the thread is randomly killed partway through its execution. No exceptions, no errors, nothing: it just stops running.
Are threads allowed to keep running after the response has been returned? This does not repro on the dev server, only on live servers.
We could, of course, use a task queue instead, but that's a real pain since we'd have to set up a URL for the action and serialize/deserialize the data.
The 'Sandboxing' section of this page: http://code.google.com/appengine/docs/python/python27/using27.html#Sandboxing indicates that threads cannot run past the end of the request.
Deferred tasks are the way to do this. You don't need a URL or serialization to use them:
from google.appengine.ext import deferred
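# defer() pickles the function reference and its arguments and runs
# myfunction(arg1, arg2) later on the default push task queue, so no
# URL handler or manual serialization is needed.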
deferred.defer(myfunction, arg1, arg2)