I have a Flask app in which I need to perform some asynchronous action. I've read about Celery, but I'm not sure if it's the right fit.
Basically I have a button that takes input and runs a query whose results are returned to the template, and this is quick, but I also want it to run another task (posting a SOAP envelope to a web service), and this is slow. I don't want the user to have to wait for the web-service call to finish: the query that returns new data to the template should happen as quickly as possible, and the web-service call should happen in the background.
Is this doable?
I know there are lots of Celery-related threads here, but this might still be of some use.
Using Celery for asynchronous work requires more than just installing and importing the library.
Requirements:
The Celery library
A queue broker, such as Redis (an in-memory database), installed and running
A separate file that creates the Celery object
I found the Flask documentation on using Celery with Flask lacking. My preferred method was to create a tasks.py file and put in:
from celery import Celery
# Other imports for functionality here

app = Celery('tasks', broker='redis://localhost:6379')

@app.task
def your_function(args):
    # do something with args, then return the result
    something = args  # placeholder for the real work
    return something
Then in the application file make sure this is imported:
from tasks import your_function
And then use it where you need to in the app, calling it with .delay() so it runs on the worker instead of inside the request:
your_function.delay(args)
Then you must make sure that a Celery daemon/worker is running. This can be done by init, by systemd, by launchctl, or manually at the CLI (not ideal). Redis must also be running and listening on the URL you give it.
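To tie this back to the original question, a minimal sketch of the Flask view might look like the following; the route, form field, and task name are hypothetical, and the worker is assumed to have been started with something like celery -A tasks worker --loglevel=info:
from flask import Flask, render_template, request
from tasks import send_soap_envelope  # hypothetical Celery task defined in tasks.py

flask_app = Flask(__name__)

def run_quick_query(term):
    # placeholder for the fast database query
    return [term]

@flask_app.route("/search", methods=["POST"])
def search():
    term = request.form["term"]
    results = run_quick_query(term)    # fast: runs inside the request
    send_soap_envelope.delay(term)     # slow: queued, runs on the Celery worker
    return render_template("results.html", results=results)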
I hope this helps someone else.
Sounds like you need Tornado! It's an asynchronous web server with a WSGI container that is compatible with Flask:
from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from YourModule import app
http_server = HTTPServer(WSGIContainer(app))
http_server.listen(8080)
IOLoop.instance().start()
I prefer Tornado for its speed, reliability, and simplicity with Flask, which I love for its beauty.
Related
I've created a Python main application main.py, which I invoke with uvicorn main.main --reload. That of course runs the following code:
if __name__ == '__main__':
    main()
That part of the application runs constantly, reading data and processing it until the application is aborted manually. I use asyncio to run coroutines.
Task
I would like to build a small html dashboard on it, which can display the data that is constantly computed.
Question
How can I run these background calculations of main.py and still implement a dashboard/website with fastapi and jinja2?
What is the best practice/architecture for structuring the files: the background code and the FastAPI app code? E.g., is there an initial startup function in FastAPI where I could invoke the background computation in a coroutine, or the other way around?
How would you invoke the application according to your recommendation?
What I have achieved so far
I can run the main application without any FastAPI code, and I can run the dashboard without the background tasks. Both work fine independently. But FastAPI does not run when I add its code to the main application with the background computation. (How could it?! I can only invoke either the main application or the FastAPI app.)
Any architectural concepts are appreciated.
Thank you.
FastAPI doesn't run because the Python interpreter can't reach it until your computations complete. You should start your web app independently of the main process; I strongly recommend using docker-compose.
As the FastAPI docs recommend, you should use Dramatiq or Celery for heavy background tasks, or you can just run a separate service among your compose services, for example:
# background.py
if __name__ == '__main__':
    main()

# main.py
app = FastAPI()
docker-compose.yml:
services:
  web-app-interface:
    command: uvicorn main.main ...
  my-daemon:
    command: python background.py
You can make them communicate with a message broker, such as RabbitMQ etc.
And never use multiprocessing with uvicorn; it can cause a process leak, because uvicorn manages its own workers.
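For the communication between the two services, a rough sketch of the producer side using pika is shown below; it assumes a RabbitMQ container reachable as rabbitmq in the compose network, and the queue name and payload are made up:
# background.py (sketch): publish computed results to RabbitMQ
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="results")

def publish_result(result):
    # The FastAPI service (or a consumer running alongside it) reads from this queue.
    channel.basic_publish(exchange="",
                          routing_key="results",
                          body=json.dumps(result))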
A good approach is to use the on_event decorator with startup. The only thing to remember is to use asyncio.create_task to invoke the background task. As long as you don't await it, it will not block, and thus FastAPI/uvicorn can continue to serve any HTTP request.
my_service = MyService()

@app.on_event('startup')
async def service_tasks_startup():
    """Start all the non-blocking service tasks, which run in the background."""
    asyncio.create_task(my_service.start_processing_data())
Also, with this said, any request can consume the data of this background service.
#app.get("/")
def root():
return my_service.value
Think of MyService as any class of your liking: Kafka consumption, computations, etc. Of course, value is just an example attribute of the MyService class.
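A minimal sketch of such a service class, under the assumption that the background work is a simple periodic computation (the loop body is just a placeholder):
import asyncio

class MyService:
    def __init__(self):
        self.value = 0

    async def start_processing_data(self):
        # Long-running loop; awaiting inside it yields control back to the
        # event loop, so FastAPI keeps serving requests in the meantime.
        while True:
            self.value += 1           # stand-in for the real computation
            await asyncio.sleep(1)    # stand-in for waiting on new data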
I wrote a Django website that handles concurrent database requests and subprocess calls perfectly fine if I just run "python manage.py runserver".
This is my model:
class MyModel:
    ...
    def foo(self):
        args = [......]
        pipe = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
In my view:
def call_foo(request):
    my_model = MyModel()
    my_model.foo()
However, after I wrap it in a Tornado server, it's no longer able to handle concurrent requests. When I click something on my website that sends an async GET request to this call_foo() function, my app seems unable to handle other requests. For example, if I open the home page URL, it keeps waiting and won't display until the above subprocess call in foo() has finished.
If I do not use Tornado, everything works fine.
Below is my code to start the tornado server. Is there anything that I did wrong?
MAX_WAIT_SECONDS_BEFORE_SHUTDOWN = 5

def sig_handler(sig, frame):
    logging.warning('Caught signal: %s', sig)
    tornado.ioloop.IOLoop.instance().add_callback(force_shutdown)

def force_shutdown():
    logging.info("Stopping tornado server")
    server.stop()
    logging.info('Will shutdown in %s seconds ...', MAX_WAIT_SECONDS_BEFORE_SHUTDOWN)
    io_loop = tornado.ioloop.IOLoop.instance()
    deadline = time.time() + MAX_WAIT_SECONDS_BEFORE_SHUTDOWN

    def stop_loop():
        now = time.time()
        if now < deadline and (io_loop._callbacks or io_loop._timeouts):
            io_loop.add_timeout(now + 1, stop_loop)
        else:
            io_loop.stop()
            logging.info('Force Shutdown')

    stop_loop()

def main():
    parse_command_line()
    logging.info("starting tornado web server")
    os.environ['DJANGO_SETTINGS_MODULE'] = 'mydjango.settings'
    django.setup()
    wsgi_app = tornado.wsgi.WSGIContainer(django.core.handlers.wsgi.WSGIHandler())
    tornado_app = tornado.web.Application([
        (r'/(favicon\.ico)', tornado.web.StaticFileHandler, {'path': "static"}),
        (r'/static/(.*)', tornado.web.StaticFileHandler, {'path': "static"}),
        ('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),
    ])
    global server
    server = tornado.httpserver.HTTPServer(tornado_app)
    server.listen(options.port)
    signal.signal(signal.SIGTERM, sig_handler)
    signal.signal(signal.SIGINT, sig_handler)
    tornado.ioloop.IOLoop.instance().start()
    logging.info("Exit...")

if __name__ == '__main__':
    main()
There is nothing wrong with your set-up. This is by design.
So, the WSGI protocol (and therefore Django) uses a synchronous model. That means that when your app starts processing a request, it takes control and gives it back only when the request is finished. That's why it can process only a single request at a time. To allow simultaneous requests, one usually launches the WSGI application in multithreaded or multiprocess mode.
The Tornado server, on the other hand, uses an asynchronous model. The idea here is to have its own scheduler instead of the OS scheduler that works with threads and processes. So your code runs some logic, then launches some long task (a DB call, a URL fetch), sets up what to run when the task finishes, and gives control back to the scheduler.
Giving control back to the scheduler is the crucial part: it allows the async server to work fast, because it can start processing a new request while the previous one is waiting for data.
This answer explains sync/async in detail. It focuses on the client side, but I think you can see the idea.
So what's wrong with your code: Popen does not give control to the IOLoop. Python does nothing until your subprocess is finished, so it cannot process other requests, not even other Django requests. runserver "works" here because it's multithreaded: while one thread is completely blocked, other threads can still process requests.
For this reason it's usually not recommended to run WSGI apps under an async server like Tornado. The docs claim it will be less scalable, but you can see the problem in your own code. So if you need both servers (e.g. Tornado for sockets and Django for the main site), I'd suggest running both behind nginx and using uwsgi or gunicorn to run Django. Or take a look at the django-channels app instead of Tornado.
Besides, while it works in a test environment, I guess it's not a recommended way to do what you're trying to achieve. It's hard to suggest a solution, as I don't know what you call with Popen, but it seems to be something long-running. Maybe you should take a look at the Celery project. It's a package for running long-term background jobs.
However, back to running subprocesses. In Tornado you can use tornado.process.Subprocess. It's a wrapper over Popen that allows it to work with the IOLoop. Unfortunately I don't know whether you can use it in the WSGI part under Tornado. There are some projects I remember, like django-futures, but they seem to be abandoned.
As another quick and dirty fix, you can run Tornado with several processes. Check this example on how to fork the server. But I would not recommend using this in production anyway (forking is OK; running a WSGI fallback is not).
So to summarize, I would rewrite your code to do one of the following:
Run the Popen call in some background queue, like Celery
Process such views with native Tornado handlers and use the tornado.process module to run the subprocess (see the sketch below).
And overall, I'd look for another deployment setup, and would not run Django under Tornado.
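A rough sketch of the second option, assuming the view is rewritten as a native Tornado handler (the handler name and command are made up, and the handler would have to be registered in the tornado.web.Application routes above):
import tornado.web
from tornado.process import Subprocess

class CallFooHandler(tornado.web.RequestHandler):
    async def get(self):
        # Subprocess wraps Popen and registers with the IOLoop, so awaiting
        # its exit does not block other requests.
        proc = Subprocess(["sleep", "10"],
                          stdout=Subprocess.STREAM,
                          stderr=Subprocess.STREAM)
        await proc.wait_for_exit(raise_error=False)
        self.write("done\n")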
I have a job added in APScheduler which loads some data in memory, and I delete all the objects after the job is complete. If I run this job with plain Python, it works and the memory drops after the process exits successfully. But with APScheduler the memory usage is not coming down. I am using BackgroundScheduler. Thanks in advance.
I was running quite a few tasks via apscheduler. I suspected this setup led to R14 errors on Heroku, with dyno memory overload, crashes and restarts occurring daily. So I spun up another dyno and scheduled a few jobs to run very frequently.
Watching the metrics tab in Heroku, it immediately became clear that apscheduler was the culprit.
Removing jobs after they're run was recommended to me. But this is of course a bad idea when running cron and interval jobs as they won't run again.
What finally solved it was tweaking the ThreadPoolExecutor (lowering the max number of workers); see this answer on Stack Overflow and this and this post on GitHub. I definitely suggest you read the docs on this.
Other diagnostics resources: 1, 2.
Example code:
import logging
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor
from apscheduler.schedulers.blocking import BlockingScheduler
from tests import overloadcheck
logging.basicConfig()
logging.getLogger('apscheduler').setLevel(logging.DEBUG)
sched = BlockingScheduler(
    executors={
        'threadpool': ThreadPoolExecutor(max_workers=9),
        'processpool': ProcessPoolExecutor(max_workers=3)
    }
)

@sched.scheduled_job('interval', minutes=10, executor='threadpool')
def message_overloadcheck():
    overloadcheck()

sched.start()
Or, if like me you love to run heavy tasks, try the ProcessPoolExecutor as an alternative, or an addition, to the ThreadPoolExecutor; just make sure to call it from specific jobs in that case, as shown below.
Update: you also need to import ProcessPoolExecutor if you wish to use it; I've added this to the code.
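For example, a hypothetical heavy job routed to the process pool defined above might be declared like this (the schedule and function are placeholders):
@sched.scheduled_job('interval', hours=1, executor='processpool')
def heavy_overloadcheck():
    # Runs in a worker process from the ProcessPoolExecutor defined above,
    # keeping the heavy work out of the scheduler's main process.
    overloadcheck()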
Just in case anyone is using Flask-APScheduler and having memory leak issues, it took me a while to realize that it expects any configuration settings to be in your Flask Config, not when you instantiate the scheduler.
So if you (like me) did something like this:
from flask_apscheduler import APScheduler
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.executors.pool import ThreadPoolExecutor
bg_scheduler = BackgroundScheduler(executors={'threadpool': ThreadPoolExecutor(max_workers=1)})
scheduler = APScheduler(scheduler=bg_scheduler)
scheduler.init_app(app)
scheduler.start()
then for whatever reason, when jobs are run in the Flask request context, it will not recognize the executor, 'threadpool', or any other configuration settings you may have set.
However, if you set these same options in the Flask Config class as:
class Config(object):
    #: Enable build completion checks?
    SCHEDULER_API_ENABLED = True
    #: Sets max workers to 1 which reduces memory footprint
    SCHEDULER_EXECUTORS = {"default": {"type": "threadpool", "max_workers": 1}}
    # ... other Flask configuration options
and then do (back in the main script)
scheduler = APScheduler()
scheduler.init_app(app)
scheduler.start()
then the configuration settings actually do get set. I'm guessing that when I called scheduler.init_app in the original script, Flask-APScheduler saw that I hadn't set any of those settings in my Flask Config and so overwrote them with default values, but I'm not 100% sure.
Regardless, hopefully this helps anyone who has tried the top-rated answer but is also using Flask-APScheduler as a wrapper and might still be seeing memory issues.
I need to start a long-running background process with subprocess when someone visits a particular view.
My code:
from flask import Flask
import subprocess
app = Flask(__name__)
#app.route("/")
def index():
subprocess.Popen(["sleep", "10"])
return "hi\n"
if __name__ == "__main__":
app.run(debug=True)
This works great, for the most part.
The problem is that when the process (sleep) ends, ps -Af | grep sleep shows it as [sleep] <defunct>.
From what I've read, this is because I still have a reference to the process in flask.
Is there a way to drop this reference after the process exits?
I tried doing g.subprocess = subprocess.Popen(["sleep", "10"]) and waiting for the process to end in an @app.after_request handler so I can use del on it, but this prevents Flask from returning the response until the subprocess exits - I need it to return the response before the subprocess exits.
Note:
I need the subprocess.Popen operation to be non-blocking - this is important.
As I've suggested in the comments, one of the cleanest and most robust ways of achieving this kind of thing in Python is by using Celery.
Celery requires a broker transport for messaging, for which RabbitMQ is the default, and at least one process running workers. However, the thing that increases readability and maintainability is that the worker code can co-exist in the same file or files as your server app. You invoke the remote procedures as though they were simple function calls.
Celery can handle retries, post-task events, and lots of other things for free, everything with mature code hardened by years of use in production.
This is your example after rewriting it to use Celery:
from flask import Flask
from celery import Celery
import subprocess

app = Flask(__name__)
celery_app = Celery("test")

@celery_app.task
def run_process():
    subprocess.Popen(["sleep", "5"])

@app.route("/")
def index():
    run_process.delay()
    return "hi\n"

if __name__ == "__main__":
    app.run(debug=True, port=8080)
This code works on a system with the RabbitMQ server running with default options (I installed the package and started the service, with no configuration whatsoever; of course in production you would have to tune that, but if everything is on the same server it may not even be needed).
With RabbitMQ in place, you start the worker process with a command line like celery worker -A bla1.celery_app -D (pip install celery in the same virtualenv where you have Flask). Then just launch the Flask server and see it working.
Of course this has even more advantages if you are doing more work in Python itself than just calling an external process. It can have access to your database models, and you can perform asynchronous actions that modify objects there (and eventually trigger responses for the user, such as "flash" messages in the user's session, or e-mails).
I've seen a lot of "poor man's parallel processing" using subprocess.Popen and letting it run freely, but that often leads to zombie problems, as you noted.
You could run your process in a thread (in that case there's no need for Popen; just use call, or check_call if you want an exception raised when the process fails). call or check_call (or run since Python 3.5) waits for the process to complete, so there are no zombies, and since you're running it in a thread you're not blocked.
import threading
import subprocess

def in_background():
    subprocess.call(["sleep", "10"])

@app.route("/")
def index():
    t = threading.Thread(target=in_background)
    t.start()
    return "hi\n"
Note: to wait for thread completion you'd have to use t.join(), and for that you'd have to keep a reference to the t thread object (see the sketch below).
BTW, I suppose your real process isn't sleep, or it wouldn't be very useful, since time.sleep(10) does the same thing (in a thread, of course!).
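A small sketch of that join pattern, outside of any Flask route (the sleep command stands in for the real work):
import subprocess
import threading

def in_background():
    subprocess.call(["sleep", "10"])

t = threading.Thread(target=in_background)
t.start()
# ... do other work here while the subprocess runs ...
t.join()  # only if you really need to block until it finishes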
I needed to send mail from my plain Flask app, so I thought the simplest way would be to send it using smtplib. But I had to do it asynchronously - you can't just insert a 3 second delay into the request, right? So I add the email to a queue (a PostgreSQL table), and send it from another program that reads this table and uses smtplib.
This second program (maildonkey) is running as a separate process, in an independent upstart service.
Now I need another one of those little asynchronous services, and I'm wondering whether I should write another Python script (a third, counting my Flask app and 'maildonkey'), or whether I should use something like Python's 'multiprocess', or even 'threads', and rewrite the second program?
(When I was programming in Clojure, I could easily run code in a separate thread with 'futures', so normally I would do that.)
You should consider using Celery. It is very widely used in web frameworks for asynchronous processing and supports a lot of different backends like AMQP, databases etc.
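A minimal sketch of how the mail case could look with Celery; the broker URL, sender address, and task name are assumptions for illustration, not part of the original setup:
from celery import Celery
import smtplib

celery_app = Celery("mailer", broker="amqp://localhost")

@celery_app.task
def send_mail_async(to_addr, subject, body):
    # Runs on a Celery worker, so the Flask request returns immediately
    # after calling send_mail_async.delay(...).
    message = "Subject: %s\n\n%s" % (subject, body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.sendmail("noreply@example.com", [to_addr], message)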
Try gevent.
You can create a Greenlet object for your long task.
A greenlet is a green thread.
from gevent import monkey
monkey.patch_all()

import gevent
from gevent import Greenlet

class Task(Greenlet):
    def __init__(self, name):
        Greenlet.__init__(self)
        self.name = name

    def _run(self):
        print("Task %s: some task..." % self.name)

t1 = Task("long task")
t1.start()

# here we are waiting for the task
gevent.joinall([t1])
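In a Flask view you would typically spawn the greenlet and not join it, so the response returns immediately; a rough sketch, with a made-up route and a sleep standing in for the long task:
from flask import Flask
import gevent

app = Flask(__name__)

def run_long_task():
    gevent.sleep(10)  # stand-in for the real long-running work

@app.route("/start")
def start_task():
    # spawn() schedules run_long_task on the gevent hub and returns at once,
    # so this view does not wait for the task to finish.
    gevent.spawn(run_long_task)
    return "started\n"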
Also you can use Gevent as a server for Flask:
from gevent.wsgi import WSGIServer
from yourapplication import app
http_server = WSGIServer(('', 5000), app)
http_server.serve_forever()