I've been pulling my hair out trying to figure this one out, hoping someone else has already encountered this and knows how to solve it :)
I'm trying to build a very simple Flask endpoint that just needs to call a long-running, blocking PHP script (think while true {...}). I've tried a few different methods to launch the script asynchronously, but the problem is that my browser never actually receives the response back, even though the code that generates the response after launching the script is executed.
I've tried using both multiprocessing and threading; neither seems to work:
# multiprocessing attempt
@app.route('/endpoint')
def endpoint():
    def worker():
        subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp)

    p = multiprocessing.Process(target=worker)
    print '111111'
    p.start()
    print '222222'
    return json.dumps({
        'success': True
    })
# threading attempt
@app.route('/endpoint')
def endpoint():
    def thread_func():
        subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp)

    t = threading.Thread(target=thread_func)
    print '111111'
    t.start()
    print '222222'
    return json.dumps({
        'success': True
    })
In both scenarios I see the 111111 and 222222, yet my browser still hangs on the response from the endpoint. I've tried p.daemon = True for the process, as well as p.terminate(), but no luck. I had hoped that launching a script with nohup in a different shell and a separate process/thread would just work, but somehow Flask or uWSGI is impacted by it.
Update
Since this does work locally on my Mac when I start my Flask app directly with python app.py and hit it directly without going through my Nginx proxy and uWSGI, I'm starting to believe it may not be the code itself that is having issues. And because my Nginx just forwards the request to uWSGI, I believe it may possibly be something there that's causing it.
Here is my ini configuration for the domain for uWSGI, which I'm running in emperor mode:
[uwsgi]
protocol = uwsgi
max-requests = 5000
chmod-socket = 660
master = True
vacuum = True
enable-threads = True
auto-procname = True
procname-prefix = michael-
chdir = /srv/www/mysite.com
module = app
callable = app
socket = /tmp/mysite.com.sock
This is exactly the main use case for Python Celery (https://docs.celeryproject.org/). As a general rule, do not run long-running, CPU-bound jobs in the wsgi process: it's tricky, it's inefficient, and most importantly it's more complicated than setting up an async task in a Celery worker. If you just want to prototype, you can set the broker to memory so you don't need an external server, or run a single-threaded redis on the very same machine.
This way you can launch the task and either call .get() on the result, which blocks (but in an IO-bound fashion), or, even better, return immediately with the task_id and build a second endpoint /result?task_id=<task_id> that checks whether the result is available:
from celery.result import AsyncResult

result = AsyncResult(task_id, app=app)
if result.state == "SUCCESS":
    return result.get()
else:
    return result.state  # or do something else depending on the state
This way you keep a non-blocking wsgi app that does what it is best suited for: short, CPU-light calls that do IO at most, with OS-level scheduling. You can then rely directly on the wsgi server's workers/processes/threads (in uwsgi, gunicorn, etc.) to scale the API for 99% of workloads, while Celery scales horizontally by increasing the number of worker processes.
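For completeness, a minimal sketch of what the launch side could look like (the task body, the redis broker/backend URLs, and the names celery_app/run_script are illustrative assumptions, not taken from the question):

from celery import Celery
from celery.result import AsyncResult
from flask import Flask, jsonify, request
import subprocess

app = Flask(__name__)
# broker/backend URLs are assumptions; any supported broker works
celery_app = Celery('tasks',
                    broker='redis://localhost:6379/0',
                    backend='redis://localhost:6379/0')

@celery_app.task
def run_script():
    # the long-running, blocking work lives in the Celery worker,
    # not in the wsgi process
    return subprocess.call(['php', 'script.php'])

@app.route('/endpoint')
def endpoint():
    task = run_script.delay()  # returns immediately
    return jsonify(task_id=task.id)

@app.route('/result')
def result():
    res = AsyncResult(request.args['task_id'], app=celery_app)
    if res.state == "SUCCESS":
        return jsonify(result=res.get())
    return jsonify(state=res.state)

The worker is started separately (e.g. celery -A yourmodule worker, where yourmodule is whatever module holds the Celery app), so the wsgi process never blocks on the script.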
This approach works for me: it calls the timeout command (sleep 10s) on the command line and lets it run in the background, and it returns the response immediately.
@app.route('/endpoint1')
def endpoint1():
    subprocess.Popen('timeout 10', shell=True)
    return 'success1'
However, I did not test this on a WSGI server, only locally.
Would it be enough to use a background task? Then you only need to import threading, e.g.:
import threading
import ....

def endpoint():
    """My endpoint."""
    try:
        t = BackgroundTasks()
        t.start()
    except RuntimeError as exception:
        return f"An error occurred during endpoint: {exception}", 400

    return "successfully started.", 200

class BackgroundTasks(threading.Thread):
    def run(self, *args, **kwargs):
        # ...do long running stuff
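For the original question, the run() body could simply launch the script; here is a minimal sketch (script.php comes from the question, the rest is illustrative):

import subprocess
import threading

class BackgroundTasks(threading.Thread):
    def run(self, *args, **kwargs):
        # the long-running, blocking work happens here, outside the request cycle
        subprocess.call(['php', 'script.php'])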
I have an application on Flask and uWSGI with an APScheduler job store in SQLite. I start the scheduler along with the application, and add new tasks through add_job when some URL is visited.
I can see that the tasks are saved correctly in the job store, and I can view them through the API, but they do not execute at the appointed time.
A few important details:
uwsgi.ini
processes = 1
enable-threads = true
__init__.py
scheduler = APScheduler()
scheduler.init_app(app)
with app.app_context():
    scheduler.start()
main.py
scheduler.add_job(
    id='{}{}'.format(test.id, g.user.id),
    func=pay_day,
    args=[test.id, g.user.id],
    trigger='interval',
    minutes=test.timer
)
in service.py
def pay_day(tid, uid):
    with scheduler.app.app_context():
        # *some code here*
Interesting behavior: if you create a task by going to the URL and restart the application after that, the task will be executed. But if the application is running and one of the users creates a task by going to the URL, then this task will not be completed until the application is restarted.
I don't get any errors or exceptions, even in the scheduler logs.
I have run out of ideas on how to make it work and what I did wrong. I need a hint.
uWSGI employs some tricks which disable the Global Interpreter Lock and with it, the use of threads which are vital to the operation of APScheduler. To fix this, you need to re-enable the GIL using the --enable-threads switch. See the uWSGI documentation for more details.
I know that you had enable-threads = true in uwsgi.ini, but try enabling it from the command line as well.
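For example, something along these lines (the path to the vassal ini is just illustrative; --enable-threads is the relevant switch):

uwsgi --ini /etc/uwsgi/vassals/mysite.com.ini --enable-threads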
I'd like to be able to trigger a long-running python script via a web request, in bare-bones fashion. Also, I'd like to be able to trigger other copies of the script with different parameters while initial copies are still running.
I've looked at flask, aiohttp, and queueing possibilities. Flask and aiohttp seem to have the least overhead to set up. I plan on executing the existing python script via subprocess.run (however, I did consider refactoring the script into libraries that could be used in the web response function).
With aiohttp, I'm trying something like:
ingestion_service.py:
from aiohttp import web
from pprint import pprint
import time

routes = web.RouteTableDef()

@routes.get("/ingest_pipeline")
async def test_ingest_pipeline(request):
    '''
    Get the job_conf specified from the request and activate the script
    '''
    # subprocess.run the command with lookup of job conf file
    response = web.Response(text=f"Received data ingestion request")
    await response.prepare(request)
    await response.write_eof()
    # eventually this would be a subprocess.run call
    time.sleep(80)
    return response

def init_func(argv):
    app = web.Application()
    app.add_routes(routes)
    return app
However, though the initial request returns immediately, subsequent requests block until the initial one is complete. I'm running the server via:
python -m aiohttp.web -H localhost -P 8080 ingestion_service:init_func
I know that multithreading and concurrency may provide better solutions than asyncio. In this case, I'm not looking for a robust solution, just something that will allow me to run multiple scripts at once via http request, ideally with minimal memory costs.
OK, there were a couple of issues with what I was doing. Namely, time.sleep() is blocking, so asyncio.sleep() should be used. However, since I'm interested in spawning a subprocess, I can use asyncio.subprocess to do that in a non-blocking fashion.
nb:
asyncio: run one function threaded with multiple requests from websocket clients
https://docs.python.org/3/library/asyncio-subprocess.html.
Using these helps, but there's still an issue with the web handler terminating the subprocess. Luckily, there's a solution here:
https://docs.aiohttp.org/en/stable/web_advanced.html
aiojobs has a decorator "atomic" that will protect the process until it is complete. So, code along these lines will function:
from aiojobs.aiohttp import setup, atomic
import asyncio
import os
from aiohttp import web

@atomic
async def ingest_pipeline(request):
    # be careful what you pass through to shell, lest you
    # give away the keys to the kingdom
    shell_command = "[your command here]"
    response_text = f"running {shell_command}"
    response_code = 200
    response = web.Response(text=response_text, status=response_code)
    await response.prepare(request)
    await response.write_eof()

    ingestion_process = await asyncio.create_subprocess_shell(shell_command,
                                                              stdout=asyncio.subprocess.PIPE,
                                                              stderr=asyncio.subprocess.PIPE)
    stdout, stderr = await ingestion_process.communicate()
    return response

def init_func(argv):
    app = web.Application()
    setup(app)
    app.router.add_get('/ingest_pipeline', ingest_pipeline)
    return app
This is very bare bones, but might help others looking for a quick skeleton for a temporary internal solution.
I have designed a REST API which receives inputs through POST requests and then applies some logic to the inputs and returns to the callback uri which is part of the inputs.
This design was working fine for a single input, but now I want to implement multithreading so that I can handle multiple POST requests. I have tried using app.run(threaded=True) but was not successful.
I am running this code on a Linux platform. I am not sure what is wrong in the following code, and I am not very good at using threads in Python, so I would appreciate it if someone could let me know where the issue is:
I am able to get the '200' response once there is a POST request and the inputs are appended to 'inp_params', after which there is no processing in the thread.
from flask import Flask, jsonify, request
import time
import json
import os
import threading
import Queue
import test_func_module as tf

app = Flask(__name__)
inp_params = []

# Create the queue and threader
q = Queue.Queue()

@app.route('/', methods=['GET', 'POST'])
def get_data():
    if request.method == 'GET':
        return 'RESTful API'
    elif request.method == 'POST':
        global inp_params
        inputs = {"fileName": request.json["fileName"], "fileId": request.json["fileId"], "ModuleId": request.json["ModuleId"], "WorkflowId": request.json["WorkflowId"], "Language": request.json["Language"], "callbackuri": request.json["callbackuri"]}
        inp_params.append(inputs)
        return '200'

def test_integrate(worker):
    TF_output = tf.test_func(worker)
    return TF_output

def threader():
    while True:
        # gets a worker from the queue
        worker = q.get()
        # Run the example job with the avail worker in queue (thread)
        test_integrate(worker)
        # completed with the job
        q.task_done()

if __name__ == '__main__':
    for worker in inp_params:
        q.put(worker)

    for x in range(4):  # 4 cores
        t = threading.Thread(target=threader)
        # classifying as a daemon, so they will die when the main dies
        t.daemon = True
        # begins, must come after daemon definition
        t.start()

    # wait until the thread terminates.
    q.join()

    app.run(threaded=True)
@Shilparani Since you mentioned:
I have tried using 'app.run(threaded=True)' but was not successful.
This may not be an exact answer to your question, but I would like to share my experience of achieving concurrency through uwsgi/gunicorn:
Keep it simple by coding Flask for the REST endpoints only, and move the multithreading/multiprocessing logic to gunicorn or uwsgi, where you can configure threads and workers, which help achieve concurrency and parallelism if that's what you are trying to achieve.
gunicorn -b localhost:8080 -w 4 --threads 4 app:app
Based on your needs and operations:
If tasks are CPU-intensive, try to keep #workers equal to #CPU-cores
If tasks are I/O-intensive, it may be safe to try more threads
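The rough uwsgi equivalent of that gunicorn command would be along these lines in the ini file (the numbers are illustrative, not a recommendation):

[uwsgi]
processes = 4
threads = 4
enable-threads = true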
Is there any way to execute some code just before a worker is turned off?
I'm not too confident about the execution model of Flask/Werkzeug; the situation is this:
During the creation of the Flask application I start a daemon thread to do some external stuff (waiting on a queue, essentially); I've set up this thread as a daemon because I don't want it to prevent the shutdown of the worker running the Flask application when needed.
Here is my problem: I need to execute some cleanup code just before the thread is killed by the worker, and my solution is to do those operations on a termination event (if any) of the worker.
With Python you can use the uwsgi.atexit hook. The callback function will be executed before exit.
import uwsgi, os
from flask import Flask

app = Flask('demo')

@app.route('/')
def index():
    return "Hello World"

def callback():
    print "Worker %i exiting" % os.getpid()

uwsgi.atexit = callback
I'm trying to send ~400 HTTP GET requests and collect the results.
I'm running from django.
My solution was to use celery with gevent.
To start the celery tasks I call get_reports:
def get_reports(self, clients, *args, **kw):
    sub_tasks = []
    for client in clients:
        s = self.get_report_task.s(self, client, *args, **kw).set(queue='io_bound')
        sub_tasks.append(s)
    res = celery.group(*sub_tasks)()
    reports = res.get(timeout=30, interval=0.001)
    return reports

@celery.task
def get_report_task(self, client, *args, **kw):
    report = send_http_request(...)
    return report
I use 4 workers:
manage celery worker -P gevent --concurrency=100 -n a0 -Q io_bound
manage celery worker -P gevent --concurrency=100 -n a1 -Q io_bound
manage celery worker -P gevent --concurrency=100 -n a2 -Q io_bound
manage celery worker -P gevent --concurrency=100 -n a3 -Q io_bound
And I use RabbitMQ as the broker.
And although it works much faster than running the requests sequentially (400 requests took ~23 seconds), I noticed that most of that time was overhead from celery itself, i.e. if I changed get_report_task like this:
@celery.task
def get_report_task(self, client, *args, **kw):
    return []
this whole operation took ~19 seconds.
That means I spend 19 seconds just on sending all the tasks to celery and getting the results back.
The queuing rate of messages to RabbitMQ seems to be limited to 28 messages/sec, and I think this is my bottleneck.
I'm running on a win 8 machine if that matters.
some of the things I've tried:
using redis as broker
using redis as results backend
tweaking with those settings
BROKER_POOL_LIMIT = 500
CELERYD_PREFETCH_MULTIPLIER = 0
CELERYD_MAX_TASKS_PER_CHILD = 100
CELERY_ACKS_LATE = False
CELERY_DISABLE_RATE_LIMITS = True
I'm looking for any suggestions that will help speed things up.
Are you really running on Windows 8 without a virtual machine? I did the following simple test on a 2-core MacBook with 8GB RAM running OS X 10.7:
import celery
from time import time

@celery.task
def test_task(i):
    return i

grp = celery.group(test_task.s(i) for i in range(400))

tic1 = time(); res = grp(); tac1 = time()
print 'queued in', tac1 - tic1

tic2 = time(); vals = res.get(); tac2 = time()
print 'executed in', tac2 - tic2
I'm using Redis as broker, Postgres as a result backend and default worker with --concurrency=4. Guess what is the output? Here it is:
queued in 3.5009469986
executed in 2.99818301201
Well, it turns out I had 2 separate issues.
First off, the task was a member method. After extracting it out of the class, the time went down to about 12 seconds. I can only assume it has something to do with the pickling of self.
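In other words, roughly this change (the class name is illustrative; send_http_request is the placeholder from the snippet above):

# before: the task was a member method, so `self` was pickled into every message
class ReportService(object):
    @celery.task
    def get_report_task(self, client, *args, **kw):
        return send_http_request(...)

# after: a plain module-level task; only the arguments travel over the broker
@celery.task
def get_report_task(client, *args, **kw):
    return send_http_request(...)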
The second thing was the fact that it ran on Windows.
After running it on my Linux machine, the run time was less than 2 seconds.
Guess Windows just isn't cut out for high performance...
How about using Twisted instead? You can achieve a much simpler application structure. You can send all 400 requests from the Django process at once and wait for all of them to finish. This works concurrently because Twisted sets the sockets into non-blocking mode and only reads the data when it's available.
I had a similar problem a while ago and developed a nice bridge between Twisted and Django. I've been running it in a production environment for almost a year now. You can find it here: https://github.com/kowalski/featdjango/. In simple words, it has the main application thread running the Twisted reactor loop, and the Django view results are delegated to a thread. It uses a special thread pool, which exposes methods to interact with the reactor and use its asynchronous capabilities.
If you use it, your code would look like this:
from twisted.internet import defer
from twisted.web.client import getPage
import threading

def get_reports(self, urls, *args, **kw):
    ct = threading.current_thread()

    defers = list()
    for url in urls:
        # here the Deferred is created which will fire when
        # the call is complete
        d = ct.call_async(getPage, args=[url] + args, kwargs=kw)
        # here we keep it for reference
        defers.append(d)

    # here we create a Deferred which will fire when all the
    # constituent Deferreds are completed
    deferred_list = defer.DeferredList(defers, consumeErrors=True)
    # here we tell the current thread to wait until we are done
    results = ct.wait_for_defer(deferred_list)

    # the results is a list of the form (C{bool} success flag, result)
    # below unpack it
    reports = list()
    for success, result in results:
        if success:
            reports.append(result)
        else:
            # here handle the failure, or just ignore
            pass

    return reports
There is still a lot you can optimize here. Every call to getPage() would create a separate TCP connection and close it when it's done. That is as optimal as it can be, provided that each of your 400 requests is sent to a different host. If that is not the case, you can use an HTTP connection pool, which uses persistent connections and HTTP pipelining. You instantiate it like this:
from feat.web import httpclient
pool = httpclient.ConnectionPool(host, port, maximum_connections=3)
Then a single request is performed like this (this goes in place of the getPage() call):
d = ct.call_async(pool.request, args=(method, path, headers, body))