I am trying to respond to incoming web requests simultaneously, where processing a request includes a fairly long IO call. I'm going to use gevent, as it's supposed to be "non-blocking".
The problem I found is that requests are processed sequentially even though I have a lot of gevent threads. For some reason requests get served by a single green thread.
I have nginx (with a default config, which I don't think is relevant here), uwsgi, and a simple WSGI app that emulates an IO-blocking call with gevent.sleep(). Here they are:
uwsgi.ini
[uwsgi]
chdir = /srv/website
home = /srv/website/env
module = wsgi:app
socket = /tmp/uwsgi_mead.sock
#daemonize = /data/work/zx900/mob-effect.mead/logs/uwsgi.log
processes = 1
gevent = 100
gevent-monkey-patch = true
wsgi.py
import gevent
import time
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello():
    t0 = time.time()
    gevent.sleep(10.0)
    t1 = time.time()
    return "{1} - {0} = {2}".format(t0, t1, t1 - t0)
then I open two tabs in my browser (almost) simultaneously, and here is what I get as a result:
1392297388.98 - 1392297378.98 = 10.0021491051
# first tab: started at 1392297378.98, finished at 1392297388.98
1392297398.99 - 1392297388.99 = 10.0081849098
# second tab: started at 1392297388.99, right when the first one finished
As you can see, the first call blocked execution of the second. What did I do wrong?
Send the requests with curl or anything other than a browser: browsers limit the number of simultaneous connections per site (or per address), so the second tab may not even send its request until the first one finishes. Or use two different browsers.
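If you'd rather stay in Python, here is a minimal sketch (assuming the app is reachable at http://localhost/; adjust the URL to wherever your nginx listens) that fires both requests at once from a thread pool, bypassing the browser's per-host connection limit:
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost/"  # assumption: nginx listening on port 80

def fetch(i):
    t0 = time.time()
    body = urlopen(URL).read()
    print(f"request {i} took {time.time() - t0:.2f}s: {body.decode()}")

with ThreadPoolExecutor(max_workers=2) as pool:
    # submit both requests at the same time and wait for the results
    list(pool.map(fetch, range(2)))
If the gevent setup is working, both requests should report ~10 s and finish at about the same time.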
Related
I have a time-consuming task in the route /fibo and a route /home that returns hello.
from flask import Flask, request
from fibo import fib_t
from datetime import datetime
app = Flask(__name__)
@app.route('/home')
def hello():
    return 'hello '
@app.route('/fibo')
def cal_fibo():
    request_rec_t = datetime.now().strftime("%M:%S")
    v, duration = fib_t(37)
    return f'''
    <div><h1>request received by server:</h1>
    <p>{request_rec_t}</p>
    <p>
    <ul>
    <li><b>calculated value</b>: {v}</li>
    <li><b>that takes</b>: {duration} seconds</li>
    </ul>
    </p>
    </div>
    <div>
    <h1>response sent by server:</h1>
    <p>{datetime.now().strftime("%M:%S")}</p>
    </div>'''
if __name__ == '__main__':
    app.run(debug=True)
the fibo module
import time
def fib(a):
    if a == 1 or a == 2:
        return 1
    else:
        return fib(a - 1) + fib(a - 2)
def fib_t(a):
    t = time.time()
    v = fib(a)
    return v, f'{time.time() - t:.2f}'
if I make two requests to /fibo, the second one starts running only after the first one finishes its work.
but if I send a request to the time-consuming task and another one to /home, I receive the /home response immediately while the first request is still unresolved.
if this matters, I make the requests from different tabs of the browser.
for example, if I request /fibo in two different tabs, both pages start to load. The captured times for the first request are 10:10 and 10:20 (computing the Fibonacci number took 10 seconds); while this request is being processed, the second tab is still loading. After the second request finishes, I see that its processing started at 10:20 (the time the first request finished) and ended at 10:29 (computing the Fibonacci number took 9 seconds).
but if I request /home while the /fibo request in the first tab hasn't returned yet, I get the response immediately.
why is this happening?
my general question is: how, and by whom, are the multiple requests handled?
how can I log the time requests reached the server?
EDIT:
if I add another route /fibo2 with the same logic as the cal_fibo view function, I can now run them concurrently (just as /fibo and /home run concurrently, as shown above, /fibo and /fibo2 also run concurrently).
EDIT 2:
as shown in the image, the second request was sent only after the first request's response was received. why and how did this happen?
(the TCP handshakes of the two requests happened close to each other, but how, and by whom, was it managed that the second HTTP GET request was sent only after the first request's response was received?)
actually you have only one process running with only one thread all the time. Enabling threading and running more than one process depends on the context (development or production).
for development purposes
to enable threading, you need to run:
app.run(host="your.host", port=4321, threaded=True)
also, to enable running more than one process, e.g. 3, according to the documentation
you need to run the following line:
app.run(host="your.host", port=4321, processes=3)
for production purposes
when you run the app in a production environment, it is the responsibility of the WSGI server (e.g. gunicorn) that you configure on the cloud provider, like Heroku or AWS, to handle multiple requests.
we use this method in production as it is more robust to crashes and also more efficient.
this was just a summary of the answers to this question.
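As for logging the time requests reached the server: a minimal sketch using Flask's before_request hook. Note that this logs when the worker picks the request up, not when the TCP connection was accepted; for the latter you would need the access log of whatever server sits in front.
from datetime import datetime
from flask import Flask, request

app = Flask(__name__)

@app.before_request
def log_arrival():
    # runs before the view function, as soon as this worker starts
    # handling the request
    print(f'{datetime.now().strftime("%M:%S")} received {request.method} {request.path}')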
Context:
For the Raspberry Pi I am developing some home automation tools.
On one side I have my main application that reads a CSV file consisting of date+time entries, each with a GPIO port number and a duration for which it needs to send a signal to that port.
My main app reads this CSV, creates a small list of its entries, and then basically checks every 60 seconds whether there is any job to do.
So far so good, this works like a charm.
Now, on the other side, I am trying to run a Flask webservice so I can directly interact with this schedule: overwrite it, push to refresh the CSV, and so on.
Later on (future music) I am thinking of making an Android app with a nice GUI that talks to this webservice.
But I keep struggling to start the webservice and then kick off the main app (read the CSV; execute the loop).
some code snippet:
import threading
from flask import Flask, render_template, request
from dwe_homeautomation_app import runMainWorker
app = Flask(__name__)
# Some routing samples
@app.route('/app/breakLoop')
def breakLoop():
    m_worker.breakLoop = True  # set global var to exit the 60 sec loop
    return "break!"
if __name__ == '__main__':
    # TODO: how to run this parallel ?
    t1 = threading.Thread(target=app.run(debug=True, use_reloader=False, port=5000, host='0.0.0.0'))  # Flask webserver
    t2 = threading.Thread(target=runMainWorker())  # The main app that reads the csv and executes the 60 sec loop
    t1.start()
    t2.start()
I was reading some topics through Google and Stack Overflow, but I couldn't really figure out how to get this working in my code; I saw some advice about multithreading, though the info and advice don't seem to be very in sync with each other.
For some reason t1 (the webservice) starts, but t2 doesn't start at all.
I'm relatively new to Python, so I might be missing the obvious here.
Any advice pointing me in the right direction, or pointing out the mistake in my code sample, is much appreciated.
Try that:
from flask import Flask, render_template, request
from threading import Thread
from dwe_homeautomation_app import runMainWorker  # the worker from the question
app = Flask(__name__)
# Some routing samples
@app.route('/app/breakLoop')
def breakLoop():
    m_worker.breakLoop = True  # set global var to exit the 60 sec loop
    return "break!"
def runApp():
    app.run(debug=True, use_reloader=False, port=5000, host='0.0.0.0')

if __name__ == '__main__':
    # start both in parallel
    Thread(target=runApp).start()
    Thread(target=runMainWorker).start()
Check the threading.Thread docs:
https://docs.python.org/3/library/threading.html#thread-objects
You have to pass the target without parentheses, and pass args/kwargs as described in the docs.
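For example, a sketch of how args/kwargs are meant to be passed (app and runMainWorker are the names from the question's own code):
from threading import Thread

# pass the function object itself; its arguments go in args/kwargs
t1 = Thread(target=app.run,
            kwargs={'debug': True, 'use_reloader': False,
                    'port': 5000, 'host': '0.0.0.0'})
t2 = Thread(target=runMainWorker)  # no parentheses: don't call it here
t1.start()
t2.start()
Writing target=runMainWorker() would call the function immediately in the main thread and pass its return value as the target, which is why t2 never appeared to start.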
I am trying to build a REST API with only one call.
Sometimes it takes up to 30 seconds for the program to return a response. But if the user thinks the service is lagging, he makes a new call, and my app returns a response with error code 500 (Internal Server Error).
For now it is enough for me to block any new requests while the last one is not ready. Is there any simple way to do it?
I know that there are a lot of queueing managers like Celery, but I'd prefer not to weigh my app down with any large dependencies.
You could use Flask-Limiter to ignore new requests from that remote address.
pip install Flask-Limiter
Check this quickstart:
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address
app = Flask(__name__)
limiter = Limiter(
    app,
    key_func=get_remote_address,
    default_limits=["200 per day", "50 per hour"]
)
@app.route("/slow")
@limiter.limit("1 per day")
def slow():
    return "24"

@app.route("/fast")
def fast():
    return "42"

@app.route("/ping")
@limiter.exempt
def ping():
    return "PONG"
As you can see, you can throttle requests from a remote IP address for a certain amount of time while you finish the process you're running.
DOCS
Check these two links:
Flask-Limiter Documentation
Flask-Limiter Quick start
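If even Flask-Limiter is more dependency than you want, here is a dependency-free sketch of the same idea (my own suggestion, not from the Flask docs) that rejects overlapping calls with a plain threading.Lock; run_long_job is a hypothetical stand-in for your 30-second task:
import threading
from flask import Flask

app = Flask(__name__)
busy = threading.Lock()

@app.route("/report")
def report():
    # reject the call instead of queueing it if one is already running
    if not busy.acquire(blocking=False):
        return "A previous request is still being processed", 429
    try:
        return run_long_job()  # hypothetical 30-second job
    finally:
        busy.release()
Note that a Lock only guards a single process; if you run several workers you would need shared state (e.g. in Redis) instead.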
I'm trying to send ~400 HTTP GET requests and collect the results.
I'm running from django.
My solution was to use celery with gevent.
To start the celery tasks I call get_reports :
def get_reports(self, clients, *args, **kw):
    sub_tasks = []
    for client in clients:
        s = self.get_report_task.s(self, client, *args, **kw).set(queue='io_bound')
        sub_tasks.append(s)
    res = celery.group(*sub_tasks)()
    reports = res.get(timeout=30, interval=0.001)
    return reports

@celery.task
def get_report_task(self, client, *args, **kw):
    report = send_http_request(...)
    return report
I use 4 workers:
python manage.py celery worker -P gevent --concurrency=100 -n a0 -Q io_bound
python manage.py celery worker -P gevent --concurrency=100 -n a1 -Q io_bound
python manage.py celery worker -P gevent --concurrency=100 -n a2 -Q io_bound
python manage.py celery worker -P gevent --concurrency=100 -n a3 -Q io_bound
And I use RabbitMQ as the broker.
And although it works much faster than running the requests sequentially (400 requests took ~23 seconds), I noticed that most of that time was overhead from celery itself, i.e. if I changed get_report_task like this:
@celery.task
def get_report_task(self, client, *args, **kw):
    return []
this whole operation took ~19 seconds. That means that I spend 19 seconds just sending all the tasks to celery and getting the results back.
The queuing rate of messages to RabbitMQ seems to be capped at about 28 messages/sec, and I think this is my bottleneck.
I'm running on a Windows 8 machine, if that matters.
Some of the things I've tried:
using redis as broker
using redis as results backend
tweaking these settings:
BROKER_POOL_LIMIT = 500
CELERYD_PREFETCH_MULTIPLIER = 0
CELERYD_MAX_TASKS_PER_CHILD = 100
CELERY_ACKS_LATE = False
CELERY_DISABLE_RATE_LIMITS = True
I'm looking for any suggestions that will help speed things up.
Are you really running on Windows 8 without a virtual machine? I did the following simple test on a 2-core MacBook with 8 GB RAM running OS X 10.7:
import celery
from time import time
@celery.task
def test_task(i):
    return i
grp = celery.group(test_task.s(i) for i in range(400))
tic1 = time(); res = grp(); tac1 = time()
print 'queued in', tac1 - tic1
tic2 = time(); vals = res.get(); tac2 = time()
print 'executed in', tac2 - tic2
I'm using Redis as the broker, Postgres as the result backend, and the default worker with --concurrency=4. Guess what the output is? Here it is:
queued in 3.5009469986
executed in 2.99818301201
Well, it turns out I had 2 separate issues.
First off, the task was a member method. After extracting it out of the class, the time went down to about 12 seconds. I can only assume it has something to do with the pickling of self.
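Roughly, the refactor looked like this (a sketch of the idea mirroring the question's snippet, not the exact code; the class name is hypothetical):
# before: a bound method - celery has to pickle `self` into every message
class ReportFetcher(object):
    @celery.task
    def get_report_task(self, client, *args, **kw):
        return send_http_request(...)

# after: a plain module-level function - only the small arguments travel
@celery.task
def get_report_task(client, *args, **kw):
    return send_http_request(...)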
The second thing was the fact that it ran on windows.
After running it on my linux machine, the run time was less than 2 seconds.
Guess Windows just isn't cut out for high performance...
How about using twisted instead? You get a much simpler application structure. You can send all 400 requests from the django process at once and wait for all of them to finish. This works concurrently because twisted sets the sockets to non-blocking mode and only reads the data when it's available.
I had a similar problem a while ago, and I developed a nice bridge between twisted and django. I've been running it in a production environment for almost a year now. You can find it here: https://github.com/kowalski/featdjango/. In simple words, the main application thread runs the twisted reactor loop, and django view results are delegated to a thread. It uses a special threadpool, which exposes methods to interact with the reactor and use its asynchronous capabilities.
If you use it, your code would look like this:
from twisted.internet import defer
from twisted.web.client import getPage
import threading
def get_reports(self, urls, *args, **kw):
    ct = threading.current_thread()
    defers = list()
    for url in urls:
        # here the Deferred is created which will fire when
        # the call is complete
        d = ct.call_async(getPage, args=[url] + list(args), kwargs=kw)
        # here we keep it for reference
        defers.append(d)
    # here we create a Deferred which will fire when all the
    # constituent Deferreds are completed
    deferred_list = defer.DeferredList(defers, consumeErrors=True)
    # here we tell the current thread to wait until we are done
    results = ct.wait_for_defer(deferred_list)
    # the results is a list of the form (C{bool} success flag, result)
    # below we unpack it
    reports = list()
    for success, result in results:
        if success:
            reports.append(result)
        else:
            # here handle the failure, or just ignore it
            pass
    return reports
There is still a lot you can optimize here. Every call to getPage() creates a separate TCP connection and closes it when it's done. This is as optimal as it can be, provided that each of your 400 requests is sent to a different host. If that is not the case, you can use an HTTP connection pool, which uses persistent connections and HTTP pipelining. You instantiate it like this:
from feat.web import httpclient
pool = httpclient.ConnectionPool(host, port, maximum_connections=3)
Then a single request is performed like this (this goes in place of the getPage() call):
d = ct.call_async(pool.request, args=(method, path, headers, body))
I'm running a socketio server with a flask app using gevent. My namespace code is here:
class ConversationNamespace(BaseNamespace):
    def __init__(self, *args, **kwargs):
        request = kwargs.get('request', None)
        if request:
            self.current_app = request['current_app']
            self.current_user = request['current_user']
        super(ConversationNamespace, self).__init__(*args, **kwargs)

    def listener(self):
        r = StrictRedis(host=self.current_app.config['REDIS_HOST'])
        p = r.pubsub()
        p.subscribe(self.current_app.config['REDIS_CHANNEL_CONVERSATION_KEY'] + self.current_user.user_id)
        conversation_keys = r.lrange(self.current_app.config['REDIS_CONVERSATION_LIST_KEY'] +
                                     self.current_user.user_id, 0, -1)
        # Reverse conversations so the newest is up top.
        conversation_keys.reverse()
        # Emit conversation history.
        pipe = r.pipeline()
        for key in conversation_keys:
            pipe.hgetall(self.current_app.config['REDIS_CONVERSATION_KEY'] + key)
        self.emit(self.current_app.config['SOCKETIO_CHANNEL_CONVERSATION'] + self.current_user.user_id, pipe.execute())
        # Listen for new conversations.
        for m in p.listen():
            conversation = r.hgetall(self.current_app.config['REDIS_CONVERSATION_KEY'] + str(m['data']))
            self.emit(self.current_app.config['SOCKETIO_CHANNEL_CONVERSATION'] +
                      self.current_user.user_id, conversation)

    def on_subscribe(self):
        self.spawn(self.listener)
What I'm noticing in my app is that when I first start the SocketIO server (code below), clients are able to connect via a websocket in Firefox and Chrome.
#!vendor/venv/bin/python
from gevent import monkey
monkey.patch_all()

from yellowtomato import app_instance
import werkzeug.serving
from socketio.server import SocketIOServer

app = app_instance('sockets')

@werkzeug.serving.run_with_reloader
def runServer():
    SocketIOServer(('0.0.0.0', app.config['SOCKET_PORT']), app, resource='socket.io').serve_forever()

runServer()
After some time (maybe an hour or so), when I try to connect to that namespace via the browser client, it no longer communicates over a websocket but rather via xhr-polling. Moreover, it takes about 20 seconds before the first response comes from the server. It gives the end user the perception that things have become very slow (but only when rendering the page on the first subscribe; the xhr polling happens frequently and events get pushed to clients in a timely fashion).
What is triggering this latency, and how can I ensure that clients connect quickly using websockets?
Figured it out - I was running via the command line in an SSH session. Ending the session killed the parent process, which caused gevent to stop working properly.
Forking the SocketIOServer process in a screen session fixed the problem