Server doesn't respond for 7 seconds while a method executes - Python

I have a function named validate_account() that returns a boolean. It goes to the DB and does some manipulation that takes 7 seconds. So when I make other requests to the server during these 7 seconds, it doesn't respond to any of them. How can I fix this? Maybe by starting a new process?
from django.contrib.auth.decorators import login_required
from django.http import JsonResponse
from django.shortcuts import get_object_or_404
from django.views.decorators.csrf import csrf_protect

@login_required
@csrf_protect
def check_account(request):
    username = request.session['current_account']
    account = get_object_or_404(Account, username=username)
    # takes 7 seconds
    login_status = validate_account(account.username, account.password)
    response = {
        'loginStatus': login_status
    }
    # JsonResponse serializes the dict itself; json.dumps would double-encode it
    return JsonResponse(response)
I am running the server with python manage.py runserver --nothreading --noreload

The --nothreading option disables multithreading, so you have only one thread responding to requests. Since each thread handles requests synchronously, this causes exactly the behaviour you describe.
Simply remove the --nothreading option, and multithreading will allow the development server to respond to multiple requests at the same time. In production, you should likewise use multiple threads and/or processes to run your WSGI server.
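For example, gunicorn can run several worker processes, each with multiple threads (mysite is a placeholder project name here; the counts are illustrative):

    gunicorn mysite.wsgi:application --workers 4 --threads 2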

Related

How does concurrency work in Flask, and who handles it?

I have a time-consuming task in the route /fibo and a route /home that returns 'hello'.
from flask import Flask
from fibo import fib_t
from datetime import datetime

app = Flask(__name__)

@app.route('/home')
def hello():
    return 'hello '

@app.route('/fibo')
def cal_fibo():
    request_rec_t = datetime.now().strftime("%M:%S")
    v, duration = fib_t(37)
    return f'''
    <div><h1>request received by server:</h1>
    <p>{request_rec_t}</p>
    <p>
    <ul>
    <li><b>calculated value</b>: {v}</li>
    <li><b>that takes</b>: {duration} seconds</li>
    </ul>
    </p>
    </div>
    <div>
    <h1>response sent by server:</h1>
    <p>{datetime.now().strftime("%M:%S")}</p>
    </div>'''

if __name__ == '__main__':
    app.run(debug=True)
The fibo module:
import time

def fib(a):
    if a == 1 or a == 2:
        return 1
    else:
        return fib(a-1) + fib(a-2)

def fib_t(a):
    t = time.time()
    v = fib(a)
    return v, f'{time.time()-t:.2f}'
If I make two requests to /fibo, the second one starts running only after the first one finishes its work.
But if I send a request to the time-consuming task and another one to /home, I receive the /home response immediately, while the first request is still not resolved.
If this matters, I make the requests from different tabs of the browser.
For example, if I request /fibo in two different tabs, both pages start to load. The captured times for the first request are 10:10 and 10:20 (its Fibonacci computation took 10 seconds). While this request is being processed, the second tab is still loading. After the first request finishes, I see that the second request started processing at 10:20 (the time the first request finished) and finished at 10:29 (its Fibonacci computation took 9 seconds).
But if I request /home while /fibo in the first tab hasn't returned yet, I get the response immediately.
Why is this happening?
My general question is: how, and by whom, are multiple requests handled?
And how can I log the time at which a request reached the server?
EDIT:
If I add another route /fibo2 with the same logic as the cal_fibo view function, I can now run them concurrently (just like /fibo and /home, which I showed running concurrently, /fibo and /fibo2 also run concurrently).
EDIT 2:
As shown in the image, the second request was sent only after the first request's response was received. Why and how did this happen?
(The TCP handshakes of the two requests happened close to each other, but how and by whom was it arranged that the second HTTP GET request was sent only after the first request's response was received?)
Actually, you have only one process running with only one thread, all the time. Enabling threading and running more than one process depends on the context (development or production).
For development purposes:
To enable threading you need to run
    app.run(host="your.host", port=4321, threaded=True)
To enable running more than one process, e.g. 3, according to the documentation you need to run the following line:
    app.run(host="your.host", port=4321, processes=3)
For production purposes:
When you run the app in a production environment, it is the responsibility of the WSGI gateway (e.g. gunicorn) that you configure on a cloud provider like Heroku or AWS to handle multiple requests.
We use this method in production, as it is more robust to crashes and also more efficient.
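For instance, a typical gunicorn invocation with multiple worker processes might look like this (app:app assumes both the module and the Flask instance are named app; the worker count is illustrative):

    gunicorn --workers 3 app:app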
This was just a summary of the answers to this question.

Django: return step/progress messages via REST

For learning purposes I want to implement the following:
I have a script that runs Selenium, for example, in the background, and I have some log messages that help me see what is going on in the terminal.
But I want to get the same messages in my REST responses to the Angular app.
print('Started')
print('Logged in')
...
print('Processing')
...
print('Success')
In my view.py file:
from rest_framework import viewsets
from rest_framework.decorators import action
from rest_framework.response import Response

class RunTask(viewsets.ViewSet):
    queryset = Task.objects.all()

    @action(detail=False, methods=['GET'], name='Run Test Script')
    def run(self, request, *args, **kwargs):
        result = task()
        if result['success']:
            return Response(data=result)
        else:
            return Response(data=result['message'])

def task():
    print('Starting')
    print('Logged in')
    ...
    print('Processing')
    ...
    print('Success')
    return {
        'success': True,  # or False
        'message': 'my status message'
    }
Right now it shows me only the final result of the task. But I want to receive the same intermediate messages, to indicate the process status in the frontend, and I can't understand how to organize that.
How can I tell Angular about my process status?
Unfortunately, it's not that simple. The REST API lets you start the task, but since the task runs in the same thread, the HTTP request blocks until the task is finished before sending the response. Your print statements won't appear in the HTTP response but in your server output (if you look at the shell where you ran python manage.py runserver, you'll see those print statements).
Now, if you wish to have that output in real time, you'll have to look at WebSockets. They allow you to open a "tunnel" between the browser and the server and send/receive messages in real time. The django-channels library allows you to implement them.
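As a very rough sketch of that idea (the consumer name and the messages are made up for illustration, and the channels routing configuration is omitted), a django-channels consumer could push status messages like this:

    # consumers.py -- hypothetical minimal django-channels consumer
    from channels.generic.websocket import WebsocketConsumer

    class StatusConsumer(WebsocketConsumer):
        def connect(self):
            self.accept()
            # push status messages to the browser as the work progresses
            self.send(text_data='Started')
            self.send(text_data='Logged in')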
However, for long-running background tasks (like a Selenium scraper), I would advise looking into the Celery task queue. Basically, your Django process schedules tasks into the queue, and the tasks in the queue are then executed by one (or more!) "worker" processes. The advantage of this is that your Django process won't be blocked by the long task: it just adds some work to the queue and then responds.
When you add a task to the queue, Celery gives you a unique identifier for it, which you can return in the HTTP response. You can then very well implement another endpoint which takes a task id as parameter and returns the state of the task (is it pending? done? failed?).
For this to work, you'll have to set up a "broker", a kind of database that stores the tasks to do and their results (typically RabbitMQ or Redis). The Celery documentation explains this well: https://docs.celeryproject.org/en/latest/getting-started/brokers/index.html
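A minimal sketch of that pattern (the task body, names, and endpoint layout here are illustrative assumptions, not the asker's actual code, and a configured Celery app with a broker is presumed):

    # tasks.py -- hypothetical Celery task
    from celery import shared_task

    @shared_task
    def run_selenium_task():
        # ... the long-running Selenium work goes here ...
        return {'success': True, 'message': 'my status message'}

    # views.py -- start the task, then poll its state by id
    from rest_framework import viewsets
    from rest_framework.decorators import action
    from rest_framework.response import Response

    class RunTask(viewsets.ViewSet):
        @action(detail=False, methods=['GET'], name='Run Test Script')
        def run(self, request, *args, **kwargs):
            result = run_selenium_task.delay()  # returns immediately
            return Response(data={'task_id': result.id})

        @action(detail=False, methods=['GET'], name='Task Status')
        def status(self, request, *args, **kwargs):
            result = run_selenium_task.AsyncResult(request.query_params['task_id'])
            return Response(data={'state': result.state})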
Whichever way you choose, it's not a trivial thing and will need quite some work before you have results; but it's interesting to see how it expands the possibilities of a classical HTTP server.

Long running script from flask endpoint

I've been pulling my hair out trying to figure this one out, hoping someone else has already encountered this and knows how to solve it :)
I'm trying to build a very simple Flask endpoint that just needs to call a long-running, blocking PHP script (think while (true) {...}). I've tried a few different methods to launch the script asynchronously, but the problem is that my browser never actually receives the response, even though the code that generates the response after starting the script is executed.
I've tried using both multiprocessing and threading; neither seems to work:
import json
import multiprocessing
import os
import subprocess
import threading

# multiprocessing attempt
@app.route('/endpoint')
def endpoint():
    def worker():
        subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp)
    p = multiprocessing.Process(target=worker)
    print('111111')
    p.start()
    print('222222')
    return json.dumps({
        'success': True
    })

# threading attempt
@app.route('/endpoint')
def endpoint():
    def thread_func():
        subprocess.Popen('nohup php script.php &', shell=True, preexec_fn=os.setpgrp)
    t = threading.Thread(target=thread_func)
    print('111111')
    t.start()
    print('222222')
    return json.dumps({
        'success': True
    })
In both scenarios I see the 111111 and 222222 printed, yet my browser still hangs waiting for the response from the endpoint. I've tried p.daemon = True for the process, as well as p.terminate(), but no luck. I had hoped that launching a script with nohup in a different shell and a separate process/thread would just work, but somehow Flask or uWSGI is impacted by it.
Update
Since this does work locally on my Mac when I start my Flask app directly with python app.py and hit it without going through my Nginx proxy and uWSGI, I'm starting to believe it may not be the code itself that has issues. And because my Nginx just forwards the request to uWSGI, I believe it may be something there that's causing it.
Here is my ini configuration for the domain for uWSGI, which I'm running in emperor mode:
[uwsgi]
protocol = uwsgi
max-requests = 5000
chmod-socket = 660
master = True
vacuum = True
enable-threads = True
auto-procname = True
procname-prefix = michael-
chdir = /srv/www/mysite.com
module = app
callable = app
socket = /tmp/mysite.com.sock
This kind of thing is the actual, and probably main, use case for Python Celery (https://docs.celeryproject.org/). As a general rule, do not run long-running, CPU-bound jobs in the WSGI process. It's tricky, it's inefficient, and most importantly, it's more complicated than setting up an async task in a Celery worker. If you just want to prototype, you can set the broker to memory instead of using an external server, or run a single-threaded Redis on the very same machine.
This way you can launch the task and call result.get(), which blocks, but blocks in an IO-bound fashion; or, even better, you can just return immediately by retrieving the task_id and build a second endpoint /result?task_id=<task_id> that checks whether a result is available:
from celery.result import AsyncResult

# 'app' here is the Celery application instance
result = AsyncResult(task_id, app=app)
if result.state == "SUCCESS":
    return result.get()
else:
    return result.state  # or do something else depending on the state
This way you have a non-blocking WSGI app that does what it is best suited for: short, CPU-unbound calls that involve at most IO, with OS-level scheduling. You can then rely directly on the WSGI server's workers/processes/threads, or whatever you need to scale the API in whatever WSGI server you use (uWSGI, gunicorn, etc.), for 99% of workloads, as Celery scales horizontally by increasing the number of worker processes.
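To make the shape of this concrete, here is a minimal sketch (the module layout, the Redis broker URL, and the long_task body are assumptions for illustration, not the asker's setup):

    # tasks.py -- hypothetical Celery app and task
    import time
    from celery import Celery

    celery_app = Celery('tasks', broker='redis://localhost:6379/0',
                        backend='redis://localhost:6379/0')

    @celery_app.task
    def long_task():
        time.sleep(10)  # stand-in for the long-running work
        return {'success': True}

    # app.py -- enqueue the task, return immediately, poll for the result
    import json
    from celery.result import AsyncResult
    from flask import Flask, request
    from tasks import celery_app, long_task

    app = Flask(__name__)

    @app.route('/endpoint')
    def endpoint():
        task = long_task.delay()  # non-blocking
        return json.dumps({'task_id': task.id})

    @app.route('/result')
    def result():
        res = AsyncResult(request.args['task_id'], app=celery_app)
        if res.state == "SUCCESS":
            return json.dumps(res.get())
        return json.dumps({'state': res.state})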
This approach works for me: it calls the timeout command (a 10-second sleep) on the command line and lets it work in the background. The response is returned immediately.
import subprocess

@app.route('/endpoint1')
def endpoint1():
    subprocess.Popen('timeout 10', shell=True)
    return 'success1'
However, I have not tested this on a WSGI server, just locally.
Would it be enough to use a background task? Then you only need to import threading, e.g.:

import threading
# ... other imports ...

def endpoint():
    """My endpoint."""
    try:
        t = BackgroundTasks()
        t.start()
    except RuntimeError as exception:
        return f"An error occurred during endpoint: {exception}", 400
    return "successfully started.", 200

class BackgroundTasks(threading.Thread):
    def run(self, *args, **kwargs):
        ...  # do long running stuff

Django `python manage.py runserver` does not support asyncio & aiohttp

In my Django app, I need to proxy a request from the user to other servers, and I use the asyncio/aiohttp client.
# user -> request
.....
loop = asyncio.get_event_loop()
future = asyncio.ensure_future(self.run(t1, t2, t3))
loop.run_until_complete(future)
......
# response
When my Django server is started with python manage.py runserver, the following error occurs when a user makes a request:
RuntimeError: There is no current event loop in thread 'Thread-1'.
But when I start it with Gunicorn, everything is OK.
Maybe I should use new_event_loop?
Why is there no problem with Gunicorn?
Try the following:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
But using aiohttp in synchronous Django will not increase its speed unless you are sending a lot of requests in a view. If you are, it is better to move that work to some worker (e.g. Celery), or to use aiohttp for the server too, instead of Django.
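A sketch of where those two lines would sit in a view (the view and coroutine names here are placeholders, not the asker's code):

    import asyncio

    from django.http import JsonResponse

    async def fetch_from_upstreams():
        # placeholder for the asker's aiohttp client calls
        await asyncio.sleep(0)
        return 'done'

    def proxy_view(request):
        # runserver handles requests in worker threads, which have no
        # event loop by default -- hence the RuntimeError
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            result = loop.run_until_complete(fetch_from_upstreams())
        finally:
            loop.close()
        return JsonResponse({'result': result})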

bottle.py stalls when client disconnects

I have a python server written with bottle. When I access the server from a website using Ajax, and then close the website before the server can send its response, the server gets stuck trying to send the response to a destination that no longer exists. When this happens, the server becomes unresponsive to any requests for about 10 seconds, before resuming normal operations.
How can I prevent this? I would like bottle to immediately stop trying if the website that made the request no longer exists.
I start the server like this:
bottle.run(host='localhost', port=port_to_listen_to, quiet=True)
and the only url exposed by the server is this:
import json

import bottle

@bottle.route('/', method='POST')
def main_server_input():
    request_data = bottle.request.forms['request_data']
    request_data = json.loads(request_data)
    try:
        response_data = process_message_from_scenario(request_data)
    except Exception:
        error_message = utilities.get_error_message_details()
        error_message = "Exception during processing of command:\n%s" % (error_message,)
        print(error_message)
        response_data = {
            'success': False,
            'error_message': error_message,
        }
    return json.dumps(response_data)
Is process_message_from_scenario a long-running function? (Say, 10 seconds?)
If so, your one-and-only server thread will be tied up in that function, and no subsequent requests will be serviced during that time. Have you tried running a concurrent server, like gevent? Try this:
bottle.run(host='localhost', port=port_to_listen_to, quiet=True, server='gevent')
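One caveat worth noting (a standard gevent requirement, not something from the question): gevent usually needs the standard library monkey-patched before anything that touches sockets is imported, otherwise blocking calls will still stall the process:

    # must run before bottle or anything that touches sockets is imported
    from gevent import monkey
    monkey.patch_all()

    import bottle
    # ... routes as above ...
    bottle.run(host='localhost', port=port_to_listen_to, quiet=True, server='gevent')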
