Django `python manage.py runserver` does not support asyncio & aiohttp

In my Django app, I need to proxy a request from the user to other servers, and I use an asyncio/aiohttp client.
#user->request
.....
loop = asyncio.get_event_loop()
future = asyncio.ensure_future(self.run(t1, t2, t3))
loop.run_until_complete(future)
......
# response
When my Django server is started with python manage.py runserver, the following error occurs when a user makes a request:
RuntimeError: There is no current event loop in thread 'Thread-1'.
But when I start with Gunicorn, everything is ok.
Maybe I should use new_event_loop?
Why is there no problem with Gunicorn?

Try the following:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
But using aiohttp in synchronous Django will not increase its speed unless you are sending a lot of requests in a view. If you do, it is better to move that task to a worker (e.g. Celery) or to use aiohttp for the server as well instead of Django.
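For illustration, a minimal sketch of how the snippet from the question could be adapted; run, t1, t2 and t3 stand in for the original coroutine and its arguments, and closing the loop at the end is optional but avoids leaking resources between requests:

import asyncio

async def run(t1, t2, t3):
    # placeholder for the aiohttp client calls from the question
    await asyncio.sleep(0)
    return t1, t2, t3

def proxy_view(t1, t2, t3):
    # runserver handles each request in a worker thread that has no event loop,
    # so create and install one explicitly instead of calling get_event_loop()
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    try:
        return loop.run_until_complete(run(t1, t2, t3))
    finally:
        loop.close()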

Related

APScheduler does not run scheduled tasks: Flask + uWSGI

I have an application on Flask and uWSGI with a jobstore in SQLite. I start the scheduler along with the application, and add new tasks through add_job when a certain URL is visited.
I see that the tasks are saved correctly in the jobstore and I can view them through the API, but they do not execute at the appointed time.
A few important details:
uwsgi.ini
processes = 1
enable-threads = true
__init__.py
scheduler = APScheduler()
scheduler.init_app(app)
with app.app_context():
    scheduler.start()
main.py
scheduler.add_job(
    id='{}{}'.format(test.id, g.user.id),
    func=pay_day,
    args=[test.id, g.user.id],
    trigger='interval',
    minutes=test.timer
)
service.py
def pay_day(tid, uid):
    with scheduler.app.app_context():
        *some code here*
Interesting behavior: if you create a task by going to the URL and restart the application after that, the task will be executed. But if the application is running and one of the users creates a task by going to the URL, then this task will not be completed until the application is restarted.
I don't get any errors or exceptions, even in the scheduler logs.
I already have no idea how to make it work and what I did wrong. I need a hint.
uWSGI employs some tricks which disable the Global Interpreter Lock and with it, the use of threads which are vital to the operation of APScheduler. To fix this, you need to re-enable the GIL using the --enable-threads switch. See the uWSGI documentation for more details.
I know that you had enable-threads = true in uwsgi.ini, but try enabling it from the command line instead.
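For example (a sketch; the ini file name comes from the question, and --enable-threads is the switch discussed above):

uwsgi --ini uwsgi.ini --enable-threads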

Change gunicorn worker timeout inside request

I have a slow request, and I want to change the worker timeout while handling that request, and only for that request.
Basically, I have a flask application:
class Slow(Resource):
    def post(self):
        if slow_condition():
            gunicorn.how.to.extend.processing.time.here()
        do_something_slow()

api = Api(application)
api.add_resource(Slow, "/slow")
and I want to extend the processing time if slow_condition returned True. How can I change the timeout for this single request?
No way.
Flask is just a web framework. The framework doesn't know anything about the production server, its settings, workers, etc. Besides, to change the server configuration you need to reload all the workers (restart gunicorn, uWSGI, waitress, etc.). So you can only increase the timeout parameter globally.
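For instance, a global timeout can be raised in a gunicorn config file (a sketch; 300 seconds is an arbitrary value, and the same thing can be done with gunicorn --timeout 300):

# gunicorn.conf.py
# applies to every request handled by every worker; there is no per-request override
timeout = 300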

Run Celery and aiohttp in the same service

My goal is to send live notifications to the user. The message will arrive from a Celery worker and will be sent to the user using aiohttp through sockjs.
How can I run both in the same app? Or receive the messages on the aiohttp instance, where I have the data of the authenticated users in memory? What is the best approach to achieve that?
I have tried running them together using the aiohttp on_startup hook, but Celery blocks the main thread, so that doesn't work.
async def run_celery(app):
    .... run celery
app = web.Application(loop=asyncio.get_event_loop())
app.on_startup.append(run_celery)
sockjs.add_endpoint(app, msg_handler, name='messeging', prefix='/sockjs/')
thank you very much.
shay
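One possible direction, purely as a sketch (celery_app and the broker URL are placeholders for your own configured Celery instance); the idea is to push the blocking worker onto a thread-pool executor so the aiohttp event loop stays free:

import asyncio

from aiohttp import web
from celery import Celery

celery_app = Celery('notifications', broker='redis://localhost:6379/0')  # assumed broker

async def start_celery(app):
    # Celery's worker_main() blocks, so run it in the default executor
    # instead of awaiting it on the aiohttp event loop
    loop = asyncio.get_running_loop()
    app['celery_worker'] = loop.run_in_executor(
        None, lambda: celery_app.worker_main(argv=['worker', '--loglevel=info'])
    )

app = web.Application()
app.on_startup.append(start_celery)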

HTTP server kick-off background python script without blocking

I'd like to be able to trigger a long-running python script via a web request, in bare-bones fashion. Also, I'd like to be able to trigger other copies of the script with different parameters while initial copies are still running.
I've looked at flask, aiohttp, and queueing possibilities. Flask and aiohttp seem to have the least overhead to set up. I plan on executing the existing python script via subprocess.run (however, I did consider refactoring the script into libraries that could be used in the web response function).
With aiohttp, I'm trying something like:
ingestion_service.py:
import time

from aiohttp import web
from pprint import pprint

routes = web.RouteTableDef()

@routes.get("/ingest_pipeline")
async def test_ingest_pipeline(request):
    '''
    Get the job_conf specified from the request and activate the script
    '''
    # subprocess.run the command with lookup of job conf file
    response = web.Response(text=f"Received data ingestion request")
    await response.prepare(request)
    await response.write_eof()
    # eventually this would be a subprocess.run call
    time.sleep(80)
    return response

def init_func(argv):
    app = web.Application()
    app.add_routes(routes)
    return app
But though the initial request returns immediately, subsequent requests block until the initial request is complete. I'm running a server via:
python -m aiohttp.web -H localhost -P 8080 ingestion_service:init_func
I know that multithreading and concurrency may provide better solutions than asyncio. In this case, I'm not looking for a robust solution, just something that will allow me to run multiple scripts at once via http request, ideally with minimal memory costs.
OK, there were a couple of issues with what I was doing. Namely, time.sleep() is blocking, so asyncio.sleep() should be used. However, since I'm interested in spawning a subprocess, I can use asyncio.subprocess to do that in a non-blocking fashion.
nb:
asyncio: run one function threaded with multiple requests from websocket clients
https://docs.python.org/3/library/asyncio-subprocess.html.
These help, but there's still an issue with the web handler terminating the subprocess. Luckily, there's a solution here:
https://docs.aiohttp.org/en/stable/web_advanced.html
aiojobs has a decorator "atomic" that will protect the process until it is complete. So, code along these lines will function:
from aiojobs.aiohttp import setup, atomic
import asyncio
import os
from aiohttp import web

@atomic
async def ingest_pipeline(request):
    # be careful what you pass through to shell, lest you
    # give away the keys to the kingdom
    shell_command = "[your command here]"
    response_text = f"running {shell_command}"
    response_code = 200
    response = web.Response(text=response_text, status=response_code)
    await response.prepare(request)
    await response.write_eof()
    ingestion_process = await asyncio.create_subprocess_shell(
        shell_command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE)
    stdout, stderr = await ingestion_process.communicate()
    return response

def init_func(argv):
    app = web.Application()
    setup(app)
    app.router.add_get('/ingest_pipeline', ingest_pipeline)
    return app
This is very bare bones, but might help others looking for a quick skeleton for a temporary internal solution.
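Assuming the module is still saved as ingestion_service.py and started with the same command as before, a quick manual check could be:

python -m aiohttp.web -H localhost -P 8080 ingestion_service:init_func
curl http://localhost:8080/ingest_pipeline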

Method executes, server doesn't respond for 7 sec

I have a function named validate_account() that returns a boolean. It goes to the db and does some manipulation that takes 7 seconds. So when I make other requests to the server, it doesn't respond to any of them during these 7 seconds. How can I fix it? Maybe by starting a new process?
@login_required
@csrf_protect
def check_account(request):
    username = request.session['current_account']
    account = get_object_or_404(Account, username=username)
    # takes 7 seconds
    login_status = validate_account(account.username, account.password)
    response = {
        'loginStatus': login_status
    }
    response = json.dumps(response)
    return JsonResponse(response, safe=False)
I am running the server with python manage.py runserver --nothreading --noreload
The --nothreading option disables multithreading, so you will only have one thread responding to requests. Since each thread handles requests synchronously, this causes the exact behaviour you describe.
Simply remove the --nothreading option, and multithreading will allow the server to respond to multiple requests at the same time. In production, you should also use multiple threads and/or processes to run your WSGI server.
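For example, under gunicorn (a sketch; myproject is a placeholder for your project's WSGI module, and passing --threads makes gunicorn use its threaded worker class):

gunicorn myproject.wsgi --workers 4 --threads 2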
