Python Flask thread is always running, can't stop

I use Flask as an HTTP server, and for multi-threaded support I set threaded=True.
The call stack shows that every thread keeps running and is never cancelled when its function finishes, and memory keeps growing.

I solved it by switching to the Tornado framework.
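For reference, a minimal sketch of what that switch typically looks like, assuming a Flask application object named app in a module called yourapp (both names are placeholders): the WSGI app is served through Tornado's IOLoop instead of Flask's built-in threaded server.

    # Hedged sketch: serve the existing Flask (WSGI) app with Tornado.
    # "yourapp" and "app" are assumed names, not from the original post.
    from tornado.wsgi import WSGIContainer
    from tornado.httpserver import HTTPServer
    from tornado.ioloop import IOLoop

    from yourapp import app  # the Flask application object

    http_server = HTTPServer(WSGIContainer(app))
    http_server.listen(5000)
    IOLoop.current().start()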

Related

uWSGI mules vs. native Python threads

I have read the documentation of uWSGI mules, as well as some information from other sources, but I'm confused about the differences between mules and Python threads. Can anyone explain what mules can do that threads cannot, and why mules exist at all?
A uWSGI mule can be thought of as a separate worker process that is not accessible via sockets (e.g. direct web requests). It executes an instance of your application and can be used for offloading tasks, for example via the mulefunc Python decorator. Also, as mentioned in the documentation, a mule can be configured to execute custom logic.
A thread, on the other hand, runs in its parent's (the uWSGI worker's) address space, so if the worker dies or is reloaded, the thread goes with it. A thread can handle requests and can also execute specified tasks (functions) via the thread decorator.
Python threads do not span multiple CPUs; roughly speaking, they cannot use all of the CPU power. This is a limitation of the Python GIL (see What is the global interpreter lock (GIL) in CPython?).
This is one of the reasons for using web servers: their job is to spawn a worker process, or reuse an idle one, for each task (HTTP request) received.
A mule works on the same principle, but is particular in that it is intended to run tasks outside of an HTTP request context. The idea is that you can reserve some mules, each running in a separate process (spanning multiple CPUs) just as regular workers do, but they do not serve HTTP requests, only the tasks you set up as described in the uWSGI documentation.
It is worth mentioning that mules are also monitored by the web server's master process, so they are respawned when they are killed or die.
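As an illustration of the offloading described above, here is a minimal sketch using uWSGI's mulefunc decorator. The function name and its argument are placeholders, and the app must be started under uWSGI with at least one mule configured (e.g. --mule).

    # Hedged sketch: offload work from a request handler to a uWSGI mule.
    # Only works when running under uWSGI with at least one mule configured.
    from uwsgidecorators import mulefunc

    @mulefunc
    def generate_report(user_id):
        # Executed inside a mule process, not in the worker serving the request.
        print("generating report for user", user_id)

    # Called from a request handler, this returns immediately; a mule does the work.
    generate_report(42)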

Is the Python subprocess blocking the I/O?

I am using CherryPy as a web server. After a request to my web server, it may need to run a very long process. I don't want the web server to stay busy handling that process, so I moved the execution into a separate script and use a subprocess to call it. But it seems that the subprocess waits for the process to finish. Can I make the subprocess run in the background on its own once it has been started? Thanks.
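For reference, the blocking behaviour depends on which subprocess call is used: call() and run() wait for the child to exit, while Popen() returns immediately. A minimal sketch (the script name long_task.py is a placeholder):

    # Hedged sketch: start a child process without waiting for it.
    import subprocess

    # Blocking: the request handler would wait for the script to finish.
    # subprocess.call(["python", "long_task.py"])

    # Non-blocking: the script keeps running in the background on its own.
    proc = subprocess.Popen(["python", "long_task.py"])  # "long_task.py" is a placeholder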

Django with uWSGI (multiple processes and multiple threads): how to run a function when the server exits?

I am writing a Django project with uWSGI (multiple processes and multiple threads mode) using grpc.
Right now, when the server exits, the grpc Python client cannot be closed because it uses threading and overrides the exit function. So I have to write a cleanup function that closes the grpc client when the uWSGI server reloads or exits, and I would like this function to be called automatically when the server quits.
Any help would be greatly appreciated.
As of grpcio 0.15.0 clients no longer need to be closed.
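For the cleanup-on-exit part of the question, one common approach (an assumption here, not taken from the answer above) is to register the cleanup with uWSGI's atexit hook when running under uWSGI, falling back to the standard library's atexit module otherwise:

    # Hedged sketch: run cleanup when the server exits or a worker is reloaded.
    # close_grpc_client() is a placeholder for the real grpc shutdown logic.
    import atexit

    def close_grpc_client():
        print("closing grpc client")  # replace with the actual shutdown code

    try:
        import uwsgi  # only importable when running under uWSGI
        uwsgi.atexit = close_grpc_client  # called when the uWSGI instance shuts down
    except ImportError:
        atexit.register(close_grpc_client)  # fallback; may not fire on hard kills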

How does Apache run an application when a request comes in?

I have a Python web application running on Apache 2, deployed with mod_wsgi. The application has a thread that runs continuously; it is a ZeroMQ thread that listens on a port in a loop. The application does not maintain sessions. When I open the browser and send a request to the Apache server, the data is accepted the first time. The second time I send the request, it shows an internal server error. When I check the error log for the traceback, it shows a ZMQError: the address is already in use.
Does Apache reload the application on each request sent from the browser, so that the ZeroMQ thread is created every time and assigned the port, but since the port has already been assigned it shows an error?
It looks like your application is using zmq to bind to some port.
As you have suspected already, each request can be handled by an independent process, so the processes compete for the port they try to bind to.
There can be several so-called workers, each running one process that handles HTTP/WSGI requests, and each trying to bind.
You should redesign your app not to use bind but connect. This will probably require another process with ZeroMQ serving whatever you do with that socket (but this last point depends on what your app does).
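A minimal sketch of that connect-instead-of-bind layout, assuming a separate standalone process binds the port and the web application only connects and pushes messages (the socket type and address are assumptions):

    # Hedged sketch: the mod_wsgi app connect()s, so any number of worker
    # processes can coexist; only the separate service bind()s the port.
    import zmq

    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.PUSH)
    sock.connect("tcp://127.0.0.1:5555")  # the standalone service binds this address
    sock.send_json({"event": "request-received"})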

Flask auto-reload and long-running thread

I'm implementing a long-running thread within a Flask application. In debug mode, with the reloader activated, the long-running thread is not killed upon reload.
Instead, because the code that creates and starts the thread is run after reloading, each cycle creates an additional thread.
How can I prevent this, other than disabling the reloader?
Will the same happen when running under mod_wsgi, with its auto-reload feature?
Update: the long-running thread is actually killed by Werkzeug upon reloading. There is one extra copy, which is due to Werkzeug's reloader using an extra thread that runs the initialization code.
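A common workaround for that extra copy (an assumption here, not part of the answer below) is to start the thread only in the process that actually serves requests; the Werkzeug reloader marks that process with the WERKZEUG_RUN_MAIN environment variable:

    # Hedged sketch: avoid a duplicate background thread under the Werkzeug reloader.
    import os
    import threading

    from flask import Flask

    app = Flask(__name__)

    def worker():
        ...  # the long-running loop

    # The reloader's monitor process does not set WERKZEUG_RUN_MAIN, so only the
    # serving process starts the thread. Without the reloader, drop the guard.
    if os.environ.get("WERKZEUG_RUN_MAIN") == "true":
        threading.Thread(target=worker, daemon=True).start()

    if __name__ == "__main__":
        app.run(debug=True)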
The mod_wsgi reloading mechanism is described in:
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
In the case of a long-running request, by default the process will be forcibly killed anyway if the request does not complete within 5 seconds. This avoids the problem of a process locking up because a request never finishes.
