Run a Python gRPC server without a thread executor

I've successfully run the example gRPC server from GitHub with a thread executor with max_workers=10. I could also run the same server multiple times with the so_reuseport option; each server process spawns 10 threads to process incoming requests.
The problem is running the server without a thread executor, so that a single server processes requests in its main thread. Running the server with a thread executor with max_workers=1 is not an acceptable substitute.
I didn't find any documentation on that. Could someone point me to an example?
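One hedged workaround (a sketch, not a documented pattern): grpc.server() requires an executor object, but nothing stops you from handing it one whose submit() runs each handler inline in the calling thread, i.e. gRPC's own polling thread. Whether gRPC's internals tolerate an inline executor under load is an assumption, not a documented guarantee:

    import concurrent.futures
    import grpc

    class InlineExecutor(concurrent.futures.Executor):
        """Run each submitted callable synchronously in the caller's thread."""
        def submit(self, fn, *args, **kwargs):
            future = concurrent.futures.Future()
            try:
                future.set_result(fn(*args, **kwargs))
            except BaseException as exc:
                future.set_exception(exc)
            return future

    server = grpc.server(InlineExecutor())  # instead of ThreadPoolExecutor(max_workers=10)
    # register your servicers here, then:
    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()

The difference from max_workers=1 is that no worker thread is created at all; the trade-off is that a slow handler stalls gRPC's polling. As far as I know, the synchronous grpc API always uses its own internal threads, so handlers won't literally run in your main thread, but this at least avoids a worker pool.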

Related

Linux/pm2 is killing my Flask service using Python's multiprocessing library

I have a Flask service running on a particular port xxxx. Inside this Flask service is an endpoint:
/buildGlobalIdsPool
This endpoint uses the Pool object from Python's multiprocessing library to run a function in parallel processes:
    with Pool() as p:
        p.starmap(api.build_global_ids_with_recordlinkage, args)
We use the pm2 process manager on a Linux server to manage our services. I hit the endpoint from Postman and everything works fine until the code above is reached. As soon as the processes are supposed to spawn, pm2 kills the main Flask process, but the spawned processes persist (checking with lsof -i:xxxx, I see multiple python3 processes running on this port). This happens whether I run the service with pm2 or simply run python3 app.py. The program works on my local Windows 10 machine.
I'm curious what I could be missing that is specific to Linux or pm2 that kills the main process or disallows multiple processes on the same port, while my local machine handles the program just fine.
Thanks!
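One Linux-vs-Windows difference worth ruling out: multiprocessing has historically defaulted to the fork start method on Linux, while Windows always uses spawn, and fork()ing a process that pm2 supervises (and that holds a bound socket) behaves very differently from spawning fresh interpreters; the forked children inherit the listening socket, which would explain the extra python3 processes lsof shows on the port. A sketch of forcing spawn on Linux, where work is a hypothetical stand-in for api.build_global_ids_with_recordlinkage:

    import multiprocessing as mp

    def work(a, b):
        # stand-in for api.build_global_ids_with_recordlinkage
        return a + b

    def build_global_ids(args):
        # "spawn" starts clean interpreters instead of fork()ing the
        # Flask process, matching the behaviour seen on Windows
        ctx = mp.get_context("spawn")
        with ctx.Pool() as p:
            return p.starmap(work, args)

    if __name__ == "__main__":  # required under the spawn start method
        print(build_global_ids([(1, 2), (3, 4)]))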

Defunct processes in Docker container

I have a Docker container in which I am running a Python Flask API with Gunicorn, plus a server process, also written in Python. This server process spawns long-running child processes and waits for them (one observer thread per child process). The root process is tini, which runs supervisord, which in turn spawns Gunicorn and the server process. The processes spawned by the server process use multiple threads.
Sometimes I observe defunct processes appearing in the process list. To counter this problem I originally introduced tini as the root process, but apparently that is not enough.
As far as I understand, defunct (zombie) processes are processes that have terminated but have not yet been reaped by their parent. However, my server process specifically joins its child processes (via the observer threads), and I would assume Gunicorn and supervisord do the same.
How can I determine where these defunct processes are coming from, and how can I debug/handle this problem?
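To trace where the zombies come from, ps -eo pid,ppid,stat,cmd (look for Z in the STAT column) shows each defunct process's PPID, i.e. the parent that failed to reap it. If that parent turns out to be your own server process, one option is reaping eagerly in a SIGCHLD handler. A minimal sketch, with the caveat that the handler must be installed in the main thread and can interfere with other code that wait()s on children:

    import os
    import signal

    def reap_children(signum, frame):
        # Collect every child that has exited, without blocking,
        # so none of them linger as defunct entries.
        try:
            while True:
                pid, _status = os.waitpid(-1, os.WNOHANG)
                if pid == 0:
                    break  # children exist, but none have exited yet
        except ChildProcessError:
            pass  # no children left at all

    signal.signal(signal.SIGCHLD, reap_children)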

Is the Python subprocess blocking the IO?

I am using CherryPy as a web server. A request may kick off a very long-running task, and I don't want the web server tied up handling it, so I moved the work into a separate script and call that script via a subprocess. But it seems the subprocess call waits for the process to finish. Can I launch the subprocess so that, once started, it executes in the background on its own? Thanks.
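Most likely the script is being invoked through a waiting call such as subprocess.run() or subprocess.call(). subprocess.Popen() returns as soon as the child has started. A sketch (long_task.py is a made-up script name):

    import subprocess

    # Popen returns immediately instead of waiting for the child.
    # start_new_session=True detaches it into its own session so it
    # keeps running independently of the CherryPy request handler.
    subprocess.Popen(
        ["python3", "long_task.py"],      # hypothetical script
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
        start_new_session=True,
    )

Just don't call .wait() or .communicate() on the returned object inside the request handler, or you are back to blocking.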

uWSGI: Spawning a long-lived process

I would like to run some code in a uWSGI app, but in a long-lived process, not inside the workers. That's because the process blocks on a socket recv() call, and only one thread of execution should do this.
I am hoping to avoid creating my own daemon by somehow starting a long-lived process on uWSGI startup that does not get spawned in each worker.
Does uWSGI support anything like this?
uWSGI Mules are like workers but without network access:
http://uwsgi-docs.readthedocs.org/en/latest/Mules.html
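A mule can be given a script to run; the master starts it once, outside the workers, so a blocking recv() loop is safe there. A sketch of a mule body, registered with mule = mule.py in the ini (or --mule mule.py on the command line); the filename, address, and UDP protocol are made up for illustration:

    # mule.py - runs in a single long-lived mule process, not in each worker
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 9999))        # hypothetical address/port
    while True:
        data, addr = sock.recvfrom(4096)  # blocking recv; only the mule blocks
        print("mule received", len(data), "bytes from", addr)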

How does Apache run an application when a request comes in?

I have a Python web application running on Apache2, deployed with mod_wsgi. The application has a continuously running thread: a ZeroMQ thread listening on a port in a loop. The application does not maintain sessions. If I open the browser and send a request to the Apache server, the data is accepted the first time. The second time I send a request, it shows an internal server error, and the traceback in the error log shows ZMQError: Address already in use.
Does Apache reload the application on each request from the browser, so that the ZeroMQ thread is created every time and tries to bind the port, failing because the port is already bound?
It looks like your application is using zmq to bind to some port.
As you have already suspected, each request can be handled by an independent process, and those processes compete to bind the port.
mod_wsgi can run several workers, each a separate process handling HTTP/WSGI requests, and each one tries to bind.
You should redesign your app not to bind() but to connect(). This will probably require a separate, single process that does the bind and serves whatever you do with ZeroMQ (though this last part depends on what your app does).
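As a sketch of that redesign, using pyzmq with a PUSH/PULL pipeline (the address, port, and message format are made up for illustration): one dedicated process owns the port and binds, while every worker only connects, so any number of workers can come and go without address conflicts.

    # collector.py - a single dedicated process owns the port and bind()s
    import zmq

    ctx = zmq.Context()
    pull = ctx.socket(zmq.PULL)
    pull.bind("tcp://127.0.0.1:5555")   # only one process may bind
    while True:
        print("collector got:", pull.recv())

and in each WSGI worker:

    import zmq

    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.connect("tcp://127.0.0.1:5555")  # many processes may connect
    push.send(b"hello from a worker")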
